title stringlengths 1 185 | diff stringlengths 0 32.2M | body stringlengths 0 123k ⌀ | url stringlengths 57 58 | created_at stringlengths 20 20 | closed_at stringlengths 20 20 | merged_at stringlengths 20 20 ⌀ | updated_at stringlengths 20 20 |
|---|---|---|---|---|---|---|---|
DOC: reorder whatsnew enhancements | diff --git a/doc/source/whatsnew/v2.2.0.rst b/doc/source/whatsnew/v2.2.0.rst
index d252c19a95d4a..6bd20ace44b65 100644
--- a/doc/source/whatsnew/v2.2.0.rst
+++ b/doc/source/whatsnew/v2.2.0.rst
@@ -14,81 +14,6 @@ including other versions of pandas.
Enhancements
~~~~~~~~~~~~
-.. _whatsnew_220.enhancements.calamine:
-
-Calamine engine for :func:`read_excel`
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-
-The ``calamine`` engine was added to :func:`read_excel`.
-It uses ``python-calamine``, which provides Python bindings for the Rust library `calamine <https://crates.io/crates/calamine>`__.
-This engine supports Excel files (``.xlsx``, ``.xlsm``, ``.xls``, ``.xlsb``) and OpenDocument spreadsheets (``.ods``) (:issue:`50395`).
-
-There are two advantages of this engine:
-
-1. Calamine is often faster than other engines; some benchmarks show results up to 5x faster than 'openpyxl', 20x faster than 'odf', 4x faster than 'pyxlsb', and 1.5x faster than 'xlrd'.
- However, 'openpyxl' and 'pyxlsb' are faster at reading a few rows from large files because of lazy iteration over rows.
-2. Calamine supports the recognition of datetime in ``.xlsb`` files, unlike 'pyxlsb' which is the only other engine in pandas that can read ``.xlsb`` files.
-
-.. code-block:: python
-
- pd.read_excel("path_to_file.xlsb", engine="calamine")
-
-
-For more, see :ref:`io.calamine` in the user guide on IO tools.
-
-.. _whatsnew_220.enhancements.struct_accessor:
-
-Series.struct accessor for PyArrow structured data
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-
-The ``Series.struct`` accessor provides attributes and methods for processing
-data with ``struct[pyarrow]`` dtype Series. For example,
-:meth:`Series.struct.explode` converts PyArrow structured data to a pandas
-DataFrame. (:issue:`54938`)
-
-.. ipython:: python
-
- import pyarrow as pa
- series = pd.Series(
- [
- {"project": "pandas", "version": "2.2.0"},
- {"project": "numpy", "version": "1.25.2"},
- {"project": "pyarrow", "version": "13.0.0"},
- ],
- dtype=pd.ArrowDtype(
- pa.struct([
- ("project", pa.string()),
- ("version", pa.string()),
- ])
- ),
- )
- series.struct.explode()
-
-.. _whatsnew_220.enhancements.list_accessor:
-
-Series.list accessor for PyArrow list data
-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
-
-The ``Series.list`` accessor provides attributes and methods for processing
-data with ``list[pyarrow]`` dtype Series. For example,
-:meth:`Series.list.__getitem__` allows indexing pyarrow lists in
-a Series. (:issue:`55323`)
-
-.. ipython:: python
-
- import pyarrow as pa
- series = pd.Series(
- [
- [1, 2, 3],
- [4, 5],
- [6],
- ],
- dtype=pd.ArrowDtype(
- pa.list_(pa.int64())
- ),
- )
- series.list[0]
-
.. _whatsnew_220.enhancements.adbc_support:
ADBC Driver support in to_sql and read_sql
@@ -180,6 +105,81 @@ For a full list of ADBC drivers and their development status, see the `ADBC Driv
Implementation Status <https://arrow.apache.org/adbc/current/driver/status.html>`_
documentation.
+.. _whatsnew_220.enhancements.struct_accessor:
+
+Series.struct accessor for PyArrow structured data
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+The ``Series.struct`` accessor provides attributes and methods for processing
+data with ``struct[pyarrow]`` dtype Series. For example,
+:meth:`Series.struct.explode` converts PyArrow structured data to a pandas
+DataFrame. (:issue:`54938`)
+
+.. ipython:: python
+
+ import pyarrow as pa
+ series = pd.Series(
+ [
+ {"project": "pandas", "version": "2.2.0"},
+ {"project": "numpy", "version": "1.25.2"},
+ {"project": "pyarrow", "version": "13.0.0"},
+ ],
+ dtype=pd.ArrowDtype(
+ pa.struct([
+ ("project", pa.string()),
+ ("version", pa.string()),
+ ])
+ ),
+ )
+ series.struct.explode()
+
+.. _whatsnew_220.enhancements.list_accessor:
+
+Series.list accessor for PyArrow list data
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+The ``Series.list`` accessor provides attributes and methods for processing
+data with ``list[pyarrow]`` dtype Series. For example,
+:meth:`Series.list.__getitem__` allows indexing pyarrow lists in
+a Series. (:issue:`55323`)
+
+.. ipython:: python
+
+ import pyarrow as pa
+ series = pd.Series(
+ [
+ [1, 2, 3],
+ [4, 5],
+ [6],
+ ],
+ dtype=pd.ArrowDtype(
+ pa.list_(pa.int64())
+ ),
+ )
+ series.list[0]
+
+.. _whatsnew_220.enhancements.calamine:
+
+Calamine engine for :func:`read_excel`
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+The ``calamine`` engine was added to :func:`read_excel`.
+It uses ``python-calamine``, which provides Python bindings for the Rust library `calamine <https://crates.io/crates/calamine>`__.
+This engine supports Excel files (``.xlsx``, ``.xlsm``, ``.xls``, ``.xlsb``) and OpenDocument spreadsheets (``.ods``) (:issue:`50395`).
+
+There are two advantages of this engine:
+
+1. Calamine is often faster than other engines; some benchmarks show results up to 5x faster than 'openpyxl', 20x faster than 'odf', 4x faster than 'pyxlsb', and 1.5x faster than 'xlrd'.
+ However, 'openpyxl' and 'pyxlsb' are faster at reading a few rows from large files because of lazy iteration over rows.
+2. Calamine supports the recognition of datetime in ``.xlsb`` files, unlike 'pyxlsb' which is the only other engine in pandas that can read ``.xlsb`` files.
+
+.. code-block:: python
+
+ pd.read_excel("path_to_file.xlsb", engine="calamine")
+
+
+For more, see :ref:`io.calamine` in the user guide on IO tools.
+
.. _whatsnew_220.enhancements.other:
Other enhancements
| Reordering this for potential impact on users, e.g.
- the ADBC driver should go first
- the read_excel improvements are probably relatively low-key for most users, so I put them at the end.
These blocks are relatively long, so we should make sure that the more impactful items go first. | https://api.github.com/repos/pandas-dev/pandas/pulls/56196 | 2023-11-26T23:13:34Z | 2023-11-27T02:32:58Z | 2023-11-27T02:32:58Z | 2023-11-27T11:10:15Z |
BUG: read_json not handling string dtype when converting to dates | diff --git a/doc/source/whatsnew/v2.2.0.rst b/doc/source/whatsnew/v2.2.0.rst
index d252c19a95d4a..5cb99afdcb98d 100644
--- a/doc/source/whatsnew/v2.2.0.rst
+++ b/doc/source/whatsnew/v2.2.0.rst
@@ -530,6 +530,7 @@ I/O
- Bug in :func:`read_csv` where ``on_bad_lines="warn"`` would write to ``stderr`` instead of raise a Python warning. This now yields a :class:`.errors.ParserWarning` (:issue:`54296`)
- Bug in :func:`read_csv` with ``engine="pyarrow"`` where ``usecols`` wasn't working with a csv with no headers (:issue:`54459`)
- Bug in :func:`read_excel`, with ``engine="xlrd"`` (``xls`` files) erroring when file contains NaNs/Infs (:issue:`54564`)
+- Bug in :func:`read_json` not handling dtype conversion properly if ``infer_string`` is set (:issue:`56195`)
- Bug in :func:`to_excel`, with ``OdsWriter`` (``ods`` files) writing boolean/string value (:issue:`54994`)
- Bug in :meth:`DataFrame.to_hdf` and :func:`read_hdf` with ``datetime64`` dtypes with non-nanosecond resolution failing to round-trip correctly (:issue:`55622`)
- Bug in :meth:`pandas.read_excel` with ``engine="odf"`` (``ods`` files) when string contains annotation (:issue:`55200`)
diff --git a/pandas/io/json/_json.py b/pandas/io/json/_json.py
index e17fcea0aae71..9c56089560507 100644
--- a/pandas/io/json/_json.py
+++ b/pandas/io/json/_json.py
@@ -32,7 +32,10 @@
from pandas.util._exceptions import find_stack_level
from pandas.util._validators import check_dtype_backend
-from pandas.core.dtypes.common import ensure_str
+from pandas.core.dtypes.common import (
+ ensure_str,
+ is_string_dtype,
+)
from pandas.core.dtypes.dtypes import PeriodDtype
from pandas.core.dtypes.generic import ABCIndex
@@ -1249,7 +1252,7 @@ def _try_convert_data(
if self.dtype_backend is not lib.no_default and not isinstance(data, ABCIndex):
# Fall through for conversion later on
return data, True
- elif data.dtype == "object":
+ elif is_string_dtype(data.dtype):
# try float
try:
data = data.astype("float64")
@@ -1301,6 +1304,10 @@ def _try_convert_to_date(self, data):
return data, False
new_data = data
+
+ if new_data.dtype == "string":
+ new_data = new_data.astype(object)
+
if new_data.dtype == "object":
try:
new_data = data.astype("int64")
diff --git a/pandas/tests/io/json/test_compression.py b/pandas/tests/io/json/test_compression.py
index 410c20bb22d1e..ff7d34c85c015 100644
--- a/pandas/tests/io/json/test_compression.py
+++ b/pandas/tests/io/json/test_compression.py
@@ -93,27 +93,31 @@ def test_read_unsupported_compression_type():
pd.read_json(path, compression="unsupported")
+@pytest.mark.parametrize(
+ "infer_string", [False, pytest.param(True, marks=td.skip_if_no("pyarrow"))]
+)
@pytest.mark.parametrize("to_infer", [True, False])
@pytest.mark.parametrize("read_infer", [True, False])
def test_to_json_compression(
- compression_only, read_infer, to_infer, compression_to_extension
+ compression_only, read_infer, to_infer, compression_to_extension, infer_string
):
- # see gh-15008
- compression = compression_only
+ with pd.option_context("future.infer_string", infer_string):
+ # see gh-15008
+ compression = compression_only
- # We'll complete file extension subsequently.
- filename = "test."
- filename += compression_to_extension[compression]
+ # We'll complete file extension subsequently.
+ filename = "test."
+ filename += compression_to_extension[compression]
- df = pd.DataFrame({"A": [1]})
+ df = pd.DataFrame({"A": [1]})
- to_compression = "infer" if to_infer else compression
- read_compression = "infer" if read_infer else compression
+ to_compression = "infer" if to_infer else compression
+ read_compression = "infer" if read_infer else compression
- with tm.ensure_clean(filename) as path:
- df.to_json(path, compression=to_compression)
- result = pd.read_json(path, compression=read_compression)
- tm.assert_frame_equal(result, df)
+ with tm.ensure_clean(filename) as path:
+ df.to_json(path, compression=to_compression)
+ result = pd.read_json(path, compression=read_compression)
+ tm.assert_frame_equal(result, df)
def test_to_json_compression_mode(compression):
| - [ ] closes #xxxx (Replace xxxx with the GitHub issue number)
- [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
| https://api.github.com/repos/pandas-dev/pandas/pulls/56195 | 2023-11-26T23:09:57Z | 2023-11-27T17:30:36Z | 2023-11-27T17:30:36Z | 2023-11-27T17:33:28Z |
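The core of this fix is visible in the `_try_convert_data` hunk above: an exact `object`-dtype check is widened to `is_string_dtype`, which also matches pandas' nullable string dtype, so string-backed columns are no longer skipped by the numeric/date conversion path. A minimal sketch of the distinction, using plain pandas rather than the patched JSON reader:

```python
import pandas as pd
from pandas.api.types import is_string_dtype

obj = pd.Series(["1.5", "2.5"], dtype=object)
st = pd.Series(["1.5", "2.5"], dtype="string")  # nullable StringDtype

# The old guard `data.dtype == "object"` only matches the first Series...
assert obj.dtype == object
assert st.dtype != object

# ...while is_string_dtype matches both, so "string" columns also reach
# the reader's float/int/date conversion attempts.
assert is_string_dtype(obj)
assert is_string_dtype(st)

# The conversion itself then behaves the same for either dtype:
assert obj.astype("float64").tolist() == [1.5, 2.5]
assert st.astype("float64").tolist() == [1.5, 2.5]
```

This only illustrates the dtype-check distinction; the actual fix additionally casts `"string"` data to object before the date-conversion step, as the `_try_convert_to_date` hunk shows.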
BUG: hdf can't deal with ea dtype columns | diff --git a/doc/source/whatsnew/v2.1.4.rst b/doc/source/whatsnew/v2.1.4.rst
index 77ce303dc1bfe..0cbf211305d12 100644
--- a/doc/source/whatsnew/v2.1.4.rst
+++ b/doc/source/whatsnew/v2.1.4.rst
@@ -24,6 +24,7 @@ Bug fixes
- Bug in :class:`Series` constructor raising DeprecationWarning when ``index`` is a list of :class:`Series` (:issue:`55228`)
- Bug in :meth:`Index.__getitem__` returning wrong result for Arrow dtypes and negative stepsize (:issue:`55832`)
- Fixed bug in :meth:`DataFrame.__setitem__` casting :class:`Index` with object-dtype to PyArrow backed strings when ``infer_string`` option is set (:issue:`55638`)
+- Fixed bug in :meth:`DataFrame.to_hdf` raising when columns have ``StringDtype`` (:issue:`55088`)
- Fixed bug in :meth:`Index.insert` casting object-dtype to PyArrow backed strings when ``infer_string`` option is set (:issue:`55638`)
- Fixed bug in :meth:`Series.str.translate` losing object dtype when string option is set (:issue:`56152`)
diff --git a/pandas/io/pytables.py b/pandas/io/pytables.py
index 9e0e3686e4aa2..50611197ad7dd 100644
--- a/pandas/io/pytables.py
+++ b/pandas/io/pytables.py
@@ -57,7 +57,6 @@
is_bool_dtype,
is_complex_dtype,
is_list_like,
- is_object_dtype,
is_string_dtype,
needs_i8_conversion,
)
@@ -2647,7 +2646,7 @@ class DataIndexableCol(DataCol):
is_data_indexable = True
def validate_names(self) -> None:
- if not is_object_dtype(Index(self.values).dtype):
+ if not is_string_dtype(Index(self.values).dtype):
# TODO: should the message here be more specifically non-str?
raise ValueError("cannot have non-object label DataIndexableCol")
diff --git a/pandas/tests/io/pytables/test_round_trip.py b/pandas/tests/io/pytables/test_round_trip.py
index 6c24843f18d0d..baac90e52e962 100644
--- a/pandas/tests/io/pytables/test_round_trip.py
+++ b/pandas/tests/io/pytables/test_round_trip.py
@@ -531,3 +531,18 @@ def test_round_trip_equals(tmp_path, setup_path):
tm.assert_frame_equal(df, other)
assert df.equals(other)
assert other.equals(df)
+
+
+def test_infer_string_columns(tmp_path, setup_path):
+ # GH#
+ pytest.importorskip("pyarrow")
+ path = tmp_path / setup_path
+ with pd.option_context("future.infer_string", True):
+ df = DataFrame(1, columns=list("ABCD"), index=list(range(10))).set_index(
+ ["A", "B"]
+ )
+ expected = df.copy()
+ df.to_hdf(path, key="df", format="table")
+
+ result = read_hdf(path, "df")
+ tm.assert_frame_equal(result, expected)
| - [ ] closes #55088
- [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
| https://api.github.com/repos/pandas-dev/pandas/pulls/56194 | 2023-11-26T22:23:53Z | 2023-11-27T02:43:05Z | 2023-11-27T02:43:05Z | 2023-11-27T11:09:32Z |
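The one-line change in `validate_names` works for the same reason: an object-dtype label Index passes `is_object_dtype`, but a `StringDtype` Index (which `future.infer_string` produces for string columns) does not, so the old check raised. `is_string_dtype` accepts both. A sketch of just that dtype distinction, without pytables:

```python
import pandas as pd
from pandas.api.types import is_object_dtype, is_string_dtype

obj_labels = pd.Index(["A", "B"], dtype=object)
str_labels = pd.Index(["A", "B"], dtype="string")  # nullable StringDtype

# The old guard used is_object_dtype, which rejects StringDtype labels:
assert is_object_dtype(obj_labels.dtype)
assert not is_object_dtype(str_labels.dtype)

# is_string_dtype accepts both kinds of string-valued labels,
# so validate_names no longer raises for StringDtype columns.
assert is_string_dtype(obj_labels)
assert is_string_dtype(str_labels)
```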
Adjust test in excel folder for new string option | diff --git a/pandas/tests/io/excel/test_readers.py b/pandas/tests/io/excel/test_readers.py
index abbdb77efad0e..caa2da1b6123b 100644
--- a/pandas/tests/io/excel/test_readers.py
+++ b/pandas/tests/io/excel/test_readers.py
@@ -14,6 +14,8 @@
import numpy as np
import pytest
+from pandas._config import using_pyarrow_string_dtype
+
import pandas.util._test_decorators as td
import pandas as pd
@@ -637,6 +639,9 @@ def test_dtype_backend_and_dtype(self, read_ext):
)
tm.assert_frame_equal(result, df)
+ @pytest.mark.xfail(
+ using_pyarrow_string_dtype(), reason="infer_string takes precedence"
+ )
def test_dtype_backend_string(self, read_ext, string_storage):
# GH#36712
if read_ext in (".xlsb", ".xls"):
diff --git a/pandas/tests/io/excel/test_writers.py b/pandas/tests/io/excel/test_writers.py
index 22cd0621fd4c4..507d7ed4bf9d0 100644
--- a/pandas/tests/io/excel/test_writers.py
+++ b/pandas/tests/io/excel/test_writers.py
@@ -709,7 +709,7 @@ def test_excel_date_datetime_format(self, ext, path):
# we need to use df_expected to check the result.
tm.assert_frame_equal(rs2, df_expected)
- def test_to_excel_interval_no_labels(self, path):
+ def test_to_excel_interval_no_labels(self, path, using_infer_string):
# see gh-19242
#
# Test writing Interval without labels.
@@ -719,7 +719,9 @@ def test_to_excel_interval_no_labels(self, path):
expected = df.copy()
df["new"] = pd.cut(df[0], 10)
- expected["new"] = pd.cut(expected[0], 10).astype(str)
+ expected["new"] = pd.cut(expected[0], 10).astype(
+ str if not using_infer_string else "string[pyarrow_numpy]"
+ )
df.to_excel(path, sheet_name="test1")
with ExcelFile(path) as reader:
@@ -1213,10 +1215,9 @@ def test_render_as_column_name(self, path):
def test_true_and_false_value_options(self, path):
# see gh-13347
- df = DataFrame([["foo", "bar"]], columns=["col1", "col2"])
- msg = "Downcasting behavior in `replace`"
- with tm.assert_produces_warning(FutureWarning, match=msg):
- expected = df.replace({"foo": True, "bar": False})
+ df = DataFrame([["foo", "bar"]], columns=["col1", "col2"], dtype=object)
+ with option_context("future.no_silent_downcasting", True):
+ expected = df.replace({"foo": True, "bar": False}).astype("bool")
df.to_excel(path)
read_frame = pd.read_excel(
| - [ ] closes #xxxx (Replace xxxx with the GitHub issue number)
- [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
| https://api.github.com/repos/pandas-dev/pandas/pulls/56193 | 2023-11-26T22:12:04Z | 2023-11-27T02:44:42Z | 2023-11-27T02:44:42Z | 2023-11-27T11:09:23Z |
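The `test_to_excel_interval_no_labels` adjustment hinges on how interval labels round-trip: `pd.cut` produces a categorical of intervals, and casting with `astype(str)` renders each interval as its string label, whose dtype depends on whether the new string option is active. A small sketch of the default behavior (no Excel writer involved):

```python
import numpy as np
import pandas as pd

df = pd.DataFrame(np.array([0.12, 0.34, 0.56, 0.78]))

# pd.cut bins the values into intervals, yielding a categorical result...
binned = pd.cut(df[0], 2)
assert isinstance(binned.dtype, pd.CategoricalDtype)

# ...and astype(str) turns each interval into its string label: object
# dtype by default, "string[pyarrow_numpy]" under future.infer_string.
labels = binned.astype(str)
assert labels.iloc[0].startswith("(")  # intervals are right-closed, e.g. "(0.119, 0.45]"
```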
Adjust tests in extension folder for new string option | diff --git a/pandas/tests/extension/base/dtype.py b/pandas/tests/extension/base/dtype.py
index 5ba65ceaeeada..c7b768f6e3c88 100644
--- a/pandas/tests/extension/base/dtype.py
+++ b/pandas/tests/extension/base/dtype.py
@@ -59,7 +59,12 @@ def test_check_dtype(self, data):
# check equivalency for using .dtypes
df = pd.DataFrame(
- {"A": pd.Series(data, dtype=dtype), "B": data, "C": "foo", "D": 1}
+ {
+ "A": pd.Series(data, dtype=dtype),
+ "B": data,
+ "C": pd.Series(["foo"] * len(data), dtype=object),
+ "D": 1,
+ }
)
result = df.dtypes == str(dtype)
assert np.dtype("int64") != "Int64"
diff --git a/pandas/tests/extension/base/groupby.py b/pandas/tests/extension/base/groupby.py
index 5c21c4f7137a5..4e8221f67a74d 100644
--- a/pandas/tests/extension/base/groupby.py
+++ b/pandas/tests/extension/base/groupby.py
@@ -21,7 +21,12 @@ class BaseGroupbyTests:
def test_grouping_grouper(self, data_for_grouping):
df = pd.DataFrame(
- {"A": ["B", "B", None, None, "A", "A", "B", "C"], "B": data_for_grouping}
+ {
+ "A": pd.Series(
+ ["B", "B", None, None, "A", "A", "B", "C"], dtype=object
+ ),
+ "B": data_for_grouping,
+ }
)
gr1 = df.groupby("A").grouper.groupings[0]
gr2 = df.groupby("B").grouper.groupings[0]
diff --git a/pandas/tests/extension/base/missing.py b/pandas/tests/extension/base/missing.py
index 40cc952d44200..ffb7a24b4b390 100644
--- a/pandas/tests/extension/base/missing.py
+++ b/pandas/tests/extension/base/missing.py
@@ -44,7 +44,7 @@ def test_dropna_series(self, data_missing):
tm.assert_series_equal(result, expected)
def test_dropna_frame(self, data_missing):
- df = pd.DataFrame({"A": data_missing})
+ df = pd.DataFrame({"A": data_missing}, columns=pd.Index(["A"], dtype=object))
# defaults
result = df.dropna()
diff --git a/pandas/tests/extension/base/ops.py b/pandas/tests/extension/base/ops.py
index 40fab5ec11d7d..5cd66d8a874c7 100644
--- a/pandas/tests/extension/base/ops.py
+++ b/pandas/tests/extension/base/ops.py
@@ -5,6 +5,8 @@
import numpy as np
import pytest
+from pandas._config import using_pyarrow_string_dtype
+
from pandas.core.dtypes.common import is_string_dtype
import pandas as pd
@@ -27,13 +29,23 @@ def _get_expected_exception(
# The self.obj_bar_exc pattern isn't great in part because it can depend
# on op_name or dtypes, but we use it here for backward-compatibility.
if op_name in ["__divmod__", "__rdivmod__"]:
- return self.divmod_exc
- if isinstance(obj, pd.Series) and isinstance(other, pd.Series):
- return self.series_array_exc
+ result = self.divmod_exc
+ elif isinstance(obj, pd.Series) and isinstance(other, pd.Series):
+ result = self.series_array_exc
elif isinstance(obj, pd.Series):
- return self.series_scalar_exc
+ result = self.series_scalar_exc
else:
- return self.frame_scalar_exc
+ result = self.frame_scalar_exc
+
+ if using_pyarrow_string_dtype() and result is not None:
+ import pyarrow as pa
+
+ result = ( # type: ignore[assignment]
+ result,
+ pa.lib.ArrowNotImplementedError,
+ NotImplementedError,
+ )
+ return result
def _cast_pointwise_result(self, op_name: str, obj, other, pointwise_result):
# In _check_op we check that the result of a pointwise operation
diff --git a/pandas/tests/extension/base/setitem.py b/pandas/tests/extension/base/setitem.py
index 067b401ce2f23..187da89729f0e 100644
--- a/pandas/tests/extension/base/setitem.py
+++ b/pandas/tests/extension/base/setitem.py
@@ -351,11 +351,11 @@ def test_setitem_preserves_views(self, data):
def test_setitem_with_expansion_dataframe_column(self, data, full_indexer):
# https://github.com/pandas-dev/pandas/issues/32395
- df = expected = pd.DataFrame({"data": pd.Series(data)})
+ df = expected = pd.DataFrame({0: pd.Series(data)})
result = pd.DataFrame(index=df.index)
key = full_indexer(df)
- result.loc[key, "data"] = df["data"]
+ result.loc[key, 0] = df[0]
tm.assert_frame_equal(result, expected)
diff --git a/pandas/tests/extension/test_categorical.py b/pandas/tests/extension/test_categorical.py
index 5cde5df4bc007..6f33b18b19c51 100644
--- a/pandas/tests/extension/test_categorical.py
+++ b/pandas/tests/extension/test_categorical.py
@@ -18,6 +18,8 @@
import numpy as np
import pytest
+from pandas._config import using_pyarrow_string_dtype
+
import pandas as pd
from pandas import Categorical
import pandas._testing as tm
@@ -100,7 +102,9 @@ def test_contains(self, data, data_missing):
if na_value_obj is na_value:
continue
assert na_value_obj not in data
- assert na_value_obj in data_missing # this line differs from super method
+ # this section differs from the super method
+ if not using_pyarrow_string_dtype():
+ assert na_value_obj in data_missing
def test_empty(self, dtype):
cls = dtype.construct_array_type()
diff --git a/pandas/tests/extension/test_numpy.py b/pandas/tests/extension/test_numpy.py
index f1939ea174841..c0692064cfaec 100644
--- a/pandas/tests/extension/test_numpy.py
+++ b/pandas/tests/extension/test_numpy.py
@@ -196,7 +196,7 @@ def test_series_constructor_scalar_with_index(self, data, dtype):
class TestDtype(BaseNumPyTests, base.BaseDtypeTests):
- def test_check_dtype(self, data, request):
+ def test_check_dtype(self, data, request, using_infer_string):
if data.dtype.numpy_dtype == "object":
request.applymarker(
pytest.mark.xfail(
@@ -429,7 +429,7 @@ def test_setitem_with_expansion_dataframe_column(self, data, full_indexer):
if data.dtype.numpy_dtype != object:
if not isinstance(key, slice) or key != slice(None):
expected = pd.DataFrame({"data": data.to_numpy()})
- tm.assert_frame_equal(result, expected)
+ tm.assert_frame_equal(result, expected, check_column_type=False)
@skip_nested
| - [ ] closes #xxxx (Replace xxxx with the GitHub issue number)
- [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
| https://api.github.com/repos/pandas-dev/pandas/pulls/56191 | 2023-11-26T21:11:49Z | 2023-12-08T23:30:40Z | 2023-12-08T23:30:40Z | 2023-12-17T20:17:22Z |
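The `_get_expected_exception` change in the ops hunk relies on the fact that `pytest.raises`, like a bare `except` clause, accepts a tuple of exception types: under the pyarrow-backed string dtype the same invalid operation may surface as `ArrowNotImplementedError` rather than `TypeError`, so the expected error is widened to a tuple. A stdlib-only sketch of that widening pattern (`FakeArrowError` is a hypothetical stand-in for pyarrow's error class, so no pyarrow is needed):

```python
class FakeArrowError(Exception):
    """Hypothetical stand-in for pyarrow.lib.ArrowNotImplementedError."""


def expected_errors(widen: bool):
    # Mirrors the diff: start from the single expected type and,
    # when the pyarrow string dtype is active, wrap it into a tuple
    # of all the exception types that the backend might raise.
    err = expected = TypeError
    if widen:
        expected = (err, FakeArrowError, NotImplementedError)
    return expected


# A tuple of exception types works anywhere a single type does:
caught = None
try:
    raise FakeArrowError("has no kernel")
except expected_errors(widen=True) as exc:
    caught = type(exc).__name__
assert caught == "FakeArrowError"
```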
Adjust tests for apply folder for new string option | diff --git a/pandas/tests/apply/test_frame_apply.py b/pandas/tests/apply/test_frame_apply.py
index 2d7549e09a986..b7eac6b8f0ea1 100644
--- a/pandas/tests/apply/test_frame_apply.py
+++ b/pandas/tests/apply/test_frame_apply.py
@@ -1464,13 +1464,16 @@ def test_apply_datetime_tz_issue(engine, request):
@pytest.mark.parametrize("df", [DataFrame({"A": ["a", None], "B": ["c", "d"]})])
@pytest.mark.parametrize("method", ["min", "max", "sum"])
-def test_mixed_column_raises(df, method):
+def test_mixed_column_raises(df, method, using_infer_string):
# GH 16832
if method == "sum":
- msg = r'can only concatenate str \(not "int"\) to str'
+ msg = r'can only concatenate str \(not "int"\) to str|does not support'
else:
msg = "not supported between instances of 'str' and 'float'"
- with pytest.raises(TypeError, match=msg):
+ if not using_infer_string:
+ with pytest.raises(TypeError, match=msg):
+ getattr(df, method)()
+ else:
getattr(df, method)()
diff --git a/pandas/tests/apply/test_invalid_arg.py b/pandas/tests/apply/test_invalid_arg.py
index ae5b3eef4e4fe..48dde6d42f743 100644
--- a/pandas/tests/apply/test_invalid_arg.py
+++ b/pandas/tests/apply/test_invalid_arg.py
@@ -224,9 +224,14 @@ def transform2(row):
DataFrame([["a", "b"], ["b", "a"]]), [["cumprod", TypeError]]
),
)
-def test_agg_cython_table_raises_frame(df, func, expected, axis):
+def test_agg_cython_table_raises_frame(df, func, expected, axis, using_infer_string):
# GH 21224
- msg = "can't multiply sequence by non-int of type 'str'"
+ if using_infer_string:
+ import pyarrow as pa
+
+ expected = (expected, pa.lib.ArrowNotImplementedError)
+
+ msg = "can't multiply sequence by non-int of type 'str'|has no kernel"
warn = None if isinstance(func, str) else FutureWarning
with pytest.raises(expected, match=msg):
with tm.assert_produces_warning(warn, match="using DataFrame.cumprod"):
@@ -249,11 +254,18 @@ def test_agg_cython_table_raises_frame(df, func, expected, axis):
)
),
)
-def test_agg_cython_table_raises_series(series, func, expected):
+def test_agg_cython_table_raises_series(series, func, expected, using_infer_string):
# GH21224
msg = r"[Cc]ould not convert|can't multiply sequence by non-int of type"
if func == "median" or func is np.nanmedian or func is np.median:
msg = r"Cannot convert \['a' 'b' 'c'\] to numeric"
+
+ if using_infer_string:
+ import pyarrow as pa
+
+ expected = (expected, pa.lib.ArrowNotImplementedError)
+
+ msg = msg + "|does not support|has no kernel"
warn = None if isinstance(func, str) else FutureWarning
with pytest.raises(expected, match=msg):
diff --git a/pandas/tests/apply/test_series_apply.py b/pandas/tests/apply/test_series_apply.py
index 177dff2d771d4..24e48ebd4ed54 100644
--- a/pandas/tests/apply/test_series_apply.py
+++ b/pandas/tests/apply/test_series_apply.py
@@ -222,7 +222,7 @@ def f(x):
assert result == "Asia/Tokyo"
-def test_apply_categorical(by_row):
+def test_apply_categorical(by_row, using_infer_string):
values = pd.Categorical(list("ABBABCD"), categories=list("DCBA"), ordered=True)
ser = Series(values, name="XX", index=list("abcdefg"))
@@ -245,7 +245,7 @@ def test_apply_categorical(by_row):
result = ser.apply(lambda x: "A")
exp = Series(["A"] * 7, name="XX", index=list("abcdefg"))
tm.assert_series_equal(result, exp)
- assert result.dtype == object
+ assert result.dtype == (object if not using_infer_string else "string[pyarrow_numpy]")
@pytest.mark.parametrize("series", [["1-1", "1-1", np.nan], ["1-1", "1-2", np.nan]])
| sits on #56189 | https://api.github.com/repos/pandas-dev/pandas/pulls/56190 | 2023-11-26T20:32:34Z | 2023-11-30T18:28:24Z | 2023-11-30T18:28:24Z | 2023-11-30T18:29:10Z |
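The categorical case being adjusted in `test_apply_categorical` can be reproduced without the test harness: applying a function that returns plain strings does not preserve the categorical dtype, and the fallback dtype of the result is what the new string option changes. A sketch under default options (no `future.infer_string`):

```python
import pandas as pd

values = pd.Categorical(list("ABBA"), categories=list("BA"), ordered=True)
ser = pd.Series(values, name="XX")

# apply() with a function returning strings drops the categorical
# dtype; the result falls back to object by default, and to
# "string[pyarrow_numpy]" once future.infer_string is enabled.
result = ser.apply(lambda x: "A")
assert not isinstance(result.dtype, pd.CategoricalDtype)
assert result.tolist() == ["A"] * 4
```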
BUG: numba raises for string columns or index | diff --git a/doc/source/whatsnew/v2.2.0.rst b/doc/source/whatsnew/v2.2.0.rst
index d252c19a95d4a..d4954e6caf2d0 100644
--- a/doc/source/whatsnew/v2.2.0.rst
+++ b/doc/source/whatsnew/v2.2.0.rst
@@ -496,8 +496,8 @@ Conversion
Strings
^^^^^^^
- Bug in :func:`pandas.api.types.is_string_dtype` while checking object array with no elements is of the string dtype (:issue:`54661`)
+- Bug in :meth:`DataFrame.apply` failing when ``engine="numba"`` and columns or index have ``StringDtype`` (:issue:`56189`)
- Bug in :meth:`Series.str.startswith` and :meth:`Series.str.endswith` with arguments of type ``tuple[str, ...]`` for ``string[pyarrow]`` (:issue:`54942`)
--
Interval
^^^^^^^^
diff --git a/pandas/core/apply.py b/pandas/core/apply.py
index 3b79882d3c762..bb3cc3a03760f 100644
--- a/pandas/core/apply.py
+++ b/pandas/core/apply.py
@@ -1172,11 +1172,17 @@ def apply_with_numba(self) -> dict[int, Any]:
)
from pandas.core._numba.extensions import set_numba_data
+ index = self.obj.index
+ if index.dtype == "string":
+ index = index.astype(object)
+
+ columns = self.obj.columns
+ if columns.dtype == "string":
+ columns = columns.astype(object)
+
# Convert from numba dict to regular dict
# Our isinstance checks in the df constructor don't pass for numbas typed dict
- with set_numba_data(self.obj.index) as index, set_numba_data(
- self.columns
- ) as columns:
+ with set_numba_data(index) as index, set_numba_data(columns) as columns:
res = dict(nb_func(self.values, columns, index))
return res
diff --git a/pandas/tests/apply/test_numba.py b/pandas/tests/apply/test_numba.py
index ee239568d057d..85d7baee1bdf5 100644
--- a/pandas/tests/apply/test_numba.py
+++ b/pandas/tests/apply/test_numba.py
@@ -24,6 +24,22 @@ def test_numba_vs_python_noop(float_frame, apply_axis):
tm.assert_frame_equal(result, expected)
+def test_numba_vs_python_string_index():
+ # GH#56189
+ pytest.importorskip("pyarrow")
+ df = DataFrame(
+ 1,
+ index=Index(["a", "b"], dtype="string[pyarrow_numpy]"),
+ columns=Index(["x", "y"], dtype="string[pyarrow_numpy]"),
+ )
+ func = lambda x: x
+ result = df.apply(func, engine="numba", axis=0)
+ expected = df.apply(func, engine="python", axis=0)
+ tm.assert_frame_equal(
+ result, expected, check_column_type=False, check_index_type=False
+ )
+
+
def test_numba_vs_python_indexing():
frame = DataFrame(
{"a": [1, 2, 3], "b": [4, 5, 6], "c": [7.0, 8.0, 9.0]},
@@ -88,7 +104,8 @@ def test_numba_unsupported_dtypes(apply_axis):
df["c"] = df["c"].astype("double[pyarrow]")
with pytest.raises(
- ValueError, match="Column b must have a numeric dtype. Found 'object' instead"
+ ValueError,
+ match="Column b must have a numeric dtype. Found 'object|string' instead",
):
df.apply(f, engine="numba", axis=apply_axis)
| - [ ] closes #xxxx (Replace xxxx with the GitHub issue number)
- [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
| https://api.github.com/repos/pandas-dev/pandas/pulls/56189 | 2023-11-26T20:29:28Z | 2023-11-27T02:52:43Z | 2023-11-27T02:52:43Z | 2023-11-27T11:08:24Z |
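The fix in `apply_with_numba` side-steps numba's lack of support for pandas' `StringDtype` by downgrading string-dtype axes to object before handing them to the jitted function. The cast itself is plain pandas and can be sketched without numba:

```python
import pandas as pd

index = pd.Index(["a", "b"], dtype="string")
columns = pd.Index(["x", "y"], dtype="string")

# Mirror the guard added in apply_with_numba: cast string-dtype axes
# to object so numba's typed containers can hold the labels.
if index.dtype == "string":
    index = index.astype(object)
if columns.dtype == "string":
    columns = columns.astype(object)

assert index.dtype == object and columns.dtype == object
assert list(index) == ["a", "b"]
```

The labels survive the cast unchanged; only the dtype is relaxed, which is why the accompanying test compares with `check_index_type=False` and `check_column_type=False`.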
Adjust tests in array folder for new string option | diff --git a/pandas/core/algorithms.py b/pandas/core/algorithms.py
index 82de8ae96160f..cfa41a4e1969b 100644
--- a/pandas/core/algorithms.py
+++ b/pandas/core/algorithms.py
@@ -932,7 +932,10 @@ def value_counts_internal(
idx = Index(keys)
if idx.dtype == bool and keys.dtype == object:
idx = idx.astype(object)
- elif idx.dtype != keys.dtype:
+ elif (
+ idx.dtype != keys.dtype # noqa: PLR1714 # pylint: disable=R1714
+ and idx.dtype != "string[pyarrow_numpy]"
+ ):
warnings.warn(
# GH#56161
"The behavior of value_counts with object-dtype is deprecated. "
diff --git a/pandas/tests/arrays/boolean/test_arithmetic.py b/pandas/tests/arrays/boolean/test_arithmetic.py
index 197e83121567e..0c4fcf149eb20 100644
--- a/pandas/tests/arrays/boolean/test_arithmetic.py
+++ b/pandas/tests/arrays/boolean/test_arithmetic.py
@@ -90,9 +90,16 @@ def test_op_int8(left_array, right_array, opname):
# -----------------------------------------------------------------------------
-def test_error_invalid_values(data, all_arithmetic_operators):
+def test_error_invalid_values(data, all_arithmetic_operators, using_infer_string):
# invalid ops
+ if using_infer_string:
+ import pyarrow as pa
+
+ err = (TypeError, pa.lib.ArrowNotImplementedError, NotImplementedError)
+ else:
+ err = TypeError
+
op = all_arithmetic_operators
s = pd.Series(data)
ops = getattr(s, op)
@@ -110,9 +117,10 @@ def test_error_invalid_values(data, all_arithmetic_operators):
[
r"unsupported operand type\(s\) for",
"Concatenation operation is not implemented for NumPy arrays",
+ "has no kernel",
]
)
- with pytest.raises(TypeError, match=msg):
+ with pytest.raises(err, match=msg):
ops(pd.Timestamp("20180101"))
# invalid array-likes
@@ -123,7 +131,9 @@ def test_error_invalid_values(data, all_arithmetic_operators):
r"unsupported operand type\(s\) for",
"can only concatenate str",
"not all arguments converted during string formatting",
+ "has no kernel",
+ "not implemented",
]
)
- with pytest.raises(TypeError, match=msg):
+ with pytest.raises(err, match=msg):
ops(pd.Series("foo", index=s.index))
diff --git a/pandas/tests/arrays/categorical/test_astype.py b/pandas/tests/arrays/categorical/test_astype.py
index 7fba150c9113f..a2a53af6ab1ad 100644
--- a/pandas/tests/arrays/categorical/test_astype.py
+++ b/pandas/tests/arrays/categorical/test_astype.py
@@ -89,7 +89,7 @@ def test_astype(self, ordered):
expected = np.array(cat)
tm.assert_numpy_array_equal(result, expected)
- msg = r"Cannot cast object dtype to float64"
+ msg = r"Cannot cast object|string dtype to float64"
with pytest.raises(ValueError, match=msg):
cat.astype(float)
diff --git a/pandas/tests/arrays/categorical/test_constructors.py b/pandas/tests/arrays/categorical/test_constructors.py
index e25e31e2f2e9e..50aaa42e09f22 100644
--- a/pandas/tests/arrays/categorical/test_constructors.py
+++ b/pandas/tests/arrays/categorical/test_constructors.py
@@ -6,6 +6,8 @@
import numpy as np
import pytest
+from pandas._config import using_pyarrow_string_dtype
+
from pandas.core.dtypes.common import (
is_float_dtype,
is_integer_dtype,
@@ -447,6 +449,7 @@ def test_constructor_str_unknown(self):
with pytest.raises(ValueError, match="Unknown dtype"):
Categorical([1, 2], dtype="foo")
+ @pytest.mark.xfail(using_pyarrow_string_dtype(), reason="Can't be NumPy strings")
def test_constructor_np_strs(self):
# GH#31499 Hashtable.map_locations needs to work on np.str_ objects
cat = Categorical(["1", "0", "1"], [np.str_("0"), np.str_("1")])
diff --git a/pandas/tests/arrays/categorical/test_operators.py b/pandas/tests/arrays/categorical/test_operators.py
index a1e50917fed98..16b941eab4830 100644
--- a/pandas/tests/arrays/categorical/test_operators.py
+++ b/pandas/tests/arrays/categorical/test_operators.py
@@ -92,7 +92,7 @@ def test_comparisons(self, factor):
cat > cat_unordered
# comparison (in both directions) with Series will raise
- s = Series(["b", "b", "b"])
+ s = Series(["b", "b", "b"], dtype=object)
msg = (
"Cannot compare a Categorical for op __gt__ with type "
r"<class 'numpy\.ndarray'>"
@@ -108,7 +108,7 @@ def test_comparisons(self, factor):
# comparison with numpy.array will raise in both direction, but only on
# newer numpy versions
- a = np.array(["b", "b", "b"])
+ a = np.array(["b", "b", "b"], dtype=object)
with pytest.raises(TypeError, match=msg):
cat > a
with pytest.raises(TypeError, match=msg):
@@ -248,7 +248,7 @@ def test_comparisons(self, data, reverse, base):
cat_base = Series(
Categorical(base, categories=cat.cat.categories, ordered=True)
)
- s = Series(base)
+ s = Series(base, dtype=object if base == list("bbb") else None)
a = np.array(base)
# comparisons need to take categories ordering into account
diff --git a/pandas/tests/arrays/categorical/test_repr.py b/pandas/tests/arrays/categorical/test_repr.py
index dca171bf81047..d6f93fbbd912f 100644
--- a/pandas/tests/arrays/categorical/test_repr.py
+++ b/pandas/tests/arrays/categorical/test_repr.py
@@ -1,9 +1,13 @@
import numpy as np
+import pytest
+
+from pandas._config import using_pyarrow_string_dtype
from pandas import (
Categorical,
CategoricalDtype,
CategoricalIndex,
+ Index,
Series,
date_range,
option_context,
@@ -13,11 +17,17 @@
class TestCategoricalReprWithFactor:
- def test_print(self, factor):
- expected = [
- "['a', 'b', 'b', 'a', 'a', 'c', 'c', 'c']",
- "Categories (3, object): ['a' < 'b' < 'c']",
- ]
+ def test_print(self, factor, using_infer_string):
+ if using_infer_string:
+ expected = [
+ "['a', 'b', 'b', 'a', 'a', 'c', 'c', 'c']",
+ "Categories (3, string): [a < b < c]",
+ ]
+ else:
+ expected = [
+ "['a', 'b', 'b', 'a', 'a', 'c', 'c', 'c']",
+ "Categories (3, object): ['a' < 'b' < 'c']",
+ ]
expected = "\n".join(expected)
actual = repr(factor)
assert actual == expected
@@ -26,7 +36,7 @@ def test_print(self, factor):
class TestCategoricalRepr:
def test_big_print(self):
codes = np.array([0, 1, 2, 0, 1, 2] * 100)
- dtype = CategoricalDtype(categories=["a", "b", "c"])
+ dtype = CategoricalDtype(categories=Index(["a", "b", "c"], dtype=object))
factor = Categorical.from_codes(codes, dtype=dtype)
expected = [
"['a', 'b', 'c', 'a', 'b', ..., 'b', 'c', 'a', 'b', 'c']",
@@ -40,13 +50,13 @@ def test_big_print(self):
assert actual == expected
def test_empty_print(self):
- factor = Categorical([], ["a", "b", "c"])
+ factor = Categorical([], Index(["a", "b", "c"], dtype=object))
expected = "[], Categories (3, object): ['a', 'b', 'c']"
actual = repr(factor)
assert actual == expected
assert expected == actual
- factor = Categorical([], ["a", "b", "c"], ordered=True)
+ factor = Categorical([], Index(["a", "b", "c"], dtype=object), ordered=True)
expected = "[], Categories (3, object): ['a' < 'b' < 'c']"
actual = repr(factor)
assert expected == actual
@@ -66,6 +76,10 @@ def test_print_none_width(self):
with option_context("display.width", None):
assert exp == repr(a)
+ @pytest.mark.skipif(
+ using_pyarrow_string_dtype(),
+ reason="Change once infer_string is set to True by default",
+ )
def test_unicode_print(self):
c = Categorical(["aaaaa", "bb", "cccc"] * 20)
expected = """\
diff --git a/pandas/tests/arrays/floating/test_arithmetic.py b/pandas/tests/arrays/floating/test_arithmetic.py
index 056c22d8c1131..ba081bd01062a 100644
--- a/pandas/tests/arrays/floating/test_arithmetic.py
+++ b/pandas/tests/arrays/floating/test_arithmetic.py
@@ -122,11 +122,18 @@ def test_arith_zero_dim_ndarray(other):
# -----------------------------------------------------------------------------
-def test_error_invalid_values(data, all_arithmetic_operators):
+def test_error_invalid_values(data, all_arithmetic_operators, using_infer_string):
op = all_arithmetic_operators
s = pd.Series(data)
ops = getattr(s, op)
+ if using_infer_string:
+ import pyarrow as pa
+
+ errs = (TypeError, pa.lib.ArrowNotImplementedError, NotImplementedError)
+ else:
+ errs = TypeError
+
# invalid scalars
msg = "|".join(
[
@@ -140,15 +147,17 @@ def test_error_invalid_values(data, all_arithmetic_operators):
"ufunc '.*' not supported for the input types, and the inputs could not",
"ufunc '.*' did not contain a loop with signature matching types",
"Concatenation operation is not implemented for NumPy arrays",
+ "has no kernel",
+ "not implemented",
]
)
- with pytest.raises(TypeError, match=msg):
+ with pytest.raises(errs, match=msg):
ops("foo")
- with pytest.raises(TypeError, match=msg):
+ with pytest.raises(errs, match=msg):
ops(pd.Timestamp("20180101"))
# invalid array-likes
- with pytest.raises(TypeError, match=msg):
+ with pytest.raises(errs, match=msg):
ops(pd.Series("foo", index=s.index))
msg = "|".join(
@@ -167,9 +176,11 @@ def test_error_invalid_values(data, all_arithmetic_operators):
),
r"ufunc 'add' cannot use operands with types dtype\('float\d{2}'\)",
"cannot subtract DatetimeArray from ndarray",
+ "has no kernel",
+ "not implemented",
]
)
- with pytest.raises(TypeError, match=msg):
+ with pytest.raises(errs, match=msg):
ops(pd.Series(pd.date_range("20180101", periods=len(s))))
diff --git a/pandas/tests/arrays/integer/test_arithmetic.py b/pandas/tests/arrays/integer/test_arithmetic.py
index ce6c245cd0f37..d979dd445a61a 100644
--- a/pandas/tests/arrays/integer/test_arithmetic.py
+++ b/pandas/tests/arrays/integer/test_arithmetic.py
@@ -172,11 +172,18 @@ def test_numpy_zero_dim_ndarray(other):
# -----------------------------------------------------------------------------
-def test_error_invalid_values(data, all_arithmetic_operators):
+def test_error_invalid_values(data, all_arithmetic_operators, using_infer_string):
op = all_arithmetic_operators
s = pd.Series(data)
ops = getattr(s, op)
+ if using_infer_string:
+ import pyarrow as pa
+
+ errs = (TypeError, pa.lib.ArrowNotImplementedError, NotImplementedError)
+ else:
+ errs = TypeError
+
# invalid scalars
msg = "|".join(
[
@@ -188,20 +195,26 @@ def test_error_invalid_values(data, all_arithmetic_operators):
"ufunc '.*' not supported for the input types, and the inputs could not",
"ufunc '.*' did not contain a loop with signature matching types",
"Addition/subtraction of integers and integer-arrays with Timestamp",
+ "has no kernel",
+ "not implemented",
]
)
- with pytest.raises(TypeError, match=msg):
+ with pytest.raises(errs, match=msg):
ops("foo")
- with pytest.raises(TypeError, match=msg):
+ with pytest.raises(errs, match=msg):
ops(pd.Timestamp("20180101"))
# invalid array-likes
str_ser = pd.Series("foo", index=s.index)
# with pytest.raises(TypeError, match=msg):
- if all_arithmetic_operators in [
- "__mul__",
- "__rmul__",
- ]: # (data[~data.isna()] >= 0).all():
+ if (
+ all_arithmetic_operators
+ in [
+ "__mul__",
+ "__rmul__",
+ ]
+ and not using_infer_string
+ ): # (data[~data.isna()] >= 0).all():
res = ops(str_ser)
expected = pd.Series(["foo" * x for x in data], index=s.index)
expected = expected.fillna(np.nan)
@@ -210,7 +223,7 @@ def test_error_invalid_values(data, all_arithmetic_operators):
# more-correct than np.nan here.
tm.assert_series_equal(res, expected)
else:
- with pytest.raises(TypeError, match=msg):
+ with pytest.raises(errs, match=msg):
ops(str_ser)
msg = "|".join(
@@ -223,9 +236,11 @@ def test_error_invalid_values(data, all_arithmetic_operators):
r"can only concatenate str \(not \"int\"\) to str",
"not all arguments converted during string",
"cannot subtract DatetimeArray from ndarray",
+ "has no kernel",
+ "not implemented",
]
)
- with pytest.raises(TypeError, match=msg):
+ with pytest.raises(errs, match=msg):
ops(pd.Series(pd.date_range("20180101", periods=len(s))))
diff --git a/pandas/tests/arrays/integer/test_reduction.py b/pandas/tests/arrays/integer/test_reduction.py
index 1c91cd25ba69c..db04862e4ea07 100644
--- a/pandas/tests/arrays/integer/test_reduction.py
+++ b/pandas/tests/arrays/integer/test_reduction.py
@@ -102,7 +102,9 @@ def test_groupby_reductions(op, expected):
["all", Series([True, True, True], index=["A", "B", "C"], dtype="boolean")],
],
)
-def test_mixed_reductions(op, expected):
+def test_mixed_reductions(op, expected, using_infer_string):
+ if op in ["any", "all"] and using_infer_string:
+ expected = expected.astype("bool")
df = DataFrame(
{
"A": ["a", "b", "b"],
diff --git a/pandas/tests/arrays/string_/test_string.py b/pandas/tests/arrays/string_/test_string.py
index 8dcda44aa68e5..d015e899c4231 100644
--- a/pandas/tests/arrays/string_/test_string.py
+++ b/pandas/tests/arrays/string_/test_string.py
@@ -191,7 +191,7 @@ def test_mul(dtype):
@pytest.mark.xfail(reason="GH-28527")
def test_add_strings(dtype):
arr = pd.array(["a", "b", "c", "d"], dtype=dtype)
- df = pd.DataFrame([["t", "y", "v", "w"]])
+ df = pd.DataFrame([["t", "y", "v", "w"]], dtype=object)
assert arr.__add__(df) is NotImplemented
result = arr + df
@@ -498,10 +498,17 @@ def test_arrow_array(dtype):
@pytest.mark.filterwarnings("ignore:Passing a BlockManager:DeprecationWarning")
-def test_arrow_roundtrip(dtype, string_storage2):
+def test_arrow_roundtrip(dtype, string_storage2, request, using_infer_string):
# roundtrip possible from arrow 1.0.0
pa = pytest.importorskip("pyarrow")
+ if using_infer_string and string_storage2 != "pyarrow_numpy":
+ request.applymarker(
+ pytest.mark.xfail(
+ reason="infer_string takes precedence over string storage"
+ )
+ )
+
data = pd.array(["a", "b", None], dtype=dtype)
df = pd.DataFrame({"a": data})
table = pa.table(df)
@@ -516,10 +523,19 @@ def test_arrow_roundtrip(dtype, string_storage2):
@pytest.mark.filterwarnings("ignore:Passing a BlockManager:DeprecationWarning")
-def test_arrow_load_from_zero_chunks(dtype, string_storage2):
+def test_arrow_load_from_zero_chunks(
+ dtype, string_storage2, request, using_infer_string
+):
# GH-41040
pa = pytest.importorskip("pyarrow")
+ if using_infer_string and string_storage2 != "pyarrow_numpy":
+ request.applymarker(
+ pytest.mark.xfail(
+ reason="infer_string takes precedence over string storage"
+ )
+ )
+
data = pd.array([], dtype=dtype)
df = pd.DataFrame({"a": data})
table = pa.table(df)
diff --git a/pandas/tests/arrays/string_/test_string_arrow.py b/pandas/tests/arrays/string_/test_string_arrow.py
index a801a845bc7be..a022dfffbdd2b 100644
--- a/pandas/tests/arrays/string_/test_string_arrow.py
+++ b/pandas/tests/arrays/string_/test_string_arrow.py
@@ -26,7 +26,9 @@ def test_eq_all_na():
tm.assert_extension_array_equal(result, expected)
-def test_config(string_storage):
+def test_config(string_storage, request, using_infer_string):
+ if using_infer_string and string_storage != "pyarrow_numpy":
+ request.applymarker(pytest.mark.xfail(reason="infer string takes precedence"))
with pd.option_context("string_storage", string_storage):
assert StringDtype().storage == string_storage
result = pd.array(["a", "b"])
@@ -101,7 +103,7 @@ def test_constructor_from_list():
assert result.dtype.storage == "pyarrow"
-def test_from_sequence_wrong_dtype_raises():
+def test_from_sequence_wrong_dtype_raises(using_infer_string):
pytest.importorskip("pyarrow")
with pd.option_context("string_storage", "python"):
ArrowStringArray._from_sequence(["a", None, "c"], dtype="string")
@@ -114,15 +116,19 @@ def test_from_sequence_wrong_dtype_raises():
ArrowStringArray._from_sequence(["a", None, "c"], dtype="string[pyarrow]")
- with pytest.raises(AssertionError, match=None):
- with pd.option_context("string_storage", "python"):
- ArrowStringArray._from_sequence(["a", None, "c"], dtype=StringDtype())
+ if not using_infer_string:
+ with pytest.raises(AssertionError, match=None):
+ with pd.option_context("string_storage", "python"):
+ ArrowStringArray._from_sequence(["a", None, "c"], dtype=StringDtype())
with pd.option_context("string_storage", "pyarrow"):
ArrowStringArray._from_sequence(["a", None, "c"], dtype=StringDtype())
- with pytest.raises(AssertionError, match=None):
- ArrowStringArray._from_sequence(["a", None, "c"], dtype=StringDtype("python"))
+ if not using_infer_string:
+ with pytest.raises(AssertionError, match=None):
+ ArrowStringArray._from_sequence(
+ ["a", None, "c"], dtype=StringDtype("python")
+ )
ArrowStringArray._from_sequence(["a", None, "c"], dtype=StringDtype("pyarrow"))
@@ -137,13 +143,15 @@ def test_from_sequence_wrong_dtype_raises():
with pytest.raises(AssertionError, match=None):
StringArray._from_sequence(["a", None, "c"], dtype="string[pyarrow]")
- with pd.option_context("string_storage", "python"):
- StringArray._from_sequence(["a", None, "c"], dtype=StringDtype())
-
- with pytest.raises(AssertionError, match=None):
- with pd.option_context("string_storage", "pyarrow"):
+ if not using_infer_string:
+ with pd.option_context("string_storage", "python"):
StringArray._from_sequence(["a", None, "c"], dtype=StringDtype())
+ if not using_infer_string:
+ with pytest.raises(AssertionError, match=None):
+ with pd.option_context("string_storage", "pyarrow"):
+ StringArray._from_sequence(["a", None, "c"], dtype=StringDtype())
+
StringArray._from_sequence(["a", None, "c"], dtype=StringDtype("python"))
with pytest.raises(AssertionError, match=None):
diff --git a/pandas/tests/arrays/test_array.py b/pandas/tests/arrays/test_array.py
index e2b8ebcb79a3b..b0ec2787097f0 100644
--- a/pandas/tests/arrays/test_array.py
+++ b/pandas/tests/arrays/test_array.py
@@ -440,7 +440,7 @@ def test_array_unboxes(index_or_series):
def test_array_to_numpy_na():
# GH#40638
- arr = pd.array([pd.NA, 1], dtype="string")
+ arr = pd.array([pd.NA, 1], dtype="string[python]")
result = arr.to_numpy(na_value=True, dtype=bool)
expected = np.array([True, True])
tm.assert_numpy_array_equal(result, expected)
| sits on top of #56187 | https://api.github.com/repos/pandas-dev/pandas/pulls/56188 | 2023-11-26T20:09:46Z | 2023-12-09T23:35:25Z | 2023-12-09T23:35:25Z | 2023-12-09T23:35:52Z |
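A recurring pattern in the PR above: when the pyarrow-backed string dtype is active, invalid operations may surface pyarrow errors rather than `TypeError`, so the tests switch from a single exception class to a tuple of accepted classes (`pytest.raises` and plain `except` both accept tuples). A stdlib sketch of the pattern — `ArrowNotImplementedError` here is a local placeholder standing in for `pa.lib.ArrowNotImplementedError` so the sketch runs without pyarrow:

```python
# Placeholder for pa.lib.ArrowNotImplementedError (assumption: we only need
# an exception type to demonstrate tuple-based catching).
class ArrowNotImplementedError(Exception):
    pass

using_infer_string = True
if using_infer_string:
    # Accept any of the error types the backends may raise.
    errs = (TypeError, ArrowNotImplementedError, NotImplementedError)
else:
    errs = TypeError

def invalid_op(value):
    # Mimics an arithmetic op that the pyarrow backend rejects.
    raise ArrowNotImplementedError("has no kernel")

try:
    invalid_op("foo")
except errs as exc:          # a tuple catches any member type
    caught = type(exc).__name__

assert caught == "ArrowNotImplementedError"
```

The same tuple is passed straight to `pytest.raises(errs, match=msg)` in the diff, which is why only the expected-exception argument changes while the `match` regexes merely gain extra alternatives like `"has no kernel"`.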
TST/CLN: Remove makeCategoricalIndex | diff --git a/asv_bench/benchmarks/categoricals.py b/asv_bench/benchmarks/categoricals.py
index 7e70db5681850..1716110b619d6 100644
--- a/asv_bench/benchmarks/categoricals.py
+++ b/asv_bench/benchmarks/categoricals.py
@@ -6,8 +6,6 @@
import pandas as pd
-from .pandas_vb_common import tm
-
try:
from pandas.api.types import union_categoricals
except ImportError:
@@ -189,7 +187,7 @@ def setup(self):
N = 10**5
ncats = 15
- self.s_str = pd.Series(tm.makeCategoricalIndex(N, ncats)).astype(str)
+ self.s_str = pd.Series(np.random.randint(0, ncats, size=N).astype(str))
self.s_str_cat = pd.Series(self.s_str, dtype="category")
with warnings.catch_warnings(record=True):
str_cat_type = pd.CategoricalDtype(set(self.s_str), ordered=True)
@@ -242,7 +240,7 @@ def time_categorical_series_is_monotonic_decreasing(self):
class Contains:
def setup(self):
N = 10**5
- self.ci = tm.makeCategoricalIndex(N)
+ self.ci = pd.CategoricalIndex(np.arange(N))
self.c = self.ci.values
self.key = self.ci.categories[0]
@@ -325,7 +323,7 @@ def time_sort_values(self):
class SearchSorted:
def setup(self):
N = 10**5
- self.ci = tm.makeCategoricalIndex(N).sort_values()
+ self.ci = pd.CategoricalIndex(np.arange(N)).sort_values()
self.c = self.ci.values
self.key = self.ci.categories[1]
diff --git a/pandas/_testing/__init__.py b/pandas/_testing/__init__.py
index 14ee29d24800e..0744c6d4cffb4 100644
--- a/pandas/_testing/__init__.py
+++ b/pandas/_testing/__init__.py
@@ -39,7 +39,6 @@
from pandas import (
ArrowDtype,
Categorical,
- CategoricalIndex,
DataFrame,
DatetimeIndex,
Index,
@@ -350,36 +349,10 @@ def to_array(obj):
# Others
-def rands_array(
- nchars, size: int, dtype: NpDtype = "O", replace: bool = True
-) -> np.ndarray:
- """
- Generate an array of byte strings.
- """
- chars = np.array(list(string.ascii_letters + string.digits), dtype=(np.str_, 1))
- retval = (
- np.random.default_rng(2)
- .choice(chars, size=nchars * np.prod(size), replace=replace)
- .view((np.str_, nchars))
- .reshape(size)
- )
- return retval.astype(dtype)
-
-
def getCols(k) -> str:
return string.ascii_uppercase[:k]
-def makeCategoricalIndex(
- k: int = 10, n: int = 3, name=None, **kwargs
-) -> CategoricalIndex:
- """make a length k index or n categories"""
- x = rands_array(nchars=4, size=n, replace=False)
- return CategoricalIndex(
- Categorical.from_codes(np.arange(k) % n, categories=x), name=name, **kwargs
- )
-
-
def makeNumericIndex(k: int = 10, *, name=None, dtype: Dtype | None) -> Index:
dtype = pandas_dtype(dtype)
assert isinstance(dtype, np.dtype)
@@ -1017,7 +990,6 @@ def shares_memory(left, right) -> bool:
"iat",
"iloc",
"loc",
- "makeCategoricalIndex",
"makeCustomDataframe",
"makeCustomIndex",
"makeDataFrame",
diff --git a/pandas/conftest.py b/pandas/conftest.py
index 3205b6657439f..d886bf8167dd4 100644
--- a/pandas/conftest.py
+++ b/pandas/conftest.py
@@ -59,6 +59,7 @@
import pandas as pd
from pandas import (
+ CategoricalIndex,
DataFrame,
Interval,
IntervalIndex,
@@ -630,7 +631,7 @@ def _create_mi_with_dt64tz_level():
"bool-dtype": Index(np.random.default_rng(2).standard_normal(10) < 0),
"complex64": tm.makeNumericIndex(100, dtype="float64").astype("complex64"),
"complex128": tm.makeNumericIndex(100, dtype="float64").astype("complex128"),
- "categorical": tm.makeCategoricalIndex(100),
+ "categorical": CategoricalIndex(list("abcd") * 25),
"interval": IntervalIndex.from_breaks(np.linspace(0, 100, num=101)),
"empty": Index([]),
"tuples": MultiIndex.from_tuples(zip(["foo", "bar", "baz"], [1, 2, 3])),
diff --git a/pandas/tests/frame/methods/test_set_index.py b/pandas/tests/frame/methods/test_set_index.py
index 98113b6c41821..97f3a76311711 100644
--- a/pandas/tests/frame/methods/test_set_index.py
+++ b/pandas/tests/frame/methods/test_set_index.py
@@ -12,6 +12,7 @@
from pandas import (
Categorical,
+ CategoricalIndex,
DataFrame,
DatetimeIndex,
Index,
@@ -398,8 +399,7 @@ def test_set_index_pass_multiindex(self, frame_of_index_cols, drop, append):
tm.assert_frame_equal(result, expected)
def test_construction_with_categorical_index(self):
- ci = tm.makeCategoricalIndex(10)
- ci.name = "B"
+ ci = CategoricalIndex(list("ab") * 5, name="B")
# with Categorical
df = DataFrame(
diff --git a/pandas/tests/generic/test_to_xarray.py b/pandas/tests/generic/test_to_xarray.py
index 5fb432f849643..e0d79c3f15282 100644
--- a/pandas/tests/generic/test_to_xarray.py
+++ b/pandas/tests/generic/test_to_xarray.py
@@ -18,14 +18,14 @@ class TestDataFrameToXArray:
def df(self):
return DataFrame(
{
- "a": list("abc"),
- "b": list(range(1, 4)),
- "c": np.arange(3, 6).astype("u1"),
- "d": np.arange(4.0, 7.0, dtype="float64"),
- "e": [True, False, True],
- "f": Categorical(list("abc")),
- "g": date_range("20130101", periods=3),
- "h": date_range("20130101", periods=3, tz="US/Eastern"),
+ "a": list("abcd"),
+ "b": list(range(1, 5)),
+ "c": np.arange(3, 7).astype("u1"),
+ "d": np.arange(4.0, 8.0, dtype="float64"),
+ "e": [True, False, True, False],
+ "f": Categorical(list("abcd")),
+ "g": date_range("20130101", periods=4),
+ "h": date_range("20130101", periods=4, tz="US/Eastern"),
}
)
@@ -37,11 +37,11 @@ def test_to_xarray_index_types(self, index_flat, df, using_infer_string):
from xarray import Dataset
- df.index = index[:3]
+ df.index = index[:4]
df.index.name = "foo"
df.columns.name = "bar"
result = df.to_xarray()
- assert result.dims["foo"] == 3
+ assert result.dims["foo"] == 4
assert len(result.coords) == 1
assert len(result.data_vars) == 8
tm.assert_almost_equal(list(result.coords.keys()), ["foo"])
@@ -69,10 +69,10 @@ def test_to_xarray_with_multiindex(self, df, using_infer_string):
from xarray import Dataset
# MultiIndex
- df.index = MultiIndex.from_product([["a"], range(3)], names=["one", "two"])
+ df.index = MultiIndex.from_product([["a"], range(4)], names=["one", "two"])
result = df.to_xarray()
assert result.dims["one"] == 1
- assert result.dims["two"] == 3
+ assert result.dims["two"] == 4
assert len(result.coords) == 2
assert len(result.data_vars) == 8
tm.assert_almost_equal(list(result.coords.keys()), ["one", "two"])
diff --git a/pandas/tests/indexes/categorical/test_category.py b/pandas/tests/indexes/categorical/test_category.py
index 7af4f6809ec64..142a00d32815a 100644
--- a/pandas/tests/indexes/categorical/test_category.py
+++ b/pandas/tests/indexes/categorical/test_category.py
@@ -248,7 +248,7 @@ def test_ensure_copied_data(self):
#
# Must be tested separately from other indexes because
# self.values is not an ndarray.
- index = tm.makeCategoricalIndex(10)
+ index = CategoricalIndex(list("ab") * 5)
result = CategoricalIndex(index.values, copy=True)
tm.assert_index_equal(index, result)
@@ -261,7 +261,7 @@ def test_ensure_copied_data(self):
class TestCategoricalIndex2:
def test_view_i8(self):
# GH#25464
- ci = tm.makeCategoricalIndex(100)
+ ci = CategoricalIndex(list("ab") * 50)
msg = "When changing to a larger dtype, its size must be a divisor"
with pytest.raises(ValueError, match=msg):
ci.view("i8")
diff --git a/pandas/tests/series/test_api.py b/pandas/tests/series/test_api.py
index 9d69321ff7dbb..75b1d370560be 100644
--- a/pandas/tests/series/test_api.py
+++ b/pandas/tests/series/test_api.py
@@ -68,8 +68,8 @@ def test_tab_completion_with_categorical(self):
@pytest.mark.parametrize(
"index",
[
+ Index(list("ab") * 5, dtype="category"),
Index([str(i) for i in range(10)]),
- tm.makeCategoricalIndex(10),
Index(["foo", "bar", "baz"] * 2),
tm.makeDateIndex(10),
tm.makePeriodIndex(10),
| null | https://api.github.com/repos/pandas-dev/pandas/pulls/56186 | 2023-11-26T19:41:03Z | 2023-11-28T17:07:35Z | 2023-11-28T17:07:35Z | 2023-11-28T17:07:38Z |
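The removed `makeCategoricalIndex(k, n)` helper built a length-`k` index cycling through `n` random string categories via codes `np.arange(k) % n`; the replacements in the diff spell the same shape out literally, e.g. `CategoricalIndex(list("ab") * 5)`. A stdlib sketch of the equivalence for `k=10, n=2` (category labels fixed to `"a"`/`"b"` instead of random strings):

```python
k, n = 10, 2
categories = ["a", "b"]

# What the helper did: cycle codes 0, 1, 0, 1, ... and look them up.
codes = [i % n for i in range(k)]
from_codes = [categories[c] for c in codes]

# What the tests use now: the same repeating pattern written out directly.
explicit = list("ab") * 5

assert from_codes == explicit
```

Writing the values inline makes each test's fixture deterministic and self-describing, which is the stated point of retiring the helper.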
Backport PR #56152 on branch 2.1.x (BUG: translate losing object dtype with new string dtype) | diff --git a/doc/source/whatsnew/v2.1.4.rst b/doc/source/whatsnew/v2.1.4.rst
index e4f973611c578..eb28e42d303a1 100644
--- a/doc/source/whatsnew/v2.1.4.rst
+++ b/doc/source/whatsnew/v2.1.4.rst
@@ -25,7 +25,7 @@ Bug fixes
- Bug in :meth:`Index.__getitem__` returning wrong result for Arrow dtypes and negative stepsize (:issue:`55832`)
- Fixed bug in :meth:`DataFrame.__setitem__` casting :class:`Index` with object-dtype to PyArrow backed strings when ``infer_string`` option is set (:issue:`55638`)
- Fixed bug in :meth:`Index.insert` casting object-dtype to PyArrow backed strings when ``infer_string`` option is set (:issue:`55638`)
--
+- Fixed bug in :meth:`Series.str.translate` losing object dtype when string option is set (:issue:`56152`)
.. ---------------------------------------------------------------------------
.. _whatsnew_214.other:
diff --git a/pandas/core/strings/accessor.py b/pandas/core/strings/accessor.py
index b299f5d6deab3..e2a3b9378a4f7 100644
--- a/pandas/core/strings/accessor.py
+++ b/pandas/core/strings/accessor.py
@@ -259,6 +259,7 @@ def _wrap_result(
fill_value=np.nan,
returns_string: bool = True,
returns_bool: bool = False,
+ dtype=None,
):
from pandas import (
Index,
@@ -379,29 +380,29 @@ def cons_row(x):
out = out.get_level_values(0)
return out
else:
- return Index(result, name=name)
+ return Index(result, name=name, dtype=dtype)
else:
index = self._orig.index
# This is a mess.
- dtype: DtypeObj | str | None
+ _dtype: DtypeObj | str | None = dtype
vdtype = getattr(result, "dtype", None)
if self._is_string:
if is_bool_dtype(vdtype):
- dtype = result.dtype
+ _dtype = result.dtype
elif returns_string:
- dtype = self._orig.dtype
+ _dtype = self._orig.dtype
else:
- dtype = vdtype
- else:
- dtype = vdtype
+ _dtype = vdtype
+ elif vdtype is not None:
+ _dtype = vdtype
if expand:
cons = self._orig._constructor_expanddim
- result = cons(result, columns=name, index=index, dtype=dtype)
+ result = cons(result, columns=name, index=index, dtype=_dtype)
else:
# Must be a Series
cons = self._orig._constructor
- result = cons(result, name=name, index=index, dtype=dtype)
+ result = cons(result, name=name, index=index, dtype=_dtype)
result = result.__finalize__(self._orig, method="str")
if name is not None and result.ndim == 1:
# __finalize__ might copy over the original name, but we may
@@ -2317,7 +2318,8 @@ def translate(self, table):
dtype: object
"""
result = self._data.array._str_translate(table)
- return self._wrap_result(result)
+ dtype = object if self._data.dtype == "object" else None
+ return self._wrap_result(result, dtype=dtype)
@forbid_nonstring_types(["bytes"])
def count(self, pat, flags: int = 0):
diff --git a/pandas/tests/strings/test_find_replace.py b/pandas/tests/strings/test_find_replace.py
index 78f0730d730e8..bd64a5dce3b9a 100644
--- a/pandas/tests/strings/test_find_replace.py
+++ b/pandas/tests/strings/test_find_replace.py
@@ -5,6 +5,7 @@
import pytest
from pandas.errors import PerformanceWarning
+import pandas.util._test_decorators as td
import pandas as pd
from pandas import (
@@ -893,7 +894,10 @@ def test_find_nan(any_string_dtype):
# --------------------------------------------------------------------------------------
-def test_translate(index_or_series, any_string_dtype):
+@pytest.mark.parametrize(
+ "infer_string", [False, pytest.param(True, marks=td.skip_if_no("pyarrow"))]
+)
+def test_translate(index_or_series, any_string_dtype, infer_string):
obj = index_or_series(
["abcdefg", "abcc", "cdddfg", "cdefggg"], dtype=any_string_dtype
)
| Backport PR #56152: BUG: translate losing object dtype with new string dtype | https://api.github.com/repos/pandas-dev/pandas/pulls/56185 | 2023-11-26T18:25:40Z | 2023-11-26T20:11:45Z | 2023-11-26T20:11:45Z | 2023-11-26T20:11:45Z |
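The backported fix only changes which dtype `_wrap_result` hands to the `Series` constructor; the translation itself is element-wise `str.translate` with a mapping table. A plain-Python sketch of that element-wise step, reusing the input values from the test (the table below is an illustrative choice, not the one the test uses):

```python
# str.maketrans: map "a" -> "X", drop "c" entirely (None deletes).
table = str.maketrans({"a": "X", "c": None})

data = ["abcdefg", "abcc", "cdddfg", "cdefggg"]  # values from test_translate
result = [s.translate(table) for s in data]

assert result == ["Xbdefg", "Xb", "dddfg", "defggg"]
```

With the fix, `Series.str.translate` performs exactly this per element but pins `dtype=object` on the wrapped result when the input Series was object dtype, so enabling the `infer_string` option no longer silently upgrades the output to the pyarrow string dtype.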
Adjust tests in root directory for new string option | diff --git a/doc/source/whatsnew/v2.1.4.rst b/doc/source/whatsnew/v2.1.4.rst
index 0cbf211305d12..5f6514946d4a2 100644
--- a/doc/source/whatsnew/v2.1.4.rst
+++ b/doc/source/whatsnew/v2.1.4.rst
@@ -26,6 +26,7 @@ Bug fixes
- Fixed bug in :meth:`DataFrame.__setitem__` casting :class:`Index` with object-dtype to PyArrow backed strings when ``infer_string`` option is set (:issue:`55638`)
- Fixed bug in :meth:`DataFrame.to_hdf` raising when columns have ``StringDtype`` (:issue:`55088`)
- Fixed bug in :meth:`Index.insert` casting object-dtype to PyArrow backed strings when ``infer_string`` option is set (:issue:`55638`)
+- Fixed bug in :meth:`Series.mode` not keeping object dtype when ``infer_string`` is set (:issue:`56183`)
- Fixed bug in :meth:`Series.str.translate` losing object dtype when string option is set (:issue:`56152`)
.. ---------------------------------------------------------------------------
diff --git a/pandas/core/series.py b/pandas/core/series.py
index 888e8cc4e7d40..1b6fa912b7dc3 100644
--- a/pandas/core/series.py
+++ b/pandas/core/series.py
@@ -2302,7 +2302,11 @@ def mode(self, dropna: bool = True) -> Series:
# Ensure index is type stable (should always use int index)
return self._constructor(
- res_values, index=range(len(res_values)), name=self.name, copy=False
+ res_values,
+ index=range(len(res_values)),
+ name=self.name,
+ copy=False,
+ dtype=self.dtype,
).__finalize__(self, method="mode")
def unique(self) -> ArrayLike: # pylint: disable=useless-parent-delegation
diff --git a/pandas/tests/series/test_reductions.py b/pandas/tests/series/test_reductions.py
index 4bbbcf3bf54c2..76353ab25fca6 100644
--- a/pandas/tests/series/test_reductions.py
+++ b/pandas/tests/series/test_reductions.py
@@ -51,6 +51,16 @@ def test_mode_nullable_dtype(any_numeric_ea_dtype):
tm.assert_series_equal(result, expected)
+def test_mode_infer_string():
+ # GH#56183
+ pytest.importorskip("pyarrow")
+ ser = Series(["a", "b"], dtype=object)
+ with pd.option_context("future.infer_string", True):
+ result = ser.mode()
+ expected = Series(["a", "b"], dtype=object)
+ tm.assert_series_equal(result, expected)
+
+
def test_reductions_td64_with_nat():
# GH#8617
ser = Series([0, pd.NaT], dtype="m8[ns]")
diff --git a/pandas/tests/test_algos.py b/pandas/tests/test_algos.py
index e5302ec9833f1..5356704cc64a2 100644
--- a/pandas/tests/test_algos.py
+++ b/pandas/tests/test_algos.py
@@ -1946,7 +1946,7 @@ def test_timedelta_mode(self):
tm.assert_series_equal(ser.mode(), exp)
def test_mixed_dtype(self):
- exp = Series(["foo"])
+ exp = Series(["foo"], dtype=object)
ser = Series([1, "foo", "foo"])
tm.assert_numpy_array_equal(algos.mode(ser.values), exp.values)
tm.assert_series_equal(ser.mode(), exp)
| sits on #56183 | https://api.github.com/repos/pandas-dev/pandas/pulls/56184 | 2023-11-26T18:06:06Z | 2023-11-30T17:45:44Z | 2023-11-30T17:45:44Z | 2023-11-30T17:46:31Z |
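The underlying `Series.mode` fix is a one-liner: thread `dtype=self.dtype` into the constructor so dtype inference (e.g. `future.infer_string`) cannot change an object-dtype result. The mode computation itself is unaffected; a stdlib sketch of what it returns (illustrative helper, not pandas' actual implementation — pandas uses a hashtable-based routine):

```python
from collections import Counter

def mode(values):
    # Return all most-frequent values; ties are all kept, like Series.mode.
    counts = Counter(values)
    top = max(counts.values())
    return sorted(v for v, c in counts.items() if c == top)

assert mode(["a", "b"]) == ["a", "b"]     # everything modal on a tie
assert mode([1, "foo", "foo"]) == ["foo"]  # the mixed-dtype case from test_algos
```

In the tie case the regression test asserts the result stays `dtype=object` rather than being inferred as `string[pyarrow_numpy]`, which is precisely what the explicit `dtype=self.dtype` guarantees.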
BUG: mode not preserving object dtype for string option | diff --git a/doc/source/whatsnew/v2.1.4.rst b/doc/source/whatsnew/v2.1.4.rst
index 0cbf211305d12..5f6514946d4a2 100644
--- a/doc/source/whatsnew/v2.1.4.rst
+++ b/doc/source/whatsnew/v2.1.4.rst
@@ -26,6 +26,7 @@ Bug fixes
- Fixed bug in :meth:`DataFrame.__setitem__` casting :class:`Index` with object-dtype to PyArrow backed strings when ``infer_string`` option is set (:issue:`55638`)
- Fixed bug in :meth:`DataFrame.to_hdf` raising when columns have ``StringDtype`` (:issue:`55088`)
- Fixed bug in :meth:`Index.insert` casting object-dtype to PyArrow backed strings when ``infer_string`` option is set (:issue:`55638`)
+- Fixed bug in :meth:`Series.mode` not keeping object dtype when ``infer_string`` is set (:issue:`56183`)
- Fixed bug in :meth:`Series.str.translate` losing object dtype when string option is set (:issue:`56152`)
.. ---------------------------------------------------------------------------
diff --git a/pandas/core/series.py b/pandas/core/series.py
index 888e8cc4e7d40..1b6fa912b7dc3 100644
--- a/pandas/core/series.py
+++ b/pandas/core/series.py
@@ -2302,7 +2302,11 @@ def mode(self, dropna: bool = True) -> Series:
# Ensure index is type stable (should always use int index)
return self._constructor(
- res_values, index=range(len(res_values)), name=self.name, copy=False
+ res_values,
+ index=range(len(res_values)),
+ name=self.name,
+ copy=False,
+ dtype=self.dtype,
).__finalize__(self, method="mode")
def unique(self) -> ArrayLike: # pylint: disable=useless-parent-delegation
diff --git a/pandas/tests/series/test_reductions.py b/pandas/tests/series/test_reductions.py
index 4bbbcf3bf54c2..76353ab25fca6 100644
--- a/pandas/tests/series/test_reductions.py
+++ b/pandas/tests/series/test_reductions.py
@@ -51,6 +51,16 @@ def test_mode_nullable_dtype(any_numeric_ea_dtype):
tm.assert_series_equal(result, expected)
+def test_mode_infer_string():
+ # GH#56183
+ pytest.importorskip("pyarrow")
+ ser = Series(["a", "b"], dtype=object)
+ with pd.option_context("future.infer_string", True):
+ result = ser.mode()
+ expected = Series(["a", "b"], dtype=object)
+ tm.assert_series_equal(result, expected)
+
+
def test_reductions_td64_with_nat():
# GH#8617
ser = Series([0, pd.NaT], dtype="m8[ns]")
| - [ ] closes #xxxx (Replace xxxx with the GitHub issue number)
- [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
| https://api.github.com/repos/pandas-dev/pandas/pulls/56183 | 2023-11-26T18:04:12Z | 2023-11-30T17:47:59Z | 2023-11-30T17:47:59Z | 2023-11-30T17:48:45Z |
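The fix above threads `dtype=self.dtype` through the `_constructor` call so the modal values keep the input's dtype instead of being re-inferred. As a rough illustration of the mode semantics involved (a plain-Python sketch, not pandas internals; `mode` is a hypothetical helper):

```python
from collections import Counter

def mode(values):
    # Return every value that occurs with the maximal frequency,
    # in sorted order -- mirroring Series.mode() tie behavior.
    counts = Counter(values)
    top = max(counts.values())
    return sorted(v for v, c in counts.items() if c == top)

print(mode(["a", "b"]))       # both tie -> ['a', 'b']
print(mode(["a", "b", "b"]))  # -> ['b']
```

In the PR, the equivalent of this result is then wrapped back into a `Series` with the original dtype, which is what the `dtype=self.dtype` addition guarantees under `future.infer_string`.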
Adjust tests in window folder for new string option | diff --git a/pandas/tests/window/test_api.py b/pandas/tests/window/test_api.py
index 33858e10afd75..fe2da210c6fe9 100644
--- a/pandas/tests/window/test_api.py
+++ b/pandas/tests/window/test_api.py
@@ -70,7 +70,9 @@ def tests_skip_nuisance(step):
def test_sum_object_str_raises(step):
df = DataFrame({"A": range(5), "B": range(5, 10), "C": "foo"})
r = df.rolling(window=3, step=step)
- with pytest.raises(DataError, match="Cannot aggregate non-numeric type: object"):
+ with pytest.raises(
+ DataError, match="Cannot aggregate non-numeric type: object|string"
+ ):
# GH#42738, enforced in 2.0
r.sum()
diff --git a/pandas/tests/window/test_groupby.py b/pandas/tests/window/test_groupby.py
index 4fd33d54ef846..400bf10817ab8 100644
--- a/pandas/tests/window/test_groupby.py
+++ b/pandas/tests/window/test_groupby.py
@@ -1181,7 +1181,9 @@ def test_pairwise_methods(self, method, expected_data):
)
tm.assert_frame_equal(result, expected)
- expected = df.groupby("A").apply(lambda x: getattr(x.ewm(com=1.0), method)())
+ expected = df.groupby("A")[["B"]].apply(
+ lambda x: getattr(x.ewm(com=1.0), method)()
+ )
tm.assert_frame_equal(result, expected)
def test_times(self, times_frame):
| - [ ] closes #xxxx (Replace xxxx with the GitHub issue number)
- [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
| https://api.github.com/repos/pandas-dev/pandas/pulls/56182 | 2023-11-26T17:55:09Z | 2023-11-27T17:42:24Z | 2023-11-27T17:42:24Z | 2023-11-27T17:42:57Z |
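The `match` argument to `pytest.raises` is a regex searched against the error message, so the edited pattern accepts either dtype name. Note that `|` has the lowest precedence in a regex: `"Cannot aggregate non-numeric type: object|string"` parses as `"...type: object"` OR a bare `"string"` anywhere in the message. A small sketch of that matching behavior (the messages here are made up for illustration):

```python
import re

pattern = "Cannot aggregate non-numeric type: object|string"

# Left branch: the full "object" message.
assert re.search(pattern, "Cannot aggregate non-numeric type: object")
# Right branch is just the bare word "string"...
assert re.search(pattern, "Cannot aggregate non-numeric type: string")
# ...so any message containing "string" also matches.
assert re.search(pattern, "some unrelated string error")
```

A stricter pattern would group the alternation, e.g. `"non-numeric type: (object|string)"`, but for a test that only needs to tolerate both dtypes the loose form is sufficient.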
Adjust tests in util folder for new string option | diff --git a/pandas/tests/util/test_assert_frame_equal.py b/pandas/tests/util/test_assert_frame_equal.py
index 2d3b47cd2e994..0c93ee453bb1c 100644
--- a/pandas/tests/util/test_assert_frame_equal.py
+++ b/pandas/tests/util/test_assert_frame_equal.py
@@ -109,12 +109,16 @@ def test_empty_dtypes(check_dtype):
@pytest.mark.parametrize("check_like", [True, False])
-def test_frame_equal_index_mismatch(check_like, obj_fixture):
+def test_frame_equal_index_mismatch(check_like, obj_fixture, using_infer_string):
+ if using_infer_string:
+ dtype = "string"
+ else:
+ dtype = "object"
msg = f"""{obj_fixture}\\.index are different
{obj_fixture}\\.index values are different \\(33\\.33333 %\\)
-\\[left\\]: Index\\(\\['a', 'b', 'c'\\], dtype='object'\\)
-\\[right\\]: Index\\(\\['a', 'b', 'd'\\], dtype='object'\\)
+\\[left\\]: Index\\(\\['a', 'b', 'c'\\], dtype='{dtype}'\\)
+\\[right\\]: Index\\(\\['a', 'b', 'd'\\], dtype='{dtype}'\\)
At positional index 2, first diff: c != d"""
df1 = DataFrame({"A": [1, 2, 3], "B": [4, 5, 6]}, index=["a", "b", "c"])
@@ -125,12 +129,16 @@ def test_frame_equal_index_mismatch(check_like, obj_fixture):
@pytest.mark.parametrize("check_like", [True, False])
-def test_frame_equal_columns_mismatch(check_like, obj_fixture):
+def test_frame_equal_columns_mismatch(check_like, obj_fixture, using_infer_string):
+ if using_infer_string:
+ dtype = "string"
+ else:
+ dtype = "object"
msg = f"""{obj_fixture}\\.columns are different
{obj_fixture}\\.columns values are different \\(50\\.0 %\\)
-\\[left\\]: Index\\(\\['A', 'B'\\], dtype='object'\\)
-\\[right\\]: Index\\(\\['A', 'b'\\], dtype='object'\\)"""
+\\[left\\]: Index\\(\\['A', 'B'\\], dtype='{dtype}'\\)
+\\[right\\]: Index\\(\\['A', 'b'\\], dtype='{dtype}'\\)"""
df1 = DataFrame({"A": [1, 2, 3], "B": [4, 5, 6]}, index=["a", "b", "c"])
df2 = DataFrame({"A": [1, 2, 3], "b": [4, 5, 6]}, index=["a", "b", "c"])
diff --git a/pandas/tests/util/test_assert_index_equal.py b/pandas/tests/util/test_assert_index_equal.py
index 15263db9ec645..dc6efdcec380e 100644
--- a/pandas/tests/util/test_assert_index_equal.py
+++ b/pandas/tests/util/test_assert_index_equal.py
@@ -205,14 +205,18 @@ def test_index_equal_names(name1, name2):
tm.assert_index_equal(idx1, idx2)
-def test_index_equal_category_mismatch(check_categorical):
- msg = """Index are different
+def test_index_equal_category_mismatch(check_categorical, using_infer_string):
+ if using_infer_string:
+ dtype = "string"
+ else:
+ dtype = "object"
+ msg = f"""Index are different
Attribute "dtype" are different
\\[left\\]: CategoricalDtype\\(categories=\\['a', 'b'\\], ordered=False, \
-categories_dtype=object\\)
+categories_dtype={dtype}\\)
\\[right\\]: CategoricalDtype\\(categories=\\['a', 'b', 'c'\\], \
-ordered=False, categories_dtype=object\\)"""
+ordered=False, categories_dtype={dtype}\\)"""
idx1 = Index(Categorical(["a", "b"]))
idx2 = Index(Categorical(["a", "b"], categories=["a", "b", "c"]))
diff --git a/pandas/tests/util/test_assert_series_equal.py b/pandas/tests/util/test_assert_series_equal.py
index 12b5987cdb3de..ffc5e105d6f2f 100644
--- a/pandas/tests/util/test_assert_series_equal.py
+++ b/pandas/tests/util/test_assert_series_equal.py
@@ -214,8 +214,18 @@ def test_series_equal_numeric_values_mismatch(rtol):
tm.assert_series_equal(s1, s2, rtol=rtol)
-def test_series_equal_categorical_values_mismatch(rtol):
- msg = """Series are different
+def test_series_equal_categorical_values_mismatch(rtol, using_infer_string):
+ if using_infer_string:
+ msg = """Series are different
+
+Series values are different \\(66\\.66667 %\\)
+\\[index\\]: \\[0, 1, 2\\]
+\\[left\\]: \\['a', 'b', 'c'\\]
+Categories \\(3, string\\): \\[a, b, c\\]
+\\[right\\]: \\['a', 'c', 'b'\\]
+Categories \\(3, string\\): \\[a, b, c\\]"""
+ else:
+ msg = """Series are different
Series values are different \\(66\\.66667 %\\)
\\[index\\]: \\[0, 1, 2\\]
@@ -246,14 +256,18 @@ def test_series_equal_datetime_values_mismatch(rtol):
tm.assert_series_equal(s1, s2, rtol=rtol)
-def test_series_equal_categorical_mismatch(check_categorical):
- msg = """Attributes of Series are different
+def test_series_equal_categorical_mismatch(check_categorical, using_infer_string):
+ if using_infer_string:
+ dtype = "string"
+ else:
+ dtype = "object"
+ msg = f"""Attributes of Series are different
Attribute "dtype" are different
\\[left\\]: CategoricalDtype\\(categories=\\['a', 'b'\\], ordered=False, \
-categories_dtype=object\\)
+categories_dtype={dtype}\\)
\\[right\\]: CategoricalDtype\\(categories=\\['a', 'b', 'c'\\], \
-ordered=False, categories_dtype=object\\)"""
+ordered=False, categories_dtype={dtype}\\)"""
s1 = Series(Categorical(["a", "b"]))
s2 = Series(Categorical(["a", "b"], categories=list("abc")))
| - [ ] closes #xxxx (Replace xxxx with the GitHub issue number)
- [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
| https://api.github.com/repos/pandas-dev/pandas/pulls/56181 | 2023-11-26T17:35:58Z | 2023-11-26T18:36:13Z | 2023-11-26T18:36:13Z | 2023-11-26T19:34:10Z |
Adjust tests in tseries folder for new string option | diff --git a/pandas/tests/tseries/frequencies/test_inference.py b/pandas/tests/tseries/frequencies/test_inference.py
index ee8f161858b2b..75528a8b99c4d 100644
--- a/pandas/tests/tseries/frequencies/test_inference.py
+++ b/pandas/tests/tseries/frequencies/test_inference.py
@@ -431,12 +431,18 @@ def test_series_invalid_type(end):
frequencies.infer_freq(s)
-def test_series_inconvertible_string():
+def test_series_inconvertible_string(using_infer_string):
# see gh-6407
- msg = "Unknown datetime string format"
+ if using_infer_string:
+ msg = "cannot infer freq from"
- with pytest.raises(ValueError, match=msg):
- frequencies.infer_freq(Series(["foo", "bar"]))
+ with pytest.raises(TypeError, match=msg):
+ frequencies.infer_freq(Series(["foo", "bar"]))
+ else:
+ msg = "Unknown datetime string format"
+
+ with pytest.raises(ValueError, match=msg):
+ frequencies.infer_freq(Series(["foo", "bar"]))
@pytest.mark.parametrize("freq", [None, "ms"])
| - [ ] closes #xxxx (Replace xxxx with the GitHub issue number)
- [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
| https://api.github.com/repos/pandas-dev/pandas/pulls/56180 | 2023-11-26T17:28:06Z | 2023-11-26T18:25:44Z | 2023-11-26T18:25:44Z | 2023-11-26T18:25:56Z |
BUG: to_numeric casting to ea for new string dtype | diff --git a/doc/source/whatsnew/v2.1.4.rst b/doc/source/whatsnew/v2.1.4.rst
index 0cbf211305d12..332f8da704e38 100644
--- a/doc/source/whatsnew/v2.1.4.rst
+++ b/doc/source/whatsnew/v2.1.4.rst
@@ -23,6 +23,7 @@ Bug fixes
~~~~~~~~~
- Bug in :class:`Series` constructor raising DeprecationWarning when ``index`` is a list of :class:`Series` (:issue:`55228`)
- Bug in :meth:`Index.__getitem__` returning wrong result for Arrow dtypes and negative stepsize (:issue:`55832`)
+- Fixed bug in :func:`to_numeric` converting to extension dtype for ``string[pyarrow_numpy]`` dtype (:issue:`56179`)
- Fixed bug in :meth:`DataFrame.__setitem__` casting :class:`Index` with object-dtype to PyArrow backed strings when ``infer_string`` option is set (:issue:`55638`)
- Fixed bug in :meth:`DataFrame.to_hdf` raising when columns have ``StringDtype`` (:issue:`55088`)
- Fixed bug in :meth:`Index.insert` casting object-dtype to PyArrow backed strings when ``infer_string`` option is set (:issue:`55638`)
diff --git a/pandas/core/tools/numeric.py b/pandas/core/tools/numeric.py
index c5a2736d4f926..09652a7d8bc92 100644
--- a/pandas/core/tools/numeric.py
+++ b/pandas/core/tools/numeric.py
@@ -234,7 +234,8 @@ def to_numeric(
set(),
coerce_numeric=coerce_numeric,
convert_to_masked_nullable=dtype_backend is not lib.no_default
- or isinstance(values_dtype, StringDtype),
+ or isinstance(values_dtype, StringDtype)
+ and not values_dtype.storage == "pyarrow_numpy",
)
except (ValueError, TypeError):
if errors == "raise":
@@ -249,6 +250,7 @@ def to_numeric(
dtype_backend is not lib.no_default
and new_mask is None
or isinstance(values_dtype, StringDtype)
+ and not values_dtype.storage == "pyarrow_numpy"
):
new_mask = np.zeros(values.shape, dtype=np.bool_)
diff --git a/pandas/tests/tools/test_to_numeric.py b/pandas/tests/tools/test_to_numeric.py
index d6b085b7954db..c452382ec572b 100644
--- a/pandas/tests/tools/test_to_numeric.py
+++ b/pandas/tests/tools/test_to_numeric.py
@@ -4,12 +4,15 @@
from numpy import iinfo
import pytest
+import pandas.util._test_decorators as td
+
import pandas as pd
from pandas import (
ArrowDtype,
DataFrame,
Index,
Series,
+ option_context,
to_numeric,
)
import pandas._testing as tm
@@ -67,10 +70,14 @@ def test_empty(input_kwargs, result_kwargs):
tm.assert_series_equal(result, expected)
+@pytest.mark.parametrize(
+ "infer_string", [False, pytest.param(True, marks=td.skip_if_no("pyarrow"))]
+)
@pytest.mark.parametrize("last_val", ["7", 7])
-def test_series(last_val):
- ser = Series(["1", "-3.14", last_val])
- result = to_numeric(ser)
+def test_series(last_val, infer_string):
+ with option_context("future.infer_string", infer_string):
+ ser = Series(["1", "-3.14", last_val])
+ result = to_numeric(ser)
expected = Series([1, -3.14, 7])
tm.assert_series_equal(result, expected)
| - [ ] closes #xxxx (Replace xxxx with the GitHub issue number)
- [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
| https://api.github.com/repos/pandas-dev/pandas/pulls/56179 | 2023-11-26T17:24:35Z | 2023-11-30T17:46:47Z | 2023-11-30T17:46:47Z | 2023-11-30T17:47:22Z |
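The conditions added in `to_numeric` rely on Python's operator precedence: `and` binds tighter than `or`, so `A or B and C` parses as `A or (B and C)`, meaning the `pyarrow_numpy` storage check only constrains the `StringDtype` branch. A minimal sketch of that precedence rule, independent of pandas:

```python
# `and` binds tighter than `or`:  A or B and C  ==  A or (B and C)
A, B, C = False, True, False
assert (A or B and C) == (A or (B and C))
assert (A or B and C) is False   # not ((A or B) and C) in general...

A = True
assert (A or B and C) is True    # ...and here the parenthesizations diverge:
assert ((A or B) and C) is False
```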
Adjust test in tools for new string option | diff --git a/pandas/tests/tools/test_to_datetime.py b/pandas/tests/tools/test_to_datetime.py
index 503032293dc81..ee209f74eb4e0 100644
--- a/pandas/tests/tools/test_to_datetime.py
+++ b/pandas/tests/tools/test_to_datetime.py
@@ -183,7 +183,7 @@ def test_to_datetime_format_YYYYMMDD_ignore_with_outofbounds(self, cache):
errors="ignore",
cache=cache,
)
- expected = Index(["15010101", "20150101", np.nan])
+ expected = Index(["15010101", "20150101", np.nan], dtype=object)
tm.assert_index_equal(result, expected)
def test_to_datetime_format_YYYYMMDD_coercion(self, cache):
@@ -1206,7 +1206,9 @@ def test_out_of_bounds_errors_ignore2(self):
# GH#12424
msg = "errors='ignore' is deprecated"
with tm.assert_produces_warning(FutureWarning, match=msg):
- res = to_datetime(Series(["2362-01-01", np.nan]), errors="ignore")
+ res = to_datetime(
+ Series(["2362-01-01", np.nan], dtype=object), errors="ignore"
+ )
exp = Series(["2362-01-01", np.nan], dtype=object)
tm.assert_series_equal(res, exp)
@@ -1489,7 +1491,7 @@ def test_datetime_invalid_index(self, values, format):
warn, match="Could not infer format", raise_on_extra_warnings=False
):
res = to_datetime(values, errors="ignore", format=format)
- tm.assert_index_equal(res, Index(values))
+ tm.assert_index_equal(res, Index(values, dtype=object))
with tm.assert_produces_warning(
warn, match="Could not infer format", raise_on_extra_warnings=False
@@ -1667,7 +1669,7 @@ def test_to_datetime_coerce_oob(self, string_arg, format, outofbounds):
"errors, expected",
[
("coerce", Index([NaT, NaT])),
- ("ignore", Index(["200622-12-31", "111111-24-11"])),
+ ("ignore", Index(["200622-12-31", "111111-24-11"], dtype=object)),
],
)
def test_to_datetime_malformed_no_raise(self, errors, expected):
@@ -2681,7 +2683,7 @@ def test_string_na_nat_conversion_malformed(self, cache):
result = to_datetime(malformed, errors="ignore", cache=cache)
# GH 21864
- expected = Index(malformed)
+ expected = Index(malformed, dtype=object)
tm.assert_index_equal(result, expected)
with pytest.raises(ValueError, match=msg):
@@ -3670,7 +3672,7 @@ def test_to_datetime_mixed_not_necessarily_iso8601_raise():
("errors", "expected"),
[
("coerce", DatetimeIndex(["2020-01-01 00:00:00", NaT])),
- ("ignore", Index(["2020-01-01", "01-01-2000"])),
+ ("ignore", Index(["2020-01-01", "01-01-2000"], dtype=object)),
],
)
def test_to_datetime_mixed_not_necessarily_iso8601_coerce(errors, expected):
| - [ ] closes #xxxx (Replace xxxx with the GitHub issue number)
- [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
| https://api.github.com/repos/pandas-dev/pandas/pulls/56178 | 2023-11-26T17:19:34Z | 2023-11-26T18:24:53Z | 2023-11-26T18:24:53Z | 2023-11-26T18:25:02Z |
DOC: Fix broken link for mamba installation | diff --git a/doc/source/development/contributing_environment.rst b/doc/source/development/contributing_environment.rst
index 0cc1fe2629e46..7fc42f6021f00 100644
--- a/doc/source/development/contributing_environment.rst
+++ b/doc/source/development/contributing_environment.rst
@@ -86,7 +86,7 @@ Before we begin, please:
Option 1: using mamba (recommended)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-* Install `mamba <https://mamba.readthedocs.io/en/latest/installation.html>`_
+* Install `mamba <https://mamba.readthedocs.io/en/latest/installation/mamba-installation.html>`_
* Make sure your mamba is up to date (``mamba update mamba``)
.. code-block:: none
| Prior to this change, this link:
https://pandas.pydata.org/docs/dev/development/contributing_environment.html#contributing-mamba
<img width="1051" alt="image" src="https://github.com/pandas-dev/pandas/assets/122238526/aa20e607-a764-49c2-92fc-e32a7c92fae8">
was broken:

With this change it now goes here:

- [ ] closes #xxxx (Replace xxxx with the GitHub issue number)
- [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
| https://api.github.com/repos/pandas-dev/pandas/pulls/56176 | 2023-11-26T05:04:12Z | 2023-11-26T05:05:52Z | 2023-11-26T05:05:52Z | 2023-11-27T00:18:12Z |
ENH: Allow dictionaries to be passed to pandas.Series.str.replace | diff --git a/doc/source/whatsnew/v3.0.0.rst b/doc/source/whatsnew/v3.0.0.rst
index 8b8f5bf3d028c..84b6d12d71165 100644
--- a/doc/source/whatsnew/v3.0.0.rst
+++ b/doc/source/whatsnew/v3.0.0.rst
@@ -30,6 +30,7 @@ Other enhancements
^^^^^^^^^^^^^^^^^^
- :func:`DataFrame.to_excel` now raises an ``UserWarning`` when the character count in a cell exceeds Excel's limitation of 32767 characters (:issue:`56954`)
- :func:`read_stata` now returns ``datetime64`` resolutions better matching those natively stored in the stata format (:issue:`55642`)
+- Allow dictionaries to be passed to :meth:`pandas.Series.str.replace` via ``pat`` parameter (:issue:`51748`)
-
.. ---------------------------------------------------------------------------
diff --git a/pandas/core/strings/accessor.py b/pandas/core/strings/accessor.py
index bd523969fba13..dc8a71c0122c5 100644
--- a/pandas/core/strings/accessor.py
+++ b/pandas/core/strings/accessor.py
@@ -1427,8 +1427,8 @@ def fullmatch(self, pat, case: bool = True, flags: int = 0, na=None):
@forbid_nonstring_types(["bytes"])
def replace(
self,
- pat: str | re.Pattern,
- repl: str | Callable,
+ pat: str | re.Pattern | dict,
+ repl: str | Callable | None = None,
n: int = -1,
case: bool | None = None,
flags: int = 0,
@@ -1442,11 +1442,14 @@ def replace(
Parameters
----------
- pat : str or compiled regex
+ pat : str, compiled regex, or a dict
String can be a character sequence or regular expression.
+ Dictionary contains <key : value> pairs of strings to be replaced
+ along with the updated value.
repl : str or callable
Replacement string or a callable. The callable is passed the regex
match object and must return a replacement string to be used.
+ Must have a value of None if `pat` is a dict
See :func:`re.sub`.
n : int, default -1 (all)
Number of replacements to make from start.
@@ -1480,6 +1483,7 @@ def replace(
* if `regex` is False and `repl` is a callable or `pat` is a compiled
regex
* if `pat` is a compiled regex and `case` or `flags` is set
+ * if `pat` is a dictionary and `repl` is not None.
Notes
-----
@@ -1489,6 +1493,15 @@ def replace(
Examples
--------
+ When `pat` is a dictionary, every key in `pat` is replaced
+ with its corresponding value:
+
+ >>> pd.Series(["A", "B", np.nan]).str.replace(pat={"A": "a", "B": "b"})
+ 0 a
+ 1 b
+ 2 NaN
+ dtype: object
+
When `pat` is a string and `regex` is True, the given `pat`
is compiled as a regex. When `repl` is a string, it replaces matching
regex patterns as with :meth:`re.sub`. NaN value(s) in the Series are
@@ -1551,8 +1564,11 @@ def replace(
2 NaN
dtype: object
"""
+ if isinstance(pat, dict) and repl is not None:
+ raise ValueError("repl cannot be used when pat is a dictionary")
+
# Check whether repl is valid (GH 13438, GH 15055)
- if not (isinstance(repl, str) or callable(repl)):
+ if not isinstance(pat, dict) and not (isinstance(repl, str) or callable(repl)):
raise TypeError("repl must be a string or callable")
is_compiled_re = is_re(pat)
@@ -1572,10 +1588,17 @@ def replace(
if case is None:
case = True
- result = self._data.array._str_replace(
- pat, repl, n=n, case=case, flags=flags, regex=regex
- )
- return self._wrap_result(result)
+ res_output = self._data
+ if not isinstance(pat, dict):
+ pat = {pat: repl}
+
+ for key, value in pat.items():
+ result = res_output.array._str_replace(
+ key, value, n=n, case=case, flags=flags, regex=regex
+ )
+ res_output = self._wrap_result(result)
+
+ return res_output
@forbid_nonstring_types(["bytes"])
def repeat(self, repeats):
diff --git a/pandas/tests/strings/test_find_replace.py b/pandas/tests/strings/test_find_replace.py
index 9f0994b968a47..f2233a1110059 100644
--- a/pandas/tests/strings/test_find_replace.py
+++ b/pandas/tests/strings/test_find_replace.py
@@ -355,6 +355,21 @@ def test_endswith_nullable_string_dtype(nullable_string_dtype, na):
# --------------------------------------------------------------------------------------
# str.replace
# --------------------------------------------------------------------------------------
+def test_replace_dict_invalid(any_string_dtype):
+ # GH 51914
+ series = Series(data=["A", "B_junk", "C_gunk"], name="my_messy_col")
+ msg = "repl cannot be used when pat is a dictionary"
+
+ with pytest.raises(ValueError, match=msg):
+ series.str.replace(pat={"A": "a", "B": "b"}, repl="A")
+
+
+def test_replace_dict(any_string_dtype):
+ # GH 51914
+ series = Series(data=["A", "B", "C"], name="my_messy_col")
+ new_series = series.str.replace(pat={"A": "a", "B": "b"})
+ expected = Series(data=["a", "b", "C"], name="my_messy_col")
+ tm.assert_series_equal(new_series, expected)
def test_replace(any_string_dtype):
| - [X] closes #51748
- [X] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [X] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [X] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [X] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
| https://api.github.com/repos/pandas-dev/pandas/pulls/56175 | 2023-11-26T01:04:22Z | 2024-02-12T22:27:47Z | 2024-02-12T22:27:47Z | 2024-02-17T17:19:57Z |
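The loop added in `accessor.py` applies each `(pattern, replacement)` pair of the dict as a separate replace pass over the data. A plain-Python sketch of the same idea (`replace_all` is a hypothetical helper standing in for the accessor's `for key, value in pat.items()` loop):

```python
import re

def replace_all(values, pat: dict, regex: bool = True):
    # Apply each pattern -> replacement pair in turn, skipping
    # non-string entries (pandas leaves NaN untouched similarly).
    out = list(values)
    for key, value in pat.items():
        if regex:
            out = [re.sub(key, value, v) if isinstance(v, str) else v for v in out]
        else:
            out = [v.replace(key, value) if isinstance(v, str) else v for v in out]
    return out

print(replace_all(["A", "B_junk", "C"], {"A": "a", "B": "b"}))  # ['a', 'b_junk', 'C']
```

Because the passes are sequential, a later pattern can match text produced by an earlier replacement -- the same ordering consideration applies to the accessor's dict-based API.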
CI: Add 3.12 builds | diff --git a/.github/workflows/unit-tests.yml b/.github/workflows/unit-tests.yml
index 4c38324280528..30397632a0af6 100644
--- a/.github/workflows/unit-tests.yml
+++ b/.github/workflows/unit-tests.yml
@@ -26,7 +26,7 @@ jobs:
timeout-minutes: 90
strategy:
matrix:
- env_file: [actions-39.yaml, actions-310.yaml, actions-311.yaml]
+ env_file: [actions-39.yaml, actions-310.yaml, actions-311.yaml, actions-312.yaml]
# Prevent the include jobs from overriding other jobs
pattern: [""]
include:
@@ -69,6 +69,10 @@ jobs:
env_file: actions-311.yaml
pattern: "not slow and not network and not single_cpu"
pandas_copy_on_write: "1"
+ - name: "Copy-on-Write 3.12"
+ env_file: actions-312.yaml
+ pattern: "not slow and not network and not single_cpu"
+ pandas_copy_on_write: "1"
- name: "Copy-on-Write 3.11 (warnings)"
env_file: actions-311.yaml
pattern: "not slow and not network and not single_cpu"
@@ -190,7 +194,7 @@ jobs:
strategy:
matrix:
os: [macos-latest, windows-latest]
- env_file: [actions-39.yaml, actions-310.yaml, actions-311.yaml]
+ env_file: [actions-39.yaml, actions-310.yaml, actions-311.yaml, actions-312.yaml]
fail-fast: false
runs-on: ${{ matrix.os }}
name: ${{ format('{0} {1}', matrix.os, matrix.env_file) }}
@@ -321,7 +325,7 @@ jobs:
# To freeze this file, uncomment out the ``if: false`` condition, and migrate the jobs
# to the corresponding posix/windows-macos/sdist etc. workflows.
# Feel free to modify this comment as necessary.
- #if: false # Uncomment this to freeze the workflow, comment it to unfreeze
+ if: false # Uncomment this to freeze the workflow, comment it to unfreeze
defaults:
run:
shell: bash -eou pipefail {0}
diff --git a/ci/deps/actions-312.yaml b/ci/deps/actions-312.yaml
new file mode 100644
index 0000000000000..394b65525c791
--- /dev/null
+++ b/ci/deps/actions-312.yaml
@@ -0,0 +1,63 @@
+name: pandas-dev-312
+channels:
+ - conda-forge
+dependencies:
+ - python=3.12
+
+ # build dependencies
+ - versioneer[toml]
+ - cython>=0.29.33
+ - meson[ninja]=1.2.1
+ - meson-python=0.13.1
+
+ # test dependencies
+ - pytest>=7.3.2
+ - pytest-cov
+ - pytest-xdist>=2.2.0
+ - pytest-localserver>=0.7.1
+ - pytest-qt>=4.2.0
+ - boto3
+
+ # required dependencies
+ - python-dateutil
+ - numpy<2
+ - pytz
+
+ # optional dependencies
+ - beautifulsoup4>=4.11.2
+ - blosc>=1.21.3
+ - bottleneck>=1.3.6
+ - fastparquet>=2022.12.0
+ - fsspec>=2022.11.0
+ - html5lib>=1.1
+ - hypothesis>=6.46.1
+ - gcsfs>=2022.11.0
+ - jinja2>=3.1.2
+ - lxml>=4.9.2
+ - matplotlib>=3.6.3
+ # - numba>=0.56.4
+ - numexpr>=2.8.4
+ - odfpy>=1.4.1
+ - qtpy>=2.3.0
+ - pyqt>=5.15.9
+ - openpyxl>=3.1.0
+ - psycopg2>=2.9.6
+ - pyarrow>=10.0.1
+ - pymysql>=1.0.2
+ - pyreadstat>=1.2.0
+ # - pytables>=3.8.0
+ # - python-calamine>=0.1.6
+ - pyxlsb>=1.0.10
+ - s3fs>=2022.11.0
+ - scipy>=1.10.0
+ - sqlalchemy>=2.0.0
+ - tabulate>=0.9.0
+ - xarray>=2022.12.0
+ - xlrd>=2.0.1
+ - xlsxwriter>=3.0.5
+ - zstandard>=0.19.0
+
+ - pip:
+ - adbc-driver-postgresql>=0.8.0
+ - adbc-driver-sqlite>=0.8.0
+ - tzdata>=2022.7
diff --git a/pandas/tests/computation/test_eval.py b/pandas/tests/computation/test_eval.py
index 45215cd3b5e96..304fe824682f9 100644
--- a/pandas/tests/computation/test_eval.py
+++ b/pandas/tests/computation/test_eval.py
@@ -542,6 +542,9 @@ def test_series_pos(self, lhs, engine, parser):
def test_scalar_unary(self, engine, parser):
msg = "bad operand type for unary ~: 'float'"
+ warn = None
+ if PY312 and not (engine == "numexpr" and parser == "pandas"):
+ warn = DeprecationWarning
with pytest.raises(TypeError, match=msg):
pd.eval("~1.0", engine=engine, parser=parser)
@@ -550,8 +553,14 @@ def test_scalar_unary(self, engine, parser):
assert pd.eval("~1", parser=parser, engine=engine) == ~1
assert pd.eval("-1", parser=parser, engine=engine) == -1
assert pd.eval("+1", parser=parser, engine=engine) == +1
- assert pd.eval("~True", parser=parser, engine=engine) == ~True
- assert pd.eval("~False", parser=parser, engine=engine) == ~False
+ with tm.assert_produces_warning(
+ warn, match="Bitwise inversion", check_stacklevel=False
+ ):
+ assert pd.eval("~True", parser=parser, engine=engine) == ~True
+ with tm.assert_produces_warning(
+ warn, match="Bitwise inversion", check_stacklevel=False
+ ):
+ assert pd.eval("~False", parser=parser, engine=engine) == ~False
assert pd.eval("-True", parser=parser, engine=engine) == -True
assert pd.eval("-False", parser=parser, engine=engine) == -False
assert pd.eval("+True", parser=parser, engine=engine) == +True
diff --git a/pandas/tests/extension/test_arrow.py b/pandas/tests/extension/test_arrow.py
index 8298d39a5eca9..7131a50956a7d 100644
--- a/pandas/tests/extension/test_arrow.py
+++ b/pandas/tests/extension/test_arrow.py
@@ -34,6 +34,7 @@
from pandas._libs.tslibs import timezones
from pandas.compat import (
PY311,
+ PY312,
is_ci_environment,
is_platform_windows,
pa_version_under11p0,
@@ -716,7 +717,13 @@ def test_invert(self, data, request):
reason=f"pyarrow.compute.invert does support {pa_dtype}",
)
)
- super().test_invert(data)
+ if PY312 and pa.types.is_boolean(pa_dtype):
+ with tm.assert_produces_warning(
+ DeprecationWarning, match="Bitwise inversion", check_stacklevel=False
+ ):
+ super().test_invert(data)
+ else:
+ super().test_invert(data)
@pytest.mark.parametrize("periods", [1, -2])
def test_diff(self, data, periods, request):
diff --git a/pandas/tests/io/excel/test_readers.py b/pandas/tests/io/excel/test_readers.py
index caa2da1b6123b..98d82f10375b4 100644
--- a/pandas/tests/io/excel/test_readers.py
+++ b/pandas/tests/io/excel/test_readers.py
@@ -1442,7 +1442,9 @@ def test_deprecate_bytes_input(self, engine, read_ext):
"byte string, wrap it in a `BytesIO` object."
)
- with tm.assert_produces_warning(FutureWarning, match=msg):
+ with tm.assert_produces_warning(
+ FutureWarning, match=msg, raise_on_extra_warnings=False
+ ):
with open("test1" + read_ext, "rb") as f:
pd.read_excel(f.read(), engine=engine)
diff --git a/pandas/tests/io/excel/test_writers.py b/pandas/tests/io/excel/test_writers.py
index 9aeac58de50bb..6576c98042333 100644
--- a/pandas/tests/io/excel/test_writers.py
+++ b/pandas/tests/io/excel/test_writers.py
@@ -287,7 +287,9 @@ def test_read_excel_parse_dates(self, ext):
date_parser = lambda x: datetime.strptime(x, "%m/%d/%Y")
with tm.assert_produces_warning(
- FutureWarning, match="use 'date_format' instead"
+ FutureWarning,
+ match="use 'date_format' instead",
+ raise_on_extra_warnings=False,
):
res = pd.read_excel(
pth,
| Some CoW warnings broke for 3.11, so we should test at least CoW for 3.12 as well | https://api.github.com/repos/pandas-dev/pandas/pulls/56174 | 2023-11-26T00:08:56Z | 2023-11-30T14:38:02Z | 2023-11-30T14:38:02Z | 2023-12-04T23:29:19Z |
CI: Add CoW builds and fix inplace warnings | diff --git a/.github/workflows/unit-tests.yml b/.github/workflows/unit-tests.yml
index 33b6e7a8c2340..4c38324280528 100644
--- a/.github/workflows/unit-tests.yml
+++ b/.github/workflows/unit-tests.yml
@@ -73,6 +73,14 @@ jobs:
env_file: actions-311.yaml
pattern: "not slow and not network and not single_cpu"
pandas_copy_on_write: "warn"
+ - name: "Copy-on-Write 3.10 (warnings)"
+ env_file: actions-310.yaml
+ pattern: "not slow and not network and not single_cpu"
+ pandas_copy_on_write: "warn"
+ - name: "Copy-on-Write 3.9 (warnings)"
+ env_file: actions-39.yaml
+ pattern: "not slow and not network and not single_cpu"
+ pandas_copy_on_write: "warn"
- name: "Pypy"
env_file: actions-pypy-39.yaml
pattern: "not slow and not network and not single_cpu"
diff --git a/pandas/core/generic.py b/pandas/core/generic.py
index e4e95a973a3c1..c832d9ca257f9 100644
--- a/pandas/core/generic.py
+++ b/pandas/core/generic.py
@@ -12450,7 +12450,7 @@ def _inplace_method(self, other, op) -> Self:
"""
warn = True
if not PYPY and warn_copy_on_write():
- if sys.getrefcount(self) <= 4:
+ if sys.getrefcount(self) <= REF_COUNT + 2:
# we are probably in an inplace setitem context (e.g. df['a'] += 1)
warn = False
| related to #56172, want to check that the other pr fails and this fixes the warnings | https://api.github.com/repos/pandas-dev/pandas/pulls/56173 | 2023-11-26T00:05:53Z | 2023-11-27T11:26:40Z | 2023-11-27T11:26:40Z | 2023-11-27T11:28:50Z |
CoW: Fix warnings for eval | diff --git a/pandas/core/computation/eval.py b/pandas/core/computation/eval.py
index 0c99e5e7bdc54..f1fe528de06f8 100644
--- a/pandas/core/computation/eval.py
+++ b/pandas/core/computation/eval.py
@@ -388,17 +388,10 @@ def eval(
# we will ignore numpy warnings here; e.g. if trying
# to use a non-numeric indexer
try:
- with warnings.catch_warnings(record=True):
- warnings.filterwarnings(
- "always", "Setting a value on a view", FutureWarning
- )
- # TODO: Filter the warnings we actually care about here.
- if inplace and isinstance(target, NDFrame):
- target.loc[:, assigner] = ret
- else:
- target[ # pyright: ignore[reportGeneralTypeIssues]
- assigner
- ] = ret
+ if inplace and isinstance(target, NDFrame):
+ target.loc[:, assigner] = ret
+ else:
+ target[assigner] = ret # pyright: ignore[reportGeneralTypeIssues]
except (TypeError, IndexError) as err:
raise ValueError("Cannot assign expression output to target") from err
diff --git a/pandas/core/generic.py b/pandas/core/generic.py
index 376354aedea63..eaae515c4d7d5 100644
--- a/pandas/core/generic.py
+++ b/pandas/core/generic.py
@@ -650,12 +650,17 @@ def _get_cleaned_column_resolvers(self) -> dict[Hashable, Series]:
Used in :meth:`DataFrame.eval`.
"""
from pandas.core.computation.parsing import clean_column_name
+ from pandas.core.series import Series
if isinstance(self, ABCSeries):
return {clean_column_name(self.name): self}
return {
- clean_column_name(k): v for k, v in self.items() if not isinstance(k, int)
+ clean_column_name(k): Series(
+ v, copy=False, index=self.index, name=k
+ ).__finalize__(self)
+ for k, v in zip(self.columns, self._iter_column_arrays())
+ if not isinstance(k, int)
}
@final
diff --git a/pandas/tests/computation/test_eval.py b/pandas/tests/computation/test_eval.py
index 304fe824682f9..75473b8c50f4e 100644
--- a/pandas/tests/computation/test_eval.py
+++ b/pandas/tests/computation/test_eval.py
@@ -1173,9 +1173,7 @@ def test_assignment_single_assign_new(self):
df.eval("c = a + b", inplace=True)
tm.assert_frame_equal(df, expected)
- # TODO(CoW-warn) this should not warn (DataFrame.eval creates refs to self)
- @pytest.mark.filterwarnings("ignore:Setting a value on a view:FutureWarning")
- def test_assignment_single_assign_local_overlap(self, warn_copy_on_write):
+ def test_assignment_single_assign_local_overlap(self):
df = DataFrame(
np.random.default_rng(2).standard_normal((5, 2)), columns=list("ab")
)
@@ -1229,8 +1227,6 @@ def test_column_in(self):
tm.assert_series_equal(result, expected, check_names=False)
@pytest.mark.xfail(reason="Unknown: Omitted test_ in name prior.")
- # TODO(CoW-warn) this should not warn (DataFrame.eval creates refs to self)
- @pytest.mark.filterwarnings("ignore:Setting a value on a view:FutureWarning")
def test_assignment_not_inplace(self):
# see gh-9297
df = DataFrame(
@@ -1244,7 +1240,7 @@ def test_assignment_not_inplace(self):
expected["c"] = expected["a"] + expected["b"]
tm.assert_frame_equal(df, expected)
- def test_multi_line_expression(self):
+ def test_multi_line_expression(self, warn_copy_on_write):
# GH 11149
df = DataFrame({"a": [1, 2, 3], "b": [4, 5, 6]})
expected = df.copy()
@@ -1917,8 +1913,8 @@ def test_set_inplace(using_copy_on_write, warn_copy_on_write):
df = DataFrame({"A": [1, 2, 3], "B": [4, 5, 6], "C": [7, 8, 9]})
result_view = df[:]
ser = df["A"]
- # with tm.assert_cow_warning(warn_copy_on_write):
- df.eval("A = B + C", inplace=True)
+ with tm.assert_cow_warning(warn_copy_on_write):
+ df.eval("A = B + C", inplace=True)
expected = DataFrame({"A": [11, 13, 15], "B": [4, 5, 6], "C": [7, 8, 9]})
tm.assert_frame_equal(df, expected)
if not using_copy_on_write:
| This is an option if eval can't do inplace on numpy under the hood
xref https://github.com/pandas-dev/pandas/issues/56019 | https://api.github.com/repos/pandas-dev/pandas/pulls/56170 | 2023-11-25T23:19:06Z | 2023-12-04T10:55:57Z | 2023-12-04T10:55:57Z | 2023-12-04T11:04:53Z |
CoW: Remove todos that aren't necessary | diff --git a/pandas/tests/apply/test_invalid_arg.py b/pandas/tests/apply/test_invalid_arg.py
index 9f5157181843e..53de64c72674b 100644
--- a/pandas/tests/apply/test_invalid_arg.py
+++ b/pandas/tests/apply/test_invalid_arg.py
@@ -152,7 +152,7 @@ def test_transform_axis_1_raises():
# TODO(CoW-warn) should not need to warn
@pytest.mark.filterwarnings("ignore:Setting a value on a view:FutureWarning")
-def test_apply_modify_traceback():
+def test_apply_modify_traceback(warn_copy_on_write):
data = DataFrame(
{
"A": [
@@ -214,7 +214,8 @@ def transform2(row):
msg = "'float' object has no attribute 'startswith'"
with pytest.raises(AttributeError, match=msg):
- data.apply(transform, axis=1)
+ with tm.assert_cow_warning(warn_copy_on_write):
+ data.apply(transform, axis=1)
@pytest.mark.parametrize(
diff --git a/pandas/tests/copy_view/test_constructors.py b/pandas/tests/copy_view/test_constructors.py
index 89384a4ef6ba6..7d5c485958039 100644
--- a/pandas/tests/copy_view/test_constructors.py
+++ b/pandas/tests/copy_view/test_constructors.py
@@ -297,7 +297,6 @@ def test_dataframe_from_series_or_index(
if using_copy_on_write:
assert not df._mgr._has_no_reference(0)
- # TODO(CoW-warn) should not warn for an index?
with tm.assert_cow_warning(warn_copy_on_write):
df.iloc[0, 0] = data[-1]
if using_copy_on_write:
diff --git a/pandas/tests/copy_view/test_indexing.py b/pandas/tests/copy_view/test_indexing.py
index 2e623f885b648..72b7aea3709c0 100644
--- a/pandas/tests/copy_view/test_indexing.py
+++ b/pandas/tests/copy_view/test_indexing.py
@@ -913,7 +913,6 @@ def test_del_frame(backend, using_copy_on_write, warn_copy_on_write):
tm.assert_frame_equal(df2, df_orig[["a", "c"]])
df2._mgr._verify_integrity()
- # TODO(CoW-warn) false positive, this should not warn?
with tm.assert_cow_warning(warn_copy_on_write and dtype_backend == "numpy"):
df.loc[0, "b"] = 200
assert np.shares_memory(get_array(df, "a"), get_array(df2, "a"))
| xref https://github.com/pandas-dev/pandas/issues/56019
This should warn: ``row`` is a Series, and the setitem on it is inplace.
The Index case is a bug at the moment: creating the df is zero-copy, so updating the df afterwards propagates to the index; this will be fixed with CoW.
In ``test_del_frame``, the parent is a single block, and ``df2`` is still viewing that same block even though we deleted column "b" | https://api.github.com/repos/pandas-dev/pandas/pulls/56169 | 2023-11-25T22:41:51Z | 2023-11-27T22:45:46Z | 2023-11-27T22:45:46Z | 2023-11-27T22:45:49Z
CoW: Warn for cases that go through putmask | diff --git a/pandas/core/generic.py b/pandas/core/generic.py
index ced930b936ba5..87c0df7164967 100644
--- a/pandas/core/generic.py
+++ b/pandas/core/generic.py
@@ -10496,6 +10496,7 @@ def _where(
inplace: bool_t = False,
axis: Axis | None = None,
level=None,
+ warn: bool_t = True,
):
"""
Equivalent to public method `where`, except that `other` is not
@@ -10626,7 +10627,7 @@ def _where(
# we may have different type blocks come out of putmask, so
# reconstruct the block manager
- new_data = self._mgr.putmask(mask=cond, new=other, align=align)
+ new_data = self._mgr.putmask(mask=cond, new=other, align=align, warn=warn)
result = self._constructor_from_mgr(new_data, axes=new_data.axes)
return self._update_inplace(result)
diff --git a/pandas/core/internals/base.py b/pandas/core/internals/base.py
index fe366c7375c6a..664856b828347 100644
--- a/pandas/core/internals/base.py
+++ b/pandas/core/internals/base.py
@@ -14,7 +14,10 @@
import numpy as np
-from pandas._config import using_copy_on_write
+from pandas._config import (
+ using_copy_on_write,
+ warn_copy_on_write,
+)
from pandas._libs import (
algos as libalgos,
@@ -49,6 +52,16 @@
)
+class _AlreadyWarned:
+ def __init__(self):
+ # This class is used on the manager level to the block level to
+ # ensure that we warn only once. The block method can update the
+ # warned_already option without returning a value to keep the
+ # interface consistent. This is only a temporary solution for
+ # CoW warnings.
+ self.warned_already = False
+
+
class DataManager(PandasObject):
# TODO share more methods/attributes
@@ -196,19 +209,26 @@ def where(self, other, cond, align: bool) -> Self:
)
@final
- def putmask(self, mask, new, align: bool = True) -> Self:
+ def putmask(self, mask, new, align: bool = True, warn: bool = True) -> Self:
if align:
align_keys = ["new", "mask"]
else:
align_keys = ["mask"]
new = extract_array(new, extract_numpy=True)
+ already_warned = None
+ if warn_copy_on_write():
+ already_warned = _AlreadyWarned()
+ if not warn:
+ already_warned.warned_already = True
+
return self.apply_with_block(
"putmask",
align_keys=align_keys,
mask=mask,
new=new,
using_cow=using_copy_on_write(),
+ already_warned=already_warned,
)
@final
diff --git a/pandas/core/internals/blocks.py b/pandas/core/internals/blocks.py
index 535d18f99f0ef..a06d266870edc 100644
--- a/pandas/core/internals/blocks.py
+++ b/pandas/core/internals/blocks.py
@@ -18,6 +18,7 @@
from pandas._config import (
get_option,
using_copy_on_write,
+ warn_copy_on_write,
)
from pandas._libs import (
@@ -136,6 +137,29 @@
_dtype_obj = np.dtype("object")
+COW_WARNING_GENERAL_MSG = """\
+Setting a value on a view: behaviour will change in pandas 3.0.
+You are mutating a Series or DataFrame object, and currently this mutation will
+also have effect on other Series or DataFrame objects that share data with this
+object. In pandas 3.0 (with Copy-on-Write), updating one Series or DataFrame object
+will never modify another.
+"""
+
+
+COW_WARNING_SETITEM_MSG = """\
+Setting a value on a view: behaviour will change in pandas 3.0.
+Currently, the mutation will also have effect on the object that shares data
+with this object. For example, when setting a value in a Series that was
+extracted from a column of a DataFrame, that DataFrame will also be updated:
+
+ ser = df["col"]
+ ser[0] = 0 <--- in pandas 2, this also updates `df`
+
+In pandas 3.0 (with Copy-on-Write), updating one Series/DataFrame will never
+modify another, and thus in the example above, `df` will not be changed.
+"""
+
+
def maybe_split(meth: F) -> F:
"""
If we have a multi-column block, split and operate block-wise. Otherwise
@@ -1355,7 +1379,9 @@ def setitem(self, indexer, value, using_cow: bool = False) -> Block:
values[indexer] = casted
return self
- def putmask(self, mask, new, using_cow: bool = False) -> list[Block]:
+ def putmask(
+ self, mask, new, using_cow: bool = False, already_warned=None
+ ) -> list[Block]:
"""
putmask the data to the block; it is possible that we may create a
new dtype of block
@@ -1388,6 +1414,19 @@ def putmask(self, mask, new, using_cow: bool = False) -> list[Block]:
return [self.copy(deep=False)]
return [self]
+ if (
+ warn_copy_on_write()
+ and already_warned is not None
+ and not already_warned.warned_already
+ ):
+ if self.refs.has_reference():
+ warnings.warn(
+ COW_WARNING_GENERAL_MSG,
+ FutureWarning,
+ stacklevel=find_stack_level(),
+ )
+ already_warned.warned_already = True
+
try:
casted = np_can_hold_element(values.dtype, new)
@@ -2020,7 +2059,9 @@ def where(
return [nb]
@final
- def putmask(self, mask, new, using_cow: bool = False) -> list[Block]:
+ def putmask(
+ self, mask, new, using_cow: bool = False, already_warned=None
+ ) -> list[Block]:
"""
See Block.putmask.__doc__
"""
@@ -2038,6 +2079,19 @@ def putmask(self, mask, new, using_cow: bool = False) -> list[Block]:
return [self.copy(deep=False)]
return [self]
+ if (
+ warn_copy_on_write()
+ and already_warned is not None
+ and not already_warned.warned_already
+ ):
+ if self.refs.has_reference():
+ warnings.warn(
+ COW_WARNING_GENERAL_MSG,
+ FutureWarning,
+ stacklevel=find_stack_level(),
+ )
+ already_warned.warned_already = True
+
self = self._maybe_copy(using_cow, inplace=True)
values = self.values
if values.ndim == 2:
diff --git a/pandas/core/internals/managers.py b/pandas/core/internals/managers.py
index c11150eb4c4d7..a02f31d4483b2 100644
--- a/pandas/core/internals/managers.py
+++ b/pandas/core/internals/managers.py
@@ -72,6 +72,8 @@
interleaved_dtype,
)
from pandas.core.internals.blocks import (
+ COW_WARNING_GENERAL_MSG,
+ COW_WARNING_SETITEM_MSG,
Block,
NumpyBlock,
ensure_block_shape,
@@ -100,29 +102,6 @@
from pandas.api.extensions import ExtensionArray
-COW_WARNING_GENERAL_MSG = """\
-Setting a value on a view: behaviour will change in pandas 3.0.
-You are mutating a Series or DataFrame object, and currently this mutation will
-also have effect on other Series or DataFrame objects that share data with this
-object. In pandas 3.0 (with Copy-on-Write), updating one Series or DataFrame object
-will never modify another.
-"""
-
-
-COW_WARNING_SETITEM_MSG = """\
-Setting a value on a view: behaviour will change in pandas 3.0.
-Currently, the mutation will also have effect on the object that shares data
-with this object. For example, when setting a value in a Series that was
-extracted from a column of a DataFrame, that DataFrame will also be updated:
-
- ser = df["col"]
- ser[0] = 0 <--- in pandas 2, this also updates `df`
-
-In pandas 3.0 (with Copy-on-Write), updating one Series/DataFrame will never
-modify another, and thus in the example above, `df` will not be changed.
-"""
-
-
class BaseBlockManager(DataManager):
"""
Core internal data structure to implement DataFrame, Series, etc.
diff --git a/pandas/core/series.py b/pandas/core/series.py
index 1b6fa912b7dc3..b060645d735c6 100644
--- a/pandas/core/series.py
+++ b/pandas/core/series.py
@@ -1320,7 +1320,7 @@ def __setitem__(self, key, value) -> None:
# otherwise with listlike other we interpret series[mask] = other
# as series[mask] = other[mask]
try:
- self._where(~key, value, inplace=True)
+ self._where(~key, value, inplace=True, warn=warn)
except InvalidIndexError:
# test_where_dups
self.iloc[key] = value
diff --git a/pandas/tests/copy_view/test_chained_assignment_deprecation.py b/pandas/tests/copy_view/test_chained_assignment_deprecation.py
index 7b08d9b80fc9b..5c16c7e18b89f 100644
--- a/pandas/tests/copy_view/test_chained_assignment_deprecation.py
+++ b/pandas/tests/copy_view/test_chained_assignment_deprecation.py
@@ -70,7 +70,7 @@ def test_methods_iloc_getitem_item_cache(func, args, using_copy_on_write):
@pytest.mark.parametrize(
"indexer", [0, [0, 1], slice(0, 2), np.array([True, False, True])]
)
-def test_series_setitem(indexer, using_copy_on_write):
+def test_series_setitem(indexer, using_copy_on_write, warn_copy_on_write):
# ensure we only get a single warning for those typical cases of chained
# assignment
df = DataFrame({"a": [1, 2, 3], "b": 1})
diff --git a/pandas/tests/copy_view/test_clip.py b/pandas/tests/copy_view/test_clip.py
index 13ddd479c7c67..7ed6a1f803ead 100644
--- a/pandas/tests/copy_view/test_clip.py
+++ b/pandas/tests/copy_view/test_clip.py
@@ -8,12 +8,16 @@
from pandas.tests.copy_view.util import get_array
-def test_clip_inplace_reference(using_copy_on_write):
+def test_clip_inplace_reference(using_copy_on_write, warn_copy_on_write):
df = DataFrame({"a": [1.5, 2, 3]})
df_copy = df.copy()
arr_a = get_array(df, "a")
view = df[:]
- df.clip(lower=2, inplace=True)
+ if warn_copy_on_write:
+ with tm.assert_cow_warning():
+ df.clip(lower=2, inplace=True)
+ else:
+ df.clip(lower=2, inplace=True)
if using_copy_on_write:
assert not np.shares_memory(get_array(df, "a"), arr_a)
diff --git a/pandas/tests/copy_view/test_indexing.py b/pandas/tests/copy_view/test_indexing.py
index 72b7aea3709c0..355eb2db0ef09 100644
--- a/pandas/tests/copy_view/test_indexing.py
+++ b/pandas/tests/copy_view/test_indexing.py
@@ -367,10 +367,11 @@ def test_subset_set_with_mask(backend, using_copy_on_write, warn_copy_on_write):
mask = subset > 3
- # TODO(CoW-warn) should warn -> mask is a DataFrame, which ends up going through
- # DataFrame._where(..., inplace=True)
- if using_copy_on_write or warn_copy_on_write:
+ if using_copy_on_write:
subset[mask] = 0
+ elif warn_copy_on_write:
+ with tm.assert_cow_warning():
+ subset[mask] = 0
else:
with pd.option_context("chained_assignment", "warn"):
with tm.assert_produces_warning(SettingWithCopyWarning):
@@ -867,18 +868,8 @@ def test_series_subset_set_with_indexer(
and indexer.dtype.kind == "i"
):
warn = FutureWarning
- is_mask = (
- indexer_si is tm.setitem
- and isinstance(indexer, np.ndarray)
- and indexer.dtype.kind == "b"
- )
if warn_copy_on_write:
- # TODO(CoW-warn) should also warn for setting with mask
- # -> Series.__setitem__ with boolean mask ends up using Series._set_values
- # or Series._where depending on value being set
- with tm.assert_cow_warning(
- not is_mask, raise_on_extra_warnings=warn is not None
- ):
+ with tm.assert_cow_warning(raise_on_extra_warnings=warn is not None):
indexer_si(subset)[indexer] = 0
else:
with tm.assert_produces_warning(warn, match=msg):
diff --git a/pandas/tests/copy_view/test_methods.py b/pandas/tests/copy_view/test_methods.py
index 806842dcab57a..ba8e4bd684198 100644
--- a/pandas/tests/copy_view/test_methods.py
+++ b/pandas/tests/copy_view/test_methods.py
@@ -1407,11 +1407,12 @@ def test_items(using_copy_on_write, warn_copy_on_write):
@pytest.mark.parametrize("dtype", ["int64", "Int64"])
-def test_putmask(using_copy_on_write, dtype):
+def test_putmask(using_copy_on_write, dtype, warn_copy_on_write):
df = DataFrame({"a": [1, 2], "b": 1, "c": 2}, dtype=dtype)
view = df[:]
df_orig = df.copy()
- df[df == df] = 5
+ with tm.assert_cow_warning(warn_copy_on_write):
+ df[df == df] = 5
if using_copy_on_write:
assert not np.shares_memory(get_array(view, "a"), get_array(df, "a"))
@@ -1445,15 +1446,21 @@ def test_putmask_aligns_rhs_no_reference(using_copy_on_write, dtype):
@pytest.mark.parametrize(
"val, exp, warn", [(5.5, True, FutureWarning), (5, False, None)]
)
-def test_putmask_dont_copy_some_blocks(using_copy_on_write, val, exp, warn):
+def test_putmask_dont_copy_some_blocks(
+ using_copy_on_write, val, exp, warn, warn_copy_on_write
+):
df = DataFrame({"a": [1, 2], "b": 1, "c": 1.5})
view = df[:]
df_orig = df.copy()
indexer = DataFrame(
[[True, False, False], [True, False, False]], columns=list("abc")
)
- with tm.assert_produces_warning(warn, match="incompatible dtype"):
- df[indexer] = val
+ if warn_copy_on_write:
+ with tm.assert_cow_warning():
+ df[indexer] = val
+ else:
+ with tm.assert_produces_warning(warn, match="incompatible dtype"):
+ df[indexer] = val
if using_copy_on_write:
assert not np.shares_memory(get_array(view, "a"), get_array(df, "a"))
@@ -1796,13 +1803,17 @@ def test_update_frame(using_copy_on_write, warn_copy_on_write):
tm.assert_frame_equal(view, expected)
-def test_update_series(using_copy_on_write):
+def test_update_series(using_copy_on_write, warn_copy_on_write):
ser1 = Series([1.0, 2.0, 3.0])
ser2 = Series([100.0], index=[1])
ser1_orig = ser1.copy()
view = ser1[:]
- ser1.update(ser2)
+ if warn_copy_on_write:
+ with tm.assert_cow_warning():
+ ser1.update(ser2)
+ else:
+ ser1.update(ser2)
expected = Series([1.0, 100.0, 3.0])
tm.assert_series_equal(ser1, expected)
diff --git a/pandas/tests/copy_view/test_replace.py b/pandas/tests/copy_view/test_replace.py
index d11a2893becdc..3d8559a1905fc 100644
--- a/pandas/tests/copy_view/test_replace.py
+++ b/pandas/tests/copy_view/test_replace.py
@@ -279,14 +279,18 @@ def test_replace_categorical(using_copy_on_write, val):
@pytest.mark.parametrize("method", ["where", "mask"])
-def test_masking_inplace(using_copy_on_write, method):
+def test_masking_inplace(using_copy_on_write, method, warn_copy_on_write):
df = DataFrame({"a": [1.5, 2, 3]})
df_orig = df.copy()
arr_a = get_array(df, "a")
view = df[:]
method = getattr(df, method)
- method(df["a"] > 1.6, -1, inplace=True)
+ if warn_copy_on_write:
+ with tm.assert_cow_warning():
+ method(df["a"] > 1.6, -1, inplace=True)
+ else:
+ method(df["a"] > 1.6, -1, inplace=True)
if using_copy_on_write:
assert not np.shares_memory(get_array(df, "a"), arr_a)
diff --git a/pandas/tests/frame/methods/test_replace.py b/pandas/tests/frame/methods/test_replace.py
index 4b32d3de59ca2..13e2c1a249ac2 100644
--- a/pandas/tests/frame/methods/test_replace.py
+++ b/pandas/tests/frame/methods/test_replace.py
@@ -729,11 +729,7 @@ def test_replace_for_new_dtypes(self, datetime_frame):
tsframe.loc[tsframe.index[:5], "A"] = np.nan
tsframe.loc[tsframe.index[-5:], "A"] = np.nan
- tsframe.loc[tsframe.index[:5], "B"] = -1e8
-
- b = tsframe["B"]
- b[b == -1e8] = np.nan
- tsframe["B"] = b
+ tsframe.loc[tsframe.index[:5], "B"] = np.nan
msg = "DataFrame.fillna with 'method' is deprecated"
with tm.assert_produces_warning(FutureWarning, match=msg):
# TODO: what is this even testing?
| - [ ] closes #xxxx (Replace xxxx with the GitHub issue number)
- [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
Open to suggestions on how we can improve the putmask warning mechanism to avoid duplicate warnings
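As background (my illustration, not from the diff), these are the NumPy ``putmask`` semantics that ``Block.putmask`` ultimately wraps — the write is in place, which is why any view sharing the buffer is affected and the CoW warning must fire before the block mutates:

```python
import numpy as np

arr = np.array([1, 2, 3, 4])
mask = np.array([True, False, True, False])

# np.putmask writes into `arr` in place wherever `mask` is True,
# taking values by flat position (and cycling if `values` is shorter)
np.putmask(arr, mask, np.array([10, 20, 30, 40]))
print(arr.tolist())  # [10, 2, 30, 4]
```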
xref https://github.com/pandas-dev/pandas/issues/56019 | https://api.github.com/repos/pandas-dev/pandas/pulls/56168 | 2023-11-25T22:32:57Z | 2023-12-04T10:08:26Z | 2023-12-04T10:08:26Z | 2023-12-04T11:05:56Z |
[ENH]: Expand types allowed in Series.struct.field | diff --git a/doc/source/whatsnew/v2.2.0.rst b/doc/source/whatsnew/v2.2.0.rst
index d1481639ca5a0..39361c3505e61 100644
--- a/doc/source/whatsnew/v2.2.0.rst
+++ b/doc/source/whatsnew/v2.2.0.rst
@@ -251,6 +251,14 @@ DataFrame. (:issue:`54938`)
)
series.struct.explode()
+Use :meth:`Series.struct.field` to index into a (possible nested)
+struct field.
+
+
+.. ipython:: python
+
+ series.struct.field("project")
+
.. _whatsnew_220.enhancements.list_accessor:
Series.list accessor for PyArrow list data
diff --git a/pandas/core/arrays/arrow/accessors.py b/pandas/core/arrays/arrow/accessors.py
index 7f88267943526..124f8fb6ad8bc 100644
--- a/pandas/core/arrays/arrow/accessors.py
+++ b/pandas/core/arrays/arrow/accessors.py
@@ -6,13 +6,18 @@
ABCMeta,
abstractmethod,
)
-from typing import TYPE_CHECKING
+from typing import (
+ TYPE_CHECKING,
+ cast,
+)
from pandas.compat import (
pa_version_under10p1,
pa_version_under11p0,
)
+from pandas.core.dtypes.common import is_list_like
+
if not pa_version_under10p1:
import pyarrow as pa
import pyarrow.compute as pc
@@ -267,15 +272,27 @@ def dtypes(self) -> Series:
names = [struct.name for struct in pa_type]
return Series(types, index=Index(names))
- def field(self, name_or_index: str | int) -> Series:
+ def field(
+ self,
+ name_or_index: list[str]
+ | list[bytes]
+ | list[int]
+ | pc.Expression
+ | bytes
+ | str
+ | int,
+ ) -> Series:
"""
Extract a child field of a struct as a Series.
Parameters
----------
- name_or_index : str | int
+ name_or_index : str | bytes | int | expression | list
Name or index of the child field to extract.
+ For list-like inputs, this will index into a nested
+ struct.
+
Returns
-------
pandas.Series
@@ -285,6 +302,19 @@ def field(self, name_or_index: str | int) -> Series:
--------
Series.struct.explode : Return all child fields as a DataFrame.
+ Notes
+ -----
+ The name of the resulting Series will be set using the following
+ rules:
+
+ - For string, bytes, or integer `name_or_index` (or a list of these, for
+ a nested selection), the Series name is set to the selected
+ field's name.
+ - For a :class:`pyarrow.compute.Expression`, this is set to
+ the string form of the expression.
+ - For list-like `name_or_index`, the name will be set to the
+ name of the final field selected.
+
Examples
--------
>>> import pyarrow as pa
@@ -314,27 +344,92 @@ def field(self, name_or_index: str | int) -> Series:
1 2
2 1
Name: version, dtype: int64[pyarrow]
+
+ Or an expression
+
+ >>> import pyarrow.compute as pc
+ >>> s.struct.field(pc.field("project"))
+ 0 pandas
+ 1 pandas
+ 2 numpy
+ Name: project, dtype: string[pyarrow]
+
+ For nested struct types, you can pass a list of values to index
+ multiple levels:
+
+ >>> version_type = pa.struct([
+ ... ("major", pa.int64()),
+ ... ("minor", pa.int64()),
+ ... ])
+ >>> s = pd.Series(
+ ... [
+ ... {"version": {"major": 1, "minor": 5}, "project": "pandas"},
+ ... {"version": {"major": 2, "minor": 1}, "project": "pandas"},
+ ... {"version": {"major": 1, "minor": 26}, "project": "numpy"},
+ ... ],
+ ... dtype=pd.ArrowDtype(pa.struct(
+ ... [("version", version_type), ("project", pa.string())]
+ ... ))
+ ... )
+ >>> s.struct.field(["version", "minor"])
+ 0 5
+ 1 1
+ 2 26
+ Name: minor, dtype: int64[pyarrow]
+ >>> s.struct.field([0, 0])
+ 0 1
+ 1 2
+ 2 1
+ Name: major, dtype: int64[pyarrow]
"""
from pandas import Series
+ def get_name(
+ level_name_or_index: list[str]
+ | list[bytes]
+ | list[int]
+ | pc.Expression
+ | bytes
+ | str
+ | int,
+ data: pa.ChunkedArray,
+ ):
+ if isinstance(level_name_or_index, int):
+ name = data.type.field(level_name_or_index).name
+ elif isinstance(level_name_or_index, (str, bytes)):
+ name = level_name_or_index
+ elif isinstance(level_name_or_index, pc.Expression):
+ name = str(level_name_or_index)
+ elif is_list_like(level_name_or_index):
+ # For nested input like [2, 1, 2]
+ # iteratively get the struct and field name. The last
+ # one is used for the name of the index.
+ level_name_or_index = list(reversed(level_name_or_index))
+ selected = data
+ while level_name_or_index:
+ # we need the cast, otherwise mypy complains about
+ # getting ints, bytes, or str here, which isn't possible.
+ level_name_or_index = cast(list, level_name_or_index)
+ name_or_index = level_name_or_index.pop()
+ name = get_name(name_or_index, selected)
+ selected = selected.type.field(selected.type.get_field_index(name))
+ name = selected.name
+ else:
+ raise ValueError(
+ "name_or_index must be an int, str, bytes, "
+ "pyarrow.compute.Expression, or list of those"
+ )
+ return name
+
pa_arr = self._data.array._pa_array
- if isinstance(name_or_index, int):
- index = name_or_index
- elif isinstance(name_or_index, str):
- index = pa_arr.type.get_field_index(name_or_index)
- else:
- raise ValueError(
- "name_or_index must be an int or str, "
- f"got {type(name_or_index).__name__}"
- )
+ name = get_name(name_or_index, pa_arr)
+ field_arr = pc.struct_field(pa_arr, name_or_index)
- pa_field = pa_arr.type[index]
- field_arr = pc.struct_field(pa_arr, [index])
return Series(
field_arr,
dtype=ArrowDtype(field_arr.type),
index=self._data.index,
- name=pa_field.name,
+ name=name,
)
def explode(self) -> DataFrame:
diff --git a/pandas/tests/series/accessors/test_struct_accessor.py b/pandas/tests/series/accessors/test_struct_accessor.py
index 1ec5b3b726d17..80aea75fda406 100644
--- a/pandas/tests/series/accessors/test_struct_accessor.py
+++ b/pandas/tests/series/accessors/test_struct_accessor.py
@@ -2,6 +2,11 @@
import pytest
+from pandas.compat.pyarrow import (
+ pa_version_under11p0,
+ pa_version_under13p0,
+)
+
from pandas import (
ArrowDtype,
DataFrame,
@@ -11,6 +16,7 @@
import pandas._testing as tm
pa = pytest.importorskip("pyarrow")
+pc = pytest.importorskip("pyarrow.compute")
def test_struct_accessor_dtypes():
@@ -53,6 +59,7 @@ def test_struct_accessor_dtypes():
tm.assert_series_equal(actual, expected)
+@pytest.mark.skipif(pa_version_under13p0, reason="pyarrow>=13.0.0 required")
def test_struct_accessor_field():
index = Index([-100, 42, 123])
ser = Series(
@@ -94,10 +101,11 @@ def test_struct_accessor_field():
def test_struct_accessor_field_with_invalid_name_or_index():
ser = Series([], dtype=ArrowDtype(pa.struct([("field", pa.int64())])))
- with pytest.raises(ValueError, match="name_or_index must be an int or str"):
+ with pytest.raises(ValueError, match="name_or_index must be an int, str,"):
ser.struct.field(1.1)
+@pytest.mark.skipif(pa_version_under11p0, reason="pyarrow>=11.0.0 required")
def test_struct_accessor_explode():
index = Index([-100, 42, 123])
ser = Series(
@@ -148,3 +156,41 @@ def test_struct_accessor_api_for_invalid(invalid):
),
):
invalid.struct
+
+
+@pytest.mark.parametrize(
+ ["indices", "name"],
+ [
+ (0, "int_col"),
+ ([1, 2], "str_col"),
+ (pc.field("int_col"), "int_col"),
+ ("int_col", "int_col"),
+ (b"string_col", b"string_col"),
+ ([b"string_col"], "string_col"),
+ ],
+)
+@pytest.mark.skipif(pa_version_under13p0, reason="pyarrow>=13.0.0 required")
+def test_struct_accessor_field_expanded(indices, name):
+ arrow_type = pa.struct(
+ [
+ ("int_col", pa.int64()),
+ (
+ "struct_col",
+ pa.struct(
+ [
+ ("int_col", pa.int64()),
+ ("float_col", pa.float64()),
+ ("str_col", pa.string()),
+ ]
+ ),
+ ),
+ (b"string_col", pa.string()),
+ ]
+ )
+
+ data = pa.array([], type=arrow_type)
+ ser = Series(data, dtype=ArrowDtype(arrow_type))
+ expected = pc.struct_field(data, indices)
+ result = ser.struct.field(indices)
+ tm.assert_equal(result.array._pa_array.combine_chunks(), expected)
+ assert result.name == name
| This expands the set of types allowed by Series.struct.field to allow those allowed by pyarrow.
Closes https://github.com/pandas-dev/pandas/issues/56065
- [x] closes #56065
- [x] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [x] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
| https://api.github.com/repos/pandas-dev/pandas/pulls/56167 | 2023-11-25T22:20:47Z | 2024-01-02T19:15:33Z | 2024-01-02T19:15:33Z | 2024-01-02T22:55:04Z |
CoW: Fix deprecation warning for chained assignment | diff --git a/pandas/core/generic.py b/pandas/core/generic.py
index e4e95a973a3c1..64f1341235710 100644
--- a/pandas/core/generic.py
+++ b/pandas/core/generic.py
@@ -101,6 +101,7 @@
SettingWithCopyWarning,
_chained_assignment_method_msg,
_chained_assignment_warning_method_msg,
+ _check_cacher,
)
from pandas.util._decorators import (
deprecate_nonkeyword_arguments,
@@ -7195,7 +7196,7 @@ def fillna(
elif not PYPY and not using_copy_on_write():
ctr = sys.getrefcount(self)
ref_count = REF_COUNT
- if isinstance(self, ABCSeries) and hasattr(self, "_cacher"):
+ if isinstance(self, ABCSeries) and _check_cacher(self):
# see https://github.com/pandas-dev/pandas/pull/56060#discussion_r1399245221
ref_count += 1
if ctr <= ref_count:
@@ -7477,7 +7478,7 @@ def ffill(
elif not PYPY and not using_copy_on_write():
ctr = sys.getrefcount(self)
ref_count = REF_COUNT
- if isinstance(self, ABCSeries) and hasattr(self, "_cacher"):
+ if isinstance(self, ABCSeries) and _check_cacher(self):
# see https://github.com/pandas-dev/pandas/pull/56060#discussion_r1399245221
ref_count += 1
if ctr <= ref_count:
@@ -7660,7 +7661,7 @@ def bfill(
elif not PYPY and not using_copy_on_write():
ctr = sys.getrefcount(self)
ref_count = REF_COUNT
- if isinstance(self, ABCSeries) and hasattr(self, "_cacher"):
+ if isinstance(self, ABCSeries) and _check_cacher(self):
# see https://github.com/pandas-dev/pandas/pull/56060#discussion_r1399245221
ref_count += 1
if ctr <= ref_count:
@@ -7826,12 +7827,12 @@ def replace(
elif not PYPY and not using_copy_on_write():
ctr = sys.getrefcount(self)
ref_count = REF_COUNT
- if isinstance(self, ABCSeries) and hasattr(self, "_cacher"):
+ if isinstance(self, ABCSeries) and _check_cacher(self):
# in non-CoW mode, chained Series access will populate the
# `_item_cache` which results in an increased ref count not below
# the threshold, while we still need to warn. We detect this case
# of a Series derived from a DataFrame through the presence of
- # `_cacher`
+ # checking the `_cacher`
ref_count += 1
if ctr <= ref_count:
warnings.warn(
@@ -8267,7 +8268,7 @@ def interpolate(
elif not PYPY and not using_copy_on_write():
ctr = sys.getrefcount(self)
ref_count = REF_COUNT
- if isinstance(self, ABCSeries) and hasattr(self, "_cacher"):
+ if isinstance(self, ABCSeries) and _check_cacher(self):
# see https://github.com/pandas-dev/pandas/pull/56060#discussion_r1399245221
ref_count += 1
if ctr <= ref_count:
diff --git a/pandas/core/series.py b/pandas/core/series.py
index a9679f22f9933..13b3423497b54 100644
--- a/pandas/core/series.py
+++ b/pandas/core/series.py
@@ -45,6 +45,7 @@
_chained_assignment_method_msg,
_chained_assignment_msg,
_chained_assignment_warning_method_msg,
+ _check_cacher,
)
from pandas.util._decorators import (
Appender,
@@ -3564,7 +3565,7 @@ def update(self, other: Series | Sequence | Mapping) -> None:
elif not PYPY and not using_copy_on_write():
ctr = sys.getrefcount(self)
ref_count = REF_COUNT
- if hasattr(self, "_cacher"):
+ if _check_cacher(self):
# see https://github.com/pandas-dev/pandas/pull/56060#discussion_r1399245221
ref_count += 1
if ctr <= ref_count:
diff --git a/pandas/errors/__init__.py b/pandas/errors/__init__.py
index e2aa9010dc109..c89e4aa2cac0f 100644
--- a/pandas/errors/__init__.py
+++ b/pandas/errors/__init__.py
@@ -516,6 +516,24 @@ class ChainedAssignmentError(Warning):
)
+def _check_cacher(obj):
+ # This is a mess, selection paths that return a view set the _cacher attribute
+ # on the Series; most of them also set _item_cache which adds 1 to our relevant
+ # reference count, but iloc does not, so we have to check if we are actually
+ # in the item cache
+ if hasattr(obj, "_cacher"):
+ parent = obj._cacher[1]()
+ # parent could be dead
+ if parent is None:
+ return False
+ if hasattr(parent, "_item_cache"):
+ if obj._cacher[0] in parent._item_cache:
+ # Check if we are actually the item from item_cache, iloc creates a
+ # new object
+ return obj is parent._item_cache[obj._cacher[0]]
+ return False
+
+
class NumExprClobberingError(NameError):
"""
Exception raised when trying to use a built-in numexpr name as a variable name.
diff --git a/pandas/tests/copy_view/test_chained_assignment_deprecation.py b/pandas/tests/copy_view/test_chained_assignment_deprecation.py
new file mode 100644
index 0000000000000..37431f39bdaa0
--- /dev/null
+++ b/pandas/tests/copy_view/test_chained_assignment_deprecation.py
@@ -0,0 +1,63 @@
+import pytest
+
+from pandas import DataFrame
+import pandas._testing as tm
+
+
+def test_methods_iloc_warn(using_copy_on_write):
+ if not using_copy_on_write:
+ df = DataFrame({"a": [1, 2, 3], "b": 1})
+ with tm.assert_cow_warning(match="A value"):
+ df.iloc[:, 0].replace(1, 5, inplace=True)
+
+ with tm.assert_cow_warning(match="A value"):
+ df.iloc[:, 0].fillna(1, inplace=True)
+
+ with tm.assert_cow_warning(match="A value"):
+ df.iloc[:, 0].interpolate(inplace=True)
+
+ with tm.assert_cow_warning(match="A value"):
+ df.iloc[:, 0].ffill(inplace=True)
+
+ with tm.assert_cow_warning(match="A value"):
+ df.iloc[:, 0].bfill(inplace=True)
+
+
+@pytest.mark.parametrize(
+ "func, args",
+ [
+ ("replace", (1, 5)),
+ ("fillna", (1,)),
+ ("interpolate", ()),
+ ("bfill", ()),
+ ("ffill", ()),
+ ],
+)
+def test_methods_iloc_getitem_item_cache(func, args, using_copy_on_write):
+ df = DataFrame({"a": [1, 2, 3], "b": 1})
+ ser = df.iloc[:, 0]
+ # TODO(CoW-warn) should warn about updating a view
+ getattr(ser, func)(*args, inplace=True)
+
+ # parent that holds item_cache is dead, so don't increase ref count
+ ser = df.copy()["a"]
+ getattr(ser, func)(*args, inplace=True)
+
+ df = df.copy()
+
+ df["a"] # populate the item_cache
+ ser = df.iloc[:, 0] # iloc creates a new object
+ ser.fillna(0, inplace=True)
+
+ df["a"] # populate the item_cache
+ ser = df["a"]
+ ser.fillna(0, inplace=True)
+
+ df = df.copy()
+ df["a"] # populate the item_cache
+ if using_copy_on_write:
+ with tm.raises_chained_assignment_error():
+ df["a"].fillna(0, inplace=True)
+ else:
+ with tm.assert_cow_warning(match="A value"):
+ df["a"].fillna(0, inplace=True)
diff --git a/scripts/validate_unwanted_patterns.py b/scripts/validate_unwanted_patterns.py
index 5bde4c21cfab5..26f1311e950ef 100755
--- a/scripts/validate_unwanted_patterns.py
+++ b/scripts/validate_unwanted_patterns.py
@@ -51,6 +51,7 @@
"_chained_assignment_msg",
"_chained_assignment_method_msg",
"_chained_assignment_warning_method_msg",
+ "_check_cacher",
"_version_meson",
# The numba extensions need this to mock the iloc object
"_iLocIndexer",
| - [ ] closes #xxxx (Replace xxxx with the GitHub issue number)
- [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
I hate the SettingWithCopyWarning mechanism...
2 cases were fixed here:
- the parent that holds the item_cache might be dead, i.e. the ref count is not increased; we have to check for this when looking at the cacher
- iloc does not populate the item_cache but it sets _cacher on the new object, so we have to check if we are actually part of the item_cache
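Both cases above come down to a weakref to the parent plus an identity check against the parent's item cache, which is what the new `_check_cacher` helper does. A stdlib-only sketch of that pattern (class and function names here are illustrative stand-ins, not pandas internals):

```python
import gc
import weakref


class Parent:
    """Stand-in for a DataFrame holding an item cache."""

    def __init__(self):
        self._item_cache = {}


class Child:
    """Stand-in for a Series that remembers its parent via a weakref."""

    def __init__(self, key, parent):
        # mirrors the (key, weakref) shape of pandas' `_cacher` attribute
        self._cacher = (key, weakref.ref(parent))


def check_cacher(obj):
    # the child only "counts" if its parent is still alive AND the parent's
    # item cache holds this exact object (identity, not equality)
    parent = obj._cacher[1]()
    if parent is None:
        return False  # parent has been garbage collected
    cached = parent._item_cache.get(obj._cacher[0])
    return cached is obj


parent = Parent()
child = Child("a", parent)
parent._item_cache["a"] = child
print(check_cacher(child))   # True: child is the cached object

other = Child("a", parent)   # analogous to what iloc would create
print(check_cacher(other))   # False: not the object in the cache

del parent
gc.collect()
print(check_cacher(child))   # False: parent is dead
```

The identity check (`is`, not `==`) is what distinguishes the iloc case: iloc sets a cacher on a fresh object without putting it in the item cache, so the fresh object fails the check even though its key matches.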
xref https://github.com/pandas-dev/pandas/issues/56019 | https://api.github.com/repos/pandas-dev/pandas/pulls/56166 | 2023-11-25T19:35:12Z | 2023-11-27T14:07:48Z | 2023-11-27T14:07:48Z | 2023-11-27T14:08:12Z |
TST: parametrize over dt64 unit | diff --git a/pandas/tests/arithmetic/test_datetime64.py b/pandas/tests/arithmetic/test_datetime64.py
index a63bfbf1835a9..9014ba4b6093e 100644
--- a/pandas/tests/arithmetic/test_datetime64.py
+++ b/pandas/tests/arithmetic/test_datetime64.py
@@ -905,7 +905,7 @@ def test_dt64arr_add_sub_td64_nat(self, box_with_array, tz_naive_fixture):
dti = date_range("1994-04-01", periods=9, tz=tz, freq="QS")
other = np.timedelta64("NaT")
- expected = DatetimeIndex(["NaT"] * 9, tz=tz)
+ expected = DatetimeIndex(["NaT"] * 9, tz=tz).as_unit("ns")
obj = tm.box_expected(dti, box_with_array)
expected = tm.box_expected(expected, box_with_array)
@@ -1590,13 +1590,13 @@ class TestDatetime64OverflowHandling:
def test_dt64_overflow_masking(self, box_with_array):
# GH#25317
- left = Series([Timestamp("1969-12-31")])
+ left = Series([Timestamp("1969-12-31")], dtype="M8[ns]")
right = Series([NaT])
left = tm.box_expected(left, box_with_array)
right = tm.box_expected(right, box_with_array)
- expected = TimedeltaIndex([NaT])
+ expected = TimedeltaIndex([NaT], dtype="m8[ns]")
expected = tm.box_expected(expected, box_with_array)
result = left - right
diff --git a/pandas/tests/frame/indexing/test_setitem.py b/pandas/tests/frame/indexing/test_setitem.py
index bc632209ff7e1..0130820fc3de6 100644
--- a/pandas/tests/frame/indexing/test_setitem.py
+++ b/pandas/tests/frame/indexing/test_setitem.py
@@ -1354,7 +1354,8 @@ def test_setitem_frame_dup_cols_dtype(self):
def test_frame_setitem_empty_dataframe(self):
# GH#28871
- df = DataFrame({"date": [datetime(2000, 1, 1)]}).set_index("date")
+ dti = DatetimeIndex(["2000-01-01"], dtype="M8[ns]", name="date")
+ df = DataFrame({"date": dti}).set_index("date")
df = df[0:0].copy()
df["3010"] = None
@@ -1363,6 +1364,6 @@ def test_frame_setitem_empty_dataframe(self):
expected = DataFrame(
[],
columns=["3010", "2010"],
- index=Index([], dtype="datetime64[ns]", name="date"),
+ index=dti[:0],
)
tm.assert_frame_equal(df, expected)
diff --git a/pandas/tests/frame/methods/test_quantile.py b/pandas/tests/frame/methods/test_quantile.py
index dcec68ab3530d..787b77a5c725a 100644
--- a/pandas/tests/frame/methods/test_quantile.py
+++ b/pandas/tests/frame/methods/test_quantile.py
@@ -623,23 +623,23 @@ def test_quantile_nan(self, interp_method, request, using_array_manager):
exp = DataFrame({"a": [3.0, 4.0], "b": [np.nan, np.nan]}, index=[0.5, 0.75])
tm.assert_frame_equal(res, exp)
- def test_quantile_nat(self, interp_method, request, using_array_manager):
+ def test_quantile_nat(self, interp_method, request, using_array_manager, unit):
interpolation, method = interp_method
if method == "table" and using_array_manager:
request.applymarker(pytest.mark.xfail(reason="Axis name incorrectly set."))
# full NaT column
- df = DataFrame({"a": [pd.NaT, pd.NaT, pd.NaT]})
+ df = DataFrame({"a": [pd.NaT, pd.NaT, pd.NaT]}, dtype=f"M8[{unit}]")
res = df.quantile(
0.5, numeric_only=False, interpolation=interpolation, method=method
)
- exp = Series([pd.NaT], index=["a"], name=0.5)
+ exp = Series([pd.NaT], index=["a"], name=0.5, dtype=f"M8[{unit}]")
tm.assert_series_equal(res, exp)
res = df.quantile(
[0.5], numeric_only=False, interpolation=interpolation, method=method
)
- exp = DataFrame({"a": [pd.NaT]}, index=[0.5])
+ exp = DataFrame({"a": [pd.NaT]}, index=[0.5], dtype=f"M8[{unit}]")
tm.assert_frame_equal(res, exp)
# mixed non-null / full null column
@@ -651,20 +651,29 @@ def test_quantile_nat(self, interp_method, request, using_array_manager):
Timestamp("2012-01-03"),
],
"b": [pd.NaT, pd.NaT, pd.NaT],
- }
+ },
+ dtype=f"M8[{unit}]",
)
res = df.quantile(
0.5, numeric_only=False, interpolation=interpolation, method=method
)
- exp = Series([Timestamp("2012-01-02"), pd.NaT], index=["a", "b"], name=0.5)
+ exp = Series(
+ [Timestamp("2012-01-02"), pd.NaT],
+ index=["a", "b"],
+ name=0.5,
+ dtype=f"M8[{unit}]",
+ )
tm.assert_series_equal(res, exp)
res = df.quantile(
[0.5], numeric_only=False, interpolation=interpolation, method=method
)
exp = DataFrame(
- [[Timestamp("2012-01-02"), pd.NaT]], index=[0.5], columns=["a", "b"]
+ [[Timestamp("2012-01-02"), pd.NaT]],
+ index=[0.5],
+ columns=["a", "b"],
+ dtype=f"M8[{unit}]",
)
tm.assert_frame_equal(res, exp)
diff --git a/pandas/tests/frame/test_reductions.py b/pandas/tests/frame/test_reductions.py
index 20ad93e6dce4d..0d71fb0926df9 100644
--- a/pandas/tests/frame/test_reductions.py
+++ b/pandas/tests/frame/test_reductions.py
@@ -598,7 +598,7 @@ def test_sem(self, datetime_frame):
"C": [1.0],
"D": ["a"],
"E": Categorical(["a"], categories=["a"]),
- "F": to_datetime(["2000-1-2"]),
+ "F": pd.DatetimeIndex(["2000-01-02"], dtype="M8[ns]"),
"G": to_timedelta(["1 days"]),
},
),
@@ -610,7 +610,7 @@ def test_sem(self, datetime_frame):
"C": [np.nan],
"D": np.array([np.nan], dtype=object),
"E": Categorical([np.nan], categories=["a"]),
- "F": [pd.NaT],
+ "F": pd.DatetimeIndex([pd.NaT], dtype="M8[ns]"),
"G": to_timedelta([pd.NaT]),
},
),
@@ -621,7 +621,9 @@ def test_sem(self, datetime_frame):
"I": [8, 9, np.nan, np.nan],
"J": [1, np.nan, np.nan, np.nan],
"K": Categorical(["a", np.nan, np.nan, np.nan], categories=["a"]),
- "L": to_datetime(["2000-1-2", "NaT", "NaT", "NaT"]),
+ "L": pd.DatetimeIndex(
+ ["2000-01-02", "NaT", "NaT", "NaT"], dtype="M8[ns]"
+ ),
"M": to_timedelta(["1 days", "nan", "nan", "nan"]),
"N": [0, 1, 2, 3],
},
@@ -633,7 +635,9 @@ def test_sem(self, datetime_frame):
"I": [8, 9, np.nan, np.nan],
"J": [1, np.nan, np.nan, np.nan],
"K": Categorical([np.nan, "a", np.nan, np.nan], categories=["a"]),
- "L": to_datetime(["NaT", "2000-1-2", "NaT", "NaT"]),
+ "L": pd.DatetimeIndex(
+ ["NaT", "2000-01-02", "NaT", "NaT"], dtype="M8[ns]"
+ ),
"M": to_timedelta(["nan", "1 days", "nan", "nan"]),
"N": [0, 1, 2, 3],
},
@@ -648,13 +652,17 @@ def test_mode_dropna(self, dropna, expected):
"C": [1, np.nan, np.nan, np.nan],
"D": [np.nan, np.nan, "a", np.nan],
"E": Categorical([np.nan, np.nan, "a", np.nan]),
- "F": to_datetime(["NaT", "2000-1-2", "NaT", "NaT"]),
+ "F": pd.DatetimeIndex(
+ ["NaT", "2000-01-02", "NaT", "NaT"], dtype="M8[ns]"
+ ),
"G": to_timedelta(["1 days", "nan", "nan", "nan"]),
"H": [8, 8, 9, 9],
"I": [9, 9, 8, 8],
"J": [1, 1, np.nan, np.nan],
"K": Categorical(["a", np.nan, "a", np.nan]),
- "L": to_datetime(["2000-1-2", "2000-1-2", "NaT", "NaT"]),
+ "L": pd.DatetimeIndex(
+ ["2000-01-02", "2000-01-02", "NaT", "NaT"], dtype="M8[ns]"
+ ),
"M": to_timedelta(["1 days", "nan", "1 days", "nan"]),
"N": np.arange(4, dtype="int64"),
}
diff --git a/pandas/tests/groupby/methods/test_groupby_shift_diff.py b/pandas/tests/groupby/methods/test_groupby_shift_diff.py
index f2d40867af03a..0ce6a6462a5d8 100644
--- a/pandas/tests/groupby/methods/test_groupby_shift_diff.py
+++ b/pandas/tests/groupby/methods/test_groupby_shift_diff.py
@@ -117,10 +117,11 @@ def test_group_diff_real_frame(any_real_numpy_dtype):
[Timedelta("5 days"), Timedelta("6 days"), Timedelta("7 days")],
],
)
-def test_group_diff_datetimelike(data):
+def test_group_diff_datetimelike(data, unit):
df = DataFrame({"a": [1, 2, 2], "b": data})
+ df["b"] = df["b"].dt.as_unit(unit)
result = df.groupby("a")["b"].diff()
- expected = Series([NaT, NaT, Timedelta("1 days")], name="b")
+ expected = Series([NaT, NaT, Timedelta("1 days")], name="b").dt.as_unit(unit)
tm.assert_series_equal(result, expected)
diff --git a/pandas/tests/groupby/test_timegrouper.py b/pandas/tests/groupby/test_timegrouper.py
index be02c7f79ba01..d1faab9cabfba 100644
--- a/pandas/tests/groupby/test_timegrouper.py
+++ b/pandas/tests/groupby/test_timegrouper.py
@@ -98,11 +98,17 @@ def test_groupby_with_timegrouper(self):
for df in [df_original, df_reordered]:
df = df.set_index(["Date"])
+ exp_dti = date_range(
+ "20130901",
+ "20131205",
+ freq="5D",
+ name="Date",
+ inclusive="left",
+ unit=df.index.unit,
+ )
expected = DataFrame(
{"Buyer": 0, "Quantity": 0},
- index=date_range(
- "20130901", "20131205", freq="5D", name="Date", inclusive="left"
- ),
+ index=exp_dti,
)
# Cast to object to avoid implicit cast when setting entry to "CarlCarlCarl"
expected = expected.astype({"Buyer": object})
@@ -514,6 +520,7 @@ def test_groupby_groups_datetimeindex(self):
groups = grouped.groups
assert isinstance(next(iter(groups.keys())), datetime)
+ def test_groupby_groups_datetimeindex2(self):
# GH#11442
index = date_range("2015/01/01", periods=5, name="date")
df = DataFrame({"A": [5, 6, 7, 8, 9], "B": [1, 2, 3, 4, 5]}, index=index)
@@ -876,7 +883,9 @@ def test_groupby_apply_timegrouper_with_nat_dict_returns(
res = gb["Quantity"].apply(lambda x: {"foo": len(x)})
- dti = date_range("2013-09-01", "2013-10-01", freq="5D", name="Date")
+ df = gb.obj
+ unit = df["Date"]._values.unit
+ dti = date_range("2013-09-01", "2013-10-01", freq="5D", name="Date", unit=unit)
mi = MultiIndex.from_arrays([dti, ["foo"] * len(dti)])
expected = Series([3, 0, 0, 0, 0, 0, 2], index=mi, name="Quantity")
tm.assert_series_equal(res, expected)
@@ -890,7 +899,9 @@ def test_groupby_apply_timegrouper_with_nat_scalar_returns(
res = gb["Quantity"].apply(lambda x: x.iloc[0] if len(x) else np.nan)
- dti = date_range("2013-09-01", "2013-10-01", freq="5D", name="Date")
+ df = gb.obj
+ unit = df["Date"]._values.unit
+ dti = date_range("2013-09-01", "2013-10-01", freq="5D", name="Date", unit=unit)
expected = Series(
[18, np.nan, np.nan, np.nan, np.nan, np.nan, 5],
index=dti._with_freq(None),
@@ -919,9 +930,10 @@ def test_groupby_apply_timegrouper_with_nat_apply_squeeze(
with tm.assert_produces_warning(FutureWarning, match=msg):
res = gb.apply(lambda x: x["Quantity"] * 2)
+ dti = Index([Timestamp("2013-12-31")], dtype=df["Date"].dtype, name="Date")
expected = DataFrame(
[[36, 6, 6, 10, 2]],
- index=Index([Timestamp("2013-12-31")], name="Date"),
+ index=dti,
columns=Index([0, 1, 5, 2, 3], name="Quantity"),
)
tm.assert_frame_equal(res, expected)
diff --git a/pandas/tests/groupby/transform/test_transform.py b/pandas/tests/groupby/transform/test_transform.py
index cf122832d86b4..a6f160d92fb66 100644
--- a/pandas/tests/groupby/transform/test_transform.py
+++ b/pandas/tests/groupby/transform/test_transform.py
@@ -115,12 +115,15 @@ def test_transform_fast2():
)
result = df.groupby("grouping").transform("first")
- dates = [
- Timestamp("2014-1-1"),
- Timestamp("2014-1-2"),
- Timestamp("2014-1-2"),
- Timestamp("2014-1-4"),
- ]
+ dates = pd.Index(
+ [
+ Timestamp("2014-1-1"),
+ Timestamp("2014-1-2"),
+ Timestamp("2014-1-2"),
+ Timestamp("2014-1-4"),
+ ],
+ dtype="M8[ns]",
+ )
expected = DataFrame(
{"f": [1.1, 2.1, 2.1, 4.5], "d": dates, "i": [1, 2, 2, 4]},
columns=["f", "i", "d"],
@@ -532,7 +535,7 @@ def test_series_fast_transform_date():
Timestamp("2014-1-2"),
Timestamp("2014-1-4"),
]
- expected = Series(dates, name="d")
+ expected = Series(dates, name="d", dtype="M8[ns]")
tm.assert_series_equal(result, expected)
@@ -1204,7 +1207,9 @@ def test_groupby_transform_with_datetimes(func, values):
result = stocks.groupby(stocks["week_id"])["price"].transform(func)
- expected = Series(data=pd.to_datetime(values), index=dates, name="price")
+ expected = Series(
+ data=pd.to_datetime(values).as_unit("ns"), index=dates, name="price"
+ )
tm.assert_series_equal(result, expected)
diff --git a/pandas/tests/indexes/datetimes/methods/test_astype.py b/pandas/tests/indexes/datetimes/methods/test_astype.py
index 9ee6250feeac6..c0bc6601769b1 100644
--- a/pandas/tests/indexes/datetimes/methods/test_astype.py
+++ b/pandas/tests/indexes/datetimes/methods/test_astype.py
@@ -292,7 +292,7 @@ def test_integer_index_astype_datetime(self, tz, dtype):
# GH 20997, 20964, 24559
val = [Timestamp("2018-01-01", tz=tz).as_unit("ns")._value]
result = Index(val, name="idx").astype(dtype)
- expected = DatetimeIndex(["2018-01-01"], tz=tz, name="idx")
+ expected = DatetimeIndex(["2018-01-01"], tz=tz, name="idx").as_unit("ns")
tm.assert_index_equal(result, expected)
def test_dti_astype_period(self):
@@ -312,8 +312,9 @@ class TestAstype:
def test_astype_category(self, tz):
obj = date_range("2000", periods=2, tz=tz, name="idx")
result = obj.astype("category")
+ dti = DatetimeIndex(["2000-01-01", "2000-01-02"], tz=tz).as_unit("ns")
expected = pd.CategoricalIndex(
- [Timestamp("2000-01-01", tz=tz), Timestamp("2000-01-02", tz=tz)],
+ dti,
name="idx",
)
tm.assert_index_equal(result, expected)
diff --git a/pandas/tests/indexes/datetimes/test_indexing.py b/pandas/tests/indexes/datetimes/test_indexing.py
index 73de33607ca0b..b7932715c3ac7 100644
--- a/pandas/tests/indexes/datetimes/test_indexing.py
+++ b/pandas/tests/indexes/datetimes/test_indexing.py
@@ -35,46 +35,43 @@ def test_getitem_slice_keeps_name(self):
dr = date_range(st, et, freq="h", name="timebucket")
assert dr[1:].name == dr.name
- def test_getitem(self):
- idx1 = date_range("2011-01-01", "2011-01-31", freq="D", name="idx")
- idx2 = date_range(
- "2011-01-01", "2011-01-31", freq="D", tz="Asia/Tokyo", name="idx"
- )
+ @pytest.mark.parametrize("tz", [None, "Asia/Tokyo"])
+ def test_getitem(self, tz):
+ idx = date_range("2011-01-01", "2011-01-31", freq="D", tz=tz, name="idx")
- for idx in [idx1, idx2]:
- result = idx[0]
- assert result == Timestamp("2011-01-01", tz=idx.tz)
+ result = idx[0]
+ assert result == Timestamp("2011-01-01", tz=idx.tz)
- result = idx[0:5]
- expected = date_range(
- "2011-01-01", "2011-01-05", freq="D", tz=idx.tz, name="idx"
- )
- tm.assert_index_equal(result, expected)
- assert result.freq == expected.freq
+ result = idx[0:5]
+ expected = date_range(
+ "2011-01-01", "2011-01-05", freq="D", tz=idx.tz, name="idx"
+ )
+ tm.assert_index_equal(result, expected)
+ assert result.freq == expected.freq
- result = idx[0:10:2]
- expected = date_range(
- "2011-01-01", "2011-01-09", freq="2D", tz=idx.tz, name="idx"
- )
- tm.assert_index_equal(result, expected)
- assert result.freq == expected.freq
+ result = idx[0:10:2]
+ expected = date_range(
+ "2011-01-01", "2011-01-09", freq="2D", tz=idx.tz, name="idx"
+ )
+ tm.assert_index_equal(result, expected)
+ assert result.freq == expected.freq
- result = idx[-20:-5:3]
- expected = date_range(
- "2011-01-12", "2011-01-24", freq="3D", tz=idx.tz, name="idx"
- )
- tm.assert_index_equal(result, expected)
- assert result.freq == expected.freq
+ result = idx[-20:-5:3]
+ expected = date_range(
+ "2011-01-12", "2011-01-24", freq="3D", tz=idx.tz, name="idx"
+ )
+ tm.assert_index_equal(result, expected)
+ assert result.freq == expected.freq
- result = idx[4::-1]
- expected = DatetimeIndex(
- ["2011-01-05", "2011-01-04", "2011-01-03", "2011-01-02", "2011-01-01"],
- freq="-1D",
- tz=idx.tz,
- name="idx",
- )
- tm.assert_index_equal(result, expected)
- assert result.freq == expected.freq
+ result = idx[4::-1]
+ expected = DatetimeIndex(
+ ["2011-01-05", "2011-01-04", "2011-01-03", "2011-01-02", "2011-01-01"],
+ dtype=idx.dtype,
+ freq="-1D",
+ name="idx",
+ )
+ tm.assert_index_equal(result, expected)
+ assert result.freq == expected.freq
@pytest.mark.parametrize("freq", ["B", "C"])
def test_dti_business_getitem(self, freq):
@@ -264,8 +261,8 @@ def test_take(self):
result = idx.take([3, 2, 5])
expected = DatetimeIndex(
["2011-01-04", "2011-01-03", "2011-01-06"],
+ dtype=idx.dtype,
freq=None,
- tz=idx.tz,
name="idx",
)
tm.assert_index_equal(result, expected)
@@ -274,6 +271,7 @@ def test_take(self):
result = idx.take([-3, 2, 5])
expected = DatetimeIndex(
["2011-01-29", "2011-01-03", "2011-01-06"],
+ dtype=idx.dtype,
freq=None,
tz=idx.tz,
name="idx",
@@ -314,7 +312,7 @@ def test_take2(self, tz):
tz=tz,
name="idx",
)
- expected = DatetimeIndex(dates, freq=None, name="idx", tz=tz)
+ expected = DatetimeIndex(dates, freq=None, name="idx", dtype=idx.dtype)
taken1 = idx.take([5, 6, 8, 12])
taken2 = idx[[5, 6, 8, 12]]
diff --git a/pandas/tests/indexes/datetimes/test_setops.py b/pandas/tests/indexes/datetimes/test_setops.py
index dde680665a8bc..353026e81b390 100644
--- a/pandas/tests/indexes/datetimes/test_setops.py
+++ b/pandas/tests/indexes/datetimes/test_setops.py
@@ -249,19 +249,23 @@ def test_intersection(self, tz, sort):
# non-monotonic
base = DatetimeIndex(
["2011-01-05", "2011-01-04", "2011-01-02", "2011-01-03"], tz=tz, name="idx"
- )
+ ).as_unit("ns")
rng2 = DatetimeIndex(
["2011-01-04", "2011-01-02", "2011-02-02", "2011-02-03"], tz=tz, name="idx"
- )
- expected2 = DatetimeIndex(["2011-01-04", "2011-01-02"], tz=tz, name="idx")
+ ).as_unit("ns")
+ expected2 = DatetimeIndex(
+ ["2011-01-04", "2011-01-02"], tz=tz, name="idx"
+ ).as_unit("ns")
rng3 = DatetimeIndex(
["2011-01-04", "2011-01-02", "2011-02-02", "2011-02-03"],
tz=tz,
name="other",
- )
- expected3 = DatetimeIndex(["2011-01-04", "2011-01-02"], tz=tz, name=None)
+ ).as_unit("ns")
+ expected3 = DatetimeIndex(
+ ["2011-01-04", "2011-01-02"], tz=tz, name=None
+ ).as_unit("ns")
# GH 7880
rng4 = date_range("7/1/2000", "7/31/2000", freq="D", tz=tz, name="idx")
@@ -350,7 +354,7 @@ def test_difference_freq(self, sort):
index = date_range("20160920", "20160925", freq="D")
other = date_range("20160921", "20160924", freq="D")
- expected = DatetimeIndex(["20160920", "20160925"], freq=None)
+ expected = DatetimeIndex(["20160920", "20160925"], dtype="M8[ns]", freq=None)
idx_diff = index.difference(other, sort)
tm.assert_index_equal(idx_diff, expected)
tm.assert_attr_equal("freq", idx_diff, expected)
@@ -359,7 +363,7 @@ def test_difference_freq(self, sort):
# subset of the original range
other = date_range("20160922", "20160925", freq="D")
idx_diff = index.difference(other, sort)
- expected = DatetimeIndex(["20160920", "20160921"], freq="D")
+ expected = DatetimeIndex(["20160920", "20160921"], dtype="M8[ns]", freq="D")
tm.assert_index_equal(idx_diff, expected)
tm.assert_attr_equal("freq", idx_diff, expected)
diff --git a/pandas/tests/indexes/datetimes/test_timezones.py b/pandas/tests/indexes/datetimes/test_timezones.py
index 379c727f4ed0f..daa5b346eb4ec 100644
--- a/pandas/tests/indexes/datetimes/test_timezones.py
+++ b/pandas/tests/indexes/datetimes/test_timezones.py
@@ -92,7 +92,7 @@ def test_drop_dst_boundary(self):
"201710290245",
"201710290300",
],
- tz=tz,
+ dtype="M8[ns, Europe/Brussels]",
freq=freq,
ambiguous=[
True,
@@ -112,10 +112,14 @@ def test_drop_dst_boundary(self):
result = index.drop(index[0])
tm.assert_index_equal(result, expected)
- def test_date_range_localize(self):
- rng = date_range("3/11/2012 03:00", periods=15, freq="h", tz="US/Eastern")
- rng2 = DatetimeIndex(["3/11/2012 03:00", "3/11/2012 04:00"], tz="US/Eastern")
- rng3 = date_range("3/11/2012 03:00", periods=15, freq="h")
+ def test_date_range_localize(self, unit):
+ rng = date_range(
+ "3/11/2012 03:00", periods=15, freq="h", tz="US/Eastern", unit=unit
+ )
+ rng2 = DatetimeIndex(
+ ["3/11/2012 03:00", "3/11/2012 04:00"], dtype=f"M8[{unit}, US/Eastern]"
+ )
+ rng3 = date_range("3/11/2012 03:00", periods=15, freq="h", unit=unit)
rng3 = rng3.tz_localize("US/Eastern")
tm.assert_index_equal(rng._with_freq(None), rng3)
@@ -129,10 +133,15 @@ def test_date_range_localize(self):
assert val == exp # same UTC value
tm.assert_index_equal(rng[:2], rng2)
+ def test_date_range_localize2(self, unit):
# Right before the DST transition
- rng = date_range("3/11/2012 00:00", periods=2, freq="h", tz="US/Eastern")
+ rng = date_range(
+ "3/11/2012 00:00", periods=2, freq="h", tz="US/Eastern", unit=unit
+ )
rng2 = DatetimeIndex(
- ["3/11/2012 00:00", "3/11/2012 01:00"], tz="US/Eastern", freq="h"
+ ["3/11/2012 00:00", "3/11/2012 01:00"],
+ dtype=f"M8[{unit}, US/Eastern]",
+ freq="h",
)
tm.assert_index_equal(rng, rng2)
exp = Timestamp("3/11/2012 00:00", tz="US/Eastern")
@@ -142,7 +151,9 @@ def test_date_range_localize(self):
assert exp.hour == 1
assert rng[1] == exp
- rng = date_range("3/11/2012 00:00", periods=10, freq="h", tz="US/Eastern")
+ rng = date_range(
+ "3/11/2012 00:00", periods=10, freq="h", tz="US/Eastern", unit=unit
+ )
assert rng[2].hour == 3
def test_timestamp_equality_different_timezones(self):
@@ -231,10 +242,10 @@ def test_dti_convert_tz_aware_datetime_datetime(self, tz):
dates = [datetime(2000, 1, 1), datetime(2000, 1, 2), datetime(2000, 1, 3)]
dates_aware = [conversion.localize_pydatetime(x, tz) for x in dates]
- result = DatetimeIndex(dates_aware)
+ result = DatetimeIndex(dates_aware).as_unit("ns")
assert timezones.tz_compare(result.tz, tz)
- converted = to_datetime(dates_aware, utc=True)
+ converted = to_datetime(dates_aware, utc=True).as_unit("ns")
ex_vals = np.array([Timestamp(x).as_unit("ns")._value for x in dates_aware])
tm.assert_numpy_array_equal(converted.asi8, ex_vals)
assert converted.tz is timezone.utc
diff --git a/pandas/tests/indexes/interval/test_indexing.py b/pandas/tests/indexes/interval/test_indexing.py
index 2007a793843c9..fd03047b2c127 100644
--- a/pandas/tests/indexes/interval/test_indexing.py
+++ b/pandas/tests/indexes/interval/test_indexing.py
@@ -363,15 +363,18 @@ def test_get_indexer_categorical_with_nans(self):
def test_get_indexer_datetime(self):
ii = IntervalIndex.from_breaks(date_range("2018-01-01", periods=4))
- result = ii.get_indexer(DatetimeIndex(["2018-01-02"]))
+ # TODO: with mismatched resolution get_indexer currently raises;
+ # this should probably coerce?
+ target = DatetimeIndex(["2018-01-02"], dtype="M8[ns]")
+ result = ii.get_indexer(target)
expected = np.array([0], dtype=np.intp)
tm.assert_numpy_array_equal(result, expected)
- result = ii.get_indexer(DatetimeIndex(["2018-01-02"]).astype(str))
+ result = ii.get_indexer(target.astype(str))
tm.assert_numpy_array_equal(result, expected)
# https://github.com/pandas-dev/pandas/issues/47772
- result = ii.get_indexer(DatetimeIndex(["2018-01-02"]).asi8)
+ result = ii.get_indexer(target.asi8)
expected = np.array([-1], dtype=np.intp)
tm.assert_numpy_array_equal(result, expected)
diff --git a/pandas/tests/indexes/interval/test_interval_range.py b/pandas/tests/indexes/interval/test_interval_range.py
index 37606bda9efca..d4d4a09c44d13 100644
--- a/pandas/tests/indexes/interval/test_interval_range.py
+++ b/pandas/tests/indexes/interval/test_interval_range.py
@@ -184,6 +184,9 @@ def test_no_invalid_float_truncation(self, start, end, freq):
def test_linspace_dst_transition(self, start, mid, end):
# GH 20976: linspace behavior defined from start/end/periods
# accounts for the hour gained/lost during DST transition
+ start = start.as_unit("ns")
+ mid = mid.as_unit("ns")
+ end = end.as_unit("ns")
result = interval_range(start=start, end=end, periods=2)
expected = IntervalIndex.from_breaks([start, mid, end])
tm.assert_index_equal(result, expected)
diff --git a/pandas/tests/indexes/period/methods/test_to_timestamp.py b/pandas/tests/indexes/period/methods/test_to_timestamp.py
index 7be2602135578..3867f9e3245dc 100644
--- a/pandas/tests/indexes/period/methods/test_to_timestamp.py
+++ b/pandas/tests/indexes/period/methods/test_to_timestamp.py
@@ -58,7 +58,9 @@ def test_to_timestamp_pi_nat(self):
result = index.to_timestamp("D")
expected = DatetimeIndex(
- [NaT, datetime(2011, 1, 1), datetime(2011, 2, 1)], name="idx"
+ [NaT, datetime(2011, 1, 1), datetime(2011, 2, 1)],
+ dtype="M8[ns]",
+ name="idx",
)
tm.assert_index_equal(result, expected)
assert result.name == "idx"
@@ -98,11 +100,15 @@ def test_to_timestamp_pi_mult(self):
idx = PeriodIndex(["2011-01", "NaT", "2011-02"], freq="2M", name="idx")
result = idx.to_timestamp()
- expected = DatetimeIndex(["2011-01-01", "NaT", "2011-02-01"], name="idx")
+ expected = DatetimeIndex(
+ ["2011-01-01", "NaT", "2011-02-01"], dtype="M8[ns]", name="idx"
+ )
tm.assert_index_equal(result, expected)
result = idx.to_timestamp(how="E")
- expected = DatetimeIndex(["2011-02-28", "NaT", "2011-03-31"], name="idx")
+ expected = DatetimeIndex(
+ ["2011-02-28", "NaT", "2011-03-31"], dtype="M8[ns]", name="idx"
+ )
expected = expected + Timedelta(1, "D") - Timedelta(1, "ns")
tm.assert_index_equal(result, expected)
@@ -110,18 +116,22 @@ def test_to_timestamp_pi_combined(self):
idx = period_range(start="2011", periods=2, freq="1D1h", name="idx")
result = idx.to_timestamp()
- expected = DatetimeIndex(["2011-01-01 00:00", "2011-01-02 01:00"], name="idx")
+ expected = DatetimeIndex(
+ ["2011-01-01 00:00", "2011-01-02 01:00"], dtype="M8[ns]", name="idx"
+ )
tm.assert_index_equal(result, expected)
result = idx.to_timestamp(how="E")
expected = DatetimeIndex(
- ["2011-01-02 00:59:59", "2011-01-03 01:59:59"], name="idx"
+ ["2011-01-02 00:59:59", "2011-01-03 01:59:59"], name="idx", dtype="M8[ns]"
)
expected = expected + Timedelta(1, "s") - Timedelta(1, "ns")
tm.assert_index_equal(result, expected)
result = idx.to_timestamp(how="E", freq="h")
- expected = DatetimeIndex(["2011-01-02 00:00", "2011-01-03 01:00"], name="idx")
+ expected = DatetimeIndex(
+ ["2011-01-02 00:00", "2011-01-03 01:00"], dtype="M8[ns]", name="idx"
+ )
expected = expected + Timedelta(1, "h") - Timedelta(1, "ns")
tm.assert_index_equal(result, expected)
diff --git a/pandas/tests/indexing/multiindex/test_partial.py b/pandas/tests/indexing/multiindex/test_partial.py
index 9cf11b4602cb2..7be3d8c657766 100644
--- a/pandas/tests/indexing/multiindex/test_partial.py
+++ b/pandas/tests/indexing/multiindex/test_partial.py
@@ -5,9 +5,9 @@
from pandas import (
DataFrame,
+ DatetimeIndex,
MultiIndex,
date_range,
- to_datetime,
)
import pandas._testing as tm
@@ -219,7 +219,11 @@ def test_setitem_multiple_partial(self, multiindex_dataframe_random_data):
@pytest.mark.parametrize(
"indexer, exp_idx, exp_values",
[
- (slice("2019-2", None), [to_datetime("2019-02-01")], [2, 3]),
+ (
+ slice("2019-2", None),
+ DatetimeIndex(["2019-02-01"], dtype="M8[ns]"),
+ [2, 3],
+ ),
(
slice(None, "2019-2"),
date_range("2019", periods=2, freq="MS"),
diff --git a/pandas/tests/indexing/multiindex/test_setitem.py b/pandas/tests/indexing/multiindex/test_setitem.py
index e31b675efb69a..9870868a3e1e9 100644
--- a/pandas/tests/indexing/multiindex/test_setitem.py
+++ b/pandas/tests/indexing/multiindex/test_setitem.py
@@ -9,7 +9,6 @@
DataFrame,
MultiIndex,
Series,
- Timestamp,
date_range,
isna,
notna,
@@ -91,11 +90,11 @@ def test_setitem_multiindex3(self):
np.random.default_rng(2).random((12, 4)), index=idx, columns=cols
)
- subidx = MultiIndex.from_tuples(
- [("A", Timestamp("2015-01-01")), ("A", Timestamp("2015-02-01"))]
+ subidx = MultiIndex.from_arrays(
+ [["A", "A"], date_range("2015-01-01", "2015-02-01", freq="MS")]
)
- subcols = MultiIndex.from_tuples(
- [("foo", Timestamp("2016-01-01")), ("foo", Timestamp("2016-02-01"))]
+ subcols = MultiIndex.from_arrays(
+ [["foo", "foo"], date_range("2016-01-01", "2016-02-01", freq="MS")]
)
vals = DataFrame(
diff --git a/pandas/tests/io/xml/test_xml_dtypes.py b/pandas/tests/io/xml/test_xml_dtypes.py
index fb24902efc0f5..a85576ff13f5c 100644
--- a/pandas/tests/io/xml/test_xml_dtypes.py
+++ b/pandas/tests/io/xml/test_xml_dtypes.py
@@ -9,6 +9,7 @@
from pandas import (
DataFrame,
+ DatetimeIndex,
Series,
to_datetime,
)
@@ -146,7 +147,9 @@ def test_dtypes_with_names(parser):
"Col1": ["square", "circle", "triangle"],
"Col2": Series(["00360", "00360", "00180"]).astype("string"),
"Col3": Series([4.0, float("nan"), 3.0]).astype("Int64"),
- "Col4": to_datetime(["2020-01-01", "2021-01-01", "2022-01-01"]),
+ "Col4": DatetimeIndex(
+ ["2020-01-01", "2021-01-01", "2022-01-01"], dtype="M8[ns]"
+ ),
}
)
diff --git a/pandas/tests/resample/test_resampler_grouper.py b/pandas/tests/resample/test_resampler_grouper.py
index 4afd3b477c3ee..3cf5201d573d4 100644
--- a/pandas/tests/resample/test_resampler_grouper.py
+++ b/pandas/tests/resample/test_resampler_grouper.py
@@ -118,12 +118,11 @@ def test_getitem_multiple():
df = DataFrame(data, index=date_range("2016-01-01", periods=2))
r = df.groupby("id").resample("1D")
result = r["buyer"].count()
+
+ exp_mi = pd.MultiIndex.from_arrays([[1, 2], df.index], names=("id", None))
expected = Series(
[1, 1],
- index=pd.MultiIndex.from_tuples(
- [(1, Timestamp("2016-01-01")), (2, Timestamp("2016-01-02"))],
- names=["id", None],
- ),
+ index=exp_mi,
name="buyer",
)
tm.assert_series_equal(result, expected)
diff --git a/pandas/tests/reshape/merge/test_join.py b/pandas/tests/reshape/merge/test_join.py
index a38e4ffe2eaf7..c4c83e2046b76 100644
--- a/pandas/tests/reshape/merge/test_join.py
+++ b/pandas/tests/reshape/merge/test_join.py
@@ -787,7 +787,7 @@ def test_join_datetime_string(self):
index=[2, 4],
columns=["x", "y", "z", "a"],
)
- expected["x"] = expected["x"].dt.as_unit("ns")
+ expected["x"] = expected["x"].astype("M8[ns]")
tm.assert_frame_equal(result, expected)
def test_join_with_categorical_index(self):
diff --git a/pandas/tests/series/methods/test_quantile.py b/pandas/tests/series/methods/test_quantile.py
index 016635a50fdf4..fa0563271d7df 100644
--- a/pandas/tests/series/methods/test_quantile.py
+++ b/pandas/tests/series/methods/test_quantile.py
@@ -105,8 +105,8 @@ def test_quantile_interpolation_dtype(self):
def test_quantile_nan(self):
# GH 13098
- s = Series([1, 2, 3, 4, np.nan])
- result = s.quantile(0.5)
+ ser = Series([1, 2, 3, 4, np.nan])
+ result = ser.quantile(0.5)
expected = 2.5
assert result == expected
@@ -114,14 +114,14 @@ def test_quantile_nan(self):
s1 = Series([], dtype=object)
cases = [s1, Series([np.nan, np.nan])]
- for s in cases:
- res = s.quantile(0.5)
+ for ser in cases:
+ res = ser.quantile(0.5)
assert np.isnan(res)
- res = s.quantile([0.5])
+ res = ser.quantile([0.5])
tm.assert_series_equal(res, Series([np.nan], index=[0.5]))
- res = s.quantile([0.2, 0.3])
+ res = ser.quantile([0.2, 0.3])
tm.assert_series_equal(res, Series([np.nan, np.nan], index=[0.2, 0.3]))
@pytest.mark.parametrize(
@@ -160,11 +160,11 @@ def test_quantile_nan(self):
],
)
def test_quantile_box(self, case):
- s = Series(case, name="XXX")
- res = s.quantile(0.5)
+ ser = Series(case, name="XXX")
+ res = ser.quantile(0.5)
assert res == case[1]
- res = s.quantile([0.5])
+ res = ser.quantile([0.5])
exp = Series([case[1]], index=[0.5], name="XXX")
tm.assert_series_equal(res, exp)
@@ -190,35 +190,37 @@ def test_quantile_sparse(self, values, dtype):
expected = Series(np.asarray(ser)).quantile([0.5]).astype("Sparse[float]")
tm.assert_series_equal(result, expected)
- def test_quantile_empty(self):
+ def test_quantile_empty_float64(self):
# floats
- s = Series([], dtype="float64")
+ ser = Series([], dtype="float64")
- res = s.quantile(0.5)
+ res = ser.quantile(0.5)
assert np.isnan(res)
- res = s.quantile([0.5])
+ res = ser.quantile([0.5])
exp = Series([np.nan], index=[0.5])
tm.assert_series_equal(res, exp)
+ def test_quantile_empty_int64(self):
# int
- s = Series([], dtype="int64")
+ ser = Series([], dtype="int64")
- res = s.quantile(0.5)
+ res = ser.quantile(0.5)
assert np.isnan(res)
- res = s.quantile([0.5])
+ res = ser.quantile([0.5])
exp = Series([np.nan], index=[0.5])
tm.assert_series_equal(res, exp)
+ def test_quantile_empty_dt64(self):
# datetime
- s = Series([], dtype="datetime64[ns]")
+ ser = Series([], dtype="datetime64[ns]")
- res = s.quantile(0.5)
+ res = ser.quantile(0.5)
assert res is pd.NaT
- res = s.quantile([0.5])
- exp = Series([pd.NaT], index=[0.5])
+ res = ser.quantile([0.5])
+ exp = Series([pd.NaT], index=[0.5], dtype=ser.dtype)
tm.assert_series_equal(res, exp)
@pytest.mark.parametrize("dtype", [int, float, "Int64"])
| - [ ] closes #xxxx (Replace xxxx with the GitHub issue number)
- [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
| https://api.github.com/repos/pandas-dev/pandas/pulls/56165 | 2023-11-25T15:35:12Z | 2023-11-25T21:05:02Z | 2023-11-25T21:05:02Z | 2023-11-25T23:30:13Z |
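The diff in the row above repeatedly pins the expected index dtype to `"M8[ns]"` so the test assertions stay valid even if pandas' default datetime resolution changes. A minimal sketch of that pattern (not taken from the PR itself, just illustrating the construction it introduces):

```python
import pandas as pd

# Build an expected index with the nanosecond unit pinned explicitly,
# mirroring the pattern the diff above adds throughout the test suite.
expected = pd.DatetimeIndex(
    ["2011-01-01", "NaT", "2011-02-01"], dtype="M8[ns]", name="idx"
)
print(expected.dtype)  # datetime64[ns]
```

`"M8[ns]"` is NumPy shorthand for `datetime64[ns]`; spelling it out in the expected object makes the comparison unit-exact under `tm.assert_index_equal`.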
TST/CLN: Remove make_rand_series | diff --git a/pandas/_testing/__init__.py b/pandas/_testing/__init__.py
index 3a3c98f253fcc..a74fb2bf48bc4 100644
--- a/pandas/_testing/__init__.py
+++ b/pandas/_testing/__init__.py
@@ -456,23 +456,6 @@ def makePeriodIndex(k: int = 10, name=None, **kwargs) -> PeriodIndex:
return pi
-# make series
-def make_rand_series(name=None, dtype=np.float64) -> Series:
- index = makeStringIndex(_N)
- data = np.random.default_rng(2).standard_normal(_N)
- with np.errstate(invalid="ignore"):
- data = data.astype(dtype, copy=False)
- return Series(data, index=index, name=name)
-
-
-def makeFloatSeries(name=None) -> Series:
- return make_rand_series(name=name)
-
-
-def makeStringSeries(name=None) -> Series:
- return make_rand_series(name=name)
-
-
def makeObjectSeries(name=None) -> Series:
data = makeStringIndex(_N)
data = Index(data, dtype=object)
@@ -1073,16 +1056,13 @@ def shares_memory(left, right) -> bool:
"makeDataFrame",
"makeDateIndex",
"makeFloatIndex",
- "makeFloatSeries",
"makeIntIndex",
"makeMixedDataFrame",
"makeNumericIndex",
"makeObjectSeries",
"makePeriodIndex",
- "make_rand_series",
"makeRangeIndex",
"makeStringIndex",
- "makeStringSeries",
"makeTimeDataFrame",
"makeTimedeltaIndex",
"makeTimeSeries",
diff --git a/pandas/conftest.py b/pandas/conftest.py
index 4faf7faa6aa5d..350871c3085c1 100644
--- a/pandas/conftest.py
+++ b/pandas/conftest.py
@@ -729,9 +729,11 @@ def string_series() -> Series:
"""
Fixture for Series of floats with Index of unique strings
"""
- s = tm.makeStringSeries()
- s.name = "series"
- return s
+ return Series(
+ np.arange(30, dtype=np.float64) * 1.1,
+ index=Index([f"i_{i}" for i in range(30)], dtype=object),
+ name="series",
+ )
@pytest.fixture
@@ -776,7 +778,9 @@ def series_with_simple_index(index) -> Series:
_narrow_series = {
- f"{dtype.__name__}-series": tm.make_rand_series(name="a", dtype=dtype)
+ f"{dtype.__name__}-series": Series(
+ range(30), index=[f"i-{i}" for i in range(30)], name="a", dtype=dtype
+ )
for dtype in tm.NARROW_NP_DTYPES
}
diff --git a/pandas/tests/apply/test_invalid_arg.py b/pandas/tests/apply/test_invalid_arg.py
index 9f5157181843e..9f8611dd4b08b 100644
--- a/pandas/tests/apply/test_invalid_arg.py
+++ b/pandas/tests/apply/test_invalid_arg.py
@@ -338,11 +338,8 @@ def test_transform_wont_agg_series(string_series, func):
# we are trying to transform with an aggregator
msg = "Function did not transform"
- warn = RuntimeWarning if func[0] == "sqrt" else None
- warn_msg = "invalid value encountered in sqrt"
with pytest.raises(ValueError, match=msg):
- with tm.assert_produces_warning(warn, match=warn_msg, check_stacklevel=False):
- string_series.transform(func)
+ string_series.transform(func)
@pytest.mark.parametrize(
diff --git a/pandas/tests/base/test_misc.py b/pandas/tests/base/test_misc.py
index 15daca86b14ee..65e234e799353 100644
--- a/pandas/tests/base/test_misc.py
+++ b/pandas/tests/base/test_misc.py
@@ -132,7 +132,7 @@ def test_memory_usage_components_series(series_with_simple_index):
@pytest.mark.parametrize("dtype", tm.NARROW_NP_DTYPES)
def test_memory_usage_components_narrow_series(dtype):
- series = tm.make_rand_series(name="a", dtype=dtype)
+ series = Series(range(5), dtype=dtype, index=[f"i-{i}" for i in range(5)], name="a")
total_usage = series.memory_usage(index=True)
non_index_usage = series.memory_usage(index=False)
index_usage = series.index.memory_usage()
diff --git a/pandas/tests/dtypes/test_missing.py b/pandas/tests/dtypes/test_missing.py
index d008249db1a3f..88cec50c08aba 100644
--- a/pandas/tests/dtypes/test_missing.py
+++ b/pandas/tests/dtypes/test_missing.py
@@ -78,8 +78,6 @@ def test_notna_notnull(notna_f):
@pytest.mark.parametrize(
"ser",
[
- tm.makeFloatSeries(),
- tm.makeStringSeries(),
tm.makeObjectSeries(),
tm.makeTimeSeries(),
Series(range(5), period_range("2020-01-01", periods=5)),
diff --git a/pandas/tests/generic/test_generic.py b/pandas/tests/generic/test_generic.py
index 87beab04bc586..1f08b9d5c35b8 100644
--- a/pandas/tests/generic/test_generic.py
+++ b/pandas/tests/generic/test_generic.py
@@ -316,7 +316,11 @@ class TestNDFrame:
# tests that don't fit elsewhere
@pytest.mark.parametrize(
- "ser", [tm.makeFloatSeries(), tm.makeStringSeries(), tm.makeObjectSeries()]
+ "ser",
+ [
+ Series(range(10), dtype=np.float64),
+ Series([str(i) for i in range(10)], dtype=object),
+ ],
)
def test_squeeze_series_noop(self, ser):
# noop
@@ -360,14 +364,18 @@ def test_squeeze_axis_len_3(self):
tm.assert_frame_equal(df.squeeze(axis=0), df)
def test_numpy_squeeze(self):
- s = tm.makeFloatSeries()
+ s = Series(range(2), dtype=np.float64)
tm.assert_series_equal(np.squeeze(s), s)
df = tm.makeTimeDataFrame().reindex(columns=["A"])
tm.assert_series_equal(np.squeeze(df), df["A"])
@pytest.mark.parametrize(
- "ser", [tm.makeFloatSeries(), tm.makeStringSeries(), tm.makeObjectSeries()]
+ "ser",
+ [
+ Series(range(10), dtype=np.float64),
+ Series([str(i) for i in range(10)], dtype=object),
+ ],
)
def test_transpose_series(self, ser):
# calls implementation in pandas/core/base.py
@@ -393,7 +401,11 @@ def test_numpy_transpose(self, frame_or_series):
np.transpose(obj, axes=1)
@pytest.mark.parametrize(
- "ser", [tm.makeFloatSeries(), tm.makeStringSeries(), tm.makeObjectSeries()]
+ "ser",
+ [
+ Series(range(10), dtype=np.float64),
+ Series([str(i) for i in range(10)], dtype=object),
+ ],
)
def test_take_series(self, ser):
indices = [1, 5, -2, 6, 3, -1]
diff --git a/pandas/tests/io/pytables/test_append.py b/pandas/tests/io/pytables/test_append.py
index dace8435595ee..50cf7d737eb99 100644
--- a/pandas/tests/io/pytables/test_append.py
+++ b/pandas/tests/io/pytables/test_append.py
@@ -99,7 +99,7 @@ def test_append(setup_path):
def test_append_series(setup_path):
with ensure_clean_store(setup_path) as store:
# basic
- ss = tm.makeStringSeries()
+ ss = Series(range(20), dtype=np.float64, index=[f"i_{i}" for i in range(20)])
ts = tm.makeTimeSeries()
ns = Series(np.arange(100))
diff --git a/pandas/tests/io/pytables/test_keys.py b/pandas/tests/io/pytables/test_keys.py
index 0dcc9f7f1b9c2..fd7df29595090 100644
--- a/pandas/tests/io/pytables/test_keys.py
+++ b/pandas/tests/io/pytables/test_keys.py
@@ -3,6 +3,7 @@
from pandas import (
DataFrame,
HDFStore,
+ Series,
_testing as tm,
)
from pandas.tests.io.pytables.common import (
@@ -16,7 +17,9 @@
def test_keys(setup_path):
with ensure_clean_store(setup_path) as store:
store["a"] = tm.makeTimeSeries()
- store["b"] = tm.makeStringSeries()
+ store["b"] = Series(
+ range(10), dtype="float64", index=[f"i_{i}" for i in range(10)]
+ )
store["c"] = tm.makeDataFrame()
assert len(store) == 3
diff --git a/pandas/tests/io/pytables/test_read.py b/pandas/tests/io/pytables/test_read.py
index 32af61de05ee4..2030b1eca3203 100644
--- a/pandas/tests/io/pytables/test_read.py
+++ b/pandas/tests/io/pytables/test_read.py
@@ -356,7 +356,7 @@ def test_read_hdf_series_mode_r(tmp_path, format, setup_path):
# GH 16583
# Tests that reading a Series saved to an HDF file
# still works if a mode='r' argument is supplied
- series = tm.makeFloatSeries()
+ series = Series(range(10), dtype=np.float64)
path = tmp_path / setup_path
series.to_hdf(path, key="data", format=format)
result = read_hdf(path, key="data", mode="r")
diff --git a/pandas/tests/io/pytables/test_round_trip.py b/pandas/tests/io/pytables/test_round_trip.py
index 4f908f28cb5e9..6c24843f18d0d 100644
--- a/pandas/tests/io/pytables/test_round_trip.py
+++ b/pandas/tests/io/pytables/test_round_trip.py
@@ -36,7 +36,7 @@ def roundtrip(key, obj, **kwargs):
o = tm.makeTimeSeries()
tm.assert_series_equal(o, roundtrip("series", o))
- o = tm.makeStringSeries()
+ o = Series(range(10), dtype="float64", index=[f"i_{i}" for i in range(10)])
tm.assert_series_equal(o, roundtrip("string_series", o))
o = tm.makeDataFrame()
@@ -249,7 +249,7 @@ def test_table_values_dtypes_roundtrip(setup_path):
@pytest.mark.filterwarnings("ignore::pandas.errors.PerformanceWarning")
def test_series(setup_path):
- s = tm.makeStringSeries()
+ s = Series(range(10), dtype="float64", index=[f"i_{i}" for i in range(10)])
_check_roundtrip(s, tm.assert_series_equal, path=setup_path)
ts = tm.makeTimeSeries()
diff --git a/pandas/tests/io/pytables/test_store.py b/pandas/tests/io/pytables/test_store.py
index 4624a48df18e3..96c160ab40bd8 100644
--- a/pandas/tests/io/pytables/test_store.py
+++ b/pandas/tests/io/pytables/test_store.py
@@ -103,7 +103,9 @@ def test_repr(setup_path):
repr(store)
store.info()
store["a"] = tm.makeTimeSeries()
- store["b"] = tm.makeStringSeries()
+ store["b"] = Series(
+ range(10), dtype="float64", index=[f"i_{i}" for i in range(10)]
+ )
store["c"] = tm.makeDataFrame()
df = tm.makeDataFrame()
diff --git a/pandas/tests/plotting/test_series.py b/pandas/tests/plotting/test_series.py
index 7d543d29c034d..c8b47666e1b4a 100644
--- a/pandas/tests/plotting/test_series.py
+++ b/pandas/tests/plotting/test_series.py
@@ -43,7 +43,9 @@ def ts():
@pytest.fixture
def series():
- return tm.makeStringSeries(name="series")
+ return Series(
+ range(20), dtype=np.float64, name="series", index=[f"i_{i}" for i in range(20)]
+ )
class TestSeriesPlots:
diff --git a/pandas/tests/reductions/test_reductions.py b/pandas/tests/reductions/test_reductions.py
index 5a6d1af6257bb..9c4ae92224148 100644
--- a/pandas/tests/reductions/test_reductions.py
+++ b/pandas/tests/reductions/test_reductions.py
@@ -868,7 +868,7 @@ def test_idxmin_dt64index(self, unit):
def test_idxmin(self):
# test idxmin
# _check_stat_op approach can not be used here because of isna check.
- string_series = tm.makeStringSeries().rename("series")
+ string_series = Series(range(20), dtype=np.float64, name="series")
# add some NaNs
string_series[5:15] = np.nan
@@ -901,7 +901,7 @@ def test_idxmin(self):
def test_idxmax(self):
# test idxmax
# _check_stat_op approach can not be used here because of isna check.
- string_series = tm.makeStringSeries().rename("series")
+ string_series = Series(range(20), dtype=np.float64, name="series")
# add some NaNs
string_series[5:15] = np.nan
diff --git a/pandas/tests/reductions/test_stat_reductions.py b/pandas/tests/reductions/test_stat_reductions.py
index 74e521ab71f41..81f560caff3fa 100644
--- a/pandas/tests/reductions/test_stat_reductions.py
+++ b/pandas/tests/reductions/test_stat_reductions.py
@@ -154,15 +154,15 @@ def _check_stat_op(
f(string_series_, numeric_only=True)
def test_sum(self):
- string_series = tm.makeStringSeries().rename("series")
+ string_series = Series(range(20), dtype=np.float64, name="series")
self._check_stat_op("sum", np.sum, string_series, check_allna=False)
def test_mean(self):
- string_series = tm.makeStringSeries().rename("series")
+ string_series = Series(range(20), dtype=np.float64, name="series")
self._check_stat_op("mean", np.mean, string_series)
def test_median(self):
- string_series = tm.makeStringSeries().rename("series")
+ string_series = Series(range(20), dtype=np.float64, name="series")
self._check_stat_op("median", np.median, string_series)
# test with integers, test failure
@@ -170,19 +170,19 @@ def test_median(self):
tm.assert_almost_equal(np.median(int_ts), int_ts.median())
def test_prod(self):
- string_series = tm.makeStringSeries().rename("series")
+ string_series = Series(range(20), dtype=np.float64, name="series")
self._check_stat_op("prod", np.prod, string_series)
def test_min(self):
- string_series = tm.makeStringSeries().rename("series")
+ string_series = Series(range(20), dtype=np.float64, name="series")
self._check_stat_op("min", np.min, string_series, check_objects=True)
def test_max(self):
- string_series = tm.makeStringSeries().rename("series")
+ string_series = Series(range(20), dtype=np.float64, name="series")
self._check_stat_op("max", np.max, string_series, check_objects=True)
def test_var_std(self):
- string_series = tm.makeStringSeries().rename("series")
+ string_series = Series(range(20), dtype=np.float64, name="series")
datetime_series = tm.makeTimeSeries().rename("ts")
alt = lambda x: np.std(x, ddof=1)
@@ -208,7 +208,7 @@ def test_var_std(self):
assert pd.isna(result)
def test_sem(self):
- string_series = tm.makeStringSeries().rename("series")
+ string_series = Series(range(20), dtype=np.float64, name="series")
datetime_series = tm.makeTimeSeries().rename("ts")
alt = lambda x: np.std(x, ddof=1) / np.sqrt(len(x))
@@ -228,7 +228,7 @@ def test_sem(self):
def test_skew(self):
sp_stats = pytest.importorskip("scipy.stats")
- string_series = tm.makeStringSeries().rename("series")
+ string_series = Series(range(20), dtype=np.float64, name="series")
alt = lambda x: sp_stats.skew(x, bias=False)
self._check_stat_op("skew", alt, string_series)
@@ -250,7 +250,7 @@ def test_skew(self):
def test_kurt(self):
sp_stats = pytest.importorskip("scipy.stats")
- string_series = tm.makeStringSeries().rename("series")
+ string_series = Series(range(20), dtype=np.float64, name="series")
alt = lambda x: sp_stats.kurtosis(x, bias=False)
self._check_stat_op("kurt", alt, string_series)
diff --git a/pandas/tests/series/test_arithmetic.py b/pandas/tests/series/test_arithmetic.py
index 6471cd71f0860..f38a29bfe7e88 100644
--- a/pandas/tests/series/test_arithmetic.py
+++ b/pandas/tests/series/test_arithmetic.py
@@ -47,7 +47,11 @@ class TestSeriesFlexArithmetic:
(lambda x: x, lambda x: x * 2, False),
(lambda x: x, lambda x: x[::2], False),
(lambda x: x, lambda x: 5, True),
- (lambda x: tm.makeFloatSeries(), lambda x: tm.makeFloatSeries(), True),
+ (
+ lambda x: Series(range(10), dtype=np.float64),
+ lambda x: Series(range(10), dtype=np.float64),
+ True,
+ ),
],
)
@pytest.mark.parametrize(
diff --git a/pandas/tests/series/test_unary.py b/pandas/tests/series/test_unary.py
index ad0e344fa4420..8f153788e413c 100644
--- a/pandas/tests/series/test_unary.py
+++ b/pandas/tests/series/test_unary.py
@@ -8,13 +8,11 @@ class TestSeriesUnaryOps:
# __neg__, __pos__, __invert__
def test_neg(self):
- ser = tm.makeStringSeries()
- ser.name = "series"
+ ser = Series(range(5), dtype="float64", name="series")
tm.assert_series_equal(-ser, -1 * ser)
def test_invert(self):
- ser = tm.makeStringSeries()
- ser.name = "series"
+ ser = Series(range(5), dtype="float64", name="series")
tm.assert_series_equal(-(ser < 0), ~(ser < 0))
@pytest.mark.parametrize(
| null | https://api.github.com/repos/pandas-dev/pandas/pulls/56163 | 2023-11-25T02:13:44Z | 2023-11-26T18:35:26Z | 2023-11-26T18:35:26Z | 2023-11-26T18:35:31Z |
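The PR in the row above removes the `tm.makeStringSeries` / `tm.makeFloatSeries` helpers and inlines an explicit, deterministic construction at each call site. A hedged sketch of the replacement pattern (the exact lengths and index labels vary per test in the diff):

```python
import numpy as np
import pandas as pd

# Inline equivalent of the removed helper: a float64 Series with a
# unique string index and a fixed name, as the diff constructs per test.
ser = pd.Series(
    range(20), dtype=np.float64, index=[f"i_{i}" for i in range(20)], name="series"
)
print(ser.dtype, ser.index[0])  # float64 i_0
```

Using `range(...)` instead of random data makes the fixtures deterministic, which is presumably why the sqrt `RuntimeWarning` handling in `test_transform_wont_agg_series` could be dropped: the values are now non-negative.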
DOC: Add note about CoW change to copy keyword docs | diff --git a/pandas/core/frame.py b/pandas/core/frame.py
index 5d05983529fba..b86e9e08f2cce 100644
--- a/pandas/core/frame.py
+++ b/pandas/core/frame.py
@@ -369,6 +369,18 @@
values must not be None.
copy : bool, default True
If False, avoid copy if possible.
+
+ .. note::
+ The `copy` keyword will change behavior in pandas 3.0.
+ `Copy-on-Write
+ <https://pandas.pydata.org/docs/dev/user_guide/copy_on_write.html>`__
+ will be enabled by default, which means that all methods with a
+ `copy` keyword will use a lazy copy mechanism to defer the copy and
+ ignore the `copy` keyword. The `copy` keyword will be removed in a
+ future version of pandas.
+
+ You can already get the future behavior and improvements through
+ enabling copy on write ``pd.options.mode.copy_on_write = True``
indicator : bool or str, default False
If True, adds a column to the output DataFrame called "_merge" with
information on the source of each row. The column can be given a different
@@ -3728,6 +3740,18 @@ def transpose(self, *args, copy: bool = False) -> DataFrame:
Note that a copy is always required for mixed dtype DataFrames,
or for DataFrames with any extension types.
+ .. note::
+ The `copy` keyword will change behavior in pandas 3.0.
+ `Copy-on-Write
+ <https://pandas.pydata.org/docs/dev/user_guide/copy_on_write.html>`__
+ will be enabled by default, which means that all methods with a
+ `copy` keyword will use a lazy copy mechanism to defer the copy and
+ ignore the `copy` keyword. The `copy` keyword will be removed in a
+ future version of pandas.
+
+ You can already get the future behavior and improvements through
+ enabling copy on write ``pd.options.mode.copy_on_write = True``
+
Returns
-------
DataFrame
@@ -5577,6 +5601,18 @@ def rename(
('index', 'columns') or number (0, 1). The default is 'index'.
copy : bool, default True
Also copy underlying data.
+
+ .. note::
+ The `copy` keyword will change behavior in pandas 3.0.
+ `Copy-on-Write
+ <https://pandas.pydata.org/docs/dev/user_guide/copy_on_write.html>`__
+ will be enabled by default, which means that all methods with a
+ `copy` keyword will use a lazy copy mechanism to defer the copy and
+ ignore the `copy` keyword. The `copy` keyword will be removed in a
+ future version of pandas.
+
+ You can already get the future behavior and improvements through
+ enabling copy on write ``pd.options.mode.copy_on_write = True``
inplace : bool, default False
Whether to modify the DataFrame rather than creating a new one.
If True then value of copy is ignored.
@@ -12095,6 +12131,18 @@ def to_timestamp(
copy : bool, default True
If False then underlying input data is not copied.
+ .. note::
+ The `copy` keyword will change behavior in pandas 3.0.
+ `Copy-on-Write
+ <https://pandas.pydata.org/docs/dev/user_guide/copy_on_write.html>`__
+ will be enabled by default, which means that all methods with a
+ `copy` keyword will use a lazy copy mechanism to defer the copy and
+ ignore the `copy` keyword. The `copy` keyword will be removed in a
+ future version of pandas.
+
+ You can already get the future behavior and improvements through
+ enabling copy on write ``pd.options.mode.copy_on_write = True``
+
Returns
-------
DataFrame
@@ -12161,6 +12209,18 @@ def to_period(
copy : bool, default True
If False then underlying input data is not copied.
+ .. note::
+ The `copy` keyword will change behavior in pandas 3.0.
+ `Copy-on-Write
+ <https://pandas.pydata.org/docs/dev/user_guide/copy_on_write.html>`__
+ will be enabled by default, which means that all methods with a
+ `copy` keyword will use a lazy copy mechanism to defer the copy and
+ ignore the `copy` keyword. The `copy` keyword will be removed in a
+ future version of pandas.
+
+ You can already get the future behavior and improvements through
+ enabling copy on write ``pd.options.mode.copy_on_write = True``
+
Returns
-------
DataFrame
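The docstring note added throughout this file points readers at the `mode.copy_on_write` option. A small sketch of opting in and of the lazy-copy behavior the note describes (the option exists in pandas 2.x; in pandas 3.x copy-on-write is always on and setting the option may fail, hence the guard):

```python
import pandas as pd

# Opt in to the pandas 3.0 behavior described in the docstring note above.
try:
    pd.options.mode.copy_on_write = True
except Exception:
    pass  # pandas 3.x: copy-on-write is always enabled

df = pd.DataFrame({"a": [1, 2, 3]})
renamed = df.rename(columns={"a": "b"})  # lazy copy; no eager data copy
renamed.loc[0, "b"] = 99                 # write triggers the copy
print(df.loc[0, "a"])  # 1 -- the original frame is unchanged
```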
diff --git a/pandas/core/generic.py b/pandas/core/generic.py
index e4e95a973a3c1..f20253cf12907 100644
--- a/pandas/core/generic.py
+++ b/pandas/core/generic.py
@@ -455,6 +455,18 @@ def set_flags(
----------
copy : bool, default False
Specify if a copy of the object should be made.
+
+ .. note::
+ The `copy` keyword will change behavior in pandas 3.0.
+ `Copy-on-Write
+ <https://pandas.pydata.org/docs/dev/user_guide/copy_on_write.html>`__
+ will be enabled by default, which means that all methods with a
+ `copy` keyword will use a lazy copy mechanism to defer the copy and
+ ignore the `copy` keyword. The `copy` keyword will be removed in a
+ future version of pandas.
+
+ You can already get the future behavior and improvements through
+ enabling copy on write ``pd.options.mode.copy_on_write = True``
allows_duplicate_labels : bool, optional
Whether the returned object allows duplicate labels.
@@ -741,7 +753,17 @@ def set_axis(
copy : bool, default True
Whether to make a copy of the underlying data.
- .. versionadded:: 1.5.0
+ .. note::
+ The `copy` keyword will change behavior in pandas 3.0.
+ `Copy-on-Write
+ <https://pandas.pydata.org/docs/dev/user_guide/copy_on_write.html>`__
+ will be enabled by default, which means that all methods with a
+ `copy` keyword will use a lazy copy mechanism to defer the copy and
+ ignore the `copy` keyword. The `copy` keyword will be removed in a
+ future version of pandas.
+
+ You can already get the future behavior and improvements through
+ enabling copy on write ``pd.options.mode.copy_on_write = True``
Returns
-------
@@ -1172,6 +1194,18 @@ def rename_axis(
The axis to rename. For `Series` this parameter is unused and defaults to 0.
copy : bool, default None
Also copy underlying data.
+
+ .. note::
+ The `copy` keyword will change behavior in pandas 3.0.
+ `Copy-on-Write
+ <https://pandas.pydata.org/docs/dev/user_guide/copy_on_write.html>`__
+ will be enabled by default, which means that all methods with a
+ `copy` keyword will use a lazy copy mechanism to defer the copy and
+ ignore the `copy` keyword. The `copy` keyword will be removed in a
+ future version of pandas.
+
+ You can already get the future behavior and improvements through
+ enabling copy on write ``pd.options.mode.copy_on_write = True``
inplace : bool, default False
Modifies the object directly, instead of creating a new Series
or DataFrame.
@@ -4597,6 +4631,18 @@ def reindex_like(
copy : bool, default True
Return a new object, even if the passed indexes are the same.
+
+ .. note::
+ The `copy` keyword will change behavior in pandas 3.0.
+ `Copy-on-Write
+ <https://pandas.pydata.org/docs/dev/user_guide/copy_on_write.html>`__
+ will be enabled by default, which means that all methods with a
+ `copy` keyword will use a lazy copy mechanism to defer the copy and
+ ignore the `copy` keyword. The `copy` keyword will be removed in a
+ future version of pandas.
+
+ You can already get the future behavior and improvements through
+ enabling copy on write ``pd.options.mode.copy_on_write = True``
limit : int, default None
Maximum number of consecutive labels to fill for inexact matches.
tolerance : optional
@@ -5343,6 +5389,18 @@ def reindex(
copy : bool, default True
Return a new object, even if the passed indexes are the same.
+
+ .. note::
+ The `copy` keyword will change behavior in pandas 3.0.
+ `Copy-on-Write
+ <https://pandas.pydata.org/docs/dev/user_guide/copy_on_write.html>`__
+ will be enabled by default, which means that all methods with a
+ `copy` keyword will use a lazy copy mechanism to defer the copy and
+ ignore the `copy` keyword. The `copy` keyword will be removed in a
+ future version of pandas.
+
+ You can already get the future behavior and improvements through
+ enabling copy on write ``pd.options.mode.copy_on_write = True``
level : int or name
Broadcast across a level, matching Index values on the
passed MultiIndex level.
@@ -6775,6 +6833,18 @@ def infer_objects(self, copy: bool_t | None = None) -> Self:
Whether to make a copy for non-object or non-inferable columns
or Series.
+ .. note::
+ The `copy` keyword will change behavior in pandas 3.0.
+ `Copy-on-Write
+ <https://pandas.pydata.org/docs/dev/user_guide/copy_on_write.html>`__
+ will be enabled by default, which means that all methods with a
+ `copy` keyword will use a lazy copy mechanism to defer the copy and
+ ignore the `copy` keyword. The `copy` keyword will be removed in a
+ future version of pandas.
+
+ You can already get the future behavior and improvements through
+ enabling copy on write ``pd.options.mode.copy_on_write = True``
+
Returns
-------
same type as input object
@@ -10077,6 +10147,18 @@ def align(
copy : bool, default True
Always returns new objects. If copy=False and no reindexing is
required then original objects are returned.
+
+ .. note::
+ The `copy` keyword will change behavior in pandas 3.0.
+ `Copy-on-Write
+ <https://pandas.pydata.org/docs/dev/user_guide/copy_on_write.html>`__
+ will be enabled by default, which means that all methods with a
+ `copy` keyword will use a lazy copy mechanism to defer the copy and
+ ignore the `copy` keyword. The `copy` keyword will be removed in a
+ future version of pandas.
+
+ You can already get the future behavior and improvements through
+ enabling copy on write ``pd.options.mode.copy_on_write = True``
fill_value : scalar, default np.nan
Value to use for missing values. Defaults to NaN, but can be any
"compatible" value.
@@ -11099,6 +11181,18 @@ def truncate(
copy : bool, default is True,
Return a copy of the truncated section.
+ .. note::
+ The `copy` keyword will change behavior in pandas 3.0.
+ `Copy-on-Write
+ <https://pandas.pydata.org/docs/dev/user_guide/copy_on_write.html>`__
+ will be enabled by default, which means that all methods with a
+ `copy` keyword will use a lazy copy mechanism to defer the copy and
+ ignore the `copy` keyword. The `copy` keyword will be removed in a
+ future version of pandas.
+
+ You can already get the future behavior and improvements through
+ enabling copy on write ``pd.options.mode.copy_on_write = True``
+
Returns
-------
type of caller
@@ -11255,6 +11349,18 @@ def tz_convert(
copy : bool, default True
Also make a copy of the underlying data.
+ .. note::
+ The `copy` keyword will change behavior in pandas 3.0.
+ `Copy-on-Write
+ <https://pandas.pydata.org/docs/dev/user_guide/copy_on_write.html>`__
+ will be enabled by default, which means that all methods with a
+ `copy` keyword will use a lazy copy mechanism to defer the copy and
+ ignore the `copy` keyword. The `copy` keyword will be removed in a
+ future version of pandas.
+
+ You can already get the future behavior and improvements through
+ enabling copy on write ``pd.options.mode.copy_on_write = True``
+
Returns
-------
{klass}
@@ -11344,6 +11450,18 @@ def tz_localize(
must be None.
copy : bool, default True
Also make a copy of the underlying data.
+
+ .. note::
+ The `copy` keyword will change behavior in pandas 3.0.
+ `Copy-on-Write
+ <https://pandas.pydata.org/docs/dev/user_guide/copy_on_write.html>`__
+ will be enabled by default, which means that all methods with a
+ `copy` keyword will use a lazy copy mechanism to defer the copy and
+ ignore the `copy` keyword. The `copy` keyword will be removed in a
+ future version of pandas.
+
+ You can already get the future behavior and improvements through
+ enabling copy on write ``pd.options.mode.copy_on_write = True``
ambiguous : 'infer', bool-ndarray, 'NaT', default 'raise'
When clocks moved backward due to DST, ambiguous times may arise.
For example in Central European Time (UTC+01), when going from
diff --git a/pandas/core/series.py b/pandas/core/series.py
index a9679f22f9933..1e54682498618 100644
--- a/pandas/core/series.py
+++ b/pandas/core/series.py
@@ -4295,7 +4295,19 @@ def nsmallest(
klass=_shared_doc_kwargs["klass"],
extra_params=dedent(
"""copy : bool, default True
- Whether to copy underlying data."""
+ Whether to copy underlying data.
+
+ .. note::
+ The `copy` keyword will change behavior in pandas 3.0.
+ `Copy-on-Write
+ <https://pandas.pydata.org/docs/dev/user_guide/copy_on_write.html>`__
+ will be enabled by default, which means that all methods with a
+ `copy` keyword will use a lazy copy mechanism to defer the copy and
+ ignore the `copy` keyword. The `copy` keyword will be removed in a
+ future version of pandas.
+
+ You can already get the future behavior and improvements through
+ enabling copy on write ``pd.options.mode.copy_on_write = True``"""
),
examples=dedent(
"""\
@@ -4948,6 +4960,18 @@ def rename(
Unused. Parameter needed for compatibility with DataFrame.
copy : bool, default True
Also copy underlying data.
+
+ .. note::
+ The `copy` keyword will change behavior in pandas 3.0.
+ `Copy-on-Write
+ <https://pandas.pydata.org/docs/dev/user_guide/copy_on_write.html>`__
+ will be enabled by default, which means that all methods with a
+ `copy` keyword will use a lazy copy mechanism to defer the copy and
+ ignore the `copy` keyword. The `copy` keyword will be removed in a
+ future version of pandas.
+
+ You can already get the future behavior and improvements through
+ enabling copy on write ``pd.options.mode.copy_on_write = True``
inplace : bool, default False
Whether to return a new Series. If True the value of copy is ignored.
level : int or level name, default None
@@ -5732,6 +5756,18 @@ def to_timestamp(
copy : bool, default True
Whether or not to return a copy.
+ .. note::
+ The `copy` keyword will change behavior in pandas 3.0.
+ `Copy-on-Write
+ <https://pandas.pydata.org/docs/dev/user_guide/copy_on_write.html>`__
+ will be enabled by default, which means that all methods with a
+ `copy` keyword will use a lazy copy mechanism to defer the copy and
+ ignore the `copy` keyword. The `copy` keyword will be removed in a
+ future version of pandas.
+
+ You can already get the future behavior and improvements through
+ enabling copy on write ``pd.options.mode.copy_on_write = True``
+
Returns
-------
Series with DatetimeIndex
@@ -5784,6 +5820,18 @@ def to_period(self, freq: str | None = None, copy: bool | None = None) -> Series
copy : bool, default True
Whether or not to return a copy.
+ .. note::
+ The `copy` keyword will change behavior in pandas 3.0.
+ `Copy-on-Write
+ <https://pandas.pydata.org/docs/dev/user_guide/copy_on_write.html>`__
+ will be enabled by default, which means that all methods with a
+ `copy` keyword will use a lazy copy mechanism to defer the copy and
+ ignore the `copy` keyword. The `copy` keyword will be removed in a
+ future version of pandas.
+
+ You can already get the future behavior and improvements through
+ enabling copy on write ``pd.options.mode.copy_on_write = True``
+
Returns
-------
Series
| - [ ] closes #xxxx (Replace xxxx with the GitHub issue number)
- [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
| https://api.github.com/repos/pandas-dev/pandas/pulls/56162 | 2023-11-24T23:57:12Z | 2023-12-02T00:14:21Z | 2023-12-02T00:14:21Z | 2023-12-02T00:17:49Z |
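The docstring notes added in the diff above all point at the same migration path: opting in to Copy-on-Write early via `pd.options.mode.copy_on_write`. A minimal sketch of what that opt-in changes, not taken from the PR itself; the option name is the real pandas option, and the `try`/`except` hedges against newer versions where Copy-on-Write is always on:

```python
import pandas as pd

try:
    # opt in to the pandas 3.0 behavior early (pandas 1.5-2.x)
    pd.options.mode.copy_on_write = True
except Exception:
    # newer pandas: Copy-on-Write is always enabled, option may be gone
    pass

ser = pd.Series([1, 2, 3])
renamed = ser.rename("x")  # under CoW: no eager copy, data is shared lazily
renamed.iloc[0] = 99       # the first write triggers the deferred copy
assert ser.iloc[0] == 1    # the original Series is never mutated
```

This is why the note says the `copy` keyword becomes irrelevant: the copy happens lazily on write regardless of what `copy=` was passed.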
DEPR: dtype inference in value_counts | diff --git a/doc/source/whatsnew/v2.2.0.rst b/doc/source/whatsnew/v2.2.0.rst
index 8893fe0ecd398..9318e1d9ffaff 100644
--- a/doc/source/whatsnew/v2.2.0.rst
+++ b/doc/source/whatsnew/v2.2.0.rst
@@ -396,6 +396,7 @@ Other Deprecations
- Deprecated the ``errors="ignore"`` option in :func:`to_datetime`, :func:`to_timedelta`, and :func:`to_numeric`; explicitly catch exceptions instead (:issue:`54467`)
- Deprecated the ``fastpath`` keyword in the :class:`Series` constructor (:issue:`20110`)
- Deprecated the ``ordinal`` keyword in :class:`PeriodIndex`, use :meth:`PeriodIndex.from_ordinals` instead (:issue:`55960`)
+- Deprecated the behavior of :meth:`Series.value_counts` and :meth:`Index.value_counts` with object dtype; in a future version these will not perform dtype inference on the resulting :class:`Index`, do ``result.index = result.index.infer_objects()`` to retain the old behavior (:issue:`56161`)
- Deprecated the extension test classes ``BaseNoReduceTests``, ``BaseBooleanReduceTests``, and ``BaseNumericReduceTests``, use ``BaseReduceTests`` instead (:issue:`54663`)
- Deprecated the option ``mode.data_manager`` and the ``ArrayManager``; only the ``BlockManager`` will be available in future versions (:issue:`55043`)
- Deprecated the previous implementation of :class:`DataFrame.stack`; specify ``future_stack=True`` to adopt the future version (:issue:`53515`)
diff --git a/pandas/core/algorithms.py b/pandas/core/algorithms.py
index b38a27a9f6d0a..82de8ae96160f 100644
--- a/pandas/core/algorithms.py
+++ b/pandas/core/algorithms.py
@@ -932,6 +932,16 @@ def value_counts_internal(
idx = Index(keys)
if idx.dtype == bool and keys.dtype == object:
idx = idx.astype(object)
+ elif idx.dtype != keys.dtype:
+ warnings.warn(
+ # GH#56161
+ "The behavior of value_counts with object-dtype is deprecated. "
+ "In a future version, this will *not* perform dtype inference "
+ "on the resulting index. To retain the old behavior, use "
+ "`result.index = result.index.infer_objects()`",
+ FutureWarning,
+ stacklevel=find_stack_level(),
+ )
idx.name = index_name
result = Series(counts, index=idx, name=name, copy=False)
@@ -1712,8 +1722,16 @@ def union_with_duplicates(
"""
from pandas import Series
- l_count = value_counts_internal(lvals, dropna=False)
- r_count = value_counts_internal(rvals, dropna=False)
+ with warnings.catch_warnings():
+ # filter warning from object dtype inference; we will end up discarding
+ # the index here, so the deprecation does not affect the end result here.
+ warnings.filterwarnings(
+ "ignore",
+ "The behavior of value_counts with object-dtype is deprecated",
+ category=FutureWarning,
+ )
+ l_count = value_counts_internal(lvals, dropna=False)
+ r_count = value_counts_internal(rvals, dropna=False)
l_count, r_count = l_count.align(r_count, fill_value=0)
final_count = np.maximum(l_count.values, r_count.values)
final_count = Series(final_count, index=l_count.index, dtype="int", copy=False)
diff --git a/pandas/core/arrays/interval.py b/pandas/core/arrays/interval.py
index 2f1363f180d08..126484ed4a2a0 100644
--- a/pandas/core/arrays/interval.py
+++ b/pandas/core/arrays/interval.py
@@ -12,6 +12,7 @@
Union,
overload,
)
+import warnings
import numpy as np
@@ -1226,7 +1227,16 @@ def value_counts(self, dropna: bool = True) -> Series:
Series.value_counts
"""
        # TODO: implement this in a non-naive way!
- return value_counts(np.asarray(self), dropna=dropna)
+ with warnings.catch_warnings():
+ warnings.filterwarnings(
+ "ignore",
+ "The behavior of value_counts with object-dtype is deprecated",
+ category=FutureWarning,
+ )
+ result = value_counts(np.asarray(self), dropna=dropna)
+ # Once the deprecation is enforced, we will need to do
+ # `result.index = result.index.astype(self.dtype)`
+ return result
# ---------------------------------------------------------------------
# Rendering Methods
diff --git a/pandas/tests/base/test_value_counts.py b/pandas/tests/base/test_value_counts.py
index c42d064c476bb..75915b7c67548 100644
--- a/pandas/tests/base/test_value_counts.py
+++ b/pandas/tests/base/test_value_counts.py
@@ -336,3 +336,16 @@ def test_value_counts_with_nan(dropna, index_or_series):
else:
expected = Series([1, 1, 1], index=[True, pd.NA, np.nan], name="count")
tm.assert_series_equal(res, expected)
+
+
+def test_value_counts_object_inference_deprecated():
+ # GH#56161
+ dti = pd.date_range("2016-01-01", periods=3, tz="UTC")
+
+ idx = dti.astype(object)
+ msg = "The behavior of value_counts with object-dtype is deprecated"
+ with tm.assert_produces_warning(FutureWarning, match=msg):
+ res = idx.value_counts()
+
+ exp = dti.value_counts()
+ tm.assert_series_equal(res, exp)
| - [ ] closes #xxxx (Replace xxxx with the GitHub issue number)
- [x] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [x] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
Surfaced while implementing #55564 | https://api.github.com/repos/pandas-dev/pandas/pulls/56161 | 2023-11-24T23:56:41Z | 2023-11-26T04:13:39Z | 2023-11-26T04:13:39Z | 2023-11-26T05:03:10Z |
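The deprecation above is easiest to see on an object-dtype Series holding values that `Index` construction would infer to a narrower dtype. A small sketch (on pandas 2.2 the `value_counts` call emits the FutureWarning added in this diff; the suggested `infer_objects()` line keeps the historical index dtype on either side of the change):

```python
import numpy as np
import pandas as pd

# object dtype holding plain Python ints
ser = pd.Series(np.array([1, 2, 2], dtype=object))
res = ser.value_counts()  # pandas 2.2: FutureWarning about index inference
# retain the historical (inferred) index dtype once inference is removed:
res.index = res.index.infer_objects()
assert res.loc[2] == 2  # the value 2 appears twice
```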
BUG: reset_index not preserving object dtype for string option | diff --git a/doc/source/whatsnew/v2.1.4.rst b/doc/source/whatsnew/v2.1.4.rst
index 723c33280a679..1c412701ae5e9 100644
--- a/doc/source/whatsnew/v2.1.4.rst
+++ b/doc/source/whatsnew/v2.1.4.rst
@@ -30,6 +30,7 @@ Bug fixes
- Fixed bug in :meth:`DataFrame.to_hdf` raising when columns have ``StringDtype`` (:issue:`55088`)
- Fixed bug in :meth:`Index.insert` casting object-dtype to PyArrow backed strings when ``infer_string`` option is set (:issue:`55638`)
- Fixed bug in :meth:`Series.mode` not keeping object dtype when ``infer_string`` is set (:issue:`56183`)
+- Fixed bug in :meth:`Series.reset_index` not preserving object dtype when ``infer_string`` is set (:issue:`56160`)
- Fixed bug in :meth:`Series.str.split` and :meth:`Series.str.rsplit` when ``pat=None`` for :class:`ArrowDtype` with ``pyarrow.string`` (:issue:`56271`)
- Fixed bug in :meth:`Series.str.translate` losing object dtype when string option is set (:issue:`56152`)
diff --git a/pandas/core/series.py b/pandas/core/series.py
index 464e066b4e86a..f884e61fac27b 100644
--- a/pandas/core/series.py
+++ b/pandas/core/series.py
@@ -1719,7 +1719,7 @@ def reset_index(
return new_ser.__finalize__(self, method="reset_index")
else:
return self._constructor(
- self._values.copy(), index=new_index, copy=False
+ self._values.copy(), index=new_index, copy=False, dtype=self.dtype
).__finalize__(self, method="reset_index")
elif inplace:
raise TypeError(
diff --git a/pandas/tests/series/methods/test_reset_index.py b/pandas/tests/series/methods/test_reset_index.py
index 634b8699a89e6..48e2608a1032a 100644
--- a/pandas/tests/series/methods/test_reset_index.py
+++ b/pandas/tests/series/methods/test_reset_index.py
@@ -11,6 +11,7 @@
RangeIndex,
Series,
date_range,
+ option_context,
)
import pandas._testing as tm
@@ -167,6 +168,14 @@ def test_reset_index_inplace_and_drop_ignore_name(self):
expected = Series(range(2), name="old")
tm.assert_series_equal(ser, expected)
+ def test_reset_index_drop_infer_string(self):
+ # GH#56160
+ pytest.importorskip("pyarrow")
+ ser = Series(["a", "b", "c"], dtype=object)
+ with option_context("future.infer_string", True):
+ result = ser.reset_index(drop=True)
+ tm.assert_series_equal(result, ser)
+
@pytest.mark.parametrize(
"array, dtype",
| - [ ] closes #xxxx (Replace xxxx with the GitHub issue number)
- [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
| https://api.github.com/repos/pandas-dev/pandas/pulls/56160 | 2023-11-24T23:47:10Z | 2023-12-07T20:05:12Z | 2023-12-07T20:05:12Z | 2023-12-07T21:35:46Z |
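The one-line fix above threads `dtype=self.dtype` through the constructor so that an explicitly object-dtype Series keeps that dtype across `reset_index(drop=True)`. A sketch of the invariant without setting the option (enabling `future.infer_string`, as the added test does, additionally requires pyarrow):

```python
import pandas as pd

ser = pd.Series(["a", "b", "c"], dtype=object)
out = ser.reset_index(drop=True)
# with the fix, the stored dtype round-trips instead of being
# re-inferred (to a string dtype under pd.options.future.infer_string)
assert out.dtype == object
assert list(out) == ["a", "b", "c"]
```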
Adjust tests in strings folder for new string option | diff --git a/pandas/core/strings/accessor.py b/pandas/core/strings/accessor.py
index 9fa6e9973291d..75866c6f6013a 100644
--- a/pandas/core/strings/accessor.py
+++ b/pandas/core/strings/accessor.py
@@ -44,6 +44,7 @@
)
from pandas.core.dtypes.missing import isna
+from pandas.core.arrays import ExtensionArray
from pandas.core.base import NoNewAttributesMixin
from pandas.core.construction import extract_array
@@ -456,7 +457,7 @@ def _get_series_list(self, others):
# in case of list-like `others`, all elements must be
# either Series/Index/np.ndarray (1-dim)...
if all(
- isinstance(x, (ABCSeries, ABCIndex))
+ isinstance(x, (ABCSeries, ABCIndex, ExtensionArray))
or (isinstance(x, np.ndarray) and x.ndim == 1)
for x in others
):
@@ -690,12 +691,15 @@ def cat(
out: Index | Series
if isinstance(self._orig, ABCIndex):
# add dtype for case that result is all-NA
+ dtype = None
+ if isna(result).all():
+ dtype = object
- out = Index(result, dtype=object, name=self._orig.name)
+ out = Index(result, dtype=dtype, name=self._orig.name)
else: # Series
if isinstance(self._orig.dtype, CategoricalDtype):
# We need to infer the new categories.
- dtype = None
+ dtype = self._orig.dtype.categories.dtype # type: ignore[assignment]
else:
dtype = self._orig.dtype
res_ser = Series(
@@ -914,7 +918,13 @@ def split(
if is_re(pat):
regex = True
result = self._data.array._str_split(pat, n, expand, regex)
- return self._wrap_result(result, returns_string=expand, expand=expand)
+ if self._data.dtype == "category":
+ dtype = self._data.dtype.categories.dtype
+ else:
+ dtype = object if self._data.dtype == object else None
+ return self._wrap_result(
+ result, expand=expand, returns_string=expand, dtype=dtype
+ )
@Appender(
_shared_docs["str_split"]
@@ -932,7 +942,10 @@ def split(
@forbid_nonstring_types(["bytes"])
def rsplit(self, pat=None, *, n=-1, expand: bool = False):
result = self._data.array._str_rsplit(pat, n=n)
- return self._wrap_result(result, expand=expand, returns_string=expand)
+ dtype = object if self._data.dtype == object else None
+ return self._wrap_result(
+ result, expand=expand, returns_string=expand, dtype=dtype
+ )
_shared_docs[
"str_partition"
@@ -1028,7 +1041,13 @@ def rsplit(self, pat=None, *, n=-1, expand: bool = False):
@forbid_nonstring_types(["bytes"])
def partition(self, sep: str = " ", expand: bool = True):
result = self._data.array._str_partition(sep, expand)
- return self._wrap_result(result, expand=expand, returns_string=expand)
+ if self._data.dtype == "category":
+ dtype = self._data.dtype.categories.dtype
+ else:
+ dtype = object if self._data.dtype == object else None
+ return self._wrap_result(
+ result, expand=expand, returns_string=expand, dtype=dtype
+ )
@Appender(
_shared_docs["str_partition"]
@@ -1042,7 +1061,13 @@ def partition(self, sep: str = " ", expand: bool = True):
@forbid_nonstring_types(["bytes"])
def rpartition(self, sep: str = " ", expand: bool = True):
result = self._data.array._str_rpartition(sep, expand)
- return self._wrap_result(result, expand=expand, returns_string=expand)
+ if self._data.dtype == "category":
+ dtype = self._data.dtype.categories.dtype
+ else:
+ dtype = object if self._data.dtype == object else None
+ return self._wrap_result(
+ result, expand=expand, returns_string=expand, dtype=dtype
+ )
def get(self, i):
"""
@@ -2748,7 +2773,7 @@ def extract(
else:
name = _get_single_group_name(regex)
result = self._data.array._str_extract(pat, flags=flags, expand=returns_df)
- return self._wrap_result(result, name=name)
+ return self._wrap_result(result, name=name, dtype=result_dtype)
@forbid_nonstring_types(["bytes"])
def extractall(self, pat, flags: int = 0) -> DataFrame:
@@ -3488,7 +3513,7 @@ def str_extractall(arr, pat, flags: int = 0) -> DataFrame:
raise ValueError("pattern contains no capture groups")
if isinstance(arr, ABCIndex):
- arr = arr.to_series().reset_index(drop=True)
+ arr = arr.to_series().reset_index(drop=True).astype(arr.dtype)
columns = _get_group_names(regex)
match_list = []
diff --git a/pandas/tests/strings/test_api.py b/pandas/tests/strings/test_api.py
index 2914b22a52e94..31e005466af7b 100644
--- a/pandas/tests/strings/test_api.py
+++ b/pandas/tests/strings/test_api.py
@@ -2,11 +2,13 @@
import pytest
from pandas import (
+ CategoricalDtype,
DataFrame,
Index,
MultiIndex,
Series,
_testing as tm,
+ option_context,
)
from pandas.core.strings.accessor import StringMethods
@@ -162,7 +164,8 @@ def test_api_per_method(
if inferred_dtype in allowed_types:
# xref GH 23555, GH 23556
- method(*args, **kwargs) # works!
+ with option_context("future.no_silent_downcasting", True):
+ method(*args, **kwargs) # works!
else:
# GH 23011, GH 23163
msg = (
@@ -178,6 +181,7 @@ def test_api_for_categorical(any_string_method, any_string_dtype):
s = Series(list("aabb"), dtype=any_string_dtype)
s = s + " " + s
c = s.astype("category")
+ c = c.astype(CategoricalDtype(c.dtype.categories.astype("object")))
assert isinstance(c.str, StringMethods)
method_name, args, kwargs = any_string_method
diff --git a/pandas/tests/strings/test_case_justify.py b/pandas/tests/strings/test_case_justify.py
index 1dee25e631648..41aedae90ca76 100644
--- a/pandas/tests/strings/test_case_justify.py
+++ b/pandas/tests/strings/test_case_justify.py
@@ -21,7 +21,8 @@ def test_title_mixed_object():
s = Series(["FOO", np.nan, "bar", True, datetime.today(), "blah", None, 1, 2.0])
result = s.str.title()
expected = Series(
- ["Foo", np.nan, "Bar", np.nan, np.nan, "Blah", None, np.nan, np.nan]
+ ["Foo", np.nan, "Bar", np.nan, np.nan, "Blah", None, np.nan, np.nan],
+ dtype=object,
)
tm.assert_almost_equal(result, expected)
@@ -41,11 +42,15 @@ def test_lower_upper_mixed_object():
s = Series(["a", np.nan, "b", True, datetime.today(), "foo", None, 1, 2.0])
result = s.str.upper()
- expected = Series(["A", np.nan, "B", np.nan, np.nan, "FOO", None, np.nan, np.nan])
+ expected = Series(
+ ["A", np.nan, "B", np.nan, np.nan, "FOO", None, np.nan, np.nan], dtype=object
+ )
tm.assert_series_equal(result, expected)
result = s.str.lower()
- expected = Series(["a", np.nan, "b", np.nan, np.nan, "foo", None, np.nan, np.nan])
+ expected = Series(
+ ["a", np.nan, "b", np.nan, np.nan, "foo", None, np.nan, np.nan], dtype=object
+ )
tm.assert_series_equal(result, expected)
@@ -71,7 +76,8 @@ def test_capitalize_mixed_object():
s = Series(["FOO", np.nan, "bar", True, datetime.today(), "blah", None, 1, 2.0])
result = s.str.capitalize()
expected = Series(
- ["Foo", np.nan, "Bar", np.nan, np.nan, "Blah", None, np.nan, np.nan]
+ ["Foo", np.nan, "Bar", np.nan, np.nan, "Blah", None, np.nan, np.nan],
+ dtype=object,
)
tm.assert_series_equal(result, expected)
@@ -87,7 +93,8 @@ def test_swapcase_mixed_object():
s = Series(["FOO", np.nan, "bar", True, datetime.today(), "Blah", None, 1, 2.0])
result = s.str.swapcase()
expected = Series(
- ["foo", np.nan, "BAR", np.nan, np.nan, "bLAH", None, np.nan, np.nan]
+ ["foo", np.nan, "BAR", np.nan, np.nan, "bLAH", None, np.nan, np.nan],
+ dtype=object,
)
tm.assert_series_equal(result, expected)
@@ -138,19 +145,22 @@ def test_pad_mixed_object():
result = s.str.pad(5, side="left")
expected = Series(
- [" a", np.nan, " b", np.nan, np.nan, " ee", None, np.nan, np.nan]
+ [" a", np.nan, " b", np.nan, np.nan, " ee", None, np.nan, np.nan],
+ dtype=object,
)
tm.assert_series_equal(result, expected)
result = s.str.pad(5, side="right")
expected = Series(
- ["a ", np.nan, "b ", np.nan, np.nan, "ee ", None, np.nan, np.nan]
+ ["a ", np.nan, "b ", np.nan, np.nan, "ee ", None, np.nan, np.nan],
+ dtype=object,
)
tm.assert_series_equal(result, expected)
result = s.str.pad(5, side="both")
expected = Series(
- [" a ", np.nan, " b ", np.nan, np.nan, " ee ", None, np.nan, np.nan]
+ [" a ", np.nan, " b ", np.nan, np.nan, " ee ", None, np.nan, np.nan],
+ dtype=object,
)
tm.assert_series_equal(result, expected)
@@ -238,7 +248,8 @@ def test_center_ljust_rjust_mixed_object():
None,
np.nan,
np.nan,
- ]
+ ],
+ dtype=object,
)
tm.assert_series_equal(result, expected)
@@ -255,7 +266,8 @@ def test_center_ljust_rjust_mixed_object():
None,
np.nan,
np.nan,
- ]
+ ],
+ dtype=object,
)
tm.assert_series_equal(result, expected)
@@ -272,7 +284,8 @@ def test_center_ljust_rjust_mixed_object():
None,
np.nan,
np.nan,
- ]
+ ],
+ dtype=object,
)
tm.assert_series_equal(result, expected)
diff --git a/pandas/tests/strings/test_cat.py b/pandas/tests/strings/test_cat.py
index 3e620b7664335..284932491a65e 100644
--- a/pandas/tests/strings/test_cat.py
+++ b/pandas/tests/strings/test_cat.py
@@ -3,6 +3,8 @@
import numpy as np
import pytest
+import pandas.util._test_decorators as td
+
from pandas import (
DataFrame,
Index,
@@ -10,6 +12,7 @@
Series,
_testing as tm,
concat,
+ option_context,
)
@@ -26,45 +29,49 @@ def test_str_cat_name(index_or_series, other):
assert result.name == "name"
-def test_str_cat(index_or_series):
- box = index_or_series
- # test_cat above tests "str_cat" from ndarray;
- # here testing "str.cat" from Series/Index to ndarray/list
- s = box(["a", "a", "b", "b", "c", np.nan])
+@pytest.mark.parametrize(
+ "infer_string", [False, pytest.param(True, marks=td.skip_if_no("pyarrow"))]
+)
+def test_str_cat(index_or_series, infer_string):
+ with option_context("future.infer_string", infer_string):
+ box = index_or_series
+ # test_cat above tests "str_cat" from ndarray;
+ # here testing "str.cat" from Series/Index to ndarray/list
+ s = box(["a", "a", "b", "b", "c", np.nan])
- # single array
- result = s.str.cat()
- expected = "aabbc"
- assert result == expected
+ # single array
+ result = s.str.cat()
+ expected = "aabbc"
+ assert result == expected
- result = s.str.cat(na_rep="-")
- expected = "aabbc-"
- assert result == expected
+ result = s.str.cat(na_rep="-")
+ expected = "aabbc-"
+ assert result == expected
- result = s.str.cat(sep="_", na_rep="NA")
- expected = "a_a_b_b_c_NA"
- assert result == expected
+ result = s.str.cat(sep="_", na_rep="NA")
+ expected = "a_a_b_b_c_NA"
+ assert result == expected
- t = np.array(["a", np.nan, "b", "d", "foo", np.nan], dtype=object)
- expected = box(["aa", "a-", "bb", "bd", "cfoo", "--"])
+ t = np.array(["a", np.nan, "b", "d", "foo", np.nan], dtype=object)
+ expected = box(["aa", "a-", "bb", "bd", "cfoo", "--"])
- # Series/Index with array
- result = s.str.cat(t, na_rep="-")
- tm.assert_equal(result, expected)
+ # Series/Index with array
+ result = s.str.cat(t, na_rep="-")
+ tm.assert_equal(result, expected)
- # Series/Index with list
- result = s.str.cat(list(t), na_rep="-")
- tm.assert_equal(result, expected)
+ # Series/Index with list
+ result = s.str.cat(list(t), na_rep="-")
+ tm.assert_equal(result, expected)
- # errors for incorrect lengths
- rgx = r"If `others` contains arrays or lists \(or other list-likes.*"
- z = Series(["1", "2", "3"])
+ # errors for incorrect lengths
+ rgx = r"If `others` contains arrays or lists \(or other list-likes.*"
+ z = Series(["1", "2", "3"])
- with pytest.raises(ValueError, match=rgx):
- s.str.cat(z.values)
+ with pytest.raises(ValueError, match=rgx):
+ s.str.cat(z.values)
- with pytest.raises(ValueError, match=rgx):
- s.str.cat(list(z))
+ with pytest.raises(ValueError, match=rgx):
+ s.str.cat(list(z))
def test_str_cat_raises_intuitive_error(index_or_series):
@@ -78,39 +85,54 @@ def test_str_cat_raises_intuitive_error(index_or_series):
s.str.cat(" ")
+@pytest.mark.parametrize(
+ "infer_string", [False, pytest.param(True, marks=td.skip_if_no("pyarrow"))]
+)
@pytest.mark.parametrize("sep", ["", None])
@pytest.mark.parametrize("dtype_target", ["object", "category"])
@pytest.mark.parametrize("dtype_caller", ["object", "category"])
-def test_str_cat_categorical(index_or_series, dtype_caller, dtype_target, sep):
+def test_str_cat_categorical(
+ index_or_series, dtype_caller, dtype_target, sep, infer_string
+):
box = index_or_series
- s = Index(["a", "a", "b", "a"], dtype=dtype_caller)
- s = s if box == Index else Series(s, index=s)
- t = Index(["b", "a", "b", "c"], dtype=dtype_target)
-
- expected = Index(["ab", "aa", "bb", "ac"])
- expected = expected if box == Index else Series(expected, index=s)
+ with option_context("future.infer_string", infer_string):
+ s = Index(["a", "a", "b", "a"], dtype=dtype_caller)
+ s = s if box == Index else Series(s, index=s)
+ t = Index(["b", "a", "b", "c"], dtype=dtype_target)
- # Series/Index with unaligned Index -> t.values
- result = s.str.cat(t.values, sep=sep)
- tm.assert_equal(result, expected)
-
- # Series/Index with Series having matching Index
- t = Series(t.values, index=s)
- result = s.str.cat(t, sep=sep)
- tm.assert_equal(result, expected)
-
- # Series/Index with Series.values
- result = s.str.cat(t.values, sep=sep)
- tm.assert_equal(result, expected)
+ expected = Index(["ab", "aa", "bb", "ac"])
+ expected = (
+ expected
+ if box == Index
+ else Series(expected, index=Index(s, dtype=dtype_caller))
+ )
- # Series/Index with Series having different Index
- t = Series(t.values, index=t.values)
- expected = Index(["aa", "aa", "bb", "bb", "aa"])
- expected = expected if box == Index else Series(expected, index=expected.str[:1])
+ # Series/Index with unaligned Index -> t.values
+ result = s.str.cat(t.values, sep=sep)
+ tm.assert_equal(result, expected)
+
+ # Series/Index with Series having matching Index
+ t = Series(t.values, index=Index(s, dtype=dtype_caller))
+ result = s.str.cat(t, sep=sep)
+ tm.assert_equal(result, expected)
+
+ # Series/Index with Series.values
+ result = s.str.cat(t.values, sep=sep)
+ tm.assert_equal(result, expected)
+
+ # Series/Index with Series having different Index
+ t = Series(t.values, index=t.values)
+ expected = Index(["aa", "aa", "bb", "bb", "aa"])
+ dtype = object if dtype_caller == "object" else s.dtype.categories.dtype
+ expected = (
+ expected
+ if box == Index
+ else Series(expected, index=Index(expected.str[:1], dtype=dtype))
+ )
- result = s.str.cat(t, sep=sep)
- tm.assert_equal(result, expected)
+ result = s.str.cat(t, sep=sep)
+ tm.assert_equal(result, expected)
@pytest.mark.parametrize(
@@ -321,8 +343,9 @@ def test_str_cat_all_na(index_or_series, index_or_series2):
# all-NA target
if box == Series:
- expected = Series([np.nan] * 4, index=s.index, dtype=object)
+ expected = Series([np.nan] * 4, index=s.index, dtype=s.dtype)
else: # box == Index
+        # TODO: String option, this should return string dtype
expected = Index([np.nan] * 4, dtype=object)
result = s.str.cat(t, join="left")
tm.assert_equal(result, expected)
diff --git a/pandas/tests/strings/test_extract.py b/pandas/tests/strings/test_extract.py
index 9ad9b1eca41d9..77d008c650264 100644
--- a/pandas/tests/strings/test_extract.py
+++ b/pandas/tests/strings/test_extract.py
@@ -47,13 +47,16 @@ def test_extract_expand_False_mixed_object():
# two groups
result = ser.str.extract(".*(BAD[_]+).*(BAD)", expand=False)
er = [np.nan, np.nan] # empty row
- expected = DataFrame([["BAD_", "BAD"], er, ["BAD_", "BAD"], er, er, er, er, er, er])
+ expected = DataFrame(
+ [["BAD_", "BAD"], er, ["BAD_", "BAD"], er, er, er, er, er, er], dtype=object
+ )
tm.assert_frame_equal(result, expected)
# single group
result = ser.str.extract(".*(BAD[_]+).*BAD", expand=False)
expected = Series(
- ["BAD_", np.nan, "BAD_", np.nan, np.nan, np.nan, None, np.nan, np.nan]
+ ["BAD_", np.nan, "BAD_", np.nan, np.nan, np.nan, None, np.nan, np.nan],
+ dtype=object,
)
tm.assert_series_equal(result, expected)
@@ -238,7 +241,9 @@ def test_extract_expand_True_mixed_object():
)
result = mixed.str.extract(".*(BAD[_]+).*(BAD)", expand=True)
- expected = DataFrame([["BAD_", "BAD"], er, ["BAD_", "BAD"], er, er, er, er, er, er])
+ expected = DataFrame(
+ [["BAD_", "BAD"], er, ["BAD_", "BAD"], er, er, er, er, er, er], dtype=object
+ )
tm.assert_frame_equal(result, expected)
@@ -603,8 +608,8 @@ def test_extractall_stringindex(any_string_dtype):
# index.name doesn't affect to the result
if any_string_dtype == "object":
for idx in [
- Index(["a1a2", "b1", "c1"]),
- Index(["a1a2", "b1", "c1"], name="xxx"),
+ Index(["a1a2", "b1", "c1"], dtype=object),
+ Index(["a1a2", "b1", "c1"], name="xxx", dtype=object),
]:
result = idx.str.extractall(r"[ab](?P<digit>\d)")
tm.assert_frame_equal(result, expected)
diff --git a/pandas/tests/strings/test_find_replace.py b/pandas/tests/strings/test_find_replace.py
index bd64a5dce3b9a..3f58c6d703f8f 100644
--- a/pandas/tests/strings/test_find_replace.py
+++ b/pandas/tests/strings/test_find_replace.py
@@ -242,7 +242,7 @@ def test_contains_nan(any_string_dtype):
@pytest.mark.parametrize("pat", ["foo", ("foo", "baz")])
-@pytest.mark.parametrize("dtype", [None, "category"])
+@pytest.mark.parametrize("dtype", ["object", "category"])
@pytest.mark.parametrize("null_value", [None, np.nan, pd.NA])
@pytest.mark.parametrize("na", [True, False])
def test_startswith(pat, dtype, null_value, na):
@@ -254,10 +254,10 @@ def test_startswith(pat, dtype, null_value, na):
result = values.str.startswith(pat)
exp = Series([False, np.nan, True, False, False, np.nan, True])
- if dtype is None and null_value is pd.NA:
+ if dtype == "object" and null_value is pd.NA:
# GH#18463
exp = exp.fillna(null_value)
- elif dtype is None and null_value is None:
+ elif dtype == "object" and null_value is None:
exp[exp.isna()] = None
tm.assert_series_equal(result, exp)
@@ -300,7 +300,7 @@ def test_startswith_nullable_string_dtype(nullable_string_dtype, na):
@pytest.mark.parametrize("pat", ["foo", ("foo", "baz")])
-@pytest.mark.parametrize("dtype", [None, "category"])
+@pytest.mark.parametrize("dtype", ["object", "category"])
@pytest.mark.parametrize("null_value", [None, np.nan, pd.NA])
@pytest.mark.parametrize("na", [True, False])
def test_endswith(pat, dtype, null_value, na):
@@ -312,10 +312,10 @@ def test_endswith(pat, dtype, null_value, na):
result = values.str.endswith(pat)
exp = Series([False, np.nan, False, False, True, np.nan, True])
- if dtype is None and null_value is pd.NA:
+ if dtype == "object" and null_value is pd.NA:
# GH#18463
- exp = exp.fillna(pd.NA)
- elif dtype is None and null_value is None:
+ exp = exp.fillna(null_value)
+ elif dtype == "object" and null_value is None:
exp[exp.isna()] = None
tm.assert_series_equal(result, exp)
@@ -382,7 +382,9 @@ def test_replace_mixed_object():
["aBAD", np.nan, "bBAD", True, datetime.today(), "fooBAD", None, 1, 2.0]
)
result = Series(ser).str.replace("BAD[_]*", "", regex=True)
- expected = Series(["a", np.nan, "b", np.nan, np.nan, "foo", None, np.nan, np.nan])
+ expected = Series(
+ ["a", np.nan, "b", np.nan, np.nan, "foo", None, np.nan, np.nan], dtype=object
+ )
tm.assert_series_equal(result, expected)
@@ -469,7 +471,9 @@ def test_replace_compiled_regex_mixed_object():
["aBAD", np.nan, "bBAD", True, datetime.today(), "fooBAD", None, 1, 2.0]
)
result = Series(ser).str.replace(pat, "", regex=True)
- expected = Series(["a", np.nan, "b", np.nan, np.nan, "foo", None, np.nan, np.nan])
+ expected = Series(
+ ["a", np.nan, "b", np.nan, np.nan, "foo", None, np.nan, np.nan], dtype=object
+ )
tm.assert_series_equal(result, expected)
@@ -913,7 +917,7 @@ def test_translate_mixed_object():
# Series with non-string values
s = Series(["a", "b", "c", 1.2])
table = str.maketrans("abc", "cde")
- expected = Series(["c", "d", "e", np.nan])
+ expected = Series(["c", "d", "e", np.nan], dtype=object)
result = s.str.translate(table)
tm.assert_series_equal(result, expected)
diff --git a/pandas/tests/strings/test_split_partition.py b/pandas/tests/strings/test_split_partition.py
index 0a7d409773dd6..9ff1fc0e13ae9 100644
--- a/pandas/tests/strings/test_split_partition.py
+++ b/pandas/tests/strings/test_split_partition.py
@@ -681,14 +681,16 @@ def test_partition_sep_kwarg(any_string_dtype, method):
def test_get():
ser = Series(["a_b_c", "c_d_e", np.nan, "f_g_h"])
result = ser.str.split("_").str.get(1)
- expected = Series(["b", "d", np.nan, "g"])
+ expected = Series(["b", "d", np.nan, "g"], dtype=object)
tm.assert_series_equal(result, expected)
def test_get_mixed_object():
ser = Series(["a_b_c", np.nan, "c_d_e", True, datetime.today(), None, 1, 2.0])
result = ser.str.split("_").str.get(1)
- expected = Series(["b", np.nan, "d", np.nan, np.nan, None, np.nan, np.nan])
+ expected = Series(
+ ["b", np.nan, "d", np.nan, np.nan, None, np.nan, np.nan], dtype=object
+ )
tm.assert_series_equal(result, expected)
@@ -696,7 +698,7 @@ def test_get_mixed_object():
def test_get_bounds(idx):
ser = Series(["1_2_3_4_5", "6_7_8_9_10", "11_12"])
result = ser.str.split("_").str.get(idx)
- expected = Series(["3", "8", np.nan])
+ expected = Series(["3", "8", np.nan], dtype=object)
tm.assert_series_equal(result, expected)
diff --git a/pandas/tests/strings/test_string_array.py b/pandas/tests/strings/test_string_array.py
index a88dcc8956931..0b3f368afea5e 100644
--- a/pandas/tests/strings/test_string_array.py
+++ b/pandas/tests/strings/test_string_array.py
@@ -8,6 +8,7 @@
DataFrame,
Series,
_testing as tm,
+ option_context,
)
@@ -56,7 +57,8 @@ def test_string_array(nullable_string_dtype, any_string_method):
columns = expected.select_dtypes(include="object").columns
assert all(result[columns].dtypes == nullable_string_dtype)
result[columns] = result[columns].astype(object)
- expected[columns] = expected[columns].fillna(NA) # GH#18463
+ with option_context("future.no_silent_downcasting", True):
+ expected[columns] = expected[columns].fillna(NA) # GH#18463
tm.assert_equal(result, expected)
diff --git a/pandas/tests/strings/test_strings.py b/pandas/tests/strings/test_strings.py
index 4315835b70a40..f662dfd7e2b14 100644
--- a/pandas/tests/strings/test_strings.py
+++ b/pandas/tests/strings/test_strings.py
@@ -76,7 +76,8 @@ def test_repeat_mixed_object():
ser = Series(["a", np.nan, "b", True, datetime.today(), "foo", None, 1, 2.0])
result = ser.str.repeat(3)
expected = Series(
- ["aaa", np.nan, "bbb", np.nan, np.nan, "foofoofoo", None, np.nan, np.nan]
+ ["aaa", np.nan, "bbb", np.nan, np.nan, "foofoofoo", None, np.nan, np.nan],
+ dtype=object,
)
tm.assert_series_equal(result, expected)
@@ -270,7 +271,8 @@ def test_spilt_join_roundtrip_mixed_object():
)
result = ser.str.split("_").str.join("_")
expected = Series(
- ["a_b", np.nan, "asdf_cas_asdf", np.nan, np.nan, "foo", None, np.nan, np.nan]
+ ["a_b", np.nan, "asdf_cas_asdf", np.nan, np.nan, "foo", None, np.nan, np.nan],
+ dtype=object,
)
tm.assert_series_equal(result, expected)
@@ -398,7 +400,7 @@ def test_slice(start, stop, step, expected, any_string_dtype):
def test_slice_mixed_object(start, stop, step, expected):
ser = Series(["aafootwo", np.nan, "aabartwo", True, datetime.today(), None, 1, 2.0])
result = ser.str.slice(start, stop, step)
- expected = Series(expected)
+ expected = Series(expected, dtype=object)
tm.assert_series_equal(result, expected)
@@ -453,7 +455,7 @@ def test_strip_lstrip_rstrip_mixed_object(method, exp):
ser = Series([" aa ", np.nan, " bb \t\n", True, datetime.today(), None, 1, 2.0])
result = getattr(ser.str, method)()
- expected = Series(exp + [np.nan, np.nan, None, np.nan, np.nan])
+ expected = Series(exp + [np.nan, np.nan, None, np.nan, np.nan], dtype=object)
tm.assert_series_equal(result, expected)
@@ -529,7 +531,7 @@ def test_string_slice_out_of_bounds(any_string_dtype):
def test_encode_decode(any_string_dtype):
ser = Series(["a", "b", "a\xe4"], dtype=any_string_dtype).str.encode("utf-8")
result = ser.str.decode("utf-8")
- expected = ser.map(lambda x: x.decode("utf-8"))
+ expected = ser.map(lambda x: x.decode("utf-8")).astype(object)
tm.assert_series_equal(result, expected)
@@ -559,7 +561,7 @@ def test_decode_errors_kwarg():
ser.str.decode("cp1252")
result = ser.str.decode("cp1252", "ignore")
- expected = ser.map(lambda x: x.decode("cp1252", "ignore"))
+ expected = ser.map(lambda x: x.decode("cp1252", "ignore")).astype(object)
tm.assert_series_equal(result, expected)
@@ -672,7 +674,7 @@ def test_str_accessor_in_apply_func():
def test_zfill():
# https://github.com/pandas-dev/pandas/issues/20868
value = Series(["-1", "1", "1000", 10, np.nan])
- expected = Series(["-01", "001", "1000", np.nan, np.nan])
+ expected = Series(["-01", "001", "1000", np.nan, np.nan], dtype=object)
tm.assert_series_equal(value.str.zfill(3), expected)
value = Series(["-2", "+5"])
@@ -704,10 +706,10 @@ def test_get_with_dict_label():
]
)
result = s.str.get("name")
- expected = Series(["Hello", "Goodbye", None])
+ expected = Series(["Hello", "Goodbye", None], dtype=object)
tm.assert_series_equal(result, expected)
result = s.str.get("value")
- expected = Series(["World", "Planet", "Sea"])
+ expected = Series(["World", "Planet", "Sea"], dtype=object)
tm.assert_series_equal(result, expected)
| - [ ] closes #xxxx (Replace xxxx with the GitHub issue number)
- [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
| https://api.github.com/repos/pandas-dev/pandas/pulls/56159 | 2023-11-24T23:40:32Z | 2023-12-09T19:24:31Z | 2023-12-09T19:24:31Z | 2023-12-09T19:38:49Z |
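The test changes in the row above pin `dtype=object` on the expected `Series` because, under the future `infer_string` option, all-string data would otherwise be inferred as a string dtype and the comparisons would fail. A toy sketch of that inference decision (the function and its logic are illustrative assumptions, not pandas internals):

```python
def infer_result_dtype(values, infer_string=False):
    # Toy model (assumption, not pandas code): mirrors why the tests
    # above pin dtype=object -- with ``future.infer_string`` enabled,
    # all-string data would be inferred as a string dtype instead of
    # falling back to object.
    non_null = [v for v in values if v is not None and v == v]  # drop None/NaN
    if infer_string and non_null and all(isinstance(v, str) for v in non_null):
        return "string"
    return "object"
```

Pinning `dtype=object` in the expected values keeps these tests stable regardless of which inference mode is active.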
BUG: round with non-nanosecond raising OverflowError | diff --git a/doc/source/whatsnew/v2.2.0.rst b/doc/source/whatsnew/v2.2.0.rst
index 8893fe0ecd398..494d1931d9fa7 100644
--- a/doc/source/whatsnew/v2.2.0.rst
+++ b/doc/source/whatsnew/v2.2.0.rst
@@ -452,6 +452,7 @@ Datetimelike
- Bug in :meth:`DatetimeIndex.union` returning object dtype for tz-aware indexes with the same timezone but different units (:issue:`55238`)
- Bug in :meth:`Index.is_monotonic_increasing` and :meth:`Index.is_monotonic_decreasing` always caching :meth:`Index.is_unique` as ``True`` when first value in index is ``NaT`` (:issue:`55755`)
- Bug in :meth:`Index.view` to a datetime64 dtype with non-supported resolution incorrectly raising (:issue:`55710`)
+- Bug in :meth:`Series.dt.round` with non-nanosecond resolution and ``NaT`` entries incorrectly raising ``OverflowError`` (:issue:`56158`)
- Bug in :meth:`Tick.delta` with very large ticks raising ``OverflowError`` instead of ``OutOfBoundsTimedelta`` (:issue:`55503`)
- Bug in ``.astype`` converting from a higher-resolution ``datetime64`` dtype to a lower-resolution ``datetime64`` dtype (e.g. ``datetime64[us]->datetim64[ms]``) silently overflowing with values near the lower implementation bound (:issue:`55979`)
- Bug in adding or subtracting a :class:`Week` offset to a ``datetime64`` :class:`Series`, :class:`Index`, or :class:`DataFrame` column with non-nanosecond resolution returning incorrect results (:issue:`55583`)
diff --git a/pandas/_libs/tslibs/fields.pyx b/pandas/_libs/tslibs/fields.pyx
index a726c735bf9a1..ff4fb4d635d17 100644
--- a/pandas/_libs/tslibs/fields.pyx
+++ b/pandas/_libs/tslibs/fields.pyx
@@ -746,7 +746,31 @@ cdef ndarray[int64_t] _ceil_int64(const int64_t[:] values, int64_t unit):
cdef ndarray[int64_t] _rounddown_int64(values, int64_t unit):
- return _ceil_int64(values - unit // 2, unit)
+ cdef:
+ Py_ssize_t i, n = len(values)
+ ndarray[int64_t] result = np.empty(n, dtype="i8")
+ int64_t res, value, remainder, half
+
+ half = unit // 2
+
+ with cython.overflowcheck(True):
+ for i in range(n):
+ value = values[i]
+
+ if value == NPY_NAT:
+ res = NPY_NAT
+ else:
+            # This adjustment is the only difference between _rounddown_int64
+ # and _ceil_int64
+ value = value - half
+ remainder = value % unit
+ if remainder == 0:
+ res = value
+ else:
+ res = value + (unit - remainder)
+
+ result[i] = res
+ return result
cdef ndarray[int64_t] _roundup_int64(values, int64_t unit):
diff --git a/pandas/tests/series/methods/test_round.py b/pandas/tests/series/methods/test_round.py
index 6c40e36419551..7f60c94f10e4f 100644
--- a/pandas/tests/series/methods/test_round.py
+++ b/pandas/tests/series/methods/test_round.py
@@ -56,9 +56,10 @@ def test_round_builtin(self, any_float_dtype):
@pytest.mark.parametrize("method", ["round", "floor", "ceil"])
@pytest.mark.parametrize("freq", ["s", "5s", "min", "5min", "h", "5h"])
- def test_round_nat(self, method, freq):
- # GH14940
- ser = Series([pd.NaT])
- expected = Series(pd.NaT)
+ def test_round_nat(self, method, freq, unit):
+ # GH14940, GH#56158
+ ser = Series([pd.NaT], dtype=f"M8[{unit}]")
+ expected = Series(pd.NaT, dtype=f"M8[{unit}]")
round_method = getattr(ser.dt, method)
- tm.assert_series_equal(round_method(freq), expected)
+ result = round_method(freq)
+ tm.assert_series_equal(result, expected)
| - [ ] closes #xxxx (Replace xxxx with the GitHub issue number)
- [x] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [x] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
| https://api.github.com/repos/pandas-dev/pandas/pulls/56158 | 2023-11-24T23:08:11Z | 2023-11-26T04:23:38Z | 2023-11-26T04:23:38Z | 2023-11-26T05:00:13Z |
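The core of the fix in the row above is that `_rounddown_int64` no longer subtracts `unit // 2` from the NaT sentinel (which underflowed int64 and raised `OverflowError`) but skips NaT entries explicitly. A pure-Python sketch of the patched loop (illustrative only; the real implementation is Cython over int64 arrays with `overflowcheck`):

```python
NPY_NAT = -(2**63)  # the int64 sentinel pandas uses for NaT

def rounddown_i64(values, unit):
    # Pure-Python sketch (not the actual Cython) of the patched loop:
    # NaT passes through untouched; everything else rounds half-down to
    # a multiple of ``unit`` via the shifted-ceiling trick.
    out = []
    half = unit // 2
    for value in values:
        if value == NPY_NAT:
            out.append(NPY_NAT)
            continue
        value -= half  # the old code applied this to NaT too -> overflow
        remainder = value % unit
        out.append(value if remainder == 0 else value + (unit - remainder))
    return out
```

For example, with `unit=4`, the value 10 (exactly halfway between 8 and 12) rounds down to 8, 11 rounds up to 12, and the NaT sentinel is returned unchanged.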
BUG: Index.str.cat casting result always to object | diff --git a/doc/source/whatsnew/v2.2.0.rst b/doc/source/whatsnew/v2.2.0.rst
index c878fd2664dc4..99faad8aff986 100644
--- a/doc/source/whatsnew/v2.2.0.rst
+++ b/doc/source/whatsnew/v2.2.0.rst
@@ -576,6 +576,7 @@ Strings
^^^^^^^
- Bug in :func:`pandas.api.types.is_string_dtype` while checking object array with no elements is of the string dtype (:issue:`54661`)
- Bug in :meth:`DataFrame.apply` failing when ``engine="numba"`` and columns or index have ``StringDtype`` (:issue:`56189`)
+- Bug in :meth:`Index.str.cat` always casting result to object dtype (:issue:`56157`)
- Bug in :meth:`Series.__mul__` for :class:`ArrowDtype` with ``pyarrow.string`` dtype and ``string[pyarrow]`` for the pyarrow backend (:issue:`51970`)
- Bug in :meth:`Series.str.startswith` and :meth:`Series.str.endswith` with arguments of type ``tuple[str, ...]`` for ``string[pyarrow]`` (:issue:`54942`)
diff --git a/pandas/core/strings/accessor.py b/pandas/core/strings/accessor.py
index 9fa6e9973291d..127aee24e094f 100644
--- a/pandas/core/strings/accessor.py
+++ b/pandas/core/strings/accessor.py
@@ -44,6 +44,7 @@
)
from pandas.core.dtypes.missing import isna
+from pandas.core.arrays import ExtensionArray
from pandas.core.base import NoNewAttributesMixin
from pandas.core.construction import extract_array
@@ -456,7 +457,7 @@ def _get_series_list(self, others):
# in case of list-like `others`, all elements must be
# either Series/Index/np.ndarray (1-dim)...
if all(
- isinstance(x, (ABCSeries, ABCIndex))
+ isinstance(x, (ABCSeries, ABCIndex, ExtensionArray))
or (isinstance(x, np.ndarray) and x.ndim == 1)
for x in others
):
@@ -690,12 +691,15 @@ def cat(
out: Index | Series
if isinstance(self._orig, ABCIndex):
# add dtype for case that result is all-NA
+ dtype = None
+ if isna(result).all():
+ dtype = object
- out = Index(result, dtype=object, name=self._orig.name)
+ out = Index(result, dtype=dtype, name=self._orig.name)
else: # Series
if isinstance(self._orig.dtype, CategoricalDtype):
# We need to infer the new categories.
- dtype = None
+ dtype = self._orig.dtype.categories.dtype # type: ignore[assignment]
else:
dtype = self._orig.dtype
res_ser = Series(
diff --git a/pandas/tests/strings/test_api.py b/pandas/tests/strings/test_api.py
index 2914b22a52e94..fd2501835318d 100644
--- a/pandas/tests/strings/test_api.py
+++ b/pandas/tests/strings/test_api.py
@@ -2,6 +2,7 @@
import pytest
from pandas import (
+ CategoricalDtype,
DataFrame,
Index,
MultiIndex,
@@ -178,6 +179,7 @@ def test_api_for_categorical(any_string_method, any_string_dtype):
s = Series(list("aabb"), dtype=any_string_dtype)
s = s + " " + s
c = s.astype("category")
+ c = c.astype(CategoricalDtype(c.dtype.categories.astype("object")))
assert isinstance(c.str, StringMethods)
method_name, args, kwargs = any_string_method
diff --git a/pandas/tests/strings/test_cat.py b/pandas/tests/strings/test_cat.py
index 3e620b7664335..284932491a65e 100644
--- a/pandas/tests/strings/test_cat.py
+++ b/pandas/tests/strings/test_cat.py
@@ -3,6 +3,8 @@
import numpy as np
import pytest
+import pandas.util._test_decorators as td
+
from pandas import (
DataFrame,
Index,
@@ -10,6 +12,7 @@
Series,
_testing as tm,
concat,
+ option_context,
)
@@ -26,45 +29,49 @@ def test_str_cat_name(index_or_series, other):
assert result.name == "name"
-def test_str_cat(index_or_series):
- box = index_or_series
- # test_cat above tests "str_cat" from ndarray;
- # here testing "str.cat" from Series/Index to ndarray/list
- s = box(["a", "a", "b", "b", "c", np.nan])
+@pytest.mark.parametrize(
+ "infer_string", [False, pytest.param(True, marks=td.skip_if_no("pyarrow"))]
+)
+def test_str_cat(index_or_series, infer_string):
+ with option_context("future.infer_string", infer_string):
+ box = index_or_series
+ # test_cat above tests "str_cat" from ndarray;
+ # here testing "str.cat" from Series/Index to ndarray/list
+ s = box(["a", "a", "b", "b", "c", np.nan])
- # single array
- result = s.str.cat()
- expected = "aabbc"
- assert result == expected
+ # single array
+ result = s.str.cat()
+ expected = "aabbc"
+ assert result == expected
- result = s.str.cat(na_rep="-")
- expected = "aabbc-"
- assert result == expected
+ result = s.str.cat(na_rep="-")
+ expected = "aabbc-"
+ assert result == expected
- result = s.str.cat(sep="_", na_rep="NA")
- expected = "a_a_b_b_c_NA"
- assert result == expected
+ result = s.str.cat(sep="_", na_rep="NA")
+ expected = "a_a_b_b_c_NA"
+ assert result == expected
- t = np.array(["a", np.nan, "b", "d", "foo", np.nan], dtype=object)
- expected = box(["aa", "a-", "bb", "bd", "cfoo", "--"])
+ t = np.array(["a", np.nan, "b", "d", "foo", np.nan], dtype=object)
+ expected = box(["aa", "a-", "bb", "bd", "cfoo", "--"])
- # Series/Index with array
- result = s.str.cat(t, na_rep="-")
- tm.assert_equal(result, expected)
+ # Series/Index with array
+ result = s.str.cat(t, na_rep="-")
+ tm.assert_equal(result, expected)
- # Series/Index with list
- result = s.str.cat(list(t), na_rep="-")
- tm.assert_equal(result, expected)
+ # Series/Index with list
+ result = s.str.cat(list(t), na_rep="-")
+ tm.assert_equal(result, expected)
- # errors for incorrect lengths
- rgx = r"If `others` contains arrays or lists \(or other list-likes.*"
- z = Series(["1", "2", "3"])
+ # errors for incorrect lengths
+ rgx = r"If `others` contains arrays or lists \(or other list-likes.*"
+ z = Series(["1", "2", "3"])
- with pytest.raises(ValueError, match=rgx):
- s.str.cat(z.values)
+ with pytest.raises(ValueError, match=rgx):
+ s.str.cat(z.values)
- with pytest.raises(ValueError, match=rgx):
- s.str.cat(list(z))
+ with pytest.raises(ValueError, match=rgx):
+ s.str.cat(list(z))
def test_str_cat_raises_intuitive_error(index_or_series):
@@ -78,39 +85,54 @@ def test_str_cat_raises_intuitive_error(index_or_series):
s.str.cat(" ")
+@pytest.mark.parametrize(
+ "infer_string", [False, pytest.param(True, marks=td.skip_if_no("pyarrow"))]
+)
@pytest.mark.parametrize("sep", ["", None])
@pytest.mark.parametrize("dtype_target", ["object", "category"])
@pytest.mark.parametrize("dtype_caller", ["object", "category"])
-def test_str_cat_categorical(index_or_series, dtype_caller, dtype_target, sep):
+def test_str_cat_categorical(
+ index_or_series, dtype_caller, dtype_target, sep, infer_string
+):
box = index_or_series
- s = Index(["a", "a", "b", "a"], dtype=dtype_caller)
- s = s if box == Index else Series(s, index=s)
- t = Index(["b", "a", "b", "c"], dtype=dtype_target)
-
- expected = Index(["ab", "aa", "bb", "ac"])
- expected = expected if box == Index else Series(expected, index=s)
+ with option_context("future.infer_string", infer_string):
+ s = Index(["a", "a", "b", "a"], dtype=dtype_caller)
+ s = s if box == Index else Series(s, index=s)
+ t = Index(["b", "a", "b", "c"], dtype=dtype_target)
- # Series/Index with unaligned Index -> t.values
- result = s.str.cat(t.values, sep=sep)
- tm.assert_equal(result, expected)
-
- # Series/Index with Series having matching Index
- t = Series(t.values, index=s)
- result = s.str.cat(t, sep=sep)
- tm.assert_equal(result, expected)
-
- # Series/Index with Series.values
- result = s.str.cat(t.values, sep=sep)
- tm.assert_equal(result, expected)
+ expected = Index(["ab", "aa", "bb", "ac"])
+ expected = (
+ expected
+ if box == Index
+ else Series(expected, index=Index(s, dtype=dtype_caller))
+ )
- # Series/Index with Series having different Index
- t = Series(t.values, index=t.values)
- expected = Index(["aa", "aa", "bb", "bb", "aa"])
- expected = expected if box == Index else Series(expected, index=expected.str[:1])
+ # Series/Index with unaligned Index -> t.values
+ result = s.str.cat(t.values, sep=sep)
+ tm.assert_equal(result, expected)
+
+ # Series/Index with Series having matching Index
+ t = Series(t.values, index=Index(s, dtype=dtype_caller))
+ result = s.str.cat(t, sep=sep)
+ tm.assert_equal(result, expected)
+
+ # Series/Index with Series.values
+ result = s.str.cat(t.values, sep=sep)
+ tm.assert_equal(result, expected)
+
+ # Series/Index with Series having different Index
+ t = Series(t.values, index=t.values)
+ expected = Index(["aa", "aa", "bb", "bb", "aa"])
+ dtype = object if dtype_caller == "object" else s.dtype.categories.dtype
+ expected = (
+ expected
+ if box == Index
+ else Series(expected, index=Index(expected.str[:1], dtype=dtype))
+ )
- result = s.str.cat(t, sep=sep)
- tm.assert_equal(result, expected)
+ result = s.str.cat(t, sep=sep)
+ tm.assert_equal(result, expected)
@pytest.mark.parametrize(
@@ -321,8 +343,9 @@ def test_str_cat_all_na(index_or_series, index_or_series2):
# all-NA target
if box == Series:
- expected = Series([np.nan] * 4, index=s.index, dtype=object)
+ expected = Series([np.nan] * 4, index=s.index, dtype=s.dtype)
else: # box == Index
+        # TODO: String option, this should return string dtype
expected = Index([np.nan] * 4, dtype=object)
result = s.str.cat(t, join="left")
tm.assert_equal(result, expected)
| - [ ] closes #xxxx (Replace xxxx with the GitHub issue number)
- [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
| https://api.github.com/repos/pandas-dev/pandas/pulls/56157 | 2023-11-24T22:52:01Z | 2023-12-08T23:28:19Z | 2023-12-08T23:28:19Z | 2023-12-08T23:28:46Z |
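The accessor change in the row above only forces `object` dtype on the `Index` result when every entry is NA; otherwise the dtype is left as `None` so `Index` can infer it (e.g. a pyarrow-backed string dtype). A toy version of that decision (names are illustrative assumptions, not the pandas implementation):

```python
import math

def cat_index_dtype(result):
    # Toy sketch (assumption): before the fix, Index(result, dtype=object)
    # was always used; after it, object is forced only for all-NA results,
    # and ``None`` lets Index infer the dtype from the data.
    def is_na(x):
        return x is None or (isinstance(x, float) and math.isnan(x))
    return object if all(is_na(x) for x in result) else None
```

This matches the `# TODO` left in `test_str_cat_all_na`: the all-NA branch still pins `object`, which is why that case does not yet return a string dtype under the string option.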
TST/CLN: Remove makeStringIndex | diff --git a/asv_bench/benchmarks/algorithms.py b/asv_bench/benchmarks/algorithms.py
index 3c78b0a9a60c8..933e8fbc175d8 100644
--- a/asv_bench/benchmarks/algorithms.py
+++ b/asv_bench/benchmarks/algorithms.py
@@ -4,8 +4,6 @@
import pandas as pd
-from .pandas_vb_common import tm
-
for imp in ["pandas.util", "pandas.tools.hashing"]:
try:
hashing = import_module(imp)
@@ -47,9 +45,12 @@ def setup(self, unique, sort, dtype):
elif dtype == "datetime64[ns, tz]":
data = pd.date_range("2011-01-01", freq="h", periods=N, tz="Asia/Tokyo")
elif dtype == "object_str":
- data = tm.makeStringIndex(N)
+ data = pd.Index([f"i-{i}" for i in range(N)], dtype=object)
elif dtype == "string[pyarrow]":
- data = pd.array(tm.makeStringIndex(N), dtype="string[pyarrow]")
+ data = pd.array(
+ pd.Index([f"i-{i}" for i in range(N)], dtype=object),
+ dtype="string[pyarrow]",
+ )
else:
raise NotImplementedError
@@ -88,7 +89,7 @@ def setup(self, unique, keep, dtype):
elif dtype == "float64":
data = pd.Index(np.random.randn(N), dtype="float64")
elif dtype == "string":
- data = tm.makeStringIndex(N)
+ data = pd.Index([f"i-{i}" for i in range(N)], dtype=object)
elif dtype == "datetime64[ns]":
data = pd.date_range("2011-01-01", freq="h", periods=N)
elif dtype == "datetime64[ns, tz]":
@@ -136,7 +137,9 @@ def setup_cache(self):
df = pd.DataFrame(
{
"strings": pd.Series(
- tm.makeStringIndex(10000).take(np.random.randint(0, 10000, size=N))
+ pd.Index([f"i-{i}" for i in range(10000)], dtype=object).take(
+ np.random.randint(0, 10000, size=N)
+ )
),
"floats": np.random.randn(N),
"ints": np.arange(N),
diff --git a/asv_bench/benchmarks/algos/isin.py b/asv_bench/benchmarks/algos/isin.py
index 92797425b2c30..2b3d32fb579dc 100644
--- a/asv_bench/benchmarks/algos/isin.py
+++ b/asv_bench/benchmarks/algos/isin.py
@@ -8,8 +8,6 @@
date_range,
)
-from ..pandas_vb_common import tm
-
class IsIn:
params = [
@@ -60,7 +58,9 @@ def setup(self, dtype):
elif dtype in ["str", "string[python]", "string[pyarrow]"]:
try:
- self.series = Series(tm.makeStringIndex(N), dtype=dtype)
+ self.series = Series(
+ Index([f"i-{i}" for i in range(N)], dtype=object), dtype=dtype
+ )
except ImportError:
raise NotImplementedError
self.values = list(self.series[:2])
diff --git a/asv_bench/benchmarks/ctors.py b/asv_bench/benchmarks/ctors.py
index 2db00cc7f2ad9..77c9faf3d3a87 100644
--- a/asv_bench/benchmarks/ctors.py
+++ b/asv_bench/benchmarks/ctors.py
@@ -9,8 +9,6 @@
date_range,
)
-from .pandas_vb_common import tm
-
def no_change(arr):
return arr
@@ -115,7 +113,7 @@ def time_dtindex_from_index_with_series(self):
class MultiIndexConstructor:
def setup(self):
N = 10**4
- self.iterables = [tm.makeStringIndex(N), range(20)]
+ self.iterables = [Index([f"i-{i}" for i in range(N)], dtype=object), range(20)]
def time_multiindex_from_iterables(self):
MultiIndex.from_product(self.iterables)
diff --git a/asv_bench/benchmarks/dtypes.py b/asv_bench/benchmarks/dtypes.py
index c33043c0eddc1..7f3429b5e3882 100644
--- a/asv_bench/benchmarks/dtypes.py
+++ b/asv_bench/benchmarks/dtypes.py
@@ -3,7 +3,10 @@
import numpy as np
import pandas as pd
-from pandas import DataFrame
+from pandas import (
+ DataFrame,
+ Index,
+)
import pandas._testing as tm
from pandas.api.types import (
is_extension_array_dtype,
@@ -73,8 +76,8 @@ class SelectDtypes:
def setup(self, dtype):
N, K = 5000, 50
- self.index = tm.makeStringIndex(N)
- self.columns = tm.makeStringIndex(K)
+ self.index = Index([f"i-{i}" for i in range(N)], dtype=object)
+ self.columns = Index([f"i-{i}" for i in range(K)], dtype=object)
def create_df(data):
return DataFrame(data, index=self.index, columns=self.columns)
diff --git a/asv_bench/benchmarks/frame_ctor.py b/asv_bench/benchmarks/frame_ctor.py
index 7092a679b8cf0..f938f7eb0d951 100644
--- a/asv_bench/benchmarks/frame_ctor.py
+++ b/asv_bench/benchmarks/frame_ctor.py
@@ -12,8 +12,6 @@
date_range,
)
-from .pandas_vb_common import tm
-
try:
from pandas.tseries.offsets import (
Hour,
@@ -30,8 +28,8 @@
class FromDicts:
def setup(self):
N, K = 5000, 50
- self.index = tm.makeStringIndex(N)
- self.columns = tm.makeStringIndex(K)
+ self.index = pd.Index([f"i-{i}" for i in range(N)], dtype=object)
+ self.columns = pd.Index([f"i-{i}" for i in range(K)], dtype=object)
frame = DataFrame(np.random.randn(N, K), index=self.index, columns=self.columns)
self.data = frame.to_dict()
self.dict_list = frame.to_dict(orient="records")
diff --git a/asv_bench/benchmarks/frame_methods.py b/asv_bench/benchmarks/frame_methods.py
index f22a261041e17..a283afd1f0f1e 100644
--- a/asv_bench/benchmarks/frame_methods.py
+++ b/asv_bench/benchmarks/frame_methods.py
@@ -5,6 +5,7 @@
from pandas import (
DataFrame,
+ Index,
MultiIndex,
NaT,
Series,
@@ -14,8 +15,6 @@
timedelta_range,
)
-from .pandas_vb_common import tm
-
class AsType:
params = [
@@ -703,8 +702,12 @@ def setup(self, monotonic):
K = 10
df = DataFrame(
{
- "key1": tm.makeStringIndex(N).values.repeat(K),
- "key2": tm.makeStringIndex(N).values.repeat(K),
+ "key1": Index([f"i-{i}" for i in range(N)], dtype=object).values.repeat(
+ K
+ ),
+ "key2": Index([f"i-{i}" for i in range(N)], dtype=object).values.repeat(
+ K
+ ),
"value": np.random.randn(N * K),
}
)
diff --git a/asv_bench/benchmarks/gil.py b/asv_bench/benchmarks/gil.py
index c78819f75c52a..a0c4189c72d0e 100644
--- a/asv_bench/benchmarks/gil.py
+++ b/asv_bench/benchmarks/gil.py
@@ -5,6 +5,7 @@
from pandas import (
DataFrame,
+ Index,
Series,
date_range,
factorize,
@@ -12,8 +13,6 @@
)
from pandas.core.algorithms import take_nd
-from .pandas_vb_common import tm
-
try:
from pandas import (
rolling_kurt,
@@ -34,7 +33,6 @@
except ImportError:
from pandas import algos
-
from .pandas_vb_common import BaseIO # isort:skip
@@ -305,7 +303,7 @@ class ParallelFactorize:
param_names = ["threads"]
def setup(self, threads):
- strings = tm.makeStringIndex(100000)
+ strings = Index([f"i-{i}" for i in range(100000)], dtype=object)
@test_parallel(num_threads=threads)
def parallel():
diff --git a/asv_bench/benchmarks/groupby.py b/asv_bench/benchmarks/groupby.py
index 202ba6e981b70..abffa1f702b9c 100644
--- a/asv_bench/benchmarks/groupby.py
+++ b/asv_bench/benchmarks/groupby.py
@@ -17,8 +17,6 @@
to_timedelta,
)
-from .pandas_vb_common import tm
-
method_blocklist = {
"object": {
"diff",
@@ -167,10 +165,14 @@ def setup_cache(self):
"int64_small": Series(np.random.randint(0, 100, size=size)),
"int64_large": Series(np.random.randint(0, 10000, size=size)),
"object_small": Series(
- tm.makeStringIndex(100).take(np.random.randint(0, 100, size=size))
+ Index([f"i-{i}" for i in range(100)], dtype=object).take(
+ np.random.randint(0, 100, size=size)
+ )
),
"object_large": Series(
- tm.makeStringIndex(10000).take(np.random.randint(0, 10000, size=size))
+ Index([f"i-{i}" for i in range(10000)], dtype=object).take(
+ np.random.randint(0, 10000, size=size)
+ )
),
}
return data
@@ -912,7 +914,7 @@ def setup(self):
n1 = 400
n2 = 250
index = MultiIndex(
- levels=[np.arange(n1), tm.makeStringIndex(n2)],
+ levels=[np.arange(n1), Index([f"i-{i}" for i in range(n2)], dtype=object)],
codes=[np.repeat(range(n1), n2).tolist(), list(range(n2)) * n1],
names=["lev1", "lev2"],
)
diff --git a/asv_bench/benchmarks/index_object.py b/asv_bench/benchmarks/index_object.py
index 7e33223260e0f..637b1b40f59a3 100644
--- a/asv_bench/benchmarks/index_object.py
+++ b/asv_bench/benchmarks/index_object.py
@@ -12,8 +12,6 @@
date_range,
)
-from .pandas_vb_common import tm
-
class SetOperations:
params = (
@@ -30,7 +28,7 @@ def setup(self, index_structure, dtype, method):
date_str_left = Index(dates_left.strftime(fmt))
int_left = Index(np.arange(N))
ea_int_left = Index(np.arange(N), dtype="Int64")
- str_left = tm.makeStringIndex(N)
+ str_left = Index([f"i-{i}" for i in range(N)], dtype=object)
data = {
"datetime": dates_left,
@@ -155,7 +153,12 @@ class Indexing:
def setup(self, dtype):
N = 10**6
- self.idx = getattr(tm, f"make{dtype}Index")(N)
+ if dtype == "String":
+ self.idx = Index([f"i-{i}" for i in range(N)], dtype=object)
+ elif dtype == "Float":
+ self.idx = Index(np.arange(N), dtype=np.float64)
+ elif dtype == "Int":
+ self.idx = Index(np.arange(N), dtype=np.int64)
self.array_mask = (np.arange(N) % 3) == 0
self.series_mask = Series(self.array_mask)
self.sorted = self.idx.sort_values()
diff --git a/asv_bench/benchmarks/indexing.py b/asv_bench/benchmarks/indexing.py
index 4961722c0e9cd..9ad1f5b31016d 100644
--- a/asv_bench/benchmarks/indexing.py
+++ b/asv_bench/benchmarks/indexing.py
@@ -22,8 +22,6 @@
period_range,
)
-from .pandas_vb_common import tm
-
class NumericSeriesIndexing:
params = [
@@ -124,7 +122,7 @@ class NonNumericSeriesIndexing:
def setup(self, index, index_structure):
N = 10**6
if index == "string":
- index = tm.makeStringIndex(N)
+ index = Index([f"i-{i}" for i in range(N)], dtype=object)
elif index == "datetime":
index = date_range("1900", periods=N, freq="s")
elif index == "period":
@@ -156,8 +154,8 @@ def time_getitem_list_like(self, index, index_structure):
class DataFrameStringIndexing:
def setup(self):
- index = tm.makeStringIndex(1000)
- columns = tm.makeStringIndex(30)
+ index = Index([f"i-{i}" for i in range(1000)], dtype=object)
+ columns = Index([f"i-{i}" for i in range(30)], dtype=object)
with warnings.catch_warnings(record=True):
self.df = DataFrame(np.random.randn(1000, 30), index=index, columns=columns)
self.idx_scalar = index[100]
diff --git a/asv_bench/benchmarks/inference.py b/asv_bench/benchmarks/inference.py
index 805b0c807452c..d5c58033c1157 100644
--- a/asv_bench/benchmarks/inference.py
+++ b/asv_bench/benchmarks/inference.py
@@ -9,6 +9,7 @@
import numpy as np
from pandas import (
+ Index,
NaT,
Series,
date_range,
@@ -17,10 +18,7 @@
to_timedelta,
)
-from .pandas_vb_common import (
- lib,
- tm,
-)
+from .pandas_vb_common import lib
class ToNumeric:
@@ -31,7 +29,7 @@ def setup(self, errors):
N = 10000
self.float = Series(np.random.randn(N))
self.numstr = self.float.astype("str")
- self.str = Series(tm.makeStringIndex(N))
+ self.str = Series(Index([f"i-{i}" for i in range(N)], dtype=object))
def time_from_float(self, errors):
to_numeric(self.float, errors=errors)
diff --git a/asv_bench/benchmarks/io/csv.py b/asv_bench/benchmarks/io/csv.py
index a45315f63d62e..9ac83db4f85b9 100644
--- a/asv_bench/benchmarks/io/csv.py
+++ b/asv_bench/benchmarks/io/csv.py
@@ -10,6 +10,7 @@
from pandas import (
Categorical,
DataFrame,
+ Index,
concat,
date_range,
period_range,
@@ -17,10 +18,7 @@
to_datetime,
)
-from ..pandas_vb_common import (
- BaseIO,
- tm,
-)
+from ..pandas_vb_common import BaseIO
class ToCSV(BaseIO):
@@ -288,7 +286,7 @@ class ReadCSVSkipRows(BaseIO):
def setup(self, skiprows, engine):
N = 20000
- index = tm.makeStringIndex(N)
+ index = Index([f"i-{i}" for i in range(N)], dtype=object)
df = DataFrame(
{
"float1": np.random.randn(N),
diff --git a/asv_bench/benchmarks/io/excel.py b/asv_bench/benchmarks/io/excel.py
index f8d81b0f6a699..902a61be901bd 100644
--- a/asv_bench/benchmarks/io/excel.py
+++ b/asv_bench/benchmarks/io/excel.py
@@ -12,12 +12,11 @@
from pandas import (
DataFrame,
ExcelWriter,
+ Index,
date_range,
read_excel,
)
-from ..pandas_vb_common import tm
-
def _generate_dataframe():
N = 2000
@@ -27,7 +26,7 @@ def _generate_dataframe():
columns=[f"float{i}" for i in range(C)],
index=date_range("20000101", periods=N, freq="h"),
)
- df["object"] = tm.makeStringIndex(N)
+ df["object"] = Index([f"i-{i}" for i in range(N)], dtype=object)
return df
diff --git a/asv_bench/benchmarks/io/hdf.py b/asv_bench/benchmarks/io/hdf.py
index 195aaa158e178..acf0ec4b9d359 100644
--- a/asv_bench/benchmarks/io/hdf.py
+++ b/asv_bench/benchmarks/io/hdf.py
@@ -3,20 +3,18 @@
from pandas import (
DataFrame,
HDFStore,
+ Index,
date_range,
read_hdf,
)
-from ..pandas_vb_common import (
- BaseIO,
- tm,
-)
+from ..pandas_vb_common import BaseIO
class HDFStoreDataFrame(BaseIO):
def setup(self):
N = 25000
- index = tm.makeStringIndex(N)
+ index = Index([f"i-{i}" for i in range(N)], dtype=object)
self.df = DataFrame(
{"float1": np.random.randn(N), "float2": np.random.randn(N)}, index=index
)
@@ -124,7 +122,7 @@ def setup(self, format):
columns=[f"float{i}" for i in range(C)],
index=date_range("20000101", periods=N, freq="h"),
)
- self.df["object"] = tm.makeStringIndex(N)
+ self.df["object"] = Index([f"i-{i}" for i in range(N)], dtype=object)
self.df.to_hdf(self.fname, "df", format=format)
# Numeric df
diff --git a/asv_bench/benchmarks/io/json.py b/asv_bench/benchmarks/io/json.py
index 8a2e3fa87eb37..bcbfcdea42dd9 100644
--- a/asv_bench/benchmarks/io/json.py
+++ b/asv_bench/benchmarks/io/json.py
@@ -4,6 +4,7 @@
from pandas import (
DataFrame,
+ Index,
concat,
date_range,
json_normalize,
@@ -11,10 +12,7 @@
timedelta_range,
)
-from ..pandas_vb_common import (
- BaseIO,
- tm,
-)
+from ..pandas_vb_common import BaseIO
class ReadJSON(BaseIO):
@@ -114,7 +112,7 @@ def setup(self, orient, frame):
ints = np.random.randint(100000000, size=N)
longints = sys.maxsize * np.random.randint(100000000, size=N)
floats = np.random.randn(N)
- strings = tm.makeStringIndex(N)
+ strings = Index([f"i-{i}" for i in range(N)], dtype=object)
self.df = DataFrame(np.random.randn(N, ncols), index=np.arange(N))
self.df_date_idx = DataFrame(np.random.randn(N, ncols), index=index)
self.df_td_int_ts = DataFrame(
@@ -220,7 +218,7 @@ def setup(self):
ints = np.random.randint(100000000, size=N)
longints = sys.maxsize * np.random.randint(100000000, size=N)
floats = np.random.randn(N)
- strings = tm.makeStringIndex(N)
+ strings = Index([f"i-{i}" for i in range(N)], dtype=object)
self.df = DataFrame(np.random.randn(N, ncols), index=np.arange(N))
self.df_date_idx = DataFrame(np.random.randn(N, ncols), index=index)
self.df_td_int_ts = DataFrame(
diff --git a/asv_bench/benchmarks/io/pickle.py b/asv_bench/benchmarks/io/pickle.py
index 54631d9236887..4787b57b54756 100644
--- a/asv_bench/benchmarks/io/pickle.py
+++ b/asv_bench/benchmarks/io/pickle.py
@@ -2,14 +2,12 @@
from pandas import (
DataFrame,
+ Index,
date_range,
read_pickle,
)
-from ..pandas_vb_common import (
- BaseIO,
- tm,
-)
+from ..pandas_vb_common import BaseIO
class Pickle(BaseIO):
@@ -22,7 +20,7 @@ def setup(self):
columns=[f"float{i}" for i in range(C)],
index=date_range("20000101", periods=N, freq="h"),
)
- self.df["object"] = tm.makeStringIndex(N)
+ self.df["object"] = Index([f"i-{i}" for i in range(N)], dtype=object)
self.df.to_pickle(self.fname)
def time_read_pickle(self):
diff --git a/asv_bench/benchmarks/io/sql.py b/asv_bench/benchmarks/io/sql.py
index 6f893ee72d918..e87cc4aaa80c7 100644
--- a/asv_bench/benchmarks/io/sql.py
+++ b/asv_bench/benchmarks/io/sql.py
@@ -5,13 +5,12 @@
from pandas import (
DataFrame,
+ Index,
date_range,
read_sql_query,
read_sql_table,
)
-from ..pandas_vb_common import tm
-
class SQL:
params = ["sqlalchemy", "sqlite"]
@@ -35,7 +34,7 @@ def setup(self, connection):
"int": np.random.randint(0, N, size=N),
"datetime": date_range("2000-01-01", periods=N, freq="s"),
},
- index=tm.makeStringIndex(N),
+ index=Index([f"i-{i}" for i in range(N)], dtype=object),
)
self.df.iloc[1000:3000, 1] = np.nan
self.df["date"] = self.df["datetime"].dt.date
@@ -84,7 +83,7 @@ def setup(self, connection, dtype):
"int": np.random.randint(0, N, size=N),
"datetime": date_range("2000-01-01", periods=N, freq="s"),
},
- index=tm.makeStringIndex(N),
+ index=Index([f"i-{i}" for i in range(N)], dtype=object),
)
self.df.iloc[1000:3000, 1] = np.nan
self.df["date"] = self.df["datetime"].dt.date
@@ -113,7 +112,7 @@ def setup(self):
"int": np.random.randint(0, N, size=N),
"datetime": date_range("2000-01-01", periods=N, freq="s"),
},
- index=tm.makeStringIndex(N),
+ index=Index([f"i-{i}" for i in range(N)], dtype=object),
)
self.df.iloc[1000:3000, 1] = np.nan
self.df["date"] = self.df["datetime"].dt.date
@@ -159,7 +158,7 @@ def setup(self, dtype):
"int": np.random.randint(0, N, size=N),
"datetime": date_range("2000-01-01", periods=N, freq="s"),
},
- index=tm.makeStringIndex(N),
+ index=Index([f"i-{i}" for i in range(N)], dtype=object),
)
self.df.iloc[1000:3000, 1] = np.nan
self.df["date"] = self.df["datetime"].dt.date
diff --git a/asv_bench/benchmarks/io/stata.py b/asv_bench/benchmarks/io/stata.py
index 750bcf4ccee5c..ff33ededdfed9 100644
--- a/asv_bench/benchmarks/io/stata.py
+++ b/asv_bench/benchmarks/io/stata.py
@@ -2,14 +2,12 @@
from pandas import (
DataFrame,
+ Index,
date_range,
read_stata,
)
-from ..pandas_vb_common import (
- BaseIO,
- tm,
-)
+from ..pandas_vb_common import BaseIO
class Stata(BaseIO):
@@ -25,7 +23,7 @@ def setup(self, convert_dates):
columns=[f"float{i}" for i in range(C)],
index=date_range("20000101", periods=N, freq="h"),
)
- self.df["object"] = tm.makeStringIndex(self.N)
+ self.df["object"] = Index([f"i-{i}" for i in range(self.N)], dtype=object)
self.df["int8_"] = np.random.randint(
np.iinfo(np.int8).min, np.iinfo(np.int8).max - 27, N
)
diff --git a/asv_bench/benchmarks/join_merge.py b/asv_bench/benchmarks/join_merge.py
index 23824c2c748df..6f494562103c2 100644
--- a/asv_bench/benchmarks/join_merge.py
+++ b/asv_bench/benchmarks/join_merge.py
@@ -14,8 +14,6 @@
merge_asof,
)
-from .pandas_vb_common import tm
-
try:
from pandas import merge_ordered
except ImportError:
@@ -28,7 +26,7 @@ class Concat:
def setup(self, axis):
N = 1000
- s = Series(N, index=tm.makeStringIndex(N))
+ s = Series(N, index=Index([f"i-{i}" for i in range(N)], dtype=object))
self.series = [s[i:-i] for i in range(1, 10)] * 50
self.small_frames = [DataFrame(np.random.randn(5, 4))] * 1000
df = DataFrame(
@@ -94,7 +92,7 @@ def setup(self, dtype, structure, axis, sort):
elif dtype in ("int64", "Int64", "int64[pyarrow]"):
vals = np.arange(N, dtype=np.int64)
elif dtype in ("string[python]", "string[pyarrow]"):
- vals = tm.makeStringIndex(N)
+ vals = Index([f"i-{i}" for i in range(N)], dtype=object)
else:
raise NotImplementedError
@@ -122,8 +120,8 @@ class Join:
param_names = ["sort"]
def setup(self, sort):
- level1 = tm.makeStringIndex(10).values
- level2 = tm.makeStringIndex(1000).values
+ level1 = Index([f"i-{i}" for i in range(10)], dtype=object).values
+ level2 = Index([f"i-{i}" for i in range(1000)], dtype=object).values
codes1 = np.arange(10).repeat(1000)
codes2 = np.tile(np.arange(1000), 10)
index2 = MultiIndex(levels=[level1, level2], codes=[codes1, codes2])
@@ -231,8 +229,8 @@ class Merge:
def setup(self, sort):
N = 10000
- indices = tm.makeStringIndex(N).values
- indices2 = tm.makeStringIndex(N).values
+ indices = Index([f"i-{i}" for i in range(N)], dtype=object).values
+ indices2 = Index([f"i-{i}" for i in range(N)], dtype=object).values
key = np.tile(indices[:8000], 10)
key2 = np.tile(indices2[:8000], 10)
self.left = DataFrame(
@@ -400,7 +398,7 @@ def time_merge_on_cat_idx(self):
class MergeOrdered:
def setup(self):
- groups = tm.makeStringIndex(10).values
+ groups = Index([f"i-{i}" for i in range(10)], dtype=object).values
self.left = DataFrame(
{
"group": groups.repeat(5000),
diff --git a/asv_bench/benchmarks/libs.py b/asv_bench/benchmarks/libs.py
index f041499c9c622..3419163bcfe09 100644
--- a/asv_bench/benchmarks/libs.py
+++ b/asv_bench/benchmarks/libs.py
@@ -15,13 +15,11 @@
from pandas import (
NA,
+ Index,
NaT,
)
-from .pandas_vb_common import (
- lib,
- tm,
-)
+from .pandas_vb_common import lib
try:
from pandas.util import cache_readonly
@@ -61,8 +59,8 @@ class FastZip:
def setup(self):
N = 10000
K = 10
- key1 = tm.makeStringIndex(N).values.repeat(K)
- key2 = tm.makeStringIndex(N).values.repeat(K)
+ key1 = Index([f"i-{i}" for i in range(N)], dtype=object).values.repeat(K)
+ key2 = Index([f"i-{i}" for i in range(N)], dtype=object).values.repeat(K)
col_array = np.vstack([key1, key2, np.random.randn(N * K)])
col_array2 = col_array.copy()
col_array2[:, :10000] = np.nan
diff --git a/asv_bench/benchmarks/multiindex_object.py b/asv_bench/benchmarks/multiindex_object.py
index 87dcdb16fa647..54788d41d83fe 100644
--- a/asv_bench/benchmarks/multiindex_object.py
+++ b/asv_bench/benchmarks/multiindex_object.py
@@ -5,6 +5,7 @@
from pandas import (
NA,
DataFrame,
+ Index,
MultiIndex,
RangeIndex,
Series,
@@ -12,8 +13,6 @@
date_range,
)
-from .pandas_vb_common import tm
-
class GetLoc:
def setup(self):
@@ -144,7 +143,11 @@ def time_is_monotonic(self):
class Duplicated:
def setup(self):
n, k = 200, 5000
- levels = [np.arange(n), tm.makeStringIndex(n).values, 1000 + np.arange(n)]
+ levels = [
+ np.arange(n),
+ Index([f"i-{i}" for i in range(n)], dtype=object).values,
+ 1000 + np.arange(n),
+ ]
codes = [np.random.choice(n, (k * n)) for lev in levels]
self.mi = MultiIndex(levels=levels, codes=codes)
@@ -249,7 +252,7 @@ def setup(self, index_structure, dtype, method, sort):
level2 = range(N // 1000)
int_left = MultiIndex.from_product([level1, level2])
- level2 = tm.makeStringIndex(N // 1000).values
+ level2 = Index([f"i-{i}" for i in range(N // 1000)], dtype=object).values
str_left = MultiIndex.from_product([level1, level2])
level2 = range(N // 1000)
@@ -293,7 +296,7 @@ def setup(self, dtype):
level2[0] = NA
ea_int_left = MultiIndex.from_product([level1, level2])
- level2 = tm.makeStringIndex(N // 1000).values
+ level2 = Index([f"i-{i}" for i in range(N // 1000)], dtype=object).values
str_left = MultiIndex.from_product([level1, level2])
data = {
@@ -354,7 +357,7 @@ def setup(self, dtype):
level2 = range(N // 1000)
int_midx = MultiIndex.from_product([level1, level2])
- level2 = tm.makeStringIndex(N // 1000).values
+ level2 = Index([f"i-{i}" for i in range(N // 1000)], dtype=object).values
str_midx = MultiIndex.from_product([level1, level2])
data = {
@@ -411,7 +414,7 @@ def setup(self, dtype):
elif dtype == "int64":
level2 = range(N2)
elif dtype == "string":
- level2 = tm.makeStringIndex(N2)
+ level2 = Index([f"i-{i}" for i in range(N2)], dtype=object)
else:
raise NotImplementedError
diff --git a/asv_bench/benchmarks/reindex.py b/asv_bench/benchmarks/reindex.py
index 1c5e6050db275..3d22bfce7e2b2 100644
--- a/asv_bench/benchmarks/reindex.py
+++ b/asv_bench/benchmarks/reindex.py
@@ -9,8 +9,6 @@
period_range,
)
-from .pandas_vb_common import tm
-
class Reindex:
def setup(self):
@@ -23,8 +21,8 @@ def setup(self):
)
N = 5000
K = 200
- level1 = tm.makeStringIndex(N).values.repeat(K)
- level2 = np.tile(tm.makeStringIndex(K).values, N)
+ level1 = Index([f"i-{i}" for i in range(N)], dtype=object).values.repeat(K)
+ level2 = np.tile(Index([f"i-{i}" for i in range(K)], dtype=object).values, N)
index = MultiIndex.from_arrays([level1, level2])
self.s = Series(np.random.randn(N * K), index=index)
self.s_subset = self.s[::2]
@@ -93,8 +91,8 @@ class DropDuplicates:
def setup(self, inplace):
N = 10000
K = 10
- key1 = tm.makeStringIndex(N).values.repeat(K)
- key2 = tm.makeStringIndex(N).values.repeat(K)
+ key1 = Index([f"i-{i}" for i in range(N)], dtype=object).values.repeat(K)
+ key2 = Index([f"i-{i}" for i in range(N)], dtype=object).values.repeat(K)
self.df = DataFrame(
{"key1": key1, "key2": key2, "value": np.random.randn(N * K)}
)
@@ -102,7 +100,9 @@ def setup(self, inplace):
self.df_nan.iloc[:10000, :] = np.nan
self.s = Series(np.random.randint(0, 1000, size=10000))
- self.s_str = Series(np.tile(tm.makeStringIndex(1000).values, 10))
+ self.s_str = Series(
+ np.tile(Index([f"i-{i}" for i in range(1000)], dtype=object).values, 10)
+ )
N = 1000000
K = 10000
@@ -133,7 +133,7 @@ class Align:
# blog "pandas escaped the zoo"
def setup(self):
n = 50000
- indices = tm.makeStringIndex(n)
+ indices = Index([f"i-{i}" for i in range(n)], dtype=object)
subsample_size = 40000
self.x = Series(np.random.randn(n), indices)
self.y = Series(
diff --git a/asv_bench/benchmarks/series_methods.py b/asv_bench/benchmarks/series_methods.py
index 79cf8f9cd2048..b021af4694d7d 100644
--- a/asv_bench/benchmarks/series_methods.py
+++ b/asv_bench/benchmarks/series_methods.py
@@ -10,8 +10,6 @@
date_range,
)
-from .pandas_vb_common import tm
-
class SeriesConstructor:
def setup(self):
@@ -253,7 +251,7 @@ def time_mode(self, N):
class Dir:
def setup(self):
- self.s = Series(index=tm.makeStringIndex(10000))
+ self.s = Series(index=Index([f"i-{i}" for i in range(10000)], dtype=object))
def time_dir_strings(self):
dir(self.s)
diff --git a/asv_bench/benchmarks/strings.py b/asv_bench/benchmarks/strings.py
index 6682a60f42997..1f4a104255057 100644
--- a/asv_bench/benchmarks/strings.py
+++ b/asv_bench/benchmarks/strings.py
@@ -6,12 +6,11 @@
NA,
Categorical,
DataFrame,
+ Index,
Series,
)
from pandas.arrays import StringArray
-from .pandas_vb_common import tm
-
class Dtypes:
params = ["str", "string[python]", "string[pyarrow]"]
@@ -19,7 +18,9 @@ class Dtypes:
def setup(self, dtype):
try:
- self.s = Series(tm.makeStringIndex(10**5), dtype=dtype)
+ self.s = Series(
+ Index([f"i-{i}" for i in range(10000)], dtype=object), dtype=dtype
+ )
except ImportError:
raise NotImplementedError
@@ -172,7 +173,7 @@ class Repeat:
def setup(self, repeats):
N = 10**5
- self.s = Series(tm.makeStringIndex(N))
+ self.s = Series(Index([f"i-{i}" for i in range(N)], dtype=object))
repeat = {"int": 1, "array": np.random.randint(1, 3, N)}
self.values = repeat[repeats]
@@ -187,13 +188,20 @@ class Cat:
def setup(self, other_cols, sep, na_rep, na_frac):
N = 10**5
mask_gen = lambda: np.random.choice([True, False], N, p=[1 - na_frac, na_frac])
- self.s = Series(tm.makeStringIndex(N)).where(mask_gen())
+ self.s = Series(Index([f"i-{i}" for i in range(N)], dtype=object)).where(
+ mask_gen()
+ )
if other_cols == 0:
# str.cat self-concatenates only for others=None
self.others = None
else:
self.others = DataFrame(
- {i: tm.makeStringIndex(N).where(mask_gen()) for i in range(other_cols)}
+ {
+ i: Index([f"i-{i}" for i in range(N)], dtype=object).where(
+ mask_gen()
+ )
+ for i in range(other_cols)
+ }
)
def time_cat(self, other_cols, sep, na_rep, na_frac):
@@ -254,7 +262,7 @@ def time_get_dummies(self, dtype):
class Encode:
def setup(self):
- self.ser = Series(tm.makeStringIndex())
+ self.ser = Series(Index([f"i-{i}" for i in range(10_000)], dtype=object))
def time_encode_decode(self):
self.ser.str.encode("utf-8").str.decode("utf-8")
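The benchmark changes above all apply one mechanical substitution. As a standalone sketch (not part of this patch), the old call and its replacement compare like this — note the replacement is deterministic rather than random:

```python
# tm.makeStringIndex(N) returned an object-dtype Index of N random
# 10-character strings; the replacement used throughout this diff builds
# a deterministic object-dtype Index from an f-string comprehension.
import pandas as pd

N = 5
new_index = pd.Index([f"i-{i}" for i in range(N)], dtype=object)
print(list(new_index))  # ['i-0', 'i-1', 'i-2', 'i-3', 'i-4']
print(new_index.dtype)  # object
```

For benchmarks this trade is deliberate: deterministic values make runs reproducible, and the string contents do not matter for the timed operations.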
diff --git a/pandas/_testing/__init__.py b/pandas/_testing/__init__.py
index a74fb2bf48bc4..c73d869b6c39c 100644
--- a/pandas/_testing/__init__.py
+++ b/pandas/_testing/__init__.py
@@ -370,11 +370,6 @@ def getCols(k) -> str:
return string.ascii_uppercase[:k]
-# make index
-def makeStringIndex(k: int = 10, name=None) -> Index:
- return Index(rands_array(nchars=10, size=k), name=name)
-
-
def makeCategoricalIndex(
k: int = 10, n: int = 3, name=None, **kwargs
) -> CategoricalIndex:
@@ -385,14 +380,6 @@ def makeCategoricalIndex(
)
-def makeBoolIndex(k: int = 10, name=None) -> Index:
- if k == 1:
- return Index([True], name=name)
- elif k == 2:
- return Index([False, True], name=name)
- return Index([False, True] + [False] * (k - 2), name=name)
-
-
def makeNumericIndex(k: int = 10, *, name=None, dtype: Dtype | None) -> Index:
dtype = pandas_dtype(dtype)
assert isinstance(dtype, np.dtype)
@@ -457,14 +444,13 @@ def makePeriodIndex(k: int = 10, name=None, **kwargs) -> PeriodIndex:
def makeObjectSeries(name=None) -> Series:
- data = makeStringIndex(_N)
- data = Index(data, dtype=object)
- index = makeStringIndex(_N)
- return Series(data, index=index, name=name)
+ data = [f"foo_{i}" for i in range(_N)]
+ index = Index([f"bar_{i}" for i in range(_N)])
+ return Series(data, index=index, name=name, dtype=object)
def getSeriesData() -> dict[str, Series]:
- index = makeStringIndex(_N)
+ index = Index([f"foo_{i}" for i in range(_N)])
return {
c: Series(np.random.default_rng(i).standard_normal(_N), index=index)
for i, c in enumerate(getCols(_K))
@@ -566,7 +552,7 @@ def makeCustomIndex(
idx_func_dict: dict[str, Callable[..., Index]] = {
"i": makeIntIndex,
"f": makeFloatIndex,
- "s": makeStringIndex,
+ "s": lambda n: Index([f"{i}_{chr(i)}" for i in range(97, 97 + n)]),
"dt": makeDateIndex,
"td": makeTimedeltaIndex,
"p": makePeriodIndex,
@@ -1049,7 +1035,6 @@ def shares_memory(left, right) -> bool:
"iat",
"iloc",
"loc",
- "makeBoolIndex",
"makeCategoricalIndex",
"makeCustomDataframe",
"makeCustomIndex",
@@ -1062,7 +1047,6 @@ def shares_memory(left, right) -> bool:
"makeObjectSeries",
"makePeriodIndex",
"makeRangeIndex",
- "makeStringIndex",
"makeTimeDataFrame",
"makeTimedeltaIndex",
"makeTimeSeries",
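The hunk above deletes `makeStringIndex` from `pandas._testing` outright. For third-party code that still depended on it, an equivalent helper is easy to reconstruct from the removed definition (`Index(rands_array(nchars=10, size=k), name=name)`); the version below is a hypothetical drop-in sketch using only public NumPy/pandas APIs, not anything this patch provides:

```python
import numpy as np
import pandas as pd

def make_string_index(k: int = 10, name=None) -> pd.Index:
    # Equivalent of the removed tm.makeStringIndex: an object-dtype
    # Index of k random 10-character lowercase strings.
    rng = np.random.default_rng()
    letters = np.array(list("abcdefghijklmnopqrstuvwxyz"))
    vals = ["".join(rng.choice(letters, size=10)) for _ in range(k)]
    return pd.Index(vals, name=name, dtype=object)

idx = make_string_index(3, name="strs")
print(len(idx), idx.dtype, idx.name)  # 3 object strs
```

Within pandas itself no shim was kept: every internal caller was migrated to an explicit `Index([...], dtype=object)` construction, as the rest of this diff shows.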
diff --git a/pandas/conftest.py b/pandas/conftest.py
index 350871c3085c1..3205b6657439f 100644
--- a/pandas/conftest.py
+++ b/pandas/conftest.py
@@ -610,7 +610,7 @@ def _create_mi_with_dt64tz_level():
indices_dict = {
- "string": tm.makeStringIndex(100),
+ "string": Index([f"pandas_{i}" for i in range(100)]),
"datetime": tm.makeDateIndex(100),
"datetime-tz": tm.makeDateIndex(100, tz="US/Pacific"),
"period": tm.makePeriodIndex(100),
@@ -626,7 +626,7 @@ def _create_mi_with_dt64tz_level():
"uint64": tm.makeUIntIndex(100, dtype="uint64"),
"float32": tm.makeFloatIndex(100, dtype="float32"),
"float64": tm.makeFloatIndex(100, dtype="float64"),
- "bool-object": tm.makeBoolIndex(10).astype(object),
+ "bool-object": Index([True, False] * 5, dtype=object),
"bool-dtype": Index(np.random.default_rng(2).standard_normal(10) < 0),
"complex64": tm.makeNumericIndex(100, dtype="float64").astype("complex64"),
"complex128": tm.makeNumericIndex(100, dtype="float64").astype("complex128"),
@@ -641,10 +641,12 @@ def _create_mi_with_dt64tz_level():
"nullable_uint": Index(np.arange(100), dtype="UInt16"),
"nullable_float": Index(np.arange(100), dtype="Float32"),
"nullable_bool": Index(np.arange(100).astype(bool), dtype="boolean"),
- "string-python": Index(pd.array(tm.makeStringIndex(100), dtype="string[python]")),
+ "string-python": Index(
+ pd.array([f"pandas_{i}" for i in range(100)], dtype="string[python]")
+ ),
}
if has_pyarrow:
- idx = Index(pd.array(tm.makeStringIndex(100), dtype="string[pyarrow]"))
+ idx = Index(pd.array([f"pandas_{i}" for i in range(100)], dtype="string[pyarrow]"))
indices_dict["string-pyarrow"] = idx
diff --git a/pandas/tests/arithmetic/test_numeric.py b/pandas/tests/arithmetic/test_numeric.py
index f89711c0edee7..c3c4a8b4fc6c0 100644
--- a/pandas/tests/arithmetic/test_numeric.py
+++ b/pandas/tests/arithmetic/test_numeric.py
@@ -916,7 +916,7 @@ def test_add_frames(self, first, second, expected):
# TODO: This came from series.test.test_operators, needs cleanup
def test_series_frame_radd_bug(self, fixed_now_ts):
# GH#353
- vals = Series(tm.makeStringIndex())
+ vals = Series([str(i) for i in range(5)])
result = "foo_" + vals
expected = vals.map(lambda x: "foo_" + x)
tm.assert_series_equal(result, expected)
diff --git a/pandas/tests/arithmetic/test_object.py b/pandas/tests/arithmetic/test_object.py
index 33953900c2006..6b36f447eb7d5 100644
--- a/pandas/tests/arithmetic/test_object.py
+++ b/pandas/tests/arithmetic/test_object.py
@@ -299,7 +299,7 @@ def test_iadd_string(self):
@pytest.mark.xfail(using_pyarrow_string_dtype(), reason="add doesn't work")
def test_add(self):
- index = tm.makeStringIndex(100)
+ index = pd.Index([str(i) for i in range(10)])
expected = pd.Index(index.values * 2)
tm.assert_index_equal(index + index, expected)
tm.assert_index_equal(index + index.tolist(), expected)
@@ -313,7 +313,7 @@ def test_add(self):
tm.assert_index_equal("1" + index, expected)
def test_sub_fail(self, using_infer_string):
- index = tm.makeStringIndex(100)
+ index = pd.Index([str(i) for i in range(10)])
if using_infer_string:
import pyarrow as pa
diff --git a/pandas/tests/frame/methods/test_first_valid_index.py b/pandas/tests/frame/methods/test_first_valid_index.py
index a448768f4173d..2e27f1aa71700 100644
--- a/pandas/tests/frame/methods/test_first_valid_index.py
+++ b/pandas/tests/frame/methods/test_first_valid_index.py
@@ -6,9 +6,10 @@
from pandas import (
DataFrame,
+ Index,
Series,
+ date_range,
)
-import pandas._testing as tm
class TestFirstValidIndex:
@@ -44,11 +45,12 @@ def test_first_last_valid_frame(self, data, idx, expected_first, expected_last):
assert expected_first == df.first_valid_index()
assert expected_last == df.last_valid_index()
- @pytest.mark.parametrize("index_func", [tm.makeStringIndex, tm.makeDateIndex])
- def test_first_last_valid(self, index_func):
- N = 30
- index = index_func(N)
- mat = np.random.default_rng(2).standard_normal(N)
+ @pytest.mark.parametrize(
+ "index",
+ [Index([str(i) for i in range(20)]), date_range("2020-01-01", periods=20)],
+ )
+ def test_first_last_valid(self, index):
+ mat = np.random.default_rng(2).standard_normal(len(index))
mat[:5] = np.nan
mat[-5:] = np.nan
@@ -60,10 +62,12 @@ def test_first_last_valid(self, index_func):
assert ser.first_valid_index() == frame.index[5]
assert ser.last_valid_index() == frame.index[-6]
- @pytest.mark.parametrize("index_func", [tm.makeStringIndex, tm.makeDateIndex])
- def test_first_last_valid_all_nan(self, index_func):
+ @pytest.mark.parametrize(
+ "index",
+ [Index([str(i) for i in range(10)]), date_range("2020-01-01", periods=10)],
+ )
+ def test_first_last_valid_all_nan(self, index):
# GH#17400: no valid entries
- index = index_func(30)
frame = DataFrame(np.nan, columns=["foo"], index=index)
assert frame.last_valid_index() is None
diff --git a/pandas/tests/frame/test_constructors.py b/pandas/tests/frame/test_constructors.py
index bf17b61b0e3f3..3111075c5c1a7 100644
--- a/pandas/tests/frame/test_constructors.py
+++ b/pandas/tests/frame/test_constructors.py
@@ -784,7 +784,7 @@ def test_constructor_dict_cast(self):
def test_constructor_dict_cast2(self):
# can't cast to float
test_data = {
- "A": dict(zip(range(20), tm.makeStringIndex(20))),
+ "A": dict(zip(range(20), [f"word_{i}" for i in range(20)])),
"B": dict(zip(range(15), np.random.default_rng(2).standard_normal(15))),
}
with pytest.raises(ValueError, match="could not convert string"):
diff --git a/pandas/tests/frame/test_repr.py b/pandas/tests/frame/test_repr.py
index 98962b3003b6d..eed48b9db116b 100644
--- a/pandas/tests/frame/test_repr.py
+++ b/pandas/tests/frame/test_repr.py
@@ -165,7 +165,7 @@ def test_repr_mixed_big(self):
biggie = DataFrame(
{
"A": np.random.default_rng(2).standard_normal(200),
- "B": tm.makeStringIndex(200),
+ "B": [str(i) for i in range(200)],
},
index=range(200),
)
diff --git a/pandas/tests/groupby/test_grouping.py b/pandas/tests/groupby/test_grouping.py
index 3c1a35c984031..c401762dace23 100644
--- a/pandas/tests/groupby/test_grouping.py
+++ b/pandas/tests/groupby/test_grouping.py
@@ -19,6 +19,7 @@
Series,
Timestamp,
date_range,
+ period_range,
)
import pandas._testing as tm
from pandas.core.groupby.grouper import Grouping
@@ -174,23 +175,21 @@ class TestGrouping:
@pytest.mark.parametrize(
"index",
[
- tm.makeFloatIndex,
- tm.makeStringIndex,
- tm.makeIntIndex,
- tm.makeDateIndex,
- tm.makePeriodIndex,
+ Index(list("abcde")),
+ Index(np.arange(5)),
+ Index(np.arange(5, dtype=float)),
+ date_range("2020-01-01", periods=5),
+ period_range("2020-01-01", periods=5),
],
)
- @pytest.mark.filterwarnings(r"ignore:PeriodDtype\[B\] is deprecated:FutureWarning")
def test_grouper_index_types(self, index):
# related GH5375
# groupby misbehaving when using a Floatlike index
- df = DataFrame(np.arange(10).reshape(5, 2), columns=list("AB"))
+ df = DataFrame(np.arange(10).reshape(5, 2), columns=list("AB"), index=index)
- df.index = index(len(df))
df.groupby(list("abcde"), group_keys=False).apply(lambda x: x)
- df.index = list(reversed(df.index.tolist()))
+ df.index = df.index[::-1]
df.groupby(list("abcde"), group_keys=False).apply(lambda x: x)
def test_grouper_multilevel_freq(self):
diff --git a/pandas/tests/indexes/multi/test_duplicates.py b/pandas/tests/indexes/multi/test_duplicates.py
index a69248cf038f8..6c6d9022b1af3 100644
--- a/pandas/tests/indexes/multi/test_duplicates.py
+++ b/pandas/tests/indexes/multi/test_duplicates.py
@@ -257,7 +257,7 @@ def test_duplicated(idx_dup, keep, expected):
def test_duplicated_hashtable_impl(keep, monkeypatch):
# GH 9125
n, k = 6, 10
- levels = [np.arange(n), tm.makeStringIndex(n), 1000 + np.arange(n)]
+ levels = [np.arange(n), [str(i) for i in range(n)], 1000 + np.arange(n)]
codes = [np.random.default_rng(2).choice(n, k * n) for _ in levels]
with monkeypatch.context() as m:
m.setattr(libindex, "_SIZE_CUTOFF", 50)
diff --git a/pandas/tests/indexing/test_floats.py b/pandas/tests/indexing/test_floats.py
index c9fbf95751dfe..cf9966145afce 100644
--- a/pandas/tests/indexing/test_floats.py
+++ b/pandas/tests/indexing/test_floats.py
@@ -6,6 +6,9 @@
Index,
RangeIndex,
Series,
+ date_range,
+ period_range,
+ timedelta_range,
)
import pandas._testing as tm
@@ -39,22 +42,21 @@ def check(self, result, original, indexer, getitem):
tm.assert_almost_equal(result, expected)
@pytest.mark.parametrize(
- "index_func",
+ "index",
[
- tm.makeStringIndex,
- tm.makeCategoricalIndex,
- tm.makeDateIndex,
- tm.makeTimedeltaIndex,
- tm.makePeriodIndex,
+ Index(list("abcde")),
+ Index(list("abcde"), dtype="category"),
+ date_range("2020-01-01", periods=5),
+ timedelta_range("1 day", periods=5),
+ period_range("2020-01-01", periods=5),
],
)
- def test_scalar_non_numeric(self, index_func, frame_or_series, indexer_sl):
+ def test_scalar_non_numeric(self, index, frame_or_series, indexer_sl):
# GH 4892
# float_indexers should raise exceptions
# on appropriate Index types & accessors
- i = index_func(5)
- s = gen_obj(frame_or_series, i)
+ s = gen_obj(frame_or_series, index)
# getting
with pytest.raises(KeyError, match="^3.0$"):
@@ -75,19 +77,18 @@ def test_scalar_non_numeric(self, index_func, frame_or_series, indexer_sl):
assert 3.0 not in s2.axes[-1]
@pytest.mark.parametrize(
- "index_func",
+ "index",
[
- tm.makeStringIndex,
- tm.makeCategoricalIndex,
- tm.makeDateIndex,
- tm.makeTimedeltaIndex,
- tm.makePeriodIndex,
+ Index(list("abcde")),
+ Index(list("abcde"), dtype="category"),
+ date_range("2020-01-01", periods=5),
+ timedelta_range("1 day", periods=5),
+ period_range("2020-01-01", periods=5),
],
)
- def test_scalar_non_numeric_series_fallback(self, index_func):
+ def test_scalar_non_numeric_series_fallback(self, index):
# fallsback to position selection, series only
- i = index_func(5)
- s = Series(np.arange(len(i)), index=i)
+ s = Series(np.arange(len(index)), index=index)
msg = "Series.__getitem__ treating keys as positions is deprecated"
with tm.assert_produces_warning(FutureWarning, match=msg):
@@ -214,21 +215,20 @@ def test_scalar_float(self, frame_or_series):
self.check(result, s, 3, False)
@pytest.mark.parametrize(
- "index_func",
+ "index",
[
- tm.makeStringIndex,
- tm.makeDateIndex,
- tm.makeTimedeltaIndex,
- tm.makePeriodIndex,
+ Index(list("abcde"), dtype=object),
+ date_range("2020-01-01", periods=5),
+ timedelta_range("1 day", periods=5),
+ period_range("2020-01-01", periods=5),
],
)
@pytest.mark.parametrize("idx", [slice(3.0, 4), slice(3, 4.0), slice(3.0, 4.0)])
- def test_slice_non_numeric(self, index_func, idx, frame_or_series, indexer_sli):
+ def test_slice_non_numeric(self, index, idx, frame_or_series, indexer_sli):
# GH 4892
# float_indexers should raise exceptions
# on appropriate Index types & accessors
- index = index_func(5)
s = gen_obj(frame_or_series, index)
# getitem
diff --git a/pandas/tests/io/formats/test_format.py b/pandas/tests/io/formats/test_format.py
index edac82193d1c8..f5738b83a8e64 100644
--- a/pandas/tests/io/formats/test_format.py
+++ b/pandas/tests/io/formats/test_format.py
@@ -25,7 +25,6 @@
read_csv,
reset_option,
)
-import pandas._testing as tm
from pandas.io.formats import printing
import pandas.io.formats.format as fmt
@@ -834,19 +833,21 @@ def test_to_string_buffer_all_unicode(self):
buf.getvalue()
@pytest.mark.parametrize(
- "index",
+ "index_scalar",
[
- tm.makeStringIndex,
- tm.makeIntIndex,
- tm.makeDateIndex,
- tm.makePeriodIndex,
+ "a" * 10,
+ 1,
+ Timestamp(2020, 1, 1),
+ pd.Period("2020-01-01"),
],
)
@pytest.mark.parametrize("h", [10, 20])
@pytest.mark.parametrize("w", [10, 20])
- def test_to_string_truncate_indices(self, index, h, w):
+ def test_to_string_truncate_indices(self, index_scalar, h, w):
with option_context("display.expand_frame_repr", False):
- df = DataFrame(index=index(h), columns=tm.makeStringIndex(w))
+ df = DataFrame(
+ index=[index_scalar] * h, columns=[str(i) * 10 for i in range(w)]
+ )
with option_context("display.max_rows", 15):
if h == 20:
assert has_vertically_truncated_repr(df)
diff --git a/pandas/tests/io/formats/test_to_html.py b/pandas/tests/io/formats/test_to_html.py
index a7648cf1c471a..605e5e182d8cc 100644
--- a/pandas/tests/io/formats/test_to_html.py
+++ b/pandas/tests/io/formats/test_to_html.py
@@ -59,7 +59,7 @@ def biggie_df_fixture(request):
df = DataFrame(
{
"A": np.random.default_rng(2).standard_normal(200),
- "B": tm.makeStringIndex(200),
+ "B": Index([f"{i}?!" for i in range(200)]),
},
index=np.arange(200),
)
diff --git a/pandas/tests/io/formats/test_to_string.py b/pandas/tests/io/formats/test_to_string.py
index e607b6eb454a1..02c20b0e25477 100644
--- a/pandas/tests/io/formats/test_to_string.py
+++ b/pandas/tests/io/formats/test_to_string.py
@@ -804,7 +804,7 @@ def test_to_string(self):
biggie = DataFrame(
{
"A": np.random.default_rng(2).standard_normal(200),
- "B": tm.makeStringIndex(200),
+ "B": Index([f"{i}?!" for i in range(200)]),
},
)
diff --git a/pandas/tests/io/pytables/test_put.py b/pandas/tests/io/pytables/test_put.py
index c109b72e9c239..59a05dc9ea546 100644
--- a/pandas/tests/io/pytables/test_put.py
+++ b/pandas/tests/io/pytables/test_put.py
@@ -196,32 +196,27 @@ def test_put_mixed_type(setup_path):
tm.assert_frame_equal(expected, df)
+@pytest.mark.parametrize("format", ["table", "fixed"])
@pytest.mark.parametrize(
- "format, index",
+ "index",
[
- ["table", tm.makeFloatIndex],
- ["table", tm.makeStringIndex],
- ["table", tm.makeIntIndex],
- ["table", tm.makeDateIndex],
- ["fixed", tm.makeFloatIndex],
- ["fixed", tm.makeStringIndex],
- ["fixed", tm.makeIntIndex],
- ["fixed", tm.makeDateIndex],
- ["table", tm.makePeriodIndex], # GH#7796
- ["fixed", tm.makePeriodIndex],
+ Index([str(i) for i in range(10)]),
+ Index(np.arange(10, dtype=float)),
+ Index(np.arange(10)),
+ date_range("2020-01-01", periods=10),
+ pd.period_range("2020-01-01", periods=10),
],
)
-@pytest.mark.filterwarnings(r"ignore:PeriodDtype\[B\] is deprecated:FutureWarning")
def test_store_index_types(setup_path, format, index):
# GH5386
# test storing various index types
with ensure_clean_store(setup_path) as store:
df = DataFrame(
- np.random.default_rng(2).standard_normal((10, 2)), columns=list("AB")
+ np.random.default_rng(2).standard_normal((10, 2)),
+ columns=list("AB"),
+ index=index,
)
- df.index = index(len(df))
-
_maybe_remove(store, "df")
store.put("df", df, format=format)
tm.assert_frame_equal(df, store["df"])
diff --git a/pandas/tests/io/pytables/test_round_trip.py b/pandas/tests/io/pytables/test_round_trip.py
index 6c24843f18d0d..d06935871cb56 100644
--- a/pandas/tests/io/pytables/test_round_trip.py
+++ b/pandas/tests/io/pytables/test_round_trip.py
@@ -51,7 +51,8 @@ def roundtrip(key, obj, **kwargs):
def test_long_strings(setup_path):
# GH6166
- df = DataFrame({"a": tm.makeStringIndex(10)}, index=tm.makeStringIndex(10))
+ data = ["a" * 50] * 10
+ df = DataFrame({"a": data}, index=data)
with ensure_clean_store(setup_path) as store:
store.append("df", df, data_columns=["a"])
@@ -65,7 +66,7 @@ def test_api(tmp_path, setup_path):
# API issue when to_hdf doesn't accept append AND format args
path = tmp_path / setup_path
- df = tm.makeDataFrame()
+ df = DataFrame(range(20))
df.iloc[:10].to_hdf(path, key="df", append=True, format="table")
df.iloc[10:].to_hdf(path, key="df", append=True, format="table")
tm.assert_frame_equal(read_hdf(path, "df"), df)
@@ -79,7 +80,7 @@ def test_api(tmp_path, setup_path):
def test_api_append(tmp_path, setup_path):
path = tmp_path / setup_path
- df = tm.makeDataFrame()
+ df = DataFrame(range(20))
df.iloc[:10].to_hdf(path, key="df", append=True)
df.iloc[10:].to_hdf(path, key="df", append=True, format="table")
tm.assert_frame_equal(read_hdf(path, "df"), df)
@@ -93,7 +94,7 @@ def test_api_append(tmp_path, setup_path):
def test_api_2(tmp_path, setup_path):
path = tmp_path / setup_path
- df = tm.makeDataFrame()
+ df = DataFrame(range(20))
df.to_hdf(path, key="df", append=False, format="fixed")
tm.assert_frame_equal(read_hdf(path, "df"), df)
@@ -107,7 +108,7 @@ def test_api_2(tmp_path, setup_path):
tm.assert_frame_equal(read_hdf(path, "df"), df)
with ensure_clean_store(setup_path) as store:
- df = tm.makeDataFrame()
+ df = DataFrame(range(20))
_maybe_remove(store, "df")
store.append("df", df.iloc[:10], append=True, format="table")
diff --git a/pandas/tests/io/pytables/test_store.py b/pandas/tests/io/pytables/test_store.py
index 96c160ab40bd8..df8a1e3cb7470 100644
--- a/pandas/tests/io/pytables/test_store.py
+++ b/pandas/tests/io/pytables/test_store.py
@@ -956,10 +956,6 @@ def test_to_hdf_with_object_column_names(tmp_path, setup_path):
tm.makeTimedeltaIndex,
tm.makePeriodIndex,
]
- types_should_run = [
- tm.makeStringIndex,
- tm.makeCategoricalIndex,
- ]
for index in types_should_fail:
df = DataFrame(
@@ -970,14 +966,18 @@ def test_to_hdf_with_object_column_names(tmp_path, setup_path):
with pytest.raises(ValueError, match=msg):
df.to_hdf(path, key="df", format="table", data_columns=True)
- for index in types_should_run:
- df = DataFrame(
- np.random.default_rng(2).standard_normal((10, 2)), columns=index(2)
- )
- path = tmp_path / setup_path
- df.to_hdf(path, key="df", format="table", data_columns=True)
- result = read_hdf(path, "df", where=f"index = [{df.index[0]}]")
- assert len(result)
+
+@pytest.mark.parametrize("dtype", [None, "category"])
+def test_to_hdf_with_object_column_names_should_run(tmp_path, setup_path, dtype):
+ # GH9057
+ df = DataFrame(
+ np.random.default_rng(2).standard_normal((10, 2)),
+ columns=Index(["a", "b"], dtype=dtype),
+ )
+ path = tmp_path / setup_path
+ df.to_hdf(path, key="df", format="table", data_columns=True)
+ result = read_hdf(path, "df", where=f"index = [{df.index[0]}]")
+ assert len(result)
def test_hdfstore_strides(setup_path):
diff --git a/pandas/tests/reductions/test_reductions.py b/pandas/tests/reductions/test_reductions.py
index 9c4ae92224148..303f8550c5a80 100644
--- a/pandas/tests/reductions/test_reductions.py
+++ b/pandas/tests/reductions/test_reductions.py
@@ -33,13 +33,13 @@
def get_objs():
indexes = [
- tm.makeBoolIndex(10, name="a"),
+ Index([True, False] * 5, name="a"),
tm.makeIntIndex(10, name="a"),
tm.makeFloatIndex(10, name="a"),
tm.makeDateIndex(10, name="a"),
tm.makeDateIndex(10, name="a").tz_localize(tz="US/Eastern"),
tm.makePeriodIndex(10, name="a"),
- tm.makeStringIndex(10, name="a"),
+ Index([str(i) for i in range(10)], name="a"),
]
arr = np.random.default_rng(2).standard_normal(10)
diff --git a/pandas/tests/resample/test_time_grouper.py b/pandas/tests/resample/test_time_grouper.py
index 1dc25cb9d4c1e..3d9098917a12d 100644
--- a/pandas/tests/resample/test_time_grouper.py
+++ b/pandas/tests/resample/test_time_grouper.py
@@ -87,19 +87,17 @@ def f(df):
@pytest.mark.parametrize(
- "func",
+ "index",
[
- tm.makeIntIndex,
- tm.makeStringIndex,
- tm.makeFloatIndex,
- (lambda m: tm.makeCustomIndex(m, 2)),
+ Index([1, 2]),
+ Index(["a", "b"]),
+ Index([1.1, 2.2]),
+ pd.MultiIndex.from_arrays([[1, 2], ["a", "b"]]),
],
)
-def test_fails_on_no_datetime_index(func):
- n = 2
- index = func(n)
+def test_fails_on_no_datetime_index(index):
name = type(index).__name__
- df = DataFrame({"a": np.random.default_rng(2).standard_normal(n)}, index=index)
+ df = DataFrame({"a": range(len(index))}, index=index)
msg = (
"Only valid with DatetimeIndex, TimedeltaIndex "
diff --git a/pandas/tests/reshape/merge/test_multi.py b/pandas/tests/reshape/merge/test_multi.py
index f5d78fbd44812..269d3a2b7078e 100644
--- a/pandas/tests/reshape/merge/test_multi.py
+++ b/pandas/tests/reshape/merge/test_multi.py
@@ -188,7 +188,7 @@ def test_merge_multiple_cols_with_mixed_cols_index(self):
def test_compress_group_combinations(self):
# ~ 40000000 possible unique groups
- key1 = tm.makeStringIndex(10000)
+ key1 = [str(i) for i in range(10000)]
key1 = np.tile(key1, 2)
key2 = key1[::-1]
diff --git a/pandas/tests/series/methods/test_combine_first.py b/pandas/tests/series/methods/test_combine_first.py
index 795b2eab82aca..1cbb7c7982802 100644
--- a/pandas/tests/series/methods/test_combine_first.py
+++ b/pandas/tests/series/methods/test_combine_first.py
@@ -51,9 +51,9 @@ def test_combine_first(self):
tm.assert_series_equal(combined[1::2], series_copy[1::2])
# mixed types
- index = tm.makeStringIndex(20)
+ index = pd.Index([str(i) for i in range(20)])
floats = Series(np.random.default_rng(2).standard_normal(20), index=index)
- strings = Series(tm.makeStringIndex(10), index=index[::2], dtype=object)
+ strings = Series([str(i) for i in range(10)], index=index[::2], dtype=object)
combined = strings.combine_first(floats)
diff --git a/pandas/tests/series/test_api.py b/pandas/tests/series/test_api.py
index a39b3ff7e6f2b..c29fe6ba06ab4 100644
--- a/pandas/tests/series/test_api.py
+++ b/pandas/tests/series/test_api.py
@@ -68,7 +68,7 @@ def test_tab_completion_with_categorical(self):
@pytest.mark.parametrize(
"index",
[
- tm.makeStringIndex(10),
+ Index([str(i) for i in range(10)]),
tm.makeCategoricalIndex(10),
Index(["foo", "bar", "baz"] * 2),
tm.makeDateIndex(10),
diff --git a/pandas/tests/test_algos.py b/pandas/tests/test_algos.py
index a6e63dfd5f409..24c4706810154 100644
--- a/pandas/tests/test_algos.py
+++ b/pandas/tests/test_algos.py
@@ -1663,19 +1663,18 @@ def test_unique_complex_numbers(self, array, expected):
class TestHashTable:
@pytest.mark.parametrize(
- "htable, tm_dtype",
+ "htable, data",
[
- (ht.PyObjectHashTable, "String"),
- (ht.StringHashTable, "String"),
- (ht.Float64HashTable, "Float"),
- (ht.Int64HashTable, "Int"),
- (ht.UInt64HashTable, "UInt"),
+ (ht.PyObjectHashTable, [f"foo_{i}" for i in range(1000)]),
+ (ht.StringHashTable, [f"foo_{i}" for i in range(1000)]),
+ (ht.Float64HashTable, np.arange(1000, dtype=np.float64)),
+ (ht.Int64HashTable, np.arange(1000, dtype=np.int64)),
+ (ht.UInt64HashTable, np.arange(1000, dtype=np.uint64)),
],
)
- def test_hashtable_unique(self, htable, tm_dtype, writable):
+ def test_hashtable_unique(self, htable, data, writable):
# output of maker has guaranteed unique elements
- maker = getattr(tm, "make" + tm_dtype + "Index")
- s = Series(maker(1000))
+ s = Series(data)
if htable == ht.Float64HashTable:
# add NaN for float column
s.loc[500] = np.nan
@@ -1703,19 +1702,18 @@ def test_hashtable_unique(self, htable, tm_dtype, writable):
tm.assert_numpy_array_equal(reconstr, s_duplicated.values)
@pytest.mark.parametrize(
- "htable, tm_dtype",
+ "htable, data",
[
- (ht.PyObjectHashTable, "String"),
- (ht.StringHashTable, "String"),
- (ht.Float64HashTable, "Float"),
- (ht.Int64HashTable, "Int"),
- (ht.UInt64HashTable, "UInt"),
+ (ht.PyObjectHashTable, [f"foo_{i}" for i in range(1000)]),
+ (ht.StringHashTable, [f"foo_{i}" for i in range(1000)]),
+ (ht.Float64HashTable, np.arange(1000, dtype=np.float64)),
+ (ht.Int64HashTable, np.arange(1000, dtype=np.int64)),
+ (ht.UInt64HashTable, np.arange(1000, dtype=np.uint64)),
],
)
- def test_hashtable_factorize(self, htable, tm_dtype, writable):
+ def test_hashtable_factorize(self, htable, writable, data):
# output of maker has guaranteed unique elements
- maker = getattr(tm, "make" + tm_dtype + "Index")
- s = Series(maker(1000))
+ s = Series(data)
if htable == ht.Float64HashTable:
# add NaN for float column
s.loc[500] = np.nan
diff --git a/pandas/tests/tseries/frequencies/test_inference.py b/pandas/tests/tseries/frequencies/test_inference.py
index 75528a8b99c4d..5d22896d9d055 100644
--- a/pandas/tests/tseries/frequencies/test_inference.py
+++ b/pandas/tests/tseries/frequencies/test_inference.py
@@ -401,7 +401,7 @@ def test_invalid_index_types_unicode():
msg = "Unknown datetime string format"
with pytest.raises(ValueError, match=msg):
- frequencies.infer_freq(tm.makeStringIndex(10))
+ frequencies.infer_freq(Index(["ZqgszYBfuL"]))
def test_string_datetime_like_compat():
diff --git a/pandas/tests/util/test_hashing.py b/pandas/tests/util/test_hashing.py
index 00dc184a0ac4d..e7c4c27714d5f 100644
--- a/pandas/tests/util/test_hashing.py
+++ b/pandas/tests/util/test_hashing.py
@@ -328,9 +328,9 @@ def test_alternate_encoding(index):
@pytest.mark.parametrize("l_add", [0, 1])
def test_same_len_hash_collisions(l_exp, l_add):
length = 2 ** (l_exp + 8) + l_add
- s = tm.makeStringIndex(length).to_numpy()
+ idx = np.array([str(i) for i in range(length)], dtype=object)
- result = hash_array(s, "utf8")
+ result = hash_array(idx, "utf8")
assert not result[0] == result[1]
| - [ ] closes #xxxx (Replace xxxx with the GitHub issue number)
- [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
| https://api.github.com/repos/pandas-dev/pandas/pulls/56155 | 2023-11-24T20:48:49Z | 2023-11-27T02:59:21Z | 2023-11-27T02:59:21Z | 2023-11-27T02:59:24Z |
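The diff in the record above replaces pandas' deprecated test-only index makers (`tm.makeStringIndex`, `tm.makeIntIndex`, …) with explicit constructors. The sketch below restates that mapping with plain pandas calls; the `was tm.make…` comments are taken from the removed lines in the diff, and the DataFrame mirrors the rewritten `test_store_index_types` body. It is an illustration of the refactor, not part of the PR itself.

```python
import numpy as np
import pandas as pd

# Plain-pandas equivalents of the indexes the parametrized tests now
# build directly, instead of calling the deprecated tm.make*Index helpers:
string_idx = pd.Index([str(i) for i in range(10)])      # was tm.makeStringIndex(10)
float_idx = pd.Index(np.arange(10, dtype=float))        # was tm.makeFloatIndex(10)
int_idx = pd.Index(np.arange(10))                       # was tm.makeIntIndex(10)
dt_idx = pd.date_range("2020-01-01", periods=10)        # was tm.makeDateIndex(10)
period_idx = pd.period_range("2020-01-01", periods=10)  # was tm.makePeriodIndex(10)

# A DataFrame accepts any of these directly as its index, which is why the
# rewritten test passes `index=index` to the constructor instead of
# assigning `df.index = index(len(df))` afterwards:
df = pd.DataFrame(
    np.random.default_rng(2).standard_normal((10, 2)),
    columns=list("AB"),
    index=string_idx,
)
```

The same one-line substitutions recur across every test file touched in the diff, which is what makes the PR almost entirely mechanical.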
TST/CLN: make equalContents more strict | diff --git a/pandas/_testing/__init__.py b/pandas/_testing/__init__.py
index 832919db442d4..51de242522074 100644
--- a/pandas/_testing/__init__.py
+++ b/pandas/_testing/__init__.py
@@ -302,13 +302,6 @@ def reset_display_options() -> None:
# Comparators
-def equalContents(arr1, arr2) -> bool:
- """
- Checks if the set of unique elements of arr1 and arr2 are equivalent.
- """
- return frozenset(arr1) == frozenset(arr2)
-
-
def box_expected(expected, box_cls, transpose: bool = True):
"""
Helper function to wrap the expected output of a test in a given box_class.
@@ -1131,7 +1124,6 @@ def shares_memory(left, right) -> bool:
"EMPTY_STRING_PATTERN",
"ENDIAN",
"ensure_clean",
- "equalContents",
"external_error_raised",
"FLOAT_EA_DTYPES",
"FLOAT_NUMPY_DTYPES",
diff --git a/pandas/tests/frame/indexing/test_indexing.py b/pandas/tests/frame/indexing/test_indexing.py
index b2d94ff5ffbd1..135a86cad1395 100644
--- a/pandas/tests/frame/indexing/test_indexing.py
+++ b/pandas/tests/frame/indexing/test_indexing.py
@@ -48,7 +48,7 @@ def test_getitem(self, float_frame):
# Column access
for _, series in sl.items():
assert len(series.index) == 20
- assert tm.equalContents(series.index, sl.index)
+ tm.assert_index_equal(series.index, sl.index)
for key, _ in float_frame._series.items():
assert float_frame[key] is not None
diff --git a/pandas/tests/frame/methods/test_combine_first.py b/pandas/tests/frame/methods/test_combine_first.py
index 1779f703dd2b7..0335279b3a123 100644
--- a/pandas/tests/frame/methods/test_combine_first.py
+++ b/pandas/tests/frame/methods/test_combine_first.py
@@ -37,7 +37,7 @@ def test_combine_first(self, float_frame):
combined = head.combine_first(tail)
reordered_frame = float_frame.reindex(combined.index)
tm.assert_frame_equal(combined, reordered_frame)
- assert tm.equalContents(combined.columns, float_frame.columns)
+ tm.assert_index_equal(combined.columns, float_frame.columns)
tm.assert_series_equal(combined["A"], reordered_frame["A"])
# same index
diff --git a/pandas/tests/frame/methods/test_reindex.py b/pandas/tests/frame/methods/test_reindex.py
index b57b4f4422888..b6a6334b89fc1 100644
--- a/pandas/tests/frame/methods/test_reindex.py
+++ b/pandas/tests/frame/methods/test_reindex.py
@@ -624,7 +624,7 @@ def test_reindex(self, float_frame, using_copy_on_write):
assert np.isnan(val)
for col, series in newFrame.items():
- assert tm.equalContents(series.index, newFrame.index)
+ tm.assert_index_equal(series.index, newFrame.index)
emptyFrame = float_frame.reindex(Index([]))
assert len(emptyFrame.index) == 0
@@ -642,7 +642,7 @@ def test_reindex(self, float_frame, using_copy_on_write):
assert np.isnan(val)
for col, series in nonContigFrame.items():
- assert tm.equalContents(series.index, nonContigFrame.index)
+ tm.assert_index_equal(series.index, nonContigFrame.index)
# corner cases
diff --git a/pandas/tests/frame/test_iteration.py b/pandas/tests/frame/test_iteration.py
index 7374a8ea6aa77..a1c23ff05f3e1 100644
--- a/pandas/tests/frame/test_iteration.py
+++ b/pandas/tests/frame/test_iteration.py
@@ -40,7 +40,7 @@ def test_items_names(self, float_string_frame):
assert v.name == k
def test_iter(self, float_frame):
- assert tm.equalContents(list(float_frame), float_frame.columns)
+ assert list(float_frame) == list(float_frame.columns)
def test_iterrows(self, float_frame, float_string_frame):
for k, v in float_frame.iterrows():
diff --git a/pandas/tests/indexes/base_class/test_setops.py b/pandas/tests/indexes/base_class/test_setops.py
index 488f79eea0d11..e538ad512d691 100644
--- a/pandas/tests/indexes/base_class/test_setops.py
+++ b/pandas/tests/indexes/base_class/test_setops.py
@@ -12,6 +12,13 @@
from pandas.core.algorithms import safe_sort
+def equal_contents(arr1, arr2) -> bool:
+ """
+ Checks if the set of unique elements of arr1 and arr2 are equivalent.
+ """
+ return frozenset(arr1) == frozenset(arr2)
+
+
class TestIndexSetOps:
@pytest.mark.parametrize(
"method", ["union", "intersection", "difference", "symmetric_difference"]
@@ -71,7 +78,7 @@ def test_union_different_type_base(self, klass):
result = first.union(klass(second.values))
- assert tm.equalContents(result, index)
+ assert equal_contents(result, index)
def test_union_sort_other_incomparable(self):
# https://github.com/pandas-dev/pandas/issues/24959
@@ -119,7 +126,7 @@ def test_intersection_different_type_base(self, klass, sort):
second = index[:3]
result = first.intersection(klass(second.values), sort=sort)
- assert tm.equalContents(result, second)
+ assert equal_contents(result, second)
def test_intersection_nosort(self):
result = Index(["c", "b", "a"]).intersection(["b", "a"])
@@ -244,7 +251,7 @@ def test_union_name_preservation(
tm.assert_index_equal(union, expected)
else:
expected = Index(vals, name=expected_name)
- tm.equalContents(union, expected)
+ tm.assert_index_equal(union.sort_values(), expected.sort_values())
@pytest.mark.parametrize(
"diff_type, expected",
diff --git a/pandas/tests/indexes/datetimes/test_setops.py b/pandas/tests/indexes/datetimes/test_setops.py
index dde680665a8bc..78c23e47897cf 100644
--- a/pandas/tests/indexes/datetimes/test_setops.py
+++ b/pandas/tests/indexes/datetimes/test_setops.py
@@ -206,13 +206,13 @@ def test_intersection2(self):
first = tm.makeDateIndex(10)
second = first[5:]
intersect = first.intersection(second)
- assert tm.equalContents(intersect, second)
+ tm.assert_index_equal(intersect, second)
# GH 10149
cases = [klass(second.values) for klass in [np.array, Series, list]]
for case in cases:
result = first.intersection(case)
- assert tm.equalContents(result, second)
+ tm.assert_index_equal(result, second)
third = Index(["a", "b", "c"])
result = first.intersection(third)
diff --git a/pandas/tests/indexes/interval/test_setops.py b/pandas/tests/indexes/interval/test_setops.py
index 059b0b75f4190..1b0816a9405cb 100644
--- a/pandas/tests/indexes/interval/test_setops.py
+++ b/pandas/tests/indexes/interval/test_setops.py
@@ -25,14 +25,16 @@ def test_union(self, closed, sort):
expected = monotonic_index(0, 13, closed=closed)
result = index[::-1].union(other, sort=sort)
- if sort is None:
+ if sort in (None, True):
tm.assert_index_equal(result, expected)
- assert tm.equalContents(result, expected)
+ else:
+ tm.assert_index_equal(result.sort_values(), expected)
result = other[::-1].union(index, sort=sort)
- if sort is None:
+ if sort in (None, True):
tm.assert_index_equal(result, expected)
- assert tm.equalContents(result, expected)
+ else:
+ tm.assert_index_equal(result.sort_values(), expected)
tm.assert_index_equal(index.union(index, sort=sort), index)
tm.assert_index_equal(index.union(index[:1], sort=sort), index)
@@ -65,14 +67,16 @@ def test_intersection(self, closed, sort):
expected = monotonic_index(5, 11, closed=closed)
result = index[::-1].intersection(other, sort=sort)
- if sort is None:
+ if sort in (None, True):
tm.assert_index_equal(result, expected)
- assert tm.equalContents(result, expected)
+ else:
+ tm.assert_index_equal(result.sort_values(), expected)
result = other[::-1].intersection(index, sort=sort)
- if sort is None:
+ if sort in (None, True):
tm.assert_index_equal(result, expected)
- assert tm.equalContents(result, expected)
+ else:
+ tm.assert_index_equal(result.sort_values(), expected)
tm.assert_index_equal(index.intersection(index, sort=sort), index)
@@ -148,16 +152,18 @@ def test_symmetric_difference(self, closed, sort):
index = monotonic_index(0, 11, closed=closed)
result = index[1:].symmetric_difference(index[:-1], sort=sort)
expected = IntervalIndex([index[0], index[-1]])
- if sort is None:
+ if sort in (None, True):
tm.assert_index_equal(result, expected)
- assert tm.equalContents(result, expected)
+ else:
+ tm.assert_index_equal(result.sort_values(), expected)
# GH 19101: empty result, same dtype
result = index.symmetric_difference(index, sort=sort)
expected = empty_index(dtype="int64", closed=closed)
- if sort is None:
+ if sort in (None, True):
tm.assert_index_equal(result, expected)
- assert tm.equalContents(result, expected)
+ else:
+ tm.assert_index_equal(result.sort_values(), expected)
# GH 19101: empty result, different dtypes
other = IntervalIndex.from_arrays(
diff --git a/pandas/tests/indexes/multi/test_setops.py b/pandas/tests/indexes/multi/test_setops.py
index 2b4107acee096..be266f5d8fdce 100644
--- a/pandas/tests/indexes/multi/test_setops.py
+++ b/pandas/tests/indexes/multi/test_setops.py
@@ -243,10 +243,10 @@ def test_union(idx, sort):
the_union = piece1.union(piece2, sort=sort)
- if sort is None:
- tm.assert_index_equal(the_union, idx.sort_values())
-
- assert tm.equalContents(the_union, idx)
+ if sort in (None, False):
+ tm.assert_index_equal(the_union.sort_values(), idx.sort_values())
+ else:
+ tm.assert_index_equal(the_union, idx)
# corner case, pass self or empty thing:
the_union = idx.union(idx, sort=sort)
@@ -258,7 +258,7 @@ def test_union(idx, sort):
tuples = idx.values
result = idx[:4].union(tuples[4:], sort=sort)
if sort is None:
- tm.equalContents(result, idx)
+ tm.assert_index_equal(result.sort_values(), idx.sort_values())
else:
assert result.equals(idx)
@@ -284,9 +284,10 @@ def test_intersection(idx, sort):
the_int = piece1.intersection(piece2, sort=sort)
- if sort is None:
+ if sort in (None, True):
tm.assert_index_equal(the_int, idx[3:5])
- assert tm.equalContents(the_int, idx[3:5])
+ else:
+ tm.assert_index_equal(the_int.sort_values(), idx[3:5])
# corner case, pass self
the_int = idx.intersection(idx, sort=sort)
diff --git a/pandas/tests/indexes/numeric/test_setops.py b/pandas/tests/indexes/numeric/test_setops.py
index d3789f2477896..376b51dd98bb1 100644
--- a/pandas/tests/indexes/numeric/test_setops.py
+++ b/pandas/tests/indexes/numeric/test_setops.py
@@ -133,7 +133,10 @@ def test_symmetric_difference(self, sort):
index2 = Index([2, 3, 4, 1])
result = index1.symmetric_difference(index2, sort=sort)
expected = Index([5, 1])
- assert tm.equalContents(result, expected)
+ if sort is not None:
+ tm.assert_index_equal(result, expected)
+ else:
+ tm.assert_index_equal(result, expected.sort_values())
assert result.name is None
if sort is None:
expected = expected.sort_values()
diff --git a/pandas/tests/indexes/period/test_setops.py b/pandas/tests/indexes/period/test_setops.py
index b9a5940795a5b..2fa7e8cd0d2df 100644
--- a/pandas/tests/indexes/period/test_setops.py
+++ b/pandas/tests/indexes/period/test_setops.py
@@ -142,9 +142,10 @@ def test_union_misc(self, sort):
# not in order
result = _permute(index[:-5]).union(_permute(index[10:]), sort=sort)
- if sort is None:
+ if sort is False:
+ tm.assert_index_equal(result.sort_values(), index)
+ else:
tm.assert_index_equal(result, index)
- assert tm.equalContents(result, index)
# cast if different frequencies
index = period_range("1/1/2000", "1/20/2000", freq="D")
@@ -163,9 +164,10 @@ def test_intersection(self, sort):
left = _permute(index[:-5])
right = _permute(index[10:])
result = left.intersection(right, sort=sort)
- if sort is None:
+ if sort is False:
+ tm.assert_index_equal(result.sort_values(), index[10:-5])
+ else:
tm.assert_index_equal(result, index[10:-5])
- assert tm.equalContents(result, index[10:-5])
# cast if different frequencies
index = period_range("1/1/2000", "1/20/2000", freq="D")
diff --git a/pandas/tests/indexes/test_base.py b/pandas/tests/indexes/test_base.py
index dc624f0271a73..5360f1c6ea6d9 100644
--- a/pandas/tests/indexes/test_base.py
+++ b/pandas/tests/indexes/test_base.py
@@ -492,10 +492,10 @@ def test_union_dt_as_obj(self, simple_index):
first_cat = index.union(date_index)
second_cat = index.union(index)
- appended = np.append(index, date_index.astype("O"))
+ appended = Index(np.append(index, date_index.astype("O")))
- assert tm.equalContents(first_cat, appended)
- assert tm.equalContents(second_cat, index)
+ tm.assert_index_equal(first_cat, appended)
+ tm.assert_index_equal(second_cat, index)
tm.assert_contains_all(index, first_cat)
tm.assert_contains_all(index, second_cat)
tm.assert_contains_all(date_index, first_cat)
diff --git a/pandas/tests/indexes/test_setops.py b/pandas/tests/indexes/test_setops.py
index 1f328c06b483b..dab2475240267 100644
--- a/pandas/tests/indexes/test_setops.py
+++ b/pandas/tests/indexes/test_setops.py
@@ -30,6 +30,13 @@
)
+def equal_contents(arr1, arr2) -> bool:
+ """
+ Checks if the set of unique elements of arr1 and arr2 are equivalent.
+ """
+ return frozenset(arr1) == frozenset(arr2)
+
+
@pytest.fixture(
params=tm.ALL_REAL_NUMPY_DTYPES
+ [
@@ -215,10 +222,10 @@ def test_intersection_base(self, index):
if isinstance(index, CategoricalIndex):
pytest.skip(f"Not relevant for {type(index).__name__}")
- first = index[:5]
- second = index[:3]
+ first = index[:5].unique()
+ second = index[:3].unique()
intersect = first.intersection(second)
- assert tm.equalContents(intersect, second)
+ tm.assert_index_equal(intersect, second)
if isinstance(index.dtype, DatetimeTZDtype):
# The second.values below will drop tz, so the rest of this test
@@ -229,7 +236,7 @@ def test_intersection_base(self, index):
cases = [second.to_numpy(), second.to_series(), second.to_list()]
for case in cases:
result = first.intersection(case)
- assert tm.equalContents(result, second)
+ assert equal_contents(result, second)
if isinstance(index, MultiIndex):
msg = "other must be a MultiIndex or a list of tuples"
@@ -241,12 +248,13 @@ def test_intersection_base(self, index):
)
@pytest.mark.filterwarnings(r"ignore:PeriodDtype\[B\] is deprecated:FutureWarning")
def test_union_base(self, index):
+ index = index.unique()
first = index[3:]
second = index[:5]
everything = index
union = first.union(second)
- assert tm.equalContents(union, everything)
+ tm.assert_index_equal(union.sort_values(), everything.sort_values())
if isinstance(index.dtype, DatetimeTZDtype):
# The second.values below will drop tz, so the rest of this test
@@ -257,7 +265,7 @@ def test_union_base(self, index):
cases = [second.to_numpy(), second.to_series(), second.to_list()]
for case in cases:
result = first.union(case)
- assert tm.equalContents(result, everything)
+ assert equal_contents(result, everything)
if isinstance(index, MultiIndex):
msg = "other must be a MultiIndex or a list of tuples"
@@ -280,13 +288,13 @@ def test_difference_base(self, sort, index):
else:
answer = index[4:]
result = first.difference(second, sort)
- assert tm.equalContents(result, answer)
+ assert equal_contents(result, answer)
# GH#10149
cases = [second.to_numpy(), second.to_series(), second.to_list()]
for case in cases:
result = first.difference(case, sort)
- assert tm.equalContents(result, answer)
+ assert equal_contents(result, answer)
if isinstance(index, MultiIndex):
msg = "other must be a MultiIndex or a list of tuples"
@@ -311,13 +319,13 @@ def test_symmetric_difference(self, index):
second = index[:-1]
answer = index[[0, -1]]
result = first.symmetric_difference(second)
- assert tm.equalContents(result, answer)
+ tm.assert_index_equal(result.sort_values(), answer.sort_values())
# GH#10149
cases = [second.to_numpy(), second.to_series(), second.to_list()]
for case in cases:
result = first.symmetric_difference(case)
- assert tm.equalContents(result, answer)
+ assert equal_contents(result, answer)
if isinstance(index, MultiIndex):
msg = "other must be a MultiIndex or a list of tuples"
@@ -701,9 +709,10 @@ def test_intersection(self, index, sort):
first = index[:20]
second = index[:10]
intersect = first.intersection(second, sort=sort)
- if sort is None:
- tm.assert_index_equal(intersect, second.sort_values())
- assert tm.equalContents(intersect, second)
+ if sort in (None, False):
+ tm.assert_index_equal(intersect.sort_values(), second.sort_values())
+ else:
+ tm.assert_index_equal(intersect, second)
# Corner cases
inter = first.intersection(first, sort=sort)
@@ -766,9 +775,10 @@ def test_union(self, index, sort):
everything = index[:20]
union = first.union(second, sort=sort)
- if sort is None:
- tm.assert_index_equal(union, everything.sort_values())
- assert tm.equalContents(union, everything)
+ if sort in (None, False):
+ tm.assert_index_equal(union.sort_values(), everything.sort_values())
+ else:
+ tm.assert_index_equal(union, everything)
@pytest.mark.parametrize("klass", [np.array, Series, list])
@pytest.mark.parametrize("index", ["string"], indirect=True)
@@ -780,9 +790,10 @@ def test_union_from_iterables(self, index, klass, sort):
case = klass(second.values)
result = first.union(case, sort=sort)
- if sort is None:
- tm.assert_index_equal(result, everything.sort_values())
- assert tm.equalContents(result, everything)
+ if sort in (None, False):
+ tm.assert_index_equal(result.sort_values(), everything.sort_values())
+ else:
+ tm.assert_index_equal(result, everything)
@pytest.mark.parametrize("index", ["string"], indirect=True)
def test_union_identity(self, index, sort):
@@ -811,7 +822,11 @@ def test_difference_name_preservation(self, index, second_name, expected, sort):
second.name = second_name
result = first.difference(second, sort=sort)
- assert tm.equalContents(result, answer)
+ if sort is True:
+ tm.assert_index_equal(result, answer)
+ else:
+ answer.name = second_name
+ tm.assert_index_equal(result.sort_values(), answer.sort_values())
if expected is None:
assert result.name is None
@@ -894,7 +909,6 @@ def test_symmetric_difference_mi(self, sort):
if sort is None:
expected = expected.sort_values()
tm.assert_index_equal(result, expected)
- assert tm.equalContents(result, expected)
@pytest.mark.parametrize(
"index2,expected",
@@ -916,13 +930,20 @@ def test_symmetric_difference_missing(self, index2, expected, sort):
def test_symmetric_difference_non_index(self, sort):
index1 = Index([1, 2, 3, 4], name="index1")
index2 = np.array([2, 3, 4, 5])
- expected = Index([1, 5])
+ expected = Index([1, 5], name="index1")
result = index1.symmetric_difference(index2, sort=sort)
- assert tm.equalContents(result, expected)
+ if sort in (None, True):
+ tm.assert_index_equal(result, expected)
+ else:
+ tm.assert_index_equal(result.sort_values(), expected)
assert result.name == "index1"
result = index1.symmetric_difference(index2, result_name="new_name", sort=sort)
- assert tm.equalContents(result, expected)
+ expected.name = "new_name"
+ if sort in (None, True):
+ tm.assert_index_equal(result, expected)
+ else:
+ tm.assert_index_equal(result.sort_values(), expected)
assert result.name == "new_name"
def test_union_ea_dtypes(self, any_numeric_ea_and_arrow_dtype):
diff --git a/pandas/tests/indexes/timedeltas/test_setops.py b/pandas/tests/indexes/timedeltas/test_setops.py
index 727b4eee00566..fce10d9176d74 100644
--- a/pandas/tests/indexes/timedeltas/test_setops.py
+++ b/pandas/tests/indexes/timedeltas/test_setops.py
@@ -115,7 +115,7 @@ def test_intersection_equal(self, sort):
intersect = first.intersection(second, sort=sort)
if sort is None:
tm.assert_index_equal(intersect, second.sort_values())
- assert tm.equalContents(intersect, second)
+ tm.assert_index_equal(intersect, second)
# Corner cases
inter = first.intersection(first, sort=sort)
diff --git a/pandas/tests/io/test_sql.py b/pandas/tests/io/test_sql.py
index 144210166d1a6..45d0a839b9e1a 100644
--- a/pandas/tests/io/test_sql.py
+++ b/pandas/tests/io/test_sql.py
@@ -375,7 +375,9 @@ def check_iris_frame(frame: DataFrame):
pytype = frame.dtypes.iloc[0].type
row = frame.iloc[0]
assert issubclass(pytype, np.floating)
- tm.equalContents(row.values, [5.1, 3.5, 1.4, 0.2, "Iris-setosa"])
+ tm.assert_series_equal(
+ row, Series([5.1, 3.5, 1.4, 0.2, "Iris-setosa"], index=frame.columns, name=0)
+ )
assert frame.shape in ((150, 5), (8, 5))
@@ -1734,7 +1736,7 @@ def test_api_execute_sql(conn, request):
iris_results = pandas_sql.execute("SELECT * FROM iris")
row = iris_results.fetchone()
iris_results.close()
- tm.equalContents(row, [5.1, 3.5, 1.4, 0.2, "Iris-setosa"])
+ assert list(row) == [5.1, 3.5, 1.4, 0.2, "Iris-setosa"]
@pytest.mark.parametrize("conn", all_connectable_types)
@@ -2710,7 +2712,7 @@ def test_execute_sql(conn, request):
iris_results = pandasSQL.execute("SELECT * FROM iris")
row = iris_results.fetchone()
iris_results.close()
- tm.equalContents(row, [5.1, 3.5, 1.4, 0.2, "Iris-setosa"])
+ assert list(row) == [5.1, 3.5, 1.4, 0.2, "Iris-setosa"]
@pytest.mark.parametrize("conn", sqlalchemy_connectable_iris)
@@ -2726,7 +2728,7 @@ def test_sqlalchemy_read_table_columns(conn, request):
iris_frame = sql.read_sql_table(
"iris", con=conn, columns=["SepalLength", "SepalLength"]
)
- tm.equalContents(iris_frame.columns.values, ["SepalLength", "SepalLength"])
+ tm.assert_index_equal(iris_frame.columns, Index(["SepalLength", "SepalLength__1"]))
@pytest.mark.parametrize("conn", sqlalchemy_connectable_iris)
diff --git a/pandas/tests/series/indexing/test_indexing.py b/pandas/tests/series/indexing/test_indexing.py
index b108ec24732ac..0c788b371a03a 100644
--- a/pandas/tests/series/indexing/test_indexing.py
+++ b/pandas/tests/series/indexing/test_indexing.py
@@ -252,7 +252,7 @@ def test_slice(string_series, object_series, using_copy_on_write, warn_copy_on_w
assert string_series[numSlice.index[0]] == numSlice[numSlice.index[0]]
assert numSlice.index[1] == string_series.index[11]
- assert tm.equalContents(numSliceEnd, np.array(string_series)[-10:])
+ tm.assert_numpy_array_equal(np.array(numSliceEnd), np.array(string_series)[-10:])
# Test return view.
sl = string_series[10:20]
diff --git a/pandas/tests/series/test_arithmetic.py b/pandas/tests/series/test_arithmetic.py
index e7233f005e427..6471cd71f0860 100644
--- a/pandas/tests/series/test_arithmetic.py
+++ b/pandas/tests/series/test_arithmetic.py
@@ -662,9 +662,9 @@ def test_comparison_operators_with_nas(self, comparison_op):
def test_ne(self):
ts = Series([3, 4, 5, 6, 7], [3, 4, 5, 6, 7], dtype=float)
- expected = [True, True, False, True, True]
- assert tm.equalContents(ts.index != 5, expected)
- assert tm.equalContents(~(ts.index == 5), expected)
+ expected = np.array([True, True, False, True, True])
+ tm.assert_numpy_array_equal(ts.index != 5, expected)
+ tm.assert_numpy_array_equal(~(ts.index == 5), expected)
@pytest.mark.parametrize(
"left, right",
diff --git a/pandas/tests/series/test_constructors.py b/pandas/tests/series/test_constructors.py
index 65726eb8fcbb8..84c612c43da29 100644
--- a/pandas/tests/series/test_constructors.py
+++ b/pandas/tests/series/test_constructors.py
@@ -160,7 +160,7 @@ def test_constructor(self, datetime_series, using_infer_string):
derived = Series(datetime_series)
assert derived.index._is_all_dates
- assert tm.equalContents(derived.index, datetime_series.index)
+ tm.assert_index_equal(derived.index, datetime_series.index)
# Ensure new index is not created
assert id(datetime_series.index) == id(derived.index)
| null | https://api.github.com/repos/pandas-dev/pandas/pulls/56154 | 2023-11-24T19:13:07Z | 2023-11-24T22:56:12Z | 2023-11-24T22:56:12Z | 2023-11-25T01:22:12Z |
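The hunks in this record swap the removed `tm.equalContents` helper for exact `tm.assert_*` checks. A minimal sketch of the difference (the `equal_contents` reimplementation here is hypothetical, not pandas code): order-insensitive set comparison versus element-by-element equality.

```python
def equal_contents(left, right):
    # Order- and multiplicity-insensitive: only the set of values matters.
    return frozenset(left) == frozenset(right)

def exact_equal(left, right):
    # Element-by-element and order-sensitive -- the stricter contract the
    # tm.assert_* helpers in the diff above enforce.
    return list(left) == list(right)

print(equal_contents([5, 6, 7], [7, 6, 5]))  # True
print(exact_equal([5, 6, 7], [7, 6, 5]))     # False
```

The stricter check is why the SQL test's expected columns had to change to the actual deduplicated names (`SepalLength__1`) rather than merely the same set of labels.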
CLN: Remove reset_display_options | diff --git a/pandas/_testing/__init__.py b/pandas/_testing/__init__.py
index 832919db442d4..788272f9583aa 100644
--- a/pandas/_testing/__init__.py
+++ b/pandas/_testing/__init__.py
@@ -291,13 +291,6 @@
comparison_dunder_methods = ["__eq__", "__ne__", "__le__", "__lt__", "__ge__", "__gt__"]
-def reset_display_options() -> None:
- """
- Reset the display options for printing and representing objects.
- """
- pd.reset_option("^display.", silent=True)
-
-
# -----------------------------------------------------------------------------
# Comparators
@@ -1182,7 +1175,6 @@ def shares_memory(left, right) -> bool:
"NULL_OBJECTS",
"OBJECT_DTYPES",
"raise_assert_detail",
- "reset_display_options",
"raises_chained_assignment_error",
"round_trip_localpath",
"round_trip_pathlib",
diff --git a/pandas/tests/frame/test_repr.py b/pandas/tests/frame/test_repr.py
index f750074a36e91..98962b3003b6d 100644
--- a/pandas/tests/frame/test_repr.py
+++ b/pandas/tests/frame/test_repr.py
@@ -24,8 +24,6 @@
)
import pandas._testing as tm
-import pandas.io.formats.format as fmt
-
class TestDataFrameRepr:
def test_repr_should_return_str(self):
@@ -220,16 +218,14 @@ def test_repr_unsortable(self):
def test_repr_float_frame_options(self, float_frame):
repr(float_frame)
- fmt.set_option("display.precision", 3)
- repr(float_frame)
+ with option_context("display.precision", 3):
+ repr(float_frame)
- fmt.set_option("display.max_rows", 10, "display.max_columns", 2)
- repr(float_frame)
-
- fmt.set_option("display.max_rows", 1000, "display.max_columns", 1000)
- repr(float_frame)
+ with option_context("display.max_rows", 10, "display.max_columns", 2):
+ repr(float_frame)
- tm.reset_display_options()
+ with option_context("display.max_rows", 1000, "display.max_columns", 1000):
+ repr(float_frame)
def test_repr_unicode(self):
uval = "\u03c3\u03c3\u03c3\u03c3"
diff --git a/pandas/tests/io/formats/test_eng_formatting.py b/pandas/tests/io/formats/test_eng_formatting.py
index 75e63cb1f6b54..6d581b5b92e0c 100644
--- a/pandas/tests/io/formats/test_eng_formatting.py
+++ b/pandas/tests/io/formats/test_eng_formatting.py
@@ -1,9 +1,19 @@
import numpy as np
+import pytest
-from pandas import DataFrame
-import pandas._testing as tm
+from pandas import (
+ DataFrame,
+ reset_option,
+ set_eng_float_format,
+)
-import pandas.io.formats.format as fmt
+from pandas.io.formats.format import EngFormatter
+
+
+@pytest.fixture(autouse=True)
+def reset_float_format():
+ yield
+ reset_option("display.float_format")
class TestEngFormatter:
@@ -11,20 +21,19 @@ def test_eng_float_formatter2(self, float_frame):
df = float_frame
df.loc[5] = 0
- fmt.set_eng_float_format()
+ set_eng_float_format()
repr(df)
- fmt.set_eng_float_format(use_eng_prefix=True)
+ set_eng_float_format(use_eng_prefix=True)
repr(df)
- fmt.set_eng_float_format(accuracy=0)
+ set_eng_float_format(accuracy=0)
repr(df)
- tm.reset_display_options()
def test_eng_float_formatter(self):
df = DataFrame({"A": [1.41, 141.0, 14100, 1410000.0]})
- fmt.set_eng_float_format()
+ set_eng_float_format()
result = df.to_string()
expected = (
" A\n"
@@ -35,18 +44,16 @@ def test_eng_float_formatter(self):
)
assert result == expected
- fmt.set_eng_float_format(use_eng_prefix=True)
+ set_eng_float_format(use_eng_prefix=True)
result = df.to_string()
expected = " A\n0 1.410\n1 141.000\n2 14.100k\n3 1.410M"
assert result == expected
- fmt.set_eng_float_format(accuracy=0)
+ set_eng_float_format(accuracy=0)
result = df.to_string()
expected = " A\n0 1E+00\n1 141E+00\n2 14E+03\n3 1E+06"
assert result == expected
- tm.reset_display_options()
-
def compare(self, formatter, input, output):
formatted_input = formatter(input)
assert formatted_input == output
@@ -67,7 +74,7 @@ def compare_all(self, formatter, in_out):
self.compare(formatter, -input, "-" + output[1:])
def test_exponents_with_eng_prefix(self):
- formatter = fmt.EngFormatter(accuracy=3, use_eng_prefix=True)
+ formatter = EngFormatter(accuracy=3, use_eng_prefix=True)
f = np.sqrt(2)
in_out = [
(f * 10**-24, " 1.414y"),
@@ -125,7 +132,7 @@ def test_exponents_with_eng_prefix(self):
self.compare_all(formatter, in_out)
def test_exponents_without_eng_prefix(self):
- formatter = fmt.EngFormatter(accuracy=4, use_eng_prefix=False)
+ formatter = EngFormatter(accuracy=4, use_eng_prefix=False)
f = np.pi
in_out = [
(f * 10**-24, " 3.1416E-24"),
@@ -183,7 +190,7 @@ def test_exponents_without_eng_prefix(self):
self.compare_all(formatter, in_out)
def test_rounding(self):
- formatter = fmt.EngFormatter(accuracy=3, use_eng_prefix=True)
+ formatter = EngFormatter(accuracy=3, use_eng_prefix=True)
in_out = [
(5.55555, " 5.556"),
(55.5555, " 55.556"),
@@ -194,7 +201,7 @@ def test_rounding(self):
]
self.compare_all(formatter, in_out)
- formatter = fmt.EngFormatter(accuracy=1, use_eng_prefix=True)
+ formatter = EngFormatter(accuracy=1, use_eng_prefix=True)
in_out = [
(5.55555, " 5.6"),
(55.5555, " 55.6"),
@@ -205,7 +212,7 @@ def test_rounding(self):
]
self.compare_all(formatter, in_out)
- formatter = fmt.EngFormatter(accuracy=0, use_eng_prefix=True)
+ formatter = EngFormatter(accuracy=0, use_eng_prefix=True)
in_out = [
(5.55555, " 6"),
(55.5555, " 56"),
@@ -216,14 +223,14 @@ def test_rounding(self):
]
self.compare_all(formatter, in_out)
- formatter = fmt.EngFormatter(accuracy=3, use_eng_prefix=True)
+ formatter = EngFormatter(accuracy=3, use_eng_prefix=True)
result = formatter(0)
assert result == " 0.000"
def test_nan(self):
# Issue #11981
- formatter = fmt.EngFormatter(accuracy=1, use_eng_prefix=True)
+ formatter = EngFormatter(accuracy=1, use_eng_prefix=True)
result = formatter(np.nan)
assert result == "NaN"
@@ -235,14 +242,13 @@ def test_nan(self):
}
)
pt = df.pivot_table(values="a", index="b", columns="c")
- fmt.set_eng_float_format(accuracy=1)
+ set_eng_float_format(accuracy=1)
result = pt.to_string()
assert "NaN" in result
- tm.reset_display_options()
def test_inf(self):
# Issue #11981
- formatter = fmt.EngFormatter(accuracy=1, use_eng_prefix=True)
+ formatter = EngFormatter(accuracy=1, use_eng_prefix=True)
result = formatter(np.inf)
assert result == "inf"
diff --git a/pandas/tests/io/formats/test_to_html.py b/pandas/tests/io/formats/test_to_html.py
index 8cfcc26b95b4c..a7648cf1c471a 100644
--- a/pandas/tests/io/formats/test_to_html.py
+++ b/pandas/tests/io/formats/test_to_html.py
@@ -914,7 +914,6 @@ def get_ipython():
repstr = df._repr_html_()
assert "class" in repstr # info fallback
- tm.reset_display_options()
def test_repr_html(self, float_frame):
df = float_frame
@@ -926,16 +925,12 @@ def test_repr_html(self, float_frame):
with option_context("display.notebook_repr_html", False):
df._repr_html_()
- tm.reset_display_options()
-
df = DataFrame([[1, 2], [3, 4]])
with option_context("display.show_dimensions", True):
assert "2 rows" in df._repr_html_()
with option_context("display.show_dimensions", False):
assert "2 rows" not in df._repr_html_()
- tm.reset_display_options()
-
def test_repr_html_mathjax(self):
df = DataFrame([[1, 2], [3, 4]])
assert "tex2jax_ignore" not in df._repr_html_()
diff --git a/pandas/tests/io/formats/test_to_string.py b/pandas/tests/io/formats/test_to_string.py
index 3a8dcb339b063..e607b6eb454a1 100644
--- a/pandas/tests/io/formats/test_to_string.py
+++ b/pandas/tests/io/formats/test_to_string.py
@@ -412,7 +412,6 @@ def test_to_string_complex_float_formatting(self):
def test_to_string_format_inf(self):
# GH#24861
- tm.reset_display_options()
df = DataFrame(
{
"A": [-np.inf, np.inf, -1, -2.1234, 3, 4],
@@ -460,7 +459,6 @@ def test_to_string_int_formatting(self):
assert output == expected
def test_to_string_float_formatting(self):
- tm.reset_display_options()
with option_context(
"display.precision",
5,
@@ -495,7 +493,6 @@ def test_to_string_float_formatting(self):
expected = " x\n0 3234.000\n1 0.253"
assert df_s == expected
- tm.reset_display_options()
assert get_option("display.precision") == 6
df = DataFrame({"x": [1e9, 0.2512]})
@@ -516,14 +513,12 @@ def test_to_string_decimal(self):
assert df.to_string(decimal=",") == expected
def test_to_string_left_justify_cols(self):
- tm.reset_display_options()
df = DataFrame({"x": [3234, 0.253]})
df_s = df.to_string(justify="left")
expected = " x \n0 3234.000\n1 0.253"
assert df_s == expected
def test_to_string_format_na(self):
- tm.reset_display_options()
df = DataFrame(
{
"A": [np.nan, -1, -2.1234, 3, 4],
| null | https://api.github.com/repos/pandas-dev/pandas/pulls/56153 | 2023-11-24T18:00:35Z | 2023-11-24T22:57:16Z | 2023-11-24T22:57:16Z | 2023-11-25T01:21:56Z |
BUG: translate losing object dtype with new string dtype | diff --git a/doc/source/whatsnew/v2.1.4.rst b/doc/source/whatsnew/v2.1.4.rst
index 543a9864ced26..77ce303dc1bfe 100644
--- a/doc/source/whatsnew/v2.1.4.rst
+++ b/doc/source/whatsnew/v2.1.4.rst
@@ -25,7 +25,7 @@ Bug fixes
- Bug in :meth:`Index.__getitem__` returning wrong result for Arrow dtypes and negative stepsize (:issue:`55832`)
- Fixed bug in :meth:`DataFrame.__setitem__` casting :class:`Index` with object-dtype to PyArrow backed strings when ``infer_string`` option is set (:issue:`55638`)
- Fixed bug in :meth:`Index.insert` casting object-dtype to PyArrow backed strings when ``infer_string`` option is set (:issue:`55638`)
--
+- Fixed bug in :meth:`Series.str.translate` losing object dtype when string option is set (:issue:`56152`)
.. ---------------------------------------------------------------------------
.. _whatsnew_214.other:
diff --git a/pandas/core/strings/accessor.py b/pandas/core/strings/accessor.py
index 58b904fd31b6a..9fa6e9973291d 100644
--- a/pandas/core/strings/accessor.py
+++ b/pandas/core/strings/accessor.py
@@ -259,6 +259,7 @@ def _wrap_result(
fill_value=np.nan,
returns_string: bool = True,
returns_bool: bool = False,
+ dtype=None,
):
from pandas import (
Index,
@@ -379,29 +380,29 @@ def cons_row(x):
out = out.get_level_values(0)
return out
else:
- return Index(result, name=name)
+ return Index(result, name=name, dtype=dtype)
else:
index = self._orig.index
# This is a mess.
- dtype: DtypeObj | str | None
+ _dtype: DtypeObj | str | None = dtype
vdtype = getattr(result, "dtype", None)
if self._is_string:
if is_bool_dtype(vdtype):
- dtype = result.dtype
+ _dtype = result.dtype
elif returns_string:
- dtype = self._orig.dtype
+ _dtype = self._orig.dtype
else:
- dtype = vdtype
- else:
- dtype = vdtype
+ _dtype = vdtype
+ elif vdtype is not None:
+ _dtype = vdtype
if expand:
cons = self._orig._constructor_expanddim
- result = cons(result, columns=name, index=index, dtype=dtype)
+ result = cons(result, columns=name, index=index, dtype=_dtype)
else:
# Must be a Series
cons = self._orig._constructor
- result = cons(result, name=name, index=index, dtype=dtype)
+ result = cons(result, name=name, index=index, dtype=_dtype)
result = result.__finalize__(self._orig, method="str")
if name is not None and result.ndim == 1:
# __finalize__ might copy over the original name, but we may
@@ -2317,7 +2318,8 @@ def translate(self, table):
dtype: object
"""
result = self._data.array._str_translate(table)
- return self._wrap_result(result)
+ dtype = object if self._data.dtype == "object" else None
+ return self._wrap_result(result, dtype=dtype)
@forbid_nonstring_types(["bytes"])
def count(self, pat, flags: int = 0):
diff --git a/pandas/tests/strings/test_find_replace.py b/pandas/tests/strings/test_find_replace.py
index 78f0730d730e8..bd64a5dce3b9a 100644
--- a/pandas/tests/strings/test_find_replace.py
+++ b/pandas/tests/strings/test_find_replace.py
@@ -5,6 +5,7 @@
import pytest
from pandas.errors import PerformanceWarning
+import pandas.util._test_decorators as td
import pandas as pd
from pandas import (
@@ -893,7 +894,10 @@ def test_find_nan(any_string_dtype):
# --------------------------------------------------------------------------------------
-def test_translate(index_or_series, any_string_dtype):
+@pytest.mark.parametrize(
+ "infer_string", [False, pytest.param(True, marks=td.skip_if_no("pyarrow"))]
+)
+def test_translate(index_or_series, any_string_dtype, infer_string):
obj = index_or_series(
["abcdefg", "abcc", "cdddfg", "cdefggg"], dtype=any_string_dtype
)
| - [ ] closes #xxxx (Replace xxxx with the GitHub issue number)
- [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
| https://api.github.com/repos/pandas-dev/pandas/pulls/56152 | 2023-11-24T15:48:58Z | 2023-11-26T18:22:57Z | 2023-11-26T18:22:57Z | 2023-11-26T18:24:37Z |
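For object-dtype data, `Series.str.translate` ultimately delegates to Python's built-in `str.translate`; the fix in this record only threads the original dtype through `_wrap_result` so the result keeps object dtype. For reference, the plain-Python behavior being wrapped:

```python
# str.maketrans maps characters to replacements; None deletes a character.
table = str.maketrans({"a": "X", "c": None})
print("abcc".translate(table))     # "Xb"
print("cdefggg".translate(table))  # "defggg"
```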
DOC: to_datetime origin argument not unit specific | diff --git a/pandas/core/tools/datetimes.py b/pandas/core/tools/datetimes.py
index 076b506670d40..e42e89b76e6f2 100644
--- a/pandas/core/tools/datetimes.py
+++ b/pandas/core/tools/datetimes.py
@@ -843,8 +843,8 @@ def to_datetime(
to the day starting at noon on January 1, 4713 BC.
- If Timestamp convertible (Timestamp, dt.datetime, np.datetimt64 or date
string), origin is set to Timestamp identified by origin.
- - If a float or integer, origin is the millisecond difference
- relative to 1970-01-01.
+ - If a float or integer, origin is the difference
+ (in units determined by the ``unit`` argument) relative to 1970-01-01.
cache : bool, default True
If :const:`True`, use a cache of unique, converted dates to apply the
datetime conversion. May produce significant speed-up when parsing
| - [ ] closes #55874
- [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
Edited the docstring as suggested in the issue tagged. | https://api.github.com/repos/pandas-dev/pandas/pulls/56151 | 2023-11-24T15:34:55Z | 2023-11-24T22:58:56Z | 2023-11-24T22:58:56Z | 2023-11-24T22:59:05Z |
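The corrected semantics — a numeric `origin` is an offset in the same unit as the values, not always milliseconds — can be sketched with stdlib `datetime` arithmetic (this illustrates the documented behavior only, not pandas' implementation):

```python
from datetime import datetime, timedelta

def numeric_to_datetime(value, unit_seconds, origin):
    # Both the value and the numeric origin are scaled by the unit,
    # then applied relative to the 1970-01-01 epoch.
    epoch = datetime(1970, 1, 1)
    return epoch + timedelta(seconds=(origin + value) * unit_seconds)

# unit="D" (86400 s): origin=1 shifts by one *day*, not one millisecond.
print(numeric_to_datetime(1, 86400, 1))  # 1970-01-03 00:00:00
```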
Adjust tests in resample folder for new string option | diff --git a/pandas/tests/resample/test_base.py b/pandas/tests/resample/test_base.py
index fee52780585b8..6ecbdff0018fa 100644
--- a/pandas/tests/resample/test_base.py
+++ b/pandas/tests/resample/test_base.py
@@ -5,6 +5,7 @@
from pandas import (
DataFrame,
+ Index,
MultiIndex,
NaT,
PeriodIndex,
@@ -255,7 +256,7 @@ def test_resample_count_empty_dataframe(freq, empty_frame_dti):
index = _asfreq_compat(empty_frame_dti.index, freq)
- expected = DataFrame({"a": []}, dtype="int64", index=index)
+ expected = DataFrame(dtype="int64", index=index, columns=Index(["a"], dtype=object))
tm.assert_frame_equal(result, expected)
diff --git a/pandas/tests/resample/test_resample_api.py b/pandas/tests/resample/test_resample_api.py
index 81a054bbbc3df..7e8779ab48b7e 100644
--- a/pandas/tests/resample/test_resample_api.py
+++ b/pandas/tests/resample/test_resample_api.py
@@ -197,7 +197,7 @@ def tests_raises_on_nuisance(test_frame):
tm.assert_frame_equal(result, expected)
expected = r[["A", "B", "C"]].mean()
- msg = re.escape("agg function failed [how->mean,dtype->object]")
+ msg = re.escape("agg function failed [how->mean,dtype->")
with pytest.raises(TypeError, match=msg):
r.mean()
result = r.mean(numeric_only=True)
@@ -948,7 +948,7 @@ def test_frame_downsample_method(method, numeric_only, expected_data):
if isinstance(expected_data, str):
if method in ("var", "mean", "median", "prod"):
klass = TypeError
- msg = re.escape(f"agg function failed [how->{method},dtype->object]")
+ msg = re.escape(f"agg function failed [how->{method},dtype->")
else:
klass = ValueError
msg = expected_data
@@ -998,7 +998,7 @@ def test_series_downsample_method(method, numeric_only, expected_data):
with pytest.raises(TypeError, match=msg):
func(**kwargs)
elif method == "prod":
- msg = re.escape("agg function failed [how->prod,dtype->object]")
+ msg = re.escape("agg function failed [how->prod,dtype->")
with pytest.raises(TypeError, match=msg):
func(**kwargs)
else:
| - [ ] closes #xxxx (Replace xxxx with the GitHub issue number)
- [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
| https://api.github.com/repos/pandas-dev/pandas/pulls/56150 | 2023-11-24T14:13:48Z | 2023-11-26T04:30:18Z | 2023-11-26T04:30:18Z | 2023-11-26T11:33:32Z |
DEPR: Some Grouper and Grouping attributes | diff --git a/doc/source/whatsnew/v2.2.0.rst b/doc/source/whatsnew/v2.2.0.rst
index 0e613760f4927..a77fb18353cd9 100644
--- a/doc/source/whatsnew/v2.2.0.rst
+++ b/doc/source/whatsnew/v2.2.0.rst
@@ -303,6 +303,8 @@ Other Deprecations
- Deprecated strings ``H``, ``S``, ``U``, and ``N`` denoting units in :func:`to_timedelta` (:issue:`52536`)
- Deprecated strings ``H``, ``T``, ``S``, ``L``, ``U``, and ``N`` denoting units in :class:`Timedelta` (:issue:`52536`)
- Deprecated strings ``T``, ``S``, ``L``, ``U``, and ``N`` denoting frequencies in :class:`Minute`, :class:`Second`, :class:`Milli`, :class:`Micro`, :class:`Nano` (:issue:`52536`)
+- Deprecated the :class:`.BaseGrouper` attributes ``group_keys_seq`` and ``reconstructed_codes``; these will be removed in a future version of pandas (:issue:`56148`)
+- Deprecated the :class:`.Grouping` attributes ``group_index``, ``result_index``, and ``group_arraylike``; these will be removed in a future version of pandas (:issue:`56148`)
- Deprecated the ``errors="ignore"`` option in :func:`to_datetime`, :func:`to_timedelta`, and :func:`to_numeric`; explicitly catch exceptions instead (:issue:`54467`)
- Deprecated the ``fastpath`` keyword in the :class:`Series` constructor (:issue:`20110`)
- Deprecated the ``ordinal`` keyword in :class:`PeriodIndex`, use :meth:`PeriodIndex.from_ordinals` instead (:issue:`55960`)
diff --git a/pandas/core/groupby/generic.py b/pandas/core/groupby/generic.py
index 1fb412e17c4ba..55ab4c2113fd7 100644
--- a/pandas/core/groupby/generic.py
+++ b/pandas/core/groupby/generic.py
@@ -819,9 +819,9 @@ def value_counts(
rep = partial(np.repeat, repeats=np.add.reduceat(inc, idx))
# multi-index components
- codes = self.grouper.reconstructed_codes
+ codes = self.grouper._reconstructed_codes
codes = [rep(level_codes) for level_codes in codes] + [llab(lab, inc)]
- levels = [ping.group_index for ping in self.grouper.groupings] + [lev]
+ levels = [ping._group_index for ping in self.grouper.groupings] + [lev]
if dropna:
mask = codes[-1] != -1
diff --git a/pandas/core/groupby/groupby.py b/pandas/core/groupby/groupby.py
index 2d530b64e104b..7d284db4eba2c 100644
--- a/pandas/core/groupby/groupby.py
+++ b/pandas/core/groupby/groupby.py
@@ -2820,7 +2820,7 @@ def _value_counts(
and not grouping._observed
for grouping in groupings
):
- levels_list = [ping.result_index for ping in groupings]
+ levels_list = [ping._result_index for ping in groupings]
multi_index = MultiIndex.from_product(
levels_list, names=[ping.name for ping in groupings]
)
@@ -5573,7 +5573,7 @@ def _reindex_output(
):
return output
- levels_list = [ping.group_index for ping in groupings]
+ levels_list = [ping._group_index for ping in groupings]
names = self.grouper.names
if qs is not None:
# error: Argument 1 to "append" of "list" has incompatible type
@@ -5795,7 +5795,7 @@ def _idxmax_idxmin(
ping._passed_categorical for ping in self.grouper.groupings
):
expected_len = np.prod(
- [len(ping.group_index) for ping in self.grouper.groupings]
+ [len(ping._group_index) for ping in self.grouper.groupings]
)
if len(self.grouper.groupings) == 1:
result_len = len(self.grouper.groupings[0].grouping_vector.unique())
diff --git a/pandas/core/groupby/grouper.py b/pandas/core/groupby/grouper.py
index fd0479e17d2bd..fc914831b7a72 100644
--- a/pandas/core/groupby/grouper.py
+++ b/pandas/core/groupby/grouper.py
@@ -523,7 +523,6 @@ class Grouping:
"""
_codes: npt.NDArray[np.signedinteger] | None = None
- _group_index: Index | None = None
_all_grouper: Categorical | None
_orig_cats: Index | None
_index: Index
@@ -679,7 +678,7 @@ def _ilevel(self) -> int | None:
@property
def ngroups(self) -> int:
- return len(self.group_index)
+ return len(self._group_index)
@cache_readonly
def indices(self) -> dict[Hashable, npt.NDArray[np.intp]]:
@@ -695,34 +694,58 @@ def codes(self) -> npt.NDArray[np.signedinteger]:
return self._codes_and_uniques[0]
@cache_readonly
- def group_arraylike(self) -> ArrayLike:
+ def _group_arraylike(self) -> ArrayLike:
"""
Analogous to result_index, but holding an ArrayLike to ensure
we can retain ExtensionDtypes.
"""
if self._all_grouper is not None:
# retain dtype for categories, including unobserved ones
- return self.result_index._values
+ return self._result_index._values
elif self._passed_categorical:
- return self.group_index._values
+ return self._group_index._values
return self._codes_and_uniques[1]
+ @property
+ def group_arraylike(self) -> ArrayLike:
+ """
+ Analogous to result_index, but holding an ArrayLike to ensure
+ we can retain ExtensionDtypes.
+ """
+ warnings.warn(
+ "group_arraylike is deprecated and will be removed in a future "
+ "version of pandas",
+ category=FutureWarning,
+ stacklevel=find_stack_level(),
+ )
+ return self._group_arraylike
+
@cache_readonly
- def result_index(self) -> Index:
+ def _result_index(self) -> Index:
# result_index retains dtype for categories, including unobserved ones,
# which group_index does not
if self._all_grouper is not None:
- group_idx = self.group_index
+ group_idx = self._group_index
assert isinstance(group_idx, CategoricalIndex)
cats = self._orig_cats
# set_categories is dynamically added
return group_idx.set_categories(cats) # type: ignore[attr-defined]
- return self.group_index
+ return self._group_index
+
+ @property
+ def result_index(self) -> Index:
+ warnings.warn(
+ "result_index is deprecated and will be removed in a future "
+ "version of pandas",
+ category=FutureWarning,
+ stacklevel=find_stack_level(),
+ )
+ return self._result_index
@cache_readonly
- def group_index(self) -> Index:
+ def _group_index(self) -> Index:
codes, uniques = self._codes_and_uniques
if not self._dropna and self._passed_categorical:
assert isinstance(uniques, Categorical)
@@ -744,6 +767,16 @@ def group_index(self) -> Index:
)
return Index._with_infer(uniques, name=self.name)
+ @property
+ def group_index(self) -> Index:
+ warnings.warn(
+ "group_index is deprecated and will be removed in a future "
+ "version of pandas",
+ category=FutureWarning,
+ stacklevel=find_stack_level(),
+ )
+ return self._group_index
+
@cache_readonly
def _codes_and_uniques(self) -> tuple[npt.NDArray[np.signedinteger], ArrayLike]:
uniques: ArrayLike
@@ -809,7 +842,7 @@ def _codes_and_uniques(self) -> tuple[npt.NDArray[np.signedinteger], ArrayLike]:
@cache_readonly
def groups(self) -> dict[Hashable, np.ndarray]:
- cats = Categorical.from_codes(self.codes, self.group_index, validate=False)
+ cats = Categorical.from_codes(self.codes, self._group_index, validate=False)
return self._index.groupby(cats)
diff --git a/pandas/core/groupby/ops.py b/pandas/core/groupby/ops.py
index 466bbac641077..f3579e6c13a19 100644
--- a/pandas/core/groupby/ops.py
+++ b/pandas/core/groupby/ops.py
@@ -15,6 +15,7 @@
Generic,
final,
)
+import warnings
import numpy as np
@@ -32,6 +33,7 @@
)
from pandas.errors import AbstractMethodError
from pandas.util._decorators import cache_readonly
+from pandas.util._exceptions import find_stack_level
from pandas.core.dtypes.base import ExtensionDtype
from pandas.core.dtypes.cast import (
@@ -616,7 +618,7 @@ def get_iterator(
for each group
"""
splitter = self._get_splitter(data, axis=axis)
- keys = self.group_keys_seq
+ keys = self._group_keys_seq
yield from zip(keys, splitter)
@final
@@ -638,7 +640,7 @@ def _get_splitter(self, data: NDFrame, axis: AxisInt = 0) -> DataSplitter:
@final
@cache_readonly
- def group_keys_seq(self):
+ def _group_keys_seq(self):
if len(self.groupings) == 1:
return self.levels[0]
else:
@@ -647,6 +649,16 @@ def group_keys_seq(self):
# provide "flattened" iterator for multi-group setting
return get_flattened_list(ids, ngroups, self.levels, self.codes)
+ @property
+ def group_keys_seq(self):
+ warnings.warn(
+ "group_keys_seq is deprecated and will be removed in a future "
+ "version of pandas",
+ category=FutureWarning,
+ stacklevel=find_stack_level(),
+ )
+ return self._group_keys_seq
+
@cache_readonly
def indices(self) -> dict[Hashable, npt.NDArray[np.intp]]:
"""dict {group name -> group indices}"""
@@ -654,7 +666,7 @@ def indices(self) -> dict[Hashable, npt.NDArray[np.intp]]:
# This shows unused categories in indices GH#38642
return self.groupings[0].indices
codes_list = [ping.codes for ping in self.groupings]
- keys = [ping.group_index for ping in self.groupings]
+ keys = [ping._group_index for ping in self.groupings]
return get_indexer_dict(codes_list, keys)
@final
@@ -691,7 +703,7 @@ def codes(self) -> list[npt.NDArray[np.signedinteger]]:
@property
def levels(self) -> list[Index]:
- return [ping.group_index for ping in self.groupings]
+ return [ping._group_index for ping in self.groupings]
@property
def names(self) -> list[Hashable]:
@@ -766,7 +778,7 @@ def _get_compressed_codes(
# FIXME: compress_group_index's second return value is int64, not intp
ping = self.groupings[0]
- return ping.codes, np.arange(len(ping.group_index), dtype=np.intp)
+ return ping.codes, np.arange(len(ping._group_index), dtype=np.intp)
@final
@cache_readonly
@@ -774,18 +786,28 @@ def ngroups(self) -> int:
return len(self.result_index)
@property
- def reconstructed_codes(self) -> list[npt.NDArray[np.intp]]:
+ def _reconstructed_codes(self) -> list[npt.NDArray[np.intp]]:
codes = self.codes
ids, obs_ids, _ = self.group_info
return decons_obs_group_ids(ids, obs_ids, self.shape, codes, xnull=True)
+ @property
+ def reconstructed_codes(self) -> list[npt.NDArray[np.intp]]:
+ warnings.warn(
+ "reconstructed_codes is deprecated and will be removed in a future "
+ "version of pandas",
+ category=FutureWarning,
+ stacklevel=find_stack_level(),
+ )
+ return self._reconstructed_codes
+
@cache_readonly
def result_index(self) -> Index:
if len(self.groupings) == 1:
- return self.groupings[0].result_index.rename(self.names[0])
+ return self.groupings[0]._result_index.rename(self.names[0])
- codes = self.reconstructed_codes
- levels = [ping.result_index for ping in self.groupings]
+ codes = self._reconstructed_codes
+ levels = [ping._result_index for ping in self.groupings]
return MultiIndex(
levels=levels, codes=codes, verify_integrity=False, names=self.names
)
@@ -795,12 +817,12 @@ def get_group_levels(self) -> list[ArrayLike]:
# Note: only called from _insert_inaxis_grouper, which
# is only called for BaseGrouper, never for BinGrouper
if len(self.groupings) == 1:
- return [self.groupings[0].group_arraylike]
+ return [self.groupings[0]._group_arraylike]
name_list = []
- for ping, codes in zip(self.groupings, self.reconstructed_codes):
+ for ping, codes in zip(self.groupings, self._reconstructed_codes):
codes = ensure_platform_int(codes)
- levels = ping.group_arraylike.take(codes)
+ levels = ping._group_arraylike.take(codes)
name_list.append(levels)
@@ -907,7 +929,7 @@ def apply_groupwise(
) -> tuple[list, bool]:
mutated = False
splitter = self._get_splitter(data, axis=axis)
- group_keys = self.group_keys_seq
+ group_keys = self._group_keys_seq
result_values = []
# This calls DataSplitter.__iter__
@@ -1087,7 +1109,7 @@ def group_info(self) -> tuple[npt.NDArray[np.intp], npt.NDArray[np.intp], int]:
)
@cache_readonly
- def reconstructed_codes(self) -> list[np.ndarray]:
+ def _reconstructed_codes(self) -> list[np.ndarray]:
# get unique result indices, and prepend 0 as groupby starts from the first
return [np.r_[0, np.flatnonzero(self.bins[1:] != self.bins[:-1]) + 1]]
diff --git a/pandas/tests/groupby/test_groupby.py b/pandas/tests/groupby/test_groupby.py
index c61d9fab0435e..5b17484de9c93 100644
--- a/pandas/tests/groupby/test_groupby.py
+++ b/pandas/tests/groupby/test_groupby.py
@@ -3303,3 +3303,13 @@ def test_groupby_ffill_with_duplicated_index():
result = df.groupby(level=0).ffill()
expected = DataFrame({"a": [1, 2, 3, 4, 2, 3]}, index=[0, 1, 2, 0, 1, 2])
tm.assert_frame_equal(result, expected, check_dtype=False)
+
+
+@pytest.mark.parametrize("attr", ["group_keys_seq", "reconstructed_codes"])
+def test_depr_grouper_attrs(attr):
+ # GH#56148
+ df = DataFrame({"a": [1, 1, 2], "b": [3, 4, 5]})
+ gb = df.groupby("a")
+ msg = f"{attr} is deprecated"
+ with tm.assert_produces_warning(FutureWarning, match=msg):
+ getattr(gb.grouper, attr)
diff --git a/pandas/tests/groupby/test_grouping.py b/pandas/tests/groupby/test_grouping.py
index e3cc41afa4679..3c1a35c984031 100644
--- a/pandas/tests/groupby/test_grouping.py
+++ b/pandas/tests/groupby/test_grouping.py
@@ -1211,3 +1211,13 @@ def test_grouper_groups():
msg = "Grouper.indexer is deprecated"
with tm.assert_produces_warning(FutureWarning, match=msg):
grper.indexer
+
+
+@pytest.mark.parametrize("attr", ["group_index", "result_index", "group_arraylike"])
+def test_depr_grouping_attrs(attr):
+ # GH#56148
+ df = DataFrame({"a": [1, 1, 2], "b": [3, 4, 5]})
+ gb = df.groupby("a")
+ msg = f"{attr} is deprecated"
+ with tm.assert_produces_warning(FutureWarning, match=msg):
+ getattr(gb.grouper.groupings[0], attr)
diff --git a/pandas/tests/groupby/test_timegrouper.py b/pandas/tests/groupby/test_timegrouper.py
index be02c7f79ba01..aaebb00dd8ad4 100644
--- a/pandas/tests/groupby/test_timegrouper.py
+++ b/pandas/tests/groupby/test_timegrouper.py
@@ -67,7 +67,9 @@ def groupby_with_truncated_bingrouper(frame_for_truncated_bingrouper):
gb = df.groupby(tdg)
# check we're testing the case we're interested in
- assert len(gb.grouper.result_index) != len(gb.grouper.group_keys_seq)
+ msg = "group_keys_seq is deprecated"
+ with tm.assert_produces_warning(FutureWarning, match=msg):
+ assert len(gb.grouper.result_index) != len(gb.grouper.group_keys_seq)
return gb
| - [x] closes #56148 (Replace xxxx with the GitHub issue number)
- [x] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [x] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [x] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [x] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
| https://api.github.com/repos/pandas-dev/pandas/pulls/56149 | 2023-11-24T13:45:33Z | 2023-11-26T04:35:43Z | 2023-11-26T04:35:43Z | 2023-11-26T13:35:37Z |
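The deprecation pattern exercised by `test_depr_grouping_attrs` above — attribute access that emits a `FutureWarning`, which the test then asserts on via `tm.assert_produces_warning` — can be sketched with the standard library alone. The `Grouping` class below is a hypothetical stand-in, not the pandas internal class:

```python
import warnings

class Grouping:
    """Hypothetical stand-in for pandas' internal Grouping object."""

    @property
    def group_index(self):
        # Mirrors the deprecation: warn on access, then return the value.
        warnings.warn("group_index is deprecated", FutureWarning, stacklevel=2)
        return [1, 1, 2]

# The test-side pattern behind tm.assert_produces_warning: record
# warnings and assert a matching FutureWarning was emitted.
with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    result = Grouping().group_index

assert result == [1, 1, 2]
assert any(
    issubclass(w.category, FutureWarning)
    and "group_index is deprecated" in str(w.message)
    for w in caught
)
```

The parametrized test in the diff does the same for each deprecated attribute name, matching the warning message with a regex.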
BUG raise pdep6 warning for loc full setter | diff --git a/doc/source/whatsnew/v2.2.0.rst b/doc/source/whatsnew/v2.2.0.rst
index 6a232365fbfeb..64d297069080f 100644
--- a/doc/source/whatsnew/v2.2.0.rst
+++ b/doc/source/whatsnew/v2.2.0.rst
@@ -797,6 +797,7 @@ Conversion
- Bug in :meth:`DataFrame.astype` when called with ``str`` on unpickled array - the array might change in-place (:issue:`54654`)
- Bug in :meth:`DataFrame.astype` where ``errors="ignore"`` had no effect for extension types (:issue:`54654`)
- Bug in :meth:`Series.convert_dtypes` not converting all NA column to ``null[pyarrow]`` (:issue:`55346`)
+- Bug in ``DataFrame.loc`` was not throwing "incompatible dtype warning" (see `PDEP6 <https://pandas.pydata.org/pdeps/0006-ban-upcasting.html>`_) when assigning a ``Series`` with a different dtype using a full column setter (e.g. ``df.loc[:, 'a'] = incompatible_value``) (:issue:`39584`)
Strings
^^^^^^^
diff --git a/pandas/core/indexing.py b/pandas/core/indexing.py
index a7dd3b486ab11..0f892d4924933 100644
--- a/pandas/core/indexing.py
+++ b/pandas/core/indexing.py
@@ -2143,6 +2143,26 @@ def _setitem_single_column(self, loc: int, value, plane_indexer) -> None:
# If we're setting an entire column and we can't do it inplace,
# then we can use value's dtype (or inferred dtype)
# instead of object
+ dtype = self.obj.dtypes.iloc[loc]
+ if dtype not in (np.void, object) and not self.obj.empty:
+ # - Exclude np.void, as that is a special case for expansion.
+ # We want to warn for
+ # df = pd.DataFrame({'a': [1, 2]})
+ # df.loc[:, 'a'] = .3
+ # but not for
+ # df = pd.DataFrame({'a': [1, 2]})
+ # df.loc[:, 'b'] = .3
+ # - Exclude `object`, as then no upcasting happens.
+ # - Exclude empty initial object with enlargement,
+ # as then there's nothing to be inconsistent with.
+ warnings.warn(
+ f"Setting an item of incompatible dtype is deprecated "
+                "and will raise an error in a future version of pandas. "
+ f"Value '{value}' has dtype incompatible with {dtype}, "
+ "please explicitly cast to a compatible dtype first.",
+ FutureWarning,
+ stacklevel=find_stack_level(),
+ )
self.obj.isetitem(loc, value)
else:
# set value into the column (first attempting to operate inplace, then
diff --git a/pandas/core/internals/blocks.py b/pandas/core/internals/blocks.py
index 8a54cb2d7a189..1237c5b86d298 100644
--- a/pandas/core/internals/blocks.py
+++ b/pandas/core/internals/blocks.py
@@ -499,6 +499,9 @@ def coerce_to_target_dtype(self, other, warn_on_upcast: bool = False) -> Block:
and is_integer_dtype(self.values.dtype)
and isna(other)
and other is not NaT
+ and not (
+ isinstance(other, (np.datetime64, np.timedelta64)) and np.isnat(other)
+ )
):
warn_on_upcast = False
elif (
diff --git a/pandas/tests/copy_view/test_indexing.py b/pandas/tests/copy_view/test_indexing.py
index 91cd77741f79b..422436d376f69 100644
--- a/pandas/tests/copy_view/test_indexing.py
+++ b/pandas/tests/copy_view/test_indexing.py
@@ -1103,11 +1103,16 @@ def test_set_value_copy_only_necessary_column(
df_orig = df.copy()
view = df[:]
- if val == "a" and indexer[0] != slice(None):
+ if val == "a" and not warn_copy_on_write:
with tm.assert_produces_warning(
FutureWarning, match="Setting an item of incompatible dtype is deprecated"
):
indexer_func(df)[indexer] = val
+ if val == "a" and warn_copy_on_write:
+ with tm.assert_produces_warning(
+ FutureWarning, match="incompatible dtype|Setting a value on a view"
+ ):
+ indexer_func(df)[indexer] = val
else:
with tm.assert_cow_warning(warn_copy_on_write and val == 100):
indexer_func(df)[indexer] = val
diff --git a/pandas/tests/frame/indexing/test_indexing.py b/pandas/tests/frame/indexing/test_indexing.py
index a1868919be685..a9ee31299d469 100644
--- a/pandas/tests/frame/indexing/test_indexing.py
+++ b/pandas/tests/frame/indexing/test_indexing.py
@@ -945,7 +945,8 @@ def test_setitem_frame_upcast(self):
# needs upcasting
df = DataFrame([[1, 2, "foo"], [3, 4, "bar"]], columns=["A", "B", "C"])
df2 = df.copy()
- df2.loc[:, ["A", "B"]] = df.loc[:, ["A", "B"]] + 0.5
+ with tm.assert_produces_warning(FutureWarning, match="incompatible dtype"):
+ df2.loc[:, ["A", "B"]] = df.loc[:, ["A", "B"]] + 0.5
expected = df.reindex(columns=["A", "B"])
expected += 0.5
expected["C"] = df["C"]
@@ -1381,20 +1382,20 @@ def test_loc_expand_empty_frame_keep_midx_names(self):
tm.assert_frame_equal(df, expected)
@pytest.mark.parametrize(
- "val, idxr, warn",
+ "val, idxr",
[
- ("x", "a", None), # TODO: this should warn as well
- ("x", ["a"], None), # TODO: this should warn as well
- (1, "a", None), # TODO: this should warn as well
- (1, ["a"], FutureWarning),
+ ("x", "a"),
+ ("x", ["a"]),
+ (1, "a"),
+ (1, ["a"]),
],
)
- def test_loc_setitem_rhs_frame(self, idxr, val, warn):
+ def test_loc_setitem_rhs_frame(self, idxr, val):
# GH#47578
df = DataFrame({"a": [1, 2]})
with tm.assert_produces_warning(
- warn, match="Setting an item of incompatible dtype"
+ FutureWarning, match="Setting an item of incompatible dtype"
):
df.loc[:, idxr] = DataFrame({"a": [val, 11]}, index=[1, 2])
expected = DataFrame({"a": [np.nan, val]})
@@ -1968,7 +1969,7 @@ def _check_setitem_invalid(self, df, invalid, indexer, warn):
np.datetime64("NaT"),
np.timedelta64("NaT"),
]
- _indexers = [0, [0], slice(0, 1), [True, False, False]]
+ _indexers = [0, [0], slice(0, 1), [True, False, False], slice(None, None, None)]
@pytest.mark.parametrize(
"invalid", _invalid_scalars + [1, 1.0, np.int64(1), np.float64(1)]
@@ -1982,7 +1983,7 @@ def test_setitem_validation_scalar_bool(self, invalid, indexer):
@pytest.mark.parametrize("indexer", _indexers)
def test_setitem_validation_scalar_int(self, invalid, any_int_numpy_dtype, indexer):
df = DataFrame({"a": [1, 2, 3]}, dtype=any_int_numpy_dtype)
- if isna(invalid) and invalid is not pd.NaT:
+ if isna(invalid) and invalid is not pd.NaT and not np.isnat(invalid):
warn = None
else:
warn = FutureWarning
diff --git a/pandas/tests/frame/indexing/test_setitem.py b/pandas/tests/frame/indexing/test_setitem.py
index 3f13718cfc77a..72cd98ba78122 100644
--- a/pandas/tests/frame/indexing/test_setitem.py
+++ b/pandas/tests/frame/indexing/test_setitem.py
@@ -1369,3 +1369,23 @@ def test_frame_setitem_empty_dataframe(self):
index=dti[:0],
)
tm.assert_frame_equal(df, expected)
+
+
+def test_full_setter_loc_incompatible_dtype():
+ # https://github.com/pandas-dev/pandas/issues/55791
+ df = DataFrame({"a": [1, 2]})
+ with tm.assert_produces_warning(FutureWarning, match="incompatible dtype"):
+ df.loc[:, "a"] = True
+ expected = DataFrame({"a": [True, True]})
+ tm.assert_frame_equal(df, expected)
+
+ df = DataFrame({"a": [1, 2]})
+ with tm.assert_produces_warning(FutureWarning, match="incompatible dtype"):
+ df.loc[:, "a"] = {0: 3.5, 1: 4.5}
+ expected = DataFrame({"a": [3.5, 4.5]})
+ tm.assert_frame_equal(df, expected)
+
+ df = DataFrame({"a": [1, 2]})
+ df.loc[:, "a"] = {0: 3, 1: 4}
+ expected = DataFrame({"a": [3, 4]})
+ tm.assert_frame_equal(df, expected)
diff --git a/pandas/tests/frame/methods/test_update.py b/pandas/tests/frame/methods/test_update.py
index fd4c9d64d656e..565619005d9f0 100644
--- a/pandas/tests/frame/methods/test_update.py
+++ b/pandas/tests/frame/methods/test_update.py
@@ -158,11 +158,8 @@ def test_update_with_different_dtype(self, using_copy_on_write):
# GH#3217
df = DataFrame({"a": [1, 3], "b": [np.nan, 2]})
df["c"] = np.nan
- if using_copy_on_write:
+ with tm.assert_produces_warning(FutureWarning, match="incompatible dtype"):
df.update({"c": Series(["foo"], index=[0])})
- else:
- with tm.assert_produces_warning(FutureWarning, match="incompatible dtype"):
- df["c"].update(Series(["foo"], index=[0]))
expected = DataFrame(
{
diff --git a/pandas/tests/frame/test_constructors.py b/pandas/tests/frame/test_constructors.py
index 6d52bf161f4fa..c66b6a0f8b99b 100644
--- a/pandas/tests/frame/test_constructors.py
+++ b/pandas/tests/frame/test_constructors.py
@@ -2815,7 +2815,7 @@ def test_dict_data_arrow_column_expansion(self, key_val, col_vals, col_type):
)
result = DataFrame({key_val: [1, 2]}, columns=cols)
expected = DataFrame([[1, np.nan], [2, np.nan]], columns=cols)
- expected.iloc[:, 1] = expected.iloc[:, 1].astype(object)
+ expected.isetitem(1, expected.iloc[:, 1].astype(object))
tm.assert_frame_equal(result, expected)
diff --git a/pandas/tests/indexing/test_iloc.py b/pandas/tests/indexing/test_iloc.py
index f9c6939654ea1..7b2a9dd99d925 100644
--- a/pandas/tests/indexing/test_iloc.py
+++ b/pandas/tests/indexing/test_iloc.py
@@ -534,7 +534,8 @@ def test_iloc_setitem_frame_duplicate_columns_multiple_blocks(self):
# if the assigned values cannot be held by existing integer arrays,
# we cast
- df.iloc[:, 0] = df.iloc[:, 0] + 0.5
+ with tm.assert_produces_warning(FutureWarning, match="incompatible dtype"):
+ df.iloc[:, 0] = df.iloc[:, 0] + 0.5
assert len(df._mgr.blocks) == 2
expected = df.copy()
@@ -1468,6 +1469,7 @@ def test_iloc_setitem_pure_position_based(self):
def test_iloc_nullable_int64_size_1_nan(self):
# GH 31861
result = DataFrame({"a": ["test"], "b": [np.nan]})
- result.loc[:, "b"] = result.loc[:, "b"].astype("Int64")
+ with tm.assert_produces_warning(FutureWarning, match="incompatible dtype"):
+ result.loc[:, "b"] = result.loc[:, "b"].astype("Int64")
expected = DataFrame({"a": ["test"], "b": array([NA], dtype="Int64")})
tm.assert_frame_equal(result, expected)
diff --git a/pandas/tests/indexing/test_loc.py b/pandas/tests/indexing/test_loc.py
index c897afaeeee0e..ea52ed57c1a1b 100644
--- a/pandas/tests/indexing/test_loc.py
+++ b/pandas/tests/indexing/test_loc.py
@@ -578,7 +578,8 @@ def test_loc_setitem_consistency(self, frame_for_consistency, val):
}
)
df = frame_for_consistency.copy()
- df.loc[:, "date"] = val
+ with tm.assert_produces_warning(FutureWarning, match="incompatible dtype"):
+ df.loc[:, "date"] = val
tm.assert_frame_equal(df, expected)
def test_loc_setitem_consistency_dt64_to_str(self, frame_for_consistency):
@@ -592,7 +593,8 @@ def test_loc_setitem_consistency_dt64_to_str(self, frame_for_consistency):
}
)
df = frame_for_consistency.copy()
- df.loc[:, "date"] = "foo"
+ with tm.assert_produces_warning(FutureWarning, match="incompatible dtype"):
+ df.loc[:, "date"] = "foo"
tm.assert_frame_equal(df, expected)
def test_loc_setitem_consistency_dt64_to_float(self, frame_for_consistency):
@@ -605,14 +607,16 @@ def test_loc_setitem_consistency_dt64_to_float(self, frame_for_consistency):
}
)
df = frame_for_consistency.copy()
- df.loc[:, "date"] = 1.0
+ with tm.assert_produces_warning(FutureWarning, match="incompatible dtype"):
+ df.loc[:, "date"] = 1.0
tm.assert_frame_equal(df, expected)
def test_loc_setitem_consistency_single_row(self):
# GH 15494
# setting on frame with single row
df = DataFrame({"date": Series([Timestamp("20180101")])})
- df.loc[:, "date"] = "string"
+ with tm.assert_produces_warning(FutureWarning, match="incompatible dtype"):
+ df.loc[:, "date"] = "string"
expected = DataFrame({"date": Series(["string"])})
tm.assert_frame_equal(df, expected)
@@ -672,9 +676,10 @@ def test_loc_setitem_consistency_slice_column_len(self):
# timedelta64[m] -> float, so this cannot be done inplace, so
# no warning
- df.loc[:, ("Respondent", "Duration")] = df.loc[
- :, ("Respondent", "Duration")
- ] / Timedelta(60_000_000_000)
+ with tm.assert_produces_warning(FutureWarning, match="incompatible dtype"):
+ df.loc[:, ("Respondent", "Duration")] = df.loc[
+ :, ("Respondent", "Duration")
+ ] / Timedelta(60_000_000_000)
expected = Series(
[23.0, 12.0, 14.0, 36.0], index=df.index, name=("Respondent", "Duration")
@@ -1481,7 +1486,11 @@ def test_loc_setitem_datetimeindex_tz(self, idxer, tz_naive_fixture):
# if result started off with object dtype, then the .loc.__setitem__
# below would retain object dtype
result = DataFrame(index=idx, columns=["var"], dtype=np.float64)
- result.loc[:, idxer] = expected
+ with tm.assert_produces_warning(
+ FutureWarning if idxer == "var" else None, match="incompatible dtype"
+ ):
+ # See https://github.com/pandas-dev/pandas/issues/56223
+ result.loc[:, idxer] = expected
tm.assert_frame_equal(result, expected)
def test_loc_setitem_time_key(self):
diff --git a/pandas/tests/io/json/test_pandas.py b/pandas/tests/io/json/test_pandas.py
index 7254fd7cb345d..1527f2219d7b6 100644
--- a/pandas/tests/io/json/test_pandas.py
+++ b/pandas/tests/io/json/test_pandas.py
@@ -168,7 +168,7 @@ def test_frame_non_unique_columns(self, orient, data):
# in milliseconds; these are internally stored in nanosecond,
# so divide to get where we need
# TODO: a to_epoch method would also solve; see GH 14772
- expected.iloc[:, 0] = expected.iloc[:, 0].astype(np.int64) // 1000000
+ expected.isetitem(0, expected.iloc[:, 0].astype(np.int64) // 1000000)
elif orient == "split":
expected = df
expected.columns = ["x", "x.1"]
diff --git a/pandas/tests/reshape/merge/test_merge.py b/pandas/tests/reshape/merge/test_merge.py
index 72e6457e65e3c..1dcecc3d9b09d 100644
--- a/pandas/tests/reshape/merge/test_merge.py
+++ b/pandas/tests/reshape/merge/test_merge.py
@@ -2964,9 +2964,9 @@ def test_merge_empty_frames_column_order(left_empty, right_empty):
if left_empty and right_empty:
expected = expected.iloc[:0]
elif left_empty:
- expected.loc[:, "B"] = np.nan
+ expected["B"] = np.nan
elif right_empty:
- expected.loc[:, ["C", "D"]] = np.nan
+ expected[["C", "D"]] = np.nan
tm.assert_frame_equal(result, expected)
diff --git a/pandas/tests/series/indexing/test_indexing.py b/pandas/tests/series/indexing/test_indexing.py
index c52e47a812183..f4992b758af74 100644
--- a/pandas/tests/series/indexing/test_indexing.py
+++ b/pandas/tests/series/indexing/test_indexing.py
@@ -491,7 +491,7 @@ def _check_setitem_invalid(self, ser, invalid, indexer, warn):
np.datetime64("NaT"),
np.timedelta64("NaT"),
]
- _indexers = [0, [0], slice(0, 1), [True, False, False]]
+ _indexers = [0, [0], slice(0, 1), [True, False, False], slice(None, None, None)]
@pytest.mark.parametrize(
"invalid", _invalid_scalars + [1, 1.0, np.int64(1), np.float64(1)]
@@ -505,7 +505,7 @@ def test_setitem_validation_scalar_bool(self, invalid, indexer):
@pytest.mark.parametrize("indexer", _indexers)
def test_setitem_validation_scalar_int(self, invalid, any_int_numpy_dtype, indexer):
ser = Series([1, 2, 3], dtype=any_int_numpy_dtype)
- if isna(invalid) and invalid is not NaT:
+ if isna(invalid) and invalid is not NaT and not np.isnat(invalid):
warn = None
else:
warn = FutureWarning
| - [x] addresses the inconsistency brought up in https://github.com/pandas-dev/pandas/issues/39584#issuecomment-1748891309
- [ ] [Tests added and passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#writing-tests) if fixing a bug or adding a new feature
- [ ] All [code checks passed](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#pre-commit).
- [ ] Added [type annotations](https://pandas.pydata.org/pandas-docs/dev/development/contributing_codebase.html#type-hints) to new arguments/methods/functions.
- [ ] Added an entry in the latest `doc/source/whatsnew/vX.X.X.rst` file if fixing a bug or adding a new feature.
| https://api.github.com/repos/pandas-dev/pandas/pulls/56146 | 2023-11-24T11:56:17Z | 2024-01-09T22:30:06Z | 2024-01-09T22:30:06Z | 2024-01-10T12:47:44Z |
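The behavior this PR enforces — a full-column `.loc` setter warning before silently upcasting, per PDEP6 — can be sketched without pandas. `set_full_column` below is a hypothetical stand-in for the check added to `_setitem_single_column`, not the real implementation:

```python
import warnings

def set_full_column(column, new_values, dtype):
    """Hypothetical sketch: warn (PDEP6-style) when the incoming values
    cannot be held by the column's existing dtype, then assign anyway."""
    if any(type(v) is not dtype for v in new_values):
        warnings.warn(
            "Setting an item of incompatible dtype is deprecated "
            "and will raise an error in a future version of pandas.",
            FutureWarning,
            stacklevel=2,
        )
    column[:] = new_values

col = [1, 2]  # plays the role of an int64 column
with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    set_full_column(col, [True, False], int)   # bool into int: warns
    set_full_column(col, [3, 4], int)          # int into int: silent

assert col == [3, 4]
assert len(caught) == 1
assert issubclass(caught[0].category, FutureWarning)
```

As in `test_full_setter_loc_incompatible_dtype` above, the compatible assignment stays silent while the incompatible one warns.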
ENH: add bounds-checking preamble to groupby_X cython code | diff --git a/doc/source/v0.11.0.txt b/doc/source/v0.11.0.txt
index 60ec7de5c4d8e..fdf25a376f68f 100644
--- a/doc/source/v0.11.0.txt
+++ b/doc/source/v0.11.0.txt
@@ -314,6 +314,7 @@ Bug Fixes
- Fixed slow printing of large Dataframes, due to inefficient dtype
reporting (GH2807_)
+ - Fixed a segfault when using a function as grouper in groupby (GH3035_)
- Fix pretty-printing of infinite data structures (closes GH2978_)
- str.contains ignored na argument (GH2806_)
@@ -325,6 +326,7 @@ on GitHub for a complete list.
.. _GH2810: https://github.com/pydata/pandas/issues/2810
.. _GH2837: https://github.com/pydata/pandas/issues/2837
.. _GH2898: https://github.com/pydata/pandas/issues/2898
+.. _GH3035: https://github.com/pydata/pandas/issues/3035
.. _GH2978: https://github.com/pydata/pandas/issues/2978
.. _GH2739: https://github.com/pydata/pandas/issues/2739
.. _GH2710: https://github.com/pydata/pandas/issues/2710
diff --git a/pandas/core/groupby.py b/pandas/core/groupby.py
index 3f12f773db96a..7e20ec95fd763 100644
--- a/pandas/core/groupby.py
+++ b/pandas/core/groupby.py
@@ -57,6 +57,8 @@ def _groupby_function(name, alias, npfunc, numeric_only=True,
def f(self):
try:
return self._cython_agg_general(alias, numeric_only=numeric_only)
+ except AssertionError as e:
+ raise SpecificationError(str(e))
except Exception:
result = self.aggregate(lambda x: npfunc(x, axis=self.axis))
if _convert:
@@ -348,7 +350,7 @@ def mean(self):
"""
try:
return self._cython_agg_general('mean')
- except DataError:
+ except GroupByError:
raise
except Exception: # pragma: no cover
f = lambda x: x.mean(axis=self.axis)
@@ -362,7 +364,7 @@ def median(self):
"""
try:
return self._cython_agg_general('median')
- except DataError:
+ except GroupByError:
raise
except Exception: # pragma: no cover
f = lambda x: x.median(axis=self.axis)
@@ -462,7 +464,10 @@ def _cython_agg_general(self, how, numeric_only=True):
if numeric_only and not is_numeric:
continue
- result, names = self.grouper.aggregate(obj.values, how)
+ try:
+ result, names = self.grouper.aggregate(obj.values, how)
+ except AssertionError as e:
+ raise GroupByError(str(e))
output[name] = result
if len(output) == 0:
@@ -1200,6 +1205,13 @@ def __init__(self, index, grouper=None, name=None, level=None,
# no level passed
if not isinstance(self.grouper, np.ndarray):
self.grouper = self.index.map(self.grouper)
+ if not (hasattr(self.grouper,"__len__") and \
+ len(self.grouper) == len(self.index)):
+ errmsg = "Grouper result violates len(labels) == len(data)\n"
+ errmsg += "result: %s" % com.pprint_thing(self.grouper)
+ self.grouper = None # Try for sanity
+ raise AssertionError(errmsg)
+
def __repr__(self):
return 'Grouping(%s)' % self.name
@@ -1718,9 +1730,10 @@ def _aggregate_multiple_funcs(self, arg):
grouper=self.grouper)
results.append(colg.aggregate(arg))
keys.append(col)
- except (TypeError, DataError):
+ except (TypeError, DataError) :
pass
-
+ except SpecificationError:
+ raise
result = concat(results, keys=keys, axis=1)
return result
diff --git a/pandas/src/generate_code.py b/pandas/src/generate_code.py
index c94ed8730f32a..fa9e21e16f57f 100644
--- a/pandas/src/generate_code.py
+++ b/pandas/src/generate_code.py
@@ -593,6 +593,9 @@ def groupby_%(name)s(ndarray[%(c_type)s] index, ndarray labels):
length = len(index)
+ if not length == len(labels):
+ raise AssertionError("len(index) != len(labels)")
+
for i in range(length):
key = util.get_value_1d(labels, i)
@@ -625,6 +628,9 @@ def group_last_%(name)s(ndarray[%(dest_type2)s, ndim=2] out,
ndarray[%(dest_type2)s, ndim=2] resx
ndarray[int64_t, ndim=2] nobs
+ if not len(values) == len(labels):
+ raise AssertionError("len(index) != len(labels)")
+
nobs = np.zeros((<object> out).shape, dtype=np.int64)
resx = np.empty_like(out)
@@ -760,6 +766,9 @@ def group_nth_%(name)s(ndarray[%(dest_type2)s, ndim=2] out,
ndarray[%(dest_type2)s, ndim=2] resx
ndarray[int64_t, ndim=2] nobs
+ if not len(values) == len(labels):
+ raise AssertionError("len(index) != len(labels)")
+
nobs = np.zeros((<object> out).shape, dtype=np.int64)
resx = np.empty_like(out)
@@ -802,6 +811,9 @@ def group_add_%(name)s(ndarray[%(dest_type2)s, ndim=2] out,
%(dest_type2)s val, count
ndarray[%(dest_type2)s, ndim=2] sumx, nobs
+ if not len(values) == len(labels):
+ raise AssertionError("len(index) != len(labels)")
+
nobs = np.zeros_like(out)
sumx = np.zeros_like(out)
@@ -915,6 +927,9 @@ def group_prod_%(name)s(ndarray[%(dest_type2)s, ndim=2] out,
%(dest_type2)s val, count
ndarray[%(dest_type2)s, ndim=2] prodx, nobs
+ if not len(values) == len(labels):
+ raise AssertionError("len(index) != len(labels)")
+
nobs = np.zeros_like(out)
prodx = np.ones_like(out)
@@ -1025,6 +1040,9 @@ def group_var_%(name)s(ndarray[%(dest_type2)s, ndim=2] out,
%(dest_type2)s val, ct
ndarray[%(dest_type2)s, ndim=2] nobs, sumx, sumxx
+ if not len(values) == len(labels):
+ raise AssertionError("len(index) != len(labels)")
+
nobs = np.zeros_like(out)
sumx = np.zeros_like(out)
sumxx = np.zeros_like(out)
@@ -1220,6 +1238,9 @@ def group_max_%(name)s(ndarray[%(dest_type2)s, ndim=2] out,
%(dest_type2)s val, count
ndarray[%(dest_type2)s, ndim=2] maxx, nobs
+ if not len(values) == len(labels):
+ raise AssertionError("len(index) != len(labels)")
+
nobs = np.zeros_like(out)
maxx = np.empty_like(out)
@@ -1342,6 +1363,9 @@ def group_min_%(name)s(ndarray[%(dest_type2)s, ndim=2] out,
%(dest_type2)s val, count
ndarray[%(dest_type2)s, ndim=2] minx, nobs
+ if not len(values) == len(labels):
+ raise AssertionError("len(index) != len(labels)")
+
nobs = np.zeros_like(out)
minx = np.empty_like(out)
@@ -1399,6 +1423,9 @@ def group_mean_%(name)s(ndarray[%(dest_type2)s, ndim=2] out,
%(dest_type2)s val, count
ndarray[%(dest_type2)s, ndim=2] sumx, nobs
+ if not len(values) == len(labels):
+ raise AssertionError("len(index) != len(labels)")
+
nobs = np.zeros_like(out)
sumx = np.zeros_like(out)
diff --git a/pandas/src/generated.pyx b/pandas/src/generated.pyx
index ce83e08782ea2..11a610375830b 100644
--- a/pandas/src/generated.pyx
+++ b/pandas/src/generated.pyx
@@ -1967,6 +1967,9 @@ def groupby_float64(ndarray[float64_t] index, ndarray labels):
length = len(index)
+ if not length == len(labels):
+ raise AssertionError("len(index) != len(labels)")
+
for i in range(length):
key = util.get_value_1d(labels, i)
@@ -1992,6 +1995,9 @@ def groupby_float32(ndarray[float32_t] index, ndarray labels):
length = len(index)
+ if not length == len(labels):
+ raise AssertionError("len(index) != len(labels)")
+
for i in range(length):
key = util.get_value_1d(labels, i)
@@ -2017,6 +2023,9 @@ def groupby_object(ndarray[object] index, ndarray labels):
length = len(index)
+ if not length == len(labels):
+ raise AssertionError("len(index) != len(labels)")
+
for i in range(length):
key = util.get_value_1d(labels, i)
@@ -2042,6 +2051,9 @@ def groupby_int32(ndarray[int32_t] index, ndarray labels):
length = len(index)
+ if not length == len(labels):
+ raise AssertionError("len(index) != len(labels)")
+
for i in range(length):
key = util.get_value_1d(labels, i)
@@ -2067,6 +2079,9 @@ def groupby_int64(ndarray[int64_t] index, ndarray labels):
length = len(index)
+ if not length == len(labels):
+ raise AssertionError("len(index) != len(labels)")
+
for i in range(length):
key = util.get_value_1d(labels, i)
@@ -2092,6 +2107,9 @@ def groupby_bool(ndarray[uint8_t] index, ndarray labels):
length = len(index)
+ if not length == len(labels):
+ raise AssertionError("len(index) != len(labels)")
+
for i in range(length):
key = util.get_value_1d(labels, i)
@@ -3334,7 +3352,7 @@ def take_2d_axis1_bool_bool(ndarray[uint8_t, ndim=2] values,
n = len(values)
k = len(indexer)
-
+
fv = fill_value
IF True:
@@ -3374,7 +3392,7 @@ def take_2d_axis1_bool_object(ndarray[uint8_t, ndim=2] values,
n = len(values)
k = len(indexer)
-
+
fv = fill_value
IF False:
@@ -3414,7 +3432,7 @@ def take_2d_axis1_int8_int8(ndarray[int8_t, ndim=2] values,
n = len(values)
k = len(indexer)
-
+
fv = fill_value
IF True:
@@ -3454,7 +3472,7 @@ def take_2d_axis1_int8_int32(ndarray[int8_t, ndim=2] values,
n = len(values)
k = len(indexer)
-
+
fv = fill_value
IF False:
@@ -3494,7 +3512,7 @@ def take_2d_axis1_int8_int64(ndarray[int8_t, ndim=2] values,
n = len(values)
k = len(indexer)
-
+
fv = fill_value
IF False:
@@ -3534,7 +3552,7 @@ def take_2d_axis1_int8_float64(ndarray[int8_t, ndim=2] values,
n = len(values)
k = len(indexer)
-
+
fv = fill_value
IF False:
@@ -3574,7 +3592,7 @@ def take_2d_axis1_int16_int16(ndarray[int16_t, ndim=2] values,
n = len(values)
k = len(indexer)
-
+
fv = fill_value
IF True:
@@ -3614,7 +3632,7 @@ def take_2d_axis1_int16_int32(ndarray[int16_t, ndim=2] values,
n = len(values)
k = len(indexer)
-
+
fv = fill_value
IF False:
@@ -3654,7 +3672,7 @@ def take_2d_axis1_int16_int64(ndarray[int16_t, ndim=2] values,
n = len(values)
k = len(indexer)
-
+
fv = fill_value
IF False:
@@ -3694,7 +3712,7 @@ def take_2d_axis1_int16_float64(ndarray[int16_t, ndim=2] values,
n = len(values)
k = len(indexer)
-
+
fv = fill_value
IF False:
@@ -3734,7 +3752,7 @@ def take_2d_axis1_int32_int32(ndarray[int32_t, ndim=2] values,
n = len(values)
k = len(indexer)
-
+
fv = fill_value
IF True:
@@ -3774,7 +3792,7 @@ def take_2d_axis1_int32_int64(ndarray[int32_t, ndim=2] values,
n = len(values)
k = len(indexer)
-
+
fv = fill_value
IF False:
@@ -3814,7 +3832,7 @@ def take_2d_axis1_int32_float64(ndarray[int32_t, ndim=2] values,
n = len(values)
k = len(indexer)
-
+
fv = fill_value
IF False:
@@ -3854,7 +3872,7 @@ def take_2d_axis1_int64_int64(ndarray[int64_t, ndim=2] values,
n = len(values)
k = len(indexer)
-
+
fv = fill_value
IF True:
@@ -3894,7 +3912,7 @@ def take_2d_axis1_int64_float64(ndarray[int64_t, ndim=2] values,
n = len(values)
k = len(indexer)
-
+
fv = fill_value
IF False:
@@ -3934,7 +3952,7 @@ def take_2d_axis1_float32_float32(ndarray[float32_t, ndim=2] values,
n = len(values)
k = len(indexer)
-
+
fv = fill_value
IF True:
@@ -3974,7 +3992,7 @@ def take_2d_axis1_float32_float64(ndarray[float32_t, ndim=2] values,
n = len(values)
k = len(indexer)
-
+
fv = fill_value
IF False:
@@ -4014,7 +4032,7 @@ def take_2d_axis1_float64_float64(ndarray[float64_t, ndim=2] values,
n = len(values)
k = len(indexer)
-
+
fv = fill_value
IF True:
@@ -4054,7 +4072,7 @@ def take_2d_axis1_object_object(ndarray[object, ndim=2] values,
n = len(values)
k = len(indexer)
-
+
fv = fill_value
IF False:
@@ -4890,6 +4908,9 @@ def group_last_float64(ndarray[float64_t, ndim=2] out,
ndarray[float64_t, ndim=2] resx
ndarray[int64_t, ndim=2] nobs
+ if not len(values) == len(labels):
+ raise AssertionError("len(index) != len(labels)")
+
nobs = np.zeros((<object> out).shape, dtype=np.int64)
resx = np.empty_like(out)
@@ -4930,6 +4951,9 @@ def group_last_float32(ndarray[float32_t, ndim=2] out,
ndarray[float32_t, ndim=2] resx
ndarray[int64_t, ndim=2] nobs
+ if not len(values) == len(labels):
+ raise AssertionError("len(index) != len(labels)")
+
nobs = np.zeros((<object> out).shape, dtype=np.int64)
resx = np.empty_like(out)
@@ -5060,6 +5084,9 @@ def group_nth_float64(ndarray[float64_t, ndim=2] out,
ndarray[float64_t, ndim=2] resx
ndarray[int64_t, ndim=2] nobs
+ if not len(values) == len(labels):
+ raise AssertionError("len(index) != len(labels)")
+
nobs = np.zeros((<object> out).shape, dtype=np.int64)
resx = np.empty_like(out)
@@ -5101,6 +5128,9 @@ def group_nth_float32(ndarray[float32_t, ndim=2] out,
ndarray[float32_t, ndim=2] resx
ndarray[int64_t, ndim=2] nobs
+ if not len(values) == len(labels):
+ raise AssertionError("len(index) != len(labels)")
+
nobs = np.zeros((<object> out).shape, dtype=np.int64)
resx = np.empty_like(out)
@@ -5233,6 +5263,9 @@ def group_add_float64(ndarray[float64_t, ndim=2] out,
float64_t val, count
ndarray[float64_t, ndim=2] sumx, nobs
+ if not len(values) == len(labels):
+ raise AssertionError("len(index) != len(labels)")
+
nobs = np.zeros_like(out)
sumx = np.zeros_like(out)
@@ -5286,6 +5319,9 @@ def group_add_float32(ndarray[float32_t, ndim=2] out,
float32_t val, count
ndarray[float32_t, ndim=2] sumx, nobs
+ if not len(values) == len(labels):
+ raise AssertionError("len(index) != len(labels)")
+
nobs = np.zeros_like(out)
sumx = np.zeros_like(out)
@@ -5453,6 +5489,9 @@ def group_prod_float64(ndarray[float64_t, ndim=2] out,
float64_t val, count
ndarray[float64_t, ndim=2] prodx, nobs
+ if not len(values) == len(labels):
+ raise AssertionError("len(index) != len(labels)")
+
nobs = np.zeros_like(out)
prodx = np.ones_like(out)
@@ -5506,6 +5545,9 @@ def group_prod_float32(ndarray[float32_t, ndim=2] out,
float32_t val, count
ndarray[float32_t, ndim=2] prodx, nobs
+ if not len(values) == len(labels):
+ raise AssertionError("len(index) != len(labels)")
+
nobs = np.zeros_like(out)
prodx = np.ones_like(out)
@@ -5670,6 +5712,9 @@ def group_var_float64(ndarray[float64_t, ndim=2] out,
float64_t val, ct
ndarray[float64_t, ndim=2] nobs, sumx, sumxx
+ if not len(values) == len(labels):
+ raise AssertionError("len(index) != len(labels)")
+
nobs = np.zeros_like(out)
sumx = np.zeros_like(out)
sumxx = np.zeros_like(out)
@@ -5728,6 +5773,9 @@ def group_var_float32(ndarray[float32_t, ndim=2] out,
float32_t val, ct
ndarray[float32_t, ndim=2] nobs, sumx, sumxx
+ if not len(values) == len(labels):
+ raise AssertionError("len(index) != len(labels)")
+
nobs = np.zeros_like(out)
sumx = np.zeros_like(out)
sumxx = np.zeros_like(out)
@@ -5910,6 +5958,9 @@ def group_mean_float64(ndarray[float64_t, ndim=2] out,
float64_t val, count
ndarray[float64_t, ndim=2] sumx, nobs
+ if not len(values) == len(labels):
+ raise AssertionError("len(index) != len(labels)")
+
nobs = np.zeros_like(out)
sumx = np.zeros_like(out)
@@ -5959,6 +6010,9 @@ def group_mean_float32(ndarray[float32_t, ndim=2] out,
float32_t val, count
ndarray[float32_t, ndim=2] sumx, nobs
+ if not len(values) == len(labels):
+ raise AssertionError("len(index) != len(labels)")
+
nobs = np.zeros_like(out)
sumx = np.zeros_like(out)
@@ -6119,6 +6173,9 @@ def group_min_float64(ndarray[float64_t, ndim=2] out,
float64_t val, count
ndarray[float64_t, ndim=2] minx, nobs
+ if not len(values) == len(labels):
+ raise AssertionError("len(index) != len(labels)")
+
nobs = np.zeros_like(out)
minx = np.empty_like(out)
@@ -6176,6 +6233,9 @@ def group_min_float32(ndarray[float32_t, ndim=2] out,
float32_t val, count
ndarray[float32_t, ndim=2] minx, nobs
+ if not len(values) == len(labels):
+ raise AssertionError("len(index) != len(labels)")
+
nobs = np.zeros_like(out)
minx = np.empty_like(out)
@@ -6357,6 +6417,9 @@ def group_max_float64(ndarray[float64_t, ndim=2] out,
float64_t val, count
ndarray[float64_t, ndim=2] maxx, nobs
+ if not len(values) == len(labels):
+ raise AssertionError("len(index) != len(labels)")
+
nobs = np.zeros_like(out)
maxx = np.empty_like(out)
@@ -6414,6 +6477,9 @@ def group_max_float32(ndarray[float32_t, ndim=2] out,
float32_t val, count
ndarray[float32_t, ndim=2] maxx, nobs
+ if not len(values) == len(labels):
+ raise AssertionError("len(index) != len(labels)")
+
nobs = np.zeros_like(out)
maxx = np.empty_like(out)
diff --git a/pandas/tests/test_groupby.py b/pandas/tests/test_groupby.py
index 4b1770dd4f5df..0e5130ea34674 100644
--- a/pandas/tests/test_groupby.py
+++ b/pandas/tests/test_groupby.py
@@ -218,6 +218,30 @@ def test_groupby_dict_mapping(self):
assert_series_equal(result, result2)
assert_series_equal(result, expected2)
+ def test_groupby_bounds_check(self):
+ import pandas as pd
+ # groupby_X is code-generated, so if one variant
+        # does, the rest probably do too
+ a = np.array([1,2],dtype='object')
+ b = np.array([1,2,3],dtype='object')
+ self.assertRaises(AssertionError, pd.algos.groupby_object,a, b)
+
+ def test_groupby_grouper_f_sanity_checked(self):
+ import pandas as pd
+ dates = pd.date_range('01-Jan-2013', periods=12, freq='MS')
+ ts = pd.TimeSeries(np.random.randn(12), index=dates)
+
+ # GH3035
+ # index.map is used to apply grouper to the index
+ # if it fails on the elements, map tries it on the entire index as
+ # a sequence. That can yield invalid results that cause trouble
+ # down the line.
+        # the surprise comes from using key[0:6] rather than str(key)[0:6]
+ # when the elements are Timestamp.
+ # the result is Index[0:6], very confusing.
+
+ self.assertRaises(AssertionError, ts.groupby,lambda key: key[0:6])
+
def test_groupby_nonobject_dtype(self):
key = self.mframe.index.labels[0]
grouped = self.mframe.groupby(key)
| continues #3017, does an easier bit under #3028
| https://api.github.com/repos/pandas-dev/pandas/pulls/3031 | 2013-03-12T20:09:24Z | 2013-03-17T15:32:36Z | 2013-03-17T15:32:36Z | 2014-06-26T01:13:26Z |
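The preamble added to each generated kernel is the same three lines. A pure-Python sketch of a `group_add`-style kernel (hypothetical — not the Cython signature) shows why the length check matters: a short `labels` array would otherwise be read past its end, which is a segfault in bounds-check-free Cython code:

```python
def group_add(values, labels, ngroups):
    """Hypothetical pure-Python sketch of a generated group_add kernel:
    validate lengths up front, as the Cython preamble now does, instead
    of reading past the end of the shorter array."""
    if not len(values) == len(labels):
        raise AssertionError("len(index) != len(labels)")
    out = [0.0] * ngroups
    for i in range(len(values)):
        out[labels[i]] += values[i]
    return out

# matched lengths: labels 0 and 0 accumulate into group 0
assert group_add([1.0, 2.0, 3.0], [0, 1, 0], 2) == [4.0, 2.0]

try:
    group_add([1.0, 2.0, 3.0], [0, 1], 2)  # mismatched lengths
except AssertionError as exc:
    mismatch_msg = str(exc)

assert mismatch_msg == "len(index) != len(labels)"
```

This mirrors `test_groupby_bounds_check` in the diff, which feeds `pd.algos.groupby_object` arrays of lengths 2 and 3 and expects an `AssertionError`.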
BUG: Bug in user-facing take with negative indices was incorrect | diff --git a/RELEASE.rst b/RELEASE.rst
index 970b89c99a7b1..a4955db02096e 100644
--- a/RELEASE.rst
+++ b/RELEASE.rst
@@ -140,7 +140,8 @@ pandas 0.11.0
- Bug in value_counts of ``datetime64[ns]`` Series (GH3002_)
- Fixed printing of ``NaT` in an index
- Bug in idxmin/idxmax of ``datetime64[ns]`` Series with ``NaT`` (GH2982__)
- - Bug in ``icol`` with negative indicies was incorrect producing incorrect return values (see GH2922_)
+ - Bug in ``icol, take`` with negative indices was producing incorrect return
+ values (see GH2922_, GH2892_), also check for out-of-bounds indices (GH3029_)
- Bug in DataFrame column insertion when the column creation fails, existing frame is left in
an irrecoverable state (GH3010_)
@@ -160,6 +161,7 @@ pandas 0.11.0
.. _GH2807: https://github.com/pydata/pandas/issues/2807
.. _GH2849: https://github.com/pydata/pandas/issues/2849
.. _GH2898: https://github.com/pydata/pandas/issues/2898
+.. _GH2892: https://github.com/pydata/pandas/issues/2892
.. _GH2909: https://github.com/pydata/pandas/issues/2909
.. _GH2922: https://github.com/pydata/pandas/issues/2922
.. _GH2931: https://github.com/pydata/pandas/issues/2931
@@ -170,6 +172,7 @@ pandas 0.11.0
.. _GH3002: https://github.com/pydata/pandas/issues/3002
.. _GH3010: https://github.com/pydata/pandas/issues/3010
.. _GH3012: https://github.com/pydata/pandas/issues/3012
+.. _GH3029: https://github.com/pydata/pandas/issues/3029
pandas 0.10.1
diff --git a/pandas/core/frame.py b/pandas/core/frame.py
index 7ae2cc6d5b6ed..609a04fdb15eb 100644
--- a/pandas/core/frame.py
+++ b/pandas/core/frame.py
@@ -28,7 +28,8 @@
from pandas.core.generic import NDFrame
from pandas.core.index import Index, MultiIndex, _ensure_index
from pandas.core.indexing import (_NDFrameIndexer, _maybe_droplevels,
- _is_index_slice, _check_bool_indexer)
+ _is_index_slice, _check_bool_indexer,
+ _maybe_convert_indices)
from pandas.core.internals import BlockManager, make_block, form_blocks
from pandas.core.series import Series, _radd_compat
import pandas.core.expressions as expressions
@@ -1928,11 +1929,6 @@ def _ixs(self, i, axis=0, copy=False):
label = self.columns[i]
if isinstance(label, Index):
- # if we have negative indicies, translate to postive here
- # (take doesen't deal properly with these)
- l = len(self.columns)
- i = [ v if v >= 0 else l+v for v in i ]
-
return self.take(i, axis=1)
values = self._data.iget(i)
@@ -2911,11 +2907,13 @@ def take(self, indices, axis=0):
-------
taken : DataFrame
"""
- if isinstance(indices, list):
- indices = np.array(indices)
+
+ # check/convert indices here
+ indices = _maybe_convert_indices(indices, len(self._get_axis(axis)))
+
if self._is_mixed_type:
if axis == 0:
- new_data = self._data.take(indices, axis=1)
+ new_data = self._data.take(indices, axis=1, verify=False)
return DataFrame(new_data)
else:
new_columns = self.columns.take(indices)
diff --git a/pandas/core/generic.py b/pandas/core/generic.py
index 236d7d8aeadf8..093f600f8c4ea 100644
--- a/pandas/core/generic.py
+++ b/pandas/core/generic.py
@@ -4,6 +4,7 @@
from pandas.core.index import MultiIndex
import pandas.core.indexing as indexing
+from pandas.core.indexing import _maybe_convert_indices
from pandas.tseries.index import DatetimeIndex
import pandas.core.common as com
import pandas.lib as lib
@@ -943,12 +944,16 @@ def take(self, indices, axis=0):
-------
taken : type of caller
"""
+
+ # check/convert indices here
+ indices = _maybe_convert_indices(indices, len(self._get_axis(axis)))
+
if axis == 0:
labels = self._get_axis(axis)
new_items = labels.take(indices)
new_data = self._data.reindex_axis(new_items, axis=0)
else:
- new_data = self._data.take(indices, axis=axis)
+ new_data = self._data.take(indices, axis=axis, verify=False)
return self._constructor(new_data)
def tz_convert(self, tz, axis=0, copy=True):
diff --git a/pandas/core/indexing.py b/pandas/core/indexing.py
index b86518e8947ef..17423075b6bfb 100644
--- a/pandas/core/indexing.py
+++ b/pandas/core/indexing.py
@@ -913,6 +913,20 @@ def _is_series(obj):
return isinstance(obj, Series)
+def _maybe_convert_indices(indices, n):
+ """ if we have negative indicies, translate to postive here
+ if have indicies that are out-of-bounds, raise an IndexError """
+ if isinstance(indices, list):
+ indices = np.array(indices)
+
+ mask = indices<0
+ if mask.any():
+ indices[mask] += n
+ mask = (indices>=n) | (indices<0)
+ if mask.any():
+ raise IndexError("indices are out-of-bounds")
+ return indices
+
def _maybe_convert_ix(*args):
"""
We likely want to take the cross-product
diff --git a/pandas/core/internals.py b/pandas/core/internals.py
index 5c0f0935346cc..96cc41be26b92 100644
--- a/pandas/core/internals.py
+++ b/pandas/core/internals.py
@@ -5,7 +5,7 @@
import numpy as np
from pandas.core.index import Index, _ensure_index, _handle_legacy_indexes
-from pandas.core.indexing import _check_slice_bounds
+from pandas.core.indexing import _check_slice_bounds, _maybe_convert_indices
import pandas.core.common as com
import pandas.lib as lib
import pandas.tslib as tslib
@@ -1517,13 +1517,16 @@ def _make_na_block(self, items, ref_items, fill_value=np.nan):
na_block = make_block(block_values, items, ref_items)
return na_block
- def take(self, indexer, axis=1):
+ def take(self, indexer, axis=1, verify=True):
if axis < 1:
raise AssertionError('axis must be at least 1, got %d' % axis)
indexer = com._ensure_platform_int(indexer)
-
n = len(self.axes[axis])
+
+ if verify:
+ indexer = _maybe_convert_indices(indexer, n)
+
if ((indexer == -1) | (indexer >= n)).any():
raise Exception('Indices must be nonzero and less than '
'the axis length')
diff --git a/pandas/tests/test_frame.py b/pandas/tests/test_frame.py
index 85f66148eba8a..903ae1040413f 100644
--- a/pandas/tests/test_frame.py
+++ b/pandas/tests/test_frame.py
@@ -8602,12 +8602,43 @@ def test_fillna_col_reordering(self):
self.assert_(df.columns.tolist() == filled.columns.tolist())
def test_take(self):
+
# homogeneous
#----------------------------------------
+ order = [3, 1, 2, 0]
+ for df in [self.frame]:
+
+ result = df.take(order, axis=0)
+ expected = df.reindex(df.index.take(order))
+ assert_frame_equal(result, expected)
+
+ # axis = 1
+ result = df.take(order, axis=1)
+ expected = df.ix[:, ['D', 'B', 'C', 'A']]
+ assert_frame_equal(result, expected, check_names=False)
+
+ # negative indices
+ order = [2,1,-1]
+ for df in [self.frame]:
+
+ result = df.take(order, axis=0)
+ expected = df.reindex(df.index.take(order))
+ assert_frame_equal(result, expected)
+
+ # axis = 1
+ result = df.take(order, axis=1)
+ expected = df.ix[:, ['C', 'B', 'D']]
+ assert_frame_equal(result, expected, check_names=False)
+
+ # illegal indices
+ self.assertRaises(IndexError, df.take, [3,1,2,30], axis=0)
+ self.assertRaises(IndexError, df.take, [3,1,2,-31], axis=0)
+ self.assertRaises(IndexError, df.take, [3,1,2,5], axis=1)
+ self.assertRaises(IndexError, df.take, [3,1,2,-5], axis=1)
# mixed-dtype
#----------------------------------------
- order = [4, 1, 2, 0, 3]
+ order = [4, 1, 2, 0, 3]
for df in [self.mixed_frame]:
result = df.take(order, axis=0)
@@ -8619,6 +8650,19 @@ def test_take(self):
expected = df.ix[:, ['foo', 'B', 'C', 'A', 'D']]
assert_frame_equal(result, expected)
+ # negative indices
+ order = [4,1,-2]
+ for df in [self.mixed_frame]:
+
+ result = df.take(order, axis=0)
+ expected = df.reindex(df.index.take(order))
+ assert_frame_equal(result, expected)
+
+ # axis = 1
+ result = df.take(order, axis=1)
+ expected = df.ix[:, ['foo', 'B', 'D']]
+ assert_frame_equal(result, expected)
+
# by dtype
order = [1, 2, 0, 3]
for df in [self.mixed_float,self.mixed_int]:
diff --git a/pandas/tests/test_panel.py b/pandas/tests/test_panel.py
index b3aa38b9a972e..d857e999bdd33 100644
--- a/pandas/tests/test_panel.py
+++ b/pandas/tests/test_panel.py
@@ -1013,7 +1013,11 @@ def test_take(self):
expected = self.panel.reindex(minor=['D', 'A', 'B', 'C'])
assert_panel_equal(result, expected)
- self.assertRaises(Exception, self.panel.take, [3, -1, 1, 2], axis=2)
+ # negative indices ok
+ expected = self.panel.reindex(minor=['D', 'D', 'B', 'C'])
+ result = self.panel.take([3, -1, 1, 2], axis=2)
+ assert_panel_equal(result, expected)
+
self.assertRaises(Exception, self.panel.take, [4, 0, 1, 2], axis=2)
def test_sort_index(self):
| issue mentioned in #2892
note: does not solve the actual segmentation fault issue in #2892; only fixes user-facing take with negative indices
| https://api.github.com/repos/pandas-dev/pandas/pulls/3027 | 2013-03-12T18:20:24Z | 2013-03-12T23:17:20Z | 2013-03-12T23:17:20Z | 2014-06-26T05:58:14Z |
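The core of this PR is the `_maybe_convert_indices` helper: translate negative indices to positive, then reject anything out of bounds. A pure-Python sketch of the same logic (the real helper operates on numpy arrays):

```python
def maybe_convert_indices(indices, n):
    """Sketch of the helper this PR adds: map negative indices to
    their positive equivalents, then raise on out-of-bounds ones."""
    # a negative index i means position n + i, as in plain Python indexing
    indices = [i + n if i < 0 else i for i in indices]
    if any(i < 0 or i >= n for i in indices):
        raise IndexError("indices are out-of-bounds")
    return indices
```

`take` now runs this check up front, so `df.take([2, 1, -1])` behaves like fancy indexing instead of silently producing wrong rows.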
Continuing #2057, s/assert/raise AssertionError/g | diff --git a/pandas/core/config.py b/pandas/core/config.py
index 59d2772c857bd..2d62b807cf203 100644
--- a/pandas/core/config.py
+++ b/pandas/core/config.py
@@ -313,8 +313,10 @@ def __doc__(self):
class option_context(object):
def __init__(self, *args):
- assert len(args) % 2 == 0 and len(args) >= 2, \
- "Need to invoke as option_context(pat,val,[(pat,val),..))."
+ if not ( len(args) % 2 == 0 and len(args) >= 2):
+ errmsg = "Need to invoke as option_context(pat,val,[(pat,val),..))."
+ raise AssertionError(errmsg)
+
ops = zip(args[::2], args[1::2])
undo = []
for pat, val in ops:
diff --git a/pandas/core/frame.py b/pandas/core/frame.py
index 6fd627f42e055..69e7e4178ecfd 100644
--- a/pandas/core/frame.py
+++ b/pandas/core/frame.py
@@ -709,7 +709,8 @@ def __unicode__(self):
self.info(buf=buf, verbose=verbose)
value = buf.getvalue()
- assert type(value) == unicode
+ if not type(value) == unicode:
+ raise AssertionError()
return value
diff --git a/pandas/core/panel.py b/pandas/core/panel.py
index 4f346d2e1860e..38abfcf925363 100644
--- a/pandas/core/panel.py
+++ b/pandas/core/panel.py
@@ -618,7 +618,8 @@ def get_value(self, *args):
value : scalar value
"""
# require an arg for each axis
- assert(len(args) == self._AXIS_LEN)
+ if not ((len(args) == self._AXIS_LEN)):
+ raise AssertionError()
# hm, two layers to the onion
frame = self._get_item_cache(args[0])
@@ -642,7 +643,8 @@ def set_value(self, *args):
otherwise a new object
"""
# require an arg for each axis and the value
- assert(len(args) == self._AXIS_LEN + 1)
+ if not ((len(args) == self._AXIS_LEN + 1)):
+ raise AssertionError()
try:
frame = self._get_item_cache(args[0])
@@ -685,7 +687,8 @@ def __setitem__(self, key, value):
**self._construct_axes_dict_for_slice(self._AXIS_ORDERS[1:]))
mat = value.values
elif isinstance(value, np.ndarray):
- assert(value.shape == shape[1:])
+ if not ((value.shape == shape[1:])):
+ raise AssertionError()
mat = np.asarray(value)
elif np.isscalar(value):
dtype, value = _infer_dtype_from_scalar(value)
@@ -1481,7 +1484,8 @@ def _prep_ndarray(self, values, copy=True):
else:
if copy:
values = values.copy()
- assert(values.ndim == self._AXIS_LEN)
+ if not ((values.ndim == self._AXIS_LEN)):
+ raise AssertionError()
return values
@staticmethod
diff --git a/pandas/core/series.py b/pandas/core/series.py
index a68234b5d6bc1..7d9303fa75acd 100644
--- a/pandas/core/series.py
+++ b/pandas/core/series.py
@@ -1126,7 +1126,8 @@ def __unicode__(self):
else:
result = u'Series([], dtype: %s)' % self.dtype
- assert type(result) == unicode
+ if not ( type(result) == unicode):
+ raise AssertionError()
return result
def __repr__(self):
@@ -1194,7 +1195,9 @@ def to_string(self, buf=None, na_rep='NaN', float_format=None,
the_repr = self._get_repr(float_format=float_format, na_rep=na_rep,
length=length, dtype=dtype, name=name)
- assert type(the_repr) == unicode
+ # catch contract violations
+ if not type(the_repr) == unicode:
+ raise AssertionError("expected unicode string")
if buf is None:
return the_repr
@@ -1212,7 +1215,8 @@ def _get_repr(self, name=False, print_header=False, length=True, dtype=True,
length=length, dtype=dtype, na_rep=na_rep,
float_format=float_format)
result = formatter.to_string()
- assert type(result) == unicode
+ if not ( type(result) == unicode):
+ raise AssertionError()
return result
def __iter__(self):
diff --git a/pandas/io/date_converters.py b/pandas/io/date_converters.py
index ce670eec7032f..c7a60d13f1778 100644
--- a/pandas/io/date_converters.py
+++ b/pandas/io/date_converters.py
@@ -46,10 +46,12 @@ def _maybe_cast(arr):
def _check_columns(cols):
- assert(len(cols) > 0)
+ if not ((len(cols) > 0)):
+ raise AssertionError()
N = len(cols[0])
for c in cols[1:]:
- assert(len(c) == N)
+ if not ((len(c) == N)):
+ raise AssertionError()
return N
diff --git a/pandas/io/parsers.py b/pandas/io/parsers.py
index 60798bacbc144..c479bffdb3ab1 100644
--- a/pandas/io/parsers.py
+++ b/pandas/io/parsers.py
@@ -579,7 +579,8 @@ def _clean_options(self, options, engine):
# type conversion-related
if converters is not None:
- assert(isinstance(converters, dict))
+ if not (isinstance(converters, dict)):
+ raise AssertionError()
else:
converters = {}
@@ -1474,7 +1475,8 @@ def _rows_to_cols(self, content):
if self._implicit_index:
col_len += len(self.index_col)
- assert(self.skip_footer >= 0)
+ if not ((self.skip_footer >= 0)):
+ raise AssertionError()
if col_len != zip_len and self.index_col is not False:
row_num = -1
@@ -1768,12 +1770,15 @@ def __init__(self, f, colspecs, filler, thousands=None):
self.filler = filler # Empty characters between fields.
self.thousands = thousands
- assert isinstance(colspecs, (tuple, list))
+ if not ( isinstance(colspecs, (tuple, list))):
+ raise AssertionError()
+
for colspec in colspecs:
- assert isinstance(colspec, (tuple, list))
- assert len(colspec) == 2
- assert isinstance(colspec[0], int)
- assert isinstance(colspec[1], int)
+ if not ( isinstance(colspec, (tuple, list)) and
+ len(colspec) == 2 and
+ isinstance(colspec[0], int) and
+ isinstance(colspec[1], int) ):
+ raise AssertionError()
def next(self):
line = next(self.f)
diff --git a/pandas/sparse/array.py b/pandas/sparse/array.py
index 777a58a0f6c49..035db279064a0 100644
--- a/pandas/sparse/array.py
+++ b/pandas/sparse/array.py
@@ -25,7 +25,8 @@ def _sparse_op_wrap(op, name):
"""
def wrapper(self, other):
if isinstance(other, np.ndarray):
- assert(len(self) == len(other))
+ if not ((len(self) == len(other))):
+ raise AssertionError()
if not isinstance(other, SparseArray):
other = SparseArray(other, fill_value=self.fill_value)
return _sparse_array_op(self, other, op, name)
@@ -129,7 +130,8 @@ def __new__(cls, data, sparse_index=None, kind='integer', fill_value=None,
fill_value=fill_value)
else:
values = data
- assert(len(values) == sparse_index.npoints)
+ if not ((len(values) == sparse_index.npoints)):
+ raise AssertionError()
# Create array, do *not* copy data by default
if copy:
@@ -275,7 +277,8 @@ def take(self, indices, axis=0):
-------
taken : ndarray
"""
- assert(axis == 0)
+ if not ((axis == 0)):
+ raise AssertionError()
indices = np.asarray(indices, dtype=int)
n = len(self)
diff --git a/pandas/sparse/frame.py b/pandas/sparse/frame.py
index 82719817b5744..ed33be33ac02a 100644
--- a/pandas/sparse/frame.py
+++ b/pandas/sparse/frame.py
@@ -709,7 +709,9 @@ def _join_compat(self, other, on=None, how='left', lsuffix='', rsuffix='',
def _join_index(self, other, how, lsuffix, rsuffix):
if isinstance(other, Series):
- assert(other.name is not None)
+ if not (other.name is not None):
+ raise AssertionError()
+
other = SparseDataFrame({other.name: other},
default_fill_value=self.default_fill_value)
diff --git a/pandas/sparse/panel.py b/pandas/sparse/panel.py
index bd5a2785aba2b..0b2842155b299 100644
--- a/pandas/sparse/panel.py
+++ b/pandas/sparse/panel.py
@@ -71,7 +71,8 @@ def __init__(self, frames, items=None, major_axis=None, minor_axis=None,
default_kind=default_kind)
frames = new_frames
- assert(isinstance(frames, dict))
+ if not (isinstance(frames, dict)):
+ raise AssertionError()
self.default_fill_value = fill_value = default_fill_value
self.default_kind = kind = default_kind
diff --git a/pandas/sparse/series.py b/pandas/sparse/series.py
index 8374c4ab9c373..bd01845a295b6 100644
--- a/pandas/sparse/series.py
+++ b/pandas/sparse/series.py
@@ -110,7 +110,8 @@ def __new__(cls, data, index=None, sparse_index=None, kind='block',
if isinstance(data, SparseSeries) and index is None:
index = data.index
elif index is not None:
- assert(len(index) == len(data))
+ if not (len(index) == len(data)):
+ raise AssertionError()
sparse_index = data.sp_index
values = np.asarray(data)
@@ -128,7 +129,8 @@ def __new__(cls, data, index=None, sparse_index=None, kind='block',
fill_value=fill_value)
else:
values = data
- assert(len(values) == sparse_index.npoints)
+ if not (len(values) == sparse_index.npoints):
+ raise AssertionError()
else:
if index is None:
raise Exception('must pass index!')
@@ -446,7 +448,8 @@ def sparse_reindex(self, new_index):
-------
reindexed : SparseSeries
"""
- assert(isinstance(new_index, splib.SparseIndex))
+ if not (isinstance(new_index, splib.SparseIndex)):
+ raise AssertionError()
new_values = self.sp_index.to_int_index().reindex(self.sp_values,
self.fill_value,
diff --git a/pandas/stats/ols.py b/pandas/stats/ols.py
index 9ecf5c6ab715f..e5324926ea1f0 100644
--- a/pandas/stats/ols.py
+++ b/pandas/stats/ols.py
@@ -619,7 +619,8 @@ def _set_window(self, window_type, window, min_periods):
self._window_type = scom._get_window_type(window_type)
if self._is_rolling:
- assert(window is not None)
+ if not ((window is not None)):
+ raise AssertionError()
if min_periods is None:
min_periods = window
else:
@@ -1196,7 +1197,8 @@ def _nobs_raw(self):
return result.astype(int)
def _beta_matrix(self, lag=0):
- assert(lag >= 0)
+ if not ((lag >= 0)):
+ raise AssertionError()
betas = self._beta_raw
@@ -1257,7 +1259,8 @@ def _filter_data(lhs, rhs, weights=None):
Cleaned lhs and rhs
"""
if not isinstance(lhs, Series):
- assert(len(lhs) == len(rhs))
+ if not ((len(lhs) == len(rhs))):
+ raise AssertionError()
lhs = Series(lhs, index=rhs.index)
rhs = _combine_rhs(rhs)
diff --git a/pandas/stats/plm.py b/pandas/stats/plm.py
index 3173e05ae8e9d..467ce6a05e1f0 100644
--- a/pandas/stats/plm.py
+++ b/pandas/stats/plm.py
@@ -101,8 +101,10 @@ def _prepare_data(self):
y_regressor = y
if weights is not None:
- assert(y_regressor.index.equals(weights.index))
- assert(x_regressor.index.equals(weights.index))
+ if not ((y_regressor.index.equals(weights.index))):
+ raise AssertionError()
+ if not ((x_regressor.index.equals(weights.index))):
+ raise AssertionError()
rt_weights = np.sqrt(weights)
y_regressor = y_regressor * rt_weights
@@ -169,7 +171,8 @@ def _convert_x(self, x):
# .iteritems
iteritems = getattr(x, 'iteritems', x.items)
for key, df in iteritems():
- assert(isinstance(df, DataFrame))
+ if not ((isinstance(df, DataFrame))):
+ raise AssertionError()
if _is_numeric(df):
x_converted[key] = df
@@ -637,7 +640,8 @@ def _y_predict_raw(self):
return (betas * x).sum(1)
def _beta_matrix(self, lag=0):
- assert(lag >= 0)
+ if not ((lag >= 0)):
+ raise AssertionError()
index = self._y_trans.index
major_labels = index.labels[0]
diff --git a/pandas/tools/merge.py b/pandas/tools/merge.py
index 1643a6bfb2655..6d224ffcb7b05 100644
--- a/pandas/tools/merge.py
+++ b/pandas/tools/merge.py
@@ -404,14 +404,17 @@ def _validate_specification(self):
elif self.left_on is not None:
n = len(self.left_on)
if self.right_index:
- assert(len(self.left_on) == self.right.index.nlevels)
+ if not ((len(self.left_on) == self.right.index.nlevels)):
+ raise AssertionError()
self.right_on = [None] * n
elif self.right_on is not None:
n = len(self.right_on)
if self.left_index:
- assert(len(self.right_on) == self.left.index.nlevels)
+ if not ((len(self.right_on) == self.left.index.nlevels)):
+ raise AssertionError()
self.left_on = [None] * n
- assert(len(self.right_on) == len(self.left_on))
+ if not ((len(self.right_on) == len(self.left_on))):
+ raise AssertionError()
def _get_join_indexers(left_keys, right_keys, sort=False, how='inner'):
@@ -424,7 +427,8 @@ def _get_join_indexers(left_keys, right_keys, sort=False, how='inner'):
-------
"""
- assert(len(left_keys) == len(right_keys))
+ if not ((len(left_keys) == len(right_keys))):
+ raise AssertionError()
left_labels = []
right_labels = []
@@ -537,8 +541,9 @@ def _left_join_on_index(left_ax, right_ax, join_keys, sort=False):
left_indexer = None
if len(join_keys) > 1:
- assert(isinstance(right_ax, MultiIndex) and
- len(join_keys) == right_ax.nlevels)
+ if not ((isinstance(right_ax, MultiIndex) and
+ len(join_keys) == right_ax.nlevels) ):
+ raise AssertionError()
left_tmp, right_indexer = \
_get_multiindex_indexer(join_keys, right_ax,
@@ -637,7 +642,8 @@ def __init__(self, data_list, join_index, indexers, axis=1, copy=True):
if axis <= 0: # pragma: no cover
raise MergeError('Only axis >= 1 supported for this operation')
- assert(len(data_list) == len(indexers))
+ if not ((len(data_list) == len(indexers))):
+ raise AssertionError()
self.units = []
for data, indexer in zip(data_list, indexers):
@@ -925,7 +931,8 @@ def __init__(self, objs, axis=0, join='outer', join_axes=None,
axis = 1 if axis == 0 else 0
self._is_series = isinstance(sample, Series)
- assert(0 <= axis <= sample.ndim)
+ if not ((0 <= axis <= sample.ndim)):
+ raise AssertionError()
# note: this is the BlockManager axis (since DataFrame is transposed)
self.axis = axis
@@ -1084,7 +1091,8 @@ def _concat_single_item(self, objs, item):
to_concat.append(item_values)
# this method only gets called with axis >= 1
- assert(self.axis >= 1)
+ if not ((self.axis >= 1)):
+ raise AssertionError()
return com._concat_compat(to_concat, axis=self.axis - 1)
def _get_result_dim(self):
@@ -1103,7 +1111,8 @@ def _get_new_axes(self):
continue
new_axes[i] = self._get_comb_axis(i)
else:
- assert(len(self.join_axes) == ndim - 1)
+ if not ((len(self.join_axes) == ndim - 1)):
+ raise AssertionError()
# ufff...
indices = range(ndim)
diff --git a/pandas/tools/pivot.py b/pandas/tools/pivot.py
index ef605abb8e4fb..8d5ba7af0d92b 100644
--- a/pandas/tools/pivot.py
+++ b/pandas/tools/pivot.py
@@ -300,7 +300,8 @@ def _get_names(arrs, names, prefix='row'):
else:
names.append('%s_%d' % (prefix, i))
else:
- assert(len(names) == len(arrs))
+ if not ((len(names) == len(arrs))):
+ raise AssertionError()
if not isinstance(names, list):
names = list(names)
diff --git a/pandas/tseries/index.py b/pandas/tseries/index.py
index dbc71760d50c7..0d29da83dbd8a 100644
--- a/pandas/tseries/index.py
+++ b/pandas/tseries/index.py
@@ -308,7 +308,9 @@ def _generate(cls, start, end, periods, name, offset,
tz = tools._maybe_get_tz(tz)
if tz is not None and inferred_tz is not None:
- assert(inferred_tz == tz)
+ if not inferred_tz == tz:
+ raise AssertionError()
+
elif inferred_tz is not None:
tz = inferred_tz
@@ -451,14 +453,17 @@ def _cached_range(cls, start=None, end=None, periods=None, offset=None,
cachedRange = drc[offset]
if start is None:
- assert(isinstance(end, Timestamp))
+ if not (isinstance(end, Timestamp)):
+ raise AssertionError()
end = offset.rollback(end)
endLoc = cachedRange.get_loc(end) + 1
startLoc = endLoc - periods
elif end is None:
- assert(isinstance(start, Timestamp))
+ if not (isinstance(start, Timestamp)):
+ raise AssertionError()
+
start = offset.rollforward(start)
startLoc = cachedRange.get_loc(start)
diff --git a/pandas/tseries/period.py b/pandas/tseries/period.py
index a405fda1c4fe4..c1af7ba5cccc2 100644
--- a/pandas/tseries/period.py
+++ b/pandas/tseries/period.py
@@ -500,10 +500,13 @@ def _period_index_cmp(opname):
def wrapper(self, other):
if isinstance(other, Period):
func = getattr(self.values, opname)
- assert(other.freq == self.freq)
+ if not (other.freq == self.freq):
+ raise AssertionError()
+
result = func(other.ordinal)
elif isinstance(other, PeriodIndex):
- assert(other.freq == self.freq)
+ if not (other.freq == self.freq):
+ raise AssertionError()
return getattr(self.values, opname)(other.values)
else:
other = Period(other, freq=self.freq)
@@ -1230,7 +1233,8 @@ def _range_from_fields(year=None, month=None, quarter=None, day=None,
base, mult = _gfc(freq)
if mult != 1:
raise ValueError('Only mult == 1 supported')
- assert(base == FreqGroup.FR_QTR)
+ if not (base == FreqGroup.FR_QTR):
+ raise AssertionError()
year, quarter = _make_field_arrays(year, quarter)
for y, q in zip(year, quarter):
diff --git a/pandas/tseries/resample.py b/pandas/tseries/resample.py
index 57f861aff8bfc..4bf0a5bf3182c 100644
--- a/pandas/tseries/resample.py
+++ b/pandas/tseries/resample.py
@@ -119,7 +119,8 @@ def _get_time_grouper(self, obj):
return binner, grouper
def _get_time_bins(self, axis):
- assert(isinstance(axis, DatetimeIndex))
+ if not (isinstance(axis, DatetimeIndex)):
+ raise AssertionError()
if len(axis) == 0:
binner = labels = DatetimeIndex(data=[], freq=self.freq)
@@ -178,7 +179,8 @@ def _adjust_bin_edges(self, binner, ax_values):
return binner, bin_edges
def _get_time_period_bins(self, axis):
- assert(isinstance(axis, DatetimeIndex))
+ if not(isinstance(axis, DatetimeIndex)):
+ raise AssertionError()
if len(axis) == 0:
binner = labels = PeriodIndex(data=[], freq=self.freq)
@@ -208,7 +210,8 @@ def _resample_timestamps(self, obj):
result = grouped.aggregate(self._agg_method)
else:
# upsampling shortcut
- assert(self.axis == 0)
+ if not (self.axis == 0):
+ raise AssertionError()
if self.closed == 'right':
res_index = binner[1:]
diff --git a/pandas/tseries/tools.py b/pandas/tseries/tools.py
index e35b80fff013f..f9608be013b3c 100644
--- a/pandas/tseries/tools.py
+++ b/pandas/tseries/tools.py
@@ -28,7 +28,8 @@ def _infer_tzinfo(start, end):
def _infer(a, b):
tz = a.tzinfo
if b and b.tzinfo:
- assert(tslib.get_timezone(tz) == tslib.get_timezone(b.tzinfo))
+ if not (tslib.get_timezone(tz) == tslib.get_timezone(b.tzinfo)):
+ raise AssertionError()
return tz
tz = None
if start is not None:
diff --git a/pandas/util/decorators.py b/pandas/util/decorators.py
index 15ab39d07ec4d..97b2ee3353fa3 100644
--- a/pandas/util/decorators.py
+++ b/pandas/util/decorators.py
@@ -46,8 +46,9 @@ def some_function(x):
"%s %s wrote the Raven"
"""
def __init__(self, *args, **kwargs):
- assert not (
- args and kwargs), "Only positional or keyword args are allowed"
+ if (args and kwargs):
+ raise AssertionError( "Only positional or keyword args are allowed")
+
self.params = args or kwargs
def __call__(self, func):
| #2057
Though I've never encountered usage of python -O in the wild.
| https://api.github.com/repos/pandas-dev/pandas/pulls/3023 | 2013-03-12T15:25:14Z | 2013-04-23T02:10:48Z | 2013-04-23T02:10:48Z | 2014-06-17T07:37:07Z |
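The motivation for the whole sweep above is that bare `assert` statements are stripped when Python runs with `-O`, so the checks silently vanish. A small illustrative helper in the same pattern as the `_check_columns` rewrite (names are illustrative, not the pandas API):

```python
def check_columns(cols):
    """Explicit-raise version of a sanity check: unlike a bare
    ``assert``, these checks still run under ``python -O``."""
    if not len(cols) > 0:
        raise AssertionError("need at least one column")
    n = len(cols[0])
    for c in cols[1:]:
        if len(c) != n:
            raise AssertionError("all columns must have the same length")
    return n
```

The transformation is mechanical: `assert cond, msg` becomes `if not cond: raise AssertionError(msg)`, preserving the exception type while making the check unconditional.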
BUG: Bug in DataFrame update where non-specified values could cause dtype changes | diff --git a/RELEASE.rst b/RELEASE.rst
index 970b89c99a7b1..84934bf3d51e1 100644
--- a/RELEASE.rst
+++ b/RELEASE.rst
@@ -143,6 +143,7 @@ pandas 0.11.0
- Bug in ``icol`` with negative indicies was incorrect producing incorrect return values (see GH2922_)
- Bug in DataFrame column insertion when the column creation fails, existing frame is left in
an irrecoverable state (GH3010_)
+ - Bug in DataFrame update where non-specified values could cause dtype changes (GH3016_)
.. _GH622: https://github.com/pydata/pandas/issues/622
.. _GH797: https://github.com/pydata/pandas/issues/797
diff --git a/pandas/core/frame.py b/pandas/core/frame.py
index 7ae2cc6d5b6ed..13a2bb5a18db7 100644
--- a/pandas/core/frame.py
+++ b/pandas/core/frame.py
@@ -3843,8 +3843,13 @@ def update(self, other, join='left', overwrite=True, filter_func=None,
if overwrite:
mask = isnull(that)
+
+ # don't overwrite columns unnecessarily
+ if mask.all():
+ continue
else:
mask = notnull(this)
+
self[col] = np.where(mask, this, that)
#----------------------------------------------------------------------
diff --git a/pandas/tests/test_frame.py b/pandas/tests/test_frame.py
index 85f66148eba8a..85f50641965cc 100644
--- a/pandas/tests/test_frame.py
+++ b/pandas/tests/test_frame.py
@@ -7265,6 +7265,19 @@ def test_update(self):
[1.5, nan, 7.]])
assert_frame_equal(df, expected)
+ def test_update_dtypes(self):
+
+ # gh 3016
+ df = DataFrame([[1.,2.,False, True],[4.,5.,True,False]],
+ columns=['A','B','bool1','bool2'])
+
+ other = DataFrame([[45,45]],index=[0],columns=['A','B'])
+ df.update(other)
+
+ expected = DataFrame([[45.,45.,False, True],[4.,5.,True,False]],
+ columns=['A','B','bool1','bool2'])
+ assert_frame_equal(df, expected)
+
def test_update_nooverwrite(self):
df = DataFrame([[1.5, nan, 3.],
[1.5, nan, 3.],
| fixes #3016
| https://api.github.com/repos/pandas-dev/pandas/pulls/3021 | 2013-03-12T14:28:16Z | 2013-03-12T22:38:01Z | 2013-03-12T22:38:01Z | 2014-06-12T23:49:54Z |
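The fix above is the early `continue` when the incoming column contributes no values at all, so `np.where` never touches the existing column and its dtype survives. A simplified sketch with `None` standing in for missing values (toy lists, not the DataFrame machinery):

```python
def update_values(this, that):
    """Sketch of the GH3016 fix: if the incoming column has no
    values at all, return the original column untouched so its
    dtype cannot be coerced; otherwise overwrite element-wise."""
    mask = [v is None for v in that]
    if all(mask):
        return this  # nothing to overwrite; keep values and dtype
    return [old if m else new for old, new, m in zip(this, that, mask)]
```

In the real bug, running the all-missing column through `np.where` upcast a `bool` column to `object` even though no value changed.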
BUG: Bug in DataFrame column insertion when the column creation fails (GH3010) | diff --git a/RELEASE.rst b/RELEASE.rst
index 9deafd56ccc10..970b89c99a7b1 100644
--- a/RELEASE.rst
+++ b/RELEASE.rst
@@ -141,6 +141,8 @@ pandas 0.11.0
- Fixed printing of ``NaT` in an index
- Bug in idxmin/idxmax of ``datetime64[ns]`` Series with ``NaT`` (GH2982__)
- Bug in ``icol`` with negative indicies was incorrect producing incorrect return values (see GH2922_)
+ - Bug in DataFrame column insertion when the column creation fails, existing frame is left in
+ an irrecoverable state (GH3010_)
.. _GH622: https://github.com/pydata/pandas/issues/622
.. _GH797: https://github.com/pydata/pandas/issues/797
@@ -166,6 +168,7 @@ pandas 0.11.0
.. _GH2982: https://github.com/pydata/pandas/issues/2982
.. _GH2989: https://github.com/pydata/pandas/issues/2989
.. _GH3002: https://github.com/pydata/pandas/issues/3002
+.. _GH3010: https://github.com/pydata/pandas/issues/3010
.. _GH3012: https://github.com/pydata/pandas/issues/3012
diff --git a/doc/source/io.rst b/doc/source/io.rst
index 01ed06cd6a60f..850c6a2841ef5 100644
--- a/doc/source/io.rst
+++ b/doc/source/io.rst
@@ -1138,11 +1138,14 @@ defaults to `nan`.
.. ipython:: python
- df_mixed = df.copy()
- df_mixed['string'] = 'string'
- df_mixed['int'] = 1
- df_mixed['bool'] = True
- df_mixed['datetime64'] = Timestamp('20010102')
+ df_mixed = DataFrame({ 'A' : randn(8),
+ 'B' : randn(8),
+ 'C' : np.array(randn(8),dtype='float32'),
+ 'string' :'string',
+ 'int' : 1,
+ 'bool' : True,
+ 'datetime64' : Timestamp('20010102')},
+ index=range(8))
df_mixed.ix[3:5,['A', 'B', 'string', 'datetime64']] = np.nan
store.append('df_mixed', df_mixed, min_itemsize = {'values': 50})
@@ -1445,8 +1448,7 @@ may not be installed (by Python) by default.
Compression for all objects within the file
- - ``store_compressed = HDFStore('store_compressed.h5', complevel=9,
- complib='blosc')``
+ - ``store_compressed = HDFStore('store_compressed.h5', complevel=9, complib='blosc')``
Or on-the-fly compression (this only applies to tables). You can turn
off file compression for a specific table by passing ``complevel=0``
diff --git a/pandas/core/internals.py b/pandas/core/internals.py
index 75605fae4e39f..5c0f0935346cc 100644
--- a/pandas/core/internals.py
+++ b/pandas/core/internals.py
@@ -1334,11 +1334,22 @@ def insert(self, loc, item, value):
if item in self.items:
raise Exception('cannot insert %s, already exists' % item)
- new_items = self.items.insert(loc, item)
- self.set_items_norename(new_items)
+ try:
+ new_items = self.items.insert(loc, item)
+ self.set_items_norename(new_items)
+
+ # new block
+ self._add_new_block(item, value, loc=loc)
- # new block
- self._add_new_block(item, value, loc=loc)
+ except:
+
+ # so our insertion operation failed, so back out of the new items
+ # GH 3010
+ new_items = self.items.delete(loc)
+ self.set_items_norename(new_items)
+
+ # re-raise
+ raise
if len(self.blocks) > 100:
self._consolidate_inplace()
diff --git a/pandas/tests/test_frame.py b/pandas/tests/test_frame.py
index 0729f0e03782e..85f66148eba8a 100644
--- a/pandas/tests/test_frame.py
+++ b/pandas/tests/test_frame.py
@@ -1968,6 +1968,19 @@ def test_constructor_cast_failure(self):
foo = DataFrame({'a': ['a', 'b', 'c']}, dtype=np.float64)
self.assert_(foo['a'].dtype == object)
+ # GH 3010, constructing with odd arrays
+ df = DataFrame(np.ones((4,2)))
+
+ # this is ok
+ df['foo'] = np.ones((4,2)).tolist()
+
+ # this is not ok
+ self.assertRaises(AssertionError, df.__setitem__, tuple(['test']), np.ones((4,2)))
+
+ # this is ok
+ df['foo2'] = np.ones((4,2)).tolist()
+
+
def test_constructor_dtype_nocast_view(self):
df = DataFrame([[1, 2]])
should_be_view = DataFrame(df, dtype=df[0].dtype)
 | Bug in DataFrame column insertion: when the column creation fails, the
existing frame is left in an irrecoverable state; fixes #3010
| https://api.github.com/repos/pandas-dev/pandas/pulls/3018 | 2013-03-12T12:47:40Z | 2013-03-12T16:14:07Z | 2013-03-12T16:14:07Z | 2014-06-14T07:24:11Z |
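The behavior this PR guards against can be sketched with a small script. This is an illustrative reconstruction using the modern pandas API (assumed available), not the 0.11-era code from the diff: a failed single-column assignment must not leave the frame half-modified.

```python
import numpy as np
import pandas as pd

df = pd.DataFrame(np.ones((4, 2)))

# a list of row-values is accepted: each cell holds a Python list
df["foo"] = np.ones((4, 2)).tolist()

# a 2-D ndarray is rejected for a single column; the point of the fix
# is that the frame must stay consistent after the failure
try:
    df["bar"] = np.ones((4, 2))
    failed = False
except Exception:
    failed = True

# the failed insertion must not leave a half-inserted column behind
still_usable = "bar" not in df.columns
df["foo2"] = np.ones((4, 2)).tolist()  # further inserts keep working
```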
segmentation fault on groupby with categorical grouper of mismatched len | diff --git a/doc/source/v0.11.0.txt b/doc/source/v0.11.0.txt
index 60ec7de5c4d8e..ce2cfed94a5cf 100644
--- a/doc/source/v0.11.0.txt
+++ b/doc/source/v0.11.0.txt
@@ -316,6 +316,9 @@ Bug Fixes
reporting (GH2807_)
- Fix pretty-printing of infinite data structures (closes GH2978_)
- str.contains ignored na argument (GH2806_)
+ - Substitute warning for segfault when grouping with categorical grouper
+ of mismatched length (GH3011_)
+
See the `full release notes
<https://github.com/pydata/pandas/blob/master/RELEASE.rst>`__ or issue tracker
@@ -331,3 +334,4 @@ on GitHub for a complete list.
.. _GH2806: https://github.com/pydata/pandas/issues/2806
.. _GH2807: https://github.com/pydata/pandas/issues/2807
.. _GH2918: https://github.com/pydata/pandas/issues/2918
+.. _GH3011: https://github.com/pydata/pandas/issues/3011
diff --git a/pandas/core/groupby.py b/pandas/core/groupby.py
index 3f12f773db96a..9e5e9f6404aa4 100644
--- a/pandas/core/groupby.py
+++ b/pandas/core/groupby.py
@@ -1310,6 +1310,11 @@ def _get_grouper(obj, key=None, axis=0, level=None, sort=True):
exclusions.append(gpr)
name = gpr
gpr = obj[gpr]
+
+ if (isinstance(gpr,Categorical) and len(gpr) != len(obj)):
+ errmsg = "Categorical grouper must have len(grouper) == len(data)"
+ raise AssertionError(errmsg)
+
ping = Grouping(group_axis, gpr, name=name, level=level, sort=sort)
groupings.append(ping)
diff --git a/pandas/tests/test_groupby.py b/pandas/tests/test_groupby.py
index 4b1770dd4f5df..d276e2e905623 100644
--- a/pandas/tests/test_groupby.py
+++ b/pandas/tests/test_groupby.py
@@ -2237,6 +2237,15 @@ def test_groupby_first_datetime64(self):
got_dt = result.dtype
self.assert_(issubclass(got_dt.type, np.datetime64))
+ def test_groupby_categorical_unequal_len(self):
+ import pandas as pd
+ #GH3011
+ series = Series([np.nan, np.nan, 1, 1, 2, 2, 3, 3, 4, 4])
+ bins = pd.cut(series.dropna(), 4)
+
+ # len(bins) != len(series) here
+ self.assertRaises(AssertionError,lambda : series.groupby(bins).mean())
+
def assert_fp_equal(a, b):
assert((np.abs(a - b) < 1e-12).all())
| closes #3011
| https://api.github.com/repos/pandas-dev/pandas/pulls/3017 | 2013-03-12T04:51:18Z | 2013-03-16T23:52:23Z | 2013-03-16T23:52:23Z | 2014-07-19T13:06:32Z |
Bug: Fixed Issue 2993, Incorrect Timestamp Construction by datetime.date and tz | diff --git a/pandas/tseries/tests/test_timezones.py b/pandas/tseries/tests/test_timezones.py
index 4b637e0ffe9a3..5dac20fb43749 100644
--- a/pandas/tseries/tests/test_timezones.py
+++ b/pandas/tseries/tests/test_timezones.py
@@ -1,5 +1,5 @@
# pylint: disable-msg=E1101,W0612
-from datetime import datetime, time, timedelta, tzinfo
+from datetime import datetime, time, timedelta, tzinfo, date
import sys
import os
import unittest
@@ -79,6 +79,7 @@ def test_utc_to_local_no_modify(self):
self.assert_(rng_eastern.tz == pytz.timezone('US/Eastern'))
+
def test_localize_utc_conversion(self):
# Localizing to time zone should:
# 1) check for DST ambiguities
@@ -102,6 +103,15 @@ def test_timestamp_tz_localize(self):
self.assertEquals(result.hour, expected.hour)
self.assertEquals(result, expected)
+ def test_timestamp_constructed_by_date_and_tz(self):
+ """ Fix Issue 2993, Timestamp cannot be constructed by datetime.date and tz correctly """
+
+ result = Timestamp(date(2012, 3, 11), tz='US/Eastern')
+
+ expected = Timestamp('3/11/2012', tz='US/Eastern')
+ self.assertEquals(result.hour, expected.hour)
+ self.assertEquals(result, expected)
+
def test_timestamp_to_datetime_tzoffset(self):
# tzoffset
from dateutil.tz import tzoffset
diff --git a/pandas/tslib.pyx b/pandas/tslib.pyx
index 7a5bb0f569349..27fa8939696c4 100644
--- a/pandas/tslib.pyx
+++ b/pandas/tslib.pyx
@@ -22,6 +22,7 @@ from khash cimport *
cimport cython
from datetime import timedelta, datetime
+from datetime import time as datetime_time
from dateutil.parser import parse as parse_date
cdef extern from "Python.h":
@@ -668,7 +669,9 @@ cdef convert_to_tsobject(object ts, object tz):
_check_dts_bounds(obj.value, &obj.dts)
return obj
elif PyDate_Check(ts):
- obj.value = _date_to_datetime64(ts, &obj.dts)
+ # Keep the converter same as PyDateTime's
+ ts = datetime.combine(ts, datetime_time())
+ return convert_to_tsobject(ts, tz)
else:
raise ValueError("Could not construct Timestamp from argument %s" %
type(ts))
 | `Timestamp` could not be constructed correctly from a `datetime.date` instance and a `tz`.
The issue is fixed in this patch, and a test case has been added in `pandas/tseries/tests/test_timezones.py`.
| https://api.github.com/repos/pandas-dev/pandas/pulls/3014 | 2013-03-11T18:25:42Z | 2013-03-21T07:38:36Z | 2013-03-21T07:38:36Z | 2014-07-01T11:00:16Z |
BUG: pytables not writing rows where all-nan in a part of a block | diff --git a/RELEASE.rst b/RELEASE.rst
index 2b911b0ed8170..9deafd56ccc10 100644
--- a/RELEASE.rst
+++ b/RELEASE.rst
@@ -118,6 +118,7 @@ pandas 0.11.0
underscore)
- fixes for query parsing to correctly interpret boolean and != (GH2849_, GH2973_)
- fixes for pathological case on SparseSeries with 0-len array and compression (GH2931_)
+ - fixes bug with writing rows if part of a block was all-nan (GH3012_)
- Bug showing up in applymap where some object type columns are converted (GH2909_)
had an incorrect default in convert_objects
@@ -165,6 +166,7 @@ pandas 0.11.0
.. _GH2982: https://github.com/pydata/pandas/issues/2982
.. _GH2989: https://github.com/pydata/pandas/issues/2989
.. _GH3002: https://github.com/pydata/pandas/issues/3002
+.. _GH3012: https://github.com/pydata/pandas/issues/3012
pandas 0.10.1
diff --git a/doc/source/io.rst b/doc/source/io.rst
index 914506fb0d3cd..01ed06cd6a60f 100644
--- a/doc/source/io.rst
+++ b/doc/source/io.rst
@@ -1461,8 +1461,7 @@ beginning. You can use the supplied ``PyTables`` utility
``ptrepack``. In addition, ``ptrepack`` can change compression levels
after the fact.
- - ``ptrepack --chunkshape=auto --propindexes --complevel=9
- --complib=blosc in.h5 out.h5``
+ - ``ptrepack --chunkshape=auto --propindexes --complevel=9 --complib=blosc in.h5 out.h5``
Furthermore ``ptrepack in.h5 out.h5`` will *repack* the file to allow
you to reuse previously deleted space. Aalternatively, one can simply
@@ -1473,6 +1472,10 @@ Notes & Caveats
- Once a ``table`` is created its items (Panel) / columns (DataFrame)
are fixed; only exactly the same columns can be appended
+ - If a row has ``np.nan`` for **EVERY COLUMN** (having a ``nan``
+ in a string, or a ``NaT`` in a datetime-like column counts as having
+ a value), then those rows **WILL BE DROPPED IMPLICITLY**. This limitation
+ *may* be addressed in the future.
- You can not append/select/delete to a non-table (table creation is
determined on the first append, or by passing ``table=True`` in a
put operation)
@@ -1498,13 +1501,13 @@ Notes & Caveats
.. ipython:: python
- store.append('wp_big_strings', wp, min_itemsize = { 'minor_axis' : 30 })
- wp = wp.rename_axis(lambda x: x + '_big_strings', axis=2)
- store.append('wp_big_strings', wp)
- store.select('wp_big_strings')
+ store.append('wp_big_strings', wp, min_itemsize = { 'minor_axis' : 30 })
+ wp = wp.rename_axis(lambda x: x + '_big_strings', axis=2)
+ store.append('wp_big_strings', wp)
+ store.select('wp_big_strings')
- # we have provided a minimum minor_axis indexable size
- store.root.wp_big_strings.table
+ # we have provided a minimum minor_axis indexable size
+ store.root.wp_big_strings.table
DataTypes
~~~~~~~~~
diff --git a/pandas/io/pytables.py b/pandas/io/pytables.py
index c635c0b231c48..6b3b36f231c1a 100644
--- a/pandas/io/pytables.py
+++ b/pandas/io/pytables.py
@@ -2577,7 +2577,7 @@ def write_data(self, chunksize):
# consolidate masks
mask = masks[0]
for m in masks[1:]:
- m = mask & m
+ mask = mask & m
# the arguments
indexes = [a.cvalues for a in self.index_axes]
diff --git a/pandas/io/tests/test_pytables.py b/pandas/io/tests/test_pytables.py
index 4efe87fceebc0..c3a8990962ca1 100644
--- a/pandas/io/tests/test_pytables.py
+++ b/pandas/io/tests/test_pytables.py
@@ -417,6 +417,85 @@ def test_append(self):
store.append('df', df)
tm.assert_frame_equal(store['df'], df)
+ def test_append_some_nans(self):
+
+ with ensure_clean(self.path) as store:
+ df = DataFrame({'A' : Series(np.random.randn(20)).astype('int32'),
+ 'A1' : np.random.randn(20),
+ 'A2' : np.random.randn(20),
+ 'B' : 'foo', 'C' : 'bar', 'D' : Timestamp("20010101"), 'E' : datetime.datetime(2001,1,2,0,0) },
+ index=np.arange(20))
+ # some nans
+ store.remove('df1')
+ df.ix[0:15,['A1','B','D','E']] = np.nan
+ store.append('df1', df[:10])
+ store.append('df1', df[10:])
+ tm.assert_frame_equal(store['df1'], df)
+
+ # first column
+ df1 = df.copy()
+ df1.ix[:,'A1'] = np.nan
+ store.remove('df1')
+ store.append('df1', df1[:10])
+ store.append('df1', df1[10:])
+ tm.assert_frame_equal(store['df1'], df1)
+
+ # 2nd column
+ df2 = df.copy()
+ df2.ix[:,'A2'] = np.nan
+ store.remove('df2')
+ store.append('df2', df2[:10])
+ store.append('df2', df2[10:])
+ tm.assert_frame_equal(store['df2'], df2)
+
+ # datetimes
+ df3 = df.copy()
+ df3.ix[:,'E'] = np.nan
+ store.remove('df3')
+ store.append('df3', df3[:10])
+ store.append('df3', df3[10:])
+ tm.assert_frame_equal(store['df3'], df3)
+
+ ##### THIS IS A BUG, should not drop these all-nan rows
+ ##### BUT need to store the index which we don't want to do....
+ # nan some entire rows
+ df = DataFrame({'A1' : np.random.randn(20),
+ 'A2' : np.random.randn(20)},
+ index=np.arange(20))
+
+ store.remove('df4')
+ df.ix[0:15,:] = np.nan
+ store.append('df4', df[:10])
+ store.append('df4', df[10:])
+ tm.assert_frame_equal(store['df4'], df[-4:])
+ self.assert_(store.get_storer('df4').nrows == 4)
+
+ # nan some entire rows (string are still written!)
+ df = DataFrame({'A1' : np.random.randn(20),
+ 'A2' : np.random.randn(20),
+ 'B' : 'foo', 'C' : 'bar'},
+ index=np.arange(20))
+
+ store.remove('df5')
+ df.ix[0:15,:] = np.nan
+ store.append('df5', df[:10])
+ store.append('df5', df[10:])
+ tm.assert_frame_equal(store['df5'], df)
+ self.assert_(store.get_storer('df5').nrows == 20)
+
+ # nan some entire rows (but since we have dates they are still written!)
+ df = DataFrame({'A1' : np.random.randn(20),
+ 'A2' : np.random.randn(20),
+ 'B' : 'foo', 'C' : 'bar', 'D' : Timestamp("20010101"), 'E' : datetime.datetime(2001,1,2,0,0) },
+ index=np.arange(20))
+
+ store.remove('df6')
+ df.ix[0:15,:] = np.nan
+ store.append('df6', df[:10])
+ store.append('df6', df[10:])
+ tm.assert_frame_equal(store['df6'], df)
+ self.assert_(store.get_storer('df6').nrows == 20)
+
def test_append_frame_column_oriented(self):
with ensure_clean(self.path) as store:
 | fixes #3012
if the first column of a block was all-nan, that row was not being written
| https://api.github.com/repos/pandas-dev/pandas/pulls/3013 | 2013-03-11T15:26:47Z | 2013-03-11T18:20:16Z | 2013-03-11T18:20:16Z | 2014-06-25T01:05:21Z |
df.from_records should accept values deriving from ABC collections.Mapping #3000 | diff --git a/doc/source/v0.11.0.txt b/doc/source/v0.11.0.txt
index 60ec7de5c4d8e..25ccbf94ff5f6 100644
--- a/doc/source/v0.11.0.txt
+++ b/doc/source/v0.11.0.txt
@@ -280,6 +280,10 @@ Enhancements
- value_counts() now accepts a "normalize" argument, for normalized
histograms. (GH2710_).
+ - DataFrame.from_records now accepts not only dicts but any instance of
+ the collections.Mapping ABC.
+
+
Bug Fixes
~~~~~~~~~
diff --git a/pandas/core/frame.py b/pandas/core/frame.py
index ee586a2101f62..5fffadfdaec35 100644
--- a/pandas/core/frame.py
+++ b/pandas/core/frame.py
@@ -17,6 +17,7 @@
import csv
import operator
import sys
+import collections
from numpy import nan as NA
import numpy as np
@@ -413,7 +414,7 @@ def __init__(self, data=None, index=None, columns=None, dtype=None,
if index is None and isinstance(data[0], Series):
index = _get_names_from_index(data)
- if isinstance(data[0], (list, tuple, dict, Series)):
+ if isinstance(data[0], (list, tuple, collections.Mapping, Series)):
arrays, columns = _to_arrays(data, columns, dtype=dtype)
columns = _ensure_index(columns)
@@ -5527,7 +5528,7 @@ def _to_arrays(data, columns, coerce_float=False, dtype=None):
if isinstance(data[0], (list, tuple)):
return _list_to_arrays(data, columns, coerce_float=coerce_float,
dtype=dtype)
- elif isinstance(data[0], dict):
+ elif isinstance(data[0], collections.Mapping):
return _list_of_dict_to_arrays(data, columns,
coerce_float=coerce_float,
dtype=dtype)
diff --git a/pandas/tests/test_frame.py b/pandas/tests/test_frame.py
index 1c30dfd1abced..8cd462007bd86 100644
--- a/pandas/tests/test_frame.py
+++ b/pandas/tests/test_frame.py
@@ -3246,6 +3246,22 @@ def test_to_records_dt64(self):
rs = df.to_records(convert_datetime64=False)
self.assert_(rs['index'][0] == df.index.values[0])
+ def test_to_records_with_Mapping_type(self):
+ import email
+ from email.parser import Parser
+ import collections
+
+ collections.Mapping.register(email.message.Message)
+
+ headers = Parser().parsestr('From: <user@example.com>\n'
+ 'To: <someone_else@example.com>\n'
+ 'Subject: Test message\n'
+ '\n'
+ 'Body would go here\n')
+
+ frame = DataFrame.from_records([headers])
+ all( x in frame for x in ['Type','Subject','From'])
+
def test_from_records_to_records(self):
# from numpy documentation
arr = np.zeros((2,), dtype=('i4,f4,a10'))
| #3000
| https://api.github.com/repos/pandas-dev/pandas/pulls/3005 | 2013-03-10T19:25:31Z | 2013-03-15T20:43:47Z | 2013-03-15T20:43:47Z | 2013-03-15T20:43:54Z |
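The feature from the previous PR can be exercised without the ``email`` trick used in the test: any ``Mapping`` subclass works. A minimal sketch against the modern pandas API, where the check lives in ``collections.abc`` (the class ``MyMapping`` below is a hypothetical helper, not part of pandas):

```python
import collections.abc

import pandas as pd


class MyMapping(collections.abc.Mapping):
    """A dict-like object that is not a dict."""

    def __init__(self, data):
        self._data = data

    def __getitem__(self, key):
        return self._data[key]

    def __iter__(self):
        return iter(self._data)

    def __len__(self):
        return len(self._data)


records = [MyMapping({"a": 1, "b": 2}), MyMapping({"a": 3, "b": 4})]
frame = pd.DataFrame.from_records(records)  # accepts Mapping, not just dict
```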
BUG: fixed value_counts with datetime64[ns], GH 3002 | diff --git a/RELEASE.rst b/RELEASE.rst
index d7bd7c22ce326..2b911b0ed8170 100644
--- a/RELEASE.rst
+++ b/RELEASE.rst
@@ -136,6 +136,8 @@ pandas 0.11.0
- Bug on in-place putmasking on an ``integer`` series that needs to be converted to ``float`` (GH2746_)
- Bug in argsort of ``datetime64[ns]`` Series with ``NaT`` (GH2967_)
+ - Bug in value_counts of ``datetime64[ns]`` Series (GH3002_)
+ - Fixed printing of ``NaT`` in an index
- Bug in idxmin/idxmax of ``datetime64[ns]`` Series with ``NaT`` (GH2982__)
- Bug in ``icol`` with negative indicies was incorrect producing incorrect return values (see GH2922_)
@@ -162,6 +164,7 @@ pandas 0.11.0
.. _GH2967: https://github.com/pydata/pandas/issues/2967
.. _GH2982: https://github.com/pydata/pandas/issues/2982
.. _GH2989: https://github.com/pydata/pandas/issues/2989
+.. _GH3002: https://github.com/pydata/pandas/issues/3002
pandas 0.10.1
diff --git a/pandas/core/algorithms.py b/pandas/core/algorithms.py
index 256a51b909a19..413923262c6b0 100644
--- a/pandas/core/algorithms.py
+++ b/pandas/core/algorithms.py
@@ -170,6 +170,14 @@ def value_counts(values, sort=True, ascending=False):
if com.is_integer_dtype(values.dtype):
values = com._ensure_int64(values)
keys, counts = htable.value_count_int64(values)
+ elif issubclass(values.dtype.type, (np.datetime64,np.timedelta64)):
+
+ dtype = values.dtype
+ values = values.view(np.int64)
+ keys, counts = htable.value_count_int64(values)
+
+ # convert the keys back to the dtype we came in
+ keys = Series(keys,dtype=dtype)
else:
mask = com.isnull(values)
values = com._ensure_object(values)
diff --git a/pandas/core/index.py b/pandas/core/index.py
index 3a5f1d8147a99..42fe1c4ccb928 100644
--- a/pandas/core/index.py
+++ b/pandas/core/index.py
@@ -13,6 +13,7 @@
from pandas.lib import Timestamp
from pandas.util.decorators import cache_readonly
+from pandas.core.common import isnull
import pandas.core.common as com
from pandas.util import py3compat
from pandas.core.config import get_option
@@ -94,6 +95,8 @@ def __new__(cls, data, dtype=None, copy=False, name=None):
return Index(result.to_pydatetime(), dtype=_o_dtype)
else:
return result
+ elif issubclass(data.dtype.type, np.timedelta64):
+ return Int64Index(data, copy=copy, name=name)
if dtype is not None:
try:
@@ -435,9 +438,12 @@ def format(self, name=False, formatter=None):
zero_time = time(0, 0)
result = []
for dt in self:
- if dt.time() != zero_time or dt.tzinfo is not None:
- return header + [u'%s' % x for x in self]
- result.append(u'%d-%.2d-%.2d' % (dt.year, dt.month, dt.day))
+ if isnull(dt):
+ result.append(u'NaT')
+ else:
+ if dt.time() != zero_time or dt.tzinfo is not None:
+ return header + [u'%s' % x for x in self]
+ result.append(u'%d-%.2d-%.2d' % (dt.year, dt.month, dt.day))
return header + result
values = self.values
diff --git a/pandas/tests/test_format.py b/pandas/tests/test_format.py
index c2a399b493d13..1b436bfd443fc 100644
--- a/pandas/tests/test_format.py
+++ b/pandas/tests/test_format.py
@@ -1226,6 +1226,18 @@ def test_float_trim_zeros(self):
else:
self.assert_('+10' in line)
+ def test_datetimeindex(self):
+
+ from pandas import date_range, NaT, Timestamp
+ index = date_range('20130102',periods=6)
+ s = Series(1,index=index)
+ result = s.to_string()
+ self.assertTrue('2013-01-02' in result)
+
+ s = Series(2, index=[ Timestamp('20130111'), NaT ]).append(s)
+ result = s.to_string()
+ self.assertTrue('NaT' in result)
+
def test_timedelta64(self):
from pandas import date_range
diff --git a/pandas/tests/test_series.py b/pandas/tests/test_series.py
index 9ea5e59447475..cef309fd59503 100644
--- a/pandas/tests/test_series.py
+++ b/pandas/tests/test_series.py
@@ -2396,6 +2396,27 @@ def test_value_counts_nunique(self):
expected = Series([], dtype=np.int64)
assert_series_equal(hist, expected)
+ # GH 3002, datetime64[ns]
+ import StringIO
+ import pandas as pd
+ f = StringIO.StringIO("xxyyzz20100101PIE\nxxyyzz20100101GUM\nxxyyww20090101EGG\nfoofoo20080909PIE")
+ df = pd.read_fwf(f, widths=[6,8,3], names=["person_id", "dt", "food"], parse_dates=["dt"])
+ s = df.dt.copy()
+ result = s.value_counts()
+ self.assert_(result.index.dtype == 'datetime64[ns]')
+
+ # with NaT
+ s = s.append(Series({ 4 : pd.NaT }))
+ result = s.value_counts()
+ self.assert_(result.index.dtype == 'datetime64[ns]')
+
+ # timedelta64[ns]
+ from datetime import timedelta
+ td = df.dt-df.dt+timedelta(1)
+ result = td.value_counts()
+ #self.assert_(result.index.dtype == 'timedelta64[ns]')
+ self.assert_(result.index.dtype == 'int64')
+
def test_unique(self):
# 714 also, dtype=float
 | Fixes issue #3002
- Fixed for datetime64[ns]
- Fixed for timedelta64[ns] (though the returned index
is actually int64, which is wrong; a TimedeltaIndex class should be added someday)
- Fixed NaT display in a DatetimeIndex (was displaying incorrectly)
```
0 2010-01-01 00:00:00
1 2010-01-01 00:00:00
2 2009-01-01 00:00:00
3 2008-09-09 00:00:00
Name: dt, dtype: datetime64[ns]
2010-01-01 2
2008-09-09 1
2009-01-01 1
dtype: int64
<class 'pandas.tseries.index.DatetimeIndex'>
[2010-01-01 00:00:00, ..., 2009-01-01 00:00:00]
Length: 3, Freq: None, Timezone: None
```
WIth NaTs
```
0 2010-01-01 00:00:00
1 2010-01-01 00:00:00
2 2009-01-01 00:00:00
3 2008-09-09 00:00:00
4 NaT
dtype: datetime64[ns]
2010-01-01 2
2009-01-01 1
NaT 1
2008-09-09 1
dtype: int64
<class 'pandas.tseries.index.DatetimeIndex'>
[2010-01-01 00:00:00, ..., 2008-09-09 00:00:00]
Length: 4, Freq: None, Timezone: None
```
| https://api.github.com/repos/pandas-dev/pandas/pulls/3003 | 2013-03-10T18:30:29Z | 2013-03-10T23:48:18Z | 2013-03-10T23:48:18Z | 2014-06-16T11:59:45Z |
DOC: added recommended dependencies section in install.rst | diff --git a/README.rst b/README.rst
index d6d1fa3ad658b..59bf2667181f9 100644
--- a/README.rst
+++ b/README.rst
@@ -74,7 +74,7 @@ Highly Recommended Dependencies
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
* `numexpr <http://code.google.com/p/numexpr/>`__: to accelerate some expression evaluation operations
also required by `PyTables`
- * `bottleneck <http://berkeleyanalytics.com/>`__: to accelerate certain numerical operations
+ * `bottleneck <http://berkeleyanalytics.com/bottleneck>`__: to accelerate certain numerical operations
Optional dependencies
~~~~~~~~~~~~~~~~~~~~~
diff --git a/RELEASE.rst b/RELEASE.rst
index 1c6a9d4103ab8..d7bd7c22ce326 100644
--- a/RELEASE.rst
+++ b/RELEASE.rst
@@ -42,6 +42,7 @@ pandas 0.11.0
- Added ``.at`` attribute, to support fast scalar access via labels (replaces ``get_value/set_value``)
- Moved functionaility from ``irow,icol,iget_value/iset_value`` to ``.iloc`` indexer
(via ``_ixs`` methods in each object)
+ - Added support for expression evaluation using the ``numexpr`` library
**Improvements to existing features**
diff --git a/doc/source/basics.rst b/doc/source/basics.rst
index 2b89d5e0bafe9..09e989c63f03c 100644
--- a/doc/source/basics.rst
+++ b/doc/source/basics.rst
@@ -86,6 +86,33 @@ unlike the axis labels, cannot be assigned to.
strings are involved, the result will be of object dtype. If there are only
floats and integers, the resulting array will be of float dtype.
+.. _basics.accelerate:
+
+Accelerated operations
+----------------------
+
+Pandas has support for accelerating certain types of binary numerical and boolean operations using
+the ``numexpr`` library (starting in 0.11.0) and the ``bottleneck`` libraries.
+
+These libraries are especially useful when dealing with large data sets, and provide large
+speedups. ``numexpr`` uses smart chunking, caching, and multiple cores. ``bottleneck`` is
+a set of specialized cython routines that are especially fast when dealing with arrays that have
+``nans``.
+
+Here is a sample (using 100 column x 100,000 row ``DataFrames``):
+
+.. csv-table::
+   :header: "Operation", "0.11.0 (ms)", "Prior Version (ms)", "Ratio to Prior"
+ :widths: 30, 30, 30, 30
+ :delim: ;
+
+ ``df1 > df2``; 13.32; 125.35; 0.1063
+ ``df1 * df2``; 21.71; 36.63; 0.5928
+ ``df1 + df2``; 22.04; 36.50; 0.6039
+
+You are highly encouraged to install both libraries. See the section
+:ref:`Recommended Dependencies <install.recommended_dependencies>` for more installation info.
+
.. _basics.binop:
Flexible binary operations
diff --git a/doc/source/faq.rst b/doc/source/faq.rst
index 16c7972bd3fb3..7bcfc76bd1fa1 100644
--- a/doc/source/faq.rst
+++ b/doc/source/faq.rst
@@ -23,8 +23,8 @@ Frequently Asked Questions (FAQ)
.. _ref-monkey-patching:
-
-----------------------------------------------------
+Adding Features to your Pandas Installation
+-------------------------------------------
Pandas is a powerful tool and already has a plethora of data manipulation
operations implemented, most of them are very fast as well.
diff --git a/doc/source/install.rst b/doc/source/install.rst
index 246b05918d714..742acff04148e 100644
--- a/doc/source/install.rst
+++ b/doc/source/install.rst
@@ -70,7 +70,23 @@ Dependencies
* `pytz <http://pytz.sourceforge.net/>`__
* Needed for time zone support
-Optional dependencies
+.. _install.recommended_dependencies:
+
+Recommended Dependencies
+~~~~~~~~~~~~~~~~~~~~~~~~
+
+ * `numexpr <http://code.google.com/p/numexpr/>`__: for accelerating certain numerical operations.
+ ``numexpr`` uses multiple cores as well as smart chunking and caching to achieve large speedups.
+ * `bottleneck <http://berkeleyanalytics.com/bottleneck>`__: for accelerating certain types of ``nan``
+ evaluations. ``bottleneck`` uses specialized cython routines to achieve large speedups.
+
+.. note::
+
+ You are highly encouraged to install these libraries, as they provide large speedups, especially
+ if working with large data sets.
+
+
+Optional Dependencies
~~~~~~~~~~~~~~~~~~~~~
* `Cython <http://www.cython.org>`__: Only necessary to build development
diff --git a/doc/source/v0.11.0.txt b/doc/source/v0.11.0.txt
index b75ea0e664f6d..191c11eb79896 100644
--- a/doc/source/v0.11.0.txt
+++ b/doc/source/v0.11.0.txt
@@ -12,6 +12,8 @@ pay close attention to.
There is a new section in the documentation, :ref:`10 Minutes to Pandas <10min>`,
primarily geared to new users.
+There are several libraries that are now :ref:`Recommended Dependencies <install.recommended_dependencies>`
+
Selection Choices
~~~~~~~~~~~~~~~~~
@@ -224,11 +226,11 @@ API changes
Enhancements
~~~~~~~~~~~~
- - Numexpr is now a 'highly recommended dependency', to accelerate certain
- types of expression evaluation
+ - Numexpr is now a :ref:`Recommended Dependencies <install.recommended_dependencies>`, to accelerate certain
+ types of numerical and boolean operations
- - Bottleneck is now a 'highly recommended dependency', to accelerate certain
- types of numerical evaluations
+ - Bottleneck is now a :ref:`Recommended Dependencies <install.recommended_dependencies>`, to accelerate certain
+ types of ``nan`` operations
- In ``HDFStore``, provide dotted attribute access to ``get`` from stores
(e.g. ``store.df == store['df']``)
| https://api.github.com/repos/pandas-dev/pandas/pulls/2999 | 2013-03-10T01:19:09Z | 2013-03-10T02:47:27Z | 2013-03-10T02:47:27Z | 2013-03-10T02:47:28Z | |
DOC: doc updates/formatting in basics,indexing,10min | diff --git a/doc/source/10min.rst b/doc/source/10min.rst
index a6945eed1387c..e38bb52ffce15 100644
--- a/doc/source/10min.rst
+++ b/doc/source/10min.rst
@@ -126,11 +126,11 @@ See the :ref:`Indexing section <indexing>`
Getting
~~~~~~~
-Selecting a single column, which yields a ``Series``
+Selecting a single column, which yields a ``Series``,
+equivalent to ``df.A``
.. ipython:: python
- # equivalently ``df.A``
df['A']
Selecting via ``[]``, which slices the rows.
@@ -143,6 +143,8 @@ Selecting via ``[]``, which slices the rows.
Selection by Label
~~~~~~~~~~~~~~~~~~
+See more in :ref:`Selection by Label <indexing.label>`
+
For getting a cross section using a label
.. ipython:: python
@@ -182,6 +184,8 @@ For getting fast access to a scalar (equiv to the prior method)
Selection by Position
~~~~~~~~~~~~~~~~~~~~~
+See more in :ref:`Selection by Position <indexing.integer>`
+
Select via the position of the passed integers
.. ipython:: python
@@ -286,6 +290,11 @@ Setting by assigning with a numpy array
.. ipython:: python
df.loc[:,'D'] = np.array([5] * len(df))
+
+The result of the prior setting operations
+
+.. ipython:: python
+
df
A ``where`` operation with setting.
@@ -517,7 +526,7 @@ unstacks the **last level**:
Pivot Tables
~~~~~~~~~~~~
-See the section on :ref:`Pivot Tables <reshaping.pivot>`).
+See the section on :ref:`Pivot Tables <reshaping.pivot>`.
.. ipython:: python
diff --git a/doc/source/basics.rst b/doc/source/basics.rst
index d32cbf7dcb8d1..2b89d5e0bafe9 100644
--- a/doc/source/basics.rst
+++ b/doc/source/basics.rst
@@ -989,7 +989,10 @@ attribute for DataFrames returns a Series with the data type of each column.
.. ipython:: python
- dft = DataFrame(dict( A = np.random.rand(3), B = 1, C = 'foo', D = Timestamp('20010102'),
+ dft = DataFrame(dict( A = np.random.rand(3),
+ B = 1,
+ C = 'foo',
+ D = Timestamp('20010102'),
E = Series([1.0]*3).astype('float32'),
F = False,
G = Series([1]*3,dtype='int8')))
@@ -1014,8 +1017,8 @@ general).
# string data forces an ``object`` dtype
Series([1, 2, 3, 6., 'foo'])
-The related method ``get_dtype_counts`` will return the number of columns of
-each type:
+The method ``get_dtype_counts`` will return the number of columns of
+each type in a ``DataFrame``:
.. ipython:: python
@@ -1023,7 +1026,8 @@ each type:
Numeric dtypes will propagate and can coexist in DataFrames (starting in v0.11.0).
If a dtype is passed (either directly via the ``dtype`` keyword, a passed ``ndarray``,
-or a passed ``Series``, then it will be preserved in DataFrame operations. Furthermore, different numeric dtypes will **NOT** be combined. The following example will give you a taste.
+or a passed ``Series``, then it will be preserved in DataFrame operations. Furthermore,
+different numeric dtypes will **NOT** be combined. The following example will give you a taste.
.. ipython:: python
@@ -1039,9 +1043,8 @@ or a passed ``Series``, then it will be preserved in DataFrame operations. Furth
defaults
~~~~~~~~
-By default integer types are ``int64`` and float types are ``float64``, *REGARDLESS* of platform (32-bit or 64-bit).
-
-The following will all result in ``int64`` dtypes.
+By default integer types are ``int64`` and float types are ``float64``,
+*REGARDLESS* of platform (32-bit or 64-bit). The following will all result in ``int64`` dtypes.
.. ipython:: python
@@ -1050,13 +1053,18 @@ The following will all result in ``int64`` dtypes.
DataFrame({'a' : 1 }, index=range(2)).dtypes
Numpy, however will choose *platform-dependent* types when creating arrays.
-Thus, ``DataFrame(np.array([1,2]))`` **WILL** result in ``int32`` on 32-bit platform.
+The following **WILL** result in ``int32`` on 32-bit platform.
+
+.. ipython:: python
+
+ frame = DataFrame(np.array([1,2]))
upcasting
~~~~~~~~~
-Types can potentially be *upcasted* when combined with other types, meaning they are promoted from the current type (say ``int`` to ``float``)
+Types can potentially be *upcasted* when combined with other types, meaning they are promoted
+from the current type (say ``int`` to ``float``)
.. ipython:: python
@@ -1064,7 +1072,8 @@ Types can potentially be *upcasted* when combined with other types, meaning they
df3
df3.dtypes
-The ``values`` attribute on a DataFrame return the *lower-common-denominator* of the dtypes, meaning the dtype that can accomodate **ALL** of the types in the resulting homogenous dtyped numpy array. This can
+The ``values`` attribute on a DataFrame return the *lower-common-denominator* of the dtypes, meaning
+the dtype that can accomodate **ALL** of the types in the resulting homogenous dtyped numpy array. This can
force some *upcasting*.
.. ipython:: python
@@ -1076,7 +1085,10 @@ astype
.. _basics.cast:
-You can use the ``astype`` method to convert dtypes from one to another. These *always* return a copy.
+You can use the ``astype`` method to explicity convert dtypes from one to another. These will by default return a copy,
+even if the dtype was unchanged (pass ``copy=False`` to change this behavior). In addition, they will raise an
+exception if the astype operation is invalid.
+
Upcasting is always according to the **numpy** rules. If two different dtypes are involved in an operation,
then the more *general* one will be used as the result of the operation.
@@ -1091,17 +1103,13 @@ then the more *general* one will be used as the result of the operation.
object conversion
~~~~~~~~~~~~~~~~~
-To force conversion of specific types of number conversion, pass ``convert_numeric = True``.
-This will force strings and numbers alike to be numbers if possible, otherwise the will be set to ``np.nan``.
-To force conversion to ``datetime64[ns]``, pass ``convert_dates = 'coerce'``.
-This will convert any datetimelike object to dates, forcing other values to ``NaT``.
-
-In addition, ``convert_objects`` will attempt to *soft* conversion of any *object* dtypes, meaning that if all
-the objects in a Series are of the same type, the Series will have that dtype.
+``convert_objects`` is a method to try to force conversion of types from the ``object`` dtype to other types.
+To force conversion of specific types that are *number like*, e.g. could be a string that represents a number,
+pass ``convert_numeric=True``. This will force strings and numbers alike to be numbers if possible, otherwise
+they will be set to ``np.nan``.
.. ipython:: python
- # mixed type conversions
df3['D'] = '1.'
df3['E'] = '1'
df3.convert_objects(convert_numeric=True).dtypes
@@ -1111,14 +1119,21 @@ the objects in a Series are of the same type, the Series will have that dtype.
df3['E'] = df3['E'].astype('int32')
df3.dtypes
-This is a *forced coercion* on datelike types. This might be useful if you are reading in data which is mostly dates, but occasionally has non-dates intermixed and you want to make those values ``nan``.
+To force conversion to ``datetime64[ns]``, pass ``convert_dates='coerce'``.
+This will convert any datetimelike object to dates, forcing other values to ``NaT``.
+This might be useful if you are reading in data which is mostly dates,
+but occasionally has non-dates intermixed and you want to represent as missing.
.. ipython:: python
- s = Series([datetime(2001,1,1,0,0), 'foo', 1.0, 1, Timestamp('20010104'), '20010105'],dtype='O')
+ s = Series([datetime(2001,1,1,0,0),
+ 'foo', 1.0, 1, Timestamp('20010104'),
+ '20010105'],dtype='O')
s
s.convert_objects(convert_dates='coerce')
+In addition, ``convert_objects`` will attempt the *soft* conversion of any *object* dtypes, meaning that if all
+the objects in a Series are of the same type, the Series will have that dtype.
gotchas
~~~~~~~
diff --git a/doc/source/indexing.rst b/doc/source/indexing.rst
index 02aa00b7eaca6..392768a21586b 100644
--- a/doc/source/indexing.rst
+++ b/doc/source/indexing.rst
@@ -75,7 +75,7 @@ three types of multi-axis indexing.
See more at :ref:`Advanced Indexing <indexing.advanced>` and :ref:`Advanced Hierarchical <indexing.advanced_hierarchical>`
-Getting values from object with multi-axes uses the following notation (using ``.loc`` as an
+Getting values from an object with multi-axes selection uses the following notation (using ``.loc`` as an
example, but applies to ``.iloc`` and ``.ix`` as well). Any of the axes accessors may be the null
slice ``:``. Axes left out of the specification are assumed to be ``:``.
(e.g. ``p.loc['a']`` is equiv to ``p.loc['a',:,:]``)
@@ -103,13 +103,11 @@ See the section :ref:`Selection by Position <indexing.integer>` for substitutes.
.. _indexing.xs:
Cross-sectional slices on non-hierarchical indices are now easily performed using
-``.loc`` and/or ``.loc``. The methods:
+``.loc`` and/or ``.iloc``. These methods now exist primarily for backward compatibility.
- ``xs`` (for DataFrame),
- ``minor_xs`` and ``major_xs`` (for Panel)
-now exist primarily for backward compatibility.
-
See the section at :ref:`Selection by Label <indexing.label>` for substitutes.
.. _indexing.basics:
@@ -230,9 +228,7 @@ must be in the index or a ``KeyError`` will be raised!
When slicing, the start bound is *included*, **AND** the stop bound is *included*.
Integers are valid labels, but they refer to the label *and not the position*.
-The ``.loc`` attribute is the primary access method.
-
-The following are valid inputs:
+The ``.loc`` attribute is the primary access method. The following are valid inputs:
- A single label, e.g. ``5`` or ``'a'``
@@ -261,7 +257,9 @@ With a DataFrame
.. ipython:: python
- df1 = DataFrame(np.random.randn(6,4),index=list('abcdef'),columns=list('ABCD'))
+ df1 = DataFrame(np.random.randn(6,4),
+ index=list('abcdef'),
+ columns=list('ABCD'))
df1
df1.loc[['a','b','d'],:]
@@ -302,9 +300,7 @@ The semantics follow closely python and numpy slicing. These are ``0-based`` ind
When slicing, the start bound is *included*, while the upper bound is *excluded*.
Trying to use a non-integer, even a **valid** label will raise an ``IndexError``.
-The ``.iloc`` attribute is the primary access method .
-
-The following are valid inputs:
+The ``.iloc`` attribute is the primary access method. The following are valid inputs:
- An integer e.g. ``5``
- A list or array of integers ``[4, 3, 0]``
@@ -329,7 +325,9 @@ With a DataFrame
.. ipython:: python
- df1 = DataFrame(np.random.randn(6,4),index=range(0,12,2),columns=range(0,8,2))
+ df1 = DataFrame(np.random.randn(6,4),
+ index=range(0,12,2),
+ columns=range(0,8,2))
df1
Select via integer slicing
@@ -428,6 +426,8 @@ Boolean indexing
.. _indexing.boolean:
Another common operation is the use of boolean vectors to filter the data.
+The operators are: ``|`` for ``or``, ``&`` for ``and``, and ``~`` for ``not``.
+These are grouped using parentheses.
Using a boolean vector to index a Series works exactly as in a numpy ndarray:
@@ -436,6 +436,7 @@ Using a boolean vector to index a Series works exactly as in a numpy ndarray:
s[s > 0]
s[(s < 0) & (s > -0.5)]
s[(s < -1) | (s > 1 )]
+ s[~(s < 0)]
You may select rows from a DataFrame using a boolean vector the same length as
the DataFrame's index (for example, something derived from one of the columns
@@ -472,11 +473,15 @@ more complex criteria:
# Multiple criteria
df2[criterion & (df2['b'] == 'x')]
-
Note, with the choice methods :ref:`Selection by Label <indexing.label>`, :ref:`Selection by Position <indexing.integer>`,
-and :ref:`Advanced Indexing <indexing.advanced>` may select along more than one axis using boolean vectors combined with other
+and :ref:`Advanced Indexing <indexing.advanced>` you may select along more than one axis using boolean vectors combined with other
indexing expressions.
+.. ipython:: python
+
+ df2.loc[criterion & (df2['b'] == 'x'),'b':'c']
+
+
Where and Masking
~~~~~~~~~~~~~~~~~
@@ -484,21 +489,24 @@ Selecting values from a Series with a boolean vector generally returns a subset
To guarantee that selection output has the same shape as the original data, you can use the
``where`` method in ``Series`` and ``DataFrame``.
+
+To return only the selected rows
+
.. ipython:: python
- # return only the selected rows
s[s > 0]
- # return a Series of the same shape as the original
+To return a Series of the same shape as the original
+
+.. ipython:: python
+
s.where(s > 0)
Selecting values from a DataFrame with a boolean criterion now also preserves input data shape.
-``where`` is used under the hood as the implementation.
+``where`` is used under the hood as the implementation. The equivalent is ``df.where(df < 0)``
.. ipython:: python
- # return a DataFrame of the same shape as the original
- # this is equiavalent to ``df.where(df < 0)``
df[df < 0]
In addition, ``where`` takes an optional ``other`` argument for replacement of values where the
@@ -665,7 +673,7 @@ Advanced Indexing with ``.ix``
The recent addition of ``.loc`` and ``.iloc`` have enabled users to be quite
explicit about indexing choices. ``.ix`` allows a great flexibility to specify
- indexing locations by *label* an/or *integer position*. Pandas will attempt
+ indexing locations by *label* and/or *integer position*. Pandas will attempt
to use any passed *integer* as *label* locations first (like what ``.loc``
would do, then to fall back on *positional* indexing, like what ``.iloc`` would do).
diff --git a/doc/source/v0.11.0.txt b/doc/source/v0.11.0.txt
index ea174629c5fc9..cd7540328230f 100644
--- a/doc/source/v0.11.0.txt
+++ b/doc/source/v0.11.0.txt
@@ -12,9 +12,6 @@ pay close attention to.
There is a new section in the documentation, :ref:`10 Minutes to Pandas <10min>`,
primarily geared to new users.
-API changes
-~~~~~~~~~~~
-
Selection Choices
~~~~~~~~~~~~~~~~~
@@ -62,7 +59,7 @@ three types of multi-axis indexing.
Selection Deprecations
~~~~~~~~~~~~~~~~~~~~~~
-Starting in version 0.11.0, the methods may be deprecated in future versions.
+Starting in version 0.11.0, these methods may be deprecated in future versions.
- ``irow``
- ``icol``
@@ -90,7 +87,9 @@ Numeric dtypes will propagate and can coexist in DataFrames. If a dtype is passe
df1 = DataFrame(randn(8, 1), columns = ['A'], dtype = 'float32')
df1
df1.dtypes
- df2 = DataFrame(dict( A = Series(randn(8),dtype='float16'), B = Series(randn(8)), C = Series(randn(8),dtype='uint8') ))
+ df2 = DataFrame(dict( A = Series(randn(8),dtype='float16'),
+ B = Series(randn(8)),
+ C = Series(randn(8),dtype='uint8') ))
df2
df2.dtypes
@@ -102,15 +101,22 @@ Numeric dtypes will propagate and can coexist in DataFrames. If a dtype is passe
Dtype Conversion
~~~~~~~~~~~~~~~~
+This is lowest-common-denominator upcasting, meaning you get the dtype which can accommodate all of the types
+
.. ipython:: python
- # this is lower-common-denomicator upcasting (meaning you get the dtype which can accomodate all of the types)
df3.values.dtype
- # conversion of dtypes
+Conversion
+
+.. ipython:: python
+
df3.astype('float32').dtypes
- # mixed type conversions
+Mixed Conversion
+
+.. ipython:: python
+
df3['D'] = '1.'
df3['E'] = '1'
df3.convert_objects(convert_numeric=True).dtypes
@@ -120,7 +126,10 @@ Dtype Conversion
df3['E'] = df3['E'].astype('int32')
df3.dtypes
- # forcing date coercion
+Forcing Date coercion (and setting ``NaT`` when not datelike)
+
+.. ipython:: python
+
s = Series([datetime(2001,1,1,0,0), 'foo', 1.0, 1,
Timestamp('20010104'), '20010105'],dtype='O')
s.convert_objects(convert_dates='coerce')
@@ -148,7 +157,7 @@ Keep in mind that ``DataFrame(np.array([1,2]))`` **WILL** result in ``int32`` on
**Upcasting Gotchas**
Performing indexing operations on integer type data can easily upcast the data.
-The dtype of the input data will be preserved in cases where ``nans`` are not introduced (coming soon).
+The dtype of the input data will be preserved in cases where ``nans`` are not introduced.
.. ipython:: python
@@ -209,11 +218,14 @@ Astype conversion on ``datetime64[ns]`` to ``object``, implicity converts ``NaT`
s.dtype
+API changes
+~~~~~~~~~~~
+
Enhancements
~~~~~~~~~~~~
- In ``HDFStore``, provide dotted attribute access to ``get`` from stores
- (e.g. store.df == store['df'])
+ (e.g. ``store.df == store['df']``)
- ``Squeeze`` to possibly remove length 1 dimensions from an object.
| https://api.github.com/repos/pandas-dev/pandas/pulls/2992 | 2013-03-07T23:25:23Z | 2013-03-07T23:33:49Z | 2013-03-07T23:33:49Z | 2013-03-07T23:33:49Z | |
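An editorial aside for readers following this diff today: ``convert_objects`` was removed from pandas long ago. A rough sketch of the same forced coercions using the current API (``pd.to_numeric`` and ``pd.to_datetime`` are the real modern functions; the example data is invented for illustration):

```python
import numpy as np
import pandas as pd

# Forced numeric coercion: a modern stand-in for the long-removed
# convert_objects(convert_numeric=True) shown in the diff above.
s = pd.Series(["1.", "1", "foo"], dtype=object)
num = pd.to_numeric(s, errors="coerce")
assert num.dtype == np.float64   # number-like strings parsed where possible
assert np.isnan(num.iloc[2])     # "foo" coerced to NaN

# Forced date coercion: a modern stand-in for convert_objects(convert_dates='coerce').
dates = pd.to_datetime(pd.Series(["20010105", "foo"]), errors="coerce")
assert dates.isna().tolist() == [False, True]  # "foo" coerced to NaT
```

The ``errors="coerce"`` mode matches the diff's described semantics: unparseable values become ``NaN``/``NaT`` rather than raising.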
ENH/BUG: add tz argument to to_timestamp of Period GH2877 | diff --git a/doc/source/v0.11.0.txt b/doc/source/v0.11.0.txt
index 60ec7de5c4d8e..dbdea60c0f3b9 100644
--- a/doc/source/v0.11.0.txt
+++ b/doc/source/v0.11.0.txt
@@ -315,6 +315,7 @@ Bug Fixes
- Fixed slow printing of large Dataframes, due to inefficient dtype
reporting (GH2807_)
- Fix pretty-printing of infinite data structures (closes GH2978_)
+ - Fixed exception when plotting timeseries bearing a timezone (closes GH2877_)
- str.contains ignored na argument (GH2806_)
See the `full release notes
@@ -326,6 +327,7 @@ on GitHub for a complete list.
.. _GH2837: https://github.com/pydata/pandas/issues/2837
.. _GH2898: https://github.com/pydata/pandas/issues/2898
.. _GH2978: https://github.com/pydata/pandas/issues/2978
+.. _GH2877: https://github.com/pydata/pandas/issues/2877
.. _GH2739: https://github.com/pydata/pandas/issues/2739
.. _GH2710: https://github.com/pydata/pandas/issues/2710
.. _GH2806: https://github.com/pydata/pandas/issues/2806
diff --git a/pandas/tools/plotting.py b/pandas/tools/plotting.py
index 11e89a840d145..1bc0d16bbbf1b 100644
--- a/pandas/tools/plotting.py
+++ b/pandas/tools/plotting.py
@@ -1043,7 +1043,7 @@ def _no_base(self, freq):
if (base <= freqmod.FreqGroup.FR_DAY):
return x[:1].is_normalized
- return Period(x[0], freq).to_timestamp() == x[0]
+ return Period(x[0], freq).to_timestamp(tz=x.tz) == x[0]
return True
def _use_dynamic_x(self):
diff --git a/pandas/tseries/period.py b/pandas/tseries/period.py
index 75decb91485ca..df5eb8743c9c8 100644
--- a/pandas/tseries/period.py
+++ b/pandas/tseries/period.py
@@ -188,7 +188,7 @@ def end_time(self):
ordinal = (self + 1).start_time.value - 1
return Timestamp(ordinal)
- def to_timestamp(self, freq=None, how='start'):
+ def to_timestamp(self, freq=None, how='start',tz=None):
"""
Return the Timestamp representation of the Period at the target
frequency at the specified end (how) of the Period
@@ -216,7 +216,7 @@ def to_timestamp(self, freq=None, how='start'):
val = self.asfreq(freq, how)
dt64 = tslib.period_ordinal_to_dt64(val.ordinal, base)
- return Timestamp(dt64)
+ return Timestamp(dt64,tz=tz)
year = _period_field_accessor('year', 0)
month = _period_field_accessor('month', 3)
diff --git a/pandas/tseries/tests/test_period.py b/pandas/tseries/tests/test_period.py
index 22264a5613922..5d477a8ca10ed 100644
--- a/pandas/tseries/tests/test_period.py
+++ b/pandas/tseries/tests/test_period.py
@@ -77,6 +77,12 @@ def test_period_cons_weekly(self):
expected = Period(daystr, freq='D').asfreq(freq)
self.assertEquals(result, expected)
+ def test_timestamp_tz_arg(self):
+ import pytz
+ p = Period('1/1/2005', freq='M').to_timestamp(tz='Europe/Brussels')
+ self.assertEqual(p.tz,
+ pytz.timezone('Europe/Brussels').normalize(p).tzinfo)
+
def test_period_constructor(self):
i1 = Period('1/1/2005', freq='M')
i2 = Period('Jan 2005')
diff --git a/pandas/tseries/tests/test_plotting.py b/pandas/tseries/tests/test_plotting.py
index 04a5ddd89d60b..69e9651258340 100644
--- a/pandas/tseries/tests/test_plotting.py
+++ b/pandas/tseries/tests/test_plotting.py
@@ -51,6 +51,13 @@ def setUp(self):
columns=['A', 'B', 'C'])
for x in idx]
+ @slow
+ def test_ts_plot_with_tz(self):
+ # GH2877
+ index = date_range('1/1/2011', periods=2, freq='H', tz='Europe/Brussels')
+ ts = Series([188.5, 328.25], index=index)
+ ts.plot()
+
@slow
def test_frame_inferred(self):
# inferred freq
| #2877
| https://api.github.com/repos/pandas-dev/pandas/pulls/2991 | 2013-03-07T19:43:06Z | 2013-03-15T20:51:28Z | 2013-03-15T20:51:28Z | 2014-06-27T10:11:32Z |
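For reference, a hedged sketch of what this PR enables. Recent pandas versions no longer accept ``tz=`` in ``Period.to_timestamp``, so the sketch localizes the naive result afterwards instead, which yields the same tz-aware Timestamp (assuming current ``Period``/``Timestamp`` behavior; the zone ``UTC`` is chosen only for portability):

```python
import pandas as pd

# Take the period's start as a naive Timestamp, then attach a zone;
# this mirrors what to_timestamp(tz=...) produced in the PR above.
p = pd.Period("2005-01", freq="M")
ts = p.to_timestamp(how="start").tz_localize("UTC")
assert ts == pd.Timestamp("2005-01-01", tz="UTC")
assert ts.tzinfo is not None
```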
ENH: support min/max on timedelta64[ns] Series | diff --git a/RELEASE.rst b/RELEASE.rst
index 78e946006e1fb..1c6a9d4103ab8 100644
--- a/RELEASE.rst
+++ b/RELEASE.rst
@@ -131,7 +131,7 @@ pandas 0.11.0
- Support null checking on timedelta64, representing (and formatting) with NaT
- Support setitem with np.nan value, converts to NaT
- Support min/max ops in a Dataframe (abs not working, nor do we error on non-supported ops)
- - Support idxmin/idxmax/abs in a Series (but with no NaT)
+ - Support idxmin/idxmax/abs/max/min in a Series (GH2989_, GH2982_)
- Bug on in-place putmasking on an ``integer`` series that needs to be converted to ``float`` (GH2746_)
- Bug in argsort of ``datetime64[ns]`` Series with ``NaT`` (GH2967_)
@@ -160,6 +160,7 @@ pandas 0.11.0
.. _GH2973: https://github.com/pydata/pandas/issues/2973
.. _GH2967: https://github.com/pydata/pandas/issues/2967
.. _GH2982: https://github.com/pydata/pandas/issues/2982
+.. _GH2989: https://github.com/pydata/pandas/issues/2989
pandas 0.10.1
diff --git a/doc/source/timeseries.rst b/doc/source/timeseries.rst
index 78dd5cee9c8f9..1c1a0680e1f28 100644
--- a/doc/source/timeseries.rst
+++ b/doc/source/timeseries.rst
@@ -966,14 +966,30 @@ Some timedelta numeric like operations are supported.
.. ipython:: python
- s = Series(date_range('2012-1-1', periods=3, freq='D'))
+ td - timedelta(minutes=5,seconds=5,microseconds=5)
+
+``min, max`` and the corresponding ``idxmin, idxmax`` operations are supported on frames
+
+.. ipython:: python
+
df = DataFrame(dict(A = s - Timestamp('20120101')-timedelta(minutes=5,seconds=5),
B = s - Series(date_range('2012-1-2', periods=3, freq='D'))))
df
- # timedelta arithmetic
- td - timedelta(minutes=5,seconds=5,microseconds=5)
- # min/max operations
df.min()
df.min(axis=1)
+
+ df.idxmin()
+ df.idxmax()
+
+``min, max`` operations are supported on Series; these return a single-element ``timedelta64[ns]`` Series (this avoids
+having to deal with numpy timedelta64 issues). ``idxmin, idxmax`` are supported as well.
+
+.. ipython:: python
+
+ df.min().max()
+ df.min(axis=1).min()
+
+ df.min().idxmax()
+ df.min(axis=1).idxmin()
diff --git a/doc/source/v0.11.0.txt b/doc/source/v0.11.0.txt
index f4c9d13c0d23e..ea174629c5fc9 100644
--- a/doc/source/v0.11.0.txt
+++ b/doc/source/v0.11.0.txt
@@ -258,8 +258,6 @@ Bug Fixes
df = DataFrame(dict(A = s, B = td))
df
s - s.max()
- s - datetime(2011,1,1,3,5)
- s + timedelta(minutes=5)
df['C'] = df['A'] + df['B']
df
df.dtypes
@@ -274,10 +272,11 @@ Bug Fixes
# works on lhs too
s.max() - s
- datetime(2011,1,1,3,5) - s
- timedelta(minutes=5) + s
- - Fix pretty-printing of infinite data structures, GH2978
+ # some timedelta numeric operations are supported
+ td - timedelta(minutes=5,seconds=5,microseconds=5)
+
+ - Fix pretty-printing of infinite data structures (closes GH2978_)
See the `full release notes
diff --git a/pandas/core/common.py b/pandas/core/common.py
index 97ab861d6a3a7..23c178ebb6e4f 100644
--- a/pandas/core/common.py
+++ b/pandas/core/common.py
@@ -955,10 +955,10 @@ def _possibly_cast_to_timedelta(value, coerce=True):
def _possibly_cast_to_datetime(value, dtype, coerce = False):
""" try to cast the array/value to a datetimelike dtype, converting float nan to iNaT """
- if isinstance(dtype, basestring):
- dtype = np.dtype(dtype)
-
if dtype is not None:
+ if isinstance(dtype, basestring):
+ dtype = np.dtype(dtype)
+
is_datetime64 = is_datetime64_dtype(dtype)
is_timedelta64 = is_timedelta64_dtype(dtype)
@@ -984,21 +984,28 @@ def _possibly_cast_to_datetime(value, dtype, coerce = False):
except:
pass
- elif dtype is None:
- # we might have a array (or single object) that is datetime like, and no dtype is passed
- # don't change the value unless we find a datetime set
- v = value
- if not is_list_like(v):
- v = [ v ]
- if len(v):
- inferred_type = lib.infer_dtype(v)
- if inferred_type == 'datetime':
- try:
- value = tslib.array_to_datetime(np.array(v))
- except:
- pass
- elif inferred_type == 'timedelta':
- value = _possibly_cast_to_timedelta(value)
+ else:
+
+ # only do this if we have an array and the dtype of the array is not setup already
+ # we are not an integer/object, so don't bother with this conversion
+ if isinstance(value, np.ndarray) and not (issubclass(value.dtype.type, np.integer) or value.dtype == np.object_):
+ pass
+
+ else:
+ # we might have a array (or single object) that is datetime like, and no dtype is passed
+ # don't change the value unless we find a datetime set
+ v = value
+ if not is_list_like(v):
+ v = [ v ]
+ if len(v):
+ inferred_type = lib.infer_dtype(v)
+ if inferred_type == 'datetime':
+ try:
+ value = tslib.array_to_datetime(np.array(v))
+ except:
+ pass
+ elif inferred_type == 'timedelta':
+ value = _possibly_cast_to_timedelta(value)
return value
diff --git a/pandas/core/nanops.py b/pandas/core/nanops.py
index 93f06aae2b1b7..f841c0dbecd8e 100644
--- a/pandas/core/nanops.py
+++ b/pandas/core/nanops.py
@@ -146,7 +146,13 @@ def _wrap_results(result,dtype):
result = result.view(dtype)
elif issubclass(dtype.type, np.timedelta64):
if not isinstance(result, np.ndarray):
- pass
+
+ # this is a scalar timedelta result!
+ # we have series convert then take the element (scalar)
+ # as series will do the right thing in py3 (and deal with numpy 1.6.2
+ # bug in that it results dtype of timedelta64[us]
+ from pandas import Series
+ result = Series([result],dtype='timedelta64[ns]')
else:
result = result.view(dtype)
diff --git a/pandas/tests/test_series.py b/pandas/tests/test_series.py
index ee288fda120d3..9ea5e59447475 100644
--- a/pandas/tests/test_series.py
+++ b/pandas/tests/test_series.py
@@ -1838,6 +1838,15 @@ def test_timedelta64_functions(self):
result = (s1-s2).abs()
assert_series_equal(result,expected)
+ # max/min
+ result = td.max()
+ expected = Series([timedelta(2)],dtype='timedelta64[ns]')
+ assert_series_equal(result,expected)
+
+ result = td.min()
+ expected = Series([timedelta(1)],dtype='timedelta64[ns]')
+ assert_series_equal(result,expected)
+
def test_sub_of_datetime_from_TimeSeries(self):
from pandas.core import common as com
from datetime import datetime
| closes GH #2989
```
In [168]: df = DataFrame(dict(A = s - Timestamp('20120101')-timedelta(minutes=5,seconds=5),
.....: B = s - Series(date_range('2012-1-2', periods=3, freq='D'))))
.....:
In [169]: df
Out[169]:
A B
0 -00:05:05 -1 days, 00:00:00
1 23:54:55 -1 days, 00:00:00
2 1 days, 23:54:55 -1 days, 00:00:00
In [170]: df.min()
Out[170]:
A -00:05:05
B -1 days, 00:00:00
dtype: timedelta64[ns]
In [171]: df.min(axis=1)
Out[171]:
0 -1 days, 00:00:00
1 -1 days, 00:00:00
2 -1 days, 00:00:00
dtype: timedelta64[ns]
In [172]: df.idxmin()
Out[172]:
A 0
B 0
dtype: int64
In [173]: df.idxmax()
Out[173]:
A 2
B 0
dtype: int64
min, max operations are supported on Series; these return a single-element timedelta64[ns] Series
(this avoids having to deal with numpy timedelta64 issues). idxmin, idxmax are supported as well.
In [306]: df.min().max()
Out[306]:
0 -00:05:05
dtype: timedelta64[ns]
In [307]: df.min(axis=1).min()
Out[307]:
0 -1 days, 00:00:00
dtype: timedelta64[ns]
In [308]: df.min().idxmax()
Out[308]: 'A'
In [309]: df.min(axis=1).idxmin()
Out[309]: 0
```
| https://api.github.com/repos/pandas-dev/pandas/pulls/2990 | 2013-03-07T13:31:21Z | 2013-03-07T19:15:00Z | 2013-03-07T19:15:00Z | 2014-06-16T11:59:00Z |
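The session in the PR body can be reproduced against a current pandas roughly as below. Note one behavioral difference: modern ``Series.min``/``max`` on timedeltas return a scalar ``Timedelta`` rather than the one-element Series described above (that wrapping was a workaround for a numpy 1.6 dtype bug):

```python
import pandas as pd

# Rebuild the example frame from the PR body.
s = pd.Series(pd.date_range("2012-01-01", periods=3, freq="D"))
df = pd.DataFrame({
    "A": s - pd.Timestamp("2012-01-01") - pd.Timedelta(minutes=5, seconds=5),
    "B": s - pd.Series(pd.date_range("2012-01-02", periods=3, freq="D")),
})

# Column-wise and row-wise minima on timedelta64[ns] columns
assert df["A"].dtype == "timedelta64[ns]"
assert df.min()["B"] == pd.Timedelta(days=-1)
assert df.min(axis=1).min() == pd.Timedelta(days=-1)

# idxmin/idxmax work as on numeric frames
assert df.idxmin().tolist() == [0, 0]
assert df.idxmax()["A"] == 2
```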
add comparisons operator to the Period object and fixes issue 2781 | diff --git a/pandas/tseries/period.py b/pandas/tseries/period.py
index 75decb91485ca..7b57097127594 100644
--- a/pandas/tseries/period.py
+++ b/pandas/tseries/period.py
@@ -128,8 +128,12 @@ def __init__(self, value=None, freq=None, ordinal=None,
def __eq__(self, other):
if isinstance(other, Period):
+ if other.freq != self.freq:
+ raise ValueError("Cannot compare non-conforming periods")
return (self.ordinal == other.ordinal
and _gfc(self.freq) == _gfc(other.freq))
+ else:
+ raise TypeError(other)
return False
def __hash__(self):
@@ -152,6 +156,23 @@ def __sub__(self, other):
else: # pragma: no cover
raise TypeError(other)
+ def _comp_method(func, name):
+ def f(self, other):
+ if isinstance(other, Period):
+ if other.freq != self.freq:
+ raise ValueError("Cannot compare non-conforming periods")
+ return func(self.ordinal, other.ordinal)
+ else:
+ raise TypeError(other)
+
+ f.__name__ = name
+ return f
+
+ __lt__ = _comp_method(operator.lt, '__lt__')
+ __le__ = _comp_method(operator.le, '__le__')
+ __gt__ = _comp_method(operator.gt, '__gt__')
+ __ge__ = _comp_method(operator.ge, '__ge__')
+
def asfreq(self, freq, how='E'):
"""
Convert Period to desired frequency, either at the start or end of the
diff --git a/pandas/tseries/tests/test_period.py b/pandas/tseries/tests/test_period.py
index 22264a5613922..57fa4243515a6 100644
--- a/pandas/tseries/tests/test_period.py
+++ b/pandas/tseries/tests/test_period.py
@@ -449,12 +449,6 @@ def test_constructor_infer_freq(self):
self.assertRaises(ValueError, Period, '2007-01-01 07:10:15.123456')
- def test_comparisons(self):
- p = Period('2007-01-01')
- self.assertEquals(p, p)
- self.assert_(not p == 1)
-
-
def noWrap(item):
return item
@@ -2000,6 +1994,67 @@ def test_negone_ordinals(self):
repr(period)
+class TestComparisons(unittest.TestCase):
+ def setUp(self):
+ self.january1 = Period('2000-01', 'M')
+ self.january2 = Period('2000-01', 'M')
+ self.february = Period('2000-02', 'M')
+ self.march = Period('2000-03', 'M')
+ self.day = Period('2012-01-01', 'D')
+
+ def test_equal(self):
+ self.assertEqual(self.january1, self.january2)
+
+ def test_equal_Raises_Value(self):
+ self.assertRaises(ValueError, self.january1.__eq__, self.day)
+
+ def test_equal_Raises_Type(self):
+ self.assertRaises(TypeError, self.january1.__eq__, 1)
+
+ def test_notEqual(self):
+ self.assertNotEqual(self.january1, self.february)
+
+ def test_greater(self):
+ self.assertGreater(self.february, self.january1)
+
+ def test_greater_Raises_Value(self):
+ self.assertRaises(ValueError, self.january1.__gt__, self.day)
+
+ def test_greater_Raises_Type(self):
+ self.assertRaises(TypeError, self.january1.__gt__, 1)
+
+ def test_greaterEqual(self):
+ self.assertGreaterEqual(self.january1, self.january2)
+
+ def test_greaterEqual_Raises_Value(self):
+ self.assertRaises(ValueError, self.january1.__ge__, self.day)
+
+ def test_greaterEqual_Raises_Type(self):
+ self.assertRaises(TypeError, self.january1.__ge__, 1)
+
+ def test_smallerEqual(self):
+ self.assertLessEqual(self.january1, self.january2)
+
+ def test_smallerEqual_Raises_Value(self):
+ self.assertRaises(ValueError, self.january1.__le__, self.day)
+
+ def test_smallerEqual_Raises_Type(self):
+ self.assertRaises(TypeError, self.january1.__le__, 1)
+
+ def test_smaller(self):
+ self.assertLess(self.january1, self.february)
+
+ def test_smaller_Raises_Value(self):
+ self.assertRaises(ValueError, self.january1.__lt__, self.day)
+
+ def test_smaller_Raises_Type(self):
+ self.assertRaises(TypeError, self.january1.__lt__, 1)
+
+ def test_sort(self):
+ periods = [self.march, self.january1, self.february]
+ correctPeriods = [self.january1, self.february, self.march]
+ self.assertListEqual(sorted(periods), correctPeriods)
+
if __name__ == '__main__':
import nose
nose.runmodule(argv=[__file__, '-vvs', '-x', '--pdb', '--pdb-failure'],
| This commit should fix [issue 2781](https://github.com/pydata/pandas/issues/2781). The Period object couldn't be sorted because it didn't have any **le**, **gt**, **lt**, **ge** or **cmp** methods.
| https://api.github.com/repos/pandas-dev/pandas/pulls/2987 | 2013-03-07T01:55:44Z | 2013-03-28T05:40:01Z | 2013-03-28T05:40:01Z | 2014-06-19T14:50:06Z |
BUG: Bug in idxmin/idxmax of ``datetime64[ns]`` Series with ``NaT`` (GH2982) | diff --git a/RELEASE.rst b/RELEASE.rst
index 8724dd4f7daa6..cf3fd598a8186 100644
--- a/RELEASE.rst
+++ b/RELEASE.rst
@@ -126,6 +126,7 @@ pandas 0.11.0
- Bug on in-place putmasking on an ``integer`` series that needs to be converted to ``float`` (GH2746_)
- Bug in argsort of ``datetime64[ns]`` Series with ``NaT`` (GH2967_)
+ - Bug in idxmin/idxmax of ``datetime64[ns]`` Series with ``NaT`` (GH2982_)
.. _GH622: https://github.com/pydata/pandas/issues/622
.. _GH797: https://github.com/pydata/pandas/issues/797
@@ -147,6 +148,7 @@ pandas 0.11.0
.. _GH2931: https://github.com/pydata/pandas/issues/2931
.. _GH2973: https://github.com/pydata/pandas/issues/2973
.. _GH2967: https://github.com/pydata/pandas/issues/2967
+.. _GH2982: https://github.com/pydata/pandas/issues/2982
pandas 0.10.1
diff --git a/doc/source/io.rst b/doc/source/io.rst
index ebd6f5c77c658..86d590965f141 100644
--- a/doc/source/io.rst
+++ b/doc/source/io.rst
@@ -1186,7 +1186,7 @@ A query is specified using the ``Term`` class under the hood.
Valid terms can be created from ``dict, list, tuple, or
string``. Objects can be embeded as values. Allowed operations are: ``<,
-<=, >, >=, =``. ``=`` will be inferred as an implicit set operation
+<=, >, >=, =, !=``. ``=`` will be inferred as an implicit set operation
(e.g. if 2 or more values are provided). The following are all valid
terms.
diff --git a/pandas/core/common.py b/pandas/core/common.py
index 03d43250f9265..97ab861d6a3a7 100644
--- a/pandas/core/common.py
+++ b/pandas/core/common.py
@@ -682,19 +682,30 @@ def _infer_dtype_from_scalar(val):
def _maybe_promote(dtype, fill_value=np.nan):
+
+ # if we passed an array here, determine the fill value by dtype
+ if isinstance(fill_value,np.ndarray):
+ if issubclass(fill_value.dtype.type, (np.datetime64,np.timedelta64)):
+ fill_value = tslib.iNaT
+ else:
+ fill_value = np.nan
+
# returns tuple of (dtype, fill_value)
- if issubclass(dtype.type, np.datetime64):
+ if issubclass(dtype.type, (np.datetime64,np.timedelta64)):
# for now: refuse to upcast datetime64
# (this is because datetime64 will not implicitly upconvert
# to object correctly as of numpy 1.6.1)
if isnull(fill_value):
fill_value = tslib.iNaT
else:
- try:
- fill_value = lib.Timestamp(fill_value).value
- except:
- # the proper thing to do here would probably be to upcast to
- # object (but numpy 1.6.1 doesn't do this properly)
+ if issubclass(dtype.type, np.datetime64):
+ try:
+ fill_value = lib.Timestamp(fill_value).value
+ except:
+ # the proper thing to do here would probably be to upcast to
+ # object (but numpy 1.6.1 doesn't do this properly)
+ fill_value = tslib.iNaT
+ else:
fill_value = tslib.iNaT
elif is_float(fill_value):
if issubclass(dtype.type, np.bool_):
@@ -722,7 +733,7 @@ def _maybe_promote(dtype, fill_value=np.nan):
return dtype, fill_value
-def _maybe_upcast_putmask(result, mask, other):
+def _maybe_upcast_putmask(result, mask, other, dtype=None):
""" a safe version of put mask that (potentially upcasts the result
return the result and a changed flag """
try:
@@ -730,16 +741,25 @@ def _maybe_upcast_putmask(result, mask, other):
except:
# our type is wrong here, need to upcast
if (-mask).any():
- result, fill_value = _maybe_upcast(result, copy=True)
+ result, fill_value = _maybe_upcast(result, fill_value=other, dtype=dtype, copy=True)
np.putmask(result, mask, other)
return result, True
return result, False
-def _maybe_upcast(values, fill_value=np.nan, copy=False):
+def _maybe_upcast(values, fill_value=np.nan, dtype=None, copy=False):
""" provide explicty type promotion and coercion
- if copy == True, then a copy is created even if no upcast is required """
- new_dtype, fill_value = _maybe_promote(values.dtype, fill_value)
+
+ Parameters
+ ----------
+ values : the ndarray that we want to maybe upcast
+ fill_value : what we want to fill with
+ dtype : if None, then use the dtype of the values, else coerce to this type
+ copy : if True always make a copy even if no upcast is required """
+
+ if dtype is None:
+ dtype = values.dtype
+ new_dtype, fill_value = _maybe_promote(dtype, fill_value)
if new_dtype != values.dtype:
values = values.astype(new_dtype)
elif copy:
@@ -915,7 +935,6 @@ def _possibly_convert_platform(values):
return values
-
def _possibly_cast_to_timedelta(value, coerce=True):
""" try to cast to timedelta64 w/o coercion """
diff --git a/pandas/core/nanops.py b/pandas/core/nanops.py
index 881cef2311b27..93f06aae2b1b7 100644
--- a/pandas/core/nanops.py
+++ b/pandas/core/nanops.py
@@ -8,6 +8,7 @@
import pandas.lib as lib
import pandas.algos as algos
import pandas.hashtable as _hash
+import pandas.tslib as tslib
try:
import bottleneck as bn
@@ -69,8 +70,61 @@ def _has_infs(result):
else:
return np.isinf(result) or np.isneginf(result)
+def _get_fill_value(dtype, fill_value=None, fill_value_typ=None):
+ """ return the correct fill value for the dtype of the values """
+ if fill_value is not None:
+ return fill_value
+ if _na_ok_dtype(dtype):
+ if fill_value_typ is None:
+ return np.nan
+ else:
+ if fill_value_typ == '+inf':
+ return np.inf
+ else:
+ return -np.inf
+ else:
+ if fill_value_typ is None:
+ return tslib.iNaT
+ else:
+ if fill_value_typ == '+inf':
+ # need the max int here
+ return np.iinfo(np.int64).max
+ else:
+ return tslib.iNaT
+
+def _get_values(values, skipna, fill_value=None, fill_value_typ=None, isfinite=False, copy=True):
+ """ utility to get the values view, mask, dtype
+ if necessary copy and mask using the specified fill_value
+ copy = True will force the copy """
+ if isfinite:
+ mask = _isfinite(values)
+ else:
+ mask = isnull(values)
+
+ dtype = values.dtype
+ dtype_ok = _na_ok_dtype(dtype)
+
+ # get our fill value (in case we need to provide an alternative dtype for it)
+ fill_value = _get_fill_value(dtype, fill_value=fill_value, fill_value_typ=fill_value_typ)
+
+ if skipna:
+ if copy:
+ values = values.copy()
+ if dtype_ok:
+ np.putmask(values, mask, fill_value)
+
+ # promote if needed
+ else:
+ values, changed = com._maybe_upcast_putmask(values, mask, fill_value)
+
+ elif copy:
+ values = values.copy()
+
+ values = _view_if_needed(values)
+ return values, mask, dtype
+
def _isfinite(values):
- if issubclass(values.dtype.type, np.timedelta64):
+ if issubclass(values.dtype.type, (np.timedelta64,np.datetime64)):
return isnull(values)
return -np.isfinite(values)
@@ -99,43 +153,21 @@ def _wrap_results(result,dtype):
return result
def nanany(values, axis=None, skipna=True):
- mask = isnull(values)
-
- if skipna:
- values = values.copy()
- np.putmask(values, mask, False)
+ values, mask, dtype = _get_values(values, skipna, False, copy=skipna)
return values.any(axis)
-
def nanall(values, axis=None, skipna=True):
- mask = isnull(values)
-
- if skipna:
- values = values.copy()
- np.putmask(values, mask, True)
+ values, mask, dtype = _get_values(values, skipna, True, copy=skipna)
return values.all(axis)
-
def _nansum(values, axis=None, skipna=True):
- mask = isnull(values)
-
- if skipna and not issubclass(values.dtype.type, np.integer):
- values = values.copy()
- np.putmask(values, mask, 0)
-
+ values, mask, dtype = _get_values(values, skipna, 0)
the_sum = values.sum(axis)
the_sum = _maybe_null_out(the_sum, axis, mask)
-
return the_sum
-
def _nanmean(values, axis=None, skipna=True):
- mask = isnull(values)
-
- if skipna and not issubclass(values.dtype.type, np.integer):
- values = values.copy()
- np.putmask(values, mask, 0)
-
+ values, mask, dtype = _get_values(values, skipna, 0)
the_sum = _ensure_numeric(values.sum(axis))
count = _get_counts(mask, axis)
@@ -186,15 +218,7 @@ def _nanvar(values, axis=None, skipna=True, ddof=1):
def _nanmin(values, axis=None, skipna=True):
- mask = isnull(values)
-
- dtype = values.dtype
-
- if skipna and _na_ok_dtype(dtype):
- values = values.copy()
- np.putmask(values, mask, np.inf)
-
- values = _view_if_needed(values)
+ values, mask, dtype = _get_values(values, skipna, fill_value_typ = '+inf')
# numpy 1.6.1 workaround in Python 3.x
if (values.dtype == np.object_
@@ -218,15 +242,7 @@ def _nanmin(values, axis=None, skipna=True):
def _nanmax(values, axis=None, skipna=True):
- mask = isnull(values)
-
- dtype = values.dtype
-
- if skipna and _na_ok_dtype(dtype):
- values = values.copy()
- np.putmask(values, mask, -np.inf)
-
- values = _view_if_needed(values)
+ values, mask, dtype = _get_values(values, skipna, fill_value_typ ='-inf')
# numpy 1.6.1 workaround in Python 3.x
if (values.dtype == np.object_
@@ -254,11 +270,7 @@ def nanargmax(values, axis=None, skipna=True):
"""
Returns -1 in the NA case
"""
- mask = _isfinite(values)
- values = _view_if_needed(values)
- if not issubclass(values.dtype.type, np.integer):
- values = values.copy()
- np.putmask(values, mask, -np.inf)
+ values, mask, dtype = _get_values(values, skipna, fill_value_typ = '-inf', isfinite=True)
result = values.argmax(axis)
result = _maybe_arg_null_out(result, axis, mask, skipna)
return result
@@ -268,11 +280,7 @@ def nanargmin(values, axis=None, skipna=True):
"""
Returns -1 in the NA case
"""
- mask = _isfinite(values)
- values = _view_if_needed(values)
- if not issubclass(values.dtype.type, np.integer):
- values = values.copy()
- np.putmask(values, mask, np.inf)
+ values, mask, dtype = _get_values(values, skipna, fill_value_typ = '+inf', isfinite=True)
result = values.argmin(axis)
result = _maybe_arg_null_out(result, axis, mask, skipna)
return result
diff --git a/pandas/core/series.py b/pandas/core/series.py
index 8eceb7798d31d..b349dd65ff82d 100644
--- a/pandas/core/series.py
+++ b/pandas/core/series.py
@@ -2112,13 +2112,13 @@ def argsort(self, axis=0, kind='quicksort', order=None):
mask = isnull(values)
if mask.any():
- result = Series(-1,index=self.index,name=self.name)
+ result = Series(-1,index=self.index,name=self.name,dtype='int64')
notmask = -mask
result.values[notmask] = np.argsort(self.values[notmask], kind=kind)
return result
else:
return Series(np.argsort(values, kind=kind), index=self.index,
- name=self.name)
+ name=self.name,dtype='int64')
def rank(self, method='average', na_option='keep', ascending=True):
"""
diff --git a/pandas/tests/test_series.py b/pandas/tests/test_series.py
index 312789e10c252..ee288fda120d3 100644
--- a/pandas/tests/test_series.py
+++ b/pandas/tests/test_series.py
@@ -1816,14 +1816,15 @@ def test_timedelta64_functions(self):
result = td.idxmax()
self.assert_(result == 2)
- # with NaT (broken)
+ # GH 2982
+ # with NaT
td[0] = np.nan
- #result = td.idxmin()
- #self.assert_(result == 1)
+ result = td.idxmin()
+ self.assert_(result == 1)
- #result = td.idxmax()
- #self.assert_(result == 2)
+ result = td.idxmax()
+ self.assert_(result == 2)
# abs
s1 = Series(date_range('20120101',periods=3))
@@ -2065,6 +2066,16 @@ def test_idxmin(self):
allna = self.series * nan
self.assert_(isnull(allna.idxmin()))
+ # datetime64[ns]
+ from pandas import date_range
+ s = Series(date_range('20130102',periods=6))
+ result = s.idxmin()
+ self.assert_(result == 0)
+
+ s[0] = np.nan
+ result = s.idxmin()
+ self.assert_(result == 1)
+
def test_idxmax(self):
# test idxmax
# _check_stat_op approach can not be used here because of isnull check.
@@ -2086,6 +2097,15 @@ def test_idxmax(self):
allna = self.series * nan
self.assert_(isnull(allna.idxmax()))
+ from pandas import date_range
+ s = Series(date_range('20130102',periods=6))
+ result = s.idxmax()
+ self.assert_(result == 5)
+
+ s[5] = np.nan
+ result = s.idxmax()
+ self.assert_(result == 4)
+
def test_operators_corner(self):
series = self.ts
| fixes GH #2982
- Bug in idxmin/idxmax of `datetime64[ns]` Series with `NaT`
- refactor in core/nanops.py to generically handle ops with
datetime64[ns]/timedelta64[ns]
- Series.argsort now always returns int64 dtype (and ignores the platform numpy dtype on ints);
this was failing a test, as indicated in a711db075d5835f45f04867752f3db64be359490
- minor doc update in io.rst (pytables)
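
The refactor above centralizes the mask/fill pattern that each nanop used to repeat. A minimal numpy-only sketch of the idea behind the `fill_value_typ='+inf'` path (function and variable names here are illustrative, not the actual pandas internals):

```python
import numpy as np

def nanmin_sketch(values):
    # Mask missing entries and fill them with +inf so they can never win
    # the min -- the same trick _get_values applies for _nanmin.
    mask = np.isnan(values)
    filled = values.copy()
    np.putmask(filled, mask, np.inf)
    result = filled.min()
    # an all-missing input should report NaN, not +inf
    return np.nan if mask.all() else result

print(nanmin_sketch(np.array([3.0, np.nan, 1.0])))  # 1.0
```

`_nanmax` is the mirror image with `-inf`, and `nanargmin`/`nanargmax` reuse the same helper with `isfinite=True` so datetime64/timedelta64 nulls are treated like NaN.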
| https://api.github.com/repos/pandas-dev/pandas/pulls/2985 | 2013-03-07T01:11:33Z | 2013-03-07T01:54:08Z | 2013-03-07T01:54:08Z | 2014-07-06T15:25:25Z |
ENH: add display.max_seq_items to limit len of pprinted long sequences | diff --git a/pandas/core/common.py b/pandas/core/common.py
index 03d43250f9265..3fb1f8bc08f5c 100644
--- a/pandas/core/common.py
+++ b/pandas/core/common.py
@@ -1651,8 +1651,13 @@ def _pprint_seq(seq, _nest_lvl=0, **kwds):
rather then calling this directly.
"""
fmt = u"[%s]" if hasattr(seq, '__setitem__') else u"(%s)"
- return fmt % ", ".join(pprint_thing(e, _nest_lvl + 1, **kwds)
- for e in seq[:len(seq)])
+
+ nitems = get_option("max_seq_items") or len(seq)
+ body = ", ".join(pprint_thing(e, _nest_lvl + 1, **kwds)
+ for e in seq[:nitems])
+ if nitems < len(seq):
+ body+= ", ..."
+ return fmt % body
def _pprint_dict(seq, _nest_lvl=0):
diff --git a/pandas/core/config_init.py b/pandas/core/config_init.py
index 114210d75959b..e4eeea53e1636 100644
--- a/pandas/core/config_init.py
+++ b/pandas/core/config_init.py
@@ -122,7 +122,15 @@
if set to a float value, all float values smaller then the given threshold
will be displayed as exactly 0 by repr and friends.
"""
+pc_max_seq_items = """
+: int or None
+ when pretty-printing a long sequence, no more than `max_seq_items`
+ will be printed. If items are omitted, they will be denoted by the addition
+ of "..." to the resulting string.
+
+ If set to None, the number of items to be printed is unlimited.
+"""
with cf.config_prefix('display'):
cf.register_option('precision', 7, pc_precision_doc, validator=is_int)
cf.register_option('float_format', None, float_format_doc)
@@ -149,6 +157,7 @@
cf.register_option('expand_frame_repr', True, pc_expand_repr_doc)
cf.register_option('line_width', 80, pc_line_width_doc)
cf.register_option('chop_threshold', None, pc_chop_threshold_doc)
+ cf.register_option('max_seq_items', None, pc_max_seq_items)
tc_sim_interactive_doc = """
: boolean
diff --git a/pandas/tests/test_common.py b/pandas/tests/test_common.py
index b3a0c5ee93699..7869d2627d581 100644
--- a/pandas/tests/test_common.py
+++ b/pandas/tests/test_common.py
@@ -26,6 +26,11 @@ def test_is_sequence():
assert(not is_seq(u"abcd"))
assert(not is_seq(np.int64))
+ class A(object):
+ def __getitem__(self):
+ return 1
+
+ assert(not is_seq(A()))
def test_notnull():
assert notnull(1.)
diff --git a/pandas/tests/test_format.py b/pandas/tests/test_format.py
index c31f4e3b8061d..c2a399b493d13 100644
--- a/pandas/tests/test_format.py
+++ b/pandas/tests/test_format.py
@@ -114,6 +114,15 @@ def test_repr_chop_threshold(self):
with option_context("display.chop_threshold", None ):
self.assertEqual(repr(df), ' 0 1\n0 0.1 0.5\n1 0.5 -0.1')
+ def test_repr_obeys_max_seq_limit(self):
+ import pandas.core.common as com
+
+ #unlimited
+ reset_option("display.max_seq_items")
+ self.assertTrue(len(com.pprint_thing(range(1000)))> 2000)
+
+ with option_context("display.max_seq_items",5):
+ self.assertTrue(len(com.pprint_thing(range(1000)))< 100)
def test_repr_should_return_str(self):
"""
| There's a circuit-breaker in higher layers of the display code,
but just in case.
``` python
In [2]: pd.options.display.max_seq_items=10
In [3]: pd.core.common.pprint_thing(range(10000))
Out[3]: u'[0, 1, 2, 3, 4, 5, 6, 7, 8, 9, ...]'
```
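
The truncation itself is a small amount of logic; a standalone plain-Python sketch of what the patched `_pprint_seq` does (illustrative, not the pandas function itself):

```python
def pprint_seq_sketch(seq, max_seq_items=None):
    # Show at most max_seq_items elements; mark any omission with "...".
    # max_seq_items=None means "unlimited", matching the option default.
    seq = list(seq)
    nitems = max_seq_items or len(seq)
    body = ", ".join(str(e) for e in seq[:nitems])
    if nitems < len(seq):
        body += ", ..."
    return "[%s]" % body

print(pprint_seq_sketch(range(10000), max_seq_items=10))
# [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, ...]
```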
| https://api.github.com/repos/pandas-dev/pandas/pulls/2979 | 2013-03-06T11:42:01Z | 2013-03-10T16:12:08Z | 2013-03-10T16:12:08Z | 2014-06-13T08:16:29Z |
BUG: Series.argsort failing on datetime64[ns] when NaT present, GH #2967 | diff --git a/RELEASE.rst b/RELEASE.rst
index 9553bb2d0d406..9657ed27d29da 100644
--- a/RELEASE.rst
+++ b/RELEASE.rst
@@ -125,6 +125,7 @@ pandas 0.11.0
- Support idxmin/idxmax in a Series (but with no NaT)
- Bug on in-place putmasking on an ``integer`` series that needs to be converted to ``float`` (GH2746_)
+ - Bug in argsort of ``datetime64[ns]`` Series with ``NaT`` (GH2967_)
.. _GH622: https://github.com/pydata/pandas/issues/622
.. _GH797: https://github.com/pydata/pandas/issues/797
@@ -145,6 +146,7 @@ pandas 0.11.0
.. _GH2909: https://github.com/pydata/pandas/issues/2909
.. _GH2931: https://github.com/pydata/pandas/issues/2931
.. _GH2973: https://github.com/pydata/pandas/issues/2973
+.. _GH2967: https://github.com/pydata/pandas/issues/2967
pandas 0.10.1
diff --git a/pandas/core/series.py b/pandas/core/series.py
index 72d31443213c7..fa6f8b84c8f29 100644
--- a/pandas/core/series.py
+++ b/pandas/core/series.py
@@ -2092,16 +2092,17 @@ def argsort(self, axis=0, kind='quicksort', order=None):
Returns
-------
- argsorted : Series
+ argsorted : Series, with -1 indicating where nan values are present
+
"""
values = self.values
mask = isnull(values)
if mask.any():
- result = values.copy()
+ result = Series(-1,index=self.index,name=self.name)
notmask = -mask
- result[notmask] = np.argsort(values[notmask], kind=kind)
- return Series(result, index=self.index, name=self.name)
+ result.values[notmask] = np.argsort(self.values[notmask], kind=kind)
+ return result
else:
return Series(np.argsort(values, kind=kind), index=self.index,
name=self.name)
diff --git a/pandas/tests/test_series.py b/pandas/tests/test_series.py
index f846a28ff78b7..075d5d5bb28bf 100644
--- a/pandas/tests/test_series.py
+++ b/pandas/tests/test_series.py
@@ -1404,6 +1404,21 @@ def test_argsort(self):
argsorted = self.ts.argsort()
self.assert_(issubclass(argsorted.dtype.type, np.integer))
+ # GH 2967 (introduced bug in 0.11-dev I think)
+ s = Series([Timestamp('201301%02d'% (i+1)) for i in range(5)])
+ self.assert_(s.dtype == 'datetime64[ns]')
+ shifted = s.shift(-1)
+ self.assert_(shifted.dtype == 'datetime64[ns]')
+ self.assert_(isnull(shifted[4]) == True)
+
+ result = s.argsort()
+ expected = Series(range(5),dtype='int64')
+ assert_series_equal(result,expected)
+
+ result = shifted.argsort()
+ expected = Series(range(4) + [-1],dtype='int64')
+ assert_series_equal(result,expected)
+
def test_argsort_stable(self):
s = Series(np.random.randint(0, 100, size=10000))
mindexer = s.argsort(kind='mergesort')
| should close GH #2967
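
A numpy-only sketch of the fixed behavior: instead of copying the values array (which keeps the datetime64 dtype and cannot hold sort positions), the fix starts from an int64 result filled with -1 and writes sort positions only where values are present (names illustrative, not the pandas code):

```python
import numpy as np

def argsort_with_missing(values):
    # Positions holding NaN/NaT get -1; valid positions get their sort
    # order within the non-missing subset, always as int64.
    mask = np.isnan(values)
    result = np.full(len(values), -1, dtype=np.int64)
    result[~mask] = np.argsort(values[~mask], kind='quicksort')
    return result

print(argsort_with_missing(np.array([3.0, np.nan, 1.0])))  # [ 1 -1  0]
```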
| https://api.github.com/repos/pandas-dev/pandas/pulls/2977 | 2013-03-06T02:34:43Z | 2013-03-06T12:37:41Z | 2013-03-06T12:37:41Z | 2014-07-05T13:27:51Z |
BUG: HDFStore didn't implement != correctly for string columns query, GH #2973 | diff --git a/RELEASE.rst b/RELEASE.rst
index e73d48fb59cd2..9553bb2d0d406 100644
--- a/RELEASE.rst
+++ b/RELEASE.rst
@@ -106,6 +106,8 @@ pandas 0.11.0
(e.g. store.df == store['df'])
- Internally, change all variables to be private-like (now have leading
underscore)
+ - fixes for query parsing to correctly interpret boolean and != (GH2849_, GH2973_)
+ - fixes for pathological case on SparseSeries with 0-len array and compression (GH2931_)
- Bug showing up in applymap where some object type columns are converted (GH2909_)
had an incorrect default in convert_objects
@@ -138,8 +140,11 @@ pandas 0.11.0
.. _GH2845: https://github.com/pydata/pandas/issues/2845
.. _GH2867: https://github.com/pydata/pandas/issues/2867
.. _GH2807: https://github.com/pydata/pandas/issues/2807
+.. _GH2849: https://github.com/pydata/pandas/issues/2849
.. _GH2898: https://github.com/pydata/pandas/issues/2898
.. _GH2909: https://github.com/pydata/pandas/issues/2909
+.. _GH2931: https://github.com/pydata/pandas/issues/2931
+.. _GH2973: https://github.com/pydata/pandas/issues/2973
pandas 0.10.1
diff --git a/pandas/core/panelnd.py b/pandas/core/panelnd.py
index cab11511e2cd5..ce9b43aabaa5b 100644
--- a/pandas/core/panelnd.py
+++ b/pandas/core/panelnd.py
@@ -105,7 +105,7 @@ def _combine_with_constructor(self, other, func):
klass._combine_with_constructor = _combine_with_constructor
# set as NonImplemented operations which we don't support
- for f in ['to_frame', 'to_excel', 'to_sparse', 'groupby', 'join', 'filter', 'dropna', 'shift', 'take']:
+ for f in ['to_frame', 'to_excel', 'to_sparse', 'groupby', 'join', 'filter', 'dropna', 'shift']:
def func(self, *args, **kwargs):
raise NotImplementedError
setattr(klass, f, func)
diff --git a/pandas/io/pytables.py b/pandas/io/pytables.py
index ac7ca152ffcee..c635c0b231c48 100644
--- a/pandas/io/pytables.py
+++ b/pandas/io/pytables.py
@@ -2296,7 +2296,7 @@ def process_axes(self, obj, columns=None):
# apply the selection filters (but keep in the same order)
if self.selection.filter:
- for field, filt in self.selection.filter:
+ for field, op, filt in self.selection.filter:
def process_filter(field, filt):
@@ -2306,9 +2306,8 @@ def process_filter(field, filt):
# see if the field is the name of an axis
if field == axis_name:
- ordd = axis_values & filt
- ordd = sorted(axis_values.get_indexer(ordd))
- return obj.reindex_axis(axis_values.take(ordd), axis=axis_number, copy=False)
+ takers = op(axis_values,filt)
+ return obj.ix._getitem_axis(takers,axis=axis_number)
# this might be the name of a file IN an axis
elif field in axis_values:
@@ -2320,7 +2319,8 @@ def process_filter(field, filt):
# hack until we support reversed dim flags
if isinstance(obj,DataFrame):
axis_number = 1-axis_number
- return obj.ix._getitem_axis(values.isin(filt),axis=axis_number)
+ takers = op(values,filt)
+ return obj.ix._getitem_axis(takers,axis=axis_number)
raise Exception("cannot find the field [%s] for filtering!" % field)
@@ -2969,7 +2969,7 @@ def __init__(self, field, op=None, value=None, queryables=None):
# backwards compatible
if isinstance(field, dict):
self.field = field.get('field')
- self.op = field.get('op') or '='
+ self.op = field.get('op') or '=='
self.value = field.get('value')
# passed a term
@@ -2996,7 +2996,7 @@ def __init__(self, field, op=None, value=None, queryables=None):
self.op = op
self.value = value
else:
- self.op = '='
+ self.op = '=='
self.value = op
else:
@@ -3008,8 +3008,8 @@ def __init__(self, field, op=None, value=None, queryables=None):
raise Exception("Could not create this term [%s]" % str(self))
# = vs ==
- if self.op == '==':
- self.op = '='
+ if self.op == '=':
+ self.op = '=='
# we have valid conditions
if self.op in ['>', '>=', '<', '<=']:
@@ -3055,22 +3055,29 @@ def eval(self):
values = [[v, v] for v in self.value]
# equality conditions
- if self.op in ['=', '!=']:
+ if self.op in ['==', '!=']:
+
+ # our filter op expression
+ if self.op == '!=':
+ filter_op = lambda axis, values: not axis.isin(values)
+ else:
+ filter_op = lambda axis, values: axis.isin(values)
+
if self.is_in_table:
# too many values to create the expression?
if len(values) <= self._max_selectors:
self.condition = "(%s)" % ' | '.join(
- ["(%s == %s)" % (self.field, v[0]) for v in values])
+ ["(%s %s %s)" % (self.field, self.op, v[0]) for v in values])
# use a filter after reading
else:
- self.filter = (self.field, Index([v[1] for v in values]))
+ self.filter = (self.field, filter_op, Index([v[1] for v in values]))
else:
- self.filter = (self.field, Index([v[1] for v in values]))
+ self.filter = (self.field, filter_op, Index([v[1] for v in values]))
else:
diff --git a/pandas/io/tests/test_pytables.py b/pandas/io/tests/test_pytables.py
index 986329a615665..4efe87fceebc0 100644
--- a/pandas/io/tests/test_pytables.py
+++ b/pandas/io/tests/test_pytables.py
@@ -1673,7 +1673,7 @@ def test_select_dtypes(self):
expected = df[df.ts >= Timestamp('2012-02-01')]
tm.assert_frame_equal(expected, result)
- # bool columns
+ # bool columns (GH #2849)
df = DataFrame(np.random.randn(5,2), columns =['A','B'])
df['object'] = 'foo'
df.ix[4:5,'object'] = 'bar'
@@ -1801,6 +1801,54 @@ def test_frame_select(self):
# self.assertRaises(Exception, store.select,
# 'frame', [crit1, crit2])
+ def test_string_select(self):
+
+ # GH 2973
+
+ df = tm.makeTimeDataFrame()
+
+ with ensure_clean(self.path) as store:
+
+
+ # test string ==/!=
+
+ df['x'] = 'none'
+ df.ix[2:7,'x'] = ''
+
+ store.append('df',df,data_columns=['x'])
+
+ result = store.select('df',Term('x=none'))
+ expected = df[df.x == 'none']
+ assert_frame_equal(result,expected)
+
+ result = store.select('df',Term('x!=none'))
+ expected = df[df.x != 'none']
+ assert_frame_equal(result,expected)
+
+ df2 = df.copy()
+ df2.x[df2.x==''] = np.nan
+
+ from pandas import isnull
+ store.append('df2',df2,data_columns=['x'])
+ result = store.select('df2',Term('x!=none'))
+ expected = df2[isnull(df2.x)]
+ assert_frame_equal(result,expected)
+
+ # int ==/!=
+ df['int'] = 1
+ df.ix[2:7,'int'] = 2
+
+ store.append('df3',df,data_columns=['int'])
+
+ result = store.select('df3',Term('int=2'))
+ expected = df[df.int==2]
+ assert_frame_equal(result,expected)
+
+ result = store.select('df3',Term('int!=2'))
+ expected = df[df.int!=2]
+ assert_frame_equal(result,expected)
+
+
def test_unique(self):
df = tm.makeTimeDataFrame()
| ref to GH #2973
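
The essence of the fix is that the filter tuple now carries the comparison as a callable, so `!=` becomes a negated membership test instead of collapsing into the same `isin` as `==`. A numpy-only sketch of that dispatch (illustrative names, not the HDFStore internals):

```python
import numpy as np

def make_filter_op(op):
    # '==' keeps rows whose value is in the target set; '!=' keeps the rest.
    # Before the fix, both comparisons fell through to a plain isin filter.
    if op == '!=':
        return lambda values, targets: ~np.isin(values, targets)
    return lambda values, targets: np.isin(values, targets)

x = np.array(['none', '', 'none', 'bar'])
print(make_filter_op('!=')(x, ['none']))  # [False  True False  True]
```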
| https://api.github.com/repos/pandas-dev/pandas/pulls/2976 | 2013-03-06T02:14:58Z | 2013-03-06T03:04:09Z | 2013-03-06T03:04:09Z | 2013-03-06T03:04:10Z |
BUG: handle single column frame in DataFrame.cov | diff --git a/pandas/core/frame.py b/pandas/core/frame.py
index c0449faf40368..b7675163a68e6 100644
--- a/pandas/core/frame.py
+++ b/pandas/core/frame.py
@@ -4559,6 +4559,7 @@ def cov(self, min_periods=None):
baseCov.fill(np.nan)
else:
baseCov = np.cov(mat.T)
+ baseCov = baseCov.reshape((len(cols),len(cols)))
else:
baseCov = _algos.nancorr(com._ensure_float64(mat), cov=True,
minp=min_periods)
diff --git a/pandas/tests/test_frame.py b/pandas/tests/test_frame.py
index 9329bb1da2b07..2d4c8d7409d4f 100644
--- a/pandas/tests/test_frame.py
+++ b/pandas/tests/test_frame.py
@@ -5049,6 +5049,18 @@ def test_cov(self):
expected = self.mixed_frame.ix[:, ['A', 'B', 'C', 'D']].cov()
assert_frame_equal(result, expected)
+ # Single column frame
+ df = DataFrame(np.linspace(0.0,1.0,10))
+ result = df.cov()
+ expected = DataFrame(np.cov(df.values.T).reshape((1,1)),
+ index=df.columns,columns=df.columns)
+ assert_frame_equal(result, expected)
+ df.ix[0] = np.nan
+ result = df.cov()
+ expected = DataFrame(np.cov(df.values[1:].T).reshape((1,1)),
+ index=df.columns,columns=df.columns)
+ assert_frame_equal(result, expected)
+
def test_corrwith(self):
a = self.tsframe
noise = Series(randn(len(a)), index=a.index)
| https://api.github.com/repos/pandas-dev/pandas/pulls/2975 | 2013-03-06T01:58:57Z | 2013-03-08T01:39:39Z | 2013-03-08T01:39:39Z | 2013-03-08T01:39:39Z | |
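
The one-line reshape in the cov fix above exists because `np.cov` returns a 0-d array for a single column, which cannot serve as a covariance matrix for the DataFrame constructor. A quick illustration:

```python
import numpy as np

col = np.linspace(0.0, 1.0, 10)
base_cov = np.cov(col.T)          # 0-d array when there is only one column
cov = base_cov.reshape((1, 1))    # the (n_cols, n_cols) shape the frame needs
print(base_cov.ndim, cov.shape)   # 0 (1, 1)
```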
BUG check both left and right indexers for None (fix 2843) | diff --git a/pandas/tools/merge.py b/pandas/tools/merge.py
index b62e307b1f2a8..88b25f160cdfd 100644
--- a/pandas/tools/merge.py
+++ b/pandas/tools/merge.py
@@ -209,23 +209,26 @@ def _maybe_add_join_keys(self, result, left_indexer, right_indexer):
if name in result:
key_col = result[name]
- if name in self.left and left_indexer is not None:
- na_indexer = (left_indexer == -1).nonzero()[0]
- if len(na_indexer) == 0:
- continue
-
- right_na_indexer = right_indexer.take(na_indexer)
- key_col.put(
- na_indexer, com.take_1d(self.right_join_keys[i],
- right_na_indexer))
- elif name in self.right and right_indexer is not None:
- na_indexer = (right_indexer == -1).nonzero()[0]
- if len(na_indexer) == 0:
- continue
-
- left_na_indexer = left_indexer.take(na_indexer)
- key_col.put(na_indexer, com.take_1d(self.left_join_keys[i],
- left_na_indexer))
+ if left_indexer is not None and right_indexer is not None:
+
+ if name in self.left:
+ na_indexer = (left_indexer == -1).nonzero()[0]
+ if len(na_indexer) == 0:
+ continue
+
+ right_na_indexer = right_indexer.take(na_indexer)
+ key_col.put(
+ na_indexer, com.take_1d(self.right_join_keys[i],
+ right_na_indexer))
+ elif name in self.right:
+ na_indexer = (right_indexer == -1).nonzero()[0]
+ if len(na_indexer) == 0:
+ continue
+
+ left_na_indexer = left_indexer.take(na_indexer)
+ key_col.put(na_indexer, com.take_1d(self.left_join_keys[i],
+ left_na_indexer))
+
elif left_indexer is not None:
if name is None:
name = 'key_%d' % i
diff --git a/pandas/tools/tests/test_merge.py b/pandas/tools/tests/test_merge.py
index d1c4710c16aad..d7abcadcb3778 100644
--- a/pandas/tools/tests/test_merge.py
+++ b/pandas/tools/tests/test_merge.py
@@ -717,6 +717,24 @@ def test_merge_nosort(self):
self.assert_((df.var3.unique() == result.var3.unique()).all())
+ def test_merge_nan_right(self):
+ df1 = DataFrame({"i1" : [0, 1], "i2" : [0, 1]})
+ df2 = DataFrame({"i1" : [0], "i3" : [0]})
+ result = df1.join(df2, on="i1", rsuffix="_")
+ expected = DataFrame({'i1': {0: 0.0, 1: 1}, 'i2': {0: 0, 1: 1},
+ 'i1_': {0: 0, 1: np.nan}, 'i3': {0: 0.0, 1: np.nan},
+ None: {0: 0, 1: 0}}).set_index(None).reset_index()[['i1', 'i2', 'i1_', 'i3']]
+ assert_frame_equal(result, expected, check_dtype=False)
+
+ df1 = DataFrame({"i1" : [0, 1], "i2" : [0.5, 1.5]})
+ df2 = DataFrame({"i1" : [0], "i3" : [0.7]})
+ result = df1.join(df2, rsuffix="_", on='i1')
+ expected = DataFrame({'i1': {0: 0, 1: 1}, 'i1_': {0: 0.0, 1: nan},
+ 'i2': {0: 0.5, 1: 1.5}, 'i3': {0: 0.69999999999999996,
+ 1: nan}})[['i1', 'i2', 'i1_', 'i3']]
+ assert_frame_equal(result, expected)
+
+
def test_overlapping_columns_error_message(self):
# #2649
df = DataFrame({'key': [1, 2, 3],
| Fixes #2843 and adds the two test cases suggested there.
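
When one side's indexer is None (a plain index join), the old branches could still dereference it via `.take`; the fix checks both indexers up front so key backfilling only runs for genuine left/right key joins. A toy numpy sketch of the backfill step itself (illustrative names, not the merge internals):

```python
import numpy as np

# -1 in a join indexer marks rows with no match on that side
left_indexer = np.array([0, 1])
right_indexer = np.array([0, -1])
right_keys = np.array([0])

# guard from the fix: only backfill key values when BOTH indexers exist
if left_indexer is not None and right_indexer is not None:
    na_positions = (right_indexer == -1).nonzero()[0]
    key_col = right_keys.take(right_indexer).astype(float)
    key_col[na_positions] = np.nan  # unmatched rows keep NaN, not a stale value

print(key_col)  # [ 0. nan]
```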
| https://api.github.com/repos/pandas-dev/pandas/pulls/2965 | 2013-03-04T22:33:05Z | 2013-03-07T01:34:46Z | 2013-03-07T01:34:45Z | 2014-06-21T22:19:56Z |
ENH Assert_frame_equal check_names to True | diff --git a/pandas/core/frame.py b/pandas/core/frame.py
index c0449faf40368..cf94a4fc7e081 100644
--- a/pandas/core/frame.py
+++ b/pandas/core/frame.py
@@ -2241,7 +2241,7 @@ def pop(self, item):
column : Series
"""
return NDFrame.pop(self, item)
-
+
# to support old APIs
@property
def _series(self):
diff --git a/pandas/core/generic.py b/pandas/core/generic.py
index 3ac3c8eef0a10..afe7f8775b1e9 100644
--- a/pandas/core/generic.py
+++ b/pandas/core/generic.py
@@ -340,7 +340,7 @@ def drop(self, labels, axis=0, level=None):
dropped : type of caller
"""
axis_name = self._get_axis_name(axis)
- axis = self._get_axis(axis)
+ axis, axis_ = self._get_axis(axis), axis
if axis.is_unique:
if level is not None:
@@ -349,8 +349,13 @@ def drop(self, labels, axis=0, level=None):
new_axis = axis.drop(labels, level=level)
else:
new_axis = axis.drop(labels)
+ dropped = self.reindex(**{axis_name: new_axis})
+ try:
+ dropped.axes[axis_].names = axis.names
+ except AttributeError:
+ pass
+ return dropped
- return self.reindex(**{axis_name: new_axis})
else:
if level is not None:
if not isinstance(axis, MultiIndex):
diff --git a/pandas/io/tests/test_excel.py b/pandas/io/tests/test_excel.py
index 556bcdb93477f..062ba0c5e3463 100644
--- a/pandas/io/tests/test_excel.py
+++ b/pandas/io/tests/test_excel.py
@@ -103,8 +103,8 @@ def test_parse_cols_int(self):
df2 = df2.reindex(columns=['A', 'B', 'C'])
df3 = xls.parse('Sheet2', skiprows=[1], index_col=0,
parse_dates=True, parse_cols=3)
- tm.assert_frame_equal(df, df2)
- tm.assert_frame_equal(df3, df2)
+ tm.assert_frame_equal(df, df2, check_names=False) # TODO add index to xls file
+ tm.assert_frame_equal(df3, df2, check_names=False)
def test_parse_cols_list(self):
_skip_if_no_openpyxl()
@@ -122,8 +122,8 @@ def test_parse_cols_list(self):
df3 = xls.parse('Sheet2', skiprows=[1], index_col=0,
parse_dates=True,
parse_cols=[0, 2, 3])
- tm.assert_frame_equal(df, df2)
- tm.assert_frame_equal(df3, df2)
+ tm.assert_frame_equal(df, df2, check_names=False) # TODO add index to xls file
+ tm.assert_frame_equal(df3, df2, check_names=False)
def test_parse_cols_str(self):
_skip_if_no_openpyxl()
@@ -142,8 +142,8 @@ def test_parse_cols_str(self):
df2 = df2.reindex(columns=['A', 'B', 'C'])
df3 = xls.parse('Sheet2', skiprows=[1], index_col=0,
parse_dates=True, parse_cols='A:D')
- tm.assert_frame_equal(df, df2)
- tm.assert_frame_equal(df3, df2)
+ tm.assert_frame_equal(df, df2, check_names=False) # TODO add index to xls, read xls ignores index name ?
+ tm.assert_frame_equal(df3, df2, check_names=False)
del df, df2, df3
df = xls.parse('Sheet1', index_col=0, parse_dates=True,
@@ -153,8 +153,8 @@ def test_parse_cols_str(self):
df3 = xls.parse('Sheet2', skiprows=[1], index_col=0,
parse_dates=True,
parse_cols='A,C,D')
- tm.assert_frame_equal(df, df2)
- tm.assert_frame_equal(df3, df2)
+ tm.assert_frame_equal(df, df2, check_names=False) # TODO add index to xls file
+ tm.assert_frame_equal(df3, df2, check_names=False)
del df, df2, df3
df = xls.parse('Sheet1', index_col=0, parse_dates=True,
@@ -164,8 +164,8 @@ def test_parse_cols_str(self):
df3 = xls.parse('Sheet2', skiprows=[1], index_col=0,
parse_dates=True,
parse_cols='A,C:D')
- tm.assert_frame_equal(df, df2)
- tm.assert_frame_equal(df3, df2)
+ tm.assert_frame_equal(df, df2, check_names=False)
+ tm.assert_frame_equal(df3, df2, check_names=False)
def test_excel_stop_iterator(self):
_skip_if_no_xlrd()
@@ -191,8 +191,8 @@ def test_excel_table(self):
df = xls.parse('Sheet1', index_col=0, parse_dates=True)
df2 = self.read_csv(self.csv1, index_col=0, parse_dates=True)
df3 = xls.parse('Sheet2', skiprows=[1], index_col=0, parse_dates=True)
- tm.assert_frame_equal(df, df2)
- tm.assert_frame_equal(df3, df2)
+ tm.assert_frame_equal(df, df2, check_names=False)
+ tm.assert_frame_equal(df3, df2, check_names=False)
df4 = xls.parse('Sheet1', index_col=0, parse_dates=True,
skipfooter=1)
@@ -224,8 +224,9 @@ def test_xlsx_table(self):
df = xlsx.parse('Sheet1', index_col=0, parse_dates=True)
df2 = self.read_csv(self.csv1, index_col=0, parse_dates=True)
df3 = xlsx.parse('Sheet2', skiprows=[1], index_col=0, parse_dates=True)
- tm.assert_frame_equal(df, df2)
- tm.assert_frame_equal(df3, df2)
+
+ tm.assert_frame_equal(df, df2, check_names=False) # TODO add index to xlsx file
+ tm.assert_frame_equal(df3, df2, check_names=False)
df4 = xlsx.parse('Sheet1', index_col=0, parse_dates=True,
skipfooter=1)
@@ -632,7 +633,9 @@ def _check_excel_multiindex_dates(self, ext):
tsframe.to_excel(path, 'test1', index_label=['time', 'foo'])
reader = ExcelFile(path)
recons = reader.parse('test1', index_col=[0, 1])
- tm.assert_frame_equal(tsframe, recons)
+
+ tm.assert_frame_equal(tsframe, recons, check_names=False)
+ self.assertEquals(recons.index.names, ['time', 'foo'])
# infer index
tsframe.to_excel(path, 'test1')
diff --git a/pandas/io/tests/test_parsers.py b/pandas/io/tests/test_parsers.py
index 3b0c9459ea2d0..85e96411dbba6 100644
--- a/pandas/io/tests/test_parsers.py
+++ b/pandas/io/tests/test_parsers.py
@@ -514,6 +514,7 @@ def test_skiprows_bug(self):
columns=[1, 2, 3],
index=[datetime(2000, 1, 1), datetime(2000, 1, 2),
datetime(2000, 1, 3)])
+ expected.index.name = 0
tm.assert_frame_equal(data, expected)
tm.assert_frame_equal(data, data2)
@@ -627,7 +628,7 @@ def test_yy_format(self):
idx = DatetimeIndex([datetime(2009, 1, 31, 0, 10, 0),
datetime(2009, 2, 28, 10, 20, 0),
datetime(2009, 3, 31, 8, 30, 0)]).asobject
- idx.name = 'date'
+ idx.name = 'date_time'
xp = DataFrame({'B': [1, 3, 5], 'C': [2, 4, 6]}, idx)
tm.assert_frame_equal(rs, xp)
@@ -636,7 +637,7 @@ def test_yy_format(self):
idx = DatetimeIndex([datetime(2009, 1, 31, 0, 10, 0),
datetime(2009, 2, 28, 10, 20, 0),
datetime(2009, 3, 31, 8, 30, 0)]).asobject
- idx.name = 'date'
+ idx.name = 'date_time'
xp = DataFrame({'B': [1, 3, 5], 'C': [2, 4, 6]}, idx)
tm.assert_frame_equal(rs, xp)
@@ -967,7 +968,7 @@ def test_multi_index_no_level_names(self):
df = self.read_csv(StringIO(no_header), index_col=[0, 1],
header=None, names=names)
expected = self.read_csv(StringIO(data), index_col=[0, 1])
- tm.assert_frame_equal(df, expected)
+ tm.assert_frame_equal(df, expected, check_names=False)
# 2 implicit first cols
df2 = self.read_csv(StringIO(data2))
@@ -977,7 +978,7 @@ def test_multi_index_no_level_names(self):
df = self.read_csv(StringIO(no_header), index_col=[1, 0], names=names,
header=None)
expected = self.read_csv(StringIO(data), index_col=[1, 0])
- tm.assert_frame_equal(df, expected)
+ tm.assert_frame_equal(df, expected, check_names=False)
def test_multi_index_parse_dates(self):
data = """index1,index2,A,B,C
@@ -1162,11 +1163,13 @@ def test_na_value_dict(self):
xp = DataFrame({'b': [np.nan], 'd': [5]},
MultiIndex.from_tuples([(0, 1)]))
+ xp.index.names = ['a', 'c']
df = self.read_csv(StringIO(data), na_values={}, index_col=[0, 2])
tm.assert_frame_equal(df, xp)
xp = DataFrame({'b': [np.nan], 'd': [5]},
MultiIndex.from_tuples([(0, 1)]))
+ xp.index.names = ['a', 'c']
df = self.read_csv(StringIO(data), na_values={}, index_col=['a', 'c'])
tm.assert_frame_equal(df, xp)
@@ -1249,7 +1252,7 @@ def test_multiple_date_cols_index(self):
tm.assert_frame_equal(df2, df)
df3 = self.read_csv(StringIO(data), parse_dates=[[1, 2]], index_col=0)
- tm.assert_frame_equal(df3, df)
+ tm.assert_frame_equal(df3, df, check_names=False)
def test_multiple_date_cols_chunked(self):
df = self.read_csv(StringIO(self.ts_data), parse_dates={
diff --git a/pandas/io/tests/test_sql.py b/pandas/io/tests/test_sql.py
index 3216a7dbbaa71..b443c55f97b8d 100644
--- a/pandas/io/tests/test_sql.py
+++ b/pandas/io/tests/test_sql.py
@@ -174,6 +174,7 @@ def _check_roundtrip(self, frame):
index_col='Idx')
expected = frame.copy()
expected.index = Index(range(len(frame2))) + 10
+ expected.index.name = 'Idx'
tm.assert_frame_equal(expected, result)
def test_tquery(self):
diff --git a/pandas/tests/test_frame.py b/pandas/tests/test_frame.py
index 9329bb1da2b07..970a587face38 100644
--- a/pandas/tests/test_frame.py
+++ b/pandas/tests/test_frame.py
@@ -133,6 +133,8 @@ def test_getitem_list(self):
result2 = self.frame[Index(['B', 'A'])]
expected = self.frame.ix[:, ['B', 'A']]
+ expected.columns.name = 'foo'
+
assert_frame_equal(result, expected)
assert_frame_equal(result2, expected)
@@ -1678,7 +1680,8 @@ def test_join_index_series(self):
df = self.frame.copy()
s = df.pop(self.frame.columns[-1])
joined = df.join(s)
- assert_frame_equal(joined, self.frame)
+
+ assert_frame_equal(joined, self.frame, check_names=False) # TODO should this check_names ?
s.name = None
self.assertRaises(Exception, df.join, s)
@@ -1896,7 +1899,7 @@ def test_set_index_pass_arrays(self):
# multiple columns
result = df.set_index(['A', df['B'].values], drop=False)
expected = df.set_index(['A', 'B'], drop=False)
- assert_frame_equal(result, expected)
+ assert_frame_equal(result, expected, check_names=False) # TODO should set_index check_names ?
def test_set_index_cast_datetimeindex(self):
df = DataFrame({'A': [datetime(2000, 1, 1) + timedelta(i)
@@ -3720,12 +3723,15 @@ def test_delitem(self):
self.assert_('A' not in self.frame)
def test_pop(self):
+ self.frame.columns.name = 'baz'
+
A = self.frame.pop('A')
self.assert_('A' not in self.frame)
self.frame['foo'] = 'bar'
foo = self.frame.pop('foo')
self.assert_('foo' not in self.frame)
+ # TODO self.assert_(self.frame.columns.name == 'baz')
def test_pop_non_unique_cols(self):
df = DataFrame({0: [0, 1], 1: [0, 1], 2: [4, 5]})
@@ -4409,7 +4415,7 @@ def test_to_csv_from_csv(self):
df.to_csv(path)
result = DataFrame.from_csv(path, index_col=[0, 1, 2],
parse_dates=False)
- assert_frame_equal(result, df)
+ assert_frame_equal(result, df, check_names=False) # TODO from_csv names index ['Unnamed: 1', 'Unnamed: 2'] should it ?
# column aliases
col_aliases = Index(['AA', 'X', 'Y', 'Z'])
@@ -4436,8 +4442,8 @@ def test_to_csv_from_csv_w_some_infs(self):
self.frame.to_csv(path)
recons = DataFrame.from_csv(path)
- assert_frame_equal(self.frame, recons)
- assert_frame_equal(np.isinf(self.frame), np.isinf(recons))
+ assert_frame_equal(self.frame, recons, check_names=False) # TODO to_csv drops column name
+ assert_frame_equal(np.isinf(self.frame), np.isinf(recons), check_names=False)
try:
os.remove(path)
@@ -4456,8 +4462,8 @@ def test_to_csv_from_csv_w_all_infs(self):
self.frame.to_csv(path)
recons = DataFrame.from_csv(path)
- assert_frame_equal(self.frame, recons)
- assert_frame_equal(np.isinf(self.frame), np.isinf(recons))
+ assert_frame_equal(self.frame, recons, check_names=False) # TODO to_csv drops column name
+ assert_frame_equal(np.isinf(self.frame), np.isinf(recons), check_names=False)
os.remove(path)
@@ -4476,7 +4482,7 @@ def test_to_csv_multiindex(self):
frame.to_csv(path)
df = DataFrame.from_csv(path, index_col=[0, 1], parse_dates=False)
- assert_frame_equal(frame, df)
+ assert_frame_equal(frame, df, check_names=False) # TODO to_csv drops column name
self.assertEqual(frame.index.names, df.index.names)
self.frame.index = old_index # needed if setUP becomes a classmethod
@@ -4488,7 +4494,7 @@ def test_to_csv_multiindex(self):
tsframe.to_csv(path, index_label=['time', 'foo'])
recons = DataFrame.from_csv(path, index_col=[0, 1])
- assert_frame_equal(tsframe, recons)
+ assert_frame_equal(tsframe, recons, check_names=False) # TODO to_csv drops column name
# do not load index
tsframe.to_csv(path)
@@ -4545,7 +4551,7 @@ def test_to_csv_bug(self):
newdf.to_csv(path)
recons = pan.read_csv(path, index_col=0)
- assert_frame_equal(recons, newdf)
+ assert_frame_equal(recons, newdf, check_names=False) # don't check_names as t != 1
os.remove(path)
@@ -4581,7 +4587,7 @@ def test_to_csv_stringio(self):
self.frame.to_csv(buf)
buf.seek(0)
recons = pan.read_csv(buf, index_col=0)
- assert_frame_equal(recons, self.frame)
+ assert_frame_equal(recons, self.frame, check_names=False) # TODO to_csv drops column name
def test_to_csv_float_format(self):
filename = '__tmp_to_csv_float_format__.csv'
@@ -5103,6 +5109,16 @@ def test_corrwith_series(self):
assert_series_equal(result, expected)
+ def test_drop_names(self):
+ df = DataFrame([[1, 2, 3],[3, 4, 5],[5, 6, 7]], index=['a', 'b', 'c'], columns=['d', 'e', 'f'])
+ df.index.name, df.columns.name = 'first', 'second'
+ df_dropped_b = df.drop('b')
+ df_dropped_e = df.drop('e', axis=1)
+ self.assert_(df_dropped_b.index.name == 'first')
+ self.assert_(df_dropped_e.index.name == 'first')
+ self.assert_(df_dropped_b.columns.name == 'second')
+ self.assert_(df_dropped_e.columns.name == 'second')
+
def test_dropEmptyRows(self):
N = len(self.frame.index)
mat = randn(N)
@@ -5857,6 +5873,7 @@ def test_pivot(self):
'One': {'A': 1., 'B': 2., 'C': 3.},
'Two': {'A': 1., 'B': 2., 'C': 3.}
})
+ expected.index.name, expected.columns.name = 'index', 'columns'
assert_frame_equal(pivoted, expected)
@@ -5885,7 +5902,7 @@ def test_pivot_empty(self):
df = DataFrame({}, columns=['a', 'b', 'c'])
result = df.pivot('a', 'b', 'c')
expected = DataFrame({})
- assert_frame_equal(result, expected)
+ assert_frame_equal(result, expected, check_names=False)
def test_pivot_integer_bug(self):
df = DataFrame(data=[("A", "1", "A1"), ("B", "2", "B2")])
@@ -6391,10 +6408,9 @@ def test_rename(self):
assert_frame_equal(renamed, renamed2)
assert_frame_equal(renamed2.rename(columns=str.upper),
- self.frame)
+ self.frame, check_names=False)
# index
-
data = {
'A': {'foo': 0, 'bar': 1}
}
@@ -6943,7 +6959,8 @@ def test_select(self):
result = self.frame.select(lambda x: x in ('B', 'D'), axis=1)
expected = self.frame.reindex(columns=['B', 'D'])
- assert_frame_equal(result, expected)
+
+ assert_frame_equal(result, expected, check_names=False) # TODO should reindex check_names?
def test_sort_index(self):
frame = DataFrame(np.random.randn(4, 4), index=[1, 2, 3, 4],
@@ -8228,30 +8245,31 @@ def test_reset_index(self):
# only remove certain columns
frame = self.frame.reset_index().set_index(['index', 'A', 'B'])
rs = frame.reset_index(['A', 'B'])
- assert_frame_equal(rs, self.frame)
+
+ assert_frame_equal(rs, self.frame, check_names=False) # TODO should reset_index check_names ?
rs = frame.reset_index(['index', 'A', 'B'])
- assert_frame_equal(rs, self.frame.reset_index())
+ assert_frame_equal(rs, self.frame.reset_index(), check_names=False)
rs = frame.reset_index(['index', 'A', 'B'])
- assert_frame_equal(rs, self.frame.reset_index())
+ assert_frame_equal(rs, self.frame.reset_index(), check_names=False)
rs = frame.reset_index('A')
xp = self.frame.reset_index().set_index(['index', 'B'])
- assert_frame_equal(rs, xp)
+ assert_frame_equal(rs, xp, check_names=False)
# test resetting in place
df = self.frame.copy()
resetted = self.frame.reset_index()
df.reset_index(inplace=True)
- assert_frame_equal(df, resetted)
+ assert_frame_equal(df, resetted, check_names=False)
frame = self.frame.reset_index().set_index(['index', 'A', 'B'])
rs = frame.reset_index('A', drop=True)
xp = self.frame.copy()
del xp['A']
xp = xp.set_index(['B'], append=True)
- assert_frame_equal(rs, xp)
+ assert_frame_equal(rs, xp, check_names=False)
def test_reset_index_right_dtype(self):
time = np.arange(0.0, 10, np.sqrt(2) / 2)
diff --git a/pandas/tests/test_groupby.py b/pandas/tests/test_groupby.py
index 54d29263b2308..4dde7eeea98ce 100644
--- a/pandas/tests/test_groupby.py
+++ b/pandas/tests/test_groupby.py
@@ -140,17 +140,17 @@ def test_first_last_nth(self):
first = grouped.first()
expected = self.df.ix[[1, 0], ['B', 'C', 'D']]
expected.index = ['bar', 'foo']
- assert_frame_equal(first, expected)
+ assert_frame_equal(first, expected, check_names=False)
last = grouped.last()
expected = self.df.ix[[5, 7], ['B', 'C', 'D']]
expected.index = ['bar', 'foo']
- assert_frame_equal(last, expected)
+ assert_frame_equal(last, expected, check_names=False)
nth = grouped.nth(1)
expected = self.df.ix[[3, 2], ['B', 'C', 'D']]
expected.index = ['bar', 'foo']
- assert_frame_equal(nth, expected)
+ assert_frame_equal(nth, expected, check_names=False)
# it works!
grouped['B'].first()
@@ -169,17 +169,17 @@ def test_first_last_nth_dtypes(self):
first = grouped.first()
expected = self.df_mixed_floats.ix[[1, 0], ['B', 'C', 'D']]
expected.index = ['bar', 'foo']
- assert_frame_equal(first, expected)
+ assert_frame_equal(first, expected, check_names=False)
last = grouped.last()
expected = self.df_mixed_floats.ix[[5, 7], ['B', 'C', 'D']]
expected.index = ['bar', 'foo']
- assert_frame_equal(last, expected)
+ assert_frame_equal(last, expected, check_names=False)
nth = grouped.nth(1)
expected = self.df_mixed_floats.ix[[3, 2], ['B', 'C', 'D']]
expected.index = ['bar', 'foo']
- assert_frame_equal(nth, expected)
+ assert_frame_equal(nth, expected, check_names=False)
def test_grouper_iter(self):
self.assertEqual(sorted(self.df.groupby('A').grouper), ['bar', 'foo'])
@@ -290,8 +290,8 @@ def test_agg_apply_corner(self):
# DataFrame
grouped = self.tsframe.groupby(self.tsframe['A'] * np.nan)
exp_df = DataFrame(columns=self.tsframe.columns, dtype=float)
- assert_frame_equal(grouped.sum(), exp_df)
- assert_frame_equal(grouped.agg(np.sum), exp_df)
+ assert_frame_equal(grouped.sum(), exp_df, check_names=False)
+ assert_frame_equal(grouped.agg(np.sum), exp_df, check_names=False)
assert_frame_equal(grouped.apply(np.sum), DataFrame({}, dtype=float))
def test_agg_grouping_is_list_tuple(self):
@@ -629,7 +629,7 @@ def test_frame_groupby(self):
tscopy = self.tsframe.copy()
tscopy['weekday'] = [x.weekday() for x in tscopy.index]
stragged = tscopy.groupby('weekday').aggregate(np.mean)
- assert_frame_equal(stragged, aggregated)
+ assert_frame_equal(stragged, aggregated, check_names=False)
# transform
transformed = grouped.transform(lambda x: x - x.mean())
@@ -785,7 +785,8 @@ def test_multi_func(self):
agged = grouped.mean()
expected = self.df.groupby(['A', 'B']).mean()
assert_frame_equal(agged.ix[:, ['C', 'D']],
- expected.ix[:, ['C', 'D']])
+ expected.ix[:, ['C', 'D']],
+ check_names=False) # TODO groupby get drops names
# some "groups" with no data
df = DataFrame({'v1': np.random.randn(6),
@@ -843,6 +844,7 @@ def _check_op(op):
expected[n1][n2] = op(gp2.ix[:, ['C', 'D']])
expected = dict((k, DataFrame(v)) for k, v in expected.iteritems())
expected = Panel.fromDict(expected).swapaxes(0, 1)
+ expected.major_axis.name, expected.minor_axis.name = 'A', 'B'
# a little bit crude
for col in ['C', 'D']:
@@ -1105,6 +1107,7 @@ def _testit(op):
for cat, group in grouped:
exp[cat] = op(group['C'])
exp = DataFrame({'C': exp})
+ exp.index.name = 'A'
result = op(grouped)
assert_frame_equal(result, exp)
@@ -1259,6 +1262,7 @@ def test_groupby_level_mapper(self):
mapped_level1 = np.array([mapper1.get(x) for x in deleveled['second']])
expected0 = frame.groupby(mapped_level0).sum()
expected1 = frame.groupby(mapped_level1).sum()
+ expected0.index.name, expected1.index.name = 'first', 'second'
assert_frame_equal(result0, expected0)
assert_frame_equal(result1, expected1)
@@ -1483,7 +1487,7 @@ def test_grouping_ndarray(self):
result = grouped.sum()
expected = self.df.groupby('A').sum()
- assert_frame_equal(result, expected)
+ assert_frame_equal(result, expected, check_names=False) # Note: no names when grouping by value
def test_apply_typecast_fail(self):
df = DataFrame({'d': [1., 1., 1., 2., 2., 2.],
@@ -1663,7 +1667,7 @@ def convert_force_pure(x):
def test_groupby_list_infer_array_like(self):
result = self.df.groupby(list(self.df['A'])).mean()
expected = self.df.groupby(self.df['A']).mean()
- assert_frame_equal(result, expected)
+ assert_frame_equal(result, expected, check_names=False)
self.assertRaises(Exception, self.df.groupby, list(self.df['A'][:-1]))
@@ -2099,6 +2103,7 @@ def test_groupby_categorical(self):
expected = data.groupby(np.asarray(cats)).mean()
expected = expected.reindex(levels)
+ expected.index.name = 'myfactor'
assert_frame_equal(result, expected)
self.assert_(result.index.name == cats.name)
@@ -2110,6 +2115,7 @@ def test_groupby_categorical(self):
ord_labels = np.asarray(cats).take(idx)
ord_data = data.take(idx)
expected = ord_data.groupby(ord_labels, sort=False).describe()
+ expected.index.names = ['myfactor', None]
assert_frame_equal(desc_result, expected)
def test_groupby_groups_datetimeindex(self):
diff --git a/pandas/tests/test_multilevel.py b/pandas/tests/test_multilevel.py
index ac9912e5c59eb..99c081c0cc6cb 100644
--- a/pandas/tests/test_multilevel.py
+++ b/pandas/tests/test_multilevel.py
@@ -1015,7 +1015,7 @@ def test_join(self):
self.assert_(not np.isnan(joined.values).all())
- assert_frame_equal(joined, expected)
+ assert_frame_equal(joined, expected, check_names=False) # TODO what should join do with names ?
def test_swaplevel(self):
swapped = self.frame['A'].swaplevel(0, 1)
@@ -1276,7 +1276,7 @@ def test_groupby_multilevel(self):
expected = self.ymd.groupby([k1, k2]).mean()
- assert_frame_equal(result, expected)
+ assert_frame_equal(result, expected, check_names=False) # TODO groupby with level_values drops names
self.assertEquals(result.index.names, self.ymd.index.names[:2])
result2 = self.ymd.groupby(level=self.ymd.index.names[:2]).mean()
diff --git a/pandas/tests/test_panel.py b/pandas/tests/test_panel.py
index da7a0f68b3eb4..b3aa38b9a972e 100644
--- a/pandas/tests/test_panel.py
+++ b/pandas/tests/test_panel.py
@@ -1562,17 +1562,17 @@ def test_truncate(self):
trunced = self.panel.truncate(start, end).to_panel()
expected = self.panel.to_panel()['ItemA'].truncate(start, end)
- assert_frame_equal(trunced['ItemA'], expected)
+ assert_frame_equal(trunced['ItemA'], expected, check_names=False) # TODO trucate drops index.names
trunced = self.panel.truncate(before=start).to_panel()
expected = self.panel.to_panel()['ItemA'].truncate(before=start)
- assert_frame_equal(trunced['ItemA'], expected)
+ assert_frame_equal(trunced['ItemA'], expected, check_names=False) # TODO trucate drops index.names
trunced = self.panel.truncate(after=end).to_panel()
expected = self.panel.to_panel()['ItemA'].truncate(after=end)
- assert_frame_equal(trunced['ItemA'], expected)
+ assert_frame_equal(trunced['ItemA'], expected, check_names=False) # TODO trucate drops index.names
# truncate on dates that aren't in there
wp = self.panel.to_panel()
diff --git a/pandas/tools/tests/test_merge.py b/pandas/tools/tests/test_merge.py
index d1c4710c16aad..2ac67a5f96433 100644
--- a/pandas/tools/tests/test_merge.py
+++ b/pandas/tools/tests/test_merge.py
@@ -357,6 +357,7 @@ def test_join_multiindex(self):
joined = df1.join(df2, how='outer')
ex_index = index1._tuple_index + index2._tuple_index
expected = df1.reindex(ex_index).join(df2.reindex(ex_index))
+ expected.index.names = index1.names
assert_frame_equal(joined, expected)
self.assertEqual(joined.index.names, index1.names)
@@ -366,6 +367,7 @@ def test_join_multiindex(self):
joined = df1.join(df2, how='outer').sortlevel(0)
ex_index = index1._tuple_index + index2._tuple_index
expected = df1.reindex(ex_index).join(df2.reindex(ex_index))
+ expected.index.names = index1.names
assert_frame_equal(joined, expected)
self.assertEqual(joined.index.names, index1.names)
@@ -739,7 +741,7 @@ def _check_merge(x, y):
sort=True)
expected = expected.set_index('index')
- assert_frame_equal(result, expected)
+ assert_frame_equal(result, expected, check_names=False) # TODO check_names on merge?
class TestMergeMulti(unittest.TestCase):
diff --git a/pandas/tools/tests/test_pivot.py b/pandas/tools/tests/test_pivot.py
index 5eb3445df0d4a..0e50d606d6e7e 100644
--- a/pandas/tools/tests/test_pivot.py
+++ b/pandas/tools/tests/test_pivot.py
@@ -158,7 +158,7 @@ def test_pivot_integer_columns(self):
df2 = df.rename(columns=str)
table2 = df2.pivot_table(values='4', rows=['0', '1', '3'], cols=['2'])
- tm.assert_frame_equal(table, table2)
+ tm.assert_frame_equal(table, table2, check_names=False)
def test_pivot_no_level_overlap(self):
# GH #1181
diff --git a/pandas/tseries/tests/test_timeseries.py b/pandas/tseries/tests/test_timeseries.py
index 215aa32b8bbb1..1775847cbdbfb 100644
--- a/pandas/tseries/tests/test_timeseries.py
+++ b/pandas/tseries/tests/test_timeseries.py
@@ -2348,7 +2348,7 @@ def test_dti_reset_index_round_trip(self):
d2 = d1.reset_index()
self.assert_(d2.dtypes[0] == np.dtype('M8[ns]'))
d3 = d2.set_index('index')
- assert_frame_equal(d1, d3)
+ assert_frame_equal(d1, d3, check_names=False)
# #2329
stamp = datetime(2012, 11, 22)
diff --git a/pandas/util/testing.py b/pandas/util/testing.py
index 97059d9aaf9f5..758629a4293b2 100644
--- a/pandas/util/testing.py
+++ b/pandas/util/testing.py
@@ -178,7 +178,8 @@ def assert_frame_equal(left, right, check_dtype=True,
check_index_type=False,
check_column_type=False,
check_frame_type=False,
- check_less_precise=False):
+ check_less_precise=False,
+ check_names=True):
if check_frame_type:
assert(type(left) == type(right))
assert(isinstance(left, DataFrame))
@@ -204,6 +205,9 @@ def assert_frame_equal(left, right, check_dtype=True,
assert(type(left.columns) == type(right.columns))
assert(left.columns.dtype == right.columns.dtype)
assert(left.columns.inferred_type == right.columns.inferred_type)
+ if check_names:
+ assert(left.index.names == right.index.names)
+ assert(left.columns.names == right.columns.names)
def assert_panel_equal(left, right,
@@ -218,7 +222,7 @@ def assert_panel_equal(left, right,
for col, series in left.iterkv():
assert(col in right)
- assert_frame_equal(series, right[col], check_less_precise=check_less_precise)
+ assert_frame_equal(series, right[col], check_less_precise=check_less_precise, check_names=False) # TODO strangely check_names fails in py3 ?
for col in right:
assert(col in left)
| This is a follow up to #2962, where I added a `check_names=False` argument to `assert_frame_equal`, this defaults it to `True`. That is, when asserting two DataFrames are equal, it compares `.columns.names` and `.index.names`, unless explicitly passing `check_names=False`.
I've added in `check_names=False` where it wouldn't make sense to check names. I've added a TODO comment (and `check_names=False`) where I think it probably _should_ work but isn't... This was in the following cases:
```
to_csv
reindex
reset_index
groupby-a-get
pop
join (dataframe to a series)
from_xls (when setting index as first column)
```
i.e. these appear to drop column names (`df.columns.names`). Whether this behaviour is correct / desired in these cases, I'm not sure, but here they all are. :)
_Note: By changing the A1 cells to 'index', I had all but one test (labelled "read xls ignores index name ?") in `test_excel.py` passing locally (with `check_names=True`); however, Travis was less convinced, throwing:_
```
InvalidFileException: "There is no item named 'xl/theme/theme1.xml' in the archive"
```
_so I reverted to the previous xls and xlsx, and added back in `check_names=False`. I suspect LibreOffice wasn't saving it properly...?_
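The behaviour this PR locks in can be sketched against the modern `pandas.testing` API (function location and defaults as in current pandas, rather than the 2013-era `pandas.util.testing` shown in the diff):

```python
import pandas as pd
from pandas.testing import assert_frame_equal

left = pd.DataFrame({"A": [1, 2]})
right = pd.DataFrame({"A": [1, 2]})
right.index.name = "idx"  # same values, different index name

# With check_names=False the name mismatch is ignored.
assert_frame_equal(left, right, check_names=False)

# With the default (check_names=True), the differing index
# names make the assertion fail even though the values match.
try:
    assert_frame_equal(left, right)
    raised = False
except AssertionError:
    raised = True
assert raised
```

This mirrors the pattern used throughout the test changes above: pass `check_names=False` only where comparing axis names is known not to make sense.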
| https://api.github.com/repos/pandas-dev/pandas/pulls/2964 | 2013-03-04T11:44:25Z | 2013-03-07T07:22:41Z | 2013-03-07T07:22:41Z | 2014-06-18T17:28:59Z |
BUG keep name after DataFrame drop, add check_name to assert_dataframe_equal | diff --git a/pandas/core/generic.py b/pandas/core/generic.py
index 3ac3c8eef0a10..afe7f8775b1e9 100644
--- a/pandas/core/generic.py
+++ b/pandas/core/generic.py
@@ -340,7 +340,7 @@ def drop(self, labels, axis=0, level=None):
dropped : type of caller
"""
axis_name = self._get_axis_name(axis)
- axis = self._get_axis(axis)
+ axis, axis_ = self._get_axis(axis), axis
if axis.is_unique:
if level is not None:
@@ -349,8 +349,13 @@ def drop(self, labels, axis=0, level=None):
new_axis = axis.drop(labels, level=level)
else:
new_axis = axis.drop(labels)
+ dropped = self.reindex(**{axis_name: new_axis})
+ try:
+ dropped.axes[axis_].names = axis.names
+ except AttributeError:
+ pass
+ return dropped
- return self.reindex(**{axis_name: new_axis})
else:
if level is not None:
if not isinstance(axis, MultiIndex):
diff --git a/pandas/tests/test_frame.py b/pandas/tests/test_frame.py
index 9329bb1da2b07..69c8bdc60be13 100644
--- a/pandas/tests/test_frame.py
+++ b/pandas/tests/test_frame.py
@@ -5103,6 +5103,16 @@ def test_corrwith_series(self):
assert_series_equal(result, expected)
+ def test_drop_names(self):
+ df = DataFrame([[1, 2, 3],[3, 4, 5],[5, 6, 7]], index=['a', 'b', 'c'], columns=['d', 'e', 'f'])
+ df.index.name, df.columns.name = 'first', 'second'
+ df_dropped_b = df.drop('b')
+ df_dropped_e = df.drop('e', axis=1)
+ self.assert_(df_dropped_b.index.name == 'first')
+ self.assert_(df_dropped_e.index.name == 'first')
+ self.assert_(df_dropped_b.columns.name == 'second')
+ self.assert_(df_dropped_e.columns.name == 'second')
+
def test_dropEmptyRows(self):
N = len(self.frame.index)
mat = randn(N)
diff --git a/pandas/util/testing.py b/pandas/util/testing.py
index 97059d9aaf9f5..fddbf405b3f26 100644
--- a/pandas/util/testing.py
+++ b/pandas/util/testing.py
@@ -178,7 +178,8 @@ def assert_frame_equal(left, right, check_dtype=True,
check_index_type=False,
check_column_type=False,
check_frame_type=False,
- check_less_precise=False):
+ check_less_precise=False,
+ check_names=False):
if check_frame_type:
assert(type(left) == type(right))
assert(isinstance(left, DataFrame))
@@ -204,6 +205,9 @@ def assert_frame_equal(left, right, check_dtype=True,
assert(type(left.columns) == type(right.columns))
assert(left.columns.dtype == right.columns.dtype)
assert(left.columns.inferred_type == right.columns.inferred_type)
+ if check_names:
+ assert(left.index.names == right.index.names)
+ assert(left.columns.names == right.columns.names)
def assert_panel_equal(left, right,
| This should fix #2939. (redo of pull-request #2961, which I screwed up).
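A minimal sketch of the behaviour this patch restores, written against the modern pandas API (where `DataFrame.drop` preserves both axis names out of the box):

```python
import pandas as pd

df = pd.DataFrame([[1, 2, 3], [3, 4, 5], [5, 6, 7]],
                  index=["a", "b", "c"], columns=["d", "e", "f"])
df.index.name, df.columns.name = "first", "second"

# Dropping a row or a column should keep both axis names intact.
dropped_row = df.drop("b")
dropped_col = df.drop("e", axis=1)

assert dropped_row.index.name == "first"
assert dropped_row.columns.name == "second"
assert dropped_col.index.name == "first"
assert dropped_col.columns.name == "second"
```

This is essentially the `test_drop_names` case added in the diff above, minus the unittest scaffolding.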
| https://api.github.com/repos/pandas-dev/pandas/pulls/2962 | 2013-03-03T12:01:15Z | 2013-03-07T01:35:06Z | 2013-03-07T01:35:06Z | 2013-03-07T01:35:06Z |
BUG: changed Dtype to dtype on series displays | diff --git a/pandas/core/format.py b/pandas/core/format.py
index 1db0e0b41b3e3..003b1fefd01f7 100644
--- a/pandas/core/format.py
+++ b/pandas/core/format.py
@@ -103,7 +103,7 @@ def _get_footer(self):
if getattr(self.series.dtype,'name',None):
if footer:
footer += ', '
- footer += 'Dtype: %s' % com.pprint_thing(self.series.dtype.name)
+ footer += 'dtype: %s' % com.pprint_thing(self.series.dtype.name)
return unicode(footer)
diff --git a/pandas/core/series.py b/pandas/core/series.py
index 726216dde2269..72d31443213c7 100644
--- a/pandas/core/series.py
+++ b/pandas/core/series.py
@@ -3305,7 +3305,7 @@ def _repr_footer(self):
namestr = "Name: %s, " % str(
self.name) if self.name is not None else ""
- return '%s%sLength: %d, Dtype: %s' % (freqstr, namestr, len(self),
+ return '%s%sLength: %d, dtype: %s' % (freqstr, namestr, len(self),
com.pprint_thing(self.dtype.name))
def to_timestamp(self, freq=None, how='start', copy=True):
diff --git a/pandas/tests/test_format.py b/pandas/tests/test_format.py
index 177ef3f9e3371..dbe697927b0b8 100644
--- a/pandas/tests/test_format.py
+++ b/pandas/tests/test_format.py
@@ -155,7 +155,7 @@ def test_to_string_repr_unicode(self):
line = line.decode(get_option("display.encoding"))
except:
pass
- if not line.startswith('Dtype:'):
+ if not line.startswith('dtype:'):
self.assert_(len(line) == line_len)
# it works even if sys.stdin in None
@@ -1081,7 +1081,7 @@ def test_float_trim_zeros(self):
2.03954217305e+10, 5.59897817305e+10]
skip = True
for line in repr(DataFrame({'A': vals})).split('\n'):
- if line.startswith('Dtype:'):
+ if line.startswith('dtype:'):
continue
if _three_digit_exp():
self.assert_(('+010' in line) or skip)
@@ -1143,7 +1143,7 @@ def test_to_string(self):
cp.name = 'foo'
result = cp.to_string(length=True, name=True, dtype=True)
last_line = result.split('\n')[-1].strip()
- self.assertEqual(last_line, "Freq: B, Name: foo, Length: %d, Dtype: float64" % len(cp))
+ self.assertEqual(last_line, "Freq: B, Name: foo, Length: %d, dtype: float64" % len(cp))
def test_freq_name_separation(self):
s = Series(np.random.randn(10),
@@ -1199,7 +1199,7 @@ def test_float_trim_zeros(self):
vals = [2.08430917305e+10, 3.52205017305e+10, 2.30674817305e+10,
2.03954217305e+10, 5.59897817305e+10]
for line in repr(Series(vals)).split('\n'):
- if line.startswith('Dtype:'):
+ if line.startswith('dtype:'):
continue
if _three_digit_exp():
self.assert_('+010' in line)
diff --git a/pandas/tests/test_series.py b/pandas/tests/test_series.py
index 28572e2eae015..f846a28ff78b7 100644
--- a/pandas/tests/test_series.py
+++ b/pandas/tests/test_series.py
@@ -142,7 +142,7 @@ def test_multilevel_name_print(self):
"qux one 7",
" two 8",
" three 9",
- "Name: sth, Dtype: int64"]
+ "Name: sth, dtype: int64"]
expected = "\n".join(expected)
self.assertEquals(repr(s), expected)
diff --git a/pandas/tseries/tests/test_timeseries.py b/pandas/tseries/tests/test_timeseries.py
index 2b42fc5c42542..215aa32b8bbb1 100644
--- a/pandas/tseries/tests/test_timeseries.py
+++ b/pandas/tseries/tests/test_timeseries.py
@@ -568,7 +568,7 @@ def test_series_repr_nat(self):
'1 1970-01-01 00:00:00.000001\n'
'2 1970-01-01 00:00:00.000002\n'
'3 NaT\n'
- 'Dtype: datetime64[ns]')
+ 'dtype: datetime64[ns]')
self.assertEquals(result, expected)
def test_fillna_nat(self):
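The cosmetic change above ("Dtype:" to "dtype:" in the repr footer) survives in today's pandas; a quick sanity check against the modern API:

```python
import pandas as pd

s = pd.Series([1.0, 2.0, 3.0], name="foo")
footer = repr(s).splitlines()[-1]

# The footer reads e.g. "Name: foo, dtype: float64" -- lowercase "dtype".
assert "dtype: float64" in footer
assert "Dtype" not in footer
```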
| https://api.github.com/repos/pandas-dev/pandas/pulls/2959 | 2013-03-01T23:19:26Z | 2013-03-02T00:25:53Z | 2013-03-02T00:25:53Z | 2014-07-29T18:52:38Z | |
BUG: fixed .abs on Series with a timedelta (partial fix for 2957) | diff --git a/RELEASE.rst b/RELEASE.rst
index e73d48fb59cd2..1079fc42aa5c2 100644
--- a/RELEASE.rst
+++ b/RELEASE.rst
@@ -120,7 +120,7 @@ pandas 0.11.0
- Support null checking on timedelta64, representing (and formatting) with NaT
- Support setitem with np.nan value, converts to NaT
- Support min/max ops in a Dataframe (abs not working, nor do we error on non-supported ops)
- - Support idxmin/idxmax in a Series (but with no NaT)
+ - Support idxmin/idxmax/abs in a Series (but with no NaT)
- Bug on in-place putmasking on an ``integer`` series that needs to be converted to ``float`` (GH2746_)
diff --git a/pandas/core/common.py b/pandas/core/common.py
index 3cac5158d31ef..347cfeec26b5b 100644
--- a/pandas/core/common.py
+++ b/pandas/core/common.py
@@ -916,7 +916,7 @@ def _possibly_convert_platform(values):
return values
-def _possibly_cast_to_timedelta(value):
+def _possibly_cast_to_timedelta(value, coerce=True):
""" try to cast to timedelta64 w/o coercion """
# deal with numpy not being able to handle certain timedelta operations
@@ -925,9 +925,12 @@ def _possibly_cast_to_timedelta(value):
value = value.astype('timedelta64[ns]')
return value
- new_value = tslib.array_to_timedelta64(value.astype(object), coerce=False)
- if new_value.dtype == 'i8':
- value = np.array(new_value,dtype='timedelta64[ns]')
+ # we don't have a timedelta, but we want to try to convert to one (but don't force it)
+ if coerce:
+ new_value = tslib.array_to_timedelta64(value.astype(object), coerce=False)
+ if new_value.dtype == 'i8':
+ value = np.array(new_value,dtype='timedelta64[ns]')
+
return value
def _possibly_cast_to_datetime(value, dtype, coerce = False):
diff --git a/pandas/core/series.py b/pandas/core/series.py
index 726216dde2269..0f3ec5120a0cd 100644
--- a/pandas/core/series.py
+++ b/pandas/core/series.py
@@ -711,6 +711,19 @@ def mask(self, cond):
"""
return self.where(~cond, nan)
+ def abs(self):
+ """
+ Return an object with absolute value taken. Only applicable to objects
+ that are all numeric
+
+ Returns
+ -------
+ abs: type of caller
+ """
+ obj = np.abs(self)
+ obj = com._possibly_cast_to_timedelta(obj, coerce=False)
+ return obj
+
def __setitem__(self, key, value):
try:
try:
diff --git a/pandas/tests/test_frame.py b/pandas/tests/test_frame.py
index 9329bb1da2b07..bb868d2525ee1 100644
--- a/pandas/tests/test_frame.py
+++ b/pandas/tests/test_frame.py
@@ -2913,10 +2913,11 @@ def test_operators_timedelta64(self):
self.assert_((result == diffs['A']).all() == True)
# abs ###### THIS IS BROKEN NOW ###### (results are dtype=timedelta64[us]
- result = np.abs(df['A']-df['B'])
- result = diffs.abs()
- expected = DataFrame(dict(A = df['A']-df['C'],
- B = df['B']-df['A']))
+ # even though fixed in series
+ #result = np.abs(df['A']-df['B'])
+ #result = diffs.abs()
+ #expected = DataFrame(dict(A = df['A']-df['C'],
+ # B = df['B']-df['A']))
#assert_frame_equal(result,expected)
# mixed frame
diff --git a/pandas/tests/test_series.py b/pandas/tests/test_series.py
index 28572e2eae015..66e9cc2f4931a 100644
--- a/pandas/tests/test_series.py
+++ b/pandas/tests/test_series.py
@@ -1786,9 +1786,11 @@ def test_operators_timedelta64(self):
self.assert_(result.dtype=='m8[ns]')
assert_series_equal(result,expected)
+
def test_timedelta64_functions(self):
from datetime import timedelta
+ from pandas import date_range
# index min/max
td = Series(date_range('2012-1-1', periods=3, freq='D'))-Timestamp('20120101')
@@ -1808,6 +1810,18 @@ def test_timedelta64_functions(self):
#result = td.idxmax()
#self.assert_(result == 2)
+ # abs
+ s1 = Series(date_range('20120101',periods=3))
+ s2 = Series(date_range('20120102',periods=3))
+ expected = Series(s2-s1)
+
+ # this fails as numpy returns timedelta64[us]
+ #result = np.abs(s1-s2)
+ #assert_frame_equal(result,expected)
+
+ result = (s1-s2).abs()
+ assert_series_equal(result,expected)
+
def test_sub_of_datetime_from_TimeSeries(self):
from pandas.core import common as com
from datetime import datetime
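The fixed behaviour, sketched with current pandas (the coercion through `_possibly_cast_to_timedelta` in the patch is internal; from the user's side, `.abs()` on a timedelta Series simply works and keeps the `timedelta64[ns]` dtype):

```python
import pandas as pd

s1 = pd.Series(pd.date_range("2012-01-01", periods=3))
s2 = pd.Series(pd.date_range("2012-01-02", periods=3))

diff = s1 - s2          # all-negative timedeltas
result = diff.abs()     # abs preserves the timedelta64[ns] dtype

assert str(result.dtype) == "timedelta64[ns]"
assert (result == (s2 - s1)).all()
```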
| https://api.github.com/repos/pandas-dev/pandas/pulls/2958 | 2013-03-01T22:20:11Z | 2013-03-06T12:43:15Z | 2013-03-06T12:43:15Z | 2013-03-06T12:43:15Z | |
BUG: negative timedeltas not printing correctly | diff --git a/RELEASE.rst b/RELEASE.rst
index e41731131d888..e742008b71831 100644
--- a/RELEASE.rst
+++ b/RELEASE.rst
@@ -115,10 +115,12 @@ pandas 0.11.0
- Series ops with a Timestamp on the rhs was throwing an exception (GH2898_)
added tests for Series ops with datetimes,timedeltas,Timestamps, and datelike
Series on both lhs and rhs
- - Series will now set its dtype automatically to ``timedelta64[ns]``
- if all passed objects are timedelta objects
+ - Fixed subtle timedelta64 inference issue on py3
+ - Fixed some formatting issues on timedelta when negative
- Support null checking on timedelta64, representing (and formatting) with NaT
- Support setitem with np.nan value, converts to NaT
+ - Support min/max ops in a Dataframe (abs not working, nor do we error on non-supported ops)
+ - Support idxmin/idxmax in a Series (but with no NaT)
.. _GH622: https://github.com/pydata/pandas/issues/622
.. _GH797: https://github.com/pydata/pandas/issues/797
diff --git a/doc/source/timeseries.rst b/doc/source/timeseries.rst
index d627212c6ae9c..78dd5cee9c8f9 100644
--- a/doc/source/timeseries.rst
+++ b/doc/source/timeseries.rst
@@ -961,3 +961,19 @@ Operands can also appear in a reversed order (a singluar object operated with a
s.max() - s
datetime(2011,1,1,3,5) - s
timedelta(minutes=5) + s
+
+Some timedelta numeric like operations are supported.
+
+.. ipython:: python
+
+ s = Series(date_range('2012-1-1', periods=3, freq='D'))
+ df = DataFrame(dict(A = s - Timestamp('20120101')-timedelta(minutes=5,seconds=5),
+ B = s - Series(date_range('2012-1-2', periods=3, freq='D'))))
+ df
+
+ # timedelta arithmetic
+ td - timedelta(minutes=5,seconds=5,microseconds=5)
+
+ # min/max operations
+ df.min()
+ df.min(axis=1)
diff --git a/pandas/core/common.py b/pandas/core/common.py
index 4e6215969e7ec..90b31b102fb2f 100644
--- a/pandas/core/common.py
+++ b/pandas/core/common.py
@@ -904,6 +904,13 @@ def _possibly_convert_platform(values):
def _possibly_cast_to_timedelta(value):
""" try to cast to timedelta64 w/o coercion """
+
+ # deal with numpy not being able to handle certain timedelta operations
+ if isinstance(value,np.ndarray) and value.dtype.kind == 'm':
+ if value.dtype != 'timedelta64[ns]':
+ value = value.astype('timedelta64[ns]')
+ return value
+
new_value = tslib.array_to_timedelta64(value.astype(object), coerce=False)
if new_value.dtype == 'i8':
value = np.array(new_value,dtype='timedelta64[ns]')
diff --git a/pandas/core/nanops.py b/pandas/core/nanops.py
index 000320027e0e4..881cef2311b27 100644
--- a/pandas/core/nanops.py
+++ b/pandas/core/nanops.py
@@ -55,7 +55,7 @@ def f(values, axis=None, skipna=True, **kwds):
def _bn_ok_dtype(dt):
# Bottleneck chokes on datetime64
- return dt != np.object_ and not issubclass(dt.type, np.datetime64)
+ return dt != np.object_ and not issubclass(dt.type, (np.datetime64,np.timedelta64))
def _has_infs(result):
@@ -69,6 +69,34 @@ def _has_infs(result):
else:
return np.isinf(result) or np.isneginf(result)
+def _isfinite(values):
+ if issubclass(values.dtype.type, np.timedelta64):
+ return isnull(values)
+ return -np.isfinite(values)
+
+def _na_ok_dtype(dtype):
+ return not issubclass(dtype.type, (np.integer, np.datetime64, np.timedelta64))
+
+def _view_if_needed(values):
+ if issubclass(values.dtype.type, (np.datetime64,np.timedelta64)):
+ return values.view(np.int64)
+ return values
+
+def _wrap_results(result,dtype):
+ """ wrap our results if needed """
+
+ if issubclass(dtype.type, np.datetime64):
+ if not isinstance(result, np.ndarray):
+ result = lib.Timestamp(result)
+ else:
+ result = result.view(dtype)
+ elif issubclass(dtype.type, np.timedelta64):
+ if not isinstance(result, np.ndarray):
+ pass
+ else:
+ result = result.view(dtype)
+
+ return result
def nanany(values, axis=None, skipna=True):
mask = isnull(values)
@@ -162,13 +190,11 @@ def _nanmin(values, axis=None, skipna=True):
dtype = values.dtype
- if skipna and not issubclass(dtype.type,
- (np.integer, np.datetime64)):
+ if skipna and _na_ok_dtype(dtype):
values = values.copy()
np.putmask(values, mask, np.inf)
- if issubclass(dtype.type, np.datetime64):
- values = values.view(np.int64)
+ values = _view_if_needed(values)
# numpy 1.6.1 workaround in Python 3.x
if (values.dtype == np.object_
@@ -187,12 +213,7 @@ def _nanmin(values, axis=None, skipna=True):
else:
result = values.min(axis)
- if issubclass(dtype.type, np.datetime64):
- if not isinstance(result, np.ndarray):
- result = lib.Timestamp(result)
- else:
- result = result.view(dtype)
-
+ result = _wrap_results(result,dtype)
return _maybe_null_out(result, axis, mask)
@@ -201,12 +222,11 @@ def _nanmax(values, axis=None, skipna=True):
dtype = values.dtype
- if skipna and not issubclass(dtype.type, (np.integer, np.datetime64)):
+ if skipna and _na_ok_dtype(dtype):
values = values.copy()
np.putmask(values, mask, -np.inf)
- if issubclass(dtype.type, np.datetime64):
- values = values.view(np.int64)
+ values = _view_if_needed(values)
# numpy 1.6.1 workaround in Python 3.x
if (values.dtype == np.object_
@@ -226,12 +246,7 @@ def _nanmax(values, axis=None, skipna=True):
else:
result = values.max(axis)
- if issubclass(dtype.type, np.datetime64):
- if not isinstance(result, np.ndarray):
- result = lib.Timestamp(result)
- else:
- result = result.view(dtype)
-
+ result = _wrap_results(result,dtype)
return _maybe_null_out(result, axis, mask)
@@ -239,7 +254,8 @@ def nanargmax(values, axis=None, skipna=True):
"""
Returns -1 in the NA case
"""
- mask = -np.isfinite(values)
+ mask = _isfinite(values)
+ values = _view_if_needed(values)
if not issubclass(values.dtype.type, np.integer):
values = values.copy()
np.putmask(values, mask, -np.inf)
@@ -252,7 +268,8 @@ def nanargmin(values, axis=None, skipna=True):
"""
Returns -1 in the NA case
"""
- mask = -np.isfinite(values)
+ mask = _isfinite(values)
+ values = _view_if_needed(values)
if not issubclass(values.dtype.type, np.integer):
values = values.copy()
np.putmask(values, mask, np.inf)
diff --git a/pandas/core/series.py b/pandas/core/series.py
index f34028482faec..ca08c5e131146 100644
--- a/pandas/core/series.py
+++ b/pandas/core/series.py
@@ -81,10 +81,10 @@ def wrapper(self, other):
lvalues, rvalues = self, other
- is_timedelta = com.is_timedelta64_dtype(self)
- is_datetime = com.is_datetime64_dtype(self)
+ is_timedelta_lhs = com.is_timedelta64_dtype(self)
+ is_datetime_lhs = com.is_datetime64_dtype(self)
- if is_datetime or is_timedelta:
+ if is_datetime_lhs or is_timedelta_lhs:
# convert the argument to an ndarray
def convert_to_array(values):
@@ -96,26 +96,27 @@ def convert_to_array(values):
pass
else:
values = tslib.array_to_datetime(values)
+ elif inferred_type in set(['timedelta','timedelta64']):
+ # need to convert timedelta to ns here
+ # safest to convert it to an object array to process
+ if isinstance(values, pa.Array) and com.is_timedelta64_dtype(values):
+ pass
+ else:
+ values = com._possibly_cast_to_timedelta(values)
else:
values = pa.array(values)
return values
- # swap the valuesor com.is_timedelta64_dtype(self):
- if is_timedelta:
- lvalues, rvalues = rvalues, lvalues
- lvalues = convert_to_array(lvalues)
- is_timedelta = False
-
+ # convert lhs and rhs
+ lvalues = convert_to_array(lvalues)
rvalues = convert_to_array(rvalues)
- # rhs is either a timedelta or a series/ndarray
- if lib.is_timedelta_or_timedelta64_array(rvalues):
+ is_timedelta_rhs = com.is_timedelta64_dtype(rvalues)
+ is_datetime_rhs = com.is_datetime64_dtype(rvalues)
- # need to convert timedelta to ns here
- # safest to convert it to an object arrany to process
- rvalues = tslib.array_to_timedelta64(rvalues.astype(object))
- dtype = 'M8[ns]'
- elif com.is_datetime64_dtype(rvalues):
+ # 2 datetimes or 2 timedeltas
+ if (is_timedelta_lhs and is_timedelta_rhs) or (is_datetime_lhs and is_datetime_rhs):
+
dtype = 'timedelta64[ns]'
# we may have to convert to object unfortunately here
@@ -126,6 +127,10 @@ def wrap_results(x):
np.putmask(x,mask,tslib.iNaT)
return x
+ # datetime and timedelta
+ elif (is_timedelta_lhs and is_datetime_rhs) or (is_timedelta_rhs and is_datetime_lhs):
+ dtype = 'M8[ns]'
+
else:
raise ValueError('cannot operate on a series with out a rhs '
'of a series/ndarray of type datetime64[ns] '
diff --git a/pandas/src/inference.pyx b/pandas/src/inference.pyx
index 2f84dd416100e..095968494fb57 100644
--- a/pandas/src/inference.pyx
+++ b/pandas/src/inference.pyx
@@ -24,6 +24,7 @@ try:
_TYPE_MAP[np.complex256] = 'complex'
_TYPE_MAP[np.float16] = 'floating'
_TYPE_MAP[np.datetime64] = 'datetime64'
+ _TYPE_MAP[np.timedelta64] = 'timedelta64'
except AttributeError:
pass
diff --git a/pandas/tests/test_format.py b/pandas/tests/test_format.py
index 739bea41256df..80b2009465209 100644
--- a/pandas/tests/test_format.py
+++ b/pandas/tests/test_format.py
@@ -1209,24 +1209,57 @@ def test_float_trim_zeros(self):
def test_timedelta64(self):
from pandas import date_range
- from datetime import datetime
+ from datetime import datetime, timedelta
Series(np.array([1100, 20], dtype='timedelta64[s]')).to_string()
- # check this works
+
+ s = Series(date_range('2012-1-1', periods=3, freq='D'))
+
# GH2146
# adding NaTs
- s = Series(date_range('2012-1-1', periods=3, freq='D'))
y = s-s.shift(1)
result = y.to_string()
self.assertTrue('1 days, 00:00:00' in result)
self.assertTrue('NaT' in result)
# with frac seconds
- s = Series(date_range('2012-1-1', periods=3, freq='D'))
- y = s-datetime(2012,1,1,microsecond=150)
+ o = Series([datetime(2012,1,1,microsecond=150)]*3)
+ y = s-o
+ result = y.to_string()
+ self.assertTrue('-00:00:00.000150' in result)
+
+ # rounding?
+ o = Series([datetime(2012,1,1,1)]*3)
+ y = s-o
+ result = y.to_string()
+ self.assertTrue('-01:00:00' in result)
+ self.assertTrue('1 days, 23:00:00' in result)
+
+ o = Series([datetime(2012,1,1,1,1)]*3)
+ y = s-o
+ result = y.to_string()
+ self.assertTrue('-01:01:00' in result)
+ self.assertTrue('1 days, 22:59:00' in result)
+
+ o = Series([datetime(2012,1,1,1,1,microsecond=150)]*3)
+ y = s-o
+ result = y.to_string()
+ self.assertTrue('-01:01:00.000150' in result)
+ self.assertTrue('1 days, 22:58:59.999850' in result)
+
+ # neg time
+ td = timedelta(minutes=5,seconds=3)
+ s2 = Series(date_range('2012-1-1', periods=3, freq='D')) + td
+ y = s - s2
+ result = y.to_string()
+ self.assertTrue('-00:05:03' in result)
+
+ td = timedelta(microseconds=550)
+ s2 = Series(date_range('2012-1-1', periods=3, freq='D')) + td
+ y = s - td
result = y.to_string()
- self.assertTrue('00:00:00.000150' in result)
+ self.assertTrue('2012-01-01 23:59:59.999450' in result)
def test_mixed_datetime64(self):
df = DataFrame({'A': [1, 2],
diff --git a/pandas/tests/test_frame.py b/pandas/tests/test_frame.py
index 81fbc0fc4d84d..9329bb1da2b07 100644
--- a/pandas/tests/test_frame.py
+++ b/pandas/tests/test_frame.py
@@ -2884,6 +2884,53 @@ def test_timedeltas(self):
expected.sort()
assert_series_equal(result, expected)
+ def test_operators_timedelta64(self):
+
+ from pandas import date_range
+ from datetime import datetime, timedelta
+ df = DataFrame(dict(A = date_range('2012-1-1', periods=3, freq='D'),
+ B = date_range('2012-1-2', periods=3, freq='D'),
+ C = Timestamp('20120101')-timedelta(minutes=5,seconds=5)))
+
+ diffs = DataFrame(dict(A = df['A']-df['C'],
+ B = df['A']-df['B']))
+
+
+ # min
+ result = diffs.min()
+ self.assert_(result[0] == diffs.ix[0,'A'])
+ self.assert_(result[1] == diffs.ix[0,'B'])
+
+ result = diffs.min(axis=1)
+ self.assert_((result == diffs.ix[0,'B']).all() == True)
+
+ # max
+ result = diffs.max()
+ self.assert_(result[0] == diffs.ix[2,'A'])
+ self.assert_(result[1] == diffs.ix[2,'B'])
+
+ result = diffs.max(axis=1)
+ self.assert_((result == diffs['A']).all() == True)
+
+ # abs ###### THIS IS BROKEN NOW ###### (results are dtype=timedelta64[us]
+ result = np.abs(df['A']-df['B'])
+ result = diffs.abs()
+ expected = DataFrame(dict(A = df['A']-df['C'],
+ B = df['B']-df['A']))
+ #assert_frame_equal(result,expected)
+
+ # mixed frame
+ mixed = diffs.copy()
+ mixed['C'] = 'foo'
+ mixed['D'] = 1
+ mixed['E'] = 1.
+
+ # this is ok
+ result = mixed.min()
+
+ # this is not
+ result = mixed.min(axis=1)
+
def test_new_empty_index(self):
df1 = DataFrame(randn(0, 3))
df2 = DataFrame(randn(0, 3))
diff --git a/pandas/tests/test_series.py b/pandas/tests/test_series.py
index fdaede9a2949c..46310cce3160d 100644
--- a/pandas/tests/test_series.py
+++ b/pandas/tests/test_series.py
@@ -1740,33 +1740,83 @@ def test_operators_timedelta64(self):
# datetimes on rhs
result = df['A'] - datetime(2001,1,1)
- self.assert_(result.dtype=='timedelta64[ns]')
+ expected = Series([timedelta(days=4017+i) for i in range(3)])
+ assert_series_equal(result,expected)
+ self.assert_(result.dtype=='m8[ns]')
result = df['A'] + datetime(2001,1,1)
- self.assert_(result.dtype=='timedelta64[ns]')
+ expected = Series([timedelta(days=26663+i) for i in range(3)])
+ assert_series_equal(result,expected)
+ self.assert_(result.dtype=='m8[ns]')
- td = datetime(2001,1,1,3,4)
- resulta = df['A'] - td
- self.assert_(resulta.dtype=='timedelta64[ns]')
+ d = datetime(2001,1,1,3,4)
+ resulta = df['A'] - d
+ self.assert_(resulta.dtype=='m8[ns]')
- resultb = df['A'] + td
- self.assert_(resultb.dtype=='timedelta64[ns]')
+ resultb = df['A'] + d
+ self.assert_(resultb.dtype=='m8[ns]')
+
+ # roundtrip
+ resultb = resulta + d
+ assert_series_equal(df['A'],resultb)
# timedelta on lhs
- result = resultb + td
- self.assert_(resultb.dtype=='timedelta64[ns]')
+ result = resultb + d
+ self.assert_(result.dtype=='m8[ns]')
# timedeltas on rhs
td = timedelta(days=1)
resulta = df['A'] + td
resultb = resulta - td
assert_series_equal(resultb,df['A'])
+ self.assert_(resultb.dtype=='M8[ns]')
+ # roundtrip
td = timedelta(minutes=5,seconds=3)
resulta = df['A'] + td
resultb = resulta - td
+ assert_series_equal(df['A'],resultb)
self.assert_(resultb.dtype=='M8[ns]')
+ # td operate with td
+ td1 = Series([timedelta(minutes=5,seconds=3)]*3)
+ td2 = timedelta(minutes=5,seconds=4)
+ result = td1-td2
+ expected = Series([timedelta(seconds=0)]*3)-Series([timedelta(seconds=1)]*3)
+ self.assert_(result.dtype=='m8[ns]')
+ assert_series_equal(result,expected)
+
+ def test_timedelta64_functions(self):
+
+ from datetime import timedelta
+
+ # index min/max
+ td = Series(date_range('2012-1-1', periods=3, freq='D'))-Timestamp('20120101')
+
+ result = td.idxmin()
+ self.assert_(result == 0)
+
+ result = td.idxmax()
+ self.assert_(result == 2)
+
+ # with NaT (broken)
+ td[0] = np.nan
+
+ #result = td.idxmin()
+ #self.assert_(result == 1)
+
+ #result = td.idxmax()
+ #self.assert_(result == 2)
+
+ def test_sub_of_datetime_from_TimeSeries(self):
+ from pandas.core import common as com
+ from datetime import datetime
+ a = Timestamp(datetime(1993,01,07,13,30,00))
+ b = datetime(1993, 6, 22, 13, 30)
+ a = Series([a])
+ result = com._possibly_cast_to_timedelta(np.abs(a - b))
+ self.assert_(result.dtype == 'timedelta64[ns]')
+
def test_timedelta64_nan(self):
from pandas import tslib
diff --git a/pandas/tslib.pyx b/pandas/tslib.pyx
index 6ac2ee3607f51..7a5bb0f569349 100644
--- a/pandas/tslib.pyx
+++ b/pandas/tslib.pyx
@@ -928,25 +928,50 @@ def repr_timedelta64(object value):
ivalue = value.view('i8')
+ # put frac in seconds
frac = float(ivalue)/1e9
- days = int(frac) / 86400
- frac -= days*86400
- hours = int(frac) / 3600
- frac -= hours * 3600
- minutes = int(frac) / 60
- seconds = frac - minutes * 60
- nseconds = int(seconds)
-
- if nseconds == seconds:
- seconds_pretty = "%02d" % nseconds
+ sign = np.sign(frac)
+ frac = np.abs(frac)
+
+ if frac >= 86400:
+ days = int(frac / 86400)
+ frac -= days * 86400
+ else:
+ days = 0
+
+ if frac >= 3600:
+ hours = int(frac / 3600)
+ frac -= hours * 3600
+ else:
+ hours = 0
+
+ if frac >= 60:
+ minutes = int(frac / 60)
+ frac -= minutes * 60
+ else:
+ minutes = 0
+
+ if frac >= 1:
+ seconds = int(frac)
+ frac -= seconds
+ else:
+ seconds = 0
+
+ if frac == int(frac):
+ seconds_pretty = "%02d" % seconds
+ else:
+ sp = abs(round(1e6*frac))
+ seconds_pretty = "%02d.%06d" % (seconds,sp)
+
+ if sign < 0:
+ sign_pretty = "-"
else:
- sp = abs(int(1e6*(seconds-nseconds)))
- seconds_pretty = "%02d.%06d" % (nseconds,sp)
+ sign_pretty = ""
if days:
- return "%d days, %02d:%02d:%s" % (days,hours,minutes,seconds_pretty)
+ return "%s%d days, %02d:%02d:%s" % (sign_pretty,days,hours,minutes,seconds_pretty)
- return "%02d:%02d:%s" % (hours,minutes,seconds_pretty)
+ return "%s%02d:%02d:%s" % (sign_pretty,hours,minutes,seconds_pretty)
def array_strptime(ndarray[object] values, object fmt):
cdef:
| ENH: timedelta ops with other timedelta fixed to produce timedeltas
BUG: fixed timedelta (in nanops.py) to work with min/max....abs still not working
| https://api.github.com/repos/pandas-dev/pandas/pulls/2955 | 2013-03-01T15:05:12Z | 2013-03-01T17:54:38Z | 2013-03-01T17:54:38Z | 2013-03-01T17:54:38Z |
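The PR above makes timedelta-with-timedelta ops produce timedeltas and fixes min/max in `nanops.py`. A minimal sketch of the resulting behaviour, written against a modern pandas API (method names and reprs have changed since 2013, so treat this as illustrative rather than the 2013 code path):

```python
import pandas as pd

# Subtracting a shifted datetime series yields timedelta64[ns],
# with NaT in the first slot; min/max reductions skip NaT.
s = pd.Series(pd.date_range("2012-01-01", periods=3, freq="D"))
diffs = s - s.shift(1)

print(diffs.dtype)   # timedelta64[ns]
print(diffs.min())   # 1 days 00:00:00

# timedelta operated with timedelta stays a timedelta, per the PR title
zeros = diffs - diffs
print(zeros.dtype)   # timedelta64[ns]
```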
CLN: argument order and dtype formatting to be consistent with standard | diff --git a/pandas/core/format.py b/pandas/core/format.py
index c564bb37a18f7..1db0e0b41b3e3 100644
--- a/pandas/core/format.py
+++ b/pandas/core/format.py
@@ -65,19 +65,19 @@
class SeriesFormatter(object):
- def __init__(self, series, buf=None, header=True, length=True, dtype=True,
- na_rep='NaN', name=False, float_format=None):
+ def __init__(self, series, buf=None, header=True, length=True,
+ na_rep='NaN', name=False, float_format=None, dtype=True):
self.series = series
self.buf = buf if buf is not None else StringIO(u"")
self.name = name
self.na_rep = na_rep
self.length = length
- self.dtype = dtype
self.header = header
if float_format is None:
float_format = get_option("display.float_format")
self.float_format = float_format
+ self.dtype = dtype
def _get_footer(self):
footer = u''
diff --git a/pandas/core/series.py b/pandas/core/series.py
index f34028482faec..bcdacf9ad22a8 100644
--- a/pandas/core/series.py
+++ b/pandas/core/series.py
@@ -1050,7 +1050,7 @@ def __unicode__(self):
name=True,
dtype=True)
else:
- result = u'Series([], Dtype: %s)' % self.dtype
+ result = u'Series([], dtype: %s)' % self.dtype
assert type(result) == unicode
return result
@@ -1073,8 +1073,8 @@ def _tidy_repr(self, max_vals=20):
dtype=False, name=False)
tail = self[-(max_vals - num):]._get_repr(print_header=False,
length=False,
- dtype=False,
- name=False)
+ name=False,
+ dtype=False)
result = head + '\n...\n' + tail
result = '%s\n%s' % (result, self._repr_footer())
@@ -1083,8 +1083,8 @@ def _tidy_repr(self, max_vals=20):
def _repr_footer(self):
namestr = u"Name: %s, " % com.pprint_thing(
self.name) if self.name is not None else ""
- return u'%sLength: %d, Dtype: %s' % (namestr, len(self),
- com.pprint_thing(self.dtype.name))
+ return u'%sLength: %d, dtype: %s' % (namestr, len(self),
+ str(self.dtype.name))
def to_string(self, buf=None, na_rep='NaN', float_format=None,
nanRep=None, length=False, dtype=False, name=False):
diff --git a/pandas/tests/test_format.py b/pandas/tests/test_format.py
index 739bea41256df..f110bbbc903c2 100644
--- a/pandas/tests/test_format.py
+++ b/pandas/tests/test_format.py
@@ -591,7 +591,7 @@ def test_long_series(self):
import re
str_rep = str(s)
- nmatches = len(re.findall('Dtype',str_rep))
+ nmatches = len(re.findall('dtype',str_rep))
self.assert_(nmatches == 1)
def test_to_string(self):
| https://api.github.com/repos/pandas-dev/pandas/pulls/2951 | 2013-02-28T19:44:38Z | 2013-03-01T17:43:51Z | 2013-03-01T17:43:51Z | 2013-03-01T17:43:51Z | |
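The PR above normalizes the Series repr footer to lowercase ``dtype``. A quick illustration with current pandas (this footer format has been stable since the change landed):

```python
import pandas as pd

# Both the empty-series repr and the truncated-repr footer
# spell "dtype" in lowercase after this change.
s = pd.Series([], dtype="float64")
print(repr(s))  # Series([], dtype: float64)
```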
BUG: in-place conversion of integer series to float (on putmasking), GH #2746. | diff --git a/RELEASE.rst b/RELEASE.rst
index e41731131d888..47e91ef01bfe6 100644
--- a/RELEASE.rst
+++ b/RELEASE.rst
@@ -120,9 +120,12 @@ pandas 0.11.0
- Support null checking on timedelta64, representing (and formatting) with NaT
- Support setitem with np.nan value, converts to NaT
+ - Bug on in-place putmasking on an ``integer`` series that needs to be converted to ``float`` (GH2746_)
+
.. _GH622: https://github.com/pydata/pandas/issues/622
.. _GH797: https://github.com/pydata/pandas/issues/797
.. _GH2681: https://github.com/pydata/pandas/issues/2681
+.. _GH2746: https://github.com/pydata/pandas/issues/2746
.. _GH2747: https://github.com/pydata/pandas/issues/2747
.. _GH2751: https://github.com/pydata/pandas/issues/2751
.. _GH2776: https://github.com/pydata/pandas/issues/2776
diff --git a/pandas/core/common.py b/pandas/core/common.py
index 4e6215969e7ec..4479dce432ae2 100644
--- a/pandas/core/common.py
+++ b/pandas/core/common.py
@@ -722,6 +722,20 @@ def _maybe_promote(dtype, fill_value=np.nan):
return dtype, fill_value
+def _maybe_upcast_putmask(result, mask, other):
+ """ a safe version of put mask that (potentially upcasts the result
+ return the result and a changed flag """
+ try:
+ np.putmask(result, mask, other)
+ except:
+ # our type is wrong here, need to upcast
+ if (-mask).any():
+ result, fill_value = _maybe_upcast(result, copy=True)
+ np.putmask(result, mask, other)
+ return result, True
+
+ return result, False
+
def _maybe_upcast(values, fill_value=np.nan, copy=False):
""" provide explicty type promotion and coercion
if copy == True, then a copy is created even if no upcast is required """
diff --git a/pandas/core/frame.py b/pandas/core/frame.py
index eb3dbfd01abd7..c0449faf40368 100644
--- a/pandas/core/frame.py
+++ b/pandas/core/frame.py
@@ -201,7 +201,7 @@ def na_op(x, y):
mask = notnull(xrav)
result[mask] = op(xrav[mask], y)
- np.putmask(result, -mask, NA)
+ result, changed = com._maybe_upcast_putmask(result,-mask,np.nan)
result = result.reshape(x.shape)
return result
diff --git a/pandas/core/series.py b/pandas/core/series.py
index f34028482faec..54e981d1db9dc 100644
--- a/pandas/core/series.py
+++ b/pandas/core/series.py
@@ -70,7 +70,8 @@ def na_op(x, y):
else:
mask = notnull(x)
result[mask] = op(x[mask], y)
- np.putmask(result, -mask, pa.NA)
+
+ result, changed = com._maybe_upcast_putmask(result,-mask,pa.NA)
return result
@@ -680,7 +681,13 @@ def where(self, cond, other=nan, inplace=False):
if len(other) != len(ser):
raise ValueError('Length of replacements must equal series length')
- np.putmask(ser, ~cond, other)
+ result, changed = com._maybe_upcast_putmask(ser,~cond,other)
+ if changed:
+
+ # need to actually change ser here
+ if inplace:
+ ser.dtype = result.dtype
+ ser[:] = result
return None if inplace else ser
diff --git a/pandas/tests/test_series.py b/pandas/tests/test_series.py
index fdaede9a2949c..86523938077a1 100644
--- a/pandas/tests/test_series.py
+++ b/pandas/tests/test_series.py
@@ -2943,6 +2943,17 @@ def test_asof_more(self):
result = s.asof(s.index[0])
self.assertEqual(result, s[0])
+ def test_cast_on_putmask(self):
+
+ # GH 2746
+
+ # need to upcast
+ s = Series([1,2],index=[1,2],dtype='int64')
+ s[[True, False]] = Series([0],index=[1],dtype='int64')
+ expected = Series([0,2],index=[1,2],dtype='float64')
+
+ assert_series_equal(s, expected)
+
def test_astype_cast_nan_int(self):
df = Series([1.0, 2.0, 3.0, np.nan])
self.assertRaises(ValueError, df.astype, np.int64)
| closes #2746
| https://api.github.com/repos/pandas-dev/pandas/pulls/2950 | 2013-02-28T19:36:34Z | 2013-03-01T17:43:39Z | 2013-03-01T17:43:39Z | 2014-06-19T18:51:51Z |
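The core of the fix is the new ``_maybe_upcast_putmask`` helper. A standalone sketch of the same idea follows — simplified to upcast straight to ``float64`` rather than going through pandas' internal promotion rules, so the name and fallback dtype here are illustrative:

```python
import numpy as np

def maybe_upcast_putmask(result, mask, other):
    """Try an in-place putmask; if the fill value cannot be represented
    (e.g. NaN into an int64 array, which raises), upcast a copy to
    float64 and retry.  Returns the array and an upcast flag."""
    try:
        np.putmask(result, mask, other)
    except (ValueError, TypeError):
        result = result.astype("float64")
        np.putmask(result, mask, other)
        return result, True
    return result, False

arr = np.array([1, 2], dtype="int64")
out, changed = maybe_upcast_putmask(arr, np.array([True, False]), np.nan)
# out is float64 with NaN in the masked slot; changed is True
```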
DOC: updated dtypes docs | diff --git a/doc/source/basics.rst b/doc/source/basics.rst
index abdade7b61946..05025e4f9479a 100644
--- a/doc/source/basics.rst
+++ b/doc/source/basics.rst
@@ -10,7 +10,7 @@
np.set_printoptions(precision=4, suppress=True)
*****************************
-Essential basic functionality
+Essential Basic Functionality
*****************************
Here we discuss a lot of the essential functionality common to the pandas data
@@ -114,7 +114,7 @@ either match on the *index* or *columns* via the **axis** keyword:
d = {'one' : Series(randn(3), index=['a', 'b', 'c']),
'two' : Series(randn(4), index=['a', 'b', 'c', 'd']),
'three' : Series(randn(3), index=['b', 'c', 'd'])}
- df = DataFrame(d)
+ df = df_orig = DataFrame(d)
df
row = df.ix[1]
column = df['two']
@@ -936,8 +936,8 @@ The ``by`` argument can take a list of column names, e.g.:
.. ipython:: python
- df = DataFrame({'one':[2,1,1,1],'two':[1,3,2,4],'three':[5,4,3,2]})
- df[['one', 'two', 'three']].sort_index(by=['one','two'])
+ df1 = DataFrame({'one':[2,1,1,1],'two':[1,3,2,4],'three':[5,4,3,2]})
+ df1[['one', 'two', 'three']].sort_index(by=['one','two'])
Series has the method ``order`` (analogous to `R's order function
<http://stat.ethz.ch/R-manual/R-patched/library/base/html/order.html>`__) which
@@ -959,10 +959,8 @@ Some other sorting notes / nuances:
method will likely be deprecated in a future release in favor of just using
``sort_index``.
-.. _basics.cast:
-
-Copying, type casting
----------------------
+Copying
+-------
The ``copy`` method on pandas objects copies the underlying data (though not
the axis indexes, since they are immutable) and returns a new object. Note that
@@ -978,36 +976,132 @@ To be clear, no pandas methods have the side effect of modifying your data;
almost all methods return new objects, leaving the original object
untouched. If data is modified, it is because you did so explicitly.
-Data can be explicitly cast to a NumPy dtype by using the ``astype`` method or
-alternately passing the ``dtype`` keyword argument to the object constructor.
+DTypes
+------
+
+.. _basics.dtypes:
+
+The main types stored in pandas objects are float, int, boolean, datetime64[ns],
+and object. A convenient ``dtypes`` attribute for DataFrames returns a Series with
+the data type of each column.
.. ipython:: python
- df = DataFrame(np.arange(12).reshape((4, 3)))
- df[0].dtype
- df.astype(float)[0].dtype
- df = DataFrame(np.arange(12).reshape((4, 3)), dtype=float)
- df[0].dtype
+ dft = DataFrame(dict( A = np.random.rand(3), B = 1, C = 'foo', D = Timestamp('20010102'),
+ E = Series([1.0]*3).astype('float32'),
+ F = False,
+ G = Series([1]*3,dtype='int8')))
+ dft
+
+If a DataFrame contains columns of multiple dtypes, the dtype of the column
+will be chosen to accommodate all of the data types (dtype=object is the most
+general).
-.. _basics.cast.infer:
+The related method ``get_dtype_counts`` will return the number of columns of
+each type:
-Inferring better types for object columns
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+.. ipython:: python
-The ``convert_objects`` DataFrame method will attempt to convert
-``dtype=object`` columns to a better NumPy dtype. Occasionally (after
-transposing multiple times, for example), a mixed-type DataFrame will end up
-with everything as ``dtype=object``. This method attempts to fix that:
+ dft.get_dtype_counts()
+
+Numeric dtypes will propagate and can coexist in DataFrames (starting in v0.11.0).
+If a dtype is passed (either directly via the ``dtype`` keyword, a passed ``ndarray``,
+or a passed ``Series``), then it will be preserved in DataFrame operations. Furthermore, different numeric dtypes will **NOT** be combined. The following example will give you a taste.
.. ipython:: python
- df = DataFrame(randn(6, 3), columns=['a', 'b', 'c'])
- df['d'] = 'foo'
- df
- df = df.T.T
- df.dtypes
- converted = df.convert_objects()
- converted.dtypes
+ df1 = DataFrame(randn(8, 1), columns = ['A'], dtype = 'float32')
+ df1
+ df1.dtypes
+ df2 = DataFrame(dict( A = Series(randn(8),dtype='float16'),
+ B = Series(randn(8)),
+ C = Series(np.array(randn(8),dtype='uint8')) ))
+ df2
+ df2.dtypes
+
+ # here you get some upcasting
+ df3 = df1.reindex_like(df2).fillna(value=0.0) + df2
+ df3
+ df3.dtypes
+
+ # this is lowest-common-denominator upcasting (meaning you get the dtype which can accommodate all of the types)
+ df3.values.dtype
+
+Astype
+~~~~~~
+
+.. _basics.cast:
+
+You can use the ``astype`` method to convert dtypes from one to another. These *always* return a copy.
+Upcasting is always according to the **numpy** rules. If two different dtypes are involved in an operation,
+then the more *general* one will be used as the result of the operation.
+
+.. ipython:: python
+
+ df3
+ df3.dtypes
+
+ # conversion of dtypes
+ df3.astype('float32').dtypes
+
+Object Conversion
+~~~~~~~~~~~~~~~~~
+
+To force specific types of numeric conversion, pass ``convert_numeric = True``.
+This will force strings and numbers alike to be numbers if possible; otherwise they will be set to ``np.nan``.
+To force conversion to ``datetime64[ns]``, pass ``convert_dates = 'coerce'``.
+This will convert any datetimelike object to dates, forcing other values to ``NaT``.
+
+In addition, ``convert_objects`` will attempt a *soft* conversion of any *object* dtypes, meaning that if all
+the objects in a Series are of the same type, the Series will have that dtype.
+
+.. ipython:: python
+
+ # mixed type conversions
+ df3['D'] = '1.'
+ df3['E'] = '1'
+ df3.convert_objects(convert_numeric=True).dtypes
+
+ # same, but specific dtype conversion
+ df3['D'] = df3['D'].astype('float16')
+ df3['E'] = df3['E'].astype('int32')
+ df3.dtypes
+
+ # forcing date coercion
+ s = Series([datetime(2001,1,1,0,0), 'foo', 1.0, 1, Timestamp('20010104'), '20010105'],dtype='O')
+ s
+ s.convert_objects(convert_dates='coerce')
+
+
+Upcasting Gotchas
+~~~~~~~~~~~~~~~~~
+
+Performing indexing operations on ``integer`` type data can easily upcast the data to ``floating``.
+The dtype of the input data will be preserved in cases where ``nans`` are not introduced (starting in 0.11.0)
+See also :ref:`integer na gotchas <gotchas.intna>`
+
+.. ipython:: python
+
+ dfi = df3.astype('int32')
+ dfi['E'] = 1
+ dfi
+ dfi.dtypes
+
+ casted = dfi[dfi>0]
+ casted
+ casted.dtypes
+
+While float dtypes are unchanged.
+
+.. ipython:: python
+
+ dfa = df3.copy()
+ dfa['A'] = dfa['A'].astype('float32')
+ dfa.dtypes
+
+ casted = dfa[df2>0]
+ casted
+ casted.dtypes
.. _basics.serialize:
@@ -1157,8 +1251,9 @@ For instance:
.. ipython:: python
set_eng_float_format(accuracy=3, use_eng_prefix=True)
- df['a']/1.e3
- df['a']/1.e6
+ s = Series(randn(5), index=['a', 'b', 'c', 'd', 'e'])
+ s/1.e3
+ s/1.e6
.. ipython:: python
:suppress:
diff --git a/doc/source/dsintro.rst b/doc/source/dsintro.rst
index 0957ed8f0e073..45fabb551d993 100644
--- a/doc/source/dsintro.rst
+++ b/doc/source/dsintro.rst
@@ -455,96 +455,6 @@ slicing, see the :ref:`section on indexing <indexing>`. We will address the
fundamentals of reindexing / conforming to new sets of lables in the
:ref:`section on reindexing <basics.reindexing>`.
-DataTypes
-~~~~~~~~~
-
-.. _dsintro.column_types:
-
-The main types stored in pandas objects are float, int, boolean, datetime64[ns],
-and object. A convenient ``dtypes`` attribute return a Series with the data type of
-each column.
-
-.. ipython:: python
-
- df['integer'] = 1
- df['int32'] = df['integer'].astype('int32')
- df['float32'] = Series([1.0]*len(df),dtype='float32')
- df['timestamp'] = Timestamp('20010102')
- df.dtypes
-
-If a DataFrame contains columns of multiple dtypes, the dtype of the column
-will be chosen to accommodate all of the data types (dtype=object is the most
-general).
-
-The related method ``get_dtype_counts`` will return the number of columns of
-each type:
-
-.. ipython:: python
-
- df.get_dtype_counts()
-
-Numeric dtypes will propagate and can coexist in DataFrames (starting in v0.11.0).
-If a dtype is passed (either directly via the ``dtype`` keyword, a passed ``ndarray``,
-or a passed ``Series``, then it will be preserved in DataFrame operations. Furthermore, different numeric dtypes will **NOT** be combined. The following example will give you a taste.
-
-.. ipython:: python
-
- df1 = DataFrame(randn(8, 1), columns = ['A'], dtype = 'float32')
- df1
- df1.dtypes
- df2 = DataFrame(dict( A = Series(randn(8),dtype='float16'),
- B = Series(randn(8)),
- C = Series(np.array(randn(8),dtype='uint8')) ))
- df2
- df2.dtypes
-
- # here you get some upcasting
- df3 = df1.reindex_like(df2).fillna(value=0.0) + df2
- df3
- df3.dtypes
-
- # this is lower-common-denomicator upcasting (meaning you get the dtype which can accomodate all of the types)
- df3.values.dtype
-
-Upcasting is always according to the **numpy** rules. If two different dtypes are involved in an operation, then the more *general* one will be used as the result of the operation.
-
-DataType Conversion
-~~~~~~~~~~~~~~~~~~~
-
-You can use the ``astype`` method to convert dtypes from one to another. These *always* return a copy.
-In addition, ``convert_objects`` will attempt to *soft* conversion of any *object* dtypes, meaning that if all the objects in a Series are of the same type, the Series
-will have that dtype.
-
-.. ipython:: python
-
- df3
- df3.dtypes
-
- # conversion of dtypes
- df3.astype('float32').dtypes
-
-To force conversion of specific types of number conversion, pass ``convert_numeric = True``.
-This will force strings and numbers alike to be numbers if possible, otherwise the will be set to ``np.nan``.
-To force conversion to ``datetime64[ns]``, pass ``convert_dates = 'coerce'``.
-This will convert any datetimelike object to dates, forcing other values to ``NaT``.
-
-.. ipython:: python
-
- # mixed type conversions
- df3['D'] = '1.'
- df3['E'] = '1'
- df3.convert_objects(convert_numeric=True).dtypes
-
- # same, but specific dtype conversion
- df3['D'] = df3['D'].astype('float16')
- df3['E'] = df3['E'].astype('int32')
- df3.dtypes
-
- # forcing date coercion
- s = Series([datetime(2001,1,1,0,0), 'foo', 1.0, 1, Timestamp('20010104'), '20010105'],dtype='O')
- s
- s.convert_objects(convert_dates='coerce')
-
Data alignment and arithmetic
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
diff --git a/doc/source/gotchas.rst b/doc/source/gotchas.rst
index c951e5f18a112..f1fe5bc69a55d 100644
--- a/doc/source/gotchas.rst
+++ b/doc/source/gotchas.rst
@@ -38,6 +38,8 @@ detect NA values.
However, it comes with it a couple of trade-offs which I most certainly have
not ignored.
+.. _gotchas.intna:
+
Support for integer ``NA``
~~~~~~~~~~~~~~~~~~~~~~~~~~
diff --git a/doc/source/indexing.rst b/doc/source/indexing.rst
index 969173d0d3569..8c18d9f69bee3 100644
--- a/doc/source/indexing.rst
+++ b/doc/source/indexing.rst
@@ -304,35 +304,6 @@ so that the original data can be modified without creating a copy:
df.mask(df >= 0)
-Upcasting Gotchas
-~~~~~~~~~~~~~~~~~
-
-Performing indexing operations on ``integer`` type data can easily upcast the data to ``floating``.
-The dtype of the input data will be preserved in cases where ``nans`` are not introduced (coming soon).
-
-.. ipython:: python
-
- dfi = df.astype('int32')
- dfi['E'] = 1
- dfi
- dfi.dtypes
-
- casted = dfi[dfi>0]
- casted
- casted.dtypes
-
-While float dtypes are unchanged.
-
-.. ipython:: python
-
- df2 = df.copy()
- df2['A'] = df2['A'].astype('float32')
- df2.dtypes
-
- casted = df2[df2>0]
- casted
- casted.dtypes
-
Take Methods
~~~~~~~~~~~~
diff --git a/doc/source/v0.4.x.txt b/doc/source/v0.4.x.txt
index 0fd7cc63d47ce..19293887089ba 100644
--- a/doc/source/v0.4.x.txt
+++ b/doc/source/v0.4.x.txt
@@ -17,7 +17,7 @@ New Features
``MultiIndex`` (IS188_)
- :ref:`Set <indexing.mixed_type_setting>` values in mixed-type
``DataFrame`` objects via ``.ix`` indexing attribute (GH135_)
-- Added new ``DataFrame`` :ref:`methods <dsintro.column_types>`
+- Added new ``DataFrame`` :ref:`methods <basics.dtypes>`
``get_dtype_counts`` and property ``dtypes`` (ENHdc_)
- Added :ref:`ignore_index <merging.ignore_index>` option to
``DataFrame.append`` to stack DataFrames (ENH1b_)
diff --git a/doc/source/v0.6.1.txt b/doc/source/v0.6.1.txt
index b95e9f10d04f4..7b0588884c5b2 100644
--- a/doc/source/v0.6.1.txt
+++ b/doc/source/v0.6.1.txt
@@ -23,7 +23,7 @@ New features
DataFrame, fast versions of scipy.stats.rankdata (GH428_)
- Implement :ref:`DataFrame.from_items <basics.dataframe.from_items>` alternate
constructor (GH444_)
-- DataFrame.convert_objects method for :ref:`inferring better dtypes <basics.cast.infer>`
+- DataFrame.convert_objects method for :ref:`inferring better dtypes <basics.cast>`
for object columns (GH302_)
- Add :ref:`rolling_corr_pairwise <stats.moments.corr_pairwise>` function for
computing Panel of correlation matrices (GH189_)
| https://api.github.com/repos/pandas-dev/pandas/pulls/2947 | 2013-02-28T13:11:29Z | 2013-02-28T13:11:58Z | 2013-02-28T13:11:58Z | 2014-06-18T16:16:54Z | |
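The upcasting rule the reorganized docs describe — the more *general* dtype wins when two dtypes meet in an operation — can be sketched quickly with current pandas (output dtypes follow numpy's promotion rules):

```python
import pandas as pd

df1 = pd.DataFrame({"A": pd.Series([1.0, 2.0], dtype="float32")})
df2 = pd.DataFrame({"A": pd.Series([1.0, 2.0], dtype="float64")})

combined = df1 + df2
print(combined.dtypes["A"])  # float64: float32 + float64 upcasts

# astype always returns a converted copy
print(combined.astype("float32").dtypes["A"])  # float32
```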
BUG: fix handling of the 'kind' argument in Series.order (GH #2811) | diff --git a/pandas/core/series.py b/pandas/core/series.py
index 317806b9de3b7..0495c666cc9a1 100644
--- a/pandas/core/series.py
+++ b/pandas/core/series.py
@@ -725,7 +725,7 @@ def __setitem__(self, key, value):
raise IndexError(key)
# Could not hash item
except ValueError:
-
+
# reassign a null value to iNaT
if com.is_timedelta64_dtype(self.dtype):
if isnull(value):
@@ -2141,7 +2141,7 @@ def order(self, na_last=True, ascending=True, kind='mergesort'):
def _try_mergesort(arr):
# easier to ask forgiveness than permission
try:
- return arr.argsort(kind='mergesort')
+ return arr.argsort(kind=kind)
except TypeError:
# stable sort not available for object dtype
return arr.argsort()
| The 'kind' argument was not properly propagated: the literal value "mergesort" was used instead of the argument.
| https://api.github.com/repos/pandas-dev/pandas/pulls/2937 | 2013-02-26T14:37:45Z | 2013-02-26T15:38:00Z | 2013-02-26T15:38:00Z | 2013-02-26T15:38:12Z |
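For reference, ``Series.order`` has since been removed in favor of ``Series.sort_values``, where the ``kind`` argument — the one this PR fixed to be forwarded instead of the hard-coded ``'mergesort'`` — still selects the underlying numpy sort algorithm:

```python
import pandas as pd

s = pd.Series([3, 1, 2])
result = s.sort_values(kind="mergesort")  # stable sort, as requested
print(list(result))  # [1, 2, 3]
```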
BUG: timezone offset double counted using date_range and DateTimeIndex.append (fixes #2906, #2938) | diff --git a/pandas/tseries/index.py b/pandas/tseries/index.py
index c43360036d346..c91a1ebd5568f 100644
--- a/pandas/tseries/index.py
+++ b/pandas/tseries/index.py
@@ -298,7 +298,11 @@ def _generate(cls, start, end, periods, name, offset,
if end is not None:
end = Timestamp(end)
- inferred_tz = tools._infer_tzinfo(start, end)
+ try:
+ inferred_tz = tools._infer_tzinfo(start, end)
+ except:
+ raise ValueError('Start and end cannot both be tz-aware with '
+ 'different timezones')
if tz is not None and inferred_tz is not None:
assert(inferred_tz == tz)
@@ -1538,17 +1542,21 @@ def _generate_regular_range(start, end, periods, offset):
b = Timestamp(start).value
e = Timestamp(end).value
e += stride - e % stride
+ # end.tz == start.tz by this point due to _generate implementation
+ tz = start.tz
elif start is not None:
b = Timestamp(start).value
e = b + periods * stride
+ tz = start.tz
elif end is not None:
e = Timestamp(end).value + stride
b = e - periods * stride
+ tz = end.tz
else:
raise NotImplementedError
data = np.arange(b, e, stride, dtype=np.int64)
- data = data.view(_NS_DTYPE)
+ data = DatetimeIndex._simple_new(data, None, tz=tz)
else:
if isinstance(start, Timestamp):
start = start.to_pydatetime()
@@ -1723,7 +1731,8 @@ def _process_concat_data(to_concat, name):
else:
to_concat = [x.values for x in to_concat]
- klass = DatetimeIndex
+ # well, technically not a "class" anymore...oh well
+ klass = DatetimeIndex._simple_new
kwargs = {'tz': tz}
concat = com._concat_compat
else:
diff --git a/pandas/tseries/tests/test_daterange.py b/pandas/tseries/tests/test_daterange.py
index 1a844cdb4f77c..29b8d263a2592 100644
--- a/pandas/tseries/tests/test_daterange.py
+++ b/pandas/tseries/tests/test_daterange.py
@@ -15,10 +15,18 @@
import pandas.core.datetools as datetools
+def _skip_if_no_pytz():
+ try:
+ import pytz
+ except ImportError:
+ raise nose.SkipTest
+
+
def eq_gen_range(kwargs, expected):
rng = generate_range(**kwargs)
assert(np.array_equal(list(rng), expected))
+
START, END = datetime(2009, 1, 1), datetime(2010, 1, 1)
@@ -246,11 +254,11 @@ def test_intersection_bug(self):
def test_summary(self):
self.rng.summary()
self.rng[2:2].summary()
- try:
- import pytz
- bdate_range('1/1/2005', '1/1/2009', tz=pytz.utc).summary()
- except Exception:
- pass
+
+ def test_summary_pytz(self):
+ _skip_if_no_pytz()
+ import pytz
+ bdate_range('1/1/2005', '1/1/2009', tz=pytz.utc).summary()
def test_misc(self):
end = datetime(2009, 5, 13)
@@ -298,6 +306,29 @@ def test_range_bug(self):
exp_values = [start + i * offset for i in range(5)]
self.assert_(np.array_equal(result, DatetimeIndex(exp_values)))
+ def test_range_tz(self):
+ # GH 2906
+ _skip_if_no_pytz()
+ from pytz import timezone as tz
+
+ start = datetime(2011, 1, 1, tzinfo=tz('US/Eastern'))
+ end = datetime(2011, 1, 3, tzinfo=tz('US/Eastern'))
+
+ dr = date_range(start=start, periods=3)
+ self.assert_(dr.tz == tz('US/Eastern'))
+ self.assert_(dr[0] == start)
+ self.assert_(dr[2] == end)
+
+ dr = date_range(end=end, periods=3)
+ self.assert_(dr.tz == tz('US/Eastern'))
+ self.assert_(dr[0] == start)
+ self.assert_(dr[2] == end)
+
+ dr = date_range(start=start, end=end)
+ self.assert_(dr.tz == tz('US/Eastern'))
+ self.assert_(dr[0] == start)
+ self.assert_(dr[2] == end)
+
if __name__ == '__main__':
import nose
diff --git a/pandas/tseries/tests/test_timeseries.py b/pandas/tseries/tests/test_timeseries.py
index 2b42fc5c42542..e43aca8bf764b 100644
--- a/pandas/tseries/tests/test_timeseries.py
+++ b/pandas/tseries/tests/test_timeseries.py
@@ -42,6 +42,13 @@
from numpy.testing.decorators import slow
+def _skip_if_no_pytz():
+ try:
+ import pytz
+ except ImportError:
+ raise nose.SkipTest
+
+
class TestTimeSeriesDuplicates(unittest.TestCase):
_multiprocess_can_split_ = True
@@ -168,13 +175,6 @@ def assert_range_equal(left, right):
assert(left.tz == right.tz)
-def _skip_if_no_pytz():
- try:
- import pytz
- except ImportError:
- raise nose.SkipTest
-
-
class TestTimeSeries(unittest.TestCase):
_multiprocess_can_split_ = True
@@ -1265,6 +1265,29 @@ def test_append_concat(self):
self.assert_(rng1.append(rng1).name == 'foo')
self.assert_(rng1.append(rng2).name is None)
+ def test_append_concat_tz(self):
+ #GH 2938
+ _skip_if_no_pytz()
+
+ rng = date_range('5/8/2012 1:45', periods=10, freq='5T',
+ tz='US/Eastern')
+ rng2 = date_range('5/8/2012 2:35', periods=10, freq='5T',
+ tz='US/Eastern')
+ rng3 = date_range('5/8/2012 1:45', periods=20, freq='5T',
+ tz='US/Eastern')
+ ts = Series(np.random.randn(len(rng)), rng)
+ df = DataFrame(np.random.randn(len(rng), 4), index=rng)
+ ts2 = Series(np.random.randn(len(rng2)), rng2)
+ df2 = DataFrame(np.random.randn(len(rng2), 4), index=rng2)
+
+ result = ts.append(ts2)
+ result_df = df.append(df2)
+ self.assert_(result.index.equals(rng3))
+ self.assert_(result_df.index.equals(rng3))
+
+ appended = rng.append(rng2)
+ self.assert_(appended.equals(rng3))
+
def test_set_dataframe_column_ns_dtype(self):
x = DataFrame([datetime.now(), datetime.now()])
self.assert_(x[0].dtype == np.dtype('M8[ns]'))
| fixes #2906, #2938
| https://api.github.com/repos/pandas-dev/pandas/pulls/2935 | 2013-02-26T04:36:16Z | 2013-03-13T23:53:49Z | 2013-03-13T23:53:49Z | 2014-08-04T14:34:43Z |
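The guard the patch above wraps around `tools._infer_tzinfo` (raise a clear `ValueError` when start and end carry different timezones) can be sketched in pure Python; `infer_tz` is a hypothetical mirror of that internal helper, shown with `datetime.timezone` so it runs without pandas or pytz:

```python
from datetime import datetime, timezone, timedelta

def infer_tz(start, end):
    """Sketch of _infer_tzinfo's contract: endpoints must agree on tz."""
    tzs = {d.tzinfo for d in (start, end)
           if d is not None and d.tzinfo is not None}
    if len(tzs) > 1:
        # the patch converts the helper's failure into this message
        raise ValueError("Start and end cannot both be tz-aware with "
                         "different timezones")
    return tzs.pop() if tzs else None

est = timezone(timedelta(hours=-5))
start = datetime(2011, 1, 1, tzinfo=est)
end = datetime(2011, 1, 3, tzinfo=est)
print(infer_tz(start, end))
```

The second half of the diff then threads the inferred `tz` through `_generate_regular_range` via `DatetimeIndex._simple_new`, so the offset is applied once rather than double-counted.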
BUG: fixes issue in HDFStore w.r.t. compressed empty sparse series (GH #2931) | diff --git a/pandas/io/pytables.py b/pandas/io/pytables.py
index 8067d7e0be17f..ac7ca152ffcee 100644
--- a/pandas/io/pytables.py
+++ b/pandas/io/pytables.py
@@ -1596,6 +1596,16 @@ def read_index_node(self, node):
return name, index
+
+ def write_array_empty(self, key, value):
+ """ write a 0-len array """
+
+ # ugly hack for length 0 axes
+ arr = np.empty((1,) * value.ndim)
+ self._handle.createArray(self.group, key, arr)
+ getattr(self.group, key)._v_attrs.value_type = str(value.dtype)
+ getattr(self.group, key)._v_attrs.shape = value.shape
+
def write_array(self, key, value):
if key in self.group:
self._handle.removeNode(self.group, key)
@@ -1618,11 +1628,16 @@ def write_array(self, key, value):
if atom is not None:
# create an empty chunked array and fill it from value
- ca = self._handle.createCArray(self.group, key, atom,
- value.shape,
- filters=self._filters)
- ca[:] = value
- getattr(self.group, key)._v_attrs.transposed = transposed
+ if not empty_array:
+ ca = self._handle.createCArray(self.group, key, atom,
+ value.shape,
+ filters=self._filters)
+ ca[:] = value
+ getattr(self.group, key)._v_attrs.transposed = transposed
+
+ else:
+ self.write_array_empty(key, value)
+
return
if value.dtype.type == np.object_:
@@ -1645,11 +1660,7 @@ def write_array(self, key, value):
getattr(self.group, key)._v_attrs.value_type = 'datetime64'
else:
if empty_array:
- # ugly hack for length 0 axes
- arr = np.empty((1,) * value.ndim)
- self._handle.createArray(self.group, key, arr)
- getattr(self.group, key)._v_attrs.value_type = str(value.dtype)
- getattr(self.group, key)._v_attrs.shape = value.shape
+ self.write_array_empty(key, value)
else:
self._handle.createArray(self.group, key, value)
@@ -1720,7 +1731,7 @@ def write(self, obj, **kwargs):
self.write_index('sp_index', obj.sp_index)
self.write_array('sp_values', obj.sp_values)
self.attrs.name = obj.name
- self.attrs.dill_value = obj.fill_value
+ self.attrs.fill_value = obj.fill_value
self.attrs.kind = obj.kind
class SparseFrameStorer(GenericStorer):
diff --git a/pandas/io/tests/test_pytables.py b/pandas/io/tests/test_pytables.py
index d4654d01f1e1e..986329a615665 100644
--- a/pandas/io/tests/test_pytables.py
+++ b/pandas/io/tests/test_pytables.py
@@ -1282,6 +1282,7 @@ def test_sparse_frame(self):
s.ix[3:5, 1:3] = np.nan
s.ix[8:10, -2] = np.nan
ss = s.to_sparse()
+
self._check_double_roundtrip(ss, tm.assert_frame_equal,
check_frame_type=True)
@@ -1565,6 +1566,36 @@ def test_overwrite_node(self):
tm.assert_series_equal(store['a'], ts)
+ def test_sparse_with_compression(self):
+
+ # GH 2931
+
+ # make sparse dataframe
+ df = DataFrame(np.random.binomial(n=1, p=.01, size=(1e3, 10))).to_sparse(fill_value=0)
+
+ # case 1: store uncompressed
+ self._check_double_roundtrip(df, tm.assert_frame_equal,
+ compression = False,
+ check_frame_type=True)
+
+ # case 2: store compressed (works)
+ self._check_double_roundtrip(df, tm.assert_frame_equal,
+ compression = 'zlib',
+ check_frame_type=True)
+
+ # set one series to be completely sparse
+ df[0] = np.zeros(1e3)
+
+ # case 3: store df with completely sparse series uncompressed
+ self._check_double_roundtrip(df, tm.assert_frame_equal,
+ compression = False,
+ check_frame_type=True)
+
+ # case 4: try storing df with completely sparse series compressed (fails)
+ self._check_double_roundtrip(df, tm.assert_frame_equal,
+ compression = 'zlib',
+ check_frame_type=True)
+
def test_select(self):
wp = tm.makePanel()
@@ -1967,7 +1998,7 @@ def _check_double_roundtrip(self, obj, comparator, compression=False,
**kwargs):
options = {}
if compression:
- options['complib'] = _default_compressor
+ options['complib'] = compression or _default_compressor
with ensure_clean(self.path, 'w', **options) as store:
store['obj'] = obj
| - Zero-length SparseSeries were not being saved when the store was compressed
Closes GH #2931
| https://api.github.com/repos/pandas-dev/pandas/pulls/2933 | 2013-02-26T01:26:17Z | 2013-02-26T02:44:59Z | 2013-02-26T02:44:59Z | 2014-06-27T14:00:23Z |
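The `write_array_empty` helper factored out above stores a 1-element placeholder array plus the real shape and dtype as node attributes, because PyTables cannot create a compressed CArray with a zero-length axis. The idea can be sketched without PyTables, using a plain dict as a stand-in store (names here are illustrative, not the HDFStore API):

```python
import numpy as np

def write_empty(store, key, value):
    # stash a (1,)*ndim placeholder plus the true shape/dtype,
    # mirroring what write_array_empty records in _v_attrs
    store[key] = {
        "data": np.empty((1,) * value.ndim),
        "value_type": str(value.dtype),
        "shape": value.shape,
    }

def read_empty(store, key):
    meta = store[key]
    # rebuild the original 0-len array from the sidecar metadata
    return np.empty(meta["shape"], dtype=meta["value_type"])

store = {}
arr = np.empty((0, 3), dtype="float64")
write_empty(store, "sp_values", arr)
out = read_empty(store, "sp_values")
print(out.shape, out.dtype)  # (0, 3) float64
```

The PR routes the compressed-CArray path through this same fallback whenever the array is empty, which is why the fully-sparse-column-plus-zlib case in `test_sparse_with_compression` now round-trips.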
BUG: formatting for Series was printing multiple Dtype lines for long display | diff --git a/pandas/core/format.py b/pandas/core/format.py
index b4a6ac59719b5..c564bb37a18f7 100644
--- a/pandas/core/format.py
+++ b/pandas/core/format.py
@@ -65,14 +65,14 @@
class SeriesFormatter(object):
- def __init__(self, series, buf=None, header=True, length=True,
- na_rep='NaN', name=False, float_format=None, dtype=True):
+ def __init__(self, series, buf=None, header=True, length=True, dtype=True,
+ na_rep='NaN', name=False, float_format=None):
self.series = series
self.buf = buf if buf is not None else StringIO(u"")
self.name = name
- self.dtype = dtype
self.na_rep = na_rep
self.length = length
+ self.dtype = dtype
self.header = header
if float_format is None:
diff --git a/pandas/core/series.py b/pandas/core/series.py
index 041e708d7a907..317806b9de3b7 100644
--- a/pandas/core/series.py
+++ b/pandas/core/series.py
@@ -1047,9 +1047,10 @@ def __unicode__(self):
elif len(self.index) > 0:
result = self._get_repr(print_header=True,
length=len(self) > 50,
- name=True)
+ name=True,
+ dtype=True)
else:
- result = u'Series([], dtype=%s)' % self.dtype
+ result = u'Series([], Dtype: %s)' % self.dtype
assert type(result) == unicode
return result
@@ -1069,9 +1070,10 @@ def _tidy_repr(self, max_vals=20):
"""
num = max_vals // 2
head = self[:num]._get_repr(print_header=True, length=False,
- name=False)
+ dtype=False, name=False)
tail = self[-(max_vals - num):]._get_repr(print_header=False,
length=False,
+ dtype=False,
name=False)
result = head + '\n...\n' + tail
result = '%s\n%s' % (result, self._repr_footer())
@@ -1085,7 +1087,7 @@ def _repr_footer(self):
com.pprint_thing(self.dtype.name))
def to_string(self, buf=None, na_rep='NaN', float_format=None,
- nanRep=None, length=False, name=False):
+ nanRep=None, length=False, dtype=False, name=False):
"""
Render a string representation of the Series
@@ -1100,6 +1102,8 @@ def to_string(self, buf=None, na_rep='NaN', float_format=None,
default None
length : boolean, default False
Add the Series length
+ dtype : boolean, default False
+ Add the Series dtype
name : boolean, default False
Add the Series name (which may be None)
@@ -1114,7 +1118,7 @@ def to_string(self, buf=None, na_rep='NaN', float_format=None,
na_rep = nanRep
the_repr = self._get_repr(float_format=float_format, na_rep=na_rep,
- length=length, name=name)
+ length=length, dtype=dtype, name=name)
assert type(the_repr) == unicode
@@ -1123,7 +1127,7 @@ def to_string(self, buf=None, na_rep='NaN', float_format=None,
else:
print >> buf, the_repr
- def _get_repr(self, name=False, print_header=False, length=True,
+ def _get_repr(self, name=False, print_header=False, length=True, dtype=True,
na_rep='NaN', float_format=None):
"""
@@ -1131,7 +1135,7 @@ def _get_repr(self, name=False, print_header=False, length=True,
"""
formatter = fmt.SeriesFormatter(self, name=name, header=print_header,
- length=length, na_rep=na_rep,
+ length=length, dtype=dtype, na_rep=na_rep,
float_format=float_format)
result = formatter.to_string()
assert type(result) == unicode
@@ -3287,7 +3291,8 @@ def _repr_footer(self):
namestr = "Name: %s, " % str(
self.name) if self.name is not None else ""
- return '%s%sLength: %d' % (freqstr, namestr, len(self))
+ return '%s%sLength: %d, Dtype: %s' % (freqstr, namestr, len(self),
+ com.pprint_thing(self.dtype.name))
def to_timestamp(self, freq=None, how='start', copy=True):
"""
diff --git a/pandas/tests/test_format.py b/pandas/tests/test_format.py
index 4f91b88291a3d..739bea41256df 100644
--- a/pandas/tests/test_format.py
+++ b/pandas/tests/test_format.py
@@ -585,6 +585,15 @@ def test_wide_repr_wide_long_columns(self):
self.assertTrue('ccccc' in result)
self.assertTrue('ddddd' in result)
+ def test_long_series(self):
+ n = 1000
+ s = Series(np.random.randint(-50,50,n),index=['s%04d' % x for x in xrange(n)], dtype='int64')
+
+ import re
+ str_rep = str(s)
+ nmatches = len(re.findall('Dtype',str_rep))
+ self.assert_(nmatches == 1)
+
def test_to_string(self):
from pandas import read_table
import re
@@ -1119,7 +1128,7 @@ def test_to_string(self):
format = '%.4f'.__mod__
result = self.ts.to_string(float_format=format)
result = [x.split()[1] for x in result.split('\n')]
- expected = [format(x) for x in self.ts] + [u'float64']
+ expected = [format(x) for x in self.ts]
self.assertEqual(result, expected)
# empty string
@@ -1132,7 +1141,7 @@ def test_to_string(self):
# name and length
cp = self.ts.copy()
cp.name = 'foo'
- result = cp.to_string(length=True, name=True)
+ result = cp.to_string(length=True, name=True, dtype=True)
last_line = result.split('\n')[-1].strip()
self.assertEqual(last_line, "Freq: B, Name: foo, Length: %d, Dtype: float64" % len(cp))
@@ -1149,8 +1158,7 @@ def test_to_string_mixed(self):
expected = (u'0 foo\n'
u'1 NaN\n'
u'2 -1.23\n'
- u'3 4.56\n'
- u'Dtype: object')
+ u'3 4.56')
self.assertEqual(result, expected)
# but don't count NAs as floats
@@ -1159,8 +1167,7 @@ def test_to_string_mixed(self):
expected = (u'0 foo\n'
'1 NaN\n'
'2 bar\n'
- '3 baz\n'
- u'Dtype: object')
+ '3 baz')
self.assertEqual(result, expected)
s = Series(['foo', 5, 'bar', 'baz'])
@@ -1168,8 +1175,7 @@ def test_to_string_mixed(self):
expected = (u'0 foo\n'
'1 5\n'
'2 bar\n'
- '3 baz\n'
- u'Dtype: object')
+ '3 baz')
self.assertEqual(result, expected)
def test_to_string_float_na_spacing(self):
@@ -1181,8 +1187,7 @@ def test_to_string_float_na_spacing(self):
'1 1.5678\n'
'2 NaN\n'
'3 -3.0000\n'
- '4 NaN\n'
- u'Dtype: float64')
+ '4 NaN')
self.assertEqual(result, expected)
def test_unicode_name_in_footer(self):
@@ -1216,14 +1221,12 @@ def test_timedelta64(self):
result = y.to_string()
self.assertTrue('1 days, 00:00:00' in result)
self.assertTrue('NaT' in result)
- self.assertTrue('timedelta64[ns]' in result)
# with frac seconds
s = Series(date_range('2012-1-1', periods=3, freq='D'))
y = s-datetime(2012,1,1,microsecond=150)
result = y.to_string()
self.assertTrue('00:00:00.000150' in result)
- self.assertTrue('timedelta64[ns]' in result)
def test_mixed_datetime64(self):
df = DataFrame({'A': [1, 2],
| https://api.github.com/repos/pandas-dev/pandas/pulls/2930 | 2013-02-25T21:57:41Z | 2013-02-26T02:46:09Z | 2013-02-26T02:46:09Z | 2013-03-10T17:06:35Z | |
ENH: numexpr on boolean frames | diff --git a/README.rst b/README.rst
index 66a7a3e2b627f..d6d1fa3ad658b 100644
--- a/README.rst
+++ b/README.rst
@@ -70,6 +70,12 @@ Dependencies
* `pytz <http://pytz.sourceforge.net/>`__
* Needed for time zone support with ``date_range``
+Highly Recommended Dependencies
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ * `numexpr <http://code.google.com/p/numexpr/>`__: to accelerate some expression evaluation operations
+ also required by `PyTables`
+ * `bottleneck <http://berkeleyanalytics.com/>`__: to accelerate certain numerical operations
+
Optional dependencies
~~~~~~~~~~~~~~~~~~~~~
diff --git a/doc/source/v0.11.0.txt b/doc/source/v0.11.0.txt
index cd7540328230f..b75ea0e664f6d 100644
--- a/doc/source/v0.11.0.txt
+++ b/doc/source/v0.11.0.txt
@@ -224,6 +224,12 @@ API changes
Enhancements
~~~~~~~~~~~~
+ - Numexpr is now a 'highly recommended dependency', to accelerate certain
+ types of expression evaluation
+
+ - Bottleneck is now a 'highly recommended dependency', to accelerate certain
+ types of numerical evaluations
+
- In ``HDFStore``, provide dotted attribute access to ``get`` from stores
(e.g. ``store.df == store['df']``)
diff --git a/pandas/core/expressions.py b/pandas/core/expressions.py
new file mode 100644
index 0000000000000..4199c6f7f890c
--- /dev/null
+++ b/pandas/core/expressions.py
@@ -0,0 +1,129 @@
+"""
+Expressions
+-----------
+
+Offer fast expression evaluation thru numexpr
+
+"""
+import numpy as np
+
+try:
+ import numexpr as ne
+ _NUMEXPR_INSTALLED = True
+except ImportError: # pragma: no cover
+ _NUMEXPR_INSTALLED = False
+
+_USE_NUMEXPR = _NUMEXPR_INSTALLED
+_evaluate = None
+
+# the set of dtypes that we will allow pass to numexpr
+_ALLOWED_DTYPES = set(['int64','int32','float64','float32','bool'])
+
+# the minimum prod shape that we will use numexpr
+_MIN_ELEMENTS = 10000
+
+def set_use_numexpr(v = True):
+ # set/unset to use numexpr
+ global _USE_NUMEXPR
+ if _NUMEXPR_INSTALLED:
+ #print "setting use_numexpr : was->%s, now->%s" % (_USE_NUMEXPR,v)
+ _USE_NUMEXPR = v
+
+ # choose what we are going to do
+ global _evaluate
+ if not _USE_NUMEXPR:
+ _evaluate = _evaluate_standard
+ else:
+ _evaluate = _evaluate_numexpr
+
+ #print "evaluate -> %s" % _evaluate
+
+def set_numexpr_threads(n = None):
+ # if we are using numexpr, set the threads to n
+ # otherwise reset
+ try:
+ if _NUMEXPR_INSTALLED and _USE_NUMEXPR:
+ if n is None:
+ n = ne.detect_number_of_cores()
+ ne.set_num_threads(n)
+ except:
+ pass
+
+
+def _evaluate_standard(op, op_str, a, b, raise_on_error=True):
+ """ standard evaluation """
+ return op(a,b)
+
+def _can_use_numexpr(op, op_str, a, b):
+ """ return a boolean if we WILL be using numexpr """
+ if op_str is not None:
+
+ # required min elements (otherwise we are adding overhead)
+ if np.prod(a.shape) > _MIN_ELEMENTS:
+
+ # check for dtype compatiblity
+ dtypes = set()
+ for o in [ a, b ]:
+ if hasattr(o,'get_dtype_counts'):
+ s = o.get_dtype_counts()
+ if len(s) > 1:
+ return False
+ dtypes |= set(s.index)
+ elif isinstance(o,np.ndarray):
+ dtypes |= set([o.dtype.name])
+
+ # allowed are a superset
+ if not len(dtypes) or _ALLOWED_DTYPES >= dtypes:
+ return True
+
+ return False
+
+def _evaluate_numexpr(op, op_str, a, b, raise_on_error = False):
+ result = None
+
+ if _can_use_numexpr(op, op_str, a, b):
+ try:
+ a_value, b_value = a, b
+ if hasattr(a_value,'values'):
+ a_value = a_value.values
+ if hasattr(b_value,'values'):
+ b_value = b_value.values
+ result = ne.evaluate('a_value %s b_value' % op_str,
+ local_dict={ 'a_value' : a_value,
+ 'b_value' : b_value },
+ casting='safe')
+ except (ValueError), detail:
+ if 'unknown type object' in str(detail):
+ pass
+ except (Exception), detail:
+ if raise_on_error:
+ raise TypeError(str(detail))
+
+ if result is None:
+ result = _evaluate_standard(op,op_str,a,b,raise_on_error)
+
+ return result
+
+# turn myself on
+set_use_numexpr(True)
+
+def evaluate(op, op_str, a, b, raise_on_error=False, use_numexpr=True):
+ """ evaluate and return the expression of the op on a and b
+
+ Parameters
+ ----------
+
+ op : the actual operand
+ op_str: the string version of the op
+ a : left operand
+ b : right operand
+ raise_on_error : pass the error to the higher level if indicated (default is False),
+ otherwise evaluate the op with and return the results
+ use_numexpr : whether to try to use numexpr (default True)
+ """
+
+ if use_numexpr:
+ return _evaluate(op, op_str, a, b, raise_on_error=raise_on_error)
+ return _evaluate_standard(op, op_str, a, b, raise_on_error=raise_on_error)
+
+
diff --git a/pandas/core/frame.py b/pandas/core/frame.py
index 13ae446be5c0b..7ae2cc6d5b6ed 100644
--- a/pandas/core/frame.py
+++ b/pandas/core/frame.py
@@ -31,6 +31,7 @@
_is_index_slice, _check_bool_indexer)
from pandas.core.internals import BlockManager, make_block, form_blocks
from pandas.core.series import Series, _radd_compat
+import pandas.core.expressions as expressions
from pandas.compat.scipy import scoreatpercentile as _quantile
from pandas.util.compat import OrderedDict
from pandas.util import py3compat
@@ -53,7 +54,6 @@
from pandas.core.config import get_option
-
#----------------------------------------------------------------------
# Docstring templates
@@ -186,10 +186,10 @@ class DataConflictError(Exception):
# Factory helper methods
-def _arith_method(op, name, default_axis='columns'):
+def _arith_method(op, name, str_rep = None, default_axis='columns'):
def na_op(x, y):
try:
- result = op(x, y)
+ result = expressions.evaluate(op, str_rep, x, y, raise_on_error=True)
except TypeError:
xrav = x.ravel()
result = np.empty(x.size, dtype=x.dtype)
@@ -240,7 +240,7 @@ def f(self, other, axis=default_axis, level=None, fill_value=None):
return f
-def _flex_comp_method(op, name, default_axis='columns'):
+def _flex_comp_method(op, name, str_rep = None, default_axis='columns'):
def na_op(x, y):
try:
@@ -268,7 +268,7 @@ def na_op(x, y):
@Appender('Wrapper for flexible comparison methods %s' % name)
def f(self, other, axis=default_axis, level=None):
if isinstance(other, DataFrame): # Another DataFrame
- return self._flex_compare_frame(other, na_op, level)
+ return self._flex_compare_frame(other, na_op, str_rep, level)
elif isinstance(other, Series):
return self._combine_series(other, na_op, None, axis, level)
@@ -294,7 +294,7 @@ def f(self, other, axis=default_axis, level=None):
casted = DataFrame(other, index=self.index,
columns=self.columns)
- return self._flex_compare_frame(casted, na_op, level)
+ return self._flex_compare_frame(casted, na_op, str_rep, level)
else: # pragma: no cover
raise ValueError("Bad argument shape")
@@ -307,11 +307,11 @@ def f(self, other, axis=default_axis, level=None):
return f
-def _comp_method(func, name):
+def _comp_method(func, name, str_rep):
@Appender('Wrapper for comparison method %s' % name)
def f(self, other):
if isinstance(other, DataFrame): # Another DataFrame
- return self._compare_frame(other, func)
+ return self._compare_frame(other, func, str_rep)
elif isinstance(other, Series):
return self._combine_series_infer(other, func)
else:
@@ -750,11 +750,11 @@ def __contains__(self, key):
#----------------------------------------------------------------------
# Arithmetic methods
- add = _arith_method(operator.add, 'add')
- mul = _arith_method(operator.mul, 'multiply')
- sub = _arith_method(operator.sub, 'subtract')
- div = divide = _arith_method(lambda x, y: x / y, 'divide')
- pow = _arith_method(operator.pow, 'pow')
+ add = _arith_method(operator.add, 'add', '+')
+ mul = _arith_method(operator.mul, 'multiply', '*')
+ sub = _arith_method(operator.sub, 'subtract', '-')
+ div = divide = _arith_method(lambda x, y: x / y, 'divide', '/')
+ pow = _arith_method(operator.pow, 'pow', '**')
radd = _arith_method(_radd_compat, 'radd')
rmul = _arith_method(operator.mul, 'rmultiply')
@@ -762,14 +762,14 @@ def __contains__(self, key):
rdiv = _arith_method(lambda x, y: y / x, 'rdivide')
rpow = _arith_method(lambda x, y: y ** x, 'rpow')
- __add__ = _arith_method(operator.add, '__add__', default_axis=None)
- __sub__ = _arith_method(operator.sub, '__sub__', default_axis=None)
- __mul__ = _arith_method(operator.mul, '__mul__', default_axis=None)
- __truediv__ = _arith_method(operator.truediv, '__truediv__',
+ __add__ = _arith_method(operator.add, '__add__', '+', default_axis=None)
+ __sub__ = _arith_method(operator.sub, '__sub__', '-', default_axis=None)
+ __mul__ = _arith_method(operator.mul, '__mul__', '*', default_axis=None)
+ __truediv__ = _arith_method(operator.truediv, '__truediv__', '/',
default_axis=None)
__floordiv__ = _arith_method(operator.floordiv, '__floordiv__',
default_axis=None)
- __pow__ = _arith_method(operator.pow, '__pow__', default_axis=None)
+ __pow__ = _arith_method(operator.pow, '__pow__', '**', default_axis=None)
__radd__ = _arith_method(_radd_compat, '__radd__', default_axis=None)
__rmul__ = _arith_method(operator.mul, '__rmul__', default_axis=None)
@@ -782,13 +782,13 @@ def __contains__(self, key):
default_axis=None)
# boolean operators
- __and__ = _arith_method(operator.and_, '__and__')
- __or__ = _arith_method(operator.or_, '__or__')
+ __and__ = _arith_method(operator.and_, '__and__', '&')
+ __or__ = _arith_method(operator.or_, '__or__', '|')
__xor__ = _arith_method(operator.xor, '__xor__')
# Python 2 division methods
if not py3compat.PY3:
- __div__ = _arith_method(operator.div, '__div__', default_axis=None)
+ __div__ = _arith_method(operator.div, '__div__', '/', default_axis=None)
__rdiv__ = _arith_method(lambda x, y: y / x, '__rdiv__',
default_axis=None)
@@ -801,19 +801,19 @@ def __invert__(self):
return self._wrap_array(arr, self.axes, copy=False)
# Comparison methods
- __eq__ = _comp_method(operator.eq, '__eq__')
- __ne__ = _comp_method(operator.ne, '__ne__')
- __lt__ = _comp_method(operator.lt, '__lt__')
- __gt__ = _comp_method(operator.gt, '__gt__')
- __le__ = _comp_method(operator.le, '__le__')
- __ge__ = _comp_method(operator.ge, '__ge__')
-
- eq = _flex_comp_method(operator.eq, 'eq')
- ne = _flex_comp_method(operator.ne, 'ne')
- gt = _flex_comp_method(operator.gt, 'gt')
- lt = _flex_comp_method(operator.lt, 'lt')
- ge = _flex_comp_method(operator.ge, 'ge')
- le = _flex_comp_method(operator.le, 'le')
+ __eq__ = _comp_method(operator.eq, '__eq__', '==')
+ __ne__ = _comp_method(operator.ne, '__ne__', '!=')
+ __lt__ = _comp_method(operator.lt, '__lt__', '<' )
+ __gt__ = _comp_method(operator.gt, '__gt__', '>' )
+ __le__ = _comp_method(operator.le, '__le__', '<=')
+ __ge__ = _comp_method(operator.ge, '__ge__', '>=')
+
+ eq = _flex_comp_method(operator.eq, 'eq', '==')
+ ne = _flex_comp_method(operator.ne, 'ne', '!=')
+ lt = _flex_comp_method(operator.lt, 'lt', '<')
+ gt = _flex_comp_method(operator.gt, 'gt', '>')
+ le = _flex_comp_method(operator.le, 'le', '<=')
+ ge = _flex_comp_method(operator.ge, 'ge', '>=')
def dot(self, other):
"""
@@ -1669,14 +1669,6 @@ def convert_objects(self, convert_dates=True, convert_numeric=False):
"""
return self._constructor(self._data.convert(convert_dates=convert_dates, convert_numeric=convert_numeric))
- def get_dtype_counts(self):
- """ return the counts of dtypes in this frame """
- self._consolidate_inplace()
- counts = dict()
- for b in self._data.blocks:
- counts[b.dtype.name] = counts.get(b.dtype,0) + b.shape[0]
- return Series(counts)
-
#----------------------------------------------------------------------
# properties for index and columns
@@ -3710,25 +3702,25 @@ def _combine_const(self, other, func, raise_on_error = True):
new_data = self._data.eval(func, other, raise_on_error=raise_on_error)
return self._constructor(new_data)
- def _compare_frame(self, other, func):
+ def _compare_frame(self, other, func, str_rep):
if not self._indexed_same(other):
raise Exception('Can only compare identically-labeled '
'DataFrame objects')
- new_data = {}
- for col in self.columns:
- new_data[col] = func(self[col], other[col])
+ def _compare(a, b):
+ return dict([ (col,func(a[col], b[col])) for col in a.columns ])
+ new_data = expressions.evaluate(_compare, str_rep, self, other)
return self._constructor(data=new_data, index=self.index,
columns=self.columns, copy=False)
- def _flex_compare_frame(self, other, func, level):
+ def _flex_compare_frame(self, other, func, str_rep, level):
if not self._indexed_same(other):
self, other = self.align(other, 'outer', level=level)
- new_data = {}
- for col in self.columns:
- new_data[col] = func(self[col], other[col])
+ def _compare(a, b):
+ return dict([ (col,func(a[col], b[col])) for col in a.columns ])
+ new_data = expressions.evaluate(_compare, str_rep, self, other)
return self._constructor(data=new_data, index=self.index,
columns=self.columns, copy=False)
diff --git a/pandas/core/generic.py b/pandas/core/generic.py
index c25e686afacbf..236d7d8aeadf8 100644
--- a/pandas/core/generic.py
+++ b/pandas/core/generic.py
@@ -606,6 +606,11 @@ def __delitem__(self, key):
except KeyError:
pass
+ def get_dtype_counts(self):
+ """ return the counts of dtypes in this frame """
+ from pandas import Series
+ return Series(self._data.get_dtype_counts())
+
def pop(self, item):
"""
Return item and drop from frame. Raise KeyError if not found.
diff --git a/pandas/core/internals.py b/pandas/core/internals.py
index 5bf918aff6367..75605fae4e39f 100644
--- a/pandas/core/internals.py
+++ b/pandas/core/internals.py
@@ -418,17 +418,17 @@ def eval(self, func, other, raise_on_error = True, try_cast = False):
args = [ values, other ]
try:
result = func(*args)
- except:
+ except (Exception), detail:
if raise_on_error:
- raise TypeError('Coulnd not operate %s with block values'
- % repr(other))
+ raise TypeError('Could not operate [%s] with block values [%s]'
+ % (repr(other),str(detail)))
else:
# return the values
result = np.empty(values.shape,dtype='O')
result.fill(np.nan)
if not isinstance(result, np.ndarray):
- raise TypeError('Could not compare %s with block values'
+ raise TypeError('Could not compare [%s] with block values'
% repr(other))
if is_transposed:
@@ -492,10 +492,10 @@ def func(c,v,o):
try:
return np.where(c,v,o)
- except:
+ except (Exception), detail:
if raise_on_error:
- raise TypeError('Coulnd not operate %s with block values'
- % repr(o))
+ raise TypeError('Could not operate [%s] with block values [%s]'
+ % (repr(o),str(detail)))
else:
# return the values
result = np.empty(v.shape,dtype='float64')
@@ -504,7 +504,7 @@ def func(c,v,o):
def create_block(result, items, transpose = True):
if not isinstance(result, np.ndarray):
- raise TypeError('Could not compare %s with block values'
+ raise TypeError('Could not compare [%s] with block values'
% repr(other))
if transpose and is_transposed:
@@ -843,6 +843,14 @@ def _get_items(self):
return self.axes[0]
items = property(fget=_get_items)
+ def get_dtype_counts(self):
+ """ return a dict of the counts of dtypes in BlockManager """
+ self._consolidate_inplace()
+ counts = dict()
+ for b in self.blocks:
+ counts[b.dtype.name] = counts.get(b.dtype,0) + b.shape[0]
+ return counts
+
def __getstate__(self):
block_values = [b.values for b in self.blocks]
block_items = [b.items for b in self.blocks]
diff --git a/pandas/core/series.py b/pandas/core/series.py
index 27480d9e489be..2870bb1ab05b1 100644
--- a/pandas/core/series.py
+++ b/pandas/core/series.py
@@ -695,6 +695,9 @@ def _get_values(self, indexer):
except Exception:
return self.values[indexer]
+ def get_dtype_counts(self):
+ return Series({ self.dtype.name : 1 })
+
def where(self, cond, other=nan, inplace=False):
"""
Return a Series where cond is True; otherwise values are from other
diff --git a/pandas/tests/test_expressions.py b/pandas/tests/test_expressions.py
new file mode 100644
index 0000000000000..a0321d2dbe55f
--- /dev/null
+++ b/pandas/tests/test_expressions.py
@@ -0,0 +1,130 @@
+# pylint: disable-msg=W0612,E1101
+
+import unittest
+import nose
+
+import operator
+from numpy import random, nan
+from numpy.random import randn
+import numpy as np
+from numpy.testing import assert_array_equal
+
+import pandas as pan
+from pandas.core.api import DataFrame, Series, notnull, isnull
+from pandas.core import expressions as expr
+
+from pandas.util.testing import (assert_almost_equal,
+ assert_series_equal,
+ assert_frame_equal)
+from pandas.util import py3compat
+
+import pandas.util.testing as tm
+import pandas.lib as lib
+
+from numpy.testing.decorators import slow
+
+if not expr._USE_NUMEXPR:
+ raise nose.SkipTest
+
+_frame = DataFrame(np.random.randn(10000, 4), columns = list('ABCD'), dtype='float64')
+_frame2 = DataFrame(np.random.randn(100, 4), columns = list('ABCD'), dtype='float64')
+_mixed = DataFrame({ 'A' : _frame['A'].copy(), 'B' : _frame['B'].astype('float32'), 'C' : _frame['C'].astype('int64'), 'D' : _frame['D'].astype('int32') })
+_mixed2 = DataFrame({ 'A' : _frame2['A'].copy(), 'B' : _frame2['B'].astype('float32'), 'C' : _frame2['C'].astype('int64'), 'D' : _frame2['D'].astype('int32') })
+
+class TestExpressions(unittest.TestCase):
+
+ _multiprocess_can_split_ = False
+
+ def setUp(self):
+
+ self.frame = _frame.copy()
+ self.frame2 = _frame2.copy()
+ self.mixed = _mixed.copy()
+ self.mixed2 = _mixed2.copy()
+
+
+ def test_invalid(self):
+
+ # no op
+ result = expr._can_use_numexpr(operator.add, None, self.frame, self.frame)
+ self.assert_(result == False)
+
+ # mixed
+ result = expr._can_use_numexpr(operator.add, '+', self.mixed, self.frame)
+ self.assert_(result == False)
+
+ # min elements
+ result = expr._can_use_numexpr(operator.add, '+', self.frame2, self.frame2)
+ self.assert_(result == False)
+
+ # ok, we only check on first part of expression
+ result = expr._can_use_numexpr(operator.add, '+', self.frame, self.frame2)
+ self.assert_(result == True)
+
+ def test_binary_ops(self):
+
+ def testit():
+
+ for f, f2 in [ (self.frame, self.frame2), (self.mixed, self.mixed2) ]:
+
+ for op, op_str in [('add','+'),('sub','-'),('mul','*'),('div','/'),('pow','**')]:
+
+ op = getattr(operator,op)
+ result = expr._can_use_numexpr(op, op_str, f, f)
+ self.assert_(result == (not f._is_mixed_type))
+
+ result = expr.evaluate(op, op_str, f, f, use_numexpr=True)
+ expected = expr.evaluate(op, op_str, f, f, use_numexpr=False)
+ assert_array_equal(result,expected.values)
+
+ result = expr._can_use_numexpr(op, op_str, f2, f2)
+ self.assert_(result == False)
+
+
+ expr.set_use_numexpr(False)
+ testit()
+ expr.set_use_numexpr(True)
+ expr.set_numexpr_threads(1)
+ testit()
+ expr.set_numexpr_threads()
+ testit()
+
+ def test_boolean_ops(self):
+
+
+ def testit():
+ for f, f2 in [ (self.frame, self.frame2), (self.mixed, self.mixed2) ]:
+
+ f11 = f
+ f12 = f + 1
+
+ f21 = f2
+ f22 = f2 + 1
+
+ for op, op_str in [('gt','>'),('lt','<'),('ge','>='),('le','<='),('eq','=='),('ne','!=')]:
+
+ op = getattr(operator,op)
+
+ result = expr._can_use_numexpr(op, op_str, f11, f12)
+ self.assert_(result == (not f11._is_mixed_type))
+
+ result = expr.evaluate(op, op_str, f11, f12, use_numexpr=True)
+ expected = expr.evaluate(op, op_str, f11, f12, use_numexpr=False)
+ assert_array_equal(result,expected.values)
+
+ result = expr._can_use_numexpr(op, op_str, f21, f22)
+ self.assert_(result == False)
+
+ expr.set_use_numexpr(False)
+ testit()
+ expr.set_use_numexpr(True)
+ expr.set_numexpr_threads(1)
+ testit()
+ expr.set_numexpr_threads()
+ testit()
+
+if __name__ == '__main__':
+ # unittest.main()
+ import nose
+ nose.runmodule(argv=[__file__, '-vvs', '-x', '--pdb', '--pdb-failure'],
+ exit=False)
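The eligibility check these tests exercise (`_can_use_numexpr`) can be sketched as a standalone heuristic — an illustrative sketch, not the pandas implementation, and the element threshold below is an assumed value:

```python
# Sketch of the dispatch check exercised by test_invalid: an expression
# is only sent to numexpr if there is an op string, the operand is not
# mixed-dtype, and it is large enough to be worth the overhead.
_MIN_ELEMENTS = 10000  # assumed cutoff; pandas uses its own threshold

def can_use_numexpr(op_str, shape, dtypes):
    if op_str is None:           # no op string -> plain evaluation
        return False
    if len(set(dtypes)) > 1:     # mixed dtypes are not eligible
        return False
    n_elements = shape[0] * shape[1]
    return n_elements >= _MIN_ELEMENTS

# mirrors the test cases: a large homogeneous frame qualifies,
# while a small or mixed-dtype one does not
assert can_use_numexpr('+', (10000, 4), ['float64'] * 4)
assert not can_use_numexpr(None, (10000, 4), ['float64'] * 4)
assert not can_use_numexpr('+', (100, 4), ['float64'] * 4)
assert not can_use_numexpr('+', (10000, 4), ['float64', 'int64'])
```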
diff --git a/pandas/tests/test_frame.py b/pandas/tests/test_frame.py
index c3f6b0b1640c3..0729f0e03782e 100644
--- a/pandas/tests/test_frame.py
+++ b/pandas/tests/test_frame.py
@@ -49,6 +49,8 @@ def _skip_if_no_scipy():
MIXED_INT_DTYPES = ['uint8','uint16','uint32','uint64','int8','int16','int32','int64']
def _check_mixed_float(df, dtype = None):
+
+ # float16 are most likely to be upcasted to float32
dtypes = dict(A = 'float32', B = 'float32', C = 'float16', D = 'float64')
if isinstance(dtype, basestring):
dtypes = dict([ (k,dtype) for k, v in dtypes.items() ])
@@ -189,7 +191,7 @@ def test_setitem_list_of_tuples(self):
result = self.frame['tuples']
expected = Series(tuples, index=self.frame.index)
assert_series_equal(result, expected)
-
+
def test_getitem_boolean(self):
# boolean indexing
d = self.tsframe.index[10]
@@ -3933,7 +3935,7 @@ def test_arith_flex_frame(self):
result = getattr(self.mixed_float, op)(2 * self.mixed_float)
exp = f(self.mixed_float, 2 * self.mixed_float)
assert_frame_equal(result, exp)
- _check_mixed_float(result)
+ _check_mixed_float(result, dtype = dict(C = None))
# vs mix int
if op in ['add','sub','mul']:
@@ -3943,7 +3945,9 @@ def test_arith_flex_frame(self):
# overflow in the uint
dtype = None
if op in ['sub']:
- dtype = dict(B = 'object')
+ dtype = dict(B = 'object', C = None)
+ elif op in ['add','mul']:
+ dtype = dict(C = None)
assert_frame_equal(result, exp)
_check_mixed_int(result, dtype = dtype)
@@ -4233,9 +4237,9 @@ def test_combineFrame(self):
# mix vs mix
added = self.mixed_float + self.mixed_float2
- _check_mixed_float(added)
+ _check_mixed_float(added, dtype = dict(C = None))
added = self.mixed_float2 + self.mixed_float
- _check_mixed_float(added)
+ _check_mixed_float(added, dtype = dict(C = None))
# with int
added = self.frame + self.mixed_int
@@ -4265,15 +4269,16 @@ def test_combineSeries(self):
added = self.mixed_float + series
_check_mixed_float(added, dtype = 'float64')
added = self.mixed_float + series.astype('float32')
- _check_mixed_float(added, dtype = dict(C = 'float32'))
+ _check_mixed_float(added, dtype = dict(C = None))
added = self.mixed_float + series.astype('float16')
- _check_mixed_float(added)
+ _check_mixed_float(added, dtype = dict(C = None))
+ #### these raise with numexpr.....as we are adding an int64 to an uint64....weird
# vs int
- added = self.mixed_int + (100*series).astype('int64')
- _check_mixed_int(added, dtype = dict(A = 'int64', B = 'float64', C = 'int64', D = 'int64'))
- added = self.mixed_int + (100*series).astype('int32')
- _check_mixed_int(added, dtype = dict(A = 'int32', B = 'float64', C = 'int32', D = 'int64'))
+ #added = self.mixed_int + (100*series).astype('int64')
+ #_check_mixed_int(added, dtype = dict(A = 'int64', B = 'float64', C = 'int64', D = 'int64'))
+ #added = self.mixed_int + (100*series).astype('int32')
+ #_check_mixed_int(added, dtype = dict(A = 'int32', B = 'float64', C = 'int32', D = 'int64'))
# TimeSeries
import sys
@@ -4320,7 +4325,7 @@ def test_combineFunc(self):
result = self.mixed_float * 2
for c, s in result.iteritems():
self.assert_(np.array_equal(s.values, self.mixed_float[c].values * 2))
- _check_mixed_float(result)
+ _check_mixed_float(result, dtype = dict(C = None))
result = self.empty * 2
self.assert_(result.index is self.empty.index)
diff --git a/vb_suite/binary_ops.py b/vb_suite/binary_ops.py
index b28d1d9ee0806..7a2b03643dc46 100644
--- a/vb_suite/binary_ops.py
+++ b/vb_suite/binary_ops.py
@@ -3,3 +3,103 @@
common_setup = """from pandas_vb_common import *
"""
+
+SECTION = 'Binary ops'
+
+#----------------------------------------------------------------------
+# binary ops
+
+#----------------------------------------------------------------------
+# add
+
+setup = common_setup + """
+df = DataFrame(np.random.randn(100000, 100))
+df2 = DataFrame(np.random.randn(100000, 100))
+"""
+frame_add = \
+ Benchmark("df + df2", setup, name='frame_add',
+ start_date=datetime(2012, 1, 1))
+
+setup = common_setup + """
+import pandas.core.expressions as expr
+df = DataFrame(np.random.randn(100000, 100))
+df2 = DataFrame(np.random.randn(100000, 100))
+expr.set_numexpr_threads(1)
+"""
+
+frame_add_st = \
+ Benchmark("df + df2", setup, name='frame_add_st',cleanup="expr.set_numexpr_threads()",
+ start_date=datetime(2012, 1, 1))
+
+setup = common_setup + """
+import pandas.core.expressions as expr
+df = DataFrame(np.random.randn(100000, 100))
+df2 = DataFrame(np.random.randn(100000, 100))
+expr.set_use_numexpr(False)
+"""
+frame_add_no_ne = \
+ Benchmark("df + df2", setup, name='frame_add_no_ne',cleanup="expr.set_use_numexpr(True)",
+ start_date=datetime(2012, 1, 1))
+
+#----------------------------------------------------------------------
+# mult
+
+setup = common_setup + """
+df = DataFrame(np.random.randn(100000, 100))
+df2 = DataFrame(np.random.randn(100000, 100))
+"""
+frame_mult = \
+ Benchmark("df * df2", setup, name='frame_mult',
+ start_date=datetime(2012, 1, 1))
+
+setup = common_setup + """
+import pandas.core.expressions as expr
+df = DataFrame(np.random.randn(100000, 100))
+df2 = DataFrame(np.random.randn(100000, 100))
+expr.set_numexpr_threads(1)
+"""
+frame_mult_st = \
+ Benchmark("df * df2", setup, name='frame_mult_st',cleanup="expr.set_numexpr_threads()",
+ start_date=datetime(2012, 1, 1))
+
+setup = common_setup + """
+import pandas.core.expressions as expr
+df = DataFrame(np.random.randn(100000, 100))
+df2 = DataFrame(np.random.randn(100000, 100))
+expr.set_use_numexpr(False)
+"""
+frame_mult_no_ne = \
+ Benchmark("df * df2", setup, name='frame_mult_no_ne',cleanup="expr.set_use_numexpr(True)",
+ start_date=datetime(2012, 1, 1))
+
+#----------------------------------------------------------------------
+# multi and
+
+setup = common_setup + """
+df = DataFrame(np.random.randn(100000, 100))
+df2 = DataFrame(np.random.randn(100000, 100))
+"""
+frame_multi_and = \
+ Benchmark("df[(df>0) & (df2>0)]", setup, name='frame_multi_and',
+ start_date=datetime(2012, 1, 1))
+
+setup = common_setup + """
+import pandas.core.expressions as expr
+df = DataFrame(np.random.randn(100000, 100))
+df2 = DataFrame(np.random.randn(100000, 100))
+expr.set_numexpr_threads(1)
+"""
+frame_multi_and_st = \
+ Benchmark("df[(df>0) & (df2>0)]", setup, name='frame_multi_and_st',cleanup="expr.set_numexpr_threads()",
+ start_date=datetime(2012, 1, 1))
+
+setup = common_setup + """
+import pandas.core.expressions as expr
+df = DataFrame(np.random.randn(100000, 100))
+df2 = DataFrame(np.random.randn(100000, 100))
+expr.set_use_numexpr(False)
+"""
+frame_multi_and_no_ne = \
+ Benchmark("df[(df>0) & (df2>0)]", setup, name='frame_multi_and_no_ne',cleanup="expr.set_use_numexpr(True)",
+ start_date=datetime(2012, 1, 1))
+
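Outside the vb_suite harness, the compare-two-setups pattern these `Benchmark` definitions encode can be sketched with the stdlib `timeit` module (a generic sketch; the statements below are placeholders, not the pandas benchmarks):

```python
import timeit

# Minimal stand-in for the Benchmark objects above: time a statement
# under two variants of the same setup and report the ratio, as the
# vb_suite result tables do.
def compare(stmt_a, stmt_b, setup, number=1000):
    t_a = timeit.timeit(stmt_a, setup=setup, number=number)
    t_b = timeit.timeit(stmt_b, setup=setup, number=number)
    return t_a / t_b

setup = "xs = list(range(1000)); ys = list(range(1000))"
ratio = compare("[x + y for x, y in zip(xs, ys)]",
                "[x * y for x, y in zip(xs, ys)]", setup)
assert ratio > 0
```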
diff --git a/vb_suite/indexing.py b/vb_suite/indexing.py
index 0c4898089a97f..ceda346fd3e57 100644
--- a/vb_suite/indexing.py
+++ b/vb_suite/indexing.py
@@ -83,7 +83,7 @@
# Boolean DataFrame row selection
setup = common_setup + """
-df = DataFrame(np.random.randn(10000, 4), columns=['A', 'B', 'C', 'D'])
+df = DataFrame(np.random.randn(10000, 4), columns=['A', 'B', 'C', 'D'])
indexer = df['B'] > 0
obj_indexer = indexer.astype('O')
"""
@@ -94,6 +94,36 @@
Benchmark("df[obj_indexer]", setup,
name='indexing_dataframe_boolean_rows_object')
+setup = common_setup + """
+df = DataFrame(np.random.randn(100000, 100))
+df2 = DataFrame(np.random.randn(100000, 100))
+"""
+indexing_dataframe_boolean = \
+ Benchmark("df > df2", setup, name='indexing_dataframe_boolean',
+ start_date=datetime(2012, 1, 1))
+
+setup = common_setup + """
+import pandas.core.expressions as expr
+df = DataFrame(np.random.randn(100000, 100))
+df2 = DataFrame(np.random.randn(100000, 100))
+expr.set_numexpr_threads(1)
+"""
+
+indexing_dataframe_boolean_st = \
+ Benchmark("df > df2", setup, name='indexing_dataframe_boolean_st',cleanup="expr.set_numexpr_threads()",
+ start_date=datetime(2012, 1, 1))
+
+setup = common_setup + """
+import pandas.core.expressions as expr
+df = DataFrame(np.random.randn(100000, 100))
+df2 = DataFrame(np.random.randn(100000, 100))
+expr.set_use_numexpr(False)
+"""
+
+indexing_dataframe_boolean_no_ne = \
+ Benchmark("df > df2", setup, name='indexing_dataframe_boolean_no_ne',cleanup="expr.set_use_numexpr(True)",
+ start_date=datetime(2012, 1, 1))
+
#----------------------------------------------------------------------
# MultiIndex sortlevel
 | simple usage of numexpr on a boolean-type frame comparison
```
name
indexing_dataframe_boolean 13.3200 125.3521 0.1063
frame_mult 21.7157 36.6353 0.5928
frame_add 22.0432 36.5021 0.6039
frame_multi_and 385.4241 407.4359 0.9460
```
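The third column is (presumably) the ratio of the numexpr time to the plain time; it can be reproduced from the first two columns:

```python
# Reproduce the ratio column (time with numexpr / time without)
# from the table above.
rows = {
    "indexing_dataframe_boolean": (13.3200, 125.3521),
    "frame_mult": (21.7157, 36.6353),
    "frame_add": (22.0432, 36.5021),
    "frame_multi_and": (385.4241, 407.4359),
}
for name, (with_ne, without_ne) in rows.items():
    print(name, round(with_ne / without_ne, 4))
# indexing_dataframe_boolean 0.1063
# frame_mult 0.5928
# frame_add 0.6039
# frame_multi_and 0.946
```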
- optional use of numexpr (though I would highly suggest using it)
- example usage, many more cases like this
_frame_multi_and is the case we are talking about below: a boolean expression anded with another;
numexpr can only optimize each sub-expression, not the 3 terms together_
original issue is #724
Here are some runs (on Travis CI) for these 4 routines:
- with no numexpr (no_ne)
- with numexpr single threaded (_st)
- with numexpr (no suffix)
So numexpr helps in the boolean case both with and without parallelization;
for add/multiply it is the parallelization that helps (numexpr alone doesn't do much)
```
frame_mult : 27.0671 [ms]
frame_mult_st : 85.1929 [ms]
frame_mult_no_ne : 86.2602 [ms]
frame_add : 26.3325 [ms]
frame_add_st : 74.8805 [ms]
frame_add_no_ne : 73.9357 [ms]
indexing_dataframe_boolean : 15.0413 [ms]
indexing_dataframe_boolean_st : 25.8873 [ms]
indexing_dataframe_boolean_no_ne : 288.4722 [ms]
frame_multi_and : 637.8641 [ms]
frame_multi_and_st : 675.7760 [ms]
frame_multi_and_no_ne : 589.3791 [ms]
```
| https://api.github.com/repos/pandas-dev/pandas/pulls/2925 | 2013-02-25T14:49:58Z | 2013-03-09T19:42:52Z | 2013-03-09T19:42:52Z | 2014-06-13T03:30:45Z |
ENH: add .iloc attribute to provide location-based indexing | diff --git a/RELEASE.rst b/RELEASE.rst
index cf3fd598a8186..78e946006e1fb 100644
--- a/RELEASE.rst
+++ b/RELEASE.rst
@@ -29,12 +29,19 @@ pandas 0.11.0
**New features**
+ - New documentation section, ``10 Minutes to Pandas``
- Allow mixed dtypes (e.g ``float32/float64/int32/int16/int8``) to coexist in
DataFrames and propagate in operations
- Add function to pandas.io.data for retrieving stock index components from
Yahoo! finance (GH2795_)
- Add ``squeeze`` function to reduce dimensionality of 1-len objects
- Support slicing with time objects (GH2681_)
+ - Added ``.iloc`` attribute, to support strict integer based indexing, analogous to ``.ix`` (GH2922_)
+ - Added ``.loc`` attribute, to support strict label based indexing, analogous to ``.ix``
+ - Added ``.iat`` attribute, to support fast scalar access via integers (replaces ``iget_value/iset_value``)
+ - Added ``.at`` attribute, to support fast scalar access via labels (replaces ``get_value/set_value``)
+ - Moved functionality from ``irow,icol,iget_value/iset_value`` to ``.iloc`` indexer
+ (via ``_ixs`` methods in each object)
**Improvements to existing features**
@@ -51,6 +58,8 @@ pandas 0.11.0
- ``describe_option()`` now reports the default and current value of options.
- Add ``format`` option to ``pandas.to_datetime`` with faster conversion of
strings that can be parsed with datetime.strptime
+ - Add ``axes`` property to ``Series`` for compatibility
+ - Add ``xs`` function to ``Series`` for compatibility
**API Changes**
@@ -127,6 +136,7 @@ pandas 0.11.0
- Bug on in-place putmasking on an ``integer`` series that needs to be converted to ``float`` (GH2746_)
- Bug in argsort of ``datetime64[ns]`` Series with ``NaT`` (GH2967_)
- Bug in idxmin/idxmax of ``datetime64[ns]`` Series with ``NaT`` (GH2982__)
+ - Bug in ``icol`` with negative indices producing incorrect return values (see GH2922_)
.. _GH622: https://github.com/pydata/pandas/issues/622
.. _GH797: https://github.com/pydata/pandas/issues/797
@@ -145,6 +155,7 @@ pandas 0.11.0
.. _GH2849: https://github.com/pydata/pandas/issues/2849
.. _GH2898: https://github.com/pydata/pandas/issues/2898
.. _GH2909: https://github.com/pydata/pandas/issues/2909
+.. _GH2922: https://github.com/pydata/pandas/issues/2922
.. _GH2931: https://github.com/pydata/pandas/issues/2931
.. _GH2973: https://github.com/pydata/pandas/issues/2973
.. _GH2967: https://github.com/pydata/pandas/issues/2967
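The four accessors listed in the release notes above can be sketched in use on a toy frame (assuming a pandas version that provides them, i.e. >= 0.11):

```python
# Quick sketch of the four new accessors on a small labeled frame.
import numpy as np
import pandas as pd

df = pd.DataFrame(np.arange(6).reshape(3, 2),
                  index=list("abc"), columns=["x", "y"])

assert df.loc["b", "y"] == 3   # strict label-based indexing
assert df.iloc[1, 1] == 3      # strict integer-position indexing
assert df.at["b", "y"] == 3    # fast scalar access by label
assert df.iat[1, 1] == 3       # fast scalar access by position
```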
diff --git a/doc/source/10min.rst b/doc/source/10min.rst
new file mode 100644
index 0000000000000..a6945eed1387c
--- /dev/null
+++ b/doc/source/10min.rst
@@ -0,0 +1,687 @@
+.. _10min:
+
+.. currentmodule:: pandas
+
+.. ipython:: python
+ :suppress:
+
+ import numpy as np
+ import random
+ import os
+ np.random.seed(123456)
+ from pandas import *
+ import pandas as pd
+ randn = np.random.randn
+ randint = np.random.randint
+ np.set_printoptions(precision=4, suppress=True)
+
+ #### portions of this were borrowed from the
+ #### Pandas cheatsheet
+ #### created during the PyData Workshop-Sprint 2012
+ #### Hannah Chen, Henry Chow, Eric Cox, Robert Mauriello
+
+
+********************
+10 Minutes to Pandas
+********************
+
+This is a short introduction to pandas, geared mainly for new users.
+
+Customarily, we import as follows
+
+.. ipython:: python
+
+ import pandas as pd
+ import numpy as np
+
+Object Creation
+---------------
+
+See the :ref:`Data Structure Intro section <dsintro>`
+
+Creating a ``Series`` by passing a list of values, letting pandas create a default
+integer index
+
+.. ipython:: python
+
+ s = pd.Series([1,3,5,np.nan,6,8])
+ s
+
+Creating a ``DataFrame`` by passing a numpy array, with a datetime index and labeled columns.
+
+.. ipython:: python
+
+ dates = pd.date_range('20130101',periods=6)
+ dates
+ df = pd.DataFrame(np.random.randn(6,4),index=dates,columns=list('ABCD'))
+ df
+
+Creating a ``DataFrame`` by passing a dict of objects that can be converted to series-like.
+
+.. ipython:: python
+
+ df2 = pd.DataFrame({ 'A' : 1.,
+ 'B' : pd.Timestamp('20130102'),
+ 'C' : pd.Series(1,index=range(4),dtype='float32'),
+ 'D' : np.array([3] * 4,dtype='int32'),
+ 'E' : 'foo' })
+ df2
+
+Having specific :ref:`dtypes <basics.dtypes>`
+
+.. ipython:: python
+
+ df2.dtypes
+
+Viewing Data
+------------
+
+See the :ref:`Basics section <basics>`
+
+See the top & bottom rows of the frame
+
+.. ipython:: python
+
+ df.head()
+ df.tail(3)
+
+Display the index, columns, and the underlying numpy data
+
+.. ipython:: python
+
+ df.index
+ df.columns
+ df.values
+
+Describe shows a quick statistic summary of your data
+
+.. ipython:: python
+
+ df.describe()
+
+Transposing your data
+
+.. ipython:: python
+
+ df.T
+
+Sorting by an axis
+
+.. ipython:: python
+
+ df.sort_index(axis=1, ascending=False)
+
+Sorting by values
+
+.. ipython:: python
+
+ df.sort(columns='B')
+
+Selection
+---------
+
+See the :ref:`Indexing section <indexing>`
+
+
+Getting
+~~~~~~~
+
+Selecting a single column, which yields a ``Series``
+
+.. ipython:: python
+
+ # equivalently ``df.A``
+ df['A']
+
+Selecting via ``[]``, which slices the rows.
+
+.. ipython:: python
+
+ df[0:3]
+ df['20130102':'20130104']
+
+Selection by Label
+~~~~~~~~~~~~~~~~~~
+
+For getting a cross section using a label
+
+.. ipython:: python
+
+ df.loc[dates[0]]
+
+Selecting on a multi-axis by label
+
+.. ipython:: python
+
+ df.loc[:,['A','B']]
+
+Showing label slicing, both endpoints are *included*
+
+.. ipython:: python
+
+ df.loc['20130102':'20130104',['A','B']]
+
+Reduction in the dimensions of the returned object
+
+.. ipython:: python
+
+ df.loc['20130102',['A','B']]
+
+For getting a scalar value
+
+.. ipython:: python
+
+ df.loc[dates[0],'A']
+
+For getting fast access to a scalar (equiv to the prior method)
+
+.. ipython:: python
+
+ df.at[dates[0],'A']
+
+Selection by Position
+~~~~~~~~~~~~~~~~~~~~~
+
+Select via the position of the passed integers
+
+.. ipython:: python
+
+ df.iloc[3]
+
+By integer slices, acting similar to numpy/python
+
+.. ipython:: python
+
+ df.iloc[3:5,0:2]
+
+By lists of integer position locations, similar to the numpy/python style
+
+.. ipython:: python
+
+ df.iloc[[1,2,4],[0,2]]
+
+For slicing rows explicitly
+
+.. ipython:: python
+
+ df.iloc[1:3,:]
+
+For slicing columns explicitly
+
+.. ipython:: python
+
+ df.iloc[:,1:3]
+
+For getting a value explicitly
+
+.. ipython:: python
+
+ df.iloc[1,1]
+
+For getting fast access to a scalar (equiv to the prior method)
+
+.. ipython:: python
+
+ df.iat[1,1]
+
+There is one significant departure from standard python/numpy slicing semantics.
+Python/numpy allow slicing past the end of an array without an associated error.
+
+.. ipython:: python
+
+ # these are allowed in python/numpy.
+ x = list('abcdef')
+ x[4:10]
+ x[8:10]
+
+Pandas will detect this and raise ``IndexError``, rather than return an empty structure.
+
+::
+
+ >>> df.iloc[:,8:10]
+ IndexError: out-of-bounds on slice (end)
+
+Boolean Indexing
+~~~~~~~~~~~~~~~~
+
+Using a single column's values to select data.
+
+.. ipython:: python
+
+ df[df.A > 0]
+
+A ``where`` operation for getting.
+
+.. ipython:: python
+
+ df[df > 0]
+
+
+Setting
+~~~~~~~
+
+Setting a new column automatically aligns the data
+by the indexes
+
+.. ipython:: python
+
+ s1 = pd.Series([1,2,3,4,5,6],index=date_range('20130102',periods=6))
+ s1
+ df['F'] = s1
+
+Setting values by label
+
+.. ipython:: python
+
+ df.at[dates[0],'A'] = 0
+
+Setting values by position
+
+.. ipython:: python
+
+ df.iat[0,1] = 0
+
+Setting by assigning with a numpy array
+
+.. ipython:: python
+
+ df.loc[:,'D'] = np.array([5] * len(df))
+ df
+
+A ``where`` operation with setting.
+
+.. ipython:: python
+
+ df2 = df.copy()
+ df2[df2 > 0] = -df2
+ df2
+
+Missing Data
+------------
+
+Pandas primarily uses the value ``np.nan`` to represent missing data. It
+is by default not included in computations. See the :ref:`Missing Data section <missing_data>`
+
+Reindexing allows you to change/add/delete the index on a specified axis. This
+returns a copy of the data.
+
+.. ipython:: python
+
+ df1 = df.reindex(index=dates[0:4],columns=list(df.columns) + ['E'])
+ df1.loc[dates[0]:dates[1],'E'] = 1
+ df1
+
+To drop any rows that have missing data.
+
+.. ipython:: python
+
+ df1.dropna(how='any')
+
+Filling missing data
+
+.. ipython:: python
+
+ df1.fillna(value=5)
+
+To get the boolean mask where values are ``nan``
+
+.. ipython:: python
+
+ pd.isnull(df1)
+
+
+Operations
+----------
+
+See the :ref:`Basic section on Binary Ops <basics.binop>`
+
+Stats
+~~~~~
+
+Operations in general *exclude* missing data.
+
+Performing a descriptive statistic
+
+.. ipython:: python
+
+ df.mean()
+
+Same operation on the other axis
+
+.. ipython:: python
+
+ df.mean(1)
+
+Operating with objects that have different dimensionality and need alignment.
+In addition, pandas automatically broadcasts along the specified dimension.
+
+.. ipython:: python
+
+ s = pd.Series([1,3,5,np.nan,6,8],index=dates).shift(2)
+ s
+ df.sub(s,axis='index')
+
+
+Apply
+~~~~~
+
+Applying functions to the data
+
+.. ipython:: python
+
+ df.apply(np.cumsum)
+ df.apply(lambda x: x.max() - x.min())
+
+Histogramming
+~~~~~~~~~~~~~
+
+See more at :ref:`Histogramming and Discretization <basics.discretization>`
+
+.. ipython:: python
+
+ s = Series(np.random.randint(0,7,size=10))
+ s
+ s.value_counts()
+
+String Methods
+~~~~~~~~~~~~~~
+
+See more at :ref:`Vectorized String Methods <basics.string_methods>`
+
+.. ipython:: python
+
+ s = Series(['A', 'B', 'C', 'Aaba', 'Baca', np.nan, 'CABA', 'dog', 'cat'])
+ s.str.lower()
+
+Merge
+-----
+
+Concat
+~~~~~~
+
+Pandas provides various facilities for easily combining together Series,
+DataFrame, and Panel objects with various kinds of set logic for the indexes
+and relational algebra functionality in the case of join / merge-type
+operations.
+
+See the :ref:`Merging section <merging>`
+
+Concatenating pandas objects together
+
+.. ipython:: python
+
+ df = pd.DataFrame(np.random.randn(10, 4))
+ df
+
+ # break it into pieces
+ pieces = [df[:3], df[3:7], df[7:]]
+
+ concat(pieces)
+
+Join
+~~~~
+
+SQL style merges. See the :ref:`Database style joining <merging.join>`
+
+.. ipython:: python
+
+ left = pd.DataFrame({'key': ['foo', 'foo'], 'lval': [1, 2]})
+ right = pd.DataFrame({'key': ['foo', 'foo'], 'rval': [4, 5]})
+ left
+ right
+ merge(left, right, on='key')
+
+Append
+~~~~~~
+
+Append rows to a dataframe. See the :ref:`Appending <merging.concatenation>`
+
+.. ipython:: python
+
+ df = pd.DataFrame(np.random.randn(8, 4), columns=['A','B','C','D'])
+ df
+ s = df.iloc[3]
+ df.append(s, ignore_index=True)
+ df
+
+
+Grouping
+--------
+
+By "group by" we are referring to a process involving one or more of the following
+steps
+
+ - **Splitting** the data into groups based on some criteria
+ - **Applying** a function to each group independently
+ - **Combining** the results into a data structure
+
+See the :ref:`Grouping section <groupby>`
+
+.. ipython:: python
+
+ df = pd.DataFrame({'A' : ['foo', 'bar', 'foo', 'bar',
+ 'foo', 'bar', 'foo', 'foo'],
+ 'B' : ['one', 'one', 'two', 'three',
+ 'two', 'two', 'one', 'three'],
+ 'C' : randn(8), 'D' : randn(8)})
+ df
+
+Grouping and then applying a function ``sum`` to the resulting groups.
+
+.. ipython:: python
+
+ df.groupby('A').sum()
+
+Grouping by multiple columns forms a hierarchical index, which we then apply the function.
+
+.. ipython:: python
+
+ df.groupby(['A','B']).sum()
+
+Reshaping
+---------
+
+See the section on :ref:`Hierarchical Indexing <indexing.hierarchical>` and
+see the section on :ref:`Reshaping <reshaping.stacking>`).
+
+Stack
+~~~~~
+
+.. ipython:: python
+
+ tuples = zip(*[['bar', 'bar', 'baz', 'baz',
+ 'foo', 'foo', 'qux', 'qux'],
+ ['one', 'two', 'one', 'two',
+ 'one', 'two', 'one', 'two']])
+ index = pd.MultiIndex.from_tuples(tuples, names=['first', 'second'])
+ df = pd.DataFrame(randn(8, 2), index=index, columns=['A', 'B'])
+ df2 = df[:4]
+ df2
+
+The ``stack`` function "compresses" a level in the DataFrame's columns.
+
+.. ipython:: python
+
+ stacked = df2.stack()
+ stacked
+
+With a "stacked" DataFrame or Series (having a ``MultiIndex`` as the
+``index``), the inverse operation of ``stack`` is ``unstack``, which by default
+unstacks the **last level**:
+
+.. ipython:: python
+
+ stacked.unstack()
+ stacked.unstack(1)
+ stacked.unstack(0)
+
+Pivot Tables
+~~~~~~~~~~~~
+See the section on :ref:`Pivot Tables <reshaping.pivot>`).
+
+.. ipython:: python
+
+ df = DataFrame({'A' : ['one', 'one', 'two', 'three'] * 3,
+ 'B' : ['A', 'B', 'C'] * 4,
+ 'C' : ['foo', 'foo', 'foo', 'bar', 'bar', 'bar'] * 2,
+ 'D' : np.random.randn(12),
+ 'E' : np.random.randn(12)})
+ df
+
+We can produce pivot tables from this data very easily:
+
+.. ipython:: python
+
+ pivot_table(df, values='D', rows=['A', 'B'], cols=['C'])
+
+
+Time Series
+-----------
+
+Pandas has simple, powerful, and efficient functionality for
+performing resampling operations during frequency conversion (e.g., converting
+secondly data into 5-minutely data). This is extremely common in, but not
+limited to, financial applications. See the :ref:`Time Series section <timeseries>`
+
+.. ipython:: python
+
+ rng = pd.date_range('1/1/2012', periods=100, freq='S')
+ ts = pd.Series(randint(0, 500, len(rng)), index=rng)
+ ts.resample('5Min', how='sum')
+
+Time zone representation
+
+.. ipython:: python
+
+ rng = pd.date_range('3/6/2012 00:00', periods=5, freq='D')
+ ts = pd.Series(randn(len(rng)), rng)
+ ts_utc = ts.tz_localize('UTC')
+ ts_utc
+
+Convert to another time zone
+
+.. ipython:: python
+
+ ts_utc.tz_convert('US/Eastern')
+
+Converting between time span representations
+
+.. ipython:: python
+
+ rng = pd.date_range('1/1/2012', periods=5, freq='M')
+ ts = pd.Series(randn(len(rng)), index=rng)
+ ts
+ ps = ts.to_period()
+ ps
+ ps.to_timestamp()
+
+Converting between period and timestamp enables some convenient arithmetic
+functions to be used. In the following example, we convert a quarterly
+frequency with year ending in November to 9am of the end of the month following
+the quarter end:
+
+.. ipython:: python
+
+ prng = period_range('1990Q1', '2000Q4', freq='Q-NOV')
+ ts = Series(randn(len(prng)), prng)
+ ts.index = (prng.asfreq('M', 'e') + 1).asfreq('H', 's') + 9
+ ts.head()
+
+
+Plotting
+--------
+
+.. ipython:: python
+ :suppress:
+
+ import matplotlib.pyplot as plt
+ plt.close('all')
+
+.. ipython:: python
+
+ ts = pd.Series(randn(1000), index=pd.date_range('1/1/2000', periods=1000))
+ ts = ts.cumsum()
+
+ @savefig series_plot_basic.png width=4.5in
+ ts.plot()
+
+On DataFrame, ``plot`` is a convenience to plot all of the columns with labels:
+
+.. ipython:: python
+
+ df = pd.DataFrame(randn(1000, 4), index=ts.index,
+ columns=['A', 'B', 'C', 'D'])
+ df = df.cumsum()
+
+ @savefig frame_plot_basic.png width=4.5in
+ plt.figure(); df.plot(); plt.legend(loc='best')
+
+Getting Data In/Out
+-------------------
+
+CSV
+~~~
+
+:ref:`Writing to a csv file <io.store_in_csv>`
+
+.. ipython:: python
+
+ df.to_csv('foo.csv')
+
+:ref:`Reading from a csv file <io.read_csv_table>`
+
+.. ipython:: python
+
+ pd.read_csv('foo.csv')
+
+.. ipython:: python
+ :suppress:
+
+ os.remove('foo.csv')
+
+HDF5
+~~~~
+
+Reading and writing to :ref:`HDFStores <io.hdf5>`
+
+Writing to a HDF5 Store
+
+.. ipython:: python
+
+ store = pd.HDFStore('foo.h5')
+ store['df'] = df
+
+Reading from a HDF5 Store
+
+.. ipython:: python
+
+ store['df']
+
+.. ipython:: python
+ :suppress:
+
+ store.close()
+ os.remove('foo.h5')
+
+Excel
+~~~~~
+
+Reading and writing to :ref:`MS Excel <io.excel>`
+
+Writing to an excel file
+
+.. ipython:: python
+
+ df.to_excel('foo.xlsx', sheet_name='sheet1')
+
+Reading from an excel file
+
+.. ipython:: python
+
+ xls = ExcelFile('foo.xlsx')
+ xls.parse('sheet1', index_col=None, na_values=['NA'])
+
+.. ipython:: python
+ :suppress:
+
+ os.remove('foo.xlsx')
diff --git a/doc/source/basics.rst b/doc/source/basics.rst
index 05025e4f9479a..d32cbf7dcb8d1 100644
--- a/doc/source/basics.rst
+++ b/doc/source/basics.rst
@@ -9,9 +9,9 @@
randn = np.random.randn
np.set_printoptions(precision=4, suppress=True)
-*****************************
-Essential Basic Functionality
-*****************************
+==============================
+ Essential Basic Functionality
+==============================
Here we discuss a lot of the essential functionality common to the pandas data
structures. Here's how to create some of the objects used in the examples from
@@ -374,6 +374,8 @@ value, ``idxmin`` and ``idxmax`` return the first matching index:
df3
df3['A'].idxmin()
+.. _basics.discretization:
+
Value counts (histogramming)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~
@@ -976,14 +978,14 @@ To be clear, no pandas methods have the side effect of modifying your data;
almost all methods return new objects, leaving the original object
untouched. If data is modified, it is because you did so explicitly.
-DTypes
-------
-
.. _basics.dtypes:
-The main types stored in pandas objects are float, int, boolean, datetime64[ns],
-and object. A convenient ``dtypes`` attribute for DataFrames returns a Series with
-the data type of each column.
+dtypes
+------
+
+The main types stored in pandas objects are ``float``, ``int``, ``bool``, ``datetime64[ns]``, ``timedelta[ns]``,
+and ``object``. In addition these dtypes have item sizes, e.g. ``int64`` and ``int32``. A convenient ``dtypes``
+attribute for DataFrames returns a Series with the data type of each column.
.. ipython:: python
@@ -992,11 +994,26 @@ the data type of each column.
F = False,
G = Series([1]*3,dtype='int8')))
dft
+ dft.dtypes
+
+On a ``Series``, use the ``dtype`` attribute.
+
+.. ipython:: python
+
+ dft['A'].dtype
-If a DataFrame contains columns of multiple dtypes, the dtype of the column
-will be chosen to accommodate all of the data types (dtype=object is the most
+If a pandas object contains data of multiple dtypes *IN A SINGLE COLUMN*, the dtype of the
+column will be chosen to accommodate all of the data types (``object`` is the most
general).
+.. ipython:: python
+
+ # these ints are coerced to floats
+ Series([1, 2, 3, 4, 5, 6.])
+
+ # string data forces an ``object`` dtype
+ Series([1, 2, 3, 6., 'foo'])
+
The related method ``get_dtype_counts`` will return the number of columns of
each type:
@@ -1019,15 +1036,42 @@ or a passed ``Series``, then it will be preserved in DataFrame operations. Furth
df2
df2.dtypes
- # here you get some upcasting
+defaults
+~~~~~~~~
+
+By default integer types are ``int64`` and float types are ``float64``, *REGARDLESS* of platform (32-bit or 64-bit).
+
+The following will all result in ``int64`` dtypes.
+
+.. ipython:: python
+
+ DataFrame([1,2],columns=['a']).dtypes
+ DataFrame({'a' : [1,2] }).dtypes
+ DataFrame({'a' : 1 }, index=range(2)).dtypes
+
+Numpy, however, will choose *platform-dependent* types when creating arrays.
+Thus, ``DataFrame(np.array([1,2]))`` **WILL** result in ``int32`` on a 32-bit platform.
+
+
+upcasting
+~~~~~~~~~
+
+Types can potentially be *upcast* when combined with other types, meaning they are promoted from the current type (say ``int`` to ``float``)
+
+.. ipython:: python
+
df3 = df1.reindex_like(df2).fillna(value=0.0) + df2
df3
df3.dtypes
- # this is lower-common-denomicator upcasting (meaning you get the dtype which can accomodate all of the types)
+The ``values`` attribute on a DataFrame returns the *lower-common-denominator* of the dtypes, meaning the dtype that can accommodate **ALL** of the types in the resulting homogeneously-dtyped numpy array. This can
+force some *upcasting*.
+
+.. ipython:: python
+
df3.values.dtype
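The lower-common-denominator rule can be sketched in isolation (hypothetical column names, not the ``df3`` from the surrounding example):

```python
import numpy as np
import pandas as pd

# int32 and float64 columns: .values upcasts to float64,
# the dtype that can accommodate both
df = pd.DataFrame({'ints': np.array([1, 2, 3], dtype='int32'),
                   'floats': np.array([1.5, 2.5, 3.5], dtype='float64')})
numeric_dtype = df.values.dtype

# a string column forces the common dtype all the way up to object
df['strs'] = ['a', 'b', 'c']
mixed_dtype = df.values.dtype
print(numeric_dtype, mixed_dtype)
```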
-Astype
+astype
~~~~~~
.. _basics.cast:
@@ -1044,7 +1088,7 @@ then the more *general* one will be used as the result of the operation.
# conversion of dtypes
df3.astype('float32').dtypes
-Object Conversion
+object conversion
~~~~~~~~~~~~~~~~~
To force conversion of specific types of number conversion, pass ``convert_numeric = True``.
@@ -1067,16 +1111,19 @@ the objects in a Series are of the same type, the Series will have that dtype.
df3['E'] = df3['E'].astype('int32')
df3.dtypes
- # forcing date coercion
+This is a *forced coercion* on datelike types. This might be useful if you are reading in data which is mostly dates, but occasionally has non-dates intermixed and you want to make those values ``nan``.
+
+.. ipython:: python
+
s = Series([datetime(2001,1,1,0,0), 'foo', 1.0, 1, Timestamp('20010104'), '20010105'],dtype='O')
s
s.convert_objects(convert_dates='coerce')
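``convert_objects`` was later superseded; as a sketch only, the same forced date coercion can be expressed in current pandas with ``pd.to_datetime(..., errors='coerce')`` (an assumption about the reader's pandas version, not part of the 0.11 API shown above):

```python
import pandas as pd

s = pd.Series(['20010101', 'foo', '20010105'])
# non-date entries are coerced to NaT instead of raising
coerced = pd.to_datetime(s, errors='coerce')
print(coerced)
```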
-Upcasting Gotchas
-~~~~~~~~~~~~~~~~~
+gotchas
+~~~~~~~
-Performing indexing operations on ``integer`` type data can easily upcast the data to ``floating``.
+Performing selection operations on ``integer`` type data can easily upcast the data to ``floating``.
The dtype of the input data will be preserved in cases where ``nans`` are not introduced (starting in 0.11.0)
See also :ref:`integer na gotchas <gotchas.intna>`
diff --git a/doc/source/dsintro.rst b/doc/source/dsintro.rst
index 45fabb551d993..d5eb863580b6b 100644
--- a/doc/source/dsintro.rst
+++ b/doc/source/dsintro.rst
@@ -437,8 +437,8 @@ The basics of indexing are as follows:
:widths: 30, 20, 10
Select column, ``df[col]``, Series
- Select row by label, ``df.xs(label)`` or ``df.ix[label]``, Series
- Select row by location (int), ``df.ix[loc]``, Series
+ Select row by label, ``df.loc[label]``, Series
+ Select row by integer location, ``df.iloc[loc]``, Series
Slice rows, ``df[5:10]``, DataFrame
Select rows by boolean vector, ``df[bool_vec]``, DataFrame
@@ -447,8 +447,8 @@ DataFrame:
.. ipython:: python
- df.xs('b')
- df.ix[2]
+ df.loc['b']
+ df.iloc[2]
For a more exhaustive treatment of more sophisticated label-based indexing and
slicing, see the :ref:`section on indexing <indexing>`. We will address the
@@ -475,7 +475,7 @@ row-wise. For example:
.. ipython:: python
- df - df.ix[0]
+ df - df.iloc[0]
In the special case of working with time series data, if the Series is a
TimeSeries (which it will be automatically if the index contains datetime
@@ -592,7 +592,7 @@ DataFrame in tabular form, though it won't always fit the console width:
.. ipython:: python
- print baseball.ix[-20:, :12].to_string()
+ print baseball.iloc[-20:, :12].to_string()
New since 0.10.0, wide DataFrames will now be printed across multiple rows by
default:
diff --git a/doc/source/index.rst b/doc/source/index.rst
index bc51f1b13f36e..d59cb6d7a816b 100644
--- a/doc/source/index.rst
+++ b/doc/source/index.rst
@@ -112,6 +112,7 @@ See the package overview for more detail about what's in the library.
install
faq
overview
+ 10min
dsintro
basics
indexing
diff --git a/doc/source/indexing.rst b/doc/source/indexing.rst
index 8c18d9f69bee3..02aa00b7eaca6 100644
--- a/doc/source/indexing.rst
+++ b/doc/source/indexing.rst
@@ -14,9 +14,9 @@
randint = np.random.randint
np.set_printoptions(precision=4, suppress=True)
-***************************
-Indexing and selecting data
-***************************
+**************
+Selecting Data
+**************
The axis labeling information in pandas objects serves many purposes:
@@ -32,6 +32,86 @@ attention in this area. Expect more work to be invested higher-dimensional data
structures (including Panel) in the future, especially in label-based advanced
indexing.
+Choice
+------
+
+Starting in 0.11.0, object selection has had a number of user-requested additions in
+order to support more explicit location based indexing. Pandas now supports
+three types of multi-axis indexing.
+
+ - ``.loc`` is strictly label based, will raise ``KeyError`` when the items are not found,
+ allowed inputs are:
+
+ - A single label, e.g. ``5`` or ``'a'``
+
+ (note that ``5`` is interpreted as a *label* of the index. This use is **not** an integer position along the index)
+ - A list or array of labels ``['a', 'b', 'c']``
+ - A slice object with labels ``'a':'f'``
+
+ (note that contrary to usual python slices, **both** the start and the stop are included!)
+ - A boolean array
+
+ See more at :ref:`Selection by Label <indexing.label>`
+
+ - ``.iloc`` is strictly integer position based (from 0 to length-1 of the axis), will
+ raise ``IndexError`` when the requested indices are out of bounds. Allowed inputs are:
+
+ - An integer e.g. ``5``
+ - A list or array of integers ``[4, 3, 0]``
+ - A slice object with ints ``1:7``
+ - A boolean array
+
+ See more at :ref:`Selection by Position <indexing.integer>`
+
+ - ``.ix`` supports mixed integer and label based access. It is primarily label based, but
+ will fallback to integer positional access. ``.ix`` is the most general and will support
+ any of the inputs to ``.loc`` and ``.iloc``, as well as support for floating point label schemes.
+
+ As using integer slices with ``.ix`` has different behavior depending on whether the slice
+ is interpreted as integer location based or label based, it's usually better to be
+ explicit and use ``.iloc`` (integer location) or ``.loc`` (label location).
+
+ ``.ix`` is especially useful when dealing with mixed positional and label based hierarchical indexes.
+
+ See more at :ref:`Advanced Indexing <indexing.advanced>` and :ref:`Advanced Hierarchical <indexing.advanced_hierarchical>`
+
+Getting values from an object with multi-axes selection uses the following notation (using ``.loc`` as an
+example, but it applies to ``.iloc`` and ``.ix`` as well). Any of the axes accessors may be the null
+slice ``:``. Axes left out of the specification are assumed to be ``:``.
+(e.g. ``p.loc['a']`` is equiv to ``p.loc['a',:,:]``)
+
+.. csv-table::
+ :header: "Object Type", "Indexers"
+ :widths: 30, 50
+ :delim: ;
+
+ Series; ``s.loc[indexer]``
+ DataFrame; ``df.loc[row_indexer,column_indexer]``
+ Panel; ``p.loc[item_indexer,major_indexer,minor_indexer]``
+
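The label versus position distinction in the table above can be sketched with the two explicit accessors (an illustrative frame; ``.ix`` is omitted from the code since it guesses between the two interpretations):

```python
import numpy as np
import pandas as pd

df = pd.DataFrame(np.arange(12).reshape(3, 4),
                  index=['a', 'b', 'c'], columns=['W', 'X', 'Y', 'Z'])

row_by_label = df.loc['b']      # strictly label based
row_by_position = df.iloc[1]    # strictly integer position based

# on this frame both select the same row
print(row_by_label.equals(row_by_position))
```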
+Deprecations
+~~~~~~~~~~~~
+
+Starting in version 0.11.0, these methods may be deprecated in future versions.
+
+ - ``irow``
+ - ``icol``
+ - ``iget_value``
+
+See the section :ref:`Selection by Position <indexing.integer>` for substitutes.
+
+.. _indexing.xs:
+
+Cross-sectional slices on non-hierarchical indices are now easily performed using
+``.loc`` and/or ``.iloc``. The methods:
+
+ - ``xs`` (for DataFrame),
+ - ``minor_xs`` and ``major_xs`` (for Panel)
+
+now exist primarily for backward compatibility.
+
+See the section at :ref:`Selection by Label <indexing.label>` for substitutes.
+
.. _indexing.basics:
Basics
@@ -42,18 +122,21 @@ As mentioned when introducing the data structures in the :ref:`last section
for those familiar with implementing class behavior in Python) is selecting out
lower-dimensional slices. Thus,
- - **Series**: ``series[label]`` returns a scalar value
- - **DataFrame**: ``frame[colname]`` returns a Series corresponding to the
- passed column name
- - **Panel**: ``panel[itemname]`` returns a DataFrame corresponding to the
- passed item name
+.. csv-table::
+ :header: "Object Type", "Selection", "Return Value Type"
+ :widths: 30, 30, 60
+ :delim: ;
+
+ Series; ``series[label]``; scalar value
+ DataFrame; ``frame[colname]``; ``Series`` corresponding to colname
+ Panel; ``panel[itemname]``; ``DataFrame`` corresponding to the itemname
Here we construct a simple time series data set to use for illustrating the
indexing functionality:
.. ipython:: python
- dates = np.asarray(date_range('1/1/2000', periods=8))
+ dates = date_range('1/1/2000', periods=8)
df = DataFrame(randn(8, 4), index=dates, columns=['A', 'B', 'C', 'D'])
df
panel = Panel({'one' : df, 'two' : df - df.mean()})
@@ -72,46 +155,22 @@ Thus, as per above, we have the most basic indexing using ``[]``:
s[dates[5]]
panel['two']
-
-.. _indexing.basics.get_value:
-
-Fast scalar value getting and setting
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-
-Since indexing with ``[]`` must handle a lot of cases (single-label access,
-slicing, boolean indexing, etc.), it has a bit of overhead in order to figure
-out what you're asking for. If you only want to access a scalar value, the
-fastest way is to use the ``get_value`` method, which is implemented on all of
-the data structures:
-
-.. ipython:: python
-
- s.get_value(dates[5])
- df.get_value(dates[5], 'A')
-
-There is an analogous ``set_value`` method which has the additional capability
-of enlarging an object. This method *always* returns a reference to the object
-it modified, which in the case of enlargement, will be a **new object**:
-
-.. ipython:: python
-
- df.set_value(dates[5], 'E', 7)
-
-Additional Column Access
-~~~~~~~~~~~~~~~~~~~~~~~~
+Attribute Access
+~~~~~~~~~~~~~~~~
.. _indexing.columns.multiple:
.. _indexing.df_cols:
-You may access a column on a dataframe directly as an attribute:
+You may access a column on a ``DataFrame``, and an item on a ``Panel``, directly as an attribute:
.. ipython:: python
df.A
+ panel.one
If you are using the IPython environment, you may also use tab-completion to
-see the accessible columns of a DataFrame.
+see these accessible attributes.
You can pass a list of columns to ``[]`` to select columns in that order:
If a column is not contained in the DataFrame, an exception will be
@@ -126,30 +185,12 @@ raised. Multiple columns can also be set in this manner:
You may find this useful for applying a transform (in-place) to a subset of the
columns.
-Data slices on other axes
-~~~~~~~~~~~~~~~~~~~~~~~~~
-
-It's certainly possible to retrieve data slices along the other axes of a
-DataFrame or Panel. We tend to refer to these slices as
-*cross-sections*. DataFrame has the ``xs`` function for retrieving rows as
-Series and Panel has the analogous ``major_xs`` and ``minor_xs`` functions for
-retrieving slices as DataFrames for a given ``major_axis`` or ``minor_axis``
-label, respectively.
-
-.. ipython:: python
-
- date = dates[5]
- df.xs(date)
- panel.major_xs(date)
- panel.minor_xs('A')
-
-
Slicing ranges
~~~~~~~~~~~~~~
The most robust and consistent way of slicing ranges along arbitrary axes is
-described in the :ref:`Advanced indexing <indexing.advanced>` section detailing
-the ``.ix`` method. For now, we explain the semantics of slicing using the
+described in the :ref:`Selection by Position <indexing.integer>` section detailing
+the ``.iloc`` method. For now, we explain the semantics of slicing using the
``[]`` operator.
With Series, the syntax works exactly as with an ndarray, returning a slice of
@@ -177,6 +218,210 @@ largely as a convenience since it is such a common operation.
df[:3]
df[::-1]
+.. _indexing.label:
+
+Selection By Label
+~~~~~~~~~~~~~~~~~~
+
+Pandas provides a suite of methods in order to have **purely label based indexing**.
+This is a strict inclusion based protocol. **ALL** of the labels for which you ask
+must be in the index, or a ``KeyError`` will be raised!
+
+When slicing, the start bound is *included*, **AND** the stop bound is *included*.
+Integers are valid labels, but they refer to the label *and not the position*.
+
+The ``.loc`` attribute is the primary access method.
+
+The following are valid inputs:
+
+ - A single label, e.g. ``5`` or ``'a'``
+
+ (note that ``5`` is interpreted as a *label* of the index. This use is **not** an integer position along the index)
+ - A list or array of labels ``['a', 'b', 'c']``
+ - A slice object with labels ``'a':'f'``
+
+ (note that contrary to usual python slices, **both** the start and the stop are included!)
+ - A boolean array
+
+.. ipython:: python
+
+ s1 = Series(np.random.randn(6),index=list('abcdef'))
+ s1
+ s1.loc['c':]
+ s1.loc['b']
+
+Note that setting works as well:
+
+.. ipython:: python
+
+ s1.loc['c':] = 0
+ s1
+
+With a DataFrame
+
+.. ipython:: python
+
+ df1 = DataFrame(np.random.randn(6,4),index=list('abcdef'),columns=list('ABCD'))
+ df1
+ df1.loc[['a','b','d'],:]
+
+Accessing via label slices
+
+.. ipython:: python
+
+ df1.loc['d':,'A':'C']
+
+For getting a cross section using a label (equiv to deprecated ``df.xs('a')``)
+
+.. ipython:: python
+
+ df1.loc['a']
+
+For getting values with a boolean array
+
+.. ipython:: python
+
+ df1.loc['a']>0
+ df1.loc[:,df1.loc['a']>0]
+
+For getting a value explicitly (equiv to deprecated ``df.get_value('a','A')``)
+
+.. ipython:: python
+
+ # this is also equivalent to ``df1.at['a','A']``
+ df1.loc['a','A']
+
+.. _indexing.integer:
+
+Selection By Position
+~~~~~~~~~~~~~~~~~~~~~
+
+Pandas provides a suite of methods in order to get **purely integer based indexing**.
+The semantics closely follow python and numpy slicing. This is ``0-based`` indexing.
+
+When slicing, the start bound is *included*, while the upper bound is *excluded*.
+Trying to use a non-integer, even a **valid** label, will raise an ``IndexError``.
+
+The ``.iloc`` attribute is the primary access method.
+
+The following are valid inputs:
+
+ - An integer e.g. ``5``
+ - A list or array of integers ``[4, 3, 0]``
+ - A slice object with ints ``1:7``
+ - A boolean array
+
+.. ipython:: python
+
+ s1 = Series(np.random.randn(5),index=range(0,10,2))
+ s1
+ s1.iloc[:3]
+ s1.iloc[3]
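The label/position distinction is easiest to see on an integer index that does not start at 0; a small fixed-value sketch (values chosen for illustration):

```python
import pandas as pd

s = pd.Series([10, 20, 30, 40, 50], index=range(0, 10, 2))

# .iloc[2] is the third element by position
print(s.iloc[2])
# .loc[2] is the element whose *label* is 2 (the second element)
print(s.loc[2])
```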
+
+Note that setting works as well:
+
+.. ipython:: python
+
+ s1.iloc[:3] = 0
+ s1
+
+With a DataFrame
+
+.. ipython:: python
+
+ df1 = DataFrame(np.random.randn(6,4),index=range(0,12,2),columns=range(0,8,2))
+ df1
+
+Select via integer slicing
+
+.. ipython:: python
+
+ df1.iloc[:3]
+ df1.iloc[1:5,2:4]
+
+Select via integer list
+
+.. ipython:: python
+
+ df1.iloc[[1,3,5],[1,3]]
+
+Select via boolean array
+
+.. ipython:: python
+
+ df1.iloc[:,df1.iloc[0]>0]
+
+For slicing rows explicitly (equiv to deprecated ``df.irow(slice(1,3))``).
+
+.. ipython:: python
+
+ df1.iloc[1:3,:]
+
+For slicing columns explicitly (equiv to deprecated ``df.icol(slice(1,3))``).
+
+.. ipython:: python
+
+ df1.iloc[:,1:3]
+
+For getting a scalar via integer position (equiv to deprecated ``df.get_value(1,1)``)
+
+.. ipython:: python
+
+ # this is also equivalent to ``df1.iat[1,1]``
+ df1.iloc[1,1]
+
+For getting a cross section using an integer position (equiv to deprecated ``df.xs(1)``)
+
+.. ipython:: python
+
+ df1.iloc[1]
+
+There is one significant departure from standard python/numpy slicing semantics:
+python/numpy allow slicing past the end of an array without an associated error.
+
+.. ipython:: python
+
+ # these are allowed in python/numpy.
+ x = list('abcdef')
+ x[4:10]
+ x[8:10]
+
+Pandas will detect this and raise ``IndexError``, rather than return an empty structure.
+
+::
+
+ >>> df.iloc[:,3:6]
+ IndexError: out-of-bounds on slice (end)
+
+.. _indexing.basics.get_value:
+
+Fast scalar value getting and setting
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Since indexing with ``[]`` must handle a lot of cases (single-label access,
+slicing, boolean indexing, etc.), it has a bit of overhead in order to figure
+out what you're asking for. If you only want to access a scalar value, the
+fastest way is to use the ``at`` and ``iat`` methods, which are implemented on all of
+the data structures.
+
+Similarly to ``loc``, ``at`` provides **label** based scalar lookups, while ``iat`` provides
+**integer** based lookups analogously to ``iloc``.
+
+.. ipython:: python
+
+ s.iat[5]
+ df.at[dates[5], 'A']
+ df.iat[3, 0]
+
+You can also set using these same indexers. These have the additional capability
+of enlarging an object: setting at a label that does not yet exist will enlarge
+the object to include it:
+
+.. ipython:: python
+
+ df.at[dates[5], 'E'] = 7
+ df.iat[3, 0] = 7
+
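The fast scalar accessors can be sketched end to end (an illustrative frame mirroring the ``dates`` index used above):

```python
import numpy as np
import pandas as pd

dates = pd.date_range('2000-01-01', periods=8)
df = pd.DataFrame(np.zeros((8, 4)), index=dates, columns=list('ABCD'))

df.at[dates[5], 'A'] = 7    # label based scalar set
df.iat[3, 0] = 9            # integer position based scalar set

print(df.at[dates[5], 'A'])
print(df.iat[3, 0])
```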
Boolean indexing
~~~~~~~~~~~~~~~~
@@ -228,8 +473,8 @@ more complex criteria:
df2[criterion & (df2['b'] == 'x')]
-Note, with the :ref:`advanced indexing <indexing.advanced>` ``ix`` method, you
-may select along more than one axis using boolean vectors combined with other
+Note, with the choice methods :ref:`Selection by Label <indexing.label>`, :ref:`Selection by Position <indexing.integer>`,
+and :ref:`Advanced Indexing <indexing.advanced>` may select along more than one axis using boolean vectors combined with other
indexing expressions.
Where and Masking
@@ -413,20 +658,21 @@ default value.
.. _indexing.advanced:
-Advanced indexing with labels
------------------------------
+Advanced Indexing with ``.ix``
+------------------------------
+
+.. note::
+
+ The recent addition of ``.loc`` and ``.iloc`` have enabled users to be quite
+ explicit about indexing choices. ``.ix`` allows a great flexibility to specify
+ indexing locations by *label* and/or *integer position*. Pandas will attempt
+ to use any passed *integer* as a *label* location first (like what ``.loc``
+ would do), then fall back on *positional* indexing (like what ``.iloc`` would do).
-We have avoided excessively overloading the ``[]`` / ``__getitem__`` operator
-to keep the basic functionality of the pandas objects straightforward and
-simple. However, there are often times when you may wish get a subset (or
-analogously set a subset) of the data in a way that is not straightforward
-using the combination of ``reindex`` and ``[]``. Complicated setting operations
-are actually quite difficult because ``reindex`` usually returns a copy.
+The syntax of using ``.ix`` is identical to ``.loc``, in :ref:`Selection by Label <indexing.label>`,
+and ``.iloc`` in :ref:`Selection by Position <indexing.integer>`.
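Because ``.ix`` guesses between label and positional interpretation, the explicit equivalents are worth sketching side by side (``.ix`` itself is left out of the code; the frame is illustrative):

```python
import numpy as np
import pandas as pd

df = pd.DataFrame(np.arange(16).reshape(4, 4),
                  index=list('abcd'), columns=list('WXYZ'))

# label intent: be explicit with .loc instead of df.ix['b', 'X']
val_label = df.loc['b', 'X']

# positional intent: be explicit with .iloc instead of df.ix[1, 1]
val_pos = df.iloc[1, 1]

# on this frame both pick the same cell
print(val_label == val_pos)
```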
-By *advanced* indexing we are referring to a special ``.ix`` attribute on
-pandas objects which enable you to do getting/setting operations on a
-DataFrame, for example, with matrix/ndarray-like semantics. Thus you can
-combine the following kinds of indexing:
+The ``.ix`` attribute takes the following inputs:
- An integer or single label, e.g. ``5`` or ``'a'``
- A list or array of labels ``['a', 'b', 'c']`` or integers ``[4, 3, 0]``
@@ -529,27 +775,6 @@ numpy array. For instance,
dflookup.lookup(xrange(0,10,2), ['B','C','A','B','D'])
-Advanced indexing with integer labels
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-Label-based indexing with integer axis labels is a thorny topic. It has been
-discussed heavily on mailing lists and among various members of the scientific
-Python community. In pandas, our general viewpoint is that labels matter more
-than integer locations. Therefore, with an integer axis index *only*
-label-based indexing is possible with the standard tools like ``.ix``. The
-following code will generate exceptions:
-
-.. code-block:: python
-
- s = Series(range(5))
- s[-1]
- df = DataFrame(np.random.randn(5, 4))
- df
- df.ix[-2:]
-
-This deliberate decision was made to prevent ambiguities and subtle bugs (many
-users reported finding bugs when the API change was made to stop "falling back"
-on position-based indexing).
-
Setting values in mixed-type DataFrame
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
@@ -779,6 +1004,8 @@ of tuples:
s.reindex(index[:3])
s.reindex([('foo', 'two'), ('bar', 'one'), ('qux', 'one'), ('baz', 'one')])
+.. _indexing.advanced_hierarchical:
+
Advanced indexing with hierarchical index
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
@@ -818,8 +1045,6 @@ but as you use it you may uncover corner cases or unintuitive behavior. If you
do find something like this, do not hesitate to report the issue or ask on the
mailing list.
-.. _indexing.xs:
-
Cross-section with hierarchical index
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
diff --git a/doc/source/io.rst b/doc/source/io.rst
index 86d590965f141..914506fb0d3cd 100644
--- a/doc/source/io.rst
+++ b/doc/source/io.rst
@@ -906,6 +906,8 @@ And then import the data directly to a DataFrame by calling:
clipdf
+.. _io.excel:
+
Excel files
-----------
@@ -970,7 +972,7 @@ one can use the ExcelWriter class, as in the following example:
df2.to_excel(writer, sheet_name='sheet2')
writer.save()
-.. _io-hdf5:
+.. _io.hdf5:
HDF5 (PyTables)
---------------
@@ -1058,6 +1060,7 @@ These stores are **not** appendable once written (though you can simply
remove them and rewrite). Nor are they **queryable**; they must be
retrieved in their entirety.
+.. _io.hdf5-table:
Storing in Table format
~~~~~~~~~~~~~~~~~~~~~~~
@@ -1091,6 +1094,8 @@ supported.
# the type of stored data
store.root.df._v_attrs.pandas_type
+.. _io.hdf5-keys:
+
Hierarchical Keys
~~~~~~~~~~~~~~~~~
@@ -1115,6 +1120,8 @@ everying in the sub-store and BELOW, so be *careful*.
store.remove('food')
store
+.. _io.hdf5-types:
+
Storing Mixed Types in a Table
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
@@ -1170,6 +1177,8 @@ storing/selecting from homogeneous index DataFrames.
store.select('df_mi', Term('foo=bar'))
+.. _io.hdf5-query:
+
Querying a Table
~~~~~~~~~~~~~~~~
@@ -1372,6 +1381,7 @@ table (optional) to let it have the remaining columns. The argument
store.select_as_multiple(['df1_mt', 'df2_mt'], where=['A>0', 'B>0'],
selector = 'df1_mt')
+.. _io.hdf5-delete:
Delete from a Table
~~~~~~~~~~~~~~~~~~~
diff --git a/doc/source/v0.10.0.txt b/doc/source/v0.10.0.txt
index c220d2cbba81d..0c5497868efe2 100644
--- a/doc/source/v0.10.0.txt
+++ b/doc/source/v0.10.0.txt
@@ -217,7 +217,7 @@ The width of each line can be changed via 'line_width' (80 by default):
Updated PyTables Support
~~~~~~~~~~~~~~~~~~~~~~~~
-:ref:`Docs <io-hdf5>` for PyTables ``Table`` format & several enhancements to the api. Here is a taste of what to expect.
+:ref:`Docs <io.hdf5>` for PyTables ``Table`` format & several enhancements to the api. Here is a taste of what to expect.
.. ipython:: python
:suppress:
diff --git a/doc/source/v0.10.1.txt b/doc/source/v0.10.1.txt
index 4c7369c27cc30..e8435df7b2b0c 100644
--- a/doc/source/v0.10.1.txt
+++ b/doc/source/v0.10.1.txt
@@ -232,4 +232,5 @@ on GitHub for a complete list.
.. _GH2626: https://github.com/pydata/pandas/issues/2626
.. _GH2613: https://github.com/pydata/pandas/issues/2613
.. _GH2602: https://github.com/pydata/pandas/issues/2602
+.. _GH2687: https://github.com/pydata/pandas/issues/2687
.. _GH2563: https://github.com/pydata/pandas/issues/2563
diff --git a/doc/source/v0.11.0.txt b/doc/source/v0.11.0.txt
index cc3b39dd22e34..f4c9d13c0d23e 100644
--- a/doc/source/v0.11.0.txt
+++ b/doc/source/v0.11.0.txt
@@ -4,17 +4,86 @@ v0.11.0 (March ??, 2013)
------------------------
This is a major release from 0.10.1 and includes many new features and
-enhancements along with a large number of bug fixes. There are also a number of
-important API changes that long-time pandas users should pay close attention
-to.
+enhancements along with a large number of bug fixes. The methods of Selecting
+Data have had quite a number of additions, and Dtype support is now full-fledged.
+There are also a number of important API changes that long-time pandas users should
+pay close attention to.
+
+There is a new section in the documentation, :ref:`10 Minutes to Pandas <10min>`,
+primarily geared to new users.
API changes
~~~~~~~~~~~
-Numeric dtypes will propagate and can coexist in DataFrames. If a dtype is passed (either directly via the ``dtype`` keyword, a passed ``ndarray``, or a passed ``Series``, then it will be preserved in DataFrame operations. Furthermore, different numeric dtypes will **NOT** be combined. The following example will give you a taste.
+Selection Choices
+~~~~~~~~~~~~~~~~~
+
+Starting in 0.11.0, object selection has had a number of user-requested additions in
+order to support more explicit location based indexing. Pandas now supports
+three types of multi-axis indexing.
+
+ - ``.loc`` is strictly label based, will raise ``KeyError`` when the items are not found,
+ allowed inputs are:
+
+ - A single label, e.g. ``5`` or ``'a'``
+
+ (note that ``5`` is interpreted as a *label* of the index. This use is **not** an integer position along the index)
+ - A list or array of labels ``['a', 'b', 'c']``
+ - A slice object with labels ``'a':'f'``
+
+ (note that contrary to usual python slices, **both** the start and the stop are included!)
+ - A boolean array
+
+ See more at :ref:`Selection by Label <indexing.label>`
+
+ - ``.iloc`` is strictly integer position based (from 0 to length-1 of the axis), will
+ raise ``IndexError`` when the requested indices are out of bounds. Allowed inputs are:
+
+ - An integer e.g. ``5``
+ - A list or array of integers ``[4, 3, 0]``
+ - A slice object with ints ``1:7``
+ - A boolean array
+
+ See more at :ref:`Selection by Position <indexing.integer>`
+
+ - ``.ix`` supports mixed integer and label based access. It is primarily label based, but
+ will fallback to integer positional access. ``.ix`` is the most general and will support
+ any of the inputs to ``.loc`` and ``.iloc``, as well as support for floating point label schemes.
+
+ As using integer slices with ``.ix`` has different behavior depending on whether the slice
+ is interpreted as integer location based or label based, it's usually better to be
+ explicit and use ``.iloc`` (integer location) or ``.loc`` (label location).
-Dtype Specification
-~~~~~~~~~~~~~~~~~~~
+ ``.ix`` is especially useful when dealing with mixed positional/label based hierarchical indexes.
+
+ See more at :ref:`Advanced Indexing <indexing.advanced>` and :ref:`Advanced Hierarchical <indexing.advanced_hierarchical>`
+
+
+Selection Deprecations
+~~~~~~~~~~~~~~~~~~~~~~
+
+Starting in version 0.11.0, these methods may be deprecated in future versions.
+
+ - ``irow``
+ - ``icol``
+ - ``iget_value``
+
+See the section :ref:`Selection by Position <indexing.integer>` for substitutes.
+
+Cross-sectional slices on non-hierarchical indices are now easily performed using
+``.loc`` and/or ``.iloc``. The methods:
+
+ - ``xs`` (for DataFrame),
+ - ``minor_xs`` and ``major_xs`` (for Panel)
+
+now exist primarily for backward compatibility.
+
+See the section :ref:`Selection by Label <indexing.label>` for substitutes.
+
+Dtypes
+~~~~~~
+
+Numeric dtypes will propagate and can coexist in DataFrames. If a dtype is passed (either directly via the ``dtype`` keyword, a passed ``ndarray``, or a passed ``Series``), then it will be preserved in DataFrame operations. Furthermore, different numeric dtypes will **NOT** be combined. The following example will give you a taste.
.. ipython:: python
diff --git a/pandas/core/frame.py b/pandas/core/frame.py
index c0449faf40368..faac974ae9ddb 100644
--- a/pandas/core/frame.py
+++ b/pandas/core/frame.py
@@ -568,16 +568,6 @@ def axes(self):
def _constructor(self):
return DataFrame
- # Fancy indexing
- _ix = None
-
- @property
- def ix(self):
- if self._ix is None:
- self._ix = _NDFrameIndexer(self)
-
- return self._ix
-
@property
def shape(self):
return (len(self.index), len(self.columns))
@@ -1894,88 +1884,71 @@ def set_value(self, index, col, value):
return result.set_value(index, col, value)
def irow(self, i, copy=False):
- """
- Retrieve the i-th row or rows of the DataFrame by location
-
- Parameters
- ----------
- i : int, slice, or sequence of integers
+ return self._ixs(i,axis=0)
- Notes
- -----
- If slice passed, the resulting data will be a view
+ def icol(self, i):
+ return self._ixs(i,axis=1)
- Returns
- -------
- row : Series (int) or DataFrame (slice, sequence)
+ def _ixs(self, i, axis=0, copy=False):
+ """
+ i : int, slice, or sequence of integers
+ axis : int
"""
- if isinstance(i, slice):
- return self[i]
- else:
- label = self.index[i]
- if isinstance(label, Index):
- return self.reindex(label)
- else:
- try:
- new_values = self._data.fast_2d_xs(i, copy=copy)
- except:
- new_values = self._data.fast_2d_xs(i, copy=True)
- return Series(new_values, index=self.columns,
- name=self.index[i])
- def icol(self, i):
- """
- Retrieve the i-th column or columns of the DataFrame by location
+ # irow
+ if axis == 0:
- Parameters
- ----------
- i : int, slice, or sequence of integers
+ """
+ Notes
+ -----
+ If slice passed, the resulting data will be a view
+ """
- Notes
- -----
- If slice passed, the resulting data will be a view
+ if isinstance(i, slice):
+ return self[i]
+ else:
+ label = self.index[i]
+ if isinstance(label, Index):
+ return self.reindex(label)
+ else:
+ try:
+ new_values = self._data.fast_2d_xs(i, copy=copy)
+ except:
+ new_values = self._data.fast_2d_xs(i, copy=True)
+ return Series(new_values, index=self.columns,
+ name=self.index[i])
- Returns
- -------
- column : Series (int) or DataFrame (slice, sequence)
- """
- label = self.columns[i]
- if isinstance(i, slice):
- # need to return view
- lab_slice = slice(label[0], label[-1])
- return self.ix[:, lab_slice]
+ # icol
else:
- label = self.columns[i]
- if isinstance(label, Index):
- return self.take(i, axis=1)
- values = self._data.iget(i)
- return self._col_klass.from_array(values, index=self.index,
- name=label)
+ """
+ Notes
+ -----
+ If slice passed, the resulting data will be a view
+ """
- def _ixs(self, i, axis=0):
- if axis == 0:
- return self.irow(i)
- else:
- return self.icol(i)
+ label = self.columns[i]
+ if isinstance(i, slice):
+ # need to return view
+ lab_slice = slice(label[0], label[-1])
+ return self.ix[:, lab_slice]
+ else:
+ label = self.columns[i]
+ if isinstance(label, Index):
- def iget_value(self, i, j):
- """
- Return scalar value stored at row i and column j, where i and j are
- integers
+ # if we have negative indices, translate to positive here
+ # (take doesn't deal properly with these)
+ l = len(self.columns)
+ i = [ v if v >= 0 else l+v for v in i ]
+
+ return self.take(i, axis=1)
- Parameters
- ----------
- i : int
- j : int
+ values = self._data.iget(i)
+ return self._col_klass.from_array(values, index=self.index,
+ name=label)
- Returns
- -------
- value : scalar value
- """
- row = self.index[i]
- col = self.columns[j]
- return self.get_value(row, col)
+ def iget_value(self, i, j):
+ return self.iat[i,j]
def __getitem__(self, key):
if isinstance(key, slice):
@@ -2054,13 +2027,13 @@ def _getitem_frame(self, key):
raise ValueError('Must pass DataFrame with boolean values only')
return self.where(key)
- def _slice(self, slobj, axis=0):
+ def _slice(self, slobj, axis=0, raise_on_error=False):
if axis == 0:
mgr_axis = 1
else:
mgr_axis = 0
- new_data = self._data.get_slice(slobj, axis=mgr_axis)
+ new_data = self._data.get_slice(slobj, axis=mgr_axis, raise_on_error=raise_on_error)
return self._constructor(new_data)
def _box_item_values(self, key, values):
@@ -2370,6 +2343,8 @@ def xs(self, key, axis=0, level=None, copy=True):
result.index = new_index
return result
+ _xs = xs
+
def lookup(self, row_labels, col_labels):
"""
Label-based "fancy indexing" function for DataFrame. Given equal-length
diff --git a/pandas/core/generic.py b/pandas/core/generic.py
index afe7f8775b1e9..c25e686afacbf 100644
--- a/pandas/core/generic.py
+++ b/pandas/core/generic.py
@@ -3,6 +3,7 @@
import numpy as np
from pandas.core.index import MultiIndex
+import pandas.core.indexing as indexing
from pandas.tseries.index import DatetimeIndex
import pandas.core.common as com
import pandas.lib as lib
@@ -59,6 +60,21 @@ def _get_axis(self, axis):
name = self._get_axis_name(axis)
return getattr(self, name)
+ #----------------------------------------------------------------------
+ # Indexers
+ @classmethod
+ def _create_indexer(cls, name, indexer):
+ """ create an indexer like _name in the class """
+ iname = '_%s' % name
+ setattr(cls,iname,None)
+
+ def _indexer(self):
+ if getattr(self,iname,None) is None:
+ setattr(self,iname,indexer(self, name))
+ return getattr(self,iname)
+
+ setattr(cls,name,property(_indexer))
+
def abs(self):
"""
Return an object with absolute value taken. Only applicable to objects
@@ -396,10 +412,6 @@ def sort_index(self, axis=0, ascending=True):
new_axis = labels.take(sort_index)
return self.reindex(**{axis_name: new_axis})
- @property
- def ix(self):
- raise NotImplementedError
-
def reindex(self, *args, **kwds):
raise NotImplementedError
@@ -466,6 +478,9 @@ def pct_change(self, periods=1, fill_method='pad', limit=None, freq=None,
np.putmask(rs.values, mask, np.nan)
return rs
+# install the indexers
+for _name, _indexer in indexing.get_indexers_list():
+ PandasObject._create_indexer(_name,_indexer)
class NDFrame(PandasObject):
"""
diff --git a/pandas/core/indexing.py b/pandas/core/indexing.py
index 8f812252134a1..b86518e8947ef 100644
--- a/pandas/core/indexing.py
+++ b/pandas/core/indexing.py
@@ -1,12 +1,23 @@
# pylint: disable=W0223
from pandas.core.common import _asarray_tuplesafe
-from pandas.core.index import Index, MultiIndex
+from pandas.core.index import Index, MultiIndex, _ensure_index
import pandas.core.common as com
import pandas.lib as lib
import numpy as np
+# the supported indexers
+def get_indexers_list():
+
+ return [
+ ('ix' ,_NDFrameIndexer),
+ ('iloc',_iLocIndexer ),
+ ('loc' ,_LocIndexer ),
+ ('at' ,_AtIndexer ),
+ ('iat' ,_iAtIndexer ),
+ ]
+
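The five indexers registered above split the old `.ix` behavior: `.loc` is strictly label based (slice endpoints are both included), `.iloc` is strictly position based (end point excluded), and `.at`/`.iat` are their scalar fast paths. A plain-Python sketch of the two slice semantics (the `label_slice` helper is hypothetical):

```python
# .loc slices by label with BOTH endpoints included; .iloc slices by
# position with the usual Python exclusive stop. This helper mimics
# the inclusive label-slice semantics on a plain list of labels.

def label_slice(labels, start, stop):
    i = labels.index(start)
    j = labels.index(stop)
    return labels[i:j + 1]   # inclusive stop, as .loc does

labels = ['a', 'b', 'c', 'd']
assert label_slice(labels, 'b', 'd') == ['b', 'c', 'd']  # inclusive
assert labels[1:3] == ['b', 'c']                          # positional: exclusive
```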
# "null slice"
_NS = slice(None, None)
@@ -17,9 +28,10 @@ class IndexingError(Exception):
class _NDFrameIndexer(object):
- def __init__(self, obj):
+ def __init__(self, obj, name):
self.obj = obj
self.ndim = obj.ndim
+ self.name = name
def __iter__(self):
raise NotImplementedError('ix is not iterable')
@@ -43,15 +55,15 @@ def _get_label(self, label, axis=0):
raise IndexingError('no slices here')
try:
- return self.obj.xs(label, axis=axis, copy=False)
+ return self.obj._xs(label, axis=axis, copy=False)
except Exception:
- return self.obj.xs(label, axis=axis, copy=True)
+ return self.obj._xs(label, axis=axis, copy=True)
def _get_loc(self, key, axis=0):
return self.obj._ixs(key, axis=axis)
- def _slice(self, obj, axis=0):
- return self.obj._slice(obj, axis=axis)
+ def _slice(self, obj, axis=0, raise_on_error=False):
+ return self.obj._slice(obj, axis=axis, raise_on_error=raise_on_error)
def __setitem__(self, key, value):
# kludgetastic
@@ -74,6 +86,9 @@ def __setitem__(self, key, value):
self._setitem_with_indexer(indexer, value)
+ def _has_valid_tuple(self, key):
+ pass
+
def _convert_tuple(self, key):
keyidx = []
for i, k in enumerate(key):
@@ -212,6 +227,9 @@ def _getitem_tuple(self, tup):
if self._multi_take_opportunity(tup):
return self._multi_take(tup)
+ # no multi-index, so validate all of the indexers
+ self._has_valid_tuple(tup)
+
# no shortcut needed
retval = self.obj
for i, key in enumerate(tup):
@@ -221,7 +239,7 @@ def _getitem_tuple(self, tup):
if _is_null_slice(key):
continue
- retval = retval.ix._getitem_axis(key, axis=i)
+ retval = getattr(retval,self.name)._getitem_axis(key, axis=i)
return retval
@@ -308,8 +326,12 @@ def _getitem_lowerdim(self, tup):
if _is_label_like(key) or isinstance(key, tuple):
section = self._getitem_axis(key, axis=i)
+ # have we yielded a scalar?
+ if not _is_list_like(section):
+ return section
+
# might have been a MultiIndex
- if section.ndim == self.ndim:
+ elif section.ndim == self.ndim:
new_key = tup[:i] + (_NS,) + tup[i + 1:]
# new_key = tup[:i] + tup[i+1:]
else:
@@ -325,7 +347,7 @@ def _getitem_lowerdim(self, tup):
if len(new_key) == 1:
new_key, = new_key
- return section.ix[new_key]
+ return getattr(section,self.name)[new_key]
raise IndexingError('not applicable')
@@ -593,6 +615,207 @@ def _get_slice_axis(self, slice_obj, axis=0):
else:
return self.obj.take(indexer, axis=axis)
+class _LocationIndexer(_NDFrameIndexer):
+ _valid_types = None
+ _exception = Exception
+
+ def _has_valid_type(self, k, axis):
+ raise NotImplementedError()
+
+ def _has_valid_tuple(self, key):
+ """ check that the key has valid entries along each axis of my indexer """
+ for i, k in enumerate(key):
+ if i >= self.obj.ndim:
+ raise ValueError('Too many indexers')
+ if not self._has_valid_type(k,i):
+ raise ValueError("Location based indexing can only have [%s] types" % self._valid_types)
+
+ def __getitem__(self, key):
+ if type(key) is tuple:
+ return self._getitem_tuple(key)
+ else:
+ return self._getitem_axis(key, axis=0)
+
+ def _getitem_axis(self, key, axis=0):
+ raise NotImplementedError()
+
+ def _getbool_axis(self, key, axis=0):
+ labels = self.obj._get_axis(axis)
+ key = _check_bool_indexer(labels, key)
+ inds, = key.nonzero()
+ try:
+ return self.obj.take(inds, axis=axis)
+ except (Exception), detail:
+ raise self._exception(detail)
+
+class _LocIndexer(_LocationIndexer):
+ """ purely label based indexing """
+ _valid_types = "labels (MUST BE IN THE INDEX), slices of labels (BOTH endpoints included! Can be slices of integers if the index is integers), listlike of labels, boolean"
+ _exception = KeyError
+
+ def _has_valid_type(self, key, axis):
+ ax = self.obj._get_axis(axis)
+
+ # valid for a label where all labels are in the index
+ # slice of labels (where start-end in labels)
+ # slice of integers (only if in the labels)
+ # boolean
+
+ if isinstance(key, slice):
+
+ if key.start is not None:
+ if key.start not in ax:
+ raise KeyError("start bound [%s] is not in the [%s]" % (key.start,self.obj._get_axis_name(axis)))
+ if key.stop is not None:
+ stop = key.stop
+ if com.is_integer(stop):
+ stop -= 1
+ if stop not in ax:
+ raise KeyError("stop bound [%s] is not in the [%s]" % (stop,self.obj._get_axis_name(axis)))
+
+ elif com._is_bool_indexer(key):
+ return True
+
+ elif _is_list_like(key):
+
+ # require all elements in the index
+ idx = _ensure_index(key)
+ if not idx.isin(ax).all():
+ raise KeyError("[%s] are not ALL in the [%s]" % (key,self.obj._get_axis_name(axis)))
+
+ return True
+
+ else:
+
+ # if it's empty we want a KeyError here
+ if not len(ax):
+ raise KeyError("The [%s] axis is empty" % self.obj._get_axis_name(axis))
+
+ if not key in ax:
+ raise KeyError("the label [%s] is not in the [%s]" % (key,self.obj._get_axis_name(axis)))
+
+ return True
+
+ def _getitem_axis(self, key, axis=0):
+ labels = self.obj._get_axis(axis)
+
+ if isinstance(key, slice):
+ ltype = labels.inferred_type
+ if ltype == 'mixed-integer-float' or ltype == 'mixed-integer':
+ raise ValueError('cannot slice with a non-single type label array')
+ return self._get_slice_axis(key, axis=axis)
+ elif com._is_bool_indexer(key):
+ return self._getbool_axis(key, axis=axis)
+ elif _is_list_like(key) and not (isinstance(key, tuple) and
+ isinstance(labels, MultiIndex)):
+
+ if hasattr(key, 'ndim') and key.ndim > 1:
+ raise ValueError('Cannot index with multidimensional key')
+
+ return self._getitem_iterable(key, axis=axis)
+ else:
+ return self._get_label(key, axis=axis)
+
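The `_has_valid_type` check above enforces the `.loc` contract: a scalar label must be present in the index, every entry of a list-like key must be present, and both endpoints of a label slice must be present. A simplified plain-Python sketch (the `loc_key_valid` helper is hypothetical; the real check also adjusts an integer `stop` and accepts boolean indexers):

```python
# Simplified model of _LocIndexer._has_valid_type: returns False where
# the real code raises KeyError, so the rules are easy to see.

def loc_key_valid(index, key):
    if isinstance(key, slice):
        if key.start is not None and key.start not in index:
            return False
        if key.stop is not None and key.stop not in index:
            return False
        return True
    if isinstance(key, list):
        # require ALL elements to be in the index
        return all(k in index for k in key)
    return key in index

idx = ['a', 'b', 'c']
assert loc_key_valid(idx, 'b')
assert not loc_key_valid(idx, 'z')
assert loc_key_valid(idx, ['a', 'c'])
assert not loc_key_valid(idx, ['a', 'z'])
assert loc_key_valid(idx, slice('a', 'c'))
```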
+class _iLocIndexer(_LocationIndexer):
+ """ purely integer based positional indexing """
+ _valid_types = "integer, integer slice (START point is INCLUDED, END point is EXCLUDED), listlike of integers, boolean array"
+ _exception = IndexError
+
+ def _has_valid_type(self, key, axis):
+ return isinstance(key, slice) or com.is_integer(key) or com._is_bool_indexer(key) or _is_list_like(key)
+
+ def _getitem_tuple(self, tup):
+
+ self._has_valid_tuple(tup)
+ retval = self.obj
+ for i, key in enumerate(tup):
+ if _is_null_slice(key):
+ continue
+
+ retval = getattr(retval,self.name)._getitem_axis(key, axis=i)
+
+ return retval
+
+ def _get_slice_axis(self, slice_obj, axis=0):
+ obj = self.obj
+
+ if not _need_slice(slice_obj):
+ return obj
+
+ if isinstance(slice_obj, slice):
+ return self._slice(slice_obj, axis=axis, raise_on_error=True)
+ else:
+ return self.obj.take(slice_obj, axis=axis)
+
+ def _getitem_axis(self, key, axis=0):
+
+ if isinstance(key, slice):
+ return self._get_slice_axis(key, axis=axis)
+
+ elif com._is_bool_indexer(key):
+ return self._getbool_axis(key, axis=axis)
+
+ # a single integer or a list of integers
+ else:
+
+ if not (com.is_integer(key) or _is_list_like(key)):
+ raise ValueError("Cannot index by location index with a non-integer key")
+
+ return self._get_loc(key,axis=axis)
+
+ def _convert_to_indexer(self, obj, axis=0):
+ """ much simpler as we only have to deal with our valid types """
+ if self._has_valid_type(obj,axis):
+ return obj
+
+ raise ValueError("Can only index by location with a [%s]" % self._valid_types)
+
+
+class _ScalarAccessIndexer(_NDFrameIndexer):
+ """ access scalars quickly """
+
+ def _convert_key(self, key):
+ return list(key)
+
+ def __getitem__(self, key):
+ if not isinstance(key, tuple):
+
+ # we could have a convertible item here (e.g. Timestamp)
+ if not _is_list_like(key):
+ key = tuple([ key ])
+ else:
+ raise ValueError('Invalid call for scalar access (getting)!')
+
+ if len(key) != self.obj.ndim:
+ raise ValueError('Not enough indexers for scalar access (getting)!')
+ key = self._convert_key(key)
+ return self.obj.get_value(*key)
+
+ def __setitem__(self, key, value):
+ if not isinstance(key, tuple):
+ raise ValueError('Invalid call for scalar access (setting)!')
+ if len(key) != self.obj.ndim:
+ raise ValueError('Not enough indexers for scalar access (setting)!')
+ key = self._convert_key(key)
+ key.append(value)
+ self.obj.set_value(*key)
+
+class _AtIndexer(_ScalarAccessIndexer):
+ """ label based scalar accessor """
+ pass
+
+class _iAtIndexer(_ScalarAccessIndexer):
+ """ integer based scalar accessor """
+
+ def _convert_key(self, key):
+ """ require integer args (and convert to label arguments) """
+ ckey = []
+ for a, i in zip(self.obj.axes,key):
+ if not com.is_integer(i):
+ raise ValueError("iAt based indexing can only have integer indexers")
+ ckey.append(a[i])
+ return ckey
+
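`_iAtIndexer._convert_key` above validates that every positional key is an integer and maps it to the corresponding axis label, so `.iat` can reuse the label-based `get_value`/`set_value` fast path shared with `.at`. A minimal sketch of that conversion (the `convert_iat_key` helper and the stand-in axes are hypothetical; the real code uses `com.is_integer` rather than `isinstance`):

```python
# Each positional key is validated as an integer, then translated to
# the label at that position on the matching axis.

def convert_iat_key(axes, key):
    ckey = []
    for a, i in zip(axes, key):
        if not isinstance(i, int):
            raise ValueError("iAt based indexing can only have integer indexers")
        ckey.append(a[i])
    return ckey

axes = [['a', 'b', 'c'], ['x', 'y']]   # row labels, column labels
assert convert_iat_key(axes, (1, 0)) == ['b', 'x']
```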
# 32-bit floating point machine epsilon
_eps = np.finfo('f4').eps
@@ -737,6 +960,17 @@ def _need_slice(obj):
(obj.step is not None and obj.step != 1))
+def _check_slice_bounds(slobj, values):
+ l = len(values)
+ start = slobj.start
+ if start is not None:
+ if start < -l or start > l-1:
+ raise IndexError("out-of-bounds on slice (start)")
+ stop = slobj.stop
+ if stop is not None:
+ if stop < -l-1 or stop > l:
+ raise IndexError("out-of-bounds on slice (end)")
+
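`_check_slice_bounds` above is what makes `.iloc` strict: slices are forwarded with `raise_on_error=True`, so an endpoint outside `[-len, len]` raises `IndexError` instead of silently clipping the way `.ix` does. A standalone copy for illustration:

```python
# Standalone copy of the slice-bounds check: a start outside
# [-len, len-1] or a stop outside [-len-1, len] is out of bounds.

def check_slice_bounds(slobj, values):
    l = len(values)
    start = slobj.start
    if start is not None:
        if start < -l or start > l - 1:
            raise IndexError("out-of-bounds on slice (start)")
    stop = slobj.stop
    if stop is not None:
        if stop < -l - 1 or stop > l:
            raise IndexError("out-of-bounds on slice (end)")

vals = list(range(4))
check_slice_bounds(slice(1, 3), vals)      # in bounds: no error
try:
    check_slice_bounds(slice(1, 5), vals)  # stop=5 > len=4
except IndexError as e:
    print(e)                               # prints: out-of-bounds on slice (end)
```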
def _maybe_droplevels(index, key):
# drop levels
if isinstance(key, tuple):
diff --git a/pandas/core/internals.py b/pandas/core/internals.py
index 159393be38b07..5bf918aff6367 100644
--- a/pandas/core/internals.py
+++ b/pandas/core/internals.py
@@ -5,6 +5,7 @@
import numpy as np
from pandas.core.index import Index, _ensure_index, _handle_legacy_indexes
+from pandas.core.indexing import _check_slice_bounds
import pandas.core.common as com
import pandas.lib as lib
import pandas.tslib as tslib
@@ -1034,8 +1035,12 @@ def get_bool_data(self, copy=False, as_blocks=False):
return self.get_numeric_data(copy=copy, type_list=(BoolBlock,),
as_blocks=as_blocks)
- def get_slice(self, slobj, axis=0):
+ def get_slice(self, slobj, axis=0, raise_on_error=False):
new_axes = list(self.axes)
+
+ if raise_on_error:
+ _check_slice_bounds(slobj, new_axes[axis])
+
new_axes[axis] = new_axes[axis][slobj]
if axis == 0:
diff --git a/pandas/core/panel.py b/pandas/core/panel.py
index b418995ce3085..9f91d8add1eac 100644
--- a/pandas/core/panel.py
+++ b/pandas/core/panel.py
@@ -12,7 +12,7 @@
from pandas.core.categorical import Factor
from pandas.core.index import (Index, MultiIndex, _ensure_index,
_get_combined_index)
-from pandas.core.indexing import _NDFrameIndexer, _maybe_droplevels
+from pandas.core.indexing import _maybe_droplevels, _is_list_like
from pandas.core.internals import BlockManager, make_block, form_blocks
from pandas.core.series import Series
from pandas.core.frame import DataFrame
@@ -540,16 +540,6 @@ def _get_plane_axes(self, axis):
return index, columns
- # Fancy indexing
- _ix = None
-
- @property
- def ix(self):
- if self._ix is None:
- self._ix = _NDFrameIndexer(self)
-
- return self._ix
-
def _wrap_array(self, arr, axes, copy=False):
d = self._construct_axes_dict_from(self, axes, copy=copy)
return self._constructor(arr, **d)
@@ -679,8 +669,8 @@ def __getattr__(self, name):
raise AttributeError("'%s' object has no attribute '%s'" %
(type(self).__name__, name))
- def _slice(self, slobj, axis=0):
- new_data = self._data.get_slice(slobj, axis=axis)
+ def _slice(self, slobj, axis=0, raise_on_error=False):
+ new_data = self._data.get_slice(slobj, axis=axis, raise_on_error=raise_on_error)
return self._constructor(new_data)
def __setitem__(self, key, value):
@@ -1075,10 +1065,17 @@ def xs(self, key, axis=1, copy=True):
new_data = self._data.xs(key, axis=axis_number, copy=copy)
return self._constructor_sliced(new_data)
+ _xs = xs
+
def _ixs(self, i, axis=0):
# for compatibility with .ix indexing
# Won't work with hierarchical indexing yet
key = self._get_axis(axis)[i]
+
+ # xs cannot handle a non-scalar key, so just reindex here
+ if _is_list_like(key):
+ return self.reindex(**{ self._get_axis_name(axis) : key })
+
return self.xs(key, axis=axis)
def groupby(self, function, axis='major'):
diff --git a/pandas/core/series.py b/pandas/core/series.py
index b349dd65ff82d..27480d9e489be 100644
--- a/pandas/core/series.py
+++ b/pandas/core/series.py
@@ -20,7 +20,7 @@
_infer_dtype_from_scalar, is_list_like)
from pandas.core.index import (Index, MultiIndex, InvalidIndexError,
_ensure_index, _handle_legacy_indexes)
-from pandas.core.indexing import _SeriesIndexer, _check_bool_indexer
+from pandas.core.indexing import _SeriesIndexer, _check_bool_indexer, _check_slice_bounds
from pandas.tseries.index import DatetimeIndex
from pandas.tseries.period import PeriodIndex, Period
from pandas.util import py3compat
@@ -547,15 +547,58 @@ def __setstate__(self, state):
self.index = _handle_legacy_indexes([index])[0]
self.name = name
- _ix = None
+ # indexers
+ @property
+ def axes(self):
+ return [ self.index ]
@property
def ix(self):
if self._ix is None:
- self._ix = _SeriesIndexer(self)
+ self._ix = _SeriesIndexer(self, 'ix')
return self._ix
+ def _xs(self, key, axis=0, level=None, copy=True):
+ return self.__getitem__(key)
+
+ def _ixs(self, i, axis=0):
+ """
+ Return the i-th value or values in the Series by location
+
+ Parameters
+ ----------
+ i : int, slice, or sequence of integers
+
+ Returns
+ -------
+ value : scalar (int) or Series (slice, sequence)
+ """
+ try:
+ return _index.get_value_at(self, i)
+ except IndexError:
+ raise
+ except:
+ if isinstance(i, slice):
+ return self[i]
+ else:
+ label = self.index[i]
+ if isinstance(label, Index):
+ return self.reindex(label)
+ else:
+ return _index.get_value_at(self, i)
+
+
+ @property
+ def _is_mixed_type(self):
+ return False
+
+ def _slice(self, slobj, axis=0, raise_on_error=False):
+ if raise_on_error:
+ _check_slice_bounds(slobj, self.values)
+
+ return self._constructor(self.values[slobj], index=self.index[slobj])
+
def __getitem__(self, key):
try:
return self.index.get_value(self, key)
@@ -908,34 +951,9 @@ def get(self, label, default=None):
except KeyError:
return default
- def iget_value(self, i):
- """
- Return the i-th value or values in the Series by location
-
- Parameters
- ----------
- i : int, slice, or sequence of integers
-
- Returns
- -------
- value : scalar (int) or Series (slice, sequence)
- """
- try:
- return _index.get_value_at(self, i)
- except IndexError:
- raise
- except:
- if isinstance(i, slice):
- return self[i]
- else:
- label = self.index[i]
- if isinstance(label, Index):
- return self.reindex(label)
- else:
- return _index.get_value_at(self, i)
-
- iget = iget_value
- irow = iget_value
+ iget_value = _ixs
+ iget = _ixs
+ irow = _ixs
def get_value(self, label):
"""
diff --git a/pandas/sparse/frame.py b/pandas/sparse/frame.py
index bf978c322dbd2..f142b36534e22 100644
--- a/pandas/sparse/frame.py
+++ b/pandas/sparse/frame.py
@@ -10,6 +10,7 @@
from pandas.core.common import _pickle_array, _unpickle_array, _try_sort
from pandas.core.index import Index, MultiIndex, _ensure_index
+from pandas.core.indexing import _check_slice_bounds
from pandas.core.series import Series
from pandas.core.frame import (DataFrame, extract_index, _prep_ndarray,
_default_index)
@@ -416,11 +417,15 @@ def set_value(self, index, col, value):
return dense.to_sparse(kind=self.default_kind,
fill_value=self.default_fill_value)
- def _slice(self, slobj, axis=0):
+ def _slice(self, slobj, axis=0, raise_on_error=False):
if axis == 0:
+ if raise_on_error:
+ _check_slice_bounds(slobj, self.index)
new_index = self.index[slobj]
new_columns = self.columns
else:
+ if raise_on_error:
+ _check_slice_bounds(slobj, self.columns)
new_index = self.index
new_columns = self.columns[slobj]
diff --git a/pandas/tests/test_frame.py b/pandas/tests/test_frame.py
index 304072acc664e..d8dd2e8c6f0d0 100644
--- a/pandas/tests/test_frame.py
+++ b/pandas/tests/test_frame.py
@@ -6305,7 +6305,6 @@ def _check_set(df, cond, check_dtypes = True):
econd = cond.reindex_like(df).fillna(True)
expected = dfi.mask(~econd)
- #import pdb; pdb.set_trace()
dfi.where(cond, np.nan, inplace=True)
assert_frame_equal(dfi, expected)
diff --git a/pandas/tests/test_indexing.py b/pandas/tests/test_indexing.py
new file mode 100644
index 0000000000000..e48d8dbdcb498
--- /dev/null
+++ b/pandas/tests/test_indexing.py
@@ -0,0 +1,678 @@
+# pylint: disable-msg=W0612,E1101
+import unittest
+import nose
+import itertools
+
+from numpy import random, nan
+from numpy.random import randn
+import numpy as np
+from numpy.testing import assert_array_equal
+
+import pandas as pan
+import pandas.core.common as com
+from pandas.core.api import (DataFrame, Index, Series, Panel, notnull, isnull,
+ MultiIndex, DatetimeIndex, Timestamp)
+from pandas.util.testing import (assert_almost_equal, assert_series_equal,
+ assert_frame_equal, assert_panel_equal)
+from pandas.util import py3compat
+
+import pandas.util.testing as tm
+import pandas.lib as lib
+from pandas import date_range
+from numpy.testing.decorators import slow
+
+_verbose = False
+
+#-------------------------------------------------------------------------------
+# Indexing test cases
+
+
+def _generate_indices(f, values=False):
+ """ generate the indices:
+ if values is True, use the axis values;
+ if False, use the range
+ """
+
+ axes = f.axes
+ if values:
+ axes = [ range(len(a)) for a in axes ]
+
+ return itertools.product(*axes)
+
+def _get_value(f, i, values=False):
+ """ return the value for the location i """
+
+ # check against values
+ if values:
+ return f.values[i]
+
+ # this is equivalent to f[col][row]...
+ #v = f
+ #for a in reversed(i):
+ # v = v.__getitem__(a)
+ #return v
+ return f.ix[i]
+
+def _get_result(obj, method, key, axis):
+ """ return the result for this obj with this key and this axis """
+
+ if isinstance(key, dict):
+ key = key[axis]
+
+ # use an artificial conversion to map integer keys to the labels
+ # so ix can be used for comparisons
+ if method == 'indexer':
+ method = 'ix'
+ key = obj._get_axis(axis)[key]
+
+ # in case we actually want 0 index slicing
+ try:
+ xp = getattr(obj, method).__getitem__(_axify(obj,key,axis))
+ except:
+ xp = getattr(obj, method).__getitem__(key)
+
+ return xp
+
+def _axify(obj, key, axis):
+ # create a tuple accessor
+ if axis is not None:
+ axes = [ slice(None) ] * obj.ndim
+ axes[axis] = key
+ return tuple(axes)
+ return key
+
+
+class TestIndexing(unittest.TestCase):
+
+ _multiprocess_can_split_ = True
+
+ _objs = set(['series','frame','panel'])
+ _typs = set(['ints','labels','mixed','ts','floats','empty'])
+
+ def setUp(self):
+ import warnings
+ warnings.filterwarnings(action='ignore', category=FutureWarning)
+
+ self.series_ints = Series(np.random.rand(4), index=range(0,8,2))
+ self.frame_ints = DataFrame(np.random.randn(4, 4), index=range(0, 8, 2), columns=range(0,12,3))
+ self.panel_ints = Panel(np.random.rand(4,4,4), items=range(0,8,2),major_axis=range(0,12,3),minor_axis=range(0,16,4))
+
+ self.series_labels = Series(np.random.randn(4), index=list('abcd'))
+ self.frame_labels = DataFrame(np.random.randn(4, 4), index=list('abcd'), columns=list('ABCD'))
+ self.panel_labels = Panel(np.random.randn(4,4,4), items=list('abcd'), major_axis=list('ABCD'), minor_axis=list('ZYXW'))
+
+ self.series_mixed = Series(np.random.randn(4), index=[2, 4, 'null', 8])
+ self.frame_mixed = DataFrame(np.random.randn(4, 4), index=[2, 4, 'null', 8])
+ self.panel_mixed = Panel(np.random.randn(4,4,4), items=[2,4,'null',8])
+
+ self.series_ts = Series(np.random.randn(4), index=date_range('20130101', periods=4))
+ self.frame_ts = DataFrame(np.random.randn(4, 4), index=date_range('20130101', periods=4))
+ self.panel_ts = Panel(np.random.randn(4, 4, 4), items=date_range('20130101', periods=4))
+
+ #self.series_floats = Series(np.random.randn(4), index=[1.00, 2.00, 3.00, 4.00])
+ #self.frame_floats = DataFrame(np.random.randn(4, 4), columns=[1.00, 2.00, 3.00, 4.00])
+ #self.panel_floats = Panel(np.random.rand(4,4,4), items = [1.00,2.00,3.00,4.00])
+
+ self.frame_empty = DataFrame({})
+ self.series_empty = Series({})
+ self.panel_empty = Panel({})
+
+ # form agglomerates
+ for o in self._objs:
+
+ d = dict()
+ for t in self._typs:
+ d[t] = getattr(self,'%s_%s' % (o,t),None)
+
+ setattr(self,o,d)
+
+ def check_values(self, f, func, values = False):
+
+ if f is None: return
+ axes = f.axes
+ indices = itertools.product(*axes)
+
+ for i in indices:
+ result = getattr(f,func)[i]
+
+ # check against values
+ if values:
+ expected = f.values[i]
+ else:
+ expected = f
+ for a in reversed(i):
+ expected = expected.__getitem__(a)
+
+ assert_almost_equal(result, expected)
+
+
+ def check_result(self, name, method1, key1, method2, key2, typs = None, objs = None, axes = None, fails = None):
+
+
+ def _eq(t, o, a, obj, k1, k2):
+ """ compare equal for these 2 keys """
+
+ if a is not None and a > obj.ndim-1:
+ return
+
+ def _print(result, error = None):
+ if error is not None:
+ error = str(error)
+ v = "%-16.16s [%-16.16s]: [typ->%-8.8s,obj->%-8.8s,key1->(%-4.4s),key2->(%-4.4s),axis->%s] %s" % (name,result,t,o,method1,method2,a,error or '')
+ if _verbose:
+ print(v)
+
+ try:
+
+ ### good debug location ###
+ #if name == 'bool' and t == 'empty' and o == 'series' and method1 == 'loc':
+ # import pdb; pdb.set_trace()
+
+ rs = getattr(obj, method1).__getitem__(_axify(obj,k1,a))
+
+ try:
+ xp = _get_result(obj,method2,k2,a)
+ except:
+ result = 'no comp'
+ _print(result)
+ return
+
+ try:
+ if np.isscalar(rs) and np.isscalar(xp):
+ self.assert_(rs == xp)
+ elif xp.ndim == 1:
+ assert_series_equal(rs,xp)
+ elif xp.ndim == 2:
+ assert_frame_equal(rs,xp)
+ elif xp.ndim == 3:
+ assert_panel_equal(rs,xp)
+ result = 'ok'
+ except (AssertionError):
+ result = 'fail'
+
+ # reverse the checks
+ if fails is True:
+ if result == 'fail':
+ result = 'ok (fail)'
+
+ if not result.startswith('ok'):
+ raise AssertionError(_print(result))
+
+ _print(result)
+
+ except (AssertionError):
+ raise
+ except (TypeError):
+ raise AssertionError(_print('type error'))
+ except (Exception), detail:
+
+ # if we are in fails, then ok, otherwise raise it
+ if fails is not None:
+ if fails == type(detail):
+ result = 'ok (%s)' % type(detail).__name__
+ _print(result)
+ return
+
+ result = type(detail).__name__
+ raise AssertionError(_print(result, error = detail))
+
+ if typs is None:
+ typs = self._typs
+
+ if objs is None:
+ objs = self._objs
+
+ if axes is not None:
+ if not isinstance(axes,(tuple,list)):
+ axes = [ axes ]
+ else:
+ axes = list(axes)
+ else:
+ axes = [ 0, 1, 2]
+
+ # check
+ for o in objs:
+ if o not in self._objs:
+ continue
+
+ d = getattr(self,o)
+ for a in axes:
+ for t in typs:
+ if t not in self._typs:
+ continue
+
+ obj = d[t]
+ if obj is not None:
+ obj = obj.copy()
+
+ k2 = key2
+ _eq(t, o, a, obj, key1, k2)
+
+ def test_at_and_iat_get(self):
+
+ def _check(f, func, values = False):
+
+ if f is not None:
+ indices = _generate_indices(f, values)
+ for i in indices:
+ result = getattr(f,func)[i]
+ expected = _get_value(f,i,values)
+ assert_almost_equal(result, expected)
+
+ for o in self._objs:
+
+ d = getattr(self,o)
+
+ # iat
+ _check(d['ints'],'iat', values=True)
+ for f in [d['labels'],d['ts'],d['floats']]:
+ if f is not None:
+ self.assertRaises(ValueError, self.check_values, f, 'iat')
+
+ # at
+ _check(d['ints'], 'at')
+ _check(d['labels'],'at')
+ _check(d['ts'], 'at')
+ _check(d['floats'],'at')
+
+ def test_at_and_iat_set(self):
+
+ def _check(f, func, values = False):
+
+ if f is not None:
+ indices = _generate_indices(f, values)
+ for i in indices:
+ getattr(f,func)[i] = 1
+ expected = _get_value(f,i,values)
+ assert_almost_equal(expected, 1)
+
+ for t in self._objs:
+
+ d = getattr(self,t)
+
+ _check(d['ints'],'iat',values=True)
+ for f in [d['labels'],d['ts'],d['floats']]:
+ if f is not None:
+ self.assertRaises(ValueError, _check, f, 'iat')
+
+ # at
+ _check(d['ints'], 'at')
+ _check(d['labels'],'at')
+ _check(d['ts'], 'at')
+ _check(d['floats'],'at')
+
+ def test_at_timestamp(self):
+
+ # as timestamp is not a tuple!
+ dates = date_range('1/1/2000', periods=8)
+ df = DataFrame(randn(8, 4), index=dates, columns=['A', 'B', 'C', 'D'])
+ s = df['A']
+
+ result = s.at[dates[5]]
+ xp = s.values[5]
+ self.assert_(result == xp)
+
+ def test_iat_invalid_args(self):
+ pass
+
+ def test_iloc_getitem_int(self):
+
+ # integer
+ self.check_result('integer', 'iloc', 2, 'ix', { 0 : 4, 1: 6, 2: 8 }, typs = ['ints'])
+ self.check_result('integer', 'iloc', 2, 'indexer', 2, typs = ['labels','mixed','ts','floats','empty'], fails = IndexError)
+
+ def test_iloc_getitem_neg_int(self):
+
+ # neg integer
+ self.check_result('neg int', 'iloc', -1, 'ix', { 0 : 6, 1: 9, 2: 12 }, typs = ['ints'])
+ self.check_result('neg int', 'iloc', -1, 'indexer', -1, typs = ['labels','mixed','ts','floats','empty'], fails = IndexError)
+
+ def test_iloc_getitem_list_int(self):
+
+ # list of ints
+ self.check_result('list int', 'iloc', [0,1,2], 'ix', { 0 : [0,2,4], 1 : [0,3,6], 2: [0,4,8] }, typs = ['ints'])
+ self.check_result('list int', 'iloc', [0,1,2], 'indexer', [0,1,2], typs = ['labels','mixed','ts','floats','empty'], fails = IndexError)
+
+ def test_iloc_getitem_dups(self):
+
+ # no dups in panel (bug?)
+ self.check_result('list int (dups)', 'iloc', [0,1,1,3], 'ix', { 0 : [0,2,2,6], 1 : [0,3,3,9] }, objs = ['series','frame'], typs = ['ints'])
+
+ def test_iloc_getitem_array(self):
+
+ # array like
+ s = Series(index=range(1,4))
+ self.check_result('array like', 'iloc', s.index, 'ix', { 0 : [2,4,6], 1 : [3,6,9], 2: [4,8,12] }, typs = ['ints'])
+
+ def test_iloc_getitem_bool(self):
+
+ # boolean indexers
+ b = [True,False,True,False,]
+ self.check_result('bool', 'iloc', b, 'ix', b, typs = ['ints'])
+ self.check_result('bool', 'iloc', b, 'ix', b, typs = ['labels','mixed','ts','floats','empty'], fails = IndexError)
+
+ def test_iloc_getitem_slice(self):
+
+ # slices
+ self.check_result('slice', 'iloc', slice(1,3), 'ix', { 0 : [2,4], 1: [3,6], 2: [4,8] }, typs = ['ints'])
+ self.check_result('slice', 'iloc', slice(1,3), 'indexer', slice(1,3), typs = ['labels','mixed','ts','floats','empty'], fails = IndexError)
+
+ def test_iloc_getitem_out_of_bounds(self):
+
+ # out-of-bounds slice
+ self.assertRaises(IndexError, self.frame_ints.iloc.__getitem__, tuple([slice(None),slice(1,5,None)]))
+ self.assertRaises(IndexError, self.frame_ints.iloc.__getitem__, tuple([slice(None),slice(-5,3,None)]))
+ self.assertRaises(IndexError, self.frame_ints.iloc.__getitem__, tuple([slice(1,5,None)]))
+ self.assertRaises(IndexError, self.frame_ints.iloc.__getitem__, tuple([slice(-5,3,None)]))
+
+ def test_iloc_setitem(self):
+ df = self.frame_ints
+
+ df.iloc[1,1] = 1
+ result = df.iloc[1,1]
+ self.assert_(result == 1)
+
+ df.iloc[:,2:3] = 0
+ expected = df.iloc[:,2:3]
+ result = df.iloc[:,2:3]
+ assert_frame_equal(result, expected)
+
+ def test_iloc_multiindex(self):
+ df = DataFrame(np.random.randn(3, 3),
+ columns=[[2,2,4],[6,8,10]],
+ index=[[4,4,8],[8,10,12]])
+
+ rs = df.iloc[2]
+ xp = df.irow(2)
+ assert_series_equal(rs, xp)
+
+ rs = df.iloc[:,2]
+ xp = df.icol(2)
+ assert_series_equal(rs, xp)
+
+ rs = df.iloc[2,2]
+ xp = df.values[2,2]
+ self.assert_(rs == xp)
+
+ def test_loc_getitem_int(self):
+
+ # int label
+ self.check_result('int label', 'loc', 2, 'ix', 2, typs = ['ints'], axes = 0)
+ self.check_result('int label', 'loc', 3, 'ix', 3, typs = ['ints'], axes = 1)
+ self.check_result('int label', 'loc', 4, 'ix', 4, typs = ['ints'], axes = 2)
+ self.check_result('int label', 'loc', 2, 'ix', 2, typs = ['label'], fails = KeyError)
+
+ def test_loc_getitem_label(self):
+
+ # label
+ self.check_result('label', 'loc', 'c', 'ix', 'c', typs = ['labels'], axes=0)
+ self.check_result('label', 'loc', 'null', 'ix', 'null', typs = ['mixed'] , axes=0)
+ self.check_result('label', 'loc', 8, 'ix', 8, typs = ['mixed'] , axes=0)
+ self.check_result('label', 'loc', Timestamp('20130102'), 'ix', 1, typs = ['ts'], axes=0)
+ self.check_result('label', 'loc', 'c', 'ix', 'c', typs = ['empty'], fails = KeyError)
+
+ def test_loc_getitem_label_out_of_range(self):
+
+ # out of range label
+ self.check_result('label range', 'loc', 'f', 'ix', 'f', typs = ['ints','labels','mixed','ts','floats'], fails=KeyError)
+
+ def test_loc_getitem_label_list(self):
+
+ # list of labels
+ self.check_result('list lbl', 'loc', [0,2,4], 'ix', [0,2,4], typs = ['ints'], axes=0)
+ self.check_result('list lbl', 'loc', [3,6,9], 'ix', [3,6,9], typs = ['ints'], axes=1)
+ self.check_result('list lbl', 'loc', [4,8,12], 'ix', [4,8,12], typs = ['ints'], axes=2)
+ self.check_result('list lbl', 'loc', ['a','b','d'], 'ix', ['a','b','d'], typs = ['labels'], axes=0)
+ self.check_result('list lbl', 'loc', ['A','B','C'], 'ix', ['A','B','C'], typs = ['labels'], axes=1)
+ self.check_result('list lbl', 'loc', ['Z','Y','W'], 'ix', ['Z','Y','W'], typs = ['labels'], axes=2)
+ self.check_result('list lbl', 'loc', [2,8,'null'], 'ix', [2,8,'null'], typs = ['mixed'], axes=0)
+ self.check_result('list lbl', 'loc', [Timestamp('20130102'),Timestamp('20130103')], 'ix',
+ [Timestamp('20130102'),Timestamp('20130103')], typs = ['ts'], axes=0)
+
+ # fails
+ self.check_result('list lbl', 'loc', [0,1,2], 'indexer', [0,1,2], typs = ['empty'], fails = KeyError)
+ self.check_result('list lbl', 'loc', [0,2,3], 'ix', [0,2,3], typs = ['ints'], axes=0, fails = KeyError)
+ self.check_result('list lbl', 'loc', [3,6,7], 'ix', [3,6,9], typs = ['ints'], axes=1, fails = KeyError)
+ self.check_result('list lbl', 'loc', [4,8,10], 'ix', [4,8,12], typs = ['ints'], axes=2, fails = KeyError)
+
+ # array like
+ self.check_result('array like', 'loc', Series(index=[0,2,4]).index, 'ix', [0,2,4], typs = ['ints'], axes=0)
+ self.check_result('array like', 'loc', Series(index=[3,6,9]).index, 'ix', [3,6,9], typs = ['ints'], axes=1)
+ self.check_result('array like', 'loc', Series(index=[4,8,12]).index, 'ix', [4,8,12], typs = ['ints'], axes=2)
+
+ def test_loc_getitem_bool(self):
+
+ # boolean indexers
+ b = [True,False,True,False]
+ self.check_result('bool', 'loc', b, 'ix', b, typs = ['ints','labels','mixed','ts','floats'])
+ self.check_result('bool', 'loc', b, 'ix', b, typs = ['empty'], fails = KeyError)
+
+ def test_loc_getitem_int_slice(self):
+
+ # int slices in int
+ self.check_result('int slice1', 'loc', slice(1,3), 'ix', { 0 : [2,4], 1: [3,6], 2: [4,8] }, typs = ['ints'], fails=KeyError)
+
+ # ok
+ self.check_result('int slice2', 'loc', slice(2,5), 'ix', [2,4], typs = ['ints'], axes = 0)
+ self.check_result('int slice2', 'loc', slice(3,7), 'ix', [3,6], typs = ['ints'], axes = 1)
+ self.check_result('int slice2', 'loc', slice(4,9), 'ix', [4,8], typs = ['ints'], axes = 2)
+
+ def test_loc_getitem_label_slice(self):
+
+ # label slices (with ints)
+ self.check_result('lab slice', 'loc', slice(1,3), 'ix', slice(1,3), typs = ['labels','mixed','ts','floats','empty'], fails=KeyError)
+
+ # real label slices
+ self.check_result('lab slice', 'loc', slice('a','c'), 'ix', slice('a','c'), typs = ['labels'], axes=0)
+ self.check_result('lab slice', 'loc', slice('A','C'), 'ix', slice('A','C'), typs = ['labels'], axes=1)
+ self.check_result('lab slice', 'loc', slice('W','Z'), 'ix', slice('W','Z'), typs = ['labels'], axes=2)
+
+ self.check_result('ts slice', 'loc', slice('20130102','20130104'), 'ix', slice('20130102','20130104'), typs = ['ts'], axes=0)
+ self.check_result('ts slice', 'loc', slice('20130102','20130104'), 'ix', slice('20130102','20130104'), typs = ['ts'], axes=1, fails=KeyError)
+ self.check_result('ts slice', 'loc', slice('20130102','20130104'), 'ix', slice('20130102','20130104'), typs = ['ts'], axes=2, fails=KeyError)
+
+ self.check_result('mixed slice', 'loc', slice(2,8), 'ix', slice(2,8), typs = ['mixed'], axes=0, fails=KeyError)
+ self.check_result('mixed slice', 'loc', slice(2,8), 'ix', slice(2,8), typs = ['mixed'], axes=1, fails=KeyError)
+ self.check_result('mixed slice', 'loc', slice(2,8), 'ix', slice(2,8), typs = ['mixed'], axes=2, fails=KeyError)
+
+ # you would think this would work, but we don't have an ordering, so fail
+ self.check_result('mixed slice', 'loc', slice(2,5,2), 'ix', slice(2,4,2), typs = ['mixed'], axes=0, fails=ValueError)
+
+ def test_loc_general(self):
+
+ # GH 2922 (these are fails)
+ df = DataFrame(np.random.rand(4,4),columns=['A','B','C','D'])
+ self.assertRaises(KeyError, df.loc.__getitem__, tuple([slice(0,2),slice(0,2)]))
+
+ df = DataFrame(np.random.rand(4,4),columns=['A','B','C','D'], index=['A','B','C','D'])
+ self.assertRaises(KeyError, df.loc.__getitem__, tuple([slice(0,2),df.columns[0:2]]))
+
+ # want this to work
+ result = df.loc[:,"A":"B"].iloc[0:2,:]
+ self.assert_((result.columns == ['A','B']).all() == True)
+ self.assert_((result.index == ['A','B']).all() == True)
+
+ def test_loc_setitem_frame(self):
+ df = self.frame_labels
+
+ result = df.iloc[0,0]
+
+ df.loc['a','A'] = 1
+ result = df.loc['a','A']
+ self.assert_(result == 1)
+
+ result = df.iloc[0,0]
+ self.assert_(result == 1)
+
+ df.loc[:,'B':'D'] = 0
+ expected = df.loc[:,'B':'D']
+ result = df.ix[:,1:]
+ assert_frame_equal(result, expected)
+
+ def test_iloc_getitem_frame(self):
+ """ originally from test_frame.py"""
+ df = DataFrame(np.random.randn(10, 4), index=range(0, 20, 2), columns=range(0,8,2))
+
+ result = df.iloc[2]
+ exp = df.ix[4]
+ assert_series_equal(result, exp)
+
+ result = df.iloc[2,2]
+ exp = df.ix[4,4]
+ self.assert_(result == exp)
+
+ # slice
+ result = df.iloc[4:8]
+ expected = df.ix[8:14]
+ assert_frame_equal(result, expected)
+
+ result = df.iloc[:,2:3]
+ expected = df.ix[:,4:5]
+ assert_frame_equal(result, expected)
+
+ # list of integers
+ result = df.iloc[[0,1,3]]
+ expected = df.ix[[0,2,6]]
+ assert_frame_equal(result, expected)
+
+ result = df.iloc[[0,1,3],[0,1]]
+ expected = df.ix[[0,2,6],[0,2]]
+ assert_frame_equal(result, expected)
+
+ # neg indicies
+ result = df.iloc[[-1,1,3],[-1,1]]
+ expected = df.ix[[18,2,6],[6,2]]
+ assert_frame_equal(result, expected)
+
+ # dups indicies
+ result = df.iloc[[-1,-1,1,3],[-1,1]]
+ expected = df.ix[[18,18,2,6],[6,2]]
+ assert_frame_equal(result, expected)
+
+ # with index-like
+ s = Series(index=range(1,5))
+ result = df.iloc[s.index]
+ expected = df.ix[[2,4,6,8]]
+ assert_frame_equal(result, expected)
+
+ # out-of-bounds slice
+ self.assertRaises(IndexError, df.iloc.__getitem__, tuple([slice(None),slice(1,5,None)]))
+ self.assertRaises(IndexError, df.iloc.__getitem__, tuple([slice(None),slice(-5,3,None)]))
+ self.assertRaises(IndexError, df.iloc.__getitem__, tuple([slice(1,11,None)]))
+ self.assertRaises(IndexError, df.iloc.__getitem__, tuple([slice(-11,3,None)]))
+
+ # try with labelled frame
+ df = DataFrame(np.random.randn(10, 4), index=list('abcdefghij'), columns=list('ABCD'))
+
+ result = df.iloc[1,1]
+ exp = df.ix['b','B']
+ self.assert_(result == exp)
+
+ result = df.iloc[:,2:3]
+ expected = df.ix[:,['C']]
+ assert_frame_equal(result, expected)
+
+ # negative indexing
+ result = df.iloc[-1,-1]
+ exp = df.ix['j','D']
+ self.assert_(result == exp)
+
+ # out-of-bounds exception
+ self.assertRaises(IndexError, df.iloc.__getitem__, tuple([10,5]))
+
+ # trying to use a label
+ self.assertRaises(ValueError, df.iloc.__getitem__, tuple(['j','D']))
+
+ def test_iloc_setitem_series(self):
+ """ originally from test_series.py """
+ df = DataFrame(np.random.randn(10, 4), index=list('abcdefghij'), columns=list('ABCD'))
+
+ df.iloc[1,1] = 1
+ result = df.iloc[1,1]
+ self.assert_(result == 1)
+
+ df.iloc[:,2:3] = 0
+ expected = df.iloc[:,2:3]
+ result = df.iloc[:,2:3]
+ assert_frame_equal(result, expected)
+
+ def test_iloc_setitem_series(self):
+ s = Series(np.random.randn(10), index=range(0,20,2))
+
+ s.iloc[1] = 1
+ result = s.iloc[1]
+ self.assert_(result == 1)
+
+ s.iloc[:4] = 0
+ expected = s.iloc[:4]
+ result = s.iloc[:4]
+ assert_series_equal(result, expected)
+
+ def test_iloc_multiindex(self):
+ mi_labels = DataFrame(np.random.randn(4, 3), columns=[['i', 'i', 'j'],
+ ['A', 'A', 'B']],
+ index=[['i', 'i', 'j', 'k'], ['X', 'X', 'Y','Y']])
+
+ mi_int = DataFrame(np.random.randn(3, 3),
+ columns=[[2,2,4],[6,8,10]],
+ index=[[4,4,8],[8,10,12]])
+
+
+ # the first row
+ rs = mi_int.iloc[0]
+ xp = mi_int.ix[4].ix[8]
+ assert_series_equal(rs, xp)
+
+ # 2nd (last) columns
+ rs = mi_int.iloc[:,2]
+ xp = mi_int.ix[:,2]
+ assert_series_equal(rs, xp)
+
+ # corner column
+ rs = mi_int.iloc[2,2]
+ xp = mi_int.ix[:,2].ix[2]
+ self.assert_(rs == xp)
+
+ # this is basically regular indexing
+ rs = mi_labels.iloc[2,2]
+ xp = mi_labels.ix['j'].ix[:,'j'].ix[0,0]
+ self.assert_(rs == xp)
+
+ def test_loc_multiindex(self):
+
+ mi_labels = DataFrame(np.random.randn(3, 3), columns=[['i', 'i', 'j'],
+ ['A', 'A', 'B']],
+ index=[['i', 'i', 'j'], ['X', 'X', 'Y']])
+
+ mi_int = DataFrame(np.random.randn(3, 3),
+ columns=[[2,2,4],[6,8,10]],
+ index=[[4,4,8],[8,10,12]])
+
+ # the first row
+ rs = mi_labels.loc['i']
+ xp = mi_labels.ix['i']
+ assert_frame_equal(rs, xp)
+
+ # 2nd (last) columns
+ rs = mi_labels.loc[:,'j']
+ xp = mi_labels.ix[:,'j']
+ assert_frame_equal(rs, xp)
+
+ # corner column
+ rs = mi_labels.loc['j'].loc[:,'j']
+ xp = mi_labels.ix['j'].ix[:,'j']
+ assert_frame_equal(rs,xp)
+
+ # with a tuple
+ rs = mi_labels.loc[('i','X')]
+ xp = mi_labels.ix[('i','X')]
+ assert_frame_equal(rs,xp)
+
+ rs = mi_int.loc[4]
+ xp = mi_int.ix[4]
+ assert_frame_equal(rs,xp)
+
+if __name__ == '__main__':
+ import nose
+ nose.runmodule(argv=[__file__, '-vvs', '-x', '--pdb', '--pdb-failure'],
+ exit=False)
 | Updated to include the new indexers:
`.iloc` for pure integer-based indexing
`.loc` for pure label-based indexing
`.iat` for fast scalar access by integer location
`.at` for fast scalar access by label
Much-updated docs, test suite, and examples.
In the new `test_indexing.py`, you can set the `_verbose` flag to True to get more test output.
Anybody interested can investigate the couple of cases marked `no comp`, which are where the new
indexing behavior differs from `.ix` (or where `.ix` doesn't work); this doesn't include cases where a `KeyError`/`IndexError` is raised (but `.ix` lets these through).
Also, I wrote `.iloc` on top of `.ix`, but most methods are overridden; it is possible that this lets something through that should not, so please take a look.
_Please try this out and let me know if any of the docs or interface semantics are off._
| https://api.github.com/repos/pandas-dev/pandas/pulls/2922 | 2013-02-25T03:49:56Z | 2013-03-07T04:17:17Z | 2013-03-07T04:17:17Z | 2014-07-28T19:25:24Z |
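The label/position split introduced by this PR survives in modern pandas; a minimal sketch with current pandas (none of this code is from the PR — the deprecated `.ix` used in the tests above has since been removed):

```python
import numpy as np
import pandas as pd

# a small labelled frame, like the ones used in the tests above
df = pd.DataFrame(np.arange(16).reshape(4, 4),
                  index=list("abcd"), columns=list("ABCD"))

# .loc is purely label-based: both endpoints of a label slice are inclusive
label_block = df.loc["a":"b", "A":"B"]   # rows a,b and columns A,B

# .iloc is purely position-based: half-open slices, like numpy
pos_block = df.iloc[0:2, 0:2]            # the same block, by position

# .at / .iat give fast scalar access by label / by integer position
assert df.at["a", "A"] == df.iat[0, 0]
```

The two blocks select identical data, which is the point of the split: `.loc` never falls back to positions and `.iloc` never falls back to labels, unlike the old `.ix`.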
ENH: change flatten to ravel (in internals/pytables) | diff --git a/pandas/core/internals.py b/pandas/core/internals.py
index 8d4473301a00a..159393be38b07 100644
--- a/pandas/core/internals.py
+++ b/pandas/core/internals.py
@@ -486,7 +486,7 @@ def where(self, other, cond, raise_on_error = True, try_cast = False):
# our where function
def func(c,v,o):
- if c.flatten().all():
+ if c.ravel().all():
return v
try:
@@ -649,7 +649,7 @@ class ObjectBlock(Block):
@property
def is_bool(self):
""" we can be a bool if we have only bool values but are of type object """
- return lib.is_bool_array(self.values.flatten())
+ return lib.is_bool_array(self.values.ravel())
def convert(self, convert_dates = True, convert_numeric = True, copy = True):
""" attempt to coerce any object types to better types
@@ -751,7 +751,7 @@ def make_block(values, items, ref_items):
# try to infer a datetimeblock
if klass is None and np.prod(values.shape):
- flat = values.flatten()
+ flat = values.ravel()
inferred_type = lib.infer_dtype(flat)
if inferred_type == 'datetime':
diff --git a/pandas/io/pytables.py b/pandas/io/pytables.py
index 1db60eb92863d..8067d7e0be17f 100644
--- a/pandas/io/pytables.py
+++ b/pandas/io/pytables.py
@@ -1088,7 +1088,7 @@ def set_atom(self, block, existing_col, min_itemsize, nan_rep, **kwargs):
self.values = list(block.items)
dtype = block.dtype.name
- inferred_type = lib.infer_dtype(block.values.flatten())
+ inferred_type = lib.infer_dtype(block.values.ravel())
if inferred_type == 'datetime64':
self.set_atom_datetime64(block)
@@ -1116,7 +1116,7 @@ def set_atom_string(self, block, existing_col, min_itemsize, nan_rep):
data = block.fillna(nan_rep).values
# itemsize is the maximum length of a string (along any dimension)
- itemsize = lib.max_len_string_array(data.flatten())
+ itemsize = lib.max_len_string_array(data.ravel())
# specified min_itemsize?
if isinstance(min_itemsize, dict):
@@ -1209,7 +1209,7 @@ def convert(self, values, nan_rep):
# convert nans
if self.kind == 'string':
self.data = lib.array_replace_from_nan_rep(
- self.data.flatten(), nan_rep).reshape(self.data.shape)
+ self.data.ravel(), nan_rep).reshape(self.data.shape)
return self
def get_attr(self):
@@ -1628,7 +1628,7 @@ def write_array(self, key, value):
if value.dtype.type == np.object_:
# infer the type, warn if we have a non-string type here (for performance)
- inferred_type = lib.infer_dtype(value.flatten())
+ inferred_type = lib.infer_dtype(value.ravel())
if empty_array:
pass
elif inferred_type == 'string':
diff --git a/pandas/stats/var.py b/pandas/stats/var.py
index 9390eef95700a..e993b60e18a39 100644
--- a/pandas/stats/var.py
+++ b/pandas/stats/var.py
@@ -342,7 +342,7 @@ def _forecast_cov_beta_raw(self, n):
for t in xrange(T + 1):
index = t + p
- y = values.take(xrange(index, index - p, -1), axis=0).flatten()
+ y = values.take(xrange(index, index - p, -1), axis=0).ravel()
trans_Z = np.hstack(([1], y))
trans_Z = trans_Z.reshape(1, len(trans_Z))
diff --git a/vb_suite/io_bench.py b/vb_suite/io_bench.py
index 0fe0dd511e1b5..ba386bd0e9649 100644
--- a/vb_suite/io_bench.py
+++ b/vb_suite/io_bench.py
@@ -45,6 +45,20 @@
frame_to_csv = Benchmark("df.to_csv('__test__.csv')", setup,
start_date=datetime(2011, 1, 1))
+#----------------------------------
+setup = common_setup + """
+from pandas import concat, Timestamp
+
+df_float = DataFrame(np.random.randn(1000, 30),dtype='float64')
+df_int = DataFrame(np.random.randn(1000, 30),dtype='int64')
+df_bool = DataFrame(True,index=df_float.index,columns=df_float.columns)
+df_object = DataFrame('foo',index=df_float.index,columns=df_float.columns)
+df_dt = DataFrame(Timestamp('20010101'),index=df_float.index,columns=df_float.columns)
+df = concat([ df_float, df_int, df_bool, df_object, df_dt ], axis=1)
+"""
+frame_to_csv_mixed = Benchmark("df.to_csv('__test__.csv')", setup,
+ start_date=datetime(2012, 6, 1))
+
#----------------------------------------------------------------------
# parse dates, ISO8601 format
 | - micro-tweak yields small but non-zero perf gains, as ravel doesn't copy unnecessarily
- added frame_to_csv_mixed to vbench
| https://api.github.com/repos/pandas-dev/pandas/pulls/2921 | 2013-02-24T15:08:00Z | 2013-02-25T12:30:33Z | 2013-02-25T12:30:33Z | 2013-02-25T12:30:33Z |
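The reason the swap is a (small) win — `ravel` returns a view when the input is contiguous, while `flatten` always copies — can be checked directly; this sketch is an illustration, not code from the PR:

```python
import numpy as np

values = np.arange(12).reshape(3, 4)

r = values.ravel()    # contiguous input: no copy, just a view
f = values.flatten()  # always allocates a fresh array

# a view shares memory with the original array; a copy does not
assert np.shares_memory(values, r)
assert not np.shares_memory(values, f)
```

For non-contiguous input (e.g. `values.T`) `ravel` must also copy, so the gain only shows up on the common contiguous case — which is exactly what the changed call sites pass.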
BLD: fixed file modes | diff --git a/RELEASE.rst b/RELEASE.rst
old mode 100755
new mode 100644
index 9e133ed324694..e41731131d888
--- a/RELEASE.rst
+++ b/RELEASE.rst
@@ -136,6 +136,7 @@ pandas 0.11.0
.. _GH2898: https://github.com/pydata/pandas/issues/2898
.. _GH2909: https://github.com/pydata/pandas/issues/2909
+
pandas 0.10.1
=============
diff --git a/pandas/core/frame.py b/pandas/core/frame.py
old mode 100755
new mode 100644
index c50de028917d7..eb3dbfd01abd7
--- a/pandas/core/frame.py
+++ b/pandas/core/frame.py
@@ -53,6 +53,7 @@
from pandas.core.config import get_option
+
#----------------------------------------------------------------------
# Docstring templates
diff --git a/pandas/tests/test_frame.py b/pandas/tests/test_frame.py
old mode 100755
new mode 100644
index 2de7102dabc74..81fbc0fc4d84d
--- a/pandas/tests/test_frame.py
+++ b/pandas/tests/test_frame.py
@@ -35,7 +35,6 @@
from numpy.testing.decorators import slow
-
def _skip_if_no_scipy():
try:
import scipy.stats
| https://api.github.com/repos/pandas-dev/pandas/pulls/2920 | 2013-02-24T02:03:50Z | 2013-02-24T02:07:10Z | 2013-02-24T02:07:10Z | 2013-02-24T02:17:44Z | |
ENH: add escape parameter to to_html() | diff --git a/RELEASE.rst b/RELEASE.rst
index b7a79e3e24175..11fe5ef1e0860 100644
--- a/RELEASE.rst
+++ b/RELEASE.rst
@@ -131,6 +131,9 @@ pandas 0.11.0
- Add ``time()`` method to DatetimeIndex (GH3180_)
- Return NA when using Series.str[...] for values that are not long enough
(GH3223_)
+ - to_html() now accepts an optional "escape" argument to control reserved
+ HTML character escaping (enabled by default) and escapes ``&``, in addition
+ to ``<`` and ``>``. (GH2919_)
**API Changes**
@@ -390,6 +393,7 @@ pandas 0.11.0
.. _GH3238: https://github.com/pydata/pandas/issues/3238
.. _GH3258: https://github.com/pydata/pandas/issues/3258
.. _GH3283: https://github.com/pydata/pandas/issues/3283
+.. _GH2919: https://github.com/pydata/pandas/issues/2919
pandas 0.10.1
=============
diff --git a/doc/source/v0.11.0.txt b/doc/source/v0.11.0.txt
index c6553b909f7a6..1385c217b6550 100644
--- a/doc/source/v0.11.0.txt
+++ b/doc/source/v0.11.0.txt
@@ -325,6 +325,10 @@ Enhancements
- Treat boolean values as integers (values 1 and 0) for numeric
operations. (GH2641_)
+ - to_html() now accepts an optional "escape" argument to control reserved
+ HTML character escaping (enabled by default) and escapes ``&``, in addition
+ to ``<`` and ``>``. (GH2919_)
+
See the `full release notes
<https://github.com/pydata/pandas/blob/master/RELEASE.rst>`__ or issue tracker
on GitHub for a complete list.
@@ -350,3 +354,4 @@ on GitHub for a complete list.
.. _GH3070: https://github.com/pydata/pandas/issues/3070
.. _GH3075: https://github.com/pydata/pandas/issues/3075
.. _GH2641: https://github.com/pydata/pandas/issues/2641
+.. _GH2919: https://github.com/pydata/pandas/issues/2919
diff --git a/pandas/core/format.py b/pandas/core/format.py
index 862b09f5e84e3..89e24fa34f070 100644
--- a/pandas/core/format.py
+++ b/pandas/core/format.py
@@ -495,6 +495,7 @@ def __init__(self, formatter, classes=None):
self.columns = formatter.columns
self.elements = []
self.bold_rows = self.fmt.kwds.get('bold_rows', False)
+ self.escape = self.fmt.kwds.get('escape', True)
def write(self, s, indent=0):
rs = com.pprint_thing(s)
@@ -517,7 +518,10 @@ def _write_cell(self, s, kind='td', indent=0, tags=None):
else:
start_tag = '<%s>' % kind
- esc = {'<' : r'<', '>' : r'>'}
+ if self.escape:
+ esc = {'<' : r'<', '>' : r'>', '&' : r'&'}
+ else:
+ esc = {}
rs = com.pprint_thing(s, escape_chars=esc)
self.write(
'%s%s</%s>' % (start_tag, rs, kind), indent)
diff --git a/pandas/core/frame.py b/pandas/core/frame.py
index 0d7913819f115..091b9926500d9 100644
--- a/pandas/core/frame.py
+++ b/pandas/core/frame.py
@@ -1459,13 +1459,15 @@ def to_html(self, buf=None, columns=None, col_space=None, colSpace=None,
header=True, index=True, na_rep='NaN', formatters=None,
float_format=None, sparsify=None, index_names=True,
justify=None, force_unicode=None, bold_rows=True,
- classes=None):
+ classes=None, escape=True):
"""
to_html-specific options
bold_rows : boolean, default True
Make the row labels bold in the output
classes : str or list or tuple, default None
CSS class(es) to apply to the resulting html table
+ escape : boolean, default True
+ Convert the characters <, >, and & to HTML-safe sequences.
Render a DataFrame to an html table.
"""
@@ -1488,7 +1490,8 @@ def to_html(self, buf=None, columns=None, col_space=None, colSpace=None,
justify=justify,
index_names=index_names,
header=header, index=index,
- bold_rows=bold_rows)
+ bold_rows=bold_rows,
+ escape=escape)
formatter.to_html(classes=classes)
if buf is None:
diff --git a/pandas/tests/test_format.py b/pandas/tests/test_format.py
index 0ae8934c898b0..f013f1a7ca14d 100644
--- a/pandas/tests/test_format.py
+++ b/pandas/tests/test_format.py
@@ -275,8 +275,8 @@ def test_to_html_unicode(self):
df.to_html()
def test_to_html_escaped(self):
- a = 'str<ing1'
- b = 'stri>ng2'
+ a = 'str<ing1 &'
+ b = 'stri>ng2 &'
test_dict = {'co<l1': {a: "<type 'str'>",
b: "<type 'str'>"},
@@ -293,12 +293,12 @@ def test_to_html_escaped(self):
</thead>
<tbody>
<tr>
- <th>str<ing1</th>
+ <th>str<ing1 &amp;</th>
<td> <type 'str'></td>
<td> <type 'str'></td>
</tr>
<tr>
- <th>stri>ng2</th>
+ <th>stri>ng2 &amp;</th>
<td> <type 'str'></td>
<td> <type 'str'></td>
</tr>
@@ -306,6 +306,38 @@ def test_to_html_escaped(self):
</table>"""
self.assertEqual(xp, rs)
+ def test_to_html_escape_disabled(self):
+ a = 'str<ing1 &'
+ b = 'stri>ng2 &'
+
+ test_dict = {'co<l1': {a: "<b>bold</b>",
+ b: "<b>bold</b>"},
+ 'co>l2': {a: "<b>bold</b>",
+ b: "<b>bold</b>"}}
+ rs = pd.DataFrame(test_dict).to_html(escape=False)
+ xp = """<table border="1" class="dataframe">
+ <thead>
+ <tr style="text-align: right;">
+ <th></th>
+ <th>co<l1</th>
+ <th>co>l2</th>
+ </tr>
+ </thead>
+ <tbody>
+ <tr>
+ <th>str<ing1 &</th>
+ <td> <b>bold</b></td>
+ <td> <b>bold</b></td>
+ </tr>
+ <tr>
+ <th>stri>ng2 &</th>
+ <td> <b>bold</b></td>
+ <td> <b>bold</b></td>
+ </tr>
+ </tbody>
+</table>"""
+ self.assertEqual(xp, rs)
+
def test_to_html_multiindex_sparsify(self):
index = pd.MultiIndex.from_arrays([[0, 0, 1, 1], [0, 1, 0, 1]],
names=['foo', None])
 | Treating DataFrame content as plain text, rather than HTML markup, by escaping
everything (#2617) seems like the right default for `to_html()`. However, if a DataFrame contains HTML ([example](http://stackoverflow.com/questions/14627380/pandas-html-output-with-conditional-formatting)) or text that is already HTML-escaped, the result is either unwanted escaping or double-escaping.
Changes in this PR:
- make HTML escaping programmable through a new `to_html()` parameter named
`escape` (default True), allowing users to restore old `to_html()` behavior (<=0.10.0) by setting `escape=False`.
- add `&` to the list of HTML chars escaped, so strings that happen to contain
HTML escape sequences or reserved entities, such as `"<"`, are displayed
properly.
| https://api.github.com/repos/pandas-dev/pandas/pulls/2919 | 2013-02-23T22:17:49Z | 2013-04-10T07:34:26Z | 2013-04-10T07:34:26Z | 2014-06-13T07:22:05Z |
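The `escape` keyword added here still exists in current pandas; a minimal sketch of both behaviors (current pandas, not the PR's own test code):

```python
import pandas as pd

df = pd.DataFrame({"col": ["<b>bold</b>", "a & b"]})

escaped = df.to_html()          # default escape=True: <, >, & become entities
raw = df.to_html(escape=False)  # trust cell contents as markup

assert "&lt;b&gt;" in escaped and "&amp;" in escaped
assert "<b>bold</b>" in raw     # markup passed through untouched
```

`escape=False` is the opt-out for frames that deliberately carry HTML (colored cells, links), restoring the pre-0.11 behavior the PR body describes.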
DOC: add "kde" and "density" to plot kind docstring | diff --git a/pandas/tools/plotting.py b/pandas/tools/plotting.py
index ceaedae0e9db3..11e89a840d145 100644
--- a/pandas/tools/plotting.py
+++ b/pandas/tools/plotting.py
@@ -1393,9 +1393,10 @@ def plot_frame(frame=None, x=None, y=None, subplots=False, sharex=True,
ax : matplotlib axis object, default None
style : list or dict
matplotlib line style per column
- kind : {'line', 'bar', 'barh'}
+ kind : {'line', 'bar', 'barh', 'kde', 'density'}
bar : vertical bar plot
barh : horizontal bar plot
+ kde/density : Kernel Density Estimation plot
logx : boolean, default False
For line plots, use log scaling on x axis
logy : boolean, default False
@@ -1473,9 +1474,10 @@ def plot_series(series, label=None, kind='line', use_index=True, rot=None,
Parameters
----------
label : label argument to provide to plot
- kind : {'line', 'bar', 'barh'}
+ kind : {'line', 'bar', 'barh', 'kde', 'density'}
bar : vertical bar plot
barh : horizontal bar plot
+ kde/density : Kernel Density Estimation plot
use_index : boolean, default True
Plot index as axis tick labels
rot : int, default None
 | Changes the accepted `kind` parameter values from `{'line', 'bar', 'barh'}` to `{'line', 'bar', 'barh', 'kde', 'density'}`
| https://api.github.com/repos/pandas-dev/pandas/pulls/2917 | 2013-02-23T21:51:01Z | 2013-02-23T22:07:36Z | 2013-02-23T22:07:36Z | 2013-09-19T07:56:45Z |
BUG: support data column indexers that have a large numbers of values | diff --git a/RELEASE.rst b/RELEASE.rst
index 00f1f9375306a..26bb6453adfa1 100755
--- a/RELEASE.rst
+++ b/RELEASE.rst
@@ -98,6 +98,8 @@ pandas 0.11.0
- ``HDFStore``
- Fix weird PyTables error when using too many selectors in a where
+ also correctly filter on any number of values in a Term expression
+ (so not using numexpr filtering, but isin filtering)
- Provide dotted attribute access to ``get`` from stores
(e.g. store.df == store['df'])
- Internally, change all variables to be private-like (now have leading
diff --git a/pandas/io/pytables.py b/pandas/io/pytables.py
index b56b6c5e5923f..1db60eb92863d 100644
--- a/pandas/io/pytables.py
+++ b/pandas/io/pytables.py
@@ -24,7 +24,7 @@
from pandas.core.common import _asarray_tuplesafe, _try_sort
from pandas.core.internals import BlockManager, make_block, form_blocks
from pandas.core.reshape import block2d_to_blocknd, factor_indexer
-from pandas.core.index import Int64Index
+from pandas.core.index import Int64Index, _ensure_index
import pandas.core.common as com
from pandas.tools.merge import concat
@@ -1886,6 +1886,7 @@ class Table(Storer):
table_type = None
levels = 1
is_table = True
+ is_shape_reversed = False
def __init__(self, *args, **kwargs):
super(Table, self).__init__(*args, **kwargs)
@@ -2282,16 +2283,37 @@ def process_axes(self, obj, columns=None):
labels = Index(labels) & Index(columns)
obj = obj.reindex_axis(labels, axis=axis, copy=False)
- def reindex(obj, axis, filt, ordered):
- ordd = ordered & filt
- ordd = sorted(ordered.get_indexer(ordd))
- return obj.reindex_axis(ordered.take(ordd), axis=obj._get_axis_number(axis), copy=False)
-
# apply the selection filters (but keep in the same order)
if self.selection.filter:
- for axis, filt in self.selection.filter:
- obj = reindex(
- obj, axis, filt, getattr(obj, obj._get_axis_name(axis)))
+ for field, filt in self.selection.filter:
+
+ def process_filter(field, filt):
+
+ for axis_name in obj._AXIS_NAMES.values():
+ axis_number = obj._get_axis_number(axis_name)
+ axis_values = obj._get_axis(axis_name)
+
+ # see if the field is the name of an axis
+ if field == axis_name:
+ ordd = axis_values & filt
+ ordd = sorted(axis_values.get_indexer(ordd))
+ return obj.reindex_axis(axis_values.take(ordd), axis=axis_number, copy=False)
+
+ # this might be the name of a file IN an axis
+ elif field in axis_values:
+
+ # we need to filter on this dimension
+ values = _ensure_index(getattr(obj,field).values)
+ filt = _ensure_index(filt)
+
+ # hack until we support reversed dim flags
+ if isinstance(obj,DataFrame):
+ axis_number = 1-axis_number
+ return obj.ix._getitem_axis(values.isin(filt),axis=axis_number)
+
+ raise Exception("cannot find the field [%s] for filtering!" % field)
+
+ obj = process_filter(field, filt)
return obj
@@ -2648,7 +2670,7 @@ class AppendableFrameTable(AppendableTable):
table_type = 'appendable_frame'
ndim = 2
obj_type = DataFrame
-
+
@property
def is_transposed(self):
return self.index_axes[0].axis == 1
@@ -3054,9 +3076,11 @@ def convert_value(self, v):
""" convert the expression that is in the term to something that is accepted by pytables """
if self.kind == 'datetime64' or self.kind == 'datetime' :
- return [lib.Timestamp(v).value, None]
+ v = lib.Timestamp(v)
+ return [v.value, v]
elif isinstance(v, datetime) or hasattr(v, 'timetuple') or self.kind == 'date':
- return [time.mktime(v.timetuple()), None]
+ v = time.mktime(v.timetuple())
+ return [v, Timestamp(v) ]
elif self.kind == 'integer':
v = int(float(v))
return [v, v]
@@ -3070,7 +3094,8 @@ def convert_value(self, v):
v = bool(v)
return [v, v]
elif not isinstance(v, basestring):
- return [str(v), None]
+ v = str(v)
+ return [v, v]
# string quoting
return ["'" + v + "'", v]
diff --git a/pandas/io/tests/test_pytables.py b/pandas/io/tests/test_pytables.py
index 8b77cee3730fc..d4654d01f1e1e 100644
--- a/pandas/io/tests/test_pytables.py
+++ b/pandas/io/tests/test_pytables.py
@@ -1679,6 +1679,46 @@ def test_select_dtypes(self):
expected = df.reindex(index=list(df.index)[0:10],columns=['A'])
tm.assert_frame_equal(expected, result)
+ def test_select_with_many_inputs(self):
+
+ with ensure_clean(self.path) as store:
+
+ df = DataFrame(dict(ts=bdate_range('2012-01-01', periods=300),
+ A=np.random.randn(300),
+ B=range(300),
+ users = ['a']*50 + ['b']*50 + ['c']*100 + ['a%03d' % i for i in range(100)]))
+ store.remove('df')
+ store.append('df', df, data_columns=['ts', 'A', 'B', 'users'])
+
+ # regular select
+ result = store.select('df', [Term('ts', '>=', Timestamp('2012-02-01'))])
+ expected = df[df.ts >= Timestamp('2012-02-01')]
+ tm.assert_frame_equal(expected, result)
+
+ # small selector
+ result = store.select('df', [Term('ts', '>=', Timestamp('2012-02-01')),Term('users',['a','b','c'])])
+ expected = df[ (df.ts >= Timestamp('2012-02-01')) & df.users.isin(['a','b','c']) ]
+ tm.assert_frame_equal(expected, result)
+
+ # big selector along the columns
+ selector = [ 'a','b','c' ] + [ 'a%03d' % i for i in xrange(60) ]
+ result = store.select('df', [Term('ts', '>=', Timestamp('2012-02-01')),Term('users',selector)])
+ expected = df[ (df.ts >= Timestamp('2012-02-01')) & df.users.isin(selector) ]
+ tm.assert_frame_equal(expected, result)
+
+ selector = range(100,200)
+ result = store.select('df', [Term('B', selector)])
+ expected = df[ df.B.isin(selector) ]
+ tm.assert_frame_equal(expected, result)
+ self.assert_(len(result) == 100)
+
+ # big selector along the index
+ selector = Index(df.ts[0:100].values)
+ result = store.select('df', [Term('ts', selector)])
+ expected = df[ df.ts.isin(selector.values) ]
+ tm.assert_frame_equal(expected, result)
+ self.assert_(len(result) == 100)
+
def test_panel_select(self):
wp = tm.makePanel()
 | This is conceptually (and in implementation) like an `isin`.
You can give a list of values that you want for a particular column, rather
than a boolean expression. This was already spec'd in the API, just
not implemented correctly!
Something like this
```
store.select('df',[pd.Term('users',[ 'a%03d' % i for i in xrange(60) ])])
```
originally from the mailing list
https://groups.google.com/forum/?fromgroups#!topic/pystatsmodels/oTjfOb0gazw
| https://api.github.com/repos/pandas-dev/pandas/pulls/2914 | 2013-02-22T01:53:33Z | 2013-02-23T16:43:51Z | 2013-02-23T16:43:51Z | 2014-06-13T05:35:05Z |
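Semantically the selection is an `isin` on the data column; the in-memory equivalent of the HDFStore query looks like this (a sketch — the PR applies the same filter inside `HDFStore.select`, which needs PyTables, so this just mirrors the expected results from its tests):

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({
    "users": ["a"] * 3 + ["b"] * 3 + ["a%03d" % i for i in range(4)],
    "B": np.arange(10),
})

# a large list of wanted values -- too many to inline into a numexpr
# boolean expression, hence the isin-style filtering added by the PR
selector = ["a", "b", "a000", "a001"]
result = df[df.users.isin(selector)]

assert len(result) == 8  # three 'a', three 'b', plus a000 and a001
```

With the fix, `store.select('df', [Term('users', selector)])` on an equivalent table returns the same rows regardless of how many values `selector` holds.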
BUG: incorrect default in df.convert_objects was converting object types (#2909) | diff --git a/RELEASE.rst b/RELEASE.rst
old mode 100644
new mode 100755
index 603a928eda74e..00f1f9375306a
--- a/RELEASE.rst
+++ b/RELEASE.rst
@@ -41,10 +41,11 @@ pandas 0.11.0
- added ``blocks`` attribute to DataFrames, to return a dict of dtypes to
homogeneously dtyped DataFrames
- added keyword ``convert_numeric`` to ``convert_objects()`` to try to
- convert object dtypes to numeric types
+ convert object dtypes to numeric types (default is False)
- ``convert_dates`` in ``convert_objects`` can now be ``coerce`` which will
return a datetime64[ns] dtype with non-convertibles set as ``NaT``; will
- preserve an all-nan object (e.g. strings)
+ preserve an all-nan object (e.g. strings), default is True (to perform
+ soft-conversion
- Series print output now includes the dtype by default
- Optimize internal reindexing routines (GH2819_, GH2867_)
- ``describe_option()`` now reports the default and current value of options.
@@ -69,12 +70,14 @@ pandas 0.11.0
- Integer block types will upcast as needed in where operations (GH2793_)
- Series now automatically will try to set the correct dtype based on passed
datetimelike objects (datetime/Timestamp)
- - timedelta64 are returned in appropriate cases (e.g. Series - Series,
- when both are datetime64)
- - mixed datetimes and objects (GH2751_) in a constructor witll be casted
- correctly
- - astype on datetimes to object are now handled (as well as NaT
- conversions to np.nan)
+
+ - timedelta64 are returned in appropriate cases (e.g. Series - Series,
+ when both are datetime64)
+ - mixed datetimes and objects (GH2751_) in a constructor witll be casted
+ correctly
+ - astype on datetimes to object are now handled (as well as NaT
+ conversions to np.nan)
+
- arguments to DataFrame.clip were inconsistent to numpy and Series clipping
(GH2747_)
@@ -92,7 +95,7 @@ pandas 0.11.0
overflow ``int64`` and some mixed typed type lists (GH2845_)
- Fix issue with slow printing of wide frames resulting (GH2807_)
- ``HDFStore``
+ - ``HDFStore``
- Fix weird PyTables error when using too many selectors in a where
- Provide dotted attribute access to ``get`` from stores
@@ -100,6 +103,9 @@ pandas 0.11.0
- Internally, change all variables to be private-like (now have leading
underscore)
+ - Bug showing up in applymap where some object type columns are converted (GH2909_)
+ had an incorrect default in convert_objects
+
.. _GH622: https://github.com/pydata/pandas/issues/622
.. _GH797: https://github.com/pydata/pandas/issues/797
.. _GH2681: https://github.com/pydata/pandas/issues/2681
@@ -113,6 +119,7 @@ pandas 0.11.0
.. _GH2845: https://github.com/pydata/pandas/issues/2845
.. _GH2867: https://github.com/pydata/pandas/issues/2867
.. _GH2807: https://github.com/pydata/pandas/issues/2807
+.. _GH2909: https://github.com/pydata/pandas/issues/2909
pandas 0.10.1
=============
diff --git a/pandas/core/frame.py b/pandas/core/frame.py
old mode 100644
new mode 100755
index 486924eca456f..4301aac566fc6
--- a/pandas/core/frame.py
+++ b/pandas/core/frame.py
@@ -1658,7 +1658,7 @@ def info(self, verbose=True, buf=None, max_cols=None):
def dtypes(self):
return self.apply(lambda x: x.dtype)
- def convert_objects(self, convert_dates=True, convert_numeric=True):
+ def convert_objects(self, convert_dates=True, convert_numeric=False):
"""
Attempt to infer better dtype for object columns
Always returns a copy (even if no object columns)
diff --git a/pandas/tests/test_frame.py b/pandas/tests/test_frame.py
old mode 100644
new mode 100755
index 9555f924ef58e..a0dbe760bd405
--- a/pandas/tests/test_frame.py
+++ b/pandas/tests/test_frame.py
@@ -6793,6 +6793,15 @@ def test_applymap(self):
result = self.frame.applymap(lambda x: (x, x))
self.assert_(isinstance(result['A'][0], tuple))
+ # GH 2909, object conversion to float in constructor?
+ df = DataFrame(data=[1,'a'])
+ result = df.applymap(lambda x: x)
+ self.assert_(result.dtypes[0] == object)
+
+ df = DataFrame(data=[1.,'a'])
+ result = df.applymap(lambda x: x)
+ self.assert_(result.dtypes[0] == object)
+
def test_filter(self):
# items
 | The bug was showing up in `applymap` (#2909)
| https://api.github.com/repos/pandas-dev/pandas/pulls/2910 | 2013-02-21T16:50:34Z | 2013-02-23T01:19:53Z | 2013-02-23T01:19:53Z | 2014-06-22T07:36:52Z |
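The regression was that an elementwise identity map over a mixed-type object column silently coerced it. The behavior the fix restores — dtype stays `object` — still holds in current pandas (where `convert_objects` itself is long gone; this sketch uses `Series.map` in place of the 0.11-era `applymap`):

```python
import pandas as pd

df = pd.DataFrame(data=[1, "a"])      # mixed ints and strings -> object dtype

# elementwise identity, mirroring the applymap test added by the PR
result = df[0].map(lambda x: x)

# the identity map must not convert the column to a numeric dtype
assert result.dtype == object
```

Before the fix, the internal `convert_objects(convert_numeric=True)` default meant round-tripping through `applymap` could change dtypes the user never asked to change.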
BUG: Series ops with a rhs of a Timestamp raising exception (#2898) | diff --git a/RELEASE.rst b/RELEASE.rst
index 26bb6453adfa1..9e133ed324694 100755
--- a/RELEASE.rst
+++ b/RELEASE.rst
@@ -77,6 +77,8 @@ pandas 0.11.0
correctly
- astype on datetimes to object are now handled (as well as NaT
conversions to np.nan)
+ - all timedelta like objects will be correctly assigned to ``timedelta64``
+ with mixed ``NaN`` and/or ``NaT`` allowed
- arguments to DataFrame.clip were inconsistent to numpy and Series clipping
(GH2747_)
@@ -108,6 +110,16 @@ pandas 0.11.0
- Bug showing up in applymap where some object type columns are converted (GH2909_)
had an incorrect default in convert_objects
+ - TimeDeltas
+
+ - Series ops with a Timestamp on the rhs was throwing an exception (GH2898_)
+ added tests for Series ops with datetimes,timedeltas,Timestamps, and datelike
+ Series on both lhs and rhs
+ - Series will now set its dtype automatically to ``timedelta64[ns]``
+ if all passed objects are timedelta objects
+ - Support null checking on timedelta64, representing (and formatting) with NaT
+ - Support setitem with np.nan value, converts to NaT
+
.. _GH622: https://github.com/pydata/pandas/issues/622
.. _GH797: https://github.com/pydata/pandas/issues/797
.. _GH2681: https://github.com/pydata/pandas/issues/2681
@@ -121,6 +133,7 @@ pandas 0.11.0
.. _GH2845: https://github.com/pydata/pandas/issues/2845
.. _GH2867: https://github.com/pydata/pandas/issues/2867
.. _GH2807: https://github.com/pydata/pandas/issues/2807
+.. _GH2898: https://github.com/pydata/pandas/issues/2898
.. _GH2909: https://github.com/pydata/pandas/issues/2909
pandas 0.10.1
diff --git a/doc/source/dsintro.rst b/doc/source/dsintro.rst
index 1ce40ea74a6bb..0957ed8f0e073 100644
--- a/doc/source/dsintro.rst
+++ b/doc/source/dsintro.rst
@@ -912,6 +912,8 @@ method:
panel.to_frame()
+.. _dsintro.panel4d:
+
Panel4D (Experimental)
----------------------
diff --git a/doc/source/timeseries.rst b/doc/source/timeseries.rst
index e7044371e7fdb..d627212c6ae9c 100644
--- a/doc/source/timeseries.rst
+++ b/doc/source/timeseries.rst
@@ -917,3 +917,47 @@ TimeSeries, aligning the data on the UTC timestamps:
result = eastern + berlin
result
result.index
+
+.. _timeseries.timedeltas:
+
+Time Deltas
+-----------
+
+Timedeltas are differences in times, expressed in different units, e.g. days, hours, minutes, seconds.
+They can be both positive and negative.
+
+.. ipython:: python
+
+ from datetime import datetime, timedelta
+ s = Series(date_range('2012-1-1', periods=3, freq='D'))
+ td = Series([ timedelta(days=i) for i in range(3) ])
+ df = DataFrame(dict(A = s, B = td))
+ df
+ df['C'] = df['A'] + df['B']
+ df
+ df.dtypes
+
+ s - s.max()
+ s - datetime(2011,1,1,3,5)
+ s + timedelta(minutes=5)
+
+Series of timedeltas with ``NaT`` values are supported
+
+.. ipython:: python
+
+ y = s - s.shift()
+ y
+
+They can be set to ``NaT`` using ``np.nan``, analogously to datetimes
+
+.. ipython:: python
+
+ y[1] = np.nan
+ y
+
+Operands can also appear in a reversed order (a singular object operated with a Series)
+
+.. ipython:: python
+
+ s.max() - s
+ datetime(2011,1,1,3,5) - s
+ timedelta(minutes=5) + s
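The timedelta semantics documented in the hunk above can be checked against a modern pandas install; a minimal sketch, assuming the current top-level ``pd`` namespace rather than the bare names the old docs import:

```python
import pandas as pd
from datetime import timedelta

# a datetime64 series and a timedelta series, as in the doc example
s = pd.Series(pd.date_range('2012-1-1', periods=3, freq='D'))
td = pd.Series([timedelta(days=i) for i in range(3)])

combined = s + td      # datetime + timedelta -> datetime64[ns]
diffs = s - s.max()    # datetime - datetime  -> timedelta64[ns]

print(combined.dtype, diffs.dtype)
```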
diff --git a/doc/source/v0.10.0.txt b/doc/source/v0.10.0.txt
index 4bf77968c14eb..c220d2cbba81d 100644
--- a/doc/source/v0.10.0.txt
+++ b/doc/source/v0.10.0.txt
@@ -330,7 +330,7 @@ N Dimensional Panels (Experimental)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Adding experimental support for Panel4D and factory functions to create n-dimensional named panels.
-:ref:`Docs <dsintro-panel4d>` for NDim. Here is a taste of what to expect.
+:ref:`Docs <dsintro.panel4d>` for NDim. Here is a taste of what to expect.
.. ipython:: python
diff --git a/doc/source/v0.11.0.txt b/doc/source/v0.11.0.txt
index 1716f2d4d1413..c799463d7935c 100644
--- a/doc/source/v0.11.0.txt
+++ b/doc/source/v0.11.0.txt
@@ -140,11 +140,9 @@ Astype conversion on ``datetime64[ns]`` to ``object``, implicity converts ``NaT`
s.dtype
-New features
+Enhancements
~~~~~~~~~~~~
-**Enhancements**
-
- In ``HDFStore``, provide dotted attribute access to ``get`` from stores
(e.g. store.df == store['df'])
@@ -178,7 +176,37 @@ New features
price. This just obtains the data from Options.get_near_stock_price
instead of Options.get_xxx_data().
-**Bug Fixes**
+Bug Fixes
+~~~~~~~~~
+
+ - Timedeltas are now fully operational (closes GH2898_)
+
+ .. ipython:: python
+
+ from datetime import datetime, timedelta
+ s = Series(date_range('2012-1-1', periods=3, freq='D'))
+ td = Series([ timedelta(days=i) for i in range(3) ])
+ df = DataFrame(dict(A = s, B = td))
+ df
+ s - s.max()
+ s - datetime(2011,1,1,3,5)
+ s + timedelta(minutes=5)
+ df['C'] = df['A'] + df['B']
+ df
+ df.dtypes
+
+ # missing timedeltas are represented as ``NaT``
+ y = s - s.shift()
+ y
+
+ # can be set via ``np.nan``
+ y[1] = np.nan
+ y
+
+ # works on lhs too
+ s.max() - s
+ datetime(2011,1,1,3,5) - s
+ timedelta(minutes=5) + s
See the `full release notes
<https://github.com/pydata/pandas/blob/master/RELEASE.rst>`__ or issue tracker
@@ -187,4 +215,5 @@ on GitHub for a complete list.
.. _GH2809: https://github.com/pydata/pandas/issues/2809
.. _GH2810: https://github.com/pydata/pandas/issues/2810
.. _GH2837: https://github.com/pydata/pandas/issues/2837
+.. _GH2898: https://github.com/pydata/pandas/issues/2898
diff --git a/pandas/core/common.py b/pandas/core/common.py
index 4f2852f42f985..4e6215969e7ec 100644
--- a/pandas/core/common.py
+++ b/pandas/core/common.py
@@ -148,9 +148,9 @@ def _isnull_ndarraylike(obj):
elif values.dtype == np.dtype('M8[ns]'):
# this is the NaT pattern
result = values.view('i8') == tslib.iNaT
- elif issubclass(values.dtype.type, np.timedelta64):
- # -np.isfinite(values.view('i8'))
- result = np.ones(values.shape, dtype=bool)
+ elif values.dtype == np.dtype('m8[ns]'):
+ # this is the NaT pattern
+ result = values.view('i8') == tslib.iNaT
else:
# -np.isfinite(obj)
result = np.isnan(obj)
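The ``m8[ns]`` branch added above relies on NaT being stored as the minimum int64 value; a standalone sketch of that null check, assuming only NumPy:

```python
import numpy as np

# NaT is stored as the minimum int64, so viewing the data as i8 and
# comparing against that sentinel flags the missing values
iNaT = np.iinfo(np.int64).min

values = np.array([0, iNaT, 86400 * 10**9], dtype='m8[ns]')
mask = values.view('i8') == iNaT
print(mask)  # the middle element is NaT
```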
@@ -902,35 +902,50 @@ def _possibly_convert_platform(values):
return values
+def _possibly_cast_to_timedelta(value):
+ """ try to cast to timedelta64 w/o coercion """
+ new_value = tslib.array_to_timedelta64(value.astype(object), coerce=False)
+ if new_value.dtype == 'i8':
+ value = np.array(new_value,dtype='timedelta64[ns]')
+ return value
+
def _possibly_cast_to_datetime(value, dtype, coerce = False):
""" try to cast the array/value to a datetimelike dtype, converting float nan to iNaT """
if isinstance(dtype, basestring):
dtype = np.dtype(dtype)
- if dtype is not None and is_datetime64_dtype(dtype):
- if np.isscalar(value):
- if value == tslib.iNaT or isnull(value):
- value = tslib.iNaT
- else:
- value = np.array(value)
+ if dtype is not None:
+ is_datetime64 = is_datetime64_dtype(dtype)
+ is_timedelta64 = is_timedelta64_dtype(dtype)
- # have a scalar array-like (e.g. NaT)
- if value.ndim == 0:
- value = tslib.iNaT
+ if is_datetime64 or is_timedelta64:
- # we have an array of datetime & nulls
- elif np.prod(value.shape):
- try:
- value = tslib.array_to_datetime(value, coerce = coerce)
- except:
- pass
+ if np.isscalar(value):
+ if value == tslib.iNaT or isnull(value):
+ value = tslib.iNaT
+ else:
+ value = np.array(value)
+
+ # have a scalar array-like (e.g. NaT)
+ if value.ndim == 0:
+ value = tslib.iNaT
+
+ # we have an array of datetime or timedeltas & nulls
+ elif np.prod(value.shape) and value.dtype != dtype:
+ try:
+ if is_datetime64:
+ value = tslib.array_to_datetime(value, coerce = coerce)
+ elif is_timedelta64:
+ value = _possibly_cast_to_timedelta(value)
+ except:
+ pass
elif dtype is None:
# we might have a array (or single object) that is datetime like, and no dtype is passed
# don't change the value unless we find a datetime set
v = value
- if not (is_list_like(v) or hasattr(v,'len')):
+ if not is_list_like(v):
v = [ v ]
if len(v):
inferred_type = lib.infer_dtype(v)
@@ -939,6 +954,8 @@ def _possibly_cast_to_datetime(value, dtype, coerce = False):
value = tslib.array_to_datetime(np.array(v))
except:
pass
+ elif inferred_type == 'timedelta':
+ value = _possibly_cast_to_timedelta(value)
return value
@@ -1281,6 +1298,16 @@ def is_datetime64_dtype(arr_or_dtype):
return issubclass(tipo, np.datetime64)
+def is_timedelta64_dtype(arr_or_dtype):
+ if isinstance(arr_or_dtype, np.dtype):
+ tipo = arr_or_dtype.type
+ elif isinstance(arr_or_dtype, type):
+ tipo = np.dtype(arr_or_dtype).type
+ else:
+ tipo = arr_or_dtype.dtype.type
+ return issubclass(tipo, np.timedelta64)
+
+
def is_float_dtype(arr_or_dtype):
if isinstance(arr_or_dtype, np.dtype):
tipo = arr_or_dtype.type
@@ -1290,8 +1317,7 @@ def is_float_dtype(arr_or_dtype):
def is_list_like(arg):
- return hasattr(arg, '__iter__') and not isinstance(arg, basestring)
-
+ return hasattr(arg, '__iter__') and not isinstance(arg, basestring) or hasattr(arg,'len')
def _is_sequence(x):
try:
diff --git a/pandas/core/format.py b/pandas/core/format.py
index a14393374e41d..b4a6ac59719b5 100644
--- a/pandas/core/format.py
+++ b/pandas/core/format.py
@@ -1012,6 +1012,8 @@ def format_array(values, formatter, float_format=None, na_rep='NaN',
fmt_klass = IntArrayFormatter
elif com.is_datetime64_dtype(values.dtype):
fmt_klass = Datetime64Formatter
+ elif com.is_timedelta64_dtype(values.dtype):
+ fmt_klass = Timedelta64Formatter
else:
fmt_klass = GenericArrayFormatter
@@ -1170,7 +1172,6 @@ def get_result(self):
fmt_values = [formatter(x) for x in self.values]
return _make_fixed_width(fmt_values, self.justify)
-
def _format_datetime64(x, tz=None):
if isnull(x):
return 'NaT'
@@ -1179,6 +1180,24 @@ def _format_datetime64(x, tz=None):
return stamp._repr_base
+class Timedelta64Formatter(Datetime64Formatter):
+
+ def get_result(self):
+ if self.formatter:
+ formatter = self.formatter
+ else:
+
+ formatter = _format_timedelta64
+
+ fmt_values = [formatter(x) for x in self.values]
+ return _make_fixed_width(fmt_values, self.justify)
+
+def _format_timedelta64(x):
+ if isnull(x):
+ return 'NaT'
+
+ return lib.repr_timedelta64(x)
+
def _make_fixed_width(strings, justify='right', minimum=None):
if len(strings) == 0:
return strings
diff --git a/pandas/core/series.py b/pandas/core/series.py
index 5570f81e9ce15..041e708d7a907 100644
--- a/pandas/core/series.py
+++ b/pandas/core/series.py
@@ -17,7 +17,7 @@
from pandas.core.common import (isnull, notnull, _is_bool_indexer,
_default_index, _maybe_promote, _maybe_upcast,
_asarray_tuplesafe, is_integer_dtype,
- _infer_dtype_from_scalar)
+ _infer_dtype_from_scalar, is_list_like)
from pandas.core.index import (Index, MultiIndex, InvalidIndexError,
_ensure_index, _handle_legacy_indexes)
from pandas.core.indexing import _SeriesIndexer, _check_bool_indexer
@@ -81,18 +81,51 @@ def wrapper(self, other):
lvalues, rvalues = self, other
- if com.is_datetime64_dtype(self):
+ is_timedelta = com.is_timedelta64_dtype(self)
+ is_datetime = com.is_datetime64_dtype(self)
+
+ if is_datetime or is_timedelta:
+
+ # convert the argument to an ndarray
+ def convert_to_array(values):
+ if not is_list_like(values):
+ values = np.array([values])
+ inferred_type = lib.infer_dtype(values)
+ if inferred_type in set(['datetime64','datetime','date','time']):
+ if isinstance(values, pa.Array) and com.is_datetime64_dtype(values):
+ pass
+ else:
+ values = tslib.array_to_datetime(values)
+ else:
+ values = pa.array(values)
+ return values
- if not isinstance(rvalues, pa.Array):
- rvalues = pa.array([rvalues])
+ # swap the values if the lhs is a timedelta
+ if is_timedelta:
+ lvalues, rvalues = rvalues, lvalues
+ lvalues = convert_to_array(lvalues)
+ is_timedelta = False
+
+ rvalues = convert_to_array(rvalues)
# rhs is either a timedelta or a series/ndarray
- if lib.is_timedelta_array(rvalues):
- rvalues = pa.array([np.timedelta64(v) for v in rvalues],
- dtype='timedelta64[ns]')
+ if lib.is_timedelta_or_timedelta64_array(rvalues):
+
+ # need to convert timedelta to ns here
+ # safest to convert it to an object array to process
+ rvalues = tslib.array_to_timedelta64(rvalues.astype(object))
dtype = 'M8[ns]'
elif com.is_datetime64_dtype(rvalues):
dtype = 'timedelta64[ns]'
+
+ # we may have to convert to object unfortunately here
+ mask = isnull(lvalues) | isnull(rvalues)
+ if mask.any():
+ def wrap_results(x):
+ x = pa.array(x,dtype='timedelta64[ns]')
+ np.putmask(x,mask,tslib.iNaT)
+ return x
+
else:
raise ValueError('cannot operate on a series with out a rhs '
'of a series/ndarray of type datetime64[ns] '
@@ -105,7 +138,6 @@ def wrapper(self, other):
lvalues = lvalues.values
rvalues = rvalues.values
-
if self.index.equals(other.index):
name = _maybe_match_name(self, other)
return Series(wrap_results(na_op(lvalues, rvalues)),
@@ -123,12 +155,14 @@ def wrapper(self, other):
arr = na_op(lvalues, rvalues)
name = _maybe_match_name(self, other)
- return Series(arr, index=join_idx, name=name,dtype=dtype)
+ return Series(wrap_results(arr), index=join_idx, name=name,dtype=dtype)
elif isinstance(other, DataFrame):
return NotImplemented
else:
# scalars
- return Series(na_op(lvalues.values, rvalues),
+ if hasattr(lvalues,'values'):
+ lvalues = lvalues.values
+ return Series(wrap_results(na_op(lvalues, rvalues)),
index=self.index, name=self.name, dtype=dtype)
return wrapper
@@ -690,6 +724,18 @@ def __setitem__(self, key, value):
if 'unorderable' in str(e): # pragma: no cover
raise IndexError(key)
# Could not hash item
+ except ValueError:
+
+ # reassign a null value to iNaT
+ if com.is_timedelta64_dtype(self.dtype):
+ if isnull(value):
+ value = tslib.iNaT
+
+ try:
+ self.index._engine.set_value(self, key, value)
+ return
+ except (TypeError):
+ pass
if _is_bool_indexer(key):
key = _check_bool_indexer(self.index, key)
diff --git a/pandas/index.pyx b/pandas/index.pyx
index f3d1773401eb3..c13f1f506f5ed 100644
--- a/pandas/index.pyx
+++ b/pandas/index.pyx
@@ -482,7 +482,9 @@ cdef class DatetimeEngine(Int64Engine):
cpdef convert_scalar(ndarray arr, object value):
if arr.descr.type_num == NPY_DATETIME:
- if isinstance(value, Timestamp):
+ if isinstance(value,np.ndarray):
+ pass
+ elif isinstance(value, Timestamp):
return value.value
elif value is None or value != value:
return iNaT
diff --git a/pandas/lib.pyx b/pandas/lib.pyx
index bf40587c7a953..1fd579553f094 100644
--- a/pandas/lib.pyx
+++ b/pandas/lib.pyx
@@ -32,7 +32,7 @@ from datetime cimport *
from tslib cimport convert_to_tsobject
import tslib
-from tslib import NaT, Timestamp
+from tslib import NaT, Timestamp, repr_timedelta64
cdef int64_t NPY_NAT = util.get_nat()
@@ -160,6 +160,9 @@ def time64_to_datetime(ndarray[int64_t, ndim=1] arr):
return result
+cdef inline int64_t get_timedelta64_value(val):
+ return val.view('i8')
+
#----------------------------------------------------------------------
# isnull / notnull related
@@ -174,6 +177,8 @@ cpdef checknull(object val):
return get_datetime64_value(val) == NPY_NAT
elif val is NaT:
return True
+ elif util.is_timedelta64_object(val):
+ return get_timedelta64_value(val) == NPY_NAT
elif is_array(val):
return False
else:
@@ -186,6 +191,8 @@ cpdef checknull_old(object val):
return get_datetime64_value(val) == NPY_NAT
elif val is NaT:
return True
+ elif util.is_timedelta64_object(val):
+ return get_timedelta64_value(val) == NPY_NAT
elif is_array(val):
return False
else:
diff --git a/pandas/src/inference.pyx b/pandas/src/inference.pyx
index 082ebf4f5b3c2..2f84dd416100e 100644
--- a/pandas/src/inference.pyx
+++ b/pandas/src/inference.pyx
@@ -92,6 +92,10 @@ def infer_dtype(object _values):
if is_unicode_array(values):
return 'unicode'
+ elif is_timedelta(val):
+ if is_timedelta_or_timedelta64_array(values):
+ return 'timedelta'
+
for i in range(n):
val = util.get_value_1d(values, i)
if util.is_integer_object(val):
@@ -265,6 +269,10 @@ def is_datetime64_array(ndarray values):
return False
return True
+def is_timedelta(object o):
+ import datetime
+ return isinstance(o,datetime.timedelta) or isinstance(o,np.timedelta64)
+
def is_timedelta_array(ndarray values):
import datetime
cdef int i, n = len(values)
@@ -275,6 +283,24 @@ def is_timedelta_array(ndarray values):
return False
return True
+def is_timedelta64_array(ndarray values):
+ cdef int i, n = len(values)
+ if n == 0:
+ return False
+ for i in range(n):
+ if not isinstance(values[i],np.timedelta64):
+ return False
+ return True
+
+def is_timedelta_or_timedelta64_array(ndarray values):
+ import datetime
+ cdef int i, n = len(values)
+ if n == 0:
+ return False
+ for i in range(n):
+ if not (isinstance(values[i],datetime.timedelta) or isinstance(values[i],np.timedelta64)):
+ return False
+ return True
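A pure-Python equivalent of the Cython predicate added above (the name is kept for clarity, but this is only a sketch, not the compiled implementation):

```python
import datetime
import numpy as np

def is_timedelta_or_timedelta64_array(values):
    # mirror the Cython version: non-empty, and every element must be a
    # datetime.timedelta or an np.timedelta64
    if len(values) == 0:
        return False
    return all(isinstance(v, (datetime.timedelta, np.timedelta64))
               for v in values)

print(is_timedelta_or_timedelta64_array(
    np.array([datetime.timedelta(days=1), np.timedelta64(1, 'D')], dtype=object)))
```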
def is_date_array(ndarray[object] values):
cdef int i, n = len(values)
diff --git a/pandas/src/numpy.pxd b/pandas/src/numpy.pxd
index b005a716e7d5f..f61c8762a1e50 100644
--- a/pandas/src/numpy.pxd
+++ b/pandas/src/numpy.pxd
@@ -82,6 +82,7 @@ cdef extern from "numpy/arrayobject.h":
NPY_COMPLEX512
NPY_DATETIME
+ NPY_TIMEDELTA
NPY_INTP
diff --git a/pandas/src/numpy_helper.h b/pandas/src/numpy_helper.h
index 493eecc378b4a..d5485e74b4927 100644
--- a/pandas/src/numpy_helper.h
+++ b/pandas/src/numpy_helper.h
@@ -54,7 +54,6 @@ get_datetime64_value(PyObject* obj) {
}
-
PANDAS_INLINE int
is_integer_object(PyObject* obj) {
return (!PyBool_Check(obj)) && PyArray_IsIntegerScalar(obj);
@@ -85,6 +84,11 @@ is_datetime64_object(PyObject *obj) {
return PyArray_IsScalar(obj, Datetime);
}
+PANDAS_INLINE int
+is_timedelta64_object(PyObject *obj) {
+ return PyArray_IsScalar(obj, Timedelta);
+}
+
PANDAS_INLINE int
assign_value_1d(PyArrayObject* ap, Py_ssize_t _i, PyObject* v) {
npy_intp i = (npy_intp) _i;
diff --git a/pandas/src/util.pxd b/pandas/src/util.pxd
index b4eb903fdfc17..7a30f018e623e 100644
--- a/pandas/src/util.pxd
+++ b/pandas/src/util.pxd
@@ -12,6 +12,7 @@ cdef extern from "numpy_helper.h":
inline int is_bool_object(object)
inline int is_string_object(object)
inline int is_datetime64_object(object)
+ inline int is_timedelta64_object(object)
inline int assign_value_1d(ndarray, Py_ssize_t, object) except -1
inline cnp.int64_t get_nat()
inline object get_value_1d(ndarray, Py_ssize_t)
diff --git a/pandas/tests/test_format.py b/pandas/tests/test_format.py
index eecf30872a80d..4f91b88291a3d 100644
--- a/pandas/tests/test_format.py
+++ b/pandas/tests/test_format.py
@@ -1202,10 +1202,29 @@ def test_float_trim_zeros(self):
self.assert_('+10' in line)
def test_timedelta64(self):
+
+ from pandas import date_range
+ from datetime import datetime
+
Series(np.array([1100, 20], dtype='timedelta64[s]')).to_string()
# check this works
# GH2146
+ # adding NaTs
+ s = Series(date_range('2012-1-1', periods=3, freq='D'))
+ y = s-s.shift(1)
+ result = y.to_string()
+ self.assertTrue('1 days, 00:00:00' in result)
+ self.assertTrue('NaT' in result)
+ self.assertTrue('timedelta64[ns]' in result)
+
+ # with frac seconds
+ s = Series(date_range('2012-1-1', periods=3, freq='D'))
+ y = s-datetime(2012,1,1,microsecond=150)
+ result = y.to_string()
+ self.assertTrue('00:00:00.000150' in result)
+ self.assertTrue('timedelta64[ns]' in result)
+
def test_mixed_datetime64(self):
df = DataFrame({'A': [1, 2],
'B': ['2012-01-01', '2012-01-02']})
diff --git a/pandas/tests/test_frame.py b/pandas/tests/test_frame.py
index a0dbe760bd405..2de7102dabc74 100755
--- a/pandas/tests/test_frame.py
+++ b/pandas/tests/test_frame.py
@@ -2859,6 +2859,32 @@ def test_constructor_for_list_with_dtypes(self):
expected.sort()
assert_series_equal(result, expected)
+ def test_timedeltas(self):
+
+ from pandas import date_range
+ df = DataFrame(dict(A = Series(date_range('2012-1-1', periods=3, freq='D')),
+ B = Series([ timedelta(days=i) for i in range(3) ])))
+ result = df.get_dtype_counts()
+ expected = Series({'datetime64[ns]': 1, 'timedelta64[ns]' : 1 })
+ result.sort()
+ expected.sort()
+ assert_series_equal(result, expected)
+
+ df['C'] = df['A'] + df['B']
+ expected = Series({'datetime64[ns]': 2, 'timedelta64[ns]' : 1 })
+ result = df.get_dtype_counts()
+ result.sort()
+ expected.sort()
+ assert_series_equal(result, expected)
+
+ # mixed int types
+ df['D'] = 1
+ expected = Series({'datetime64[ns]': 2, 'timedelta64[ns]' : 1, 'int64' : 1 })
+ result = df.get_dtype_counts()
+ result.sort()
+ expected.sort()
+ assert_series_equal(result, expected)
+
def test_new_empty_index(self):
df1 = DataFrame(randn(0, 3))
df2 = DataFrame(randn(0, 3))
diff --git a/pandas/tests/test_series.py b/pandas/tests/test_series.py
index 42b0fa3827281..891099c385008 100644
--- a/pandas/tests/test_series.py
+++ b/pandas/tests/test_series.py
@@ -443,8 +443,10 @@ def test_constructor_dtype_datetime64(self):
s = Series(tslib.iNaT, dtype='M8[ns]', index=range(5))
self.assert_(isnull(s).all() == True)
- s = Series(tslib.NaT, dtype='M8[ns]', index=range(5))
- self.assert_(isnull(s).all() == True)
+ #### in theory this should be all nulls, but since
+ #### we are not specifying a dtype, this is ambiguous
+ s = Series(tslib.iNaT, index=range(5))
+ self.assert_(isnull(s).all() == False)
s = Series(nan, dtype='M8[ns]', index=range(5))
self.assert_(isnull(s).all() == True)
@@ -1667,12 +1669,126 @@ def test_operators_empty_int_corner(self):
# it works!
_ = s1 * s2
- def test_operators_datetime64(self):
+ def test_constructor_dtype_timedelta64(self):
+
+ td = Series([ timedelta(days=i) for i in range(3) ])
+ self.assert_(td.dtype=='timedelta64[ns]')
+
+ # mixed with NaT
+ from pandas import tslib
+ td = Series([ timedelta(days=i) for i in range(3) ] + [ tslib.NaT ], dtype='m8[ns]' )
+ self.assert_(td.dtype=='timedelta64[ns]')
+
+ td = Series([ timedelta(days=i) for i in range(3) ] + [ tslib.iNaT ], dtype='m8[ns]' )
+ self.assert_(td.dtype=='timedelta64[ns]')
+
+ td = Series([ timedelta(days=i) for i in range(3) ] + [ np.nan ], dtype='m8[ns]' )
+ self.assert_(td.dtype=='timedelta64[ns]')
+
+ # this is an invalid casting
+ self.assertRaises(Exception, Series, [ timedelta(days=i) for i in range(3) ] + [ 'foo' ], dtype='m8[ns]' )
+
+ # leave as object here
+ td = Series([ timedelta(days=i) for i in range(3) ] + [ 'foo' ])
+ self.assert_(td.dtype=='object')
+
+ def test_operators_timedelta64(self):
+
+ # invalid ops
+ self.assertRaises(Exception, self.objSeries.__add__, 1)
+ self.assertRaises(Exception, self.objSeries.__add__, np.array(1,dtype=np.int64))
+ self.assertRaises(Exception, self.objSeries.__sub__, 1)
+ self.assertRaises(Exception, self.objSeries.__sub__, np.array(1,dtype=np.int64))
+
+ # series ops
v1 = date_range('2012-1-1', periods=3, freq='D')
v2 = date_range('2012-1-2', periods=3, freq='D')
rs = Series(v2) - Series(v1)
xp = Series(1e9 * 3600 * 24, rs.index).astype('timedelta64[ns]')
assert_series_equal(rs, xp)
+ self.assert_(rs.dtype=='timedelta64[ns]')
+
+ df = DataFrame(dict(A = v1))
+ td = Series([ timedelta(days=i) for i in range(3) ])
+ self.assert_(td.dtype=='timedelta64[ns]')
+
+ # series on the rhs
+ result = df['A'] - df['A'].shift()
+ self.assert_(result.dtype=='timedelta64[ns]')
+
+ result = df['A'] + td
+ self.assert_(result.dtype=='M8[ns]')
+
+ # scalar Timestamp on rhs
+ maxa = df['A'].max()
+ self.assert_(isinstance(maxa,Timestamp))
+
+ resultb = df['A']- df['A'].max()
+ self.assert_(resultb.dtype=='timedelta64[ns]')
+
+ # timestamp on lhs
+ result = resultb + df['A']
+ expected = Series([Timestamp('20111230'),Timestamp('20120101'),Timestamp('20120103')])
+ assert_series_equal(result,expected)
+
+ # datetimes on rhs
+ result = df['A'] - datetime(2001,1,1)
+ self.assert_(result.dtype=='timedelta64[ns]')
+
+ result = df['A'] + datetime(2001,1,1)
+ self.assert_(result.dtype=='timedelta64[ns]')
+
+ td = datetime(2001,1,1,3,4)
+ resulta = df['A'] - td
+ self.assert_(resulta.dtype=='timedelta64[ns]')
+
+ resultb = df['A'] + td
+ self.assert_(resultb.dtype=='timedelta64[ns]')
+
+ # timedelta on lhs
+ result = resultb + td
+ self.assert_(resultb.dtype=='timedelta64[ns]')
+
+ # timedeltas on rhs
+ td = timedelta(days=1)
+ resulta = df['A'] + td
+ resultb = resulta - td
+ assert_series_equal(resultb,df['A'])
+
+ td = timedelta(minutes=5,seconds=3)
+ resulta = df['A'] + td
+ resultb = resulta - td
+ self.assert_(resultb.dtype=='M8[ns]')
+
+ def test_timedelta64_nan(self):
+
+ from pandas import tslib
+ td = Series([ timedelta(days=i) for i in range(10) ])
+
+ # nan ops on timedeltas
+ td1 = td.copy()
+ td1[0] = np.nan
+ self.assert_(isnull(td1[0]) == True)
+ self.assert_(td1[0].view('i8') == tslib.iNaT)
+ td1[0] = td[0]
+ self.assert_(isnull(td1[0]) == False)
+
+ td1[1] = tslib.iNaT
+ self.assert_(isnull(td1[1]) == True)
+ self.assert_(td1[1].view('i8') == tslib.iNaT)
+ td1[1] = td[1]
+ self.assert_(isnull(td1[1]) == False)
+
+ td1[2] = tslib.NaT
+ self.assert_(isnull(td1[2]) == True)
+ self.assert_(td1[2].view('i8') == tslib.iNaT)
+ td1[2] = td[2]
+ self.assert_(isnull(td1[2]) == False)
+
+ #### boolean setting
+ #### this doesn't work, not sure numpy even supports it
+ #result = td[(td>np.timedelta64(timedelta(days=3))) & (td<np.timedelta64(timedelta(days=7)))] = np.nan
+ #self.assert_(isnull(result).sum() == 7)
# NumPy limitation =(
@@ -1884,10 +2000,6 @@ def test_idxmax(self):
allna = self.series * nan
self.assert_(isnull(allna.idxmax()))
- def test_operators_date(self):
- result = self.objSeries + timedelta(1)
- result = self.objSeries - timedelta(1)
-
def test_operators_corner(self):
series = self.ts
@@ -2170,6 +2282,7 @@ def test_value_counts_nunique(self):
assert_series_equal(hist, expected)
def test_unique(self):
+
# 714 also, dtype=float
s = Series([1.2345] * 100)
s[::2] = np.nan
diff --git a/pandas/tslib.pyx b/pandas/tslib.pyx
index f99613217c2b8..6ac2ee3607f51 100644
--- a/pandas/tslib.pyx
+++ b/pandas/tslib.pyx
@@ -2,7 +2,7 @@
cimport numpy as np
from numpy cimport (int32_t, int64_t, import_array, ndarray,
- NPY_INT64, NPY_DATETIME)
+ NPY_INT64, NPY_DATETIME, NPY_TIMEDELTA)
import numpy as np
from cpython cimport *
@@ -881,6 +881,73 @@ def array_to_datetime(ndarray[object] values, raise_=False, dayfirst=False,
return oresult
+def array_to_timedelta64(ndarray[object] values, coerce=True):
+ """ convert an ndarray to an array of ints that are timedeltas
+ force conversion if coerce = True,
+ else return an object array """
+ cdef:
+ Py_ssize_t i, n
+ object val
+ ndarray[int64_t] result
+
+ n = values.shape[0]
+ result = np.empty(n, dtype='i8')
+ for i in range(n):
+ val = values[i]
+
+ # in py3 this is already an int, don't convert
+ if is_integer_object(val):
+ result[i] = val
+
+ elif isinstance(val,timedelta) or isinstance(val,np.timedelta64):
+
+ if isinstance(val, np.timedelta64):
+ if val.dtype != 'm8[ns]':
+ val = val.astype('m8[ns]')
+ val = val.item()
+ else:
+ val = _delta_to_nanoseconds(np.timedelta64(val).item())
+
+ result[i] = val
+
+ elif util._checknull(val) or val == iNaT or val is NaT:
+ result[i] = iNaT
+
+ else:
+
+ # just return, don't convert
+ if not coerce:
+ return values.copy()
+
+ result[i] = iNaT
+
+ return result
+
+def repr_timedelta64(object value):
+ """ provide repr for timedelta64 """
+
+ ivalue = value.view('i8')
+
+ frac = float(ivalue)/1e9
+ days = int(frac) / 86400
+ frac -= days*86400
+ hours = int(frac) / 3600
+ frac -= hours * 3600
+ minutes = int(frac) / 60
+ seconds = frac - minutes * 60
+ nseconds = int(seconds)
+
+ if nseconds == seconds:
+ seconds_pretty = "%02d" % nseconds
+ else:
+ sp = abs(int(1e6*(seconds-nseconds)))
+ seconds_pretty = "%02d.%06d" % (nseconds,sp)
+
+ if days:
+ return "%d days, %02d:%02d:%s" % (days,hours,minutes,seconds_pretty)
+
+ return "%02d:%02d:%s" % (hours,minutes,seconds_pretty)
+
def array_strptime(ndarray[object] values, object fmt):
cdef:
Py_ssize_t i, n = len(values)
| - Series ops with a rhs of a Timestamp were throwing an exception (#2898)
- fixed issue in _index.convert_scalar where the rhs was a non-scalar, lhs was dtype of M8[ns], and was trying
to convert to a scalar (but didn't need conversion)
- added some utilities in tslib.pyx, inference.pyx to detect and convert timedelta/timedelta64
- added timedelta64 to checknull routines, representing timedelta64 as NaT
- added setitem support to set NaT via np.nan (analogously to datetime64)
- added ability for Series to detect and set a `timedelta64[ns]` dtype (if all passed objects are timedelta like)
- added correct printing of timedelta64[ns] (it looks like py3.3 changed the default for str(x), so rolled a new one)
```
In [149]: from datetime import datetime, timedelta
In [150]: s = Series(date_range('2012-1-1', periods=3, freq='D'))
In [151]: td = Series([ timedelta(days=i) for i in range(3) ])
In [152]: df = DataFrame(dict(A = s, B = td))
In [153]: df
Out[153]:
A B
0 2012-01-01 00:00:00 0:00:00
1 2012-01-02 00:00:00 1 day, 0:00:00
2 2012-01-03 00:00:00 2 days, 0:00:00
In [154]: df['C'] = df['A'] + df['B']
In [155]: df
Out[155]:
A B C
0 2012-01-01 00:00:00 0:00:00 2012-01-01 00:00:00
1 2012-01-02 00:00:00 1 day, 0:00:00 2012-01-03 00:00:00
2 2012-01-03 00:00:00 2 days, 0:00:00 2012-01-05 00:00:00
In [156]: df.dtypes
Out[156]:
A datetime64[ns]
B timedelta64[ns]
C datetime64[ns]
Dtype: object
In [60]: s - s.max()
Out[60]:
0 -2 days, 0:00:00
1 -1 day, 0:00:00
2 0:00:00
Dtype: timedelta64[ns]
In [61]: s - datetime(2011,1,1,3,5)
Out[61]:
0 364 days, 20:55:00
1 365 days, 20:55:00
2 366 days, 20:55:00
Dtype: timedelta64[ns]
In [62]: s + timedelta(minutes=5)
Out[62]:
0 2012-01-01 00:05:00
1 2012-01-02 00:05:00
2 2012-01-03 00:05:00
Dtype: datetime64[ns]
In [160]: y = s - s.shift()
In [161]: y
Out[161]:
0 NaT
1 1 day, 0:00:00
2 1 day, 0:00:00
Dtype: timedelta64[ns]
They can be set to NaT using np.nan, analogously to datetimes
In [162]: y[1] = np.nan
In [163]: y
Out[163]:
0 NaT
1 NaT
2 1 day, 0:00:00
Dtype: timedelta64[ns]
# works on lhs too
In [64]: s.max() - s
Out[64]:
0 2 days, 0:00:00
1 1 day, 0:00:00
2 0:00:00
Dtype: timedelta64[ns]
In [65]: datetime(2011,1,1,3,5) - s
Out[65]:
0 -365 days, 3:05:00
1 -366 days, 3:05:00
2 -367 days, 3:05:00
Dtype: timedelta64[ns]
In [66]: timedelta(minutes=5) + s
Out[66]:
0 2012-01-01 00:05:00
1 2012-01-02 00:05:00
2 2012-01-03 00:05:00
Dtype: datetime64[ns]
```
| https://api.github.com/repos/pandas-dev/pandas/pulls/2899 | 2013-02-19T15:11:51Z | 2013-02-23T20:30:30Z | 2013-02-23T20:30:30Z | 2014-06-12T07:39:59Z |
BUG: nanops.var produces incorrect results due to int64 overflow (fixes #2888) | diff --git a/pandas/core/nanops.py b/pandas/core/nanops.py
index 1315fc3ce2b76..000320027e0e4 100644
--- a/pandas/core/nanops.py
+++ b/pandas/core/nanops.py
@@ -138,6 +138,9 @@ def get_median(x):
def _nanvar(values, axis=None, skipna=True, ddof=1):
+ if not isinstance(values.dtype.type, np.floating):
+ values = values.astype('f8')
+
mask = isnull(values)
if axis is not None:
diff --git a/pandas/tests/test_series.py b/pandas/tests/test_series.py
index 69463d0dcab6c..24e684ac3379b 100644
--- a/pandas/tests/test_series.py
+++ b/pandas/tests/test_series.py
@@ -1401,7 +1401,8 @@ def testit():
# check the result is correct
nona = self.series.dropna()
- assert_almost_equal(f(nona), alternate(nona))
+ assert_almost_equal(f(nona), alternate(nona.values))
+ assert_almost_equal(f(self.series), alternate(nona.values))
allna = self.series * nan
self.assert_(np.isnan(f(allna)))
@@ -1410,6 +1411,12 @@ def testit():
s = Series([1, 2, 3, None, 5])
f(s)
+ # 2888
+ l = [0]
+ l.extend(list(range(2**40,2**40+1000)))
+ s = Series(l, dtype='int64')
+ assert_almost_equal(float(f(s)), float(alternate(s.values)))
+
# check date range
if check_objects:
s = Series(bdate_range('1/1/2000', periods=10))
| fixes #2888
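The fix amounts to upcasting integer input before the variance math; a minimal sketch of both the overflow and the cast (`nanvar_fixed` is a hypothetical stand-in, not the pandas function):

```python
import numpy as np

values = np.array(range(2**40, 2**40 + 10), dtype='int64')

# squaring values near 2**40 needs ~2**80, which silently wraps in int64 --
# this is the kind of intermediate the old nanvar computed on the raw data
sum_sq_int = int((values * values).sum())
sum_sq_float = (values.astype('f8') ** 2).sum()
assert sum_sq_int != int(sum_sq_float)  # the int64 path is corrupted

def nanvar_fixed(values, ddof=1):
    # mirror the patch: upcast non-float input to f8 first
    if not issubclass(values.dtype.type, np.floating):
        values = values.astype('f8')
    return values.var(ddof=ddof)

# the data is just 0..9 shifted by 2**40, so the sample variance is 82.5/9
print(nanvar_fixed(values))
```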
| https://api.github.com/repos/pandas-dev/pandas/pulls/2889 | 2013-02-17T20:22:28Z | 2013-02-23T20:12:43Z | 2013-02-23T20:12:43Z | 2014-06-24T16:44:51Z |
BUG: Fix broken YearBegin offset. Closes #2844. | diff --git a/pandas/tseries/offsets.py b/pandas/tseries/offsets.py
index 592cc551aeb77..bd95a62c3f2ed 100644
--- a/pandas/tseries/offsets.py
+++ b/pandas/tseries/offsets.py
@@ -966,7 +966,7 @@ class YearBegin(DateOffset, CacheableOffset):
"""DateOffset increments between calendar year begin dates"""
def __init__(self, n=1, **kwds):
- self.month = kwds.get('month', 12)
+ self.month = kwds.get('month', 1)
if self.month < 1 or self.month > 12:
raise ValueError('Month must go from 1 to 12')
@@ -974,19 +974,44 @@ def __init__(self, n=1, **kwds):
DateOffset.__init__(self, n=n, **kwds)
def apply(self, other):
+ def _increment(date):
+ year = date.year
+ if date.month >= self.month:
+ year += 1
+ return datetime(year, self.month, 1, date.hour, date.minute,
+ date.second, date.microsecond)
+
+ def _decrement(date):
+ year = date.year
+ if date.month < self.month or (date.month == self.month and
+ date.day == 1):
+ year -= 1
+ return datetime(year, self.month, 1, date.hour, date.minute,
+ date.second, date.microsecond)
+
+ def _rollf(date):
+ if (date.month != self.month) or date.day > 1:
+ date = _increment(date)
+ return date
+
n = self.n
- if other.month != 1 or other.day != 1:
- other = datetime(other.year, 1, 1,
- other.hour, other.minute, other.second,
- other.microsecond)
- if n <= 0:
- n = n + 1
- other = other + relativedelta(years=n, day=1)
- return other
+ result = other
+ if n > 0:
+ while n > 0:
+ result = _increment(result)
+ n -= 1
+ elif n < 0:
+ while n < 0:
+ result = _decrement(result)
+ n += 1
+ else:
+ # n == 0, roll forward
+ result = _rollf(result)
- @classmethod
- def onOffset(cls, dt):
- return dt.month == 1 and dt.day == 1
+ return result
+
+ def onOffset(self, dt):
+ return dt.month == self.month and dt.day == 1
@property
def rule_code(self):
diff --git a/pandas/tseries/tests/test_offsets.py b/pandas/tseries/tests/test_offsets.py
index f42e9b3877b3e..209f770da5c94 100644
--- a/pandas/tseries/tests/test_offsets.py
+++ b/pandas/tseries/tests/test_offsets.py
@@ -1128,6 +1128,7 @@ def test_offset(self):
tests.append((YearBegin(-1),
{datetime(2007, 1, 1): datetime(2006, 1, 1),
+ datetime(2007, 1, 15): datetime(2007, 1, 1),
datetime(2008, 6, 30): datetime(2008, 1, 1),
datetime(2008, 12, 31): datetime(2008, 1, 1),
datetime(2006, 12, 29): datetime(2006, 1, 1),
@@ -1139,6 +1140,26 @@ def test_offset(self):
datetime(2008, 6, 30): datetime(2007, 1, 1),
datetime(2008, 12, 31): datetime(2007, 1, 1), }))
+ tests.append((YearBegin(month=4),
+ {datetime(2007, 4, 1): datetime(2008, 4, 1),
+ datetime(2007, 4, 15): datetime(2008, 4, 1),
+ datetime(2007, 3, 1): datetime(2007, 4, 1),
+ datetime(2007, 12, 15): datetime(2008, 4, 1),
+ datetime(2012, 1, 31): datetime(2012, 4, 1), }))
+
+ tests.append((YearBegin(0, month=4),
+ {datetime(2007, 4, 1): datetime(2007, 4, 1),
+ datetime(2007, 3, 1): datetime(2007, 4, 1),
+ datetime(2007, 12, 15): datetime(2008, 4, 1),
+ datetime(2012, 1, 31): datetime(2012, 4, 1), }))
+
+ tests.append((YearBegin(-1, month=4),
+ {datetime(2007, 4, 1): datetime(2006, 4, 1),
+ datetime(2007, 3, 1): datetime(2006, 4, 1),
+ datetime(2007, 12, 15): datetime(2007, 4, 1),
+ datetime(2012, 1, 31): datetime(2011, 4, 1), }))
+
+
for offset, cases in tests:
for base, expected in cases.iteritems():
assertEq(offset, base, expected)
diff --git a/pandas/tseries/tests/test_period.py b/pandas/tseries/tests/test_period.py
index 5d477a8ca10ed..2183c24088bb6 100644
--- a/pandas/tseries/tests/test_period.py
+++ b/pandas/tseries/tests/test_period.py
@@ -1235,7 +1235,7 @@ def test_to_timestamp(self):
self.assert_(result.index.equals(exp_index))
self.assertEquals(result.name, 'foo')
- exp_index = date_range('1/1/2001', end='1/1/2009', freq='AS-DEC')
+ exp_index = date_range('1/1/2001', end='1/1/2009', freq='AS-JAN')
result = series.to_timestamp(how='start')
self.assert_(result.index.equals(exp_index))
@@ -1344,7 +1344,7 @@ def test_frame_to_time_stamp(self):
self.assert_(result.index.equals(exp_index))
assert_almost_equal(result.values, df.values)
- exp_index = date_range('1/1/2001', end='1/1/2009', freq='AS-DEC')
+ exp_index = date_range('1/1/2001', end='1/1/2009', freq='AS-JAN')
result = df.to_timestamp('D', 'start')
self.assert_(result.index.equals(exp_index))
@@ -1375,7 +1375,7 @@ def _get_with_delta(delta, freq='A-DEC'):
self.assert_(result.columns.equals(exp_index))
assert_almost_equal(result.values, df.values)
- exp_index = date_range('1/1/2001', end='1/1/2009', freq='AS-DEC')
+ exp_index = date_range('1/1/2001', end='1/1/2009', freq='AS-JAN')
result = df.to_timestamp('D', 'start', axis=1)
self.assert_(result.columns.equals(exp_index))
| This should close #2844. The good but unsettling news is that it wasn't an implementation bug: the logic for handling months in the YearBegin offset simply wasn't written; it was hard-coded to always be Year, 1, 1.
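The anchored-month behavior this adds can be sketched directly from the patch's own test table (hypothetical usage; assumes a pandas build that includes the `month` keyword on `YearBegin`):

```python
from datetime import datetime
from pandas.tseries.offsets import YearBegin

# month=4 anchors the offset to April 1 instead of the hard-coded January 1
assert datetime(2007, 12, 15) + YearBegin(month=4) == datetime(2008, 4, 1)

# n=0 rolls forward only when the date is not already on the anchor
assert datetime(2007, 4, 1) + YearBegin(0, month=4) == datetime(2007, 4, 1)

# n=-1 steps back to the previous anchored year start
assert datetime(2007, 12, 15) + YearBegin(-1, month=4) == datetime(2007, 4, 1)
```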
| https://api.github.com/repos/pandas-dev/pandas/pulls/2883 | 2013-02-16T20:24:03Z | 2013-04-08T07:18:48Z | 2013-04-08T07:18:48Z | 2014-07-08T08:39:35Z |
BUG: fix max_columns=0, close #2856 | diff --git a/pandas/core/config_init.py b/pandas/core/config_init.py
index 70f3fb045376e..9f599ffe908ba 100644
--- a/pandas/core/config_init.py
+++ b/pandas/core/config_init.py
@@ -212,6 +212,9 @@ def mpl_style_cb(key):
cf.register_option('mpl_style', None, pc_mpl_style_doc,
validator=is_one_of_factory([None, False, 'default']),
cb=mpl_style_cb)
+ cf.register_option('height', 100, 'TODO', validator=is_int)
+ cf.register_option('width',80, 'TODO', validator=is_int)
+cf.deprecate_option('display.line_width', msg='TODO', rkey='display.width')
tc_sim_interactive_doc = """
: boolean
diff --git a/pandas/core/frame.py b/pandas/core/frame.py
index 091b9926500d9..ecc35a3ee9749 100644
--- a/pandas/core/frame.py
+++ b/pandas/core/frame.py
@@ -599,44 +599,39 @@ def empty(self):
def __nonzero__(self):
raise ValueError("Cannot call bool() on DataFrame.")
- def _need_info_repr_(self):
+ def _repr_fits_boundaries_(self):
"""
- Check if it is needed to use info/summary view to represent a
- particular DataFrame.
+ Check if repr fits in boundaries imposed by the following sets of
+ display options:
+ * width, height
+ * max_rows, max_columns
+ In case of a non-interactive session, no boundaries apply.
"""
+ if not com.in_interactive_session():
+ return True
- if com.in_qtconsole():
- terminal_width, terminal_height = 100, 100
- else:
- terminal_width, terminal_height = get_terminal_size()
- max_rows = (terminal_height if get_option("display.max_rows") == 0
- else get_option("display.max_rows"))
- max_columns = get_option("display.max_columns")
- expand_repr = get_option("display.expand_frame_repr")
+ terminal_width, terminal_height = get_terminal_size()
- if max_columns > 0:
- if (len(self.index) <= max_rows and
- (len(self.columns) <= max_columns and expand_repr)):
- return False
- else:
- return True
- else:
- # save us
- if (len(self.index) > max_rows or
- (com.in_interactive_session() and
- len(self.columns) > terminal_width // 2 and
- not expand_repr)):
- return True
- else:
- buf = StringIO()
- self.to_string(buf=buf)
- value = buf.getvalue()
- if (max([len(l) for l in value.split('\n')]) > terminal_width
- and com.in_interactive_session()
- and not expand_repr):
- return True
- else:
- return False
+ # check vertical boundaries (excluding column axis area)
+ max_rows = get_option("display.max_rows") or terminal_height
+ display_height = get_option("display.height") or terminal_height
+ if len(self.index) > min(max_rows, display_height):
+ return False
+
+ # check horizontal boundaries (including index axis area)
+ max_columns = get_option("display.max_columns")
+ display_width = get_option("display.width") or terminal_width
+ nb_columns = len(self.columns)
+ if max_columns and nb_columns > max_columns:
+ return False
+ if nb_columns > (display_width // 2):
+ return False
+
+ buf = StringIO()
+ self.to_string(buf=buf)
+ value = buf.getvalue()
+ repr_width = max([len(l) for l in value.split('\n')])
+ return repr_width <= display_width
def __str__(self):
"""
@@ -668,26 +663,29 @@ def __unicode__(self):
py2/py3.
"""
buf = StringIO(u"")
- if self._need_info_repr_():
- max_info_rows = get_option('display.max_info_rows')
- verbose = max_info_rows is None or self.shape[0] <= max_info_rows
- self.info(buf=buf, verbose=verbose)
+ if self._repr_fits_boundaries_():
+ self.to_string(buf=buf)
else:
- is_wide = self._need_wide_repr()
- line_width = None
- if is_wide:
- line_width = get_option('display.line_width')
- self.to_string(buf=buf, line_width=line_width)
+ terminal_width, terminal_height = get_terminal_size()
+ max_rows = get_option("display.max_rows") or terminal_height
+ # Expand or info? Decide based on option display.expand_frame_repr
+ # and keep it sane for the number of display rows used by the
+ # expanded repr.
+ if (get_option("display.expand_frame_repr") and
+ len(self.columns) < max_rows):
+ line_width = get_option("display.width") or terminal_width
+ self.to_string(buf=buf, line_width=line_width)
+ else:
+ max_info_rows = get_option('display.max_info_rows')
+ verbose = (max_info_rows is None or
+ self.shape[0] <= max_info_rows)
+ self.info(buf=buf, verbose=verbose)
value = buf.getvalue()
assert type(value) == unicode
return value
- def _need_wide_repr(self):
- return (get_option("display.expand_frame_repr")
- and com.in_interactive_session())
-
def __repr__(self):
"""
Return a string representation for a particular DataFrame
@@ -705,12 +703,18 @@ def _repr_html_(self):
raise ValueError('Disable HTML output in QtConsole')
if get_option("display.notebook_repr_html"):
- if self._need_info_repr_():
- return None
- else:
+ if self._repr_fits_boundaries_():
return ('<div style="max-height:1000px;'
'max-width:1500px;overflow:auto;">\n' +
self.to_html() + '\n</div>')
+ else:
+ buf = StringIO(u"")
+ max_info_rows = get_option('display.max_info_rows')
+ verbose = (max_info_rows is None or
+ self.shape[0] <= max_info_rows)
+ self.info(buf=buf, verbose=verbose)
+ info = buf.getvalue().replace('<', '&lt;').replace('>', '&gt;')
+ return ('<pre>\n' + info + '\n</pre>')
else:
return None
diff --git a/pandas/tests/test_format.py b/pandas/tests/test_format.py
index cb55061ae7c3e..d1e5c64afe794 100644
--- a/pandas/tests/test_format.py
+++ b/pandas/tests/test_format.py
@@ -19,6 +19,7 @@
from pandas.util.py3compat import lzip
import pandas.core.format as fmt
import pandas.util.testing as tm
+from pandas.util.terminal import get_terminal_size
import pandas
import pandas as pd
from pandas.core.config import (set_option, get_option,
@@ -31,6 +32,17 @@ def curpath():
pth, _ = os.path.split(os.path.abspath(__file__))
return pth
+def has_info_repr(df):
+ r = repr(df)
+ return r.split('\n')[0].startswith("<class")
+
+def has_expanded_repr(df):
+ r = repr(df)
+ for line in r.split('\n'):
+ if line.endswith('\\'):
+ return True
+ return False
+
class TestDataFrameFormatting(unittest.TestCase):
_multiprocess_can_split_ = True
@@ -144,6 +156,57 @@ def test_repr_no_backslash(self):
df = DataFrame(np.random.randn(10, 4))
self.assertTrue('\\' not in repr(df))
+ def test_expand_frame_repr(self):
+ df_small = DataFrame('hello', [0], [0])
+ df_wide = DataFrame('hello', [0], range(10))
+
+ with option_context('mode.sim_interactive', True):
+ with option_context('display.width', 50):
+ with option_context('display.expand_frame_repr', True):
+ self.assertFalse(has_info_repr(df_small))
+ self.assertFalse(has_expanded_repr(df_small))
+ self.assertFalse(has_info_repr(df_wide))
+ self.assertTrue(has_expanded_repr(df_wide))
+
+ with option_context('display.expand_frame_repr', False):
+ self.assertFalse(has_info_repr(df_small))
+ self.assertFalse(has_expanded_repr(df_small))
+ self.assertTrue(has_info_repr(df_wide))
+ self.assertFalse(has_expanded_repr(df_wide))
+
+ def test_repr_max_columns_max_rows(self):
+ term_width, term_height = get_terminal_size()
+ if term_width < 10 or term_height < 10:
+ raise nose.SkipTest
+
+ def mkframe(n):
+ index = ['%05d' % i for i in range(n)]
+ return DataFrame(0, index, index)
+
+ with option_context('mode.sim_interactive', True):
+ with option_context('display.width', term_width * 2):
+ with option_context('display.max_rows', 5,
+ 'display.max_columns', 5):
+ self.assertFalse(has_expanded_repr(mkframe(4)))
+ self.assertFalse(has_expanded_repr(mkframe(5)))
+ self.assertFalse(has_expanded_repr(mkframe(6)))
+ self.assertTrue(has_info_repr(mkframe(6)))
+
+ with option_context('display.max_rows', 20,
+ 'display.max_columns', 5):
+ # Out of max_columns boundary, but no extending
+ # occurs ... can improve?
+ self.assertFalse(has_expanded_repr(mkframe(6)))
+ self.assertFalse(has_info_repr(mkframe(6)))
+
+ with option_context('display.max_columns', 0,
+ 'display.max_rows', term_width * 20,
+ 'display.width', 0):
+ df = mkframe((term_width // 7) - 2)
+ self.assertFalse(has_expanded_repr(df))
+ df = mkframe((term_width // 7) + 2)
+ self.assertTrue(has_expanded_repr(df))
+
def test_to_string_repr_unicode(self):
buf = StringIO()
@@ -1176,8 +1239,8 @@ def get_ipython():
self.assert_(repstr is not None)
fmt.set_printoptions(max_rows=5, max_columns=2)
-
- self.assert_(self.frame._repr_html_() is None)
+ repstr = self.frame._repr_html_()
+ self.assert_('class' in repstr) # info fallback
fmt.reset_printoptions()
| #2856
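The PR body is terse, so here is a minimal sketch of the truncation rule being reworked, written against the display options the patch touches (hedged: checked against later pandas, where `display.height` has since been removed but `display.max_columns` keeps this role):

```python
import pandas as pd

df = pd.DataFrame(0, index=range(3), columns=range(20))

# a small max_columns bound forces the truncated ('...') repr
with pd.option_context('display.max_columns', 5):
    assert '...' in repr(df)

# lifting the bound lets every column render
with pd.option_context('display.max_columns', None):
    assert '...' not in repr(df)
```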
| https://api.github.com/repos/pandas-dev/pandas/pulls/2881 | 2013-02-16T14:49:14Z | 2013-04-12T05:35:01Z | 2013-04-12T05:35:01Z | 2014-06-13T23:52:46Z |
BUG: Series construction from MaskedArray fails for non-floating, non-object types | diff --git a/pandas/core/frame.py b/pandas/core/frame.py
index 4301aac566fc6..c50de028917d7 100755
--- a/pandas/core/frame.py
+++ b/pandas/core/frame.py
@@ -23,7 +23,8 @@
import numpy.ma as ma
from pandas.core.common import (isnull, notnull, PandasError, _try_sort,
- _default_index, _is_sequence, _infer_dtype_from_scalar)
+ _default_index, _maybe_upcast, _is_sequence,
+ _infer_dtype_from_scalar)
from pandas.core.generic import NDFrame
from pandas.core.index import Index, MultiIndex, _ensure_index
from pandas.core.indexing import (_NDFrameIndexer, _maybe_droplevels,
@@ -390,9 +391,12 @@ def __init__(self, data=None, index=None, columns=None, dtype=None,
mgr = self._init_dict(data, index, columns, dtype=dtype)
elif isinstance(data, ma.MaskedArray):
mask = ma.getmaskarray(data)
- datacopy, fill_value = com._maybe_upcast(data, copy=True)
- datacopy[mask] = fill_value
- mgr = self._init_ndarray(datacopy, index, columns, dtype=dtype,
+ if mask.any():
+ data, fill_value = _maybe_upcast(data, copy=True)
+ data[mask] = fill_value
+ else:
+ data = data.copy()
+ mgr = self._init_ndarray(data, index, columns, dtype=dtype,
copy=copy)
elif isinstance(data, np.ndarray):
if data.dtype.names:
@@ -2701,7 +2705,8 @@ def _reindex_with_indexers(self, index, row_indexer, columns, col_indexer,
return DataFrame(new_data)
- def reindex_like(self, other, method=None, copy=True, limit=None):
+ def reindex_like(self, other, method=None, copy=True, limit=None,
+ fill_value=NA):
"""
Reindex DataFrame to match indices of another DataFrame, optionally
with filling logic
@@ -2724,7 +2729,8 @@ def reindex_like(self, other, method=None, copy=True, limit=None):
reindexed : DataFrame
"""
return self.reindex(index=other.index, columns=other.columns,
- method=method, copy=copy, limit=limit)
+ method=method, copy=copy, limit=limit,
+ fill_value=fill_value)
truncate = generic.truncate
diff --git a/pandas/core/series.py b/pandas/core/series.py
index 21109593489ad..5570f81e9ce15 100644
--- a/pandas/core/series.py
+++ b/pandas/core/series.py
@@ -15,7 +15,7 @@
import numpy.ma as ma
from pandas.core.common import (isnull, notnull, _is_bool_indexer,
- _default_index, _maybe_promote,
+ _default_index, _maybe_promote, _maybe_upcast,
_asarray_tuplesafe, is_integer_dtype,
_infer_dtype_from_scalar)
from pandas.core.index import (Index, MultiIndex, InvalidIndexError,
@@ -88,12 +88,15 @@ def wrapper(self, other):
# rhs is either a timedelta or a series/ndarray
if lib.is_timedelta_array(rvalues):
- rvalues = pa.array([ np.timedelta64(v) for v in rvalues ],dtype='timedelta64[ns]')
+ rvalues = pa.array([np.timedelta64(v) for v in rvalues],
+ dtype='timedelta64[ns]')
dtype = 'M8[ns]'
elif com.is_datetime64_dtype(rvalues):
dtype = 'timedelta64[ns]'
else:
- raise ValueError("cannot operate on a series with out a rhs of a series/ndarray of type datetime64[ns] or a timedelta")
+ raise ValueError('cannot operate on a series without a rhs '
+ 'of a series/ndarray of type datetime64[ns] '
+ 'or a timedelta')
lvalues = lvalues.view('i8')
rvalues = rvalues.view('i8')
@@ -430,32 +433,32 @@ def from_array(cls, arr, index=None, name=None, copy=False):
def __init__(self, data=None, index=None, dtype=None, name=None,
copy=False):
- """One-dimensional ndarray with axis labels (including time
-series). Labels need not be unique but must be any hashable type. The object
-supports both integer- and label-based indexing and provides a host of methods
-for performing operations involving the index. Statistical methods from ndarray
-have been overridden to automatically exclude missing data (currently
-represented as NaN)
-
-Operations between Series (+, -, /, *, **) align values based on their
-associated index values-- they need not be the same length. The result
-index will be the sorted union of the two indexes.
-
-Parameters
-----------
-data : array-like, dict, or scalar value
- Contains data stored in Series
-index : array-like or Index (1d)
+ """
+ One-dimensional ndarray with axis labels (including time series).
+ Labels need not be unique but must be any hashable type. The object
+ supports both integer- and label-based indexing and provides a host of
+ methods for performing operations involving the index. Statistical
+ methods from ndarray have been overridden to automatically exclude
+ missing data (currently represented as NaN)
- Values must be unique and hashable, same length as data. Index object
- (or other iterable of same length as data) Will default to
- np.arange(len(data)) if not provided. If both a dict and index sequence
- are used, the index will override the keys found in the dict.
+ Operations between Series (+, -, /, *, **) align values based on their
+ associated index values-- they need not be the same length. The result
+ index will be the sorted union of the two indexes.
-dtype : numpy.dtype or None
- If None, dtype will be inferred copy : boolean, default False Copy
- input data
-copy : boolean, default False
+ Parameters
+ ----------
+ data : array-like, dict, or scalar value
+ Contains data stored in Series
+ index : array-like or Index (1d)
+ Values must be unique and hashable, same length as data. Index
+ object (or other iterable of same length as data) Will default to
+ np.arange(len(data)) if not provided. If both a dict and index
+ sequence are used, the index will override the keys found in the
+ dict.
+ dtype : numpy.dtype or None
+ If None, dtype will be inferred copy : boolean, default False Copy
+ input data
+ copy : boolean, default False
"""
pass
@@ -769,7 +772,8 @@ def astype(self, dtype):
See numpy.ndarray.astype
"""
casted = com._astype_nansafe(self.values, dtype)
- return self._constructor(casted, index=self.index, name=self.name, dtype=casted.dtype)
+ return self._constructor(casted, index=self.index, name=self.name,
+ dtype=casted.dtype)
def convert_objects(self, convert_dates=True, convert_numeric=True):
"""
@@ -778,8 +782,12 @@ def convert_objects(self, convert_dates=True, convert_numeric=True):
Parameters
----------
- convert_dates : if True, attempt to soft convert_dates, if 'coerce', force conversion (and non-convertibles get NaT)
- convert_numeric : if True attempt to coerce to numerbers (including strings), non-convertibles get NaN
+ convert_dates : boolean, default True
+ if True, attempt to soft convert_dates, if 'coerce', force
+ conversion (and non-convertibles get NaT)
+ convert_numeric : boolean, default True
+ if True attempt to coerce to numbers (including strings),
+ non-convertibles get NaN
Returns
-------
@@ -982,7 +990,8 @@ def __unicode__(self):
"""
Return a string representation for a particular DataFrame
- Invoked by unicode(df) in py2 only. Yields a Unicode String in both py2/py3.
+ Invoked by unicode(df) in py2 only. Yields a Unicode String in both
+ py2/py3.
"""
width, height = get_terminal_size()
max_rows = (height if get_option("display.max_rows") == 0
@@ -2416,7 +2425,7 @@ def reindex_axis(self, labels, axis=0, **kwargs):
raise ValueError("cannot reindex series on non-zero axis!")
return self.reindex(index=labels,**kwargs)
- def reindex_like(self, other, method=None, limit=None):
+ def reindex_like(self, other, method=None, limit=None, fill_value=pa.NA):
"""
Reindex Series to match index of another Series, optionally with
filling logic
@@ -2437,7 +2446,8 @@ def reindex_like(self, other, method=None, limit=None):
-------
reindexed : Series
"""
- return self.reindex(other.index, method=method, limit=limit)
+ return self.reindex(other.index, method=method, limit=limit,
+ fill_value=fill_value)
def take(self, indices, axis=0):
"""
@@ -3060,10 +3070,14 @@ def remove_na(arr):
def _sanitize_array(data, index, dtype=None, copy=False,
raise_cast_failure=False):
+
if isinstance(data, ma.MaskedArray):
mask = ma.getmaskarray(data)
- data = ma.copy(data)
- data[mask] = pa.NA
+ if mask.any():
+ data, fill_value = _maybe_upcast(data, copy=True)
+ data[mask] = fill_value
+ else:
+ data = data.copy()
def _try_cast(arr):
try:
@@ -3112,7 +3126,7 @@ def _try_cast(arr):
raise
subarr = pa.array(data, dtype=object, copy=copy)
subarr = lib.maybe_convert_objects(subarr)
-
+
else:
subarr = com._possibly_convert_platform(data)
diff --git a/pandas/tests/test_series.py b/pandas/tests/test_series.py
index 69463d0dcab6c..42b0fa3827281 100644
--- a/pandas/tests/test_series.py
+++ b/pandas/tests/test_series.py
@@ -40,7 +40,7 @@ def _skip_if_no_pytz():
except ImportError:
raise nose.SkipTest
-#-------------------------------------------------------------------------------
+#------------------------------------------------------------------------------
# Series test cases
JOIN_TYPES = ['inner', 'outer', 'left', 'right']
@@ -342,6 +342,65 @@ def test_constructor_maskedarray(self):
expected = Series([0.0, nan, 2.0], index=index)
assert_series_equal(result, expected)
+ data[1] = 1.0
+ result = Series(data, index=index)
+ expected = Series([0.0, 1.0, 2.0], index=index)
+ assert_series_equal(result, expected)
+
+ data = ma.masked_all((3,), dtype=int)
+ result = Series(data)
+ expected = Series([nan, nan, nan], dtype=float)
+ assert_series_equal(result, expected)
+
+ data[0] = 0
+ data[2] = 2
+ index = ['a', 'b', 'c']
+ result = Series(data, index=index)
+ expected = Series([0, nan, 2], index=index, dtype=float)
+ assert_series_equal(result, expected)
+
+ data[1] = 1
+ result = Series(data, index=index)
+ expected = Series([0, 1, 2], index=index, dtype=int)
+ assert_series_equal(result, expected)
+
+ data = ma.masked_all((3,), dtype=bool)
+ result = Series(data)
+ expected = Series([nan, nan, nan], dtype=object)
+ assert_series_equal(result, expected)
+
+ data[0] = True
+ data[2] = False
+ index = ['a', 'b', 'c']
+ result = Series(data, index=index)
+ expected = Series([True, nan, False], index=index, dtype=object)
+ assert_series_equal(result, expected)
+
+ data[1] = True
+ result = Series(data, index=index)
+ expected = Series([True, True, False], index=index, dtype=bool)
+ assert_series_equal(result, expected)
+
+ from pandas import tslib
+ data = ma.masked_all((3,), dtype='M8[ns]')
+ result = Series(data)
+ expected = Series([tslib.iNaT, tslib.iNaT, tslib.iNaT], dtype='M8[ns]')
+ assert_series_equal(result, expected)
+
+ data[0] = datetime(2001, 1, 1)
+ data[2] = datetime(2001, 1, 3)
+ index = ['a', 'b', 'c']
+ result = Series(data, index=index)
+ expected = Series([datetime(2001, 1, 1), tslib.iNaT,
+ datetime(2001, 1, 3)], index=index, dtype='M8[ns]')
+ assert_series_equal(result, expected)
+
+ data[1] = datetime(2001, 1, 2)
+ result = Series(data, index=index)
+ expected = Series([datetime(2001, 1, 1), datetime(2001, 1, 2),
+ datetime(2001, 1, 3)], index=index, dtype='M8[ns]')
+ assert_series_equal(result, expected)
+
def test_constructor_default_index(self):
s = Series([0, 1, 2])
assert_almost_equal(s.index, np.arange(3))
@@ -2922,7 +2981,7 @@ def test_convert_objects(self):
result = s.convert_objects(convert_dates=True,convert_numeric=False)
expected = Series([Timestamp('20010101'),Timestamp('20010102'),Timestamp('20010103')],dtype='M8[ns]')
assert_series_equal(expected,result)
-
+
result = s.convert_objects(convert_dates='coerce',convert_numeric=False)
assert_series_equal(expected,result)
result = s.convert_objects(convert_dates='coerce',convert_numeric=True)
@@ -3306,7 +3365,7 @@ def test_fillna_int(self):
s.fillna(method='ffill', inplace=True)
assert_series_equal(s.fillna(method='ffill', inplace=False), s)
-#-------------------------------------------------------------------------------
+#------------------------------------------------------------------------------
# TimeSeries-specific
def test_fillna(self):
@@ -3569,7 +3628,7 @@ def test_mpl_compat_hack(self):
expected = self.ts.values[:, np.newaxis]
assert_almost_equal(result, expected)
-#-------------------------------------------------------------------------------
+#------------------------------------------------------------------------------
# GroupBy
def test_select(self):
@@ -3582,7 +3641,7 @@ def test_select(self):
expected = self.ts[self.ts.weekday == 2]
assert_series_equal(result, expected)
-#----------------------------------------------------------------------
+#------------------------------------------------------------------------------
# Misc not safe for sparse
def test_dropna_preserve_name(self):
| Currently Series constructor is throwing exceptions or ignoring the mask for non-floating, non-object MaskedArray input:
``` python
In [1]: from numpy.ma import MaskedArray as ma
In [2]: from pandas import Series, DataFrame
In [3]: from datetime import datetime as dt
In [4]: mask = [True, False]
In [5]: Series(ma([2, 0], mask=mask, dtype=int))
(exception)
In [6]: Series(ma([True, False], mask=mask, dtype=bool))
Out[6]:
0 True
1 False
In [7]: Series(ma([dt(2006, 1, 1), dt(2007, 1, 1)],
...: mask=mask, dtype='M8[ns]'))
(exception)
```
After fix:
``` python
In [1]: from numpy.ma import MaskedArray as ma
In [2]: from pandas import Series, DataFrame
In [3]: from datetime import datetime as dt
In [4]: mask = [True, False]
In [5]: Series(ma([2, 0], mask=mask, dtype=int))
Out[5]:
0 NaN
1 0
Dtype: float64
In [6]: Series(ma([True, False], mask=mask, dtype=bool))
Out[6]:
0 NaN
1 False
Dtype: object
In [7]: Series(ma([dt(2006, 1, 1), dt(2007, 1, 1)],
...: mask=mask, dtype='M8[ns]'))
Out[7]:
0 NaT
1 2007-01-01 00:00:00
Dtype: datetime64[ns]
```
The upcasting will only be done if necessary (i.e. if the mask has any `True` elements):
``` python
In [8]: Series(ma([2, 3], mask=[False, False], dtype=int))
Out[8]:
0 2
1 3
Dtype: int32
In [9]: DataFrame(ma([2, 3], mask=[False, False], dtype=int))
Out[9]:
0
0 2
1 3
```
This latter is a minor change to the previous behavior of the DataFrame constructor, which was upcasting correctly but was doing it even if the mask was all `False`: I've changed it to only upcast when necessary because this is more consistent with how masks are treated elsewhere.
Also made some PEP8 line length fixes
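The only-upcast-when-needed rule described above is easy to check; a sketch of the behavior this PR describes (which still holds in later pandas for integer input):

```python
import numpy as np
import pandas as pd

# a masked element forces an upcast to float so NaN can represent it
s = pd.Series(np.ma.MaskedArray([2, 0], mask=[True, False], dtype=int))
assert s.dtype == np.float64
assert np.isnan(s.iloc[0]) and s.iloc[1] == 0

# an all-False mask leaves the integer dtype untouched
s2 = pd.Series(np.ma.MaskedArray([2, 3], mask=[False, False], dtype=int))
assert s2.dtype.kind == 'i'
```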
| https://api.github.com/repos/pandas-dev/pandas/pulls/2880 | 2013-02-16T04:56:33Z | 2013-02-23T19:51:31Z | 2013-02-23T19:51:31Z | 2013-02-23T19:51:31Z |
RLS: fixup RELEASE.rst links | diff --git a/RELEASE.rst b/RELEASE.rst
index b97b9443032a4..8ca4db0a9ffb4 100644
--- a/RELEASE.rst
+++ b/RELEASE.rst
@@ -247,7 +247,6 @@ pandas 0.10.1
.. _GH2698: https://github.com/pydata/pandas/issues/2698
.. _GH2699: https://github.com/pydata/pandas/issues/2699
.. _GH2700: https://github.com/pydata/pandas/issues/2700
-.. _GH2694: https://github.com/pydata/pandas/issues/2694
.. _GH2686: https://github.com/pydata/pandas/issues/2686
.. _GH2618: https://github.com/pydata/pandas/issues/2618
.. _GH2592: https://github.com/pydata/pandas/issues/2592
@@ -1389,7 +1388,7 @@ pandas 0.8.0
- Add ``order`` method to Index classes (GH1028_)
- Avoid hash table creation in large monotonic hash table indexes (GH1160_)
- Store time zones in HDFStore (GH1232_)
- - Enable storage of sparse data structures in HDFStore (#85)
+ - Enable storage of sparse data structures in HDFStore (GH85_)
- Enable Series.asof to work with arrays of timestamp inputs
- Cython implementation of DataFrame.corr speeds up by > 100x (GH1349_, GH1354_)
- Exclude "nuisance" columns automatically in GroupBy.transform (GH1364_)
@@ -1596,6 +1595,7 @@ pandas 0.8.0
.. _GH1513: https://github.com/pydata/pandas/issues/1513
.. _GH1533: https://github.com/pydata/pandas/issues/1533
.. _GH1547: https://github.com/pydata/pandas/issues/1547
+.. _GH85: https://github.com/pydata/pandas/issues/85
pandas 0.7.3
@@ -1719,7 +1719,6 @@ pandas 0.7.2
- Add additional tie-breaking methods in DataFrame.rank (GH874_)
- Add ascending parameter to rank in Series, DataFrame (GH875_)
- - Add coerce_float option to DataFrame.from_records (GH893_)
- Add sort_columns parameter to allow unsorted plots (GH918_)
- IPython tab completion on GroupBy objects
@@ -1741,7 +1740,7 @@ pandas 0.7.2
- Can select multiple hierarchical groups by passing list of values in .ix
(GH134_)
- Add level keyword to ``drop`` for dropping values from a level (GH159_)
- - Add ``coerce_float`` option on DataFrame.from_records (# 893)
+ - Add ``coerce_float`` option on DataFrame.from_records (GH893_)
- Raise exception if passed date_parser fails in ``read_csv``
- Add ``axis`` option to DataFrame.fillna (GH174_)
- Fixes to Panel to make it easier to subclass (GH888_)
@@ -1962,7 +1961,7 @@ pandas 0.7.0
exact matches for the labels are found or if the index is monotonic (for
range selections)
- Label-based slicing and sequences of labels can be passed to ``[]`` on a
- Series for both getting and setting (GH #86)
+ Series for both getting and setting (GH86_)
- `[]` operator (``__getitem__`` and ``__setitem__``) will raise KeyError
with integer indexes when an index is not contained in the index. The prior
behavior would fall back on position-based indexing if a key was not found
@@ -1994,7 +1993,7 @@ pandas 0.7.0
- Don't print length by default in Series.to_string, add `length` option (GH
GH489_)
- Improve Cython code for multi-groupby to aggregate without having to sort
- the data (GH #93)
+ the data (GH93_)
- Improve MultiIndex reindexing speed by storing tuples in the MultiIndex,
test for backwards unpickling compatibility
- Improve column reindexing performance by using specialized Cython take
@@ -2027,7 +2026,7 @@ pandas 0.7.0
- Improve DataFrame.to_string and console formatting to be more consistent in
the number of displayed digits (GH395_)
- Use bottleneck if available for performing NaN-friendly statistical
- operations that it implemented (GH #91)
+ operations that it implemented (GH91_)
- Monkey-patch context to traceback in ``DataFrame.apply`` to indicate which
row/column the function application failed on (GH614_)
- Improved ability of read_table and read_clipboard to parse
@@ -2133,7 +2132,7 @@ pandas 0.7.0
- Use right dropna function for SparseSeries. Return dense Series for NA fill
value (GH730_)
- Fix Index.format bug causing incorrectly string-formatted Series with
- datetime indexes (# 726, 758)
+ datetime indexes (GH726_, GH758_)
- Fix errors caused by object dtype arrays passed to ols (GH759_)
- Fix error where column names lost when passing list of labels to
DataFrame.__getitem__, (GH662_)
@@ -2303,6 +2302,10 @@ Thanks
.. _GH764: https://github.com/pydata/pandas/issues/764
.. _GH770: https://github.com/pydata/pandas/issues/770
.. _GH771: https://github.com/pydata/pandas/issues/771
+.. _GH758: https://github.com/pydata/pandas/issues/758
+.. _GH86: https://github.com/pydata/pandas/issues/86
+.. _GH91: https://github.com/pydata/pandas/issues/91
+.. _GH93: https://github.com/pydata/pandas/issues/93
pandas 0.6.1
@@ -2492,7 +2495,7 @@ pandas 0.6.0
- Implement logical (boolean) operators &, |, ^ on DataFrame (GH347_)
- Add `Series.mad`, mean absolute deviation, matching DataFrame
- Add `QuarterEnd` DateOffset (GH321_)
- - Add matrix multiplication function `dot` to DataFrame (GH #65)
+ - Add matrix multiplication function `dot` to DataFrame (GH65_)
- Add `orient` option to `Panel.from_dict` to ease creation of mixed-type
Panels (GH359_, GH301_)
- Add `DataFrame.from_dict` with similar `orient` option
@@ -2500,7 +2503,7 @@ pandas 0.6.0
for fast conversion to DataFrame (GH357_)
- Can pass multiple levels to groupby, e.g. `df.groupby(level=[0, 1])` (GH
GH103_)
- - Can sort by multiple columns in `DataFrame.sort_index` (GH #92, GH362_)
+ - Can sort by multiple columns in `DataFrame.sort_index` (GH92_, GH362_)
- Add fast `get_value` and `put_value` methods to DataFrame and
micro-performance tweaks (GH360_)
- Add `cov` instance methods to Series and DataFrame (GH194_, GH362_)
@@ -2705,6 +2708,8 @@ Thanks
.. _GH405: https://github.com/pydata/pandas/issues/405
.. _GH408: https://github.com/pydata/pandas/issues/408
.. _GH416: https://github.com/pydata/pandas/issues/416
+.. _GH65: https://github.com/pydata/pandas/issues/65
+.. _GH92: https://github.com/pydata/pandas/issues/92
pandas 0.5.0
| just minor fixup of links
| https://api.github.com/repos/pandas-dev/pandas/pulls/2879 | 2013-02-15T23:40:35Z | 2013-02-16T21:07:36Z | 2013-02-16T21:07:36Z | 2013-02-16T21:07:36Z |
BUG: issue in get_dtype_counts for SparseDataFrame (introduced by dtypes) | diff --git a/pandas/sparse/frame.py b/pandas/sparse/frame.py
index cfbe5ea2ee0ed..bf978c322dbd2 100644
--- a/pandas/sparse/frame.py
+++ b/pandas/sparse/frame.py
@@ -42,6 +42,11 @@ def shape(self):
def axes(self):
return [self.sp_frame.columns, self.sp_frame.index]
+ @property
+ def blocks(self):
+ """ return our series in the column order """
+ s = self.sp_frame._series
+ return [ self.iget(i) for i in self.sp_frame.columns ]
class SparseDataFrame(DataFrame):
"""
@@ -235,6 +240,13 @@ def to_dense(self):
data = dict((k, v.to_dense()) for k, v in self.iteritems())
return DataFrame(data, index=self.index)
+ def get_dtype_counts(self):
+ from collections import defaultdict
+ d = defaultdict(int)
+ for k, v in self.iteritems():
+ d[v.dtype.name] += 1
+ return Series(d)
+
def astype(self, dtype):
raise NotImplementedError
diff --git a/pandas/sparse/tests/test_sparse.py b/pandas/sparse/tests/test_sparse.py
index 1202649afed84..af9112b8297be 100644
--- a/pandas/sparse/tests/test_sparse.py
+++ b/pandas/sparse/tests/test_sparse.py
@@ -821,6 +821,39 @@ def test_constructor_convert_index_once(self):
sdf = SparseDataFrame(columns=range(4), index=arr)
self.assertTrue(sdf[0].index is sdf[1].index)
+ def test_constructor_from_series(self):
+
+ # GH 2873
+ x = Series(np.random.randn(10000), name='a')
+ x = x.to_sparse(fill_value=0)
+ self.assert_(isinstance(x,SparseSeries))
+ df = SparseDataFrame(x)
+ self.assert_(isinstance(df,SparseDataFrame))
+
+ x = Series(np.random.randn(10000), name ='a')
+ y = Series(np.random.randn(10000), name ='b')
+ x.ix[:9998] = 0
+ x = x.to_sparse(fill_value=0)
+
+ # currently fails
+ #df1 = SparseDataFrame([x, y])
+
+ def test_dtypes(self):
+ df = DataFrame(np.random.randn(10000, 4))
+ df.ix[:9998] = np.nan
+ sdf = df.to_sparse()
+
+ result = sdf.get_dtype_counts()
+ expected = Series({ 'float64' : 4 })
+ assert_series_equal(result, expected)
+
+ def test_str(self):
+ df = DataFrame(np.random.randn(10000, 4))
+ df.ix[:9998] = np.nan
+ sdf = df.to_sparse()
+
+ str(sdf)
+
def test_array_interface(self):
res = np.sqrt(self.frame)
dres = np.sqrt(self.frame.to_dense())
| - added tests for GH #2873 (commented out)
| https://api.github.com/repos/pandas-dev/pandas/pulls/2875 | 2013-02-14T20:14:26Z | 2013-02-15T03:40:22Z | 2013-02-15T03:40:22Z | 2014-06-26T17:29:31Z |
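`SparseDataFrame` and `get_dtype_counts` were removed in later pandas versions, so the patched method cannot be run as-is today; the counting logic it adds can, however, be sketched against a regular DataFrame (a minimal stand-in, not the original API):

```python
from collections import defaultdict

import numpy as np
import pandas as pd

# a small frame with two distinct column dtypes
df = pd.DataFrame({
    "a": pd.Series([1.0, np.nan, 3.0]),        # float64
    "b": pd.Series([1.0, 2.0, 3.0]),           # float64
    "c": pd.Series([1, 2, 3], dtype="int64"),  # int64
})

# equivalent of the patched get_dtype_counts: tally dtype names per column
counts = defaultdict(int)
for _, col in df.items():
    counts[col.dtype.name] += 1

result = pd.Series(counts)
# result maps dtype name -> column count, e.g. float64 -> 2, int64 -> 1
```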
ENH: implement Block splitting to avoid upcasts where possible (GH #2794) | diff --git a/pandas/core/frame.py b/pandas/core/frame.py
index ecf2f8ba482f6..b32bb28f512b1 100644
--- a/pandas/core/frame.py
+++ b/pandas/core/frame.py
@@ -3714,14 +3714,14 @@ def _combine_match_columns(self, other, func, fill_value=None):
if fill_value is not None:
raise NotImplementedError
- new_data = left._data.where(func, right, axes = [left.columns, self.index])
+ new_data = left._data.eval(func, right, axes = [left.columns, self.index])
return self._constructor(new_data)
def _combine_const(self, other, func, raise_on_error = True):
if self.empty:
return self
- new_data = self._data.where(func, other, raise_on_error=raise_on_error)
+ new_data = self._data.eval(func, other, raise_on_error=raise_on_error)
return self._constructor(new_data)
def _compare_frame(self, other, func):
@@ -5293,8 +5293,7 @@ def where(self, cond, other=NA, inplace=False, try_cast=False, raise_on_error=Tr
self._data = self._data.putmask(cond,other,inplace=True)
else:
- func = lambda values, others, conds: np.where(conds, values, others)
- new_data = self._data.where(func, other, cond, raise_on_error=raise_on_error, try_cast=try_cast)
+ new_data = self._data.where(other, cond, raise_on_error=raise_on_error, try_cast=try_cast)
return self._constructor(new_data)
diff --git a/pandas/core/internals.py b/pandas/core/internals.py
index ee024ce68b5b4..bdcbca7086681 100644
--- a/pandas/core/internals.py
+++ b/pandas/core/internals.py
@@ -304,14 +304,15 @@ def putmask(self, mask, new, inplace=False):
if self._can_hold_element(new):
new = self._try_cast(new)
np.putmask(new_values, mask, new)
- # upcast me
- else:
+
+ # maybe upcast me
+ elif mask.any():
# type of the new block
if ((isinstance(new, np.ndarray) and issubclass(new.dtype, np.number)) or
isinstance(new, float)):
- typ = float
+ typ = np.float64
else:
- typ = object
+ typ = np.object_
# we need to exiplicty astype here to make a copy
new_values = new_values.astype(typ)
@@ -384,17 +385,16 @@ def shift(self, indexer, periods):
new_values[:, periods:] = np.nan
return make_block(new_values, self.items, self.ref_items)
- def where(self, func, other, cond = None, raise_on_error = True, try_cast = False):
+ def eval(self, func, other, raise_on_error = True, try_cast = False):
"""
- evaluate the block; return result block(s) from the result
+ evaluate the block; return result block from the result
Parameters
----------
func : how to combine self, other
other : a ndarray/object
- cond : the condition to respect, optional
- raise_on_error : if True, raise when I can't perform the function,
- False by default (and just return the data that we had coming in)
+ raise_on_error : if True, raise when I can't perform the function, False by default (and just return
+ the data that we had coming in)
Returns
-------
@@ -414,28 +414,7 @@ def where(self, func, other, cond = None, raise_on_error = True, try_cast = Fals
values = values.T
is_transposed = True
- # see if we can align cond
- if cond is not None:
- if not hasattr(cond, 'shape'):
- raise ValueError('where must have a condition that is ndarray'
- ' like')
- if hasattr(cond, 'reindex_axis'):
- axis = getattr(cond, '_het_axis', 0)
- cond = cond.reindex_axis(self.items, axis=axis,
- copy=True).values
- else:
- cond = cond.values
-
- # may need to undo transpose of values
- if hasattr(values, 'ndim'):
- if (values.ndim != cond.ndim or
- values.shape == cond.shape[::-1]):
- values = values.T
- is_transposed = not is_transposed
-
args = [ values, other ]
- if cond is not None:
- args.append(cond)
try:
result = func(*args)
except:
@@ -458,7 +437,106 @@ def where(self, func, other, cond = None, raise_on_error = True, try_cast = Fals
if try_cast:
result = self._try_cast_result(result)
- return [ make_block(result, self.items, self.ref_items) ]
+ return make_block(result, self.items, self.ref_items)
+
+ def where(self, other, cond, raise_on_error = True, try_cast = False):
+ """
+ evaluate the block; return result block(s) from the result
+
+ Parameters
+ ----------
+ other : a ndarray/object
+ cond : the condition to respect
+ raise_on_error : if True, raise when I can't perform the function, False by default (and just return
+ the data that we had coming in)
+
+ Returns
+ -------
+ a new block(s), the result of the func
+ """
+
+ values = self.values
+
+ # see if we can align other
+ if hasattr(other,'reindex_axis'):
+ axis = getattr(other,'_het_axis',0)
+ other = other.reindex_axis(self.items, axis=axis, copy=True).values
+
+ # make sure that we can broadcast
+ is_transposed = False
+ if hasattr(other, 'ndim') and hasattr(values, 'ndim'):
+ if values.ndim != other.ndim or values.shape == other.shape[::-1]:
+ values = values.T
+ is_transposed = True
+
+ # see if we can align cond
+ if not hasattr(cond,'shape'):
+ raise ValueError("where must have a condition that is ndarray like")
+ if hasattr(cond,'reindex_axis'):
+ axis = getattr(cond,'_het_axis',0)
+ cond = cond.reindex_axis(self.items, axis=axis, copy=True).values
+ else:
+ cond = cond.values
+
+ # may need to undo transpose of values
+ if hasattr(values, 'ndim'):
+ if values.ndim != cond.ndim or values.shape == cond.shape[::-1]:
+ values = values.T
+ is_transposed = not is_transposed
+
+ # our where function
+ def func(c,v,o):
+ if c.flatten().all():
+ return v
+
+ try:
+ return np.where(c,v,o)
+ except:
+ if raise_on_error:
+ raise TypeError('Coulnd not operate %s with block values'
+ % repr(o))
+ else:
+ # return the values
+ result = np.empty(v.shape,dtype='float64')
+ result.fill(np.nan)
+ return result
+
+ def create_block(result, items, transpose = True):
+ if not isinstance(result, np.ndarray):
+ raise TypeError('Could not compare %s with block values'
+ % repr(other))
+
+ if transpose and is_transposed:
+ result = result.T
+
+ # try to cast if requested
+ if try_cast:
+ result = self._try_cast_result(result)
+
+ return make_block(result, items, self.ref_items)
+
+ # see if we can operate on the entire block, or need item-by-item
+ if not self._can_hold_na:
+ axis = cond.ndim-1
+ result_blocks = []
+ for item in self.items:
+ loc = self.items.get_loc(item)
+ item = self.items.take([loc])
+ v = values.take([loc],axis=axis)
+ c = cond.take([loc],axis=axis)
+ o = other.take([loc],axis=axis) if hasattr(other,'shape') else other
+
+ result = func(c,v,o)
+ if len(result) == 1:
+ result = np.repeat(result,self.shape[1:])
+
+ result = result.reshape(((1,) + self.shape[1:]))
+ result_blocks.append(create_block(result, item, transpose = False))
+
+ return result_blocks
+ else:
+ result = func(cond,values,other)
+ return create_block(result, self.items)
def _mask_missing(array, missing_values):
if not isinstance(missing_values, (list, np.ndarray)):
@@ -840,6 +918,9 @@ def apply(self, f, *args, **kwargs):
def where(self, *args, **kwargs):
return self.apply('where', *args, **kwargs)
+ def eval(self, *args, **kwargs):
+ return self.apply('eval', *args, **kwargs)
+
def putmask(self, *args, **kwargs):
return self.apply('putmask', *args, **kwargs)
diff --git a/pandas/tests/test_frame.py b/pandas/tests/test_frame.py
index c628bf3f0df97..d249e0f240a82 100644
--- a/pandas/tests/test_frame.py
+++ b/pandas/tests/test_frame.py
@@ -244,20 +244,26 @@ def test_getitem_boolean(self):
def test_getitem_boolean_casting(self):
- #### this currently disabled ###
-
# don't upcast if we don't need to
df = self.tsframe.copy()
df['E'] = 1
df['E'] = df['E'].astype('int32')
+ df['E1'] = df['E'].copy()
df['F'] = 1
df['F'] = df['F'].astype('int64')
+ df['F1'] = df['F'].copy()
+
casted = df[df>0]
result = casted.get_dtype_counts()
- #expected = Series({'float64': 4, 'int32' : 1, 'int64' : 1})
- expected = Series({'float64': 6 })
+ expected = Series({'float64': 4, 'int32' : 2, 'int64' : 2})
assert_series_equal(result, expected)
+ # int block splitting
+ df.ix[1:3,['E1','F1']] = 0
+ casted = df[df>0]
+ result = casted.get_dtype_counts()
+ expected = Series({'float64': 6, 'int32' : 1, 'int64' : 1})
+ assert_series_equal(result, expected)
def test_getitem_boolean_list(self):
df = DataFrame(np.arange(12).reshape(3, 4))
@@ -5997,6 +6003,19 @@ def _check_get(df, cond, check_dtypes = True):
cond = df > 0
_check_get(df, cond)
+
+ # upcasting case (GH # 2794)
+ df = DataFrame(dict([ (c,Series([1]*3,dtype=c)) for c in ['int64','int32','float32','float64'] ]))
+ df.ix[1,:] = 0
+
+ result = df.where(df>=0).get_dtype_counts()
+
+ #### when we don't preserver boolean casts ####
+ #expected = Series({ 'float32' : 1, 'float64' : 3 })
+
+ expected = Series({ 'float32' : 1, 'float64' : 1, 'int32' : 1, 'int64' : 1 })
+ assert_series_equal(result, expected)
+
# aligning
def _check_align(df, cond, other, check_dtypes = True):
rs = df.where(cond, other)
@@ -6013,10 +6032,12 @@ def _check_align(df, cond, other, check_dtypes = True):
else:
o = other[k].values
- assert_series_equal(v, Series(np.where(c, d, o),index=v.index))
-
+ new_values = d if c.all() else np.where(c, d, o)
+ assert_series_equal(v, Series(new_values,index=v.index))
+
# dtypes
# can't check dtype when other is an ndarray
+
if check_dtypes and not isinstance(other,np.ndarray):
self.assert_((rs.dtypes == df.dtypes).all() == True)
@@ -6052,13 +6073,15 @@ def _check_set(df, cond, check_dtypes = True):
dfi = df.copy()
econd = cond.reindex_like(df).fillna(True)
expected = dfi.mask(~econd)
+
+ #import pdb; pdb.set_trace()
dfi.where(cond, np.nan, inplace=True)
assert_frame_equal(dfi, expected)
# dtypes (and confirm upcasts)x
if check_dtypes:
for k, v in df.dtypes.iteritems():
- if issubclass(v.type,np.integer):
+ if issubclass(v.type,np.integer) and not cond[k].all():
v = np.dtype('float64')
self.assert_(dfi[k].dtype == v)
 | - provides an implementation of IntBlock splitting that can avoid upcasting these blocks (to float) when a boolean condition does not affect a given column (closes GH #2794)
- bugfix in internals.Block.putmask: it previously always upcast integer blocks, even when the mask selected nothing, and upcast to `float`/`object` rather than `np.float64`/`np.object_`
```
In [6]: df = pd.DataFrame(1,columns=['a','b'],index=range(3))
In [7]: df.ix[1,'b'] = 0
In [8]: df
Out[8]:
a b
0 1 1
1 1 0
2 1 1
In [9]: df.dtypes
Out[9]:
a int64
b int64
dtype: object
In [10]: df[df>0].dtypes
Out[10]:
a int64
b float64
dtype: object
In [11]: df[df>0]
Out[11]:
a b
0 1 1
1 1 NaN
2 1 1
```
| https://api.github.com/repos/pandas-dev/pandas/pulls/2871 | 2013-02-14T03:35:34Z | 2013-02-15T00:08:23Z | 2013-02-15T00:08:23Z | 2014-06-12T07:40:13Z |
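The per-column behavior this PR introduced—upcasting only the columns that the boolean mask actually touches—carried through to modern pandas, so the session from the body can be checked directly (a sketch assuming a current pandas install; `.ix` is spelled `.loc` today):

```python
import numpy as np
import pandas as pd

df = pd.DataFrame(1, columns=["a", "b"], index=range(3))
df.loc[1, "b"] = 0  # .ix was removed; .loc is the modern equivalent

masked = df[df > 0]

# column "a" satisfies the condition everywhere, so its integer block
# needs no NaN fill and keeps its dtype; column "b" gains a NaN at
# row 1 and is upcast to float64
print(masked.dtypes)
```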
ENH: Consolidation and further optimization of take functions in common | diff --git a/RELEASE.rst b/RELEASE.rst
index 25350555317bd..b97b9443032a4 100644
--- a/RELEASE.rst
+++ b/RELEASE.rst
@@ -32,9 +32,9 @@ pandas 0.11.0
- Allow mixed dtypes (e.g ``float32/float64/int32/int16/int8``) to coexist in
DataFrames and propogate in operations
- Add function to pandas.io.data for retrieving stock index components from
- Yahoo! finance (#2795)
+ Yahoo! finance (GH2795_)
- Add ``squeeze`` function to reduce dimensionality of 1-len objects
- - Support slicing with time objects (#2681)
+ - Support slicing with time objects (GH2681_)
**Improvements to existing features**
@@ -46,7 +46,7 @@ pandas 0.11.0
return a datetime64[ns] dtype with non-convertibles set as ``NaT``; will
preserve an all-nan object (e.g. strings)
- Series print output now includes the dtype by default
- - Optimize internal reindexing routines for upcasting cases (#2819)
+ - Optimize internal reindexing routines (GH2819_, GH2867_)
- ``describe_option()`` now reports the default and current value of options.
**API Changes**
@@ -62,7 +62,8 @@ pandas 0.11.0
(float32/float64); other types will be operated on, and will try to cast
back to the input dtype (e.g. if an int is passed, as long as the output
doesn't have nans, then an int will be returned)
- - backfill/pad/take/diff/ohlc will now support ``float32/int16/int8`` operations
+ - backfill/pad/take/diff/ohlc will now support ``float32/int16/int8``
+ operations
- Integer block types will upcast as needed in where operations (GH2793_)
- Series now automatically will try to set the correct dtype based on passed
datetimelike objects (datetime/Timestamp)
@@ -83,22 +84,31 @@ pandas 0.11.0
(e.g. np.array(datetime(2001,1,1,0,0))), w/o dtype being passed
- 0-dim ndarrays with a passed dtype are handled correctly
(e.g. np.array(0.,dtype='float32'))
- - Fix some boolean indexing inconsistencies in Series __getitem__/__setitem__
+ - Fix some boolean indexing inconsistencies in Series.__getitem__/__setitem__
(GH2776_)
+ - Fix issues with DataFrame and Series constructor with integers that
+ overflow ``int64`` and some mixed typed type lists (GH2845_)
``HDFStore``
- Fix weird PyTables error when using too many selectors in a where
- - Provide dotted attribute access to ``get`` from stores (e.g. store.df == store['df'])
- - Internally, change all variables to be private-like (now have leading underscore)
+ - Provide dotted attribute access to ``get`` from stores
+ (e.g. store.df == store['df'])
+ - Internally, change all variables to be private-like (now have leading
+ underscore)
.. _GH622: https://github.com/pydata/pandas/issues/622
.. _GH797: https://github.com/pydata/pandas/issues/797
-.. _GH2778: https://github.com/pydata/pandas/issues/2778
-.. _GH2793: https://github.com/pydata/pandas/issues/2793
-.. _GH2751: https://github.com/pydata/pandas/issues/2751
+.. _GH2681: https://github.com/pydata/pandas/issues/2681
.. _GH2747: https://github.com/pydata/pandas/issues/2747
+.. _GH2751: https://github.com/pydata/pandas/issues/2751
.. _GH2776: https://github.com/pydata/pandas/issues/2776
+.. _GH2778: https://github.com/pydata/pandas/issues/2778
+.. _GH2793: https://github.com/pydata/pandas/issues/2793
+.. _GH2795: https://github.com/pydata/pandas/issues/2795
+.. _GH2819: https://github.com/pydata/pandas/issues/2819
+.. _GH2845: https://github.com/pydata/pandas/issues/2845
+.. _GH2867: https://github.com/pydata/pandas/issues/2867
pandas 0.10.1
=============
@@ -107,7 +117,7 @@ pandas 0.10.1
**New features**
- - Add data inferface to World Bank WDI pandas.io.wb (#2592)
+ - Add data inferface to World Bank WDI pandas.io.wb (GH2592_)
**API Changes**
@@ -125,7 +135,8 @@ pandas 0.10.1
- ``HDFStore``
- enables storing of multi-index dataframes (closes GH1277_)
- - support data column indexing and selection, via ``data_columns`` keyword in append
+ - support data column indexing and selection, via ``data_columns`` keyword
+ in append
- support write chunking to reduce memory footprint, via ``chunksize``
keyword to append
- support automagic indexing via ``index`` keyword to append
@@ -138,11 +149,14 @@ pandas 0.10.1
- added methods append_to_multiple/select_as_multiple/select_as_coordinates
to do multiple-table append/selection
- added support for datetime64 in columns
- - added method ``unique`` to select the unique values in an indexable or data column
+ - added method ``unique`` to select the unique values in an indexable or
+ data column
- added method ``copy`` to copy an existing store (and possibly upgrade)
- - show the shape of the data on disk for non-table stores when printing the store
- - added ability to read PyTables flavor tables (allows compatiblity to other HDF5 systems)
- - Add ``logx`` option to DataFrame/Series.plot (GH2327_, #2565)
+ - show the shape of the data on disk for non-table stores when printing the
+ store
+ - added ability to read PyTables flavor tables (allows compatiblity to
+ other HDF5 systems)
+ - Add ``logx`` option to DataFrame/Series.plot (GH2327_, GH2565_)
- Support reading gzipped data from file-like object
- ``pivot_table`` aggfunc can be anything used in GroupBy.aggregate (GH2643_)
- Implement DataFrame merges in case where set cardinalities might overflow
@@ -173,11 +187,12 @@ pandas 0.10.1
- Fix DatetimeIndex handling of FixedOffset tz (GH2604_)
- More robust detection of being in IPython session for wide DataFrame
console formatting (GH2585_)
- - Fix platform issues with ``file:///`` in unit test (#2564)
+ - Fix platform issues with ``file:///`` in unit test (GH2564_)
- Fix bug and possible segfault when grouping by hierarchical level that
contains NA values (GH2616_)
- - Ensure that MultiIndex tuples can be constructed with NAs (seen in #2616)
- - Fix int64 overflow issue when unstacking MultiIndex with many levels (#2616)
+ - Ensure that MultiIndex tuples can be constructed with NAs (GH2616_)
+ - Fix int64 overflow issue when unstacking MultiIndex with many levels
+ (GH2616_)
- Exclude non-numeric data from DataFrame.quantile by default (GH2625_)
- Fix a Cython C int64 boxing issue causing read_csv to return incorrect
results (GH2599_)
@@ -188,12 +203,14 @@ pandas 0.10.1
- Fix C parser-tokenizer bug with trailing fields. (GH2668_)
- Don't exclude non-numeric data from GroupBy.max/min (GH2700_)
- Don't lose time zone when calling DatetimeIndex.drop (GH2621_)
- - Fix setitem on a Series with a boolean key and a non-scalar as value (GH2686_)
+ - Fix setitem on a Series with a boolean key and a non-scalar as value
+ (GH2686_)
- Box datetime64 values in Series.apply/map (GH2627_, GH2689_)
- Upconvert datetime + datetime64 values when concatenating frames (GH2624_)
- Raise a more helpful error message in merge operations when one DataFrame
has duplicate columns (GH2649_)
- - Fix partial date parsing issue occuring only when code is run at EOM (GH2618_)
+ - Fix partial date parsing issue occuring only when code is run at EOM
+ (GH2618_)
- Prevent MemoryError when using counting sort in sortlevel with
high-cardinality MultiIndex objects (GH2684_)
- Fix Period resampling bug when all values fall into a single bin (GH2070_)
@@ -204,6 +221,7 @@ pandas 0.10.1
.. _GH1277: https://github.com/pydata/pandas/issues/1277
.. _GH2070: https://github.com/pydata/pandas/issues/2070
.. _GH2327: https://github.com/pydata/pandas/issues/2327
+.. _GH2565: https://github.com/pydata/pandas/issues/2565
.. _GH2585: https://github.com/pydata/pandas/issues/2585
.. _GH2599: https://github.com/pydata/pandas/issues/2599
.. _GH2604: https://github.com/pydata/pandas/issues/2604
@@ -232,6 +250,9 @@ pandas 0.10.1
.. _GH2694: https://github.com/pydata/pandas/issues/2694
.. _GH2686: https://github.com/pydata/pandas/issues/2686
.. _GH2618: https://github.com/pydata/pandas/issues/2618
+.. _GH2592: https://github.com/pydata/pandas/issues/2592
+.. _GH2564: https://github.com/pydata/pandas/issues/2564
+.. _GH2616: https://github.com/pydata/pandas/issues/2616
pandas 0.10.0
=============
@@ -708,7 +729,7 @@ pandas 0.9.1
.. _GH2117: https://github.com/pydata/pandas/issues/2117
.. _GH2133: https://github.com/pydata/pandas/issues/2133
.. _GH2114: https://github.com/pydata/pandas/issues/2114
-.. _GH2527: https://github.com/pydata/pandas/issues/2114
+.. _GH2527: https://github.com/pydata/pandas/issues/2527
.. _GH2128: https://github.com/pydata/pandas/issues/2128
.. _GH2008: https://github.com/pydata/pandas/issues/2008
.. _GH2179: https://github.com/pydata/pandas/issues/2179
diff --git a/pandas/core/common.py b/pandas/core/common.py
index 98a92072fe608..4f2852f42f985 100644
--- a/pandas/core/common.py
+++ b/pandas/core/common.py
@@ -247,7 +247,7 @@ def _unpickle_array(bytes):
return arr
-def _view_wrapper(f, arr_dtype, out_dtype, fill_wrap=None):
+def _view_wrapper(f, arr_dtype=None, out_dtype=None, fill_wrap=None):
def wrapper(arr, indexer, out, fill_value=np.nan):
if arr_dtype is not None:
arr = arr.view(arr_dtype)
@@ -259,17 +259,6 @@ def wrapper(arr, indexer, out, fill_value=np.nan):
return wrapper
-def _datetime64_fill_wrap(fill_value):
- if isnull(fill_value):
- return tslib.iNaT
- try:
- return lib.Timestamp(fill_value).value
- except:
- # the proper thing to do here would probably be to upcast to object
- # (but numpy 1.6.1 doesn't do this properly)
- return tslib.iNaT
-
-
def _convert_wrapper(f, conv_dtype):
def wrapper(arr, indexer, out, fill_value=np.nan):
arr = arr.astype(conv_dtype)
@@ -277,18 +266,21 @@ def wrapper(arr, indexer, out, fill_value=np.nan):
return wrapper
-def _take_2d_multi_generic(arr, indexer, out, fill_value=np.nan):
- # this is not ideal, performance-wise, but it's better than
- # raising an exception
- if arr.shape[0] == 0 or arr.shape[1] == 0:
- return
+def _take_2d_multi_generic(arr, indexer, out, fill_value, mask_info):
+ # this is not ideal, performance-wise, but it's better than raising
+ # an exception (best to optimize in Cython to avoid getting here)
row_idx, col_idx = indexer
- row_mask = row_idx == -1
- col_mask = col_idx == -1
+ if mask_info is not None:
+ (row_mask, col_mask), (row_needs, col_needs) = mask_info
+ else:
+ row_mask = row_idx == -1
+ col_mask = col_idx == -1
+ row_needs = row_mask.any()
+ col_needs = col_mask.any()
if fill_value is not None:
- if row_mask.any():
+ if row_needs:
out[row_mask, :] = fill_value
- if col_mask.any():
+ if col_needs:
out[:, col_mask] = fill_value
for i in range(len(row_idx)):
u = row_idx[i]
@@ -297,14 +289,16 @@ def _take_2d_multi_generic(arr, indexer, out, fill_value=np.nan):
out[i, j] = arr[u, v]
-def _take_nd_generic(arr, indexer, out, axis=0, fill_value=np.nan):
- if arr.shape[axis] == 0:
- return
- mask = indexer == -1
- needs_masking = mask.any()
+def _take_nd_generic(arr, indexer, out, axis, fill_value, mask_info):
+ if mask_info is not None:
+ mask, needs_masking = mask_info
+ else:
+ mask = indexer == -1
+ needs_masking = mask.any()
if arr.dtype != out.dtype:
arr = arr.astype(out.dtype)
- ndtake(arr, indexer, axis=axis, out=out)
+ if arr.shape[axis] > 0:
+ arr.take(_ensure_platform_int(indexer), axis=axis, out=out)
if needs_masking:
outindexer = [slice(None)] * arr.ndim
outindexer[axis] = mask
@@ -334,8 +328,7 @@ def _take_nd_generic(arr, indexer, out, axis=0, fill_value=np.nan):
('bool', 'object'):
_view_wrapper(algos.take_1d_bool_object, np.uint8, None),
('datetime64[ns]','datetime64[ns]'):
- _view_wrapper(algos.take_1d_int64_int64, np.int64, np.int64,
- fill_wrap=_datetime64_fill_wrap)
+ _view_wrapper(algos.take_1d_int64_int64, np.int64, np.int64, np.int64)
}
@@ -363,7 +356,7 @@ def _take_nd_generic(arr, indexer, out, axis=0, fill_value=np.nan):
_view_wrapper(algos.take_2d_axis0_bool_object, np.uint8, None),
('datetime64[ns]','datetime64[ns]'):
_view_wrapper(algos.take_2d_axis0_int64_int64, np.int64, np.int64,
- fill_wrap=_datetime64_fill_wrap)
+ fill_wrap=np.int64)
}
@@ -391,7 +384,7 @@ def _take_nd_generic(arr, indexer, out, axis=0, fill_value=np.nan):
_view_wrapper(algos.take_2d_axis1_bool_object, np.uint8, None),
('datetime64[ns]','datetime64[ns]'):
_view_wrapper(algos.take_2d_axis1_int64_int64, np.int64, np.int64,
- fill_wrap=_datetime64_fill_wrap)
+ fill_wrap=np.int64)
}
@@ -419,159 +412,187 @@ def _take_nd_generic(arr, indexer, out, axis=0, fill_value=np.nan):
_view_wrapper(algos.take_2d_multi_bool_object, np.uint8, None),
('datetime64[ns]','datetime64[ns]'):
_view_wrapper(algos.take_2d_multi_int64_int64, np.int64, np.int64,
- fill_wrap=_datetime64_fill_wrap)
+ fill_wrap=np.int64)
}
-def _get_take_1d_function(dtype, out_dtype):
- try:
- return _take_1d_dict[dtype.name, out_dtype.name]
- except KeyError:
- pass
-
- if dtype != out_dtype:
- try:
- func = _take_1d_dict[out_dtype.name, out_dtype.name]
- return _convert_wrapper(func, out_dtype)
- except KeyError:
- pass
-
- def wrapper(arr, indexer, out, fill_value=np.nan):
- return _take_nd_generic(arr, indexer, out, axis=0,
- fill_value=fill_value)
- return wrapper
-
-
-def _get_take_2d_function(dtype, out_dtype, axis=0):
- try:
- if axis == 0:
- return _take_2d_axis0_dict[dtype.name, out_dtype.name]
- elif axis == 1:
- return _take_2d_axis1_dict[dtype.name, out_dtype.name]
- elif axis == 'multi':
- return _take_2d_multi_dict[dtype.name, out_dtype.name]
- else: # pragma: no cover
- raise ValueError('bad axis: %s' % axis)
- except KeyError:
- pass
-
- if dtype != out_dtype:
- try:
+def _get_take_nd_function(ndim, arr_dtype, out_dtype, axis=0, mask_info=None):
+ if ndim <= 2:
+ tup = (arr_dtype.name, out_dtype.name)
+ if ndim == 1:
+ func = _take_1d_dict.get(tup, None)
+ elif ndim == 2:
if axis == 0:
- func = _take_2d_axis0_dict[out_dtype.name, out_dtype.name]
- elif axis == 1:
- func = _take_2d_axis1_dict[out_dtype.name, out_dtype.name]
+ func = _take_2d_axis0_dict.get(tup, None)
else:
- func = _take_2d_multi_dict[out_dtype.name, out_dtype.name]
- return _convert_wrapper(func, out_dtype)
- except KeyError:
- pass
-
- if axis == 'multi':
- return _take_2d_multi_generic
-
- def wrapper(arr, indexer, out, fill_value=np.nan):
- return _take_nd_generic(arr, indexer, out, axis=axis,
- fill_value=fill_value)
- return wrapper
-
-
-def _get_take_nd_function(ndim, dtype, out_dtype, axis=0):
- if ndim == 2:
- return _get_take_2d_function(dtype, out_dtype, axis=axis)
- elif ndim == 1:
- if axis != 0:
- raise ValueError('axis must be 0 for one dimensional array')
- return _get_take_1d_function(dtype, out_dtype)
- elif ndim <= 0:
- raise ValueError('ndim must be >= 1')
-
- def wrapper(arr, indexer, out, fill_value=np.nan):
- return _take_nd_generic(arr, indexer, out, axis=axis,
- fill_value=fill_value)
- if (dtype.name, out_dtype.name) == ('datetime64[ns]','datetime64[ns]'):
- wrapper = _view_wrapper(wrapper, np.int64, np.int64,
- fill_wrap=_datetime64_fill_wrap)
- return wrapper
+ func = _take_2d_axis1_dict.get(tup, None)
+ if func is not None:
+ return func
+
+ tup = (out_dtype.name, out_dtype.name)
+ if ndim == 1:
+ func = _take_1d_dict.get(tup, None)
+ elif ndim == 2:
+ if axis == 0:
+ func = _take_2d_axis0_dict.get(tup, None)
+ else:
+ func = _take_2d_axis1_dict.get(tup, None)
+ if func is not None:
+ func = _convert_wrapper(func, out_dtype)
+ return func
+
+ def func(arr, indexer, out, fill_value=np.nan):
+ _take_nd_generic(arr, indexer, out, axis=axis,
+ fill_value=fill_value, mask_info=mask_info)
+ return func
-def take_1d(arr, indexer, out=None, fill_value=np.nan):
+def take_nd(arr, indexer, axis=0, out=None, fill_value=np.nan,
+ mask_info=None, allow_fill=True):
"""
Specialized Cython take which sets NaN values in one pass
+
+ Parameters
+ ----------
+ arr : ndarray
+ Input array
+ indexer : ndarray
+ 1-D array of indices to take, subarrays corresponding to -1 value
+ indicies are filed with fill_value
+ axis : int, default 0
+ Axis to take from
+ out : ndarray or None, default None
+ Optional output array, must be appropriate type to hold input and
+ fill_value together, if indexer has any -1 value entries; call
+ common._maybe_promote to determine this type for any fill_value
+ fill_value : any, default np.nan
+ Fill value to replace -1 values with
+ mask_info : tuple of (ndarray, boolean)
+ If provided, value should correspond to:
+ (indexer != -1, (indexer != -1).any())
+ If not provided, it will be computed internally if necessary
+ allow_fill : boolean, default True
+ If False, indexer is assumed to contain no -1 values so no filling
+ will be done. This short-circuits computation of a mask. Result is
+ undefined if allow_fill == False and -1 is present in indexer.
"""
if indexer is None:
- indexer = np.arange(len(arr), dtype=np.int64)
+ indexer = np.arange(arr.shape[axis], dtype=np.int64)
dtype, fill_value = arr.dtype, arr.dtype.type()
else:
indexer = _ensure_int64(indexer)
- dtype = _maybe_promote(arr.dtype, fill_value)[0]
- if dtype != arr.dtype:
- mask = indexer == -1
- needs_masking = mask.any()
- if needs_masking:
- if out is not None and out.dtype != dtype:
- raise Exception('Incompatible type for fill_value')
- else:
- dtype, fill_value = arr.dtype, arr.dtype.type()
-
+ if not allow_fill:
+ dtype, fill_value = arr.dtype, arr.dtype.type()
+ mask_info = None, False
+ else:
+ # check for promotion based on types only (do this first because
+ # it's faster than computing a mask)
+ dtype, fill_value = _maybe_promote(arr.dtype, fill_value)
+ if dtype != arr.dtype and (out is None or out.dtype != dtype):
+ # check if promotion is actually required based on indexer
+ if mask_info is not None:
+ mask, needs_masking = mask_info
+ else:
+ mask = indexer == -1
+ needs_masking = mask.any()
+ mask_info = mask, needs_masking
+ if needs_masking:
+ if out is not None and out.dtype != dtype:
+ raise Exception('Incompatible type for fill_value')
+ else:
+ # if not, then depromote, set fill_value to dummy
+ # (it won't be used but we don't want the cython code
+ # to crash when trying to cast it to dtype)
+ dtype, fill_value = arr.dtype, arr.dtype.type()
+
+ # at this point, it's guaranteed that dtype can hold both the arr values
+ # and the fill_value
if out is None:
- out = np.empty(len(indexer), dtype=dtype)
- take_f = _get_take_1d_function(arr.dtype, out.dtype)
- take_f(arr, indexer, out=out, fill_value=fill_value)
+ out_shape = list(arr.shape)
+ out_shape[axis] = len(indexer)
+ out_shape = tuple(out_shape)
+ if arr.flags.f_contiguous and axis == arr.ndim - 1:
+ # minor tweak that can make an order-of-magnitude difference
+ # for dataframes initialized directly from 2-d ndarrays
+ # (s.t. df.values is c-contiguous and df._data.blocks[0] is its
+ # f-contiguous transpose)
+ out = np.empty(out_shape, dtype=dtype, order='F')
+ else:
+ out = np.empty(out_shape, dtype=dtype)
+
+ func = _get_take_nd_function(arr.ndim, arr.dtype, out.dtype,
+ axis=axis, mask_info=mask_info)
+ func(arr, indexer, out, fill_value)
return out
-def take_nd(arr, indexer, out=None, axis=0, fill_value=np.nan):
- """
- Specialized Cython take which sets NaN values in one pass
- """
- if indexer is None:
- mask = None
- needs_masking = False
- fill_value = arr.dtype.type()
- else:
- indexer = _ensure_int64(indexer)
- mask = indexer == -1
- needs_masking = mask.any()
- if not needs_masking:
- fill_value = arr.dtype.type()
- return take_fast(arr, indexer, mask, needs_masking, axis, out, fill_value)
+take_1d = take_nd
-def take_2d_multi(arr, row_idx, col_idx, fill_value=np.nan, out=None):
+def take_2d_multi(arr, indexer, out=None, fill_value=np.nan,
+ mask_info=None, allow_fill=True):
"""
Specialized Cython take which sets NaN values in one pass
"""
- if row_idx is None:
+ if indexer is None or (indexer[0] is None and indexer[1] is None):
row_idx = np.arange(arr.shape[0], dtype=np.int64)
- else:
- row_idx = _ensure_int64(row_idx)
-
- if col_idx is None:
col_idx = np.arange(arr.shape[1], dtype=np.int64)
+ indexer = row_idx, col_idx
+ dtype, fill_value = arr.dtype, arr.dtype.type()
else:
- col_idx = _ensure_int64(col_idx)
-
- dtype = _maybe_promote(arr.dtype, fill_value)[0]
- if dtype != arr.dtype:
- row_mask = row_idx == -1
- col_mask = col_idx == -1
- needs_masking = row_mask.any() or col_mask.any()
- if needs_masking:
- if out is not None and out.dtype != dtype:
- raise Exception('Incompatible type for fill_value')
+ row_idx, col_idx = indexer
+ if row_idx is None:
+ row_idx = np.arange(arr.shape[0], dtype=np.int64)
else:
+ row_idx = _ensure_int64(row_idx)
+ if col_idx is None:
+ col_idx = np.arange(arr.shape[1], dtype=np.int64)
+ else:
+ col_idx = _ensure_int64(col_idx)
+ indexer = row_idx, col_idx
+ if not allow_fill:
dtype, fill_value = arr.dtype, arr.dtype.type()
+ mask_info = None, False
+ else:
+ # check for promotion based on types only (do this first because
+ # it's faster than computing a mask)
+ dtype, fill_value = _maybe_promote(arr.dtype, fill_value)
+ if dtype != arr.dtype and (out is None or out.dtype != dtype):
+ # check if promotion is actually required based on indexer
+ if mask_info is not None:
+ (row_mask, col_mask), (row_needs, col_needs) = mask_info
+ else:
+ row_mask = row_idx == -1
+ col_mask = col_idx == -1
+ row_needs = row_mask.any()
+ col_needs = col_mask.any()
+ mask_info = (row_mask, col_mask), (row_needs, col_needs)
+ if row_needs or col_needs:
+ if out is not None and out.dtype != dtype:
+ raise Exception('Incompatible type for fill_value')
+ else:
+ # if not, then depromote, set fill_value to dummy
+ # (it won't be used but we don't want the cython code
+ # to crash when trying to cast it to dtype)
+ dtype, fill_value = arr.dtype, arr.dtype.type()
+
+ # at this point, it's guaranteed that dtype can hold both the arr values
+ # and the fill_value
if out is None:
out_shape = len(row_idx), len(col_idx)
out = np.empty(out_shape, dtype=dtype)
- take_f = _get_take_2d_function(arr.dtype, out.dtype, axis='multi')
- take_f(arr, (row_idx, col_idx), out=out, fill_value=fill_value)
- return out
-
-def ndtake(arr, indexer, axis=0, out=None):
- return arr.take(_ensure_platform_int(indexer), axis=axis, out=out)
+ func = _take_2d_multi_dict.get((arr.dtype.name, out.dtype.name), None)
+ if func is None and arr.dtype != out.dtype:
+ func = _take_2d_multi_dict.get((out.dtype.name, out.dtype.name), None)
+ if func is not None:
+ func = _convert_wrapper(func, out.dtype)
+ if func is None:
+ def func(arr, indexer, out, fill_value=np.nan):
+ _take_2d_multi_generic(arr, indexer, out,
+ fill_value=fill_value, mask_info=mask_info)
+ func(arr, indexer, out=out, fill_value=fill_value)
+ return out
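The semantics of the rewritten `take_2d_multi` kernel can be sketched as a slow pure-Python reference (not the Cython implementation itself): both indexers are applied in one pass, and a `-1` in either one produces `fill_value`.

```python
import numpy as np

def take_2d_multi_ref(arr, row_idx, col_idx, fill_value=np.nan):
    # Reference version of the one-pass kernel: index rows and columns
    # simultaneously, writing fill_value wherever either indexer is -1.
    out = np.empty((len(row_idx), len(col_idx)), dtype=np.float64)
    for i, ri in enumerate(row_idx):
        for j, ci in enumerate(col_idx):
            out[i, j] = fill_value if (ri == -1 or ci == -1) else arr[ri, ci]
    return out

arr = np.arange(6.0).reshape(2, 3)
res = take_2d_multi_ref(arr, [1, -1], [0, 2])
```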
_diff_special = {
@@ -615,36 +636,6 @@ def diff(arr, n, axis=0):
return out_arr
-def take_fast(arr, indexer, mask, needs_masking, axis=0, out=None,
- fill_value=np.nan):
- """
- Specialized Cython take which sets NaN values in one pass
-
- (equivalent to take_nd but requires mask and needs_masking
- to be set appropriately already; slightly more efficient)
- """
- if indexer is None:
- indexer = np.arange(arr.shape[axis], dtype=np.int64)
- dtype = arr.dtype
- else:
- indexer = _ensure_int64(indexer)
- if needs_masking:
- dtype = _maybe_promote(arr.dtype, fill_value)[0]
- if dtype != arr.dtype and out is not None and out.dtype != dtype:
- raise Exception('Incompatible type for fill_value')
- else:
- dtype = arr.dtype
-
- if out is None:
- out_shape = list(arr.shape)
- out_shape[axis] = len(indexer)
- out_shape = tuple(out_shape)
- out = np.empty(out_shape, dtype=dtype)
- take_f = _get_take_nd_function(arr.ndim, arr.dtype, out.dtype, axis=axis)
- take_f(arr, indexer, out=out, fill_value=fill_value)
- return out
-
-
def _infer_dtype_from_scalar(val):
""" interpret the dtype from a scalar, upcast floats and ints
return the new value and the dtype """
@@ -689,6 +680,7 @@ def _infer_dtype_from_scalar(val):
return dtype, val
+
def _maybe_promote(dtype, fill_value=np.nan):
# returns tuple of (dtype, fill_value)
if issubclass(dtype.type, np.datetime64):
@@ -729,10 +721,10 @@ def _maybe_promote(dtype, fill_value=np.nan):
dtype = np.object_
return dtype, fill_value
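The promotion decision that `_maybe_promote` makes can be sketched for the common case (a simplified illustration covering only the int-with-NaN case, not the full rule set):

```python
import numpy as np

def maybe_promote_ref(dtype, fill_value=np.nan):
    # An integer array cannot hold NaN, so taking with a NaN
    # fill_value upcasts the result dtype to float64; a dtype that
    # can already hold the fill_value is left unchanged.
    if (np.issubdtype(dtype, np.integer)
            and isinstance(fill_value, float) and np.isnan(fill_value)):
        return np.dtype(np.float64), fill_value
    return np.dtype(dtype), fill_value

dtype, fv = maybe_promote_ref(np.dtype(np.int64))
```

Doing this type-only check first is cheap; computing the `-1` mask to see whether promotion is really needed is deferred until the types alone say it might be.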
+
def _maybe_upcast(values, fill_value=np.nan, copy=False):
""" provide explicty type promotion and coercion
if copy == True, then a copy is created even if no upcast is required """
-
new_dtype, fill_value = _maybe_promote(values.dtype, fill_value)
if new_dtype != values.dtype:
values = values.astype(new_dtype)
@@ -740,6 +732,7 @@ def _maybe_upcast(values, fill_value=np.nan, copy=False):
values = values.copy()
return values, fill_value
+
def _possibly_cast_item(obj, item, dtype):
chunk = obj[item]
diff --git a/pandas/core/frame.py b/pandas/core/frame.py
index ecd7d57a0e4d2..676c684f927f2 100644
--- a/pandas/core/frame.py
+++ b/pandas/core/frame.py
@@ -2644,8 +2644,9 @@ def _reindex_multi(self, new_index, new_columns, copy, fill_value):
new_columns, col_indexer = self.columns.reindex(new_columns)
if row_indexer is not None and col_indexer is not None:
- new_values = com.take_2d_multi(self.values, row_indexer,
- col_indexer, fill_value=fill_value)
+ indexer = row_indexer, col_indexer
+ new_values = com.take_2d_multi(self.values, indexer,
+ fill_value=fill_value)
return DataFrame(new_values, index=new_index, columns=new_columns)
elif row_indexer is not None:
return self._reindex_with_indexers(new_index, row_indexer,
diff --git a/pandas/core/groupby.py b/pandas/core/groupby.py
index e89175ef72f43..fe7c281afb1b9 100644
--- a/pandas/core/groupby.py
+++ b/pandas/core/groupby.py
@@ -897,7 +897,7 @@ def _aggregate_series_fast(self, obj, func):
dummy = obj[:0].copy()
indexer = _algos.groupsort_indexer(group_index, ngroups)[0]
obj = obj.take(indexer)
- group_index = com.ndtake(group_index, indexer)
+ group_index = com.take_nd(group_index, indexer, allow_fill=False)
grouper = lib.SeriesGrouper(obj, func, group_index, ngroups,
dummy)
result, counts = grouper.get_result()
@@ -1686,7 +1686,9 @@ def aggregate(self, arg, *args, **kwargs):
zipped = zip(result.index.levels, result.index.labels,
result.index.names)
for i, (lev, lab, name) in enumerate(zipped):
- result.insert(i, name, com.ndtake(lev.values, lab))
+ result.insert(i, name,
+ com.take_nd(lev.values, lab,
+ allow_fill=False))
result = result.consolidate()
else:
values = result.index.values
@@ -2133,7 +2135,7 @@ def __init__(self, data, labels, ngroups, axis=0, keep_internal=False):
@cache_readonly
def slabels(self):
# Sorted labels
- return com.ndtake(self.labels, self.sort_idx)
+ return com.take_nd(self.labels, self.sort_idx, allow_fill=False)
@cache_readonly
def sort_idx(self):
@@ -2411,11 +2413,11 @@ def _reorder_by_uniques(uniques, labels):
mask = labels < 0
# move labels to right locations (ie, unsort ascending labels)
- labels = com.ndtake(reverse_indexer, labels)
+ labels = com.take_nd(reverse_indexer, labels, allow_fill=False)
np.putmask(labels, mask, -1)
# sort observed ids
- uniques = com.ndtake(uniques, sorter)
+ uniques = com.take_nd(uniques, sorter, allow_fill=False)
return uniques, labels
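The relabeling done in `_reorder_by_uniques` can be traced with a small NumPy example (a sketch of the same steps, using `take_nd`'s `allow_fill=False` equivalent, plain `take`):

```python
import numpy as np

# Observed ids in first-seen order, labels referring to them, and a
# sorter that puts the uniques in ascending order.
uniques = np.array([30, 10, 20])
labels = np.array([0, 2, 1, -1])
sorter = uniques.argsort()                  # positions that sort uniques

# reverse_indexer maps an old label to its position after sorting.
reverse_indexer = np.empty(len(sorter), dtype=np.int64)
reverse_indexer.put(sorter, np.arange(len(sorter)))

mask = labels < 0
new_labels = reverse_indexer.take(labels)   # unsort the labels
np.putmask(new_labels, mask, -1)            # keep missing as -1
sorted_uniques = uniques.take(sorter)
```

Each old label still points at the same value: label 0 meant `uniques[0] == 30`, and the new label 2 means `sorted_uniques[2] == 30`.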
diff --git a/pandas/core/index.py b/pandas/core/index.py
index bcd45a9fa9262..3a5f1d8147a99 100644
--- a/pandas/core/index.py
+++ b/pandas/core/index.py
@@ -12,7 +12,6 @@
import pandas.index as _index
from pandas.lib import Timestamp
-from pandas.core.common import ndtake
from pandas.util.decorators import cache_readonly
import pandas.core.common as com
from pandas.util import py3compat
@@ -608,7 +607,8 @@ def union(self, other):
indexer = (indexer == -1).nonzero()[0]
if len(indexer) > 0:
- other_diff = ndtake(other.values, indexer)
+ other_diff = com.take_nd(other.values, indexer,
+ allow_fill=False)
result = com._concat_compat((self.values, other_diff))
try:
result.sort()
@@ -1037,7 +1037,8 @@ def _join_level(self, other, level, how='left', return_indexers=False):
rev_indexer = lib.get_reverse_indexer(left_lev_indexer,
len(old_level))
- new_lev_labels = ndtake(rev_indexer, left.labels[level])
+ new_lev_labels = com.take_nd(rev_indexer, left.labels[level],
+ allow_fill=False)
omit_mask = new_lev_labels != -1
new_labels = list(left.labels)
@@ -1057,8 +1058,9 @@ def _join_level(self, other, level, how='left', return_indexers=False):
left_indexer = None
if right_lev_indexer is not None:
- right_indexer = ndtake(right_lev_indexer,
- join_index.labels[level])
+ right_indexer = com.take_nd(right_lev_indexer,
+ join_index.labels[level],
+ allow_fill=False)
else:
right_indexer = join_index.labels[level]
@@ -2369,8 +2371,10 @@ def equals(self, other):
return False
for i in xrange(self.nlevels):
- svalues = ndtake(self.levels[i].values, self.labels[i])
- ovalues = ndtake(other.levels[i].values, other.labels[i])
+ svalues = com.take_nd(self.levels[i].values, self.labels[i],
+ allow_fill=False)
+ ovalues = com.take_nd(other.levels[i].values, other.labels[i],
+ allow_fill=False)
if not np.array_equal(svalues, ovalues):
return False
diff --git a/pandas/core/internals.py b/pandas/core/internals.py
index 56802c2cb3bae..cd7404532c431 100644
--- a/pandas/core/internals.py
+++ b/pandas/core/internals.py
@@ -120,14 +120,14 @@ def merge(self, other):
# union_ref = self.ref_items + other.ref_items
return _merge_blocks([self, other], self.ref_items)
- def reindex_axis(self, indexer, mask, needs_masking, axis=0,
- fill_value=np.nan):
+ def reindex_axis(self, indexer, axis=1, fill_value=np.nan, mask_info=None):
"""
Reindex using pre-computed indexer information
"""
- new_values = com.take_fast(self.values, indexer,
- mask, needs_masking, axis=axis,
- fill_value=fill_value)
+ if axis < 1:
+ raise AssertionError('axis must be at least 1, got %d' % axis)
+ new_values = com.take_nd(self.values, indexer, axis,
+ fill_value=fill_value, mask_info=mask_info)
return make_block(new_values, self.items, self.ref_items)
def reindex_items_from(self, new_ref_items, copy=True):
@@ -146,12 +146,9 @@ def reindex_items_from(self, new_ref_items, copy=True):
new_items = new_ref_items
new_values = self.values.copy() if copy else self.values
else:
- mask = indexer != -1
- masked_idx = indexer[mask]
-
- new_values = com.take_fast(self.values, masked_idx,
- mask=None, needs_masking=False,
- axis=0)
+ masked_idx = indexer[indexer != -1]
+ new_values = com.take_nd(self.values, masked_idx, axis=0,
+ allow_fill=False)
new_items = self.items.take(masked_idx)
return make_block(new_values, new_items, new_ref_items)
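The simplification in `reindex_items_from` above drops missing positions before taking, which is why no fill (and hence no dtype promotion) is ever needed; a NumPy sketch of the same filtering:

```python
import numpy as np

values = np.arange(10.0).reshape(2, 5)
indexer = np.array([1, -1, 4, -1, 0])

# Filter out the -1 entries first, then take without fill: every
# remaining index is valid, so the values keep their dtype.
masked_idx = indexer[indexer != -1]         # array([1, 4, 0])
new_values = values.take(masked_idx, axis=1)
```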
@@ -221,7 +218,10 @@ def fillna(self, value, inplace=False):
return make_block(new_values, self.items, self.ref_items)
def astype(self, dtype, copy = True, raise_on_error = True):
- """ coerce to the new type (if copy=True, return a new copy) raise on an except if raise == True """
+ """
+ Coerce to the new type (if copy=True, return a new copy)
+        raise on error if raise_on_error == True
+ """
try:
newb = make_block(com._astype_nansafe(self.values, dtype, copy = copy),
self.items, self.ref_items)
@@ -231,12 +231,12 @@ def astype(self, dtype, copy = True, raise_on_error = True):
newb = self.copy() if copy else self
if newb.is_numeric and self.is_numeric:
- if newb.shape != self.shape or (not copy and newb.itemsize < self.itemsize):
- raise TypeError("cannot set astype for copy = [%s] for dtype (%s [%s]) with smaller itemsize that current (%s [%s])" % (copy,
- self.dtype.name,
- self.itemsize,
- newb.dtype.name,
- newb.itemsize))
+ if (newb.shape != self.shape or
+ (not copy and newb.itemsize < self.itemsize)):
+ raise TypeError("cannot set astype for copy = [%s] for dtype "
+                                "(%s [%s]) with smaller itemsize than current "
+ "(%s [%s])" % (copy, self.dtype.name,
+ self.itemsize, newb.dtype.name, newb.itemsize))
return newb
def convert(self, copy = True, **kwargs):
@@ -356,11 +356,11 @@ def interpolate(self, method='pad', axis=0, inplace=False,
return make_block(values, self.items, self.ref_items)
- def take(self, indexer, axis=1, fill_value=np.nan):
+ def take(self, indexer, axis=1):
if axis < 1:
raise AssertionError('axis must be at least 1, got %d' % axis)
- new_values = com.take_fast(self.values, indexer, None, False,
- axis=axis, fill_value=fill_value)
+ new_values = com.take_nd(self.values, indexer, axis=axis,
+ allow_fill=False)
return make_block(new_values, self.items, self.ref_items)
def get_values(self, dtype):
@@ -1320,15 +1320,9 @@ def reindex_indexer(self, new_axis, indexer, axis=1, fill_value=np.nan):
if axis == 0:
return self._reindex_indexer_items(new_axis, indexer, fill_value)
- mask = indexer == -1
-
- # TODO: deal with length-0 case? or does it fall out?
- needs_masking = len(new_axis) > 0 and mask.any()
-
new_blocks = []
for block in self.blocks:
- newb = block.reindex_axis(indexer, mask, needs_masking,
- axis=axis, fill_value=fill_value)
+ newb = block.reindex_axis(indexer, axis=axis, fill_value=fill_value)
new_blocks.append(newb)
new_axes = list(self.axes)
@@ -1354,8 +1348,8 @@ def _reindex_indexer_items(self, new_items, indexer, fill_value):
continue
new_block_items = new_items.take(selector.nonzero()[0])
- new_values = com.take_fast(blk.values, blk_indexer[selector],
- None, False, axis=0)
+ new_values = com.take_nd(blk.values, blk_indexer[selector], axis=0,
+ allow_fill=False)
new_blocks.append(make_block(new_values, new_block_items,
new_items))
@@ -1419,8 +1413,8 @@ def _make_na_block(self, items, ref_items, fill_value=np.nan):
return na_block
def take(self, indexer, axis=1):
- if axis == 0:
- raise NotImplementedError
+ if axis < 1:
+ raise AssertionError('axis must be at least 1, got %d' % axis)
indexer = com._ensure_platform_int(indexer)
@@ -1433,8 +1427,8 @@ def take(self, indexer, axis=1):
new_axes[axis] = self.axes[axis].take(indexer)
new_blocks = []
for blk in self.blocks:
- new_values = com.take_fast(blk.values, indexer, None, False,
- axis=axis)
+ new_values = com.take_nd(blk.values, indexer, axis=axis,
+ allow_fill=False)
newb = make_block(new_values, blk.items, self.items)
new_blocks.append(newb)
diff --git a/pandas/core/panel.py b/pandas/core/panel.py
index 6e52193a2c025..b418995ce3085 100644
--- a/pandas/core/panel.py
+++ b/pandas/core/panel.py
@@ -838,7 +838,7 @@ def _reindex_multi(self, items, major, minor):
indexer2 = range(len(new_minor))
for i, ind in enumerate(indexer0):
- com.take_2d_multi(values[ind], indexer1, indexer2,
+ com.take_2d_multi(values[ind], (indexer1, indexer2),
out=new_values[i])
return Panel(new_values, items=new_items, major_axis=new_major,
diff --git a/pandas/src/generate_code.py b/pandas/src/generate_code.py
index c68154b27f7d1..c94ed8730f32a 100644
--- a/pandas/src/generate_code.py
+++ b/pandas/src/generate_code.py
@@ -5,6 +5,8 @@
cimport numpy as np
cimport cython
+from libc.string cimport memmove
+
from numpy cimport *
from cpython cimport (PyDict_New, PyDict_GetItem, PyDict_SetItem,
@@ -62,21 +64,13 @@ def take_1d_%(name)s_%(dest)s(ndarray[%(c_type_in)s] values,
n = len(indexer)
- if %(raise_on_na)s and _checknan(fill_value):
- for i from 0 <= i < n:
- idx = indexer[i]
- if idx == -1:
- raise ValueError('No NA values allowed')
- else:
- outbuf[i] = %(preval)svalues[idx]%(postval)s
- else:
- fv = fill_value
- for i from 0 <= i < n:
- idx = indexer[i]
- if idx == -1:
- outbuf[i] = fv
- else:
- outbuf[i] = %(preval)svalues[idx]%(postval)s
+ fv = fill_value
+ for i from 0 <= i < n:
+ idx = indexer[i]
+ if idx == -1:
+ outbuf[i] = fv
+ else:
+ outbuf[i] = %(preval)svalues[idx]%(postval)s
"""
@@ -93,25 +87,32 @@ def take_2d_axis0_%(name)s_%(dest)s(ndarray[%(c_type_in)s, ndim=2] values,
n = len(indexer)
k = values.shape[1]
- if %(raise_on_na)s and _checknan(fill_value):
- for i from 0 <= i < n:
- idx = indexer[i]
- if idx == -1:
- for j from 0 <= j < k:
- raise ValueError('No NA values allowed')
- else:
- for j from 0 <= j < k:
- outbuf[i, j] = %(preval)svalues[idx, j]%(postval)s
- else:
- fv = fill_value
- for i from 0 <= i < n:
- idx = indexer[i]
- if idx == -1:
- for j from 0 <= j < k:
- outbuf[i, j] = fv
- else:
- for j from 0 <= j < k:
- outbuf[i, j] = %(preval)svalues[idx, j]%(postval)s
+ fv = fill_value
+
+ IF %(can_copy)s:
+ cdef:
+ %(c_type_out)s *v, *o
+
+ if values.flags.c_contiguous and out.flags.c_contiguous:
+ for i from 0 <= i < n:
+ idx = indexer[i]
+ if idx == -1:
+ for j from 0 <= j < k:
+ outbuf[i, j] = fv
+ else:
+ v = &values[idx, 0]
+ o = &outbuf[i, 0]
+ memmove(o, v, <size_t>(sizeof(%(c_type_out)s) * k))
+ return
+
+ for i from 0 <= i < n:
+ idx = indexer[i]
+ if idx == -1:
+ for j from 0 <= j < k:
+ outbuf[i, j] = fv
+ else:
+ for j from 0 <= j < k:
+ outbuf[i, j] = %(preval)svalues[idx, j]%(postval)s
"""
@@ -127,26 +128,33 @@ def take_2d_axis1_%(name)s_%(dest)s(ndarray[%(c_type_in)s, ndim=2] values,
n = len(values)
k = len(indexer)
+
+ fv = fill_value
+
+ IF %(can_copy)s:
+ cdef:
+ %(c_type_out)s *v, *o
+
+ if values.flags.f_contiguous and out.flags.f_contiguous:
+ for j from 0 <= j < k:
+ idx = indexer[j]
+ if idx == -1:
+ for i from 0 <= i < n:
+ outbuf[i, j] = fv
+ else:
+ v = &values[0, idx]
+ o = &outbuf[0, j]
+ memmove(o, v, <size_t>(sizeof(%(c_type_out)s) * n))
+ return
- if %(raise_on_na)s and _checknan(fill_value):
- for j from 0 <= j < k:
- idx = indexer[j]
- if idx == -1:
- for i from 0 <= i < n:
- raise ValueError('No NA values allowed')
- else:
- for i from 0 <= i < n:
- outbuf[i, j] = %(preval)svalues[i, idx]%(postval)s
- else:
- fv = fill_value
- for j from 0 <= j < k:
- idx = indexer[j]
- if idx == -1:
- for i from 0 <= i < n:
- outbuf[i, j] = fv
- else:
- for i from 0 <= i < n:
- outbuf[i, j] = %(preval)svalues[i, idx]%(postval)s
+ for j from 0 <= j < k:
+ idx = indexer[j]
+ if idx == -1:
+ for i from 0 <= i < n:
+ outbuf[i, j] = fv
+ else:
+ for i from 0 <= i < n:
+ outbuf[i, j] = %(preval)svalues[i, idx]%(postval)s
"""
@@ -165,31 +173,18 @@ def take_2d_multi_%(name)s_%(dest)s(ndarray[%(c_type_in)s, ndim=2] values,
n = len(idx0)
k = len(idx1)
- if %(raise_on_na)s and _checknan(fill_value):
- for i from 0 <= i < n:
- idx = idx0[i]
- if idx == -1:
- for j from 0 <= j < k:
- raise ValueError('No NA values allowed')
- else:
- for j from 0 <= j < k:
- if idx1[j] == -1:
- raise ValueError('No NA values allowed')
- else:
- outbuf[i, j] = %(preval)svalues[idx, idx1[j]]%(postval)s
- else:
- fv = fill_value
- for i from 0 <= i < n:
- idx = idx0[i]
- if idx == -1:
- for j from 0 <= j < k:
+ fv = fill_value
+ for i from 0 <= i < n:
+ idx = idx0[i]
+ if idx == -1:
+ for j from 0 <= j < k:
+ outbuf[i, j] = fv
+ else:
+ for j from 0 <= j < k:
+ if idx1[j] == -1:
outbuf[i, j] = fv
- else:
- for j from 0 <= j < k:
- if idx1[j] == -1:
- outbuf[i, j] = fv
- else:
- outbuf[i, j] = %(preval)svalues[idx, idx1[j]]%(postval)s
+ else:
+ outbuf[i, j] = %(preval)svalues[idx, idx1[j]]%(postval)s
"""
@@ -2156,40 +2151,40 @@ def generate_put_template(template, use_ints = True, use_floats = True):
return output.getvalue()
def generate_take_template(template, exclude=None):
- # name, dest, ctypein, ctypeout, preval, postval, capable of holding NA
+ # name, dest, ctypein, ctypeout, preval, postval, cancopy
function_list = [
- ('bool', 'bool', 'uint8_t', 'uint8_t', '', '', False),
+ ('bool', 'bool', 'uint8_t', 'uint8_t', '', '', True),
('bool', 'object', 'uint8_t', 'object',
- 'True if ', ' > 0 else False', True),
- ('int8', 'int8', 'int8_t', 'int8_t', '', '', False),
+ 'True if ', ' > 0 else False', False),
+ ('int8', 'int8', 'int8_t', 'int8_t', '', '', True),
('int8', 'int32', 'int8_t', 'int32_t', '', '', False),
('int8', 'int64', 'int8_t', 'int64_t', '', '', False),
- ('int8', 'float64', 'int8_t', 'float64_t', '', '', True),
- ('int16', 'int16', 'int16_t', 'int16_t', '', '', False),
+ ('int8', 'float64', 'int8_t', 'float64_t', '', '', False),
+ ('int16', 'int16', 'int16_t', 'int16_t', '', '', True),
('int16', 'int32', 'int16_t', 'int32_t', '', '', False),
('int16', 'int64', 'int16_t', 'int64_t', '', '', False),
- ('int16', 'float64', 'int16_t', 'float64_t', '', '', True),
- ('int32', 'int32', 'int32_t', 'int32_t', '', '', False),
+ ('int16', 'float64', 'int16_t', 'float64_t', '', '', False),
+ ('int32', 'int32', 'int32_t', 'int32_t', '', '', True),
('int32', 'int64', 'int32_t', 'int64_t', '', '', False),
- ('int32', 'float64', 'int32_t', 'float64_t', '', '', True),
- ('int64', 'int64', 'int64_t', 'int64_t', '', '', False),
- ('int64', 'float64', 'int64_t', 'float64_t', '', '', True),
+ ('int32', 'float64', 'int32_t', 'float64_t', '', '', False),
+ ('int64', 'int64', 'int64_t', 'int64_t', '', '', True),
+ ('int64', 'float64', 'int64_t', 'float64_t', '', '', False),
('float32', 'float32', 'float32_t', 'float32_t', '', '', True),
- ('float32', 'float64', 'float32_t', 'float64_t', '', '', True),
+ ('float32', 'float64', 'float32_t', 'float64_t', '', '', False),
('float64', 'float64', 'float64_t', 'float64_t', '', '', True),
- ('object', 'object', 'object', 'object', '', '', True)
+ ('object', 'object', 'object', 'object', '', '', False)
]
output = StringIO()
for (name, dest, c_type_in, c_type_out,
- preval, postval, can_hold_na) in function_list:
+ preval, postval, can_copy) in function_list:
if exclude is not None and name in exclude:
continue
func = template % {'name': name, 'dest': dest,
'c_type_in': c_type_in, 'c_type_out': c_type_out,
'preval': preval, 'postval': postval,
- 'raise_on_na': 'False' if can_hold_na else 'True'}
+ 'can_copy': 'True' if can_copy else 'False'}
output.write(func)
return output.getvalue()
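The codegen scheme in `generate_take_template` can be sketched with a toy template (the template body here is a placeholder, not the real Cython source): each `(name, dest, ..., can_copy)` row is substituted into the template string, and `can_copy` is what ends up inside the `IF %(can_copy)s:` compile-time conditional that gates the memmove fast path.

```python
from io import StringIO

template = """def take_1d_%(name)s_%(dest)s(values, indexer, out):
    # can_copy=%(can_copy)s
    pass
"""

# (name, dest, can_copy) rows; can_copy is True only when source and
# destination share a bit-identical layout, so raw bytes may be moved.
function_list = [
    ('int64', 'int64', 'True'),
    ('int64', 'float64', 'False'),
]

output = StringIO()
for name, dest, can_copy in function_list:
    output.write(template % {'name': name, 'dest': dest,
                             'can_copy': can_copy})
code = output.getvalue()
```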
diff --git a/pandas/src/generated.pyx b/pandas/src/generated.pyx
index 1723f2fb8b34c..ce83e08782ea2 100644
--- a/pandas/src/generated.pyx
+++ b/pandas/src/generated.pyx
@@ -2,6 +2,8 @@
cimport numpy as np
cimport cython
+from libc.string cimport memmove
+
from numpy cimport *
from cpython cimport (PyDict_New, PyDict_GetItem, PyDict_SetItem,
@@ -2208,21 +2210,13 @@ def take_1d_bool_bool(ndarray[uint8_t] values,
n = len(indexer)
- if True and _checknan(fill_value):
- for i from 0 <= i < n:
- idx = indexer[i]
- if idx == -1:
- raise ValueError('No NA values allowed')
- else:
- outbuf[i] = values[idx]
- else:
- fv = fill_value
- for i from 0 <= i < n:
- idx = indexer[i]
- if idx == -1:
- outbuf[i] = fv
- else:
- outbuf[i] = values[idx]
+ fv = fill_value
+ for i from 0 <= i < n:
+ idx = indexer[i]
+ if idx == -1:
+ outbuf[i] = fv
+ else:
+ outbuf[i] = values[idx]
@cython.wraparound(False)
def take_1d_bool_object(ndarray[uint8_t] values,
@@ -2235,21 +2229,13 @@ def take_1d_bool_object(ndarray[uint8_t] values,
n = len(indexer)
- if False and _checknan(fill_value):
- for i from 0 <= i < n:
- idx = indexer[i]
- if idx == -1:
- raise ValueError('No NA values allowed')
- else:
- outbuf[i] = True if values[idx] > 0 else False
- else:
- fv = fill_value
- for i from 0 <= i < n:
- idx = indexer[i]
- if idx == -1:
- outbuf[i] = fv
- else:
- outbuf[i] = True if values[idx] > 0 else False
+ fv = fill_value
+ for i from 0 <= i < n:
+ idx = indexer[i]
+ if idx == -1:
+ outbuf[i] = fv
+ else:
+ outbuf[i] = True if values[idx] > 0 else False
@cython.wraparound(False)
def take_1d_int8_int8(ndarray[int8_t] values,
@@ -2262,21 +2248,13 @@ def take_1d_int8_int8(ndarray[int8_t] values,
n = len(indexer)
- if True and _checknan(fill_value):
- for i from 0 <= i < n:
- idx = indexer[i]
- if idx == -1:
- raise ValueError('No NA values allowed')
- else:
- outbuf[i] = values[idx]
- else:
- fv = fill_value
- for i from 0 <= i < n:
- idx = indexer[i]
- if idx == -1:
- outbuf[i] = fv
- else:
- outbuf[i] = values[idx]
+ fv = fill_value
+ for i from 0 <= i < n:
+ idx = indexer[i]
+ if idx == -1:
+ outbuf[i] = fv
+ else:
+ outbuf[i] = values[idx]
@cython.wraparound(False)
def take_1d_int8_int32(ndarray[int8_t] values,
@@ -2289,21 +2267,13 @@ def take_1d_int8_int32(ndarray[int8_t] values,
n = len(indexer)
- if True and _checknan(fill_value):
- for i from 0 <= i < n:
- idx = indexer[i]
- if idx == -1:
- raise ValueError('No NA values allowed')
- else:
- outbuf[i] = values[idx]
- else:
- fv = fill_value
- for i from 0 <= i < n:
- idx = indexer[i]
- if idx == -1:
- outbuf[i] = fv
- else:
- outbuf[i] = values[idx]
+ fv = fill_value
+ for i from 0 <= i < n:
+ idx = indexer[i]
+ if idx == -1:
+ outbuf[i] = fv
+ else:
+ outbuf[i] = values[idx]
@cython.wraparound(False)
def take_1d_int8_int64(ndarray[int8_t] values,
@@ -2316,21 +2286,13 @@ def take_1d_int8_int64(ndarray[int8_t] values,
n = len(indexer)
- if True and _checknan(fill_value):
- for i from 0 <= i < n:
- idx = indexer[i]
- if idx == -1:
- raise ValueError('No NA values allowed')
- else:
- outbuf[i] = values[idx]
- else:
- fv = fill_value
- for i from 0 <= i < n:
- idx = indexer[i]
- if idx == -1:
- outbuf[i] = fv
- else:
- outbuf[i] = values[idx]
+ fv = fill_value
+ for i from 0 <= i < n:
+ idx = indexer[i]
+ if idx == -1:
+ outbuf[i] = fv
+ else:
+ outbuf[i] = values[idx]
@cython.wraparound(False)
def take_1d_int8_float64(ndarray[int8_t] values,
@@ -2343,21 +2305,13 @@ def take_1d_int8_float64(ndarray[int8_t] values,
n = len(indexer)
- if False and _checknan(fill_value):
- for i from 0 <= i < n:
- idx = indexer[i]
- if idx == -1:
- raise ValueError('No NA values allowed')
- else:
- outbuf[i] = values[idx]
- else:
- fv = fill_value
- for i from 0 <= i < n:
- idx = indexer[i]
- if idx == -1:
- outbuf[i] = fv
- else:
- outbuf[i] = values[idx]
+ fv = fill_value
+ for i from 0 <= i < n:
+ idx = indexer[i]
+ if idx == -1:
+ outbuf[i] = fv
+ else:
+ outbuf[i] = values[idx]
@cython.wraparound(False)
def take_1d_int16_int16(ndarray[int16_t] values,
@@ -2370,21 +2324,13 @@ def take_1d_int16_int16(ndarray[int16_t] values,
n = len(indexer)
- if True and _checknan(fill_value):
- for i from 0 <= i < n:
- idx = indexer[i]
- if idx == -1:
- raise ValueError('No NA values allowed')
- else:
- outbuf[i] = values[idx]
- else:
- fv = fill_value
- for i from 0 <= i < n:
- idx = indexer[i]
- if idx == -1:
- outbuf[i] = fv
- else:
- outbuf[i] = values[idx]
+ fv = fill_value
+ for i from 0 <= i < n:
+ idx = indexer[i]
+ if idx == -1:
+ outbuf[i] = fv
+ else:
+ outbuf[i] = values[idx]
@cython.wraparound(False)
def take_1d_int16_int32(ndarray[int16_t] values,
@@ -2397,21 +2343,13 @@ def take_1d_int16_int32(ndarray[int16_t] values,
n = len(indexer)
- if True and _checknan(fill_value):
- for i from 0 <= i < n:
- idx = indexer[i]
- if idx == -1:
- raise ValueError('No NA values allowed')
- else:
- outbuf[i] = values[idx]
- else:
- fv = fill_value
- for i from 0 <= i < n:
- idx = indexer[i]
- if idx == -1:
- outbuf[i] = fv
- else:
- outbuf[i] = values[idx]
+ fv = fill_value
+ for i from 0 <= i < n:
+ idx = indexer[i]
+ if idx == -1:
+ outbuf[i] = fv
+ else:
+ outbuf[i] = values[idx]
@cython.wraparound(False)
def take_1d_int16_int64(ndarray[int16_t] values,
@@ -2424,21 +2362,13 @@ def take_1d_int16_int64(ndarray[int16_t] values,
n = len(indexer)
- if True and _checknan(fill_value):
- for i from 0 <= i < n:
- idx = indexer[i]
- if idx == -1:
- raise ValueError('No NA values allowed')
- else:
- outbuf[i] = values[idx]
- else:
- fv = fill_value
- for i from 0 <= i < n:
- idx = indexer[i]
- if idx == -1:
- outbuf[i] = fv
- else:
- outbuf[i] = values[idx]
+ fv = fill_value
+ for i from 0 <= i < n:
+ idx = indexer[i]
+ if idx == -1:
+ outbuf[i] = fv
+ else:
+ outbuf[i] = values[idx]
@cython.wraparound(False)
def take_1d_int16_float64(ndarray[int16_t] values,
@@ -2451,21 +2381,13 @@ def take_1d_int16_float64(ndarray[int16_t] values,
n = len(indexer)
- if False and _checknan(fill_value):
- for i from 0 <= i < n:
- idx = indexer[i]
- if idx == -1:
- raise ValueError('No NA values allowed')
- else:
- outbuf[i] = values[idx]
- else:
- fv = fill_value
- for i from 0 <= i < n:
- idx = indexer[i]
- if idx == -1:
- outbuf[i] = fv
- else:
- outbuf[i] = values[idx]
+ fv = fill_value
+ for i from 0 <= i < n:
+ idx = indexer[i]
+ if idx == -1:
+ outbuf[i] = fv
+ else:
+ outbuf[i] = values[idx]
@cython.wraparound(False)
def take_1d_int32_int32(ndarray[int32_t] values,
@@ -2478,21 +2400,13 @@ def take_1d_int32_int32(ndarray[int32_t] values,
n = len(indexer)
- if True and _checknan(fill_value):
- for i from 0 <= i < n:
- idx = indexer[i]
- if idx == -1:
- raise ValueError('No NA values allowed')
- else:
- outbuf[i] = values[idx]
- else:
- fv = fill_value
- for i from 0 <= i < n:
- idx = indexer[i]
- if idx == -1:
- outbuf[i] = fv
- else:
- outbuf[i] = values[idx]
+ fv = fill_value
+ for i from 0 <= i < n:
+ idx = indexer[i]
+ if idx == -1:
+ outbuf[i] = fv
+ else:
+ outbuf[i] = values[idx]
@cython.wraparound(False)
def take_1d_int32_int64(ndarray[int32_t] values,
@@ -2505,21 +2419,13 @@ def take_1d_int32_int64(ndarray[int32_t] values,
n = len(indexer)
- if True and _checknan(fill_value):
- for i from 0 <= i < n:
- idx = indexer[i]
- if idx == -1:
- raise ValueError('No NA values allowed')
- else:
- outbuf[i] = values[idx]
- else:
- fv = fill_value
- for i from 0 <= i < n:
- idx = indexer[i]
- if idx == -1:
- outbuf[i] = fv
- else:
- outbuf[i] = values[idx]
+ fv = fill_value
+ for i from 0 <= i < n:
+ idx = indexer[i]
+ if idx == -1:
+ outbuf[i] = fv
+ else:
+ outbuf[i] = values[idx]
@cython.wraparound(False)
def take_1d_int32_float64(ndarray[int32_t] values,
@@ -2532,21 +2438,13 @@ def take_1d_int32_float64(ndarray[int32_t] values,
n = len(indexer)
- if False and _checknan(fill_value):
- for i from 0 <= i < n:
- idx = indexer[i]
- if idx == -1:
- raise ValueError('No NA values allowed')
- else:
- outbuf[i] = values[idx]
- else:
- fv = fill_value
- for i from 0 <= i < n:
- idx = indexer[i]
- if idx == -1:
- outbuf[i] = fv
- else:
- outbuf[i] = values[idx]
+ fv = fill_value
+ for i from 0 <= i < n:
+ idx = indexer[i]
+ if idx == -1:
+ outbuf[i] = fv
+ else:
+ outbuf[i] = values[idx]
@cython.wraparound(False)
def take_1d_int64_int64(ndarray[int64_t] values,
@@ -2559,21 +2457,13 @@ def take_1d_int64_int64(ndarray[int64_t] values,
n = len(indexer)
- if True and _checknan(fill_value):
- for i from 0 <= i < n:
- idx = indexer[i]
- if idx == -1:
- raise ValueError('No NA values allowed')
- else:
- outbuf[i] = values[idx]
- else:
- fv = fill_value
- for i from 0 <= i < n:
- idx = indexer[i]
- if idx == -1:
- outbuf[i] = fv
- else:
- outbuf[i] = values[idx]
+ fv = fill_value
+ for i from 0 <= i < n:
+ idx = indexer[i]
+ if idx == -1:
+ outbuf[i] = fv
+ else:
+ outbuf[i] = values[idx]
@cython.wraparound(False)
def take_1d_int64_float64(ndarray[int64_t] values,
@@ -2586,21 +2476,13 @@ def take_1d_int64_float64(ndarray[int64_t] values,
n = len(indexer)
- if False and _checknan(fill_value):
- for i from 0 <= i < n:
- idx = indexer[i]
- if idx == -1:
- raise ValueError('No NA values allowed')
- else:
- outbuf[i] = values[idx]
- else:
- fv = fill_value
- for i from 0 <= i < n:
- idx = indexer[i]
- if idx == -1:
- outbuf[i] = fv
- else:
- outbuf[i] = values[idx]
+ fv = fill_value
+ for i from 0 <= i < n:
+ idx = indexer[i]
+ if idx == -1:
+ outbuf[i] = fv
+ else:
+ outbuf[i] = values[idx]
@cython.wraparound(False)
def take_1d_float32_float32(ndarray[float32_t] values,
@@ -2613,21 +2495,13 @@ def take_1d_float32_float32(ndarray[float32_t] values,
n = len(indexer)
- if False and _checknan(fill_value):
- for i from 0 <= i < n:
- idx = indexer[i]
- if idx == -1:
- raise ValueError('No NA values allowed')
- else:
- outbuf[i] = values[idx]
- else:
- fv = fill_value
- for i from 0 <= i < n:
- idx = indexer[i]
- if idx == -1:
- outbuf[i] = fv
- else:
- outbuf[i] = values[idx]
+ fv = fill_value
+ for i from 0 <= i < n:
+ idx = indexer[i]
+ if idx == -1:
+ outbuf[i] = fv
+ else:
+ outbuf[i] = values[idx]
@cython.wraparound(False)
def take_1d_float32_float64(ndarray[float32_t] values,
@@ -2640,21 +2514,13 @@ def take_1d_float32_float64(ndarray[float32_t] values,
n = len(indexer)
- if False and _checknan(fill_value):
- for i from 0 <= i < n:
- idx = indexer[i]
- if idx == -1:
- raise ValueError('No NA values allowed')
- else:
- outbuf[i] = values[idx]
- else:
- fv = fill_value
- for i from 0 <= i < n:
- idx = indexer[i]
- if idx == -1:
- outbuf[i] = fv
- else:
- outbuf[i] = values[idx]
+ fv = fill_value
+ for i from 0 <= i < n:
+ idx = indexer[i]
+ if idx == -1:
+ outbuf[i] = fv
+ else:
+ outbuf[i] = values[idx]
@cython.wraparound(False)
def take_1d_float64_float64(ndarray[float64_t] values,
@@ -2667,21 +2533,13 @@ def take_1d_float64_float64(ndarray[float64_t] values,
n = len(indexer)
- if False and _checknan(fill_value):
- for i from 0 <= i < n:
- idx = indexer[i]
- if idx == -1:
- raise ValueError('No NA values allowed')
- else:
- outbuf[i] = values[idx]
- else:
- fv = fill_value
- for i from 0 <= i < n:
- idx = indexer[i]
- if idx == -1:
- outbuf[i] = fv
- else:
- outbuf[i] = values[idx]
+ fv = fill_value
+ for i from 0 <= i < n:
+ idx = indexer[i]
+ if idx == -1:
+ outbuf[i] = fv
+ else:
+ outbuf[i] = values[idx]
@cython.wraparound(False)
def take_1d_object_object(ndarray[object] values,
@@ -2694,21 +2552,13 @@ def take_1d_object_object(ndarray[object] values,
n = len(indexer)
- if False and _checknan(fill_value):
- for i from 0 <= i < n:
- idx = indexer[i]
- if idx == -1:
- raise ValueError('No NA values allowed')
- else:
- outbuf[i] = values[idx]
- else:
- fv = fill_value
- for i from 0 <= i < n:
- idx = indexer[i]
- if idx == -1:
- outbuf[i] = fv
- else:
- outbuf[i] = values[idx]
+ fv = fill_value
+ for i from 0 <= i < n:
+ idx = indexer[i]
+ if idx == -1:
+ outbuf[i] = fv
+ else:
+ outbuf[i] = values[idx]
@cython.wraparound(False)
@@ -2724,25 +2574,32 @@ def take_2d_axis0_bool_bool(ndarray[uint8_t, ndim=2] values,
n = len(indexer)
k = values.shape[1]
- if True and _checknan(fill_value):
- for i from 0 <= i < n:
- idx = indexer[i]
- if idx == -1:
- for j from 0 <= j < k:
- raise ValueError('No NA values allowed')
- else:
- for j from 0 <= j < k:
- outbuf[i, j] = values[idx, j]
- else:
- fv = fill_value
- for i from 0 <= i < n:
- idx = indexer[i]
- if idx == -1:
- for j from 0 <= j < k:
- outbuf[i, j] = fv
- else:
- for j from 0 <= j < k:
- outbuf[i, j] = values[idx, j]
+ fv = fill_value
+
+ IF True:
+ cdef:
+ uint8_t *v, *o
+
+ if values.flags.c_contiguous and out.flags.c_contiguous:
+ for i from 0 <= i < n:
+ idx = indexer[i]
+ if idx == -1:
+ for j from 0 <= j < k:
+ outbuf[i, j] = fv
+ else:
+ v = &values[idx, 0]
+ o = &outbuf[i, 0]
+ memmove(o, v, <size_t>(sizeof(uint8_t) * k))
+ return
+
+ for i from 0 <= i < n:
+ idx = indexer[i]
+ if idx == -1:
+ for j from 0 <= j < k:
+ outbuf[i, j] = fv
+ else:
+ for j from 0 <= j < k:
+ outbuf[i, j] = values[idx, j]
@cython.wraparound(False)
@cython.boundscheck(False)
@@ -2757,25 +2614,32 @@ def take_2d_axis0_bool_object(ndarray[uint8_t, ndim=2] values,
n = len(indexer)
k = values.shape[1]
- if False and _checknan(fill_value):
- for i from 0 <= i < n:
- idx = indexer[i]
- if idx == -1:
- for j from 0 <= j < k:
- raise ValueError('No NA values allowed')
- else:
- for j from 0 <= j < k:
- outbuf[i, j] = True if values[idx, j] > 0 else False
- else:
- fv = fill_value
- for i from 0 <= i < n:
- idx = indexer[i]
- if idx == -1:
- for j from 0 <= j < k:
- outbuf[i, j] = fv
- else:
- for j from 0 <= j < k:
- outbuf[i, j] = True if values[idx, j] > 0 else False
+ fv = fill_value
+
+ IF False:
+ cdef:
+ object *v, *o
+
+ if values.flags.c_contiguous and out.flags.c_contiguous:
+ for i from 0 <= i < n:
+ idx = indexer[i]
+ if idx == -1:
+ for j from 0 <= j < k:
+ outbuf[i, j] = fv
+ else:
+ v = &values[idx, 0]
+ o = &outbuf[i, 0]
+ memmove(o, v, <size_t>(sizeof(object) * k))
+ return
+
+ for i from 0 <= i < n:
+ idx = indexer[i]
+ if idx == -1:
+ for j from 0 <= j < k:
+ outbuf[i, j] = fv
+ else:
+ for j from 0 <= j < k:
+ outbuf[i, j] = True if values[idx, j] > 0 else False
@cython.wraparound(False)
@cython.boundscheck(False)
@@ -2790,25 +2654,32 @@ def take_2d_axis0_int8_int8(ndarray[int8_t, ndim=2] values,
n = len(indexer)
k = values.shape[1]
- if True and _checknan(fill_value):
- for i from 0 <= i < n:
- idx = indexer[i]
- if idx == -1:
- for j from 0 <= j < k:
- raise ValueError('No NA values allowed')
- else:
- for j from 0 <= j < k:
- outbuf[i, j] = values[idx, j]
- else:
- fv = fill_value
- for i from 0 <= i < n:
- idx = indexer[i]
- if idx == -1:
- for j from 0 <= j < k:
- outbuf[i, j] = fv
- else:
- for j from 0 <= j < k:
- outbuf[i, j] = values[idx, j]
+ fv = fill_value
+
+ IF True:
+ cdef:
+ int8_t *v, *o
+
+ if values.flags.c_contiguous and out.flags.c_contiguous:
+ for i from 0 <= i < n:
+ idx = indexer[i]
+ if idx == -1:
+ for j from 0 <= j < k:
+ outbuf[i, j] = fv
+ else:
+ v = &values[idx, 0]
+ o = &outbuf[i, 0]
+ memmove(o, v, <size_t>(sizeof(int8_t) * k))
+ return
+
+ for i from 0 <= i < n:
+ idx = indexer[i]
+ if idx == -1:
+ for j from 0 <= j < k:
+ outbuf[i, j] = fv
+ else:
+ for j from 0 <= j < k:
+ outbuf[i, j] = values[idx, j]
@cython.wraparound(False)
@cython.boundscheck(False)
@@ -2823,25 +2694,32 @@ def take_2d_axis0_int8_int32(ndarray[int8_t, ndim=2] values,
n = len(indexer)
k = values.shape[1]
- if True and _checknan(fill_value):
- for i from 0 <= i < n:
- idx = indexer[i]
- if idx == -1:
- for j from 0 <= j < k:
- raise ValueError('No NA values allowed')
- else:
- for j from 0 <= j < k:
- outbuf[i, j] = values[idx, j]
- else:
- fv = fill_value
- for i from 0 <= i < n:
- idx = indexer[i]
- if idx == -1:
- for j from 0 <= j < k:
- outbuf[i, j] = fv
- else:
- for j from 0 <= j < k:
- outbuf[i, j] = values[idx, j]
+ fv = fill_value
+
+ IF False:
+ cdef:
+ int32_t *v, *o
+
+ if values.flags.c_contiguous and out.flags.c_contiguous:
+ for i from 0 <= i < n:
+ idx = indexer[i]
+ if idx == -1:
+ for j from 0 <= j < k:
+ outbuf[i, j] = fv
+ else:
+ v = &values[idx, 0]
+ o = &outbuf[i, 0]
+ memmove(o, v, <size_t>(sizeof(int32_t) * k))
+ return
+
+ for i from 0 <= i < n:
+ idx = indexer[i]
+ if idx == -1:
+ for j from 0 <= j < k:
+ outbuf[i, j] = fv
+ else:
+ for j from 0 <= j < k:
+ outbuf[i, j] = values[idx, j]
@cython.wraparound(False)
@cython.boundscheck(False)
@@ -2856,25 +2734,32 @@ def take_2d_axis0_int8_int64(ndarray[int8_t, ndim=2] values,
n = len(indexer)
k = values.shape[1]
- if True and _checknan(fill_value):
- for i from 0 <= i < n:
- idx = indexer[i]
- if idx == -1:
- for j from 0 <= j < k:
- raise ValueError('No NA values allowed')
- else:
- for j from 0 <= j < k:
- outbuf[i, j] = values[idx, j]
- else:
- fv = fill_value
- for i from 0 <= i < n:
- idx = indexer[i]
- if idx == -1:
- for j from 0 <= j < k:
- outbuf[i, j] = fv
- else:
- for j from 0 <= j < k:
- outbuf[i, j] = values[idx, j]
+ fv = fill_value
+
+ IF False:
+ cdef:
+ int64_t *v, *o
+
+ if values.flags.c_contiguous and out.flags.c_contiguous:
+ for i from 0 <= i < n:
+ idx = indexer[i]
+ if idx == -1:
+ for j from 0 <= j < k:
+ outbuf[i, j] = fv
+ else:
+ v = &values[idx, 0]
+ o = &outbuf[i, 0]
+ memmove(o, v, <size_t>(sizeof(int64_t) * k))
+ return
+
+ for i from 0 <= i < n:
+ idx = indexer[i]
+ if idx == -1:
+ for j from 0 <= j < k:
+ outbuf[i, j] = fv
+ else:
+ for j from 0 <= j < k:
+ outbuf[i, j] = values[idx, j]
@cython.wraparound(False)
@cython.boundscheck(False)
@@ -2889,25 +2774,32 @@ def take_2d_axis0_int8_float64(ndarray[int8_t, ndim=2] values,
n = len(indexer)
k = values.shape[1]
- if False and _checknan(fill_value):
- for i from 0 <= i < n:
- idx = indexer[i]
- if idx == -1:
- for j from 0 <= j < k:
- raise ValueError('No NA values allowed')
- else:
- for j from 0 <= j < k:
- outbuf[i, j] = values[idx, j]
- else:
- fv = fill_value
- for i from 0 <= i < n:
- idx = indexer[i]
- if idx == -1:
- for j from 0 <= j < k:
- outbuf[i, j] = fv
- else:
- for j from 0 <= j < k:
- outbuf[i, j] = values[idx, j]
+ fv = fill_value
+
+ IF False:
+ cdef:
+ float64_t *v, *o
+
+ if values.flags.c_contiguous and out.flags.c_contiguous:
+ for i from 0 <= i < n:
+ idx = indexer[i]
+ if idx == -1:
+ for j from 0 <= j < k:
+ outbuf[i, j] = fv
+ else:
+ v = &values[idx, 0]
+ o = &outbuf[i, 0]
+ memmove(o, v, <size_t>(sizeof(float64_t) * k))
+ return
+
+ for i from 0 <= i < n:
+ idx = indexer[i]
+ if idx == -1:
+ for j from 0 <= j < k:
+ outbuf[i, j] = fv
+ else:
+ for j from 0 <= j < k:
+ outbuf[i, j] = values[idx, j]
@cython.wraparound(False)
@cython.boundscheck(False)
@@ -2922,25 +2814,32 @@ def take_2d_axis0_int16_int16(ndarray[int16_t, ndim=2] values,
n = len(indexer)
k = values.shape[1]
- if True and _checknan(fill_value):
- for i from 0 <= i < n:
- idx = indexer[i]
- if idx == -1:
- for j from 0 <= j < k:
- raise ValueError('No NA values allowed')
- else:
- for j from 0 <= j < k:
- outbuf[i, j] = values[idx, j]
- else:
- fv = fill_value
- for i from 0 <= i < n:
- idx = indexer[i]
- if idx == -1:
- for j from 0 <= j < k:
- outbuf[i, j] = fv
- else:
- for j from 0 <= j < k:
- outbuf[i, j] = values[idx, j]
+ fv = fill_value
+
+ IF True:
+ cdef:
+ int16_t *v, *o
+
+ if values.flags.c_contiguous and out.flags.c_contiguous:
+ for i from 0 <= i < n:
+ idx = indexer[i]
+ if idx == -1:
+ for j from 0 <= j < k:
+ outbuf[i, j] = fv
+ else:
+ v = &values[idx, 0]
+ o = &outbuf[i, 0]
+ memmove(o, v, <size_t>(sizeof(int16_t) * k))
+ return
+
+ for i from 0 <= i < n:
+ idx = indexer[i]
+ if idx == -1:
+ for j from 0 <= j < k:
+ outbuf[i, j] = fv
+ else:
+ for j from 0 <= j < k:
+ outbuf[i, j] = values[idx, j]
@cython.wraparound(False)
@cython.boundscheck(False)
@@ -2955,25 +2854,32 @@ def take_2d_axis0_int16_int32(ndarray[int16_t, ndim=2] values,
n = len(indexer)
k = values.shape[1]
- if True and _checknan(fill_value):
- for i from 0 <= i < n:
- idx = indexer[i]
- if idx == -1:
- for j from 0 <= j < k:
- raise ValueError('No NA values allowed')
- else:
- for j from 0 <= j < k:
- outbuf[i, j] = values[idx, j]
- else:
- fv = fill_value
- for i from 0 <= i < n:
- idx = indexer[i]
- if idx == -1:
- for j from 0 <= j < k:
- outbuf[i, j] = fv
- else:
- for j from 0 <= j < k:
- outbuf[i, j] = values[idx, j]
+ fv = fill_value
+
+ IF False:
+ cdef:
+ int32_t *v, *o
+
+ if values.flags.c_contiguous and out.flags.c_contiguous:
+ for i from 0 <= i < n:
+ idx = indexer[i]
+ if idx == -1:
+ for j from 0 <= j < k:
+ outbuf[i, j] = fv
+ else:
+ v = &values[idx, 0]
+ o = &outbuf[i, 0]
+ memmove(o, v, <size_t>(sizeof(int32_t) * k))
+ return
+
+ for i from 0 <= i < n:
+ idx = indexer[i]
+ if idx == -1:
+ for j from 0 <= j < k:
+ outbuf[i, j] = fv
+ else:
+ for j from 0 <= j < k:
+ outbuf[i, j] = values[idx, j]
@cython.wraparound(False)
@cython.boundscheck(False)
@@ -2988,25 +2894,32 @@ def take_2d_axis0_int16_int64(ndarray[int16_t, ndim=2] values,
n = len(indexer)
k = values.shape[1]
- if True and _checknan(fill_value):
- for i from 0 <= i < n:
- idx = indexer[i]
- if idx == -1:
- for j from 0 <= j < k:
- raise ValueError('No NA values allowed')
- else:
- for j from 0 <= j < k:
- outbuf[i, j] = values[idx, j]
- else:
- fv = fill_value
- for i from 0 <= i < n:
- idx = indexer[i]
- if idx == -1:
- for j from 0 <= j < k:
- outbuf[i, j] = fv
- else:
- for j from 0 <= j < k:
- outbuf[i, j] = values[idx, j]
+ fv = fill_value
+
+ IF False:
+ cdef:
+ int64_t *v, *o
+
+ if values.flags.c_contiguous and out.flags.c_contiguous:
+ for i from 0 <= i < n:
+ idx = indexer[i]
+ if idx == -1:
+ for j from 0 <= j < k:
+ outbuf[i, j] = fv
+ else:
+ v = &values[idx, 0]
+ o = &outbuf[i, 0]
+ memmove(o, v, <size_t>(sizeof(int64_t) * k))
+ return
+
+ for i from 0 <= i < n:
+ idx = indexer[i]
+ if idx == -1:
+ for j from 0 <= j < k:
+ outbuf[i, j] = fv
+ else:
+ for j from 0 <= j < k:
+ outbuf[i, j] = values[idx, j]
@cython.wraparound(False)
@cython.boundscheck(False)
@@ -3021,25 +2934,32 @@ def take_2d_axis0_int16_float64(ndarray[int16_t, ndim=2] values,
n = len(indexer)
k = values.shape[1]
- if False and _checknan(fill_value):
- for i from 0 <= i < n:
- idx = indexer[i]
- if idx == -1:
- for j from 0 <= j < k:
- raise ValueError('No NA values allowed')
- else:
- for j from 0 <= j < k:
- outbuf[i, j] = values[idx, j]
- else:
- fv = fill_value
- for i from 0 <= i < n:
- idx = indexer[i]
- if idx == -1:
- for j from 0 <= j < k:
- outbuf[i, j] = fv
- else:
- for j from 0 <= j < k:
- outbuf[i, j] = values[idx, j]
+ fv = fill_value
+
+ IF False:
+ cdef:
+ float64_t *v, *o
+
+ if values.flags.c_contiguous and out.flags.c_contiguous:
+ for i from 0 <= i < n:
+ idx = indexer[i]
+ if idx == -1:
+ for j from 0 <= j < k:
+ outbuf[i, j] = fv
+ else:
+ v = &values[idx, 0]
+ o = &outbuf[i, 0]
+ memmove(o, v, <size_t>(sizeof(float64_t) * k))
+ return
+
+ for i from 0 <= i < n:
+ idx = indexer[i]
+ if idx == -1:
+ for j from 0 <= j < k:
+ outbuf[i, j] = fv
+ else:
+ for j from 0 <= j < k:
+ outbuf[i, j] = values[idx, j]
@cython.wraparound(False)
@cython.boundscheck(False)
@@ -3054,25 +2974,32 @@ def take_2d_axis0_int32_int32(ndarray[int32_t, ndim=2] values,
n = len(indexer)
k = values.shape[1]
- if True and _checknan(fill_value):
- for i from 0 <= i < n:
- idx = indexer[i]
- if idx == -1:
- for j from 0 <= j < k:
- raise ValueError('No NA values allowed')
- else:
- for j from 0 <= j < k:
- outbuf[i, j] = values[idx, j]
- else:
- fv = fill_value
- for i from 0 <= i < n:
- idx = indexer[i]
- if idx == -1:
- for j from 0 <= j < k:
- outbuf[i, j] = fv
- else:
- for j from 0 <= j < k:
- outbuf[i, j] = values[idx, j]
+ fv = fill_value
+
+ IF True:
+ cdef:
+ int32_t *v, *o
+
+ if values.flags.c_contiguous and out.flags.c_contiguous:
+ for i from 0 <= i < n:
+ idx = indexer[i]
+ if idx == -1:
+ for j from 0 <= j < k:
+ outbuf[i, j] = fv
+ else:
+ v = &values[idx, 0]
+ o = &outbuf[i, 0]
+ memmove(o, v, <size_t>(sizeof(int32_t) * k))
+ return
+
+ for i from 0 <= i < n:
+ idx = indexer[i]
+ if idx == -1:
+ for j from 0 <= j < k:
+ outbuf[i, j] = fv
+ else:
+ for j from 0 <= j < k:
+ outbuf[i, j] = values[idx, j]
@cython.wraparound(False)
@cython.boundscheck(False)
@@ -3087,25 +3014,32 @@ def take_2d_axis0_int32_int64(ndarray[int32_t, ndim=2] values,
n = len(indexer)
k = values.shape[1]
- if True and _checknan(fill_value):
- for i from 0 <= i < n:
- idx = indexer[i]
- if idx == -1:
- for j from 0 <= j < k:
- raise ValueError('No NA values allowed')
- else:
- for j from 0 <= j < k:
- outbuf[i, j] = values[idx, j]
- else:
- fv = fill_value
- for i from 0 <= i < n:
- idx = indexer[i]
- if idx == -1:
- for j from 0 <= j < k:
- outbuf[i, j] = fv
- else:
- for j from 0 <= j < k:
- outbuf[i, j] = values[idx, j]
+ fv = fill_value
+
+ IF False:
+ cdef:
+ int64_t *v, *o
+
+ if values.flags.c_contiguous and out.flags.c_contiguous:
+ for i from 0 <= i < n:
+ idx = indexer[i]
+ if idx == -1:
+ for j from 0 <= j < k:
+ outbuf[i, j] = fv
+ else:
+ v = &values[idx, 0]
+ o = &outbuf[i, 0]
+ memmove(o, v, <size_t>(sizeof(int64_t) * k))
+ return
+
+ for i from 0 <= i < n:
+ idx = indexer[i]
+ if idx == -1:
+ for j from 0 <= j < k:
+ outbuf[i, j] = fv
+ else:
+ for j from 0 <= j < k:
+ outbuf[i, j] = values[idx, j]
@cython.wraparound(False)
@cython.boundscheck(False)
@@ -3120,25 +3054,32 @@ def take_2d_axis0_int32_float64(ndarray[int32_t, ndim=2] values,
n = len(indexer)
k = values.shape[1]
- if False and _checknan(fill_value):
- for i from 0 <= i < n:
- idx = indexer[i]
- if idx == -1:
- for j from 0 <= j < k:
- raise ValueError('No NA values allowed')
- else:
- for j from 0 <= j < k:
- outbuf[i, j] = values[idx, j]
- else:
- fv = fill_value
- for i from 0 <= i < n:
- idx = indexer[i]
- if idx == -1:
- for j from 0 <= j < k:
- outbuf[i, j] = fv
- else:
- for j from 0 <= j < k:
- outbuf[i, j] = values[idx, j]
+ fv = fill_value
+
+ IF False:
+ cdef:
+ float64_t *v, *o
+
+ if values.flags.c_contiguous and out.flags.c_contiguous:
+ for i from 0 <= i < n:
+ idx = indexer[i]
+ if idx == -1:
+ for j from 0 <= j < k:
+ outbuf[i, j] = fv
+ else:
+ v = &values[idx, 0]
+ o = &outbuf[i, 0]
+ memmove(o, v, <size_t>(sizeof(float64_t) * k))
+ return
+
+ for i from 0 <= i < n:
+ idx = indexer[i]
+ if idx == -1:
+ for j from 0 <= j < k:
+ outbuf[i, j] = fv
+ else:
+ for j from 0 <= j < k:
+ outbuf[i, j] = values[idx, j]
@cython.wraparound(False)
@cython.boundscheck(False)
@@ -3153,25 +3094,32 @@ def take_2d_axis0_int64_int64(ndarray[int64_t, ndim=2] values,
n = len(indexer)
k = values.shape[1]
- if True and _checknan(fill_value):
- for i from 0 <= i < n:
- idx = indexer[i]
- if idx == -1:
- for j from 0 <= j < k:
- raise ValueError('No NA values allowed')
- else:
- for j from 0 <= j < k:
- outbuf[i, j] = values[idx, j]
- else:
- fv = fill_value
- for i from 0 <= i < n:
- idx = indexer[i]
- if idx == -1:
- for j from 0 <= j < k:
- outbuf[i, j] = fv
- else:
- for j from 0 <= j < k:
- outbuf[i, j] = values[idx, j]
+ fv = fill_value
+
+ IF True:
+ cdef:
+ int64_t *v, *o
+
+ if values.flags.c_contiguous and out.flags.c_contiguous:
+ for i from 0 <= i < n:
+ idx = indexer[i]
+ if idx == -1:
+ for j from 0 <= j < k:
+ outbuf[i, j] = fv
+ else:
+ v = &values[idx, 0]
+ o = &outbuf[i, 0]
+ memmove(o, v, <size_t>(sizeof(int64_t) * k))
+ return
+
+ for i from 0 <= i < n:
+ idx = indexer[i]
+ if idx == -1:
+ for j from 0 <= j < k:
+ outbuf[i, j] = fv
+ else:
+ for j from 0 <= j < k:
+ outbuf[i, j] = values[idx, j]
@cython.wraparound(False)
@cython.boundscheck(False)
@@ -3186,25 +3134,32 @@ def take_2d_axis0_int64_float64(ndarray[int64_t, ndim=2] values,
n = len(indexer)
k = values.shape[1]
- if False and _checknan(fill_value):
- for i from 0 <= i < n:
- idx = indexer[i]
- if idx == -1:
- for j from 0 <= j < k:
- raise ValueError('No NA values allowed')
- else:
- for j from 0 <= j < k:
- outbuf[i, j] = values[idx, j]
- else:
- fv = fill_value
- for i from 0 <= i < n:
- idx = indexer[i]
- if idx == -1:
- for j from 0 <= j < k:
- outbuf[i, j] = fv
- else:
- for j from 0 <= j < k:
- outbuf[i, j] = values[idx, j]
+ fv = fill_value
+
+ IF False:
+ cdef:
+ float64_t *v, *o
+
+ if values.flags.c_contiguous and out.flags.c_contiguous:
+ for i from 0 <= i < n:
+ idx = indexer[i]
+ if idx == -1:
+ for j from 0 <= j < k:
+ outbuf[i, j] = fv
+ else:
+ v = &values[idx, 0]
+ o = &outbuf[i, 0]
+ memmove(o, v, <size_t>(sizeof(float64_t) * k))
+ return
+
+ for i from 0 <= i < n:
+ idx = indexer[i]
+ if idx == -1:
+ for j from 0 <= j < k:
+ outbuf[i, j] = fv
+ else:
+ for j from 0 <= j < k:
+ outbuf[i, j] = values[idx, j]
@cython.wraparound(False)
@cython.boundscheck(False)
@@ -3219,25 +3174,32 @@ def take_2d_axis0_float32_float32(ndarray[float32_t, ndim=2] values,
n = len(indexer)
k = values.shape[1]
- if False and _checknan(fill_value):
- for i from 0 <= i < n:
- idx = indexer[i]
- if idx == -1:
- for j from 0 <= j < k:
- raise ValueError('No NA values allowed')
- else:
- for j from 0 <= j < k:
- outbuf[i, j] = values[idx, j]
- else:
- fv = fill_value
- for i from 0 <= i < n:
- idx = indexer[i]
- if idx == -1:
- for j from 0 <= j < k:
- outbuf[i, j] = fv
- else:
- for j from 0 <= j < k:
- outbuf[i, j] = values[idx, j]
+ fv = fill_value
+
+ IF True:
+ cdef:
+ float32_t *v, *o
+
+ if values.flags.c_contiguous and out.flags.c_contiguous:
+ for i from 0 <= i < n:
+ idx = indexer[i]
+ if idx == -1:
+ for j from 0 <= j < k:
+ outbuf[i, j] = fv
+ else:
+ v = &values[idx, 0]
+ o = &outbuf[i, 0]
+ memmove(o, v, <size_t>(sizeof(float32_t) * k))
+ return
+
+ for i from 0 <= i < n:
+ idx = indexer[i]
+ if idx == -1:
+ for j from 0 <= j < k:
+ outbuf[i, j] = fv
+ else:
+ for j from 0 <= j < k:
+ outbuf[i, j] = values[idx, j]
@cython.wraparound(False)
@cython.boundscheck(False)
@@ -3252,25 +3214,32 @@ def take_2d_axis0_float32_float64(ndarray[float32_t, ndim=2] values,
n = len(indexer)
k = values.shape[1]
- if False and _checknan(fill_value):
- for i from 0 <= i < n:
- idx = indexer[i]
- if idx == -1:
- for j from 0 <= j < k:
- raise ValueError('No NA values allowed')
- else:
- for j from 0 <= j < k:
- outbuf[i, j] = values[idx, j]
- else:
- fv = fill_value
- for i from 0 <= i < n:
- idx = indexer[i]
- if idx == -1:
- for j from 0 <= j < k:
- outbuf[i, j] = fv
- else:
- for j from 0 <= j < k:
- outbuf[i, j] = values[idx, j]
+ fv = fill_value
+
+ IF False:
+ cdef:
+ float64_t *v, *o
+
+ if values.flags.c_contiguous and out.flags.c_contiguous:
+ for i from 0 <= i < n:
+ idx = indexer[i]
+ if idx == -1:
+ for j from 0 <= j < k:
+ outbuf[i, j] = fv
+ else:
+ v = &values[idx, 0]
+ o = &outbuf[i, 0]
+ memmove(o, v, <size_t>(sizeof(float64_t) * k))
+ return
+
+ for i from 0 <= i < n:
+ idx = indexer[i]
+ if idx == -1:
+ for j from 0 <= j < k:
+ outbuf[i, j] = fv
+ else:
+ for j from 0 <= j < k:
+ outbuf[i, j] = values[idx, j]
@cython.wraparound(False)
@cython.boundscheck(False)
@@ -3285,25 +3254,32 @@ def take_2d_axis0_float64_float64(ndarray[float64_t, ndim=2] values,
n = len(indexer)
k = values.shape[1]
- if False and _checknan(fill_value):
- for i from 0 <= i < n:
- idx = indexer[i]
- if idx == -1:
- for j from 0 <= j < k:
- raise ValueError('No NA values allowed')
- else:
- for j from 0 <= j < k:
- outbuf[i, j] = values[idx, j]
- else:
- fv = fill_value
- for i from 0 <= i < n:
- idx = indexer[i]
- if idx == -1:
- for j from 0 <= j < k:
- outbuf[i, j] = fv
- else:
- for j from 0 <= j < k:
- outbuf[i, j] = values[idx, j]
+ fv = fill_value
+
+ IF True:
+ cdef:
+ float64_t *v, *o
+
+ if values.flags.c_contiguous and out.flags.c_contiguous:
+ for i from 0 <= i < n:
+ idx = indexer[i]
+ if idx == -1:
+ for j from 0 <= j < k:
+ outbuf[i, j] = fv
+ else:
+ v = &values[idx, 0]
+ o = &outbuf[i, 0]
+ memmove(o, v, <size_t>(sizeof(float64_t) * k))
+ return
+
+ for i from 0 <= i < n:
+ idx = indexer[i]
+ if idx == -1:
+ for j from 0 <= j < k:
+ outbuf[i, j] = fv
+ else:
+ for j from 0 <= j < k:
+ outbuf[i, j] = values[idx, j]
@cython.wraparound(False)
@cython.boundscheck(False)
@@ -3318,25 +3294,32 @@ def take_2d_axis0_object_object(ndarray[object, ndim=2] values,
n = len(indexer)
k = values.shape[1]
- if False and _checknan(fill_value):
- for i from 0 <= i < n:
- idx = indexer[i]
- if idx == -1:
- for j from 0 <= j < k:
- raise ValueError('No NA values allowed')
- else:
- for j from 0 <= j < k:
- outbuf[i, j] = values[idx, j]
- else:
- fv = fill_value
- for i from 0 <= i < n:
- idx = indexer[i]
- if idx == -1:
- for j from 0 <= j < k:
- outbuf[i, j] = fv
- else:
- for j from 0 <= j < k:
- outbuf[i, j] = values[idx, j]
+ fv = fill_value
+
+ IF False:
+ cdef:
+ object *v, *o
+
+ if values.flags.c_contiguous and out.flags.c_contiguous:
+ for i from 0 <= i < n:
+ idx = indexer[i]
+ if idx == -1:
+ for j from 0 <= j < k:
+ outbuf[i, j] = fv
+ else:
+ v = &values[idx, 0]
+ o = &outbuf[i, 0]
+ memmove(o, v, <size_t>(sizeof(object) * k))
+ return
+
+ for i from 0 <= i < n:
+ idx = indexer[i]
+ if idx == -1:
+ for j from 0 <= j < k:
+ outbuf[i, j] = fv
+ else:
+ for j from 0 <= j < k:
+ outbuf[i, j] = values[idx, j]
@cython.wraparound(False)
@@ -3351,26 +3334,33 @@ def take_2d_axis1_bool_bool(ndarray[uint8_t, ndim=2] values,
n = len(values)
k = len(indexer)
-
- if True and _checknan(fill_value):
- for j from 0 <= j < k:
- idx = indexer[j]
- if idx == -1:
- for i from 0 <= i < n:
- raise ValueError('No NA values allowed')
- else:
- for i from 0 <= i < n:
- outbuf[i, j] = values[i, idx]
- else:
- fv = fill_value
- for j from 0 <= j < k:
- idx = indexer[j]
- if idx == -1:
- for i from 0 <= i < n:
- outbuf[i, j] = fv
- else:
- for i from 0 <= i < n:
- outbuf[i, j] = values[i, idx]
+
+ fv = fill_value
+
+ IF True:
+ cdef:
+ uint8_t *v, *o
+
+ if values.flags.f_contiguous and out.flags.f_contiguous:
+ for j from 0 <= j < k:
+ idx = indexer[j]
+ if idx == -1:
+ for i from 0 <= i < n:
+ outbuf[i, j] = fv
+ else:
+ v = &values[0, idx]
+ o = &outbuf[0, j]
+ memmove(o, v, <size_t>(sizeof(uint8_t) * n))
+ return
+
+ for j from 0 <= j < k:
+ idx = indexer[j]
+ if idx == -1:
+ for i from 0 <= i < n:
+ outbuf[i, j] = fv
+ else:
+ for i from 0 <= i < n:
+ outbuf[i, j] = values[i, idx]
@cython.wraparound(False)
@cython.boundscheck(False)
@@ -3384,26 +3374,33 @@ def take_2d_axis1_bool_object(ndarray[uint8_t, ndim=2] values,
n = len(values)
k = len(indexer)
-
- if False and _checknan(fill_value):
- for j from 0 <= j < k:
- idx = indexer[j]
- if idx == -1:
- for i from 0 <= i < n:
- raise ValueError('No NA values allowed')
- else:
- for i from 0 <= i < n:
- outbuf[i, j] = True if values[i, idx] > 0 else False
- else:
- fv = fill_value
- for j from 0 <= j < k:
- idx = indexer[j]
- if idx == -1:
- for i from 0 <= i < n:
- outbuf[i, j] = fv
- else:
- for i from 0 <= i < n:
- outbuf[i, j] = True if values[i, idx] > 0 else False
+
+ fv = fill_value
+
+ IF False:
+ cdef:
+ object *v, *o
+
+ if values.flags.f_contiguous and out.flags.f_contiguous:
+ for j from 0 <= j < k:
+ idx = indexer[j]
+ if idx == -1:
+ for i from 0 <= i < n:
+ outbuf[i, j] = fv
+ else:
+ v = &values[0, idx]
+ o = &outbuf[0, j]
+ memmove(o, v, <size_t>(sizeof(object) * n))
+ return
+
+ for j from 0 <= j < k:
+ idx = indexer[j]
+ if idx == -1:
+ for i from 0 <= i < n:
+ outbuf[i, j] = fv
+ else:
+ for i from 0 <= i < n:
+ outbuf[i, j] = True if values[i, idx] > 0 else False
@cython.wraparound(False)
@cython.boundscheck(False)
@@ -3417,26 +3414,33 @@ def take_2d_axis1_int8_int8(ndarray[int8_t, ndim=2] values,
n = len(values)
k = len(indexer)
-
- if True and _checknan(fill_value):
- for j from 0 <= j < k:
- idx = indexer[j]
- if idx == -1:
- for i from 0 <= i < n:
- raise ValueError('No NA values allowed')
- else:
- for i from 0 <= i < n:
- outbuf[i, j] = values[i, idx]
- else:
- fv = fill_value
- for j from 0 <= j < k:
- idx = indexer[j]
- if idx == -1:
- for i from 0 <= i < n:
- outbuf[i, j] = fv
- else:
- for i from 0 <= i < n:
- outbuf[i, j] = values[i, idx]
+
+ fv = fill_value
+
+ IF True:
+ cdef:
+ int8_t *v, *o
+
+ if values.flags.f_contiguous and out.flags.f_contiguous:
+ for j from 0 <= j < k:
+ idx = indexer[j]
+ if idx == -1:
+ for i from 0 <= i < n:
+ outbuf[i, j] = fv
+ else:
+ v = &values[0, idx]
+ o = &outbuf[0, j]
+ memmove(o, v, <size_t>(sizeof(int8_t) * n))
+ return
+
+ for j from 0 <= j < k:
+ idx = indexer[j]
+ if idx == -1:
+ for i from 0 <= i < n:
+ outbuf[i, j] = fv
+ else:
+ for i from 0 <= i < n:
+ outbuf[i, j] = values[i, idx]
@cython.wraparound(False)
@cython.boundscheck(False)
@@ -3450,26 +3454,33 @@ def take_2d_axis1_int8_int32(ndarray[int8_t, ndim=2] values,
n = len(values)
k = len(indexer)
-
- if True and _checknan(fill_value):
- for j from 0 <= j < k:
- idx = indexer[j]
- if idx == -1:
- for i from 0 <= i < n:
- raise ValueError('No NA values allowed')
- else:
- for i from 0 <= i < n:
- outbuf[i, j] = values[i, idx]
- else:
- fv = fill_value
- for j from 0 <= j < k:
- idx = indexer[j]
- if idx == -1:
- for i from 0 <= i < n:
- outbuf[i, j] = fv
- else:
- for i from 0 <= i < n:
- outbuf[i, j] = values[i, idx]
+
+ fv = fill_value
+
+ IF False:
+ cdef:
+ int32_t *v, *o
+
+ if values.flags.f_contiguous and out.flags.f_contiguous:
+ for j from 0 <= j < k:
+ idx = indexer[j]
+ if idx == -1:
+ for i from 0 <= i < n:
+ outbuf[i, j] = fv
+ else:
+ v = &values[0, idx]
+ o = &outbuf[0, j]
+ memmove(o, v, <size_t>(sizeof(int32_t) * n))
+ return
+
+ for j from 0 <= j < k:
+ idx = indexer[j]
+ if idx == -1:
+ for i from 0 <= i < n:
+ outbuf[i, j] = fv
+ else:
+ for i from 0 <= i < n:
+ outbuf[i, j] = values[i, idx]
@cython.wraparound(False)
@cython.boundscheck(False)
@@ -3483,26 +3494,33 @@ def take_2d_axis1_int8_int64(ndarray[int8_t, ndim=2] values,
n = len(values)
k = len(indexer)
-
- if True and _checknan(fill_value):
- for j from 0 <= j < k:
- idx = indexer[j]
- if idx == -1:
- for i from 0 <= i < n:
- raise ValueError('No NA values allowed')
- else:
- for i from 0 <= i < n:
- outbuf[i, j] = values[i, idx]
- else:
- fv = fill_value
- for j from 0 <= j < k:
- idx = indexer[j]
- if idx == -1:
- for i from 0 <= i < n:
- outbuf[i, j] = fv
- else:
- for i from 0 <= i < n:
- outbuf[i, j] = values[i, idx]
+
+ fv = fill_value
+
+ IF False:
+ cdef:
+ int64_t *v, *o
+
+ if values.flags.f_contiguous and out.flags.f_contiguous:
+ for j from 0 <= j < k:
+ idx = indexer[j]
+ if idx == -1:
+ for i from 0 <= i < n:
+ outbuf[i, j] = fv
+ else:
+ v = &values[0, idx]
+ o = &outbuf[0, j]
+ memmove(o, v, <size_t>(sizeof(int64_t) * n))
+ return
+
+ for j from 0 <= j < k:
+ idx = indexer[j]
+ if idx == -1:
+ for i from 0 <= i < n:
+ outbuf[i, j] = fv
+ else:
+ for i from 0 <= i < n:
+ outbuf[i, j] = values[i, idx]
@cython.wraparound(False)
@cython.boundscheck(False)
@@ -3516,26 +3534,33 @@ def take_2d_axis1_int8_float64(ndarray[int8_t, ndim=2] values,
n = len(values)
k = len(indexer)
-
- if False and _checknan(fill_value):
- for j from 0 <= j < k:
- idx = indexer[j]
- if idx == -1:
- for i from 0 <= i < n:
- raise ValueError('No NA values allowed')
- else:
- for i from 0 <= i < n:
- outbuf[i, j] = values[i, idx]
- else:
- fv = fill_value
- for j from 0 <= j < k:
- idx = indexer[j]
- if idx == -1:
- for i from 0 <= i < n:
- outbuf[i, j] = fv
- else:
- for i from 0 <= i < n:
- outbuf[i, j] = values[i, idx]
+
+ fv = fill_value
+
+ IF False:
+ cdef:
+ float64_t *v, *o
+
+ if values.flags.f_contiguous and out.flags.f_contiguous:
+ for j from 0 <= j < k:
+ idx = indexer[j]
+ if idx == -1:
+ for i from 0 <= i < n:
+ outbuf[i, j] = fv
+ else:
+ v = &values[0, idx]
+ o = &outbuf[0, j]
+ memmove(o, v, <size_t>(sizeof(float64_t) * n))
+ return
+
+ for j from 0 <= j < k:
+ idx = indexer[j]
+ if idx == -1:
+ for i from 0 <= i < n:
+ outbuf[i, j] = fv
+ else:
+ for i from 0 <= i < n:
+ outbuf[i, j] = values[i, idx]
@cython.wraparound(False)
@cython.boundscheck(False)
@@ -3549,26 +3574,33 @@ def take_2d_axis1_int16_int16(ndarray[int16_t, ndim=2] values,
n = len(values)
k = len(indexer)
-
- if True and _checknan(fill_value):
- for j from 0 <= j < k:
- idx = indexer[j]
- if idx == -1:
- for i from 0 <= i < n:
- raise ValueError('No NA values allowed')
- else:
- for i from 0 <= i < n:
- outbuf[i, j] = values[i, idx]
- else:
- fv = fill_value
- for j from 0 <= j < k:
- idx = indexer[j]
- if idx == -1:
- for i from 0 <= i < n:
- outbuf[i, j] = fv
- else:
- for i from 0 <= i < n:
- outbuf[i, j] = values[i, idx]
+
+ fv = fill_value
+
+ IF True:
+ cdef:
+ int16_t *v, *o
+
+ if values.flags.f_contiguous and out.flags.f_contiguous:
+ for j from 0 <= j < k:
+ idx = indexer[j]
+ if idx == -1:
+ for i from 0 <= i < n:
+ outbuf[i, j] = fv
+ else:
+ v = &values[0, idx]
+ o = &outbuf[0, j]
+ memmove(o, v, <size_t>(sizeof(int16_t) * n))
+ return
+
+ for j from 0 <= j < k:
+ idx = indexer[j]
+ if idx == -1:
+ for i from 0 <= i < n:
+ outbuf[i, j] = fv
+ else:
+ for i from 0 <= i < n:
+ outbuf[i, j] = values[i, idx]
@cython.wraparound(False)
@cython.boundscheck(False)
@@ -3582,26 +3614,33 @@ def take_2d_axis1_int16_int32(ndarray[int16_t, ndim=2] values,
n = len(values)
k = len(indexer)
-
- if True and _checknan(fill_value):
- for j from 0 <= j < k:
- idx = indexer[j]
- if idx == -1:
- for i from 0 <= i < n:
- raise ValueError('No NA values allowed')
- else:
- for i from 0 <= i < n:
- outbuf[i, j] = values[i, idx]
- else:
- fv = fill_value
- for j from 0 <= j < k:
- idx = indexer[j]
- if idx == -1:
- for i from 0 <= i < n:
- outbuf[i, j] = fv
- else:
- for i from 0 <= i < n:
- outbuf[i, j] = values[i, idx]
+
+ fv = fill_value
+
+ IF False:
+ cdef:
+ int32_t *v, *o
+
+ if values.flags.f_contiguous and out.flags.f_contiguous:
+ for j from 0 <= j < k:
+ idx = indexer[j]
+ if idx == -1:
+ for i from 0 <= i < n:
+ outbuf[i, j] = fv
+ else:
+ v = &values[0, idx]
+ o = &outbuf[0, j]
+ memmove(o, v, <size_t>(sizeof(int32_t) * n))
+ return
+
+ for j from 0 <= j < k:
+ idx = indexer[j]
+ if idx == -1:
+ for i from 0 <= i < n:
+ outbuf[i, j] = fv
+ else:
+ for i from 0 <= i < n:
+ outbuf[i, j] = values[i, idx]
@cython.wraparound(False)
@cython.boundscheck(False)
@@ -3615,26 +3654,33 @@ def take_2d_axis1_int16_int64(ndarray[int16_t, ndim=2] values,
n = len(values)
k = len(indexer)
-
- if True and _checknan(fill_value):
- for j from 0 <= j < k:
- idx = indexer[j]
- if idx == -1:
- for i from 0 <= i < n:
- raise ValueError('No NA values allowed')
- else:
- for i from 0 <= i < n:
- outbuf[i, j] = values[i, idx]
- else:
- fv = fill_value
- for j from 0 <= j < k:
- idx = indexer[j]
- if idx == -1:
- for i from 0 <= i < n:
- outbuf[i, j] = fv
- else:
- for i from 0 <= i < n:
- outbuf[i, j] = values[i, idx]
+
+ fv = fill_value
+
+ IF False:
+ cdef:
+ int64_t *v, *o
+
+ if values.flags.f_contiguous and out.flags.f_contiguous:
+ for j from 0 <= j < k:
+ idx = indexer[j]
+ if idx == -1:
+ for i from 0 <= i < n:
+ outbuf[i, j] = fv
+ else:
+ v = &values[0, idx]
+ o = &outbuf[0, j]
+ memmove(o, v, <size_t>(sizeof(int64_t) * n))
+ return
+
+ for j from 0 <= j < k:
+ idx = indexer[j]
+ if idx == -1:
+ for i from 0 <= i < n:
+ outbuf[i, j] = fv
+ else:
+ for i from 0 <= i < n:
+ outbuf[i, j] = values[i, idx]
@cython.wraparound(False)
@cython.boundscheck(False)
@@ -3648,26 +3694,33 @@ def take_2d_axis1_int16_float64(ndarray[int16_t, ndim=2] values,
n = len(values)
k = len(indexer)
-
- if False and _checknan(fill_value):
- for j from 0 <= j < k:
- idx = indexer[j]
- if idx == -1:
- for i from 0 <= i < n:
- raise ValueError('No NA values allowed')
- else:
- for i from 0 <= i < n:
- outbuf[i, j] = values[i, idx]
- else:
- fv = fill_value
- for j from 0 <= j < k:
- idx = indexer[j]
- if idx == -1:
- for i from 0 <= i < n:
- outbuf[i, j] = fv
- else:
- for i from 0 <= i < n:
- outbuf[i, j] = values[i, idx]
+
+ fv = fill_value
+
+ IF False:
+ cdef:
+ float64_t *v, *o
+
+ if values.flags.f_contiguous and out.flags.f_contiguous:
+ for j from 0 <= j < k:
+ idx = indexer[j]
+ if idx == -1:
+ for i from 0 <= i < n:
+ outbuf[i, j] = fv
+ else:
+ v = &values[0, idx]
+ o = &outbuf[0, j]
+ memmove(o, v, <size_t>(sizeof(float64_t) * n))
+ return
+
+ for j from 0 <= j < k:
+ idx = indexer[j]
+ if idx == -1:
+ for i from 0 <= i < n:
+ outbuf[i, j] = fv
+ else:
+ for i from 0 <= i < n:
+ outbuf[i, j] = values[i, idx]
@cython.wraparound(False)
@cython.boundscheck(False)
@@ -3681,26 +3734,33 @@ def take_2d_axis1_int32_int32(ndarray[int32_t, ndim=2] values,
n = len(values)
k = len(indexer)
-
- if True and _checknan(fill_value):
- for j from 0 <= j < k:
- idx = indexer[j]
- if idx == -1:
- for i from 0 <= i < n:
- raise ValueError('No NA values allowed')
- else:
- for i from 0 <= i < n:
- outbuf[i, j] = values[i, idx]
- else:
- fv = fill_value
- for j from 0 <= j < k:
- idx = indexer[j]
- if idx == -1:
- for i from 0 <= i < n:
- outbuf[i, j] = fv
- else:
- for i from 0 <= i < n:
- outbuf[i, j] = values[i, idx]
+
+ fv = fill_value
+
+ IF True:
+ cdef:
+ int32_t *v, *o
+
+ if values.flags.f_contiguous and out.flags.f_contiguous:
+ for j from 0 <= j < k:
+ idx = indexer[j]
+ if idx == -1:
+ for i from 0 <= i < n:
+ outbuf[i, j] = fv
+ else:
+ v = &values[0, idx]
+ o = &outbuf[0, j]
+ memmove(o, v, <size_t>(sizeof(int32_t) * n))
+ return
+
+ for j from 0 <= j < k:
+ idx = indexer[j]
+ if idx == -1:
+ for i from 0 <= i < n:
+ outbuf[i, j] = fv
+ else:
+ for i from 0 <= i < n:
+ outbuf[i, j] = values[i, idx]
@cython.wraparound(False)
@cython.boundscheck(False)
@@ -3714,26 +3774,33 @@ def take_2d_axis1_int32_int64(ndarray[int32_t, ndim=2] values,
n = len(values)
k = len(indexer)
-
- if True and _checknan(fill_value):
- for j from 0 <= j < k:
- idx = indexer[j]
- if idx == -1:
- for i from 0 <= i < n:
- raise ValueError('No NA values allowed')
- else:
- for i from 0 <= i < n:
- outbuf[i, j] = values[i, idx]
- else:
- fv = fill_value
- for j from 0 <= j < k:
- idx = indexer[j]
- if idx == -1:
- for i from 0 <= i < n:
- outbuf[i, j] = fv
- else:
- for i from 0 <= i < n:
- outbuf[i, j] = values[i, idx]
+
+ fv = fill_value
+
+ IF False:
+ cdef:
+ int64_t *v, *o
+
+ if values.flags.f_contiguous and out.flags.f_contiguous:
+ for j from 0 <= j < k:
+ idx = indexer[j]
+ if idx == -1:
+ for i from 0 <= i < n:
+ outbuf[i, j] = fv
+ else:
+ v = &values[0, idx]
+ o = &outbuf[0, j]
+ memmove(o, v, <size_t>(sizeof(int64_t) * n))
+ return
+
+ for j from 0 <= j < k:
+ idx = indexer[j]
+ if idx == -1:
+ for i from 0 <= i < n:
+ outbuf[i, j] = fv
+ else:
+ for i from 0 <= i < n:
+ outbuf[i, j] = values[i, idx]
@cython.wraparound(False)
@cython.boundscheck(False)
@@ -3747,26 +3814,33 @@ def take_2d_axis1_int32_float64(ndarray[int32_t, ndim=2] values,
n = len(values)
k = len(indexer)
-
- if False and _checknan(fill_value):
- for j from 0 <= j < k:
- idx = indexer[j]
- if idx == -1:
- for i from 0 <= i < n:
- raise ValueError('No NA values allowed')
- else:
- for i from 0 <= i < n:
- outbuf[i, j] = values[i, idx]
- else:
- fv = fill_value
- for j from 0 <= j < k:
- idx = indexer[j]
- if idx == -1:
- for i from 0 <= i < n:
- outbuf[i, j] = fv
- else:
- for i from 0 <= i < n:
- outbuf[i, j] = values[i, idx]
+
+ fv = fill_value
+
+ IF False:
+ cdef:
+ float64_t *v, *o
+
+ if values.flags.f_contiguous and out.flags.f_contiguous:
+ for j from 0 <= j < k:
+ idx = indexer[j]
+ if idx == -1:
+ for i from 0 <= i < n:
+ outbuf[i, j] = fv
+ else:
+ v = &values[0, idx]
+ o = &outbuf[0, j]
+ memmove(o, v, <size_t>(sizeof(float64_t) * n))
+ return
+
+ for j from 0 <= j < k:
+ idx = indexer[j]
+ if idx == -1:
+ for i from 0 <= i < n:
+ outbuf[i, j] = fv
+ else:
+ for i from 0 <= i < n:
+ outbuf[i, j] = values[i, idx]
@cython.wraparound(False)
@cython.boundscheck(False)
@@ -3780,26 +3854,33 @@ def take_2d_axis1_int64_int64(ndarray[int64_t, ndim=2] values,
n = len(values)
k = len(indexer)
-
- if True and _checknan(fill_value):
- for j from 0 <= j < k:
- idx = indexer[j]
- if idx == -1:
- for i from 0 <= i < n:
- raise ValueError('No NA values allowed')
- else:
- for i from 0 <= i < n:
- outbuf[i, j] = values[i, idx]
- else:
- fv = fill_value
- for j from 0 <= j < k:
- idx = indexer[j]
- if idx == -1:
- for i from 0 <= i < n:
- outbuf[i, j] = fv
- else:
- for i from 0 <= i < n:
- outbuf[i, j] = values[i, idx]
+
+ fv = fill_value
+
+ IF True:
+ cdef:
+ int64_t *v, *o
+
+ if values.flags.f_contiguous and out.flags.f_contiguous:
+ for j from 0 <= j < k:
+ idx = indexer[j]
+ if idx == -1:
+ for i from 0 <= i < n:
+ outbuf[i, j] = fv
+ else:
+ v = &values[0, idx]
+ o = &outbuf[0, j]
+ memmove(o, v, <size_t>(sizeof(int64_t) * n))
+ return
+
+ for j from 0 <= j < k:
+ idx = indexer[j]
+ if idx == -1:
+ for i from 0 <= i < n:
+ outbuf[i, j] = fv
+ else:
+ for i from 0 <= i < n:
+ outbuf[i, j] = values[i, idx]
@cython.wraparound(False)
@cython.boundscheck(False)
@@ -3813,26 +3894,33 @@ def take_2d_axis1_int64_float64(ndarray[int64_t, ndim=2] values,
n = len(values)
k = len(indexer)
-
- if False and _checknan(fill_value):
- for j from 0 <= j < k:
- idx = indexer[j]
- if idx == -1:
- for i from 0 <= i < n:
- raise ValueError('No NA values allowed')
- else:
- for i from 0 <= i < n:
- outbuf[i, j] = values[i, idx]
- else:
- fv = fill_value
- for j from 0 <= j < k:
- idx = indexer[j]
- if idx == -1:
- for i from 0 <= i < n:
- outbuf[i, j] = fv
- else:
- for i from 0 <= i < n:
- outbuf[i, j] = values[i, idx]
+
+ fv = fill_value
+
+ IF False:
+ cdef:
+ float64_t *v, *o
+
+ if values.flags.f_contiguous and out.flags.f_contiguous:
+ for j from 0 <= j < k:
+ idx = indexer[j]
+ if idx == -1:
+ for i from 0 <= i < n:
+ outbuf[i, j] = fv
+ else:
+ v = &values[0, idx]
+ o = &outbuf[0, j]
+ memmove(o, v, <size_t>(sizeof(float64_t) * n))
+ return
+
+ for j from 0 <= j < k:
+ idx = indexer[j]
+ if idx == -1:
+ for i from 0 <= i < n:
+ outbuf[i, j] = fv
+ else:
+ for i from 0 <= i < n:
+ outbuf[i, j] = values[i, idx]
@cython.wraparound(False)
@cython.boundscheck(False)
@@ -3846,26 +3934,33 @@ def take_2d_axis1_float32_float32(ndarray[float32_t, ndim=2] values,
n = len(values)
k = len(indexer)
-
- if False and _checknan(fill_value):
- for j from 0 <= j < k:
- idx = indexer[j]
- if idx == -1:
- for i from 0 <= i < n:
- raise ValueError('No NA values allowed')
- else:
- for i from 0 <= i < n:
- outbuf[i, j] = values[i, idx]
- else:
- fv = fill_value
- for j from 0 <= j < k:
- idx = indexer[j]
- if idx == -1:
- for i from 0 <= i < n:
- outbuf[i, j] = fv
- else:
- for i from 0 <= i < n:
- outbuf[i, j] = values[i, idx]
+
+ fv = fill_value
+
+ IF True:
+ cdef:
+ float32_t *v, *o
+
+ if values.flags.f_contiguous and out.flags.f_contiguous:
+ for j from 0 <= j < k:
+ idx = indexer[j]
+ if idx == -1:
+ for i from 0 <= i < n:
+ outbuf[i, j] = fv
+ else:
+ v = &values[0, idx]
+ o = &outbuf[0, j]
+ memmove(o, v, <size_t>(sizeof(float32_t) * n))
+ return
+
+ for j from 0 <= j < k:
+ idx = indexer[j]
+ if idx == -1:
+ for i from 0 <= i < n:
+ outbuf[i, j] = fv
+ else:
+ for i from 0 <= i < n:
+ outbuf[i, j] = values[i, idx]
@cython.wraparound(False)
@cython.boundscheck(False)
@@ -3879,26 +3974,33 @@ def take_2d_axis1_float32_float64(ndarray[float32_t, ndim=2] values,
n = len(values)
k = len(indexer)
-
- if False and _checknan(fill_value):
- for j from 0 <= j < k:
- idx = indexer[j]
- if idx == -1:
- for i from 0 <= i < n:
- raise ValueError('No NA values allowed')
- else:
- for i from 0 <= i < n:
- outbuf[i, j] = values[i, idx]
- else:
- fv = fill_value
- for j from 0 <= j < k:
- idx = indexer[j]
- if idx == -1:
- for i from 0 <= i < n:
- outbuf[i, j] = fv
- else:
- for i from 0 <= i < n:
- outbuf[i, j] = values[i, idx]
+
+ fv = fill_value
+
+ IF False:
+ cdef:
+ float64_t *v, *o
+
+ if values.flags.f_contiguous and out.flags.f_contiguous:
+ for j from 0 <= j < k:
+ idx = indexer[j]
+ if idx == -1:
+ for i from 0 <= i < n:
+ outbuf[i, j] = fv
+ else:
+ v = &values[0, idx]
+ o = &outbuf[0, j]
+ memmove(o, v, <size_t>(sizeof(float64_t) * n))
+ return
+
+ for j from 0 <= j < k:
+ idx = indexer[j]
+ if idx == -1:
+ for i from 0 <= i < n:
+ outbuf[i, j] = fv
+ else:
+ for i from 0 <= i < n:
+ outbuf[i, j] = values[i, idx]
@cython.wraparound(False)
@cython.boundscheck(False)
@@ -3912,26 +4014,33 @@ def take_2d_axis1_float64_float64(ndarray[float64_t, ndim=2] values,
n = len(values)
k = len(indexer)
-
- if False and _checknan(fill_value):
- for j from 0 <= j < k:
- idx = indexer[j]
- if idx == -1:
- for i from 0 <= i < n:
- raise ValueError('No NA values allowed')
- else:
- for i from 0 <= i < n:
- outbuf[i, j] = values[i, idx]
- else:
- fv = fill_value
- for j from 0 <= j < k:
- idx = indexer[j]
- if idx == -1:
- for i from 0 <= i < n:
- outbuf[i, j] = fv
- else:
- for i from 0 <= i < n:
- outbuf[i, j] = values[i, idx]
+
+ fv = fill_value
+
+ IF True:
+ cdef:
+ float64_t *v, *o
+
+ if values.flags.f_contiguous and out.flags.f_contiguous:
+ for j from 0 <= j < k:
+ idx = indexer[j]
+ if idx == -1:
+ for i from 0 <= i < n:
+ outbuf[i, j] = fv
+ else:
+ v = &values[0, idx]
+ o = &outbuf[0, j]
+ memmove(o, v, <size_t>(sizeof(float64_t) * n))
+ return
+
+ for j from 0 <= j < k:
+ idx = indexer[j]
+ if idx == -1:
+ for i from 0 <= i < n:
+ outbuf[i, j] = fv
+ else:
+ for i from 0 <= i < n:
+ outbuf[i, j] = values[i, idx]
@cython.wraparound(False)
@cython.boundscheck(False)
@@ -3945,26 +4054,33 @@ def take_2d_axis1_object_object(ndarray[object, ndim=2] values,
n = len(values)
k = len(indexer)
-
- if False and _checknan(fill_value):
- for j from 0 <= j < k:
- idx = indexer[j]
- if idx == -1:
- for i from 0 <= i < n:
- raise ValueError('No NA values allowed')
- else:
- for i from 0 <= i < n:
- outbuf[i, j] = values[i, idx]
- else:
- fv = fill_value
- for j from 0 <= j < k:
- idx = indexer[j]
- if idx == -1:
- for i from 0 <= i < n:
- outbuf[i, j] = fv
- else:
- for i from 0 <= i < n:
- outbuf[i, j] = values[i, idx]
+
+ fv = fill_value
+
+ IF False:
+ cdef:
+ object *v, *o
+
+ if values.flags.f_contiguous and out.flags.f_contiguous:
+ for j from 0 <= j < k:
+ idx = indexer[j]
+ if idx == -1:
+ for i from 0 <= i < n:
+ outbuf[i, j] = fv
+ else:
+ v = &values[0, idx]
+ o = &outbuf[0, j]
+ memmove(o, v, <size_t>(sizeof(object) * n))
+ return
+
+ for j from 0 <= j < k:
+ idx = indexer[j]
+ if idx == -1:
+ for i from 0 <= i < n:
+ outbuf[i, j] = fv
+ else:
+ for i from 0 <= i < n:
+ outbuf[i, j] = values[i, idx]
@cython.wraparound(False)
@@ -3982,31 +4098,18 @@ def take_2d_multi_bool_bool(ndarray[uint8_t, ndim=2] values,
n = len(idx0)
k = len(idx1)
- if True and _checknan(fill_value):
- for i from 0 <= i < n:
- idx = idx0[i]
- if idx == -1:
- for j from 0 <= j < k:
- raise ValueError('No NA values allowed')
- else:
- for j from 0 <= j < k:
- if idx1[j] == -1:
- raise ValueError('No NA values allowed')
- else:
- outbuf[i, j] = values[idx, idx1[j]]
- else:
- fv = fill_value
- for i from 0 <= i < n:
- idx = idx0[i]
- if idx == -1:
- for j from 0 <= j < k:
+ fv = fill_value
+ for i from 0 <= i < n:
+ idx = idx0[i]
+ if idx == -1:
+ for j from 0 <= j < k:
+ outbuf[i, j] = fv
+ else:
+ for j from 0 <= j < k:
+ if idx1[j] == -1:
outbuf[i, j] = fv
- else:
- for j from 0 <= j < k:
- if idx1[j] == -1:
- outbuf[i, j] = fv
- else:
- outbuf[i, j] = values[idx, idx1[j]]
+ else:
+ outbuf[i, j] = values[idx, idx1[j]]
@cython.wraparound(False)
@cython.boundscheck(False)
@@ -4023,31 +4126,18 @@ def take_2d_multi_bool_object(ndarray[uint8_t, ndim=2] values,
n = len(idx0)
k = len(idx1)
- if False and _checknan(fill_value):
- for i from 0 <= i < n:
- idx = idx0[i]
- if idx == -1:
- for j from 0 <= j < k:
- raise ValueError('No NA values allowed')
- else:
- for j from 0 <= j < k:
- if idx1[j] == -1:
- raise ValueError('No NA values allowed')
- else:
- outbuf[i, j] = True if values[idx, idx1[j]] > 0 else False
- else:
- fv = fill_value
- for i from 0 <= i < n:
- idx = idx0[i]
- if idx == -1:
- for j from 0 <= j < k:
+ fv = fill_value
+ for i from 0 <= i < n:
+ idx = idx0[i]
+ if idx == -1:
+ for j from 0 <= j < k:
+ outbuf[i, j] = fv
+ else:
+ for j from 0 <= j < k:
+ if idx1[j] == -1:
outbuf[i, j] = fv
- else:
- for j from 0 <= j < k:
- if idx1[j] == -1:
- outbuf[i, j] = fv
- else:
- outbuf[i, j] = True if values[idx, idx1[j]] > 0 else False
+ else:
+ outbuf[i, j] = True if values[idx, idx1[j]] > 0 else False
@cython.wraparound(False)
@cython.boundscheck(False)
@@ -4064,31 +4154,18 @@ def take_2d_multi_int8_int8(ndarray[int8_t, ndim=2] values,
n = len(idx0)
k = len(idx1)
- if True and _checknan(fill_value):
- for i from 0 <= i < n:
- idx = idx0[i]
- if idx == -1:
- for j from 0 <= j < k:
- raise ValueError('No NA values allowed')
- else:
- for j from 0 <= j < k:
- if idx1[j] == -1:
- raise ValueError('No NA values allowed')
- else:
- outbuf[i, j] = values[idx, idx1[j]]
- else:
- fv = fill_value
- for i from 0 <= i < n:
- idx = idx0[i]
- if idx == -1:
- for j from 0 <= j < k:
+ fv = fill_value
+ for i from 0 <= i < n:
+ idx = idx0[i]
+ if idx == -1:
+ for j from 0 <= j < k:
+ outbuf[i, j] = fv
+ else:
+ for j from 0 <= j < k:
+ if idx1[j] == -1:
outbuf[i, j] = fv
- else:
- for j from 0 <= j < k:
- if idx1[j] == -1:
- outbuf[i, j] = fv
- else:
- outbuf[i, j] = values[idx, idx1[j]]
+ else:
+ outbuf[i, j] = values[idx, idx1[j]]
@cython.wraparound(False)
@cython.boundscheck(False)
@@ -4105,31 +4182,18 @@ def take_2d_multi_int8_int32(ndarray[int8_t, ndim=2] values,
n = len(idx0)
k = len(idx1)
- if True and _checknan(fill_value):
- for i from 0 <= i < n:
- idx = idx0[i]
- if idx == -1:
- for j from 0 <= j < k:
- raise ValueError('No NA values allowed')
- else:
- for j from 0 <= j < k:
- if idx1[j] == -1:
- raise ValueError('No NA values allowed')
- else:
- outbuf[i, j] = values[idx, idx1[j]]
- else:
- fv = fill_value
- for i from 0 <= i < n:
- idx = idx0[i]
- if idx == -1:
- for j from 0 <= j < k:
+ fv = fill_value
+ for i from 0 <= i < n:
+ idx = idx0[i]
+ if idx == -1:
+ for j from 0 <= j < k:
+ outbuf[i, j] = fv
+ else:
+ for j from 0 <= j < k:
+ if idx1[j] == -1:
outbuf[i, j] = fv
- else:
- for j from 0 <= j < k:
- if idx1[j] == -1:
- outbuf[i, j] = fv
- else:
- outbuf[i, j] = values[idx, idx1[j]]
+ else:
+ outbuf[i, j] = values[idx, idx1[j]]
@cython.wraparound(False)
@cython.boundscheck(False)
@@ -4146,31 +4210,18 @@ def take_2d_multi_int8_int64(ndarray[int8_t, ndim=2] values,
n = len(idx0)
k = len(idx1)
- if True and _checknan(fill_value):
- for i from 0 <= i < n:
- idx = idx0[i]
- if idx == -1:
- for j from 0 <= j < k:
- raise ValueError('No NA values allowed')
- else:
- for j from 0 <= j < k:
- if idx1[j] == -1:
- raise ValueError('No NA values allowed')
- else:
- outbuf[i, j] = values[idx, idx1[j]]
- else:
- fv = fill_value
- for i from 0 <= i < n:
- idx = idx0[i]
- if idx == -1:
- for j from 0 <= j < k:
+ fv = fill_value
+ for i from 0 <= i < n:
+ idx = idx0[i]
+ if idx == -1:
+ for j from 0 <= j < k:
+ outbuf[i, j] = fv
+ else:
+ for j from 0 <= j < k:
+ if idx1[j] == -1:
outbuf[i, j] = fv
- else:
- for j from 0 <= j < k:
- if idx1[j] == -1:
- outbuf[i, j] = fv
- else:
- outbuf[i, j] = values[idx, idx1[j]]
+ else:
+ outbuf[i, j] = values[idx, idx1[j]]
@cython.wraparound(False)
@cython.boundscheck(False)
@@ -4187,31 +4238,18 @@ def take_2d_multi_int8_float64(ndarray[int8_t, ndim=2] values,
n = len(idx0)
k = len(idx1)
- if False and _checknan(fill_value):
- for i from 0 <= i < n:
- idx = idx0[i]
- if idx == -1:
- for j from 0 <= j < k:
- raise ValueError('No NA values allowed')
- else:
- for j from 0 <= j < k:
- if idx1[j] == -1:
- raise ValueError('No NA values allowed')
- else:
- outbuf[i, j] = values[idx, idx1[j]]
- else:
- fv = fill_value
- for i from 0 <= i < n:
- idx = idx0[i]
- if idx == -1:
- for j from 0 <= j < k:
+ fv = fill_value
+ for i from 0 <= i < n:
+ idx = idx0[i]
+ if idx == -1:
+ for j from 0 <= j < k:
+ outbuf[i, j] = fv
+ else:
+ for j from 0 <= j < k:
+ if idx1[j] == -1:
outbuf[i, j] = fv
- else:
- for j from 0 <= j < k:
- if idx1[j] == -1:
- outbuf[i, j] = fv
- else:
- outbuf[i, j] = values[idx, idx1[j]]
+ else:
+ outbuf[i, j] = values[idx, idx1[j]]
@cython.wraparound(False)
@cython.boundscheck(False)
@@ -4228,31 +4266,18 @@ def take_2d_multi_int16_int16(ndarray[int16_t, ndim=2] values,
n = len(idx0)
k = len(idx1)
- if True and _checknan(fill_value):
- for i from 0 <= i < n:
- idx = idx0[i]
- if idx == -1:
- for j from 0 <= j < k:
- raise ValueError('No NA values allowed')
- else:
- for j from 0 <= j < k:
- if idx1[j] == -1:
- raise ValueError('No NA values allowed')
- else:
- outbuf[i, j] = values[idx, idx1[j]]
- else:
- fv = fill_value
- for i from 0 <= i < n:
- idx = idx0[i]
- if idx == -1:
- for j from 0 <= j < k:
+ fv = fill_value
+ for i from 0 <= i < n:
+ idx = idx0[i]
+ if idx == -1:
+ for j from 0 <= j < k:
+ outbuf[i, j] = fv
+ else:
+ for j from 0 <= j < k:
+ if idx1[j] == -1:
outbuf[i, j] = fv
- else:
- for j from 0 <= j < k:
- if idx1[j] == -1:
- outbuf[i, j] = fv
- else:
- outbuf[i, j] = values[idx, idx1[j]]
+ else:
+ outbuf[i, j] = values[idx, idx1[j]]
@cython.wraparound(False)
@cython.boundscheck(False)
@@ -4269,31 +4294,18 @@ def take_2d_multi_int16_int32(ndarray[int16_t, ndim=2] values,
n = len(idx0)
k = len(idx1)
- if True and _checknan(fill_value):
- for i from 0 <= i < n:
- idx = idx0[i]
- if idx == -1:
- for j from 0 <= j < k:
- raise ValueError('No NA values allowed')
- else:
- for j from 0 <= j < k:
- if idx1[j] == -1:
- raise ValueError('No NA values allowed')
- else:
- outbuf[i, j] = values[idx, idx1[j]]
- else:
- fv = fill_value
- for i from 0 <= i < n:
- idx = idx0[i]
- if idx == -1:
- for j from 0 <= j < k:
+ fv = fill_value
+ for i from 0 <= i < n:
+ idx = idx0[i]
+ if idx == -1:
+ for j from 0 <= j < k:
+ outbuf[i, j] = fv
+ else:
+ for j from 0 <= j < k:
+ if idx1[j] == -1:
outbuf[i, j] = fv
- else:
- for j from 0 <= j < k:
- if idx1[j] == -1:
- outbuf[i, j] = fv
- else:
- outbuf[i, j] = values[idx, idx1[j]]
+ else:
+ outbuf[i, j] = values[idx, idx1[j]]
@cython.wraparound(False)
@cython.boundscheck(False)
@@ -4310,31 +4322,18 @@ def take_2d_multi_int16_int64(ndarray[int16_t, ndim=2] values,
n = len(idx0)
k = len(idx1)
- if True and _checknan(fill_value):
- for i from 0 <= i < n:
- idx = idx0[i]
- if idx == -1:
- for j from 0 <= j < k:
- raise ValueError('No NA values allowed')
- else:
- for j from 0 <= j < k:
- if idx1[j] == -1:
- raise ValueError('No NA values allowed')
- else:
- outbuf[i, j] = values[idx, idx1[j]]
- else:
- fv = fill_value
- for i from 0 <= i < n:
- idx = idx0[i]
- if idx == -1:
- for j from 0 <= j < k:
+ fv = fill_value
+ for i from 0 <= i < n:
+ idx = idx0[i]
+ if idx == -1:
+ for j from 0 <= j < k:
+ outbuf[i, j] = fv
+ else:
+ for j from 0 <= j < k:
+ if idx1[j] == -1:
outbuf[i, j] = fv
- else:
- for j from 0 <= j < k:
- if idx1[j] == -1:
- outbuf[i, j] = fv
- else:
- outbuf[i, j] = values[idx, idx1[j]]
+ else:
+ outbuf[i, j] = values[idx, idx1[j]]
@cython.wraparound(False)
@cython.boundscheck(False)
@@ -4351,31 +4350,18 @@ def take_2d_multi_int16_float64(ndarray[int16_t, ndim=2] values,
n = len(idx0)
k = len(idx1)
- if False and _checknan(fill_value):
- for i from 0 <= i < n:
- idx = idx0[i]
- if idx == -1:
- for j from 0 <= j < k:
- raise ValueError('No NA values allowed')
- else:
- for j from 0 <= j < k:
- if idx1[j] == -1:
- raise ValueError('No NA values allowed')
- else:
- outbuf[i, j] = values[idx, idx1[j]]
- else:
- fv = fill_value
- for i from 0 <= i < n:
- idx = idx0[i]
- if idx == -1:
- for j from 0 <= j < k:
+ fv = fill_value
+ for i from 0 <= i < n:
+ idx = idx0[i]
+ if idx == -1:
+ for j from 0 <= j < k:
+ outbuf[i, j] = fv
+ else:
+ for j from 0 <= j < k:
+ if idx1[j] == -1:
outbuf[i, j] = fv
- else:
- for j from 0 <= j < k:
- if idx1[j] == -1:
- outbuf[i, j] = fv
- else:
- outbuf[i, j] = values[idx, idx1[j]]
+ else:
+ outbuf[i, j] = values[idx, idx1[j]]
@cython.wraparound(False)
@cython.boundscheck(False)
@@ -4392,31 +4378,18 @@ def take_2d_multi_int32_int32(ndarray[int32_t, ndim=2] values,
n = len(idx0)
k = len(idx1)
- if True and _checknan(fill_value):
- for i from 0 <= i < n:
- idx = idx0[i]
- if idx == -1:
- for j from 0 <= j < k:
- raise ValueError('No NA values allowed')
- else:
- for j from 0 <= j < k:
- if idx1[j] == -1:
- raise ValueError('No NA values allowed')
- else:
- outbuf[i, j] = values[idx, idx1[j]]
- else:
- fv = fill_value
- for i from 0 <= i < n:
- idx = idx0[i]
- if idx == -1:
- for j from 0 <= j < k:
+ fv = fill_value
+ for i from 0 <= i < n:
+ idx = idx0[i]
+ if idx == -1:
+ for j from 0 <= j < k:
+ outbuf[i, j] = fv
+ else:
+ for j from 0 <= j < k:
+ if idx1[j] == -1:
outbuf[i, j] = fv
- else:
- for j from 0 <= j < k:
- if idx1[j] == -1:
- outbuf[i, j] = fv
- else:
- outbuf[i, j] = values[idx, idx1[j]]
+ else:
+ outbuf[i, j] = values[idx, idx1[j]]
@cython.wraparound(False)
@cython.boundscheck(False)
@@ -4433,31 +4406,18 @@ def take_2d_multi_int32_int64(ndarray[int32_t, ndim=2] values,
n = len(idx0)
k = len(idx1)
- if True and _checknan(fill_value):
- for i from 0 <= i < n:
- idx = idx0[i]
- if idx == -1:
- for j from 0 <= j < k:
- raise ValueError('No NA values allowed')
- else:
- for j from 0 <= j < k:
- if idx1[j] == -1:
- raise ValueError('No NA values allowed')
- else:
- outbuf[i, j] = values[idx, idx1[j]]
- else:
- fv = fill_value
- for i from 0 <= i < n:
- idx = idx0[i]
- if idx == -1:
- for j from 0 <= j < k:
+ fv = fill_value
+ for i from 0 <= i < n:
+ idx = idx0[i]
+ if idx == -1:
+ for j from 0 <= j < k:
+ outbuf[i, j] = fv
+ else:
+ for j from 0 <= j < k:
+ if idx1[j] == -1:
outbuf[i, j] = fv
- else:
- for j from 0 <= j < k:
- if idx1[j] == -1:
- outbuf[i, j] = fv
- else:
- outbuf[i, j] = values[idx, idx1[j]]
+ else:
+ outbuf[i, j] = values[idx, idx1[j]]
@cython.wraparound(False)
@cython.boundscheck(False)
@@ -4474,31 +4434,18 @@ def take_2d_multi_int32_float64(ndarray[int32_t, ndim=2] values,
n = len(idx0)
k = len(idx1)
- if False and _checknan(fill_value):
- for i from 0 <= i < n:
- idx = idx0[i]
- if idx == -1:
- for j from 0 <= j < k:
- raise ValueError('No NA values allowed')
- else:
- for j from 0 <= j < k:
- if idx1[j] == -1:
- raise ValueError('No NA values allowed')
- else:
- outbuf[i, j] = values[idx, idx1[j]]
- else:
- fv = fill_value
- for i from 0 <= i < n:
- idx = idx0[i]
- if idx == -1:
- for j from 0 <= j < k:
+ fv = fill_value
+ for i from 0 <= i < n:
+ idx = idx0[i]
+ if idx == -1:
+ for j from 0 <= j < k:
+ outbuf[i, j] = fv
+ else:
+ for j from 0 <= j < k:
+ if idx1[j] == -1:
outbuf[i, j] = fv
- else:
- for j from 0 <= j < k:
- if idx1[j] == -1:
- outbuf[i, j] = fv
- else:
- outbuf[i, j] = values[idx, idx1[j]]
+ else:
+ outbuf[i, j] = values[idx, idx1[j]]
@cython.wraparound(False)
@cython.boundscheck(False)
@@ -4515,31 +4462,18 @@ def take_2d_multi_int64_int64(ndarray[int64_t, ndim=2] values,
n = len(idx0)
k = len(idx1)
- if True and _checknan(fill_value):
- for i from 0 <= i < n:
- idx = idx0[i]
- if idx == -1:
- for j from 0 <= j < k:
- raise ValueError('No NA values allowed')
- else:
- for j from 0 <= j < k:
- if idx1[j] == -1:
- raise ValueError('No NA values allowed')
- else:
- outbuf[i, j] = values[idx, idx1[j]]
- else:
- fv = fill_value
- for i from 0 <= i < n:
- idx = idx0[i]
- if idx == -1:
- for j from 0 <= j < k:
+ fv = fill_value
+ for i from 0 <= i < n:
+ idx = idx0[i]
+ if idx == -1:
+ for j from 0 <= j < k:
+ outbuf[i, j] = fv
+ else:
+ for j from 0 <= j < k:
+ if idx1[j] == -1:
outbuf[i, j] = fv
- else:
- for j from 0 <= j < k:
- if idx1[j] == -1:
- outbuf[i, j] = fv
- else:
- outbuf[i, j] = values[idx, idx1[j]]
+ else:
+ outbuf[i, j] = values[idx, idx1[j]]
@cython.wraparound(False)
@cython.boundscheck(False)
@@ -4556,31 +4490,18 @@ def take_2d_multi_int64_float64(ndarray[int64_t, ndim=2] values,
n = len(idx0)
k = len(idx1)
- if False and _checknan(fill_value):
- for i from 0 <= i < n:
- idx = idx0[i]
- if idx == -1:
- for j from 0 <= j < k:
- raise ValueError('No NA values allowed')
- else:
- for j from 0 <= j < k:
- if idx1[j] == -1:
- raise ValueError('No NA values allowed')
- else:
- outbuf[i, j] = values[idx, idx1[j]]
- else:
- fv = fill_value
- for i from 0 <= i < n:
- idx = idx0[i]
- if idx == -1:
- for j from 0 <= j < k:
+ fv = fill_value
+ for i from 0 <= i < n:
+ idx = idx0[i]
+ if idx == -1:
+ for j from 0 <= j < k:
+ outbuf[i, j] = fv
+ else:
+ for j from 0 <= j < k:
+ if idx1[j] == -1:
outbuf[i, j] = fv
- else:
- for j from 0 <= j < k:
- if idx1[j] == -1:
- outbuf[i, j] = fv
- else:
- outbuf[i, j] = values[idx, idx1[j]]
+ else:
+ outbuf[i, j] = values[idx, idx1[j]]
@cython.wraparound(False)
@cython.boundscheck(False)
@@ -4597,31 +4518,18 @@ def take_2d_multi_float32_float32(ndarray[float32_t, ndim=2] values,
n = len(idx0)
k = len(idx1)
- if False and _checknan(fill_value):
- for i from 0 <= i < n:
- idx = idx0[i]
- if idx == -1:
- for j from 0 <= j < k:
- raise ValueError('No NA values allowed')
- else:
- for j from 0 <= j < k:
- if idx1[j] == -1:
- raise ValueError('No NA values allowed')
- else:
- outbuf[i, j] = values[idx, idx1[j]]
- else:
- fv = fill_value
- for i from 0 <= i < n:
- idx = idx0[i]
- if idx == -1:
- for j from 0 <= j < k:
+ fv = fill_value
+ for i from 0 <= i < n:
+ idx = idx0[i]
+ if idx == -1:
+ for j from 0 <= j < k:
+ outbuf[i, j] = fv
+ else:
+ for j from 0 <= j < k:
+ if idx1[j] == -1:
outbuf[i, j] = fv
- else:
- for j from 0 <= j < k:
- if idx1[j] == -1:
- outbuf[i, j] = fv
- else:
- outbuf[i, j] = values[idx, idx1[j]]
+ else:
+ outbuf[i, j] = values[idx, idx1[j]]
@cython.wraparound(False)
@cython.boundscheck(False)
@@ -4638,31 +4546,18 @@ def take_2d_multi_float32_float64(ndarray[float32_t, ndim=2] values,
n = len(idx0)
k = len(idx1)
- if False and _checknan(fill_value):
- for i from 0 <= i < n:
- idx = idx0[i]
- if idx == -1:
- for j from 0 <= j < k:
- raise ValueError('No NA values allowed')
- else:
- for j from 0 <= j < k:
- if idx1[j] == -1:
- raise ValueError('No NA values allowed')
- else:
- outbuf[i, j] = values[idx, idx1[j]]
- else:
- fv = fill_value
- for i from 0 <= i < n:
- idx = idx0[i]
- if idx == -1:
- for j from 0 <= j < k:
+ fv = fill_value
+ for i from 0 <= i < n:
+ idx = idx0[i]
+ if idx == -1:
+ for j from 0 <= j < k:
+ outbuf[i, j] = fv
+ else:
+ for j from 0 <= j < k:
+ if idx1[j] == -1:
outbuf[i, j] = fv
- else:
- for j from 0 <= j < k:
- if idx1[j] == -1:
- outbuf[i, j] = fv
- else:
- outbuf[i, j] = values[idx, idx1[j]]
+ else:
+ outbuf[i, j] = values[idx, idx1[j]]
@cython.wraparound(False)
@cython.boundscheck(False)
@@ -4679,31 +4574,18 @@ def take_2d_multi_float64_float64(ndarray[float64_t, ndim=2] values,
n = len(idx0)
k = len(idx1)
- if False and _checknan(fill_value):
- for i from 0 <= i < n:
- idx = idx0[i]
- if idx == -1:
- for j from 0 <= j < k:
- raise ValueError('No NA values allowed')
- else:
- for j from 0 <= j < k:
- if idx1[j] == -1:
- raise ValueError('No NA values allowed')
- else:
- outbuf[i, j] = values[idx, idx1[j]]
- else:
- fv = fill_value
- for i from 0 <= i < n:
- idx = idx0[i]
- if idx == -1:
- for j from 0 <= j < k:
+ fv = fill_value
+ for i from 0 <= i < n:
+ idx = idx0[i]
+ if idx == -1:
+ for j from 0 <= j < k:
+ outbuf[i, j] = fv
+ else:
+ for j from 0 <= j < k:
+ if idx1[j] == -1:
outbuf[i, j] = fv
- else:
- for j from 0 <= j < k:
- if idx1[j] == -1:
- outbuf[i, j] = fv
- else:
- outbuf[i, j] = values[idx, idx1[j]]
+ else:
+ outbuf[i, j] = values[idx, idx1[j]]
@cython.wraparound(False)
@cython.boundscheck(False)
@@ -4720,31 +4602,18 @@ def take_2d_multi_object_object(ndarray[object, ndim=2] values,
n = len(idx0)
k = len(idx1)
- if False and _checknan(fill_value):
- for i from 0 <= i < n:
- idx = idx0[i]
- if idx == -1:
- for j from 0 <= j < k:
- raise ValueError('No NA values allowed')
- else:
- for j from 0 <= j < k:
- if idx1[j] == -1:
- raise ValueError('No NA values allowed')
- else:
- outbuf[i, j] = values[idx, idx1[j]]
- else:
- fv = fill_value
- for i from 0 <= i < n:
- idx = idx0[i]
- if idx == -1:
- for j from 0 <= j < k:
+ fv = fill_value
+ for i from 0 <= i < n:
+ idx = idx0[i]
+ if idx == -1:
+ for j from 0 <= j < k:
+ outbuf[i, j] = fv
+ else:
+ for j from 0 <= j < k:
+ if idx1[j] == -1:
outbuf[i, j] = fv
- else:
- for j from 0 <= j < k:
- if idx1[j] == -1:
- outbuf[i, j] = fv
- else:
- outbuf[i, j] = values[idx, idx1[j]]
+ else:
+ outbuf[i, j] = values[idx, idx1[j]]
@cython.boundscheck(False)
diff --git a/pandas/tests/test_frame.py b/pandas/tests/test_frame.py
index 09843667e4816..f2870f120f64b 100644
--- a/pandas/tests/test_frame.py
+++ b/pandas/tests/test_frame.py
@@ -8311,73 +8311,6 @@ def test_constructor_with_convert(self):
None], np.object_))
assert_series_equal(result, expected)
- def test_constructor_with_datetimes(self):
- intname = np.dtype(np.int_).name
- floatname = np.dtype(np.float_).name
- datetime64name = np.dtype('M8[ns]').name
- objectname = np.dtype(np.object_).name
-
- # single item
- df = DataFrame({'A' : 1, 'B' : 'foo', 'C' : 'bar', 'D' : Timestamp("20010101"), 'E' : datetime(2001,1,2,0,0) },
- index=np.arange(10))
- result = df.get_dtype_counts()
- expected = Series({intname: 1, datetime64name: 2, objectname : 2})
- result.sort()
- expected.sort()
- assert_series_equal(result, expected)
-
- # check with ndarray construction ndim==0 (e.g. we are passing a ndim 0 ndarray with a dtype specified)
- df = DataFrame({'a': 1., 'b': 2, 'c': 'foo', floatname : np.array(1.,dtype=floatname),
- intname : np.array(1,dtype=intname)}, index=np.arange(10))
- result = df.get_dtype_counts()
- expected = Series({intname: 2, floatname : 2, objectname : 1})
- result.sort()
- expected.sort()
- assert_series_equal(result, expected)
-
- # check with ndarray construction ndim>0
- df = DataFrame({'a': 1., 'b': 2, 'c': 'foo', floatname : np.array([1.]*10,dtype=floatname),
- intname : np.array([1]*10,dtype=intname)}, index=np.arange(10))
- result = df.get_dtype_counts()
- expected = Series({intname: 2, floatname : 2, objectname : 1})
- result.sort()
- expected.sort()
- assert_series_equal(result, expected)
-
- # GH #2751 (construction with no index specified)
- df = DataFrame({'a':[1,2,4,7], 'b':[1.2, 2.3, 5.1, 6.3], 'c':list('abcd'), 'd':[datetime(2000,1,1) for i in range(4)] })
- result = df.get_dtype_counts()
- # TODO: fix this on 32-bit (or decide it's ok behavior?)
- # expected = Series({intname: 1, floatname : 1, datetime64name: 1, objectname : 1})
- expected = Series({'int64': 1, floatname : 1, datetime64name: 1, objectname : 1})
- result.sort()
- expected.sort()
- assert_series_equal(result, expected)
-
- # GH 2809
- from pandas import date_range
- ind = date_range(start="2000-01-01", freq="D", periods=10)
- datetimes = [ts.to_pydatetime() for ts in ind]
- datetime_s = Series(datetimes)
- self.assert_(datetime_s.dtype == 'M8[ns]')
- df = DataFrame({'datetime_s':datetime_s})
- result = df.get_dtype_counts()
- expected = Series({ datetime64name : 1 })
- result.sort()
- expected.sort()
- assert_series_equal(result, expected)
-
- # GH 2810
- ind = date_range(start="2000-01-01", freq="D", periods=10)
- datetimes = [ts.to_pydatetime() for ts in ind]
- dates = [ts.date() for ts in ind]
- df = DataFrame({'datetimes': datetimes, 'dates':dates})
- result = df.get_dtype_counts()
- expected = Series({ datetime64name : 1, objectname : 1 })
- result.sort()
- expected.sort()
- assert_series_equal(result, expected)
-
def test_constructor_frame_copy(self):
cop = DataFrame(self.frame, copy=True)
cop['A'] = 5
diff --git a/pandas/tools/merge.py b/pandas/tools/merge.py
index 6d699967915ba..b62e307b1f2a8 100644
--- a/pandas/tools/merge.py
+++ b/pandas/tools/merge.py
@@ -715,9 +715,7 @@ def _merge_blocks(self, merge_chunks):
sofar = 0
for unit, blk in merge_chunks:
out_chunk = out[sofar: sofar + len(blk)]
- com.take_fast(blk.values, unit.indexer,
- None, False, axis=self.axis,
- out=out_chunk)
+ com.take_nd(blk.values, unit.indexer, self.axis, out=out_chunk)
sofar += len(blk)
# does not sort
@@ -737,36 +735,24 @@ def __init__(self, blocks, indexer):
@cache_readonly
def mask_info(self):
if self.indexer is None or not _may_need_upcasting(self.blocks):
- mask = None
- need_masking = False
+ return None
else:
mask = self.indexer == -1
- need_masking = mask.any()
-
- return mask, need_masking
-
- @property
- def need_masking(self):
- return self.mask_info[1]
+ needs_masking = mask.any()
+ return (mask, needs_masking)
def get_upcasted_blocks(self):
- # will short-circuit and not compute lneed_masking if indexer is None
- if self.need_masking:
+ # will short-circuit and not compute needs_masking if indexer is None
+ if self.mask_info is not None and self.mask_info[1]:
return _upcast_blocks(self.blocks)
return self.blocks
def reindex_block(self, block, axis, ref_items, copy=True):
- # still some inefficiency here for bool/int64 because in the case where
- # no masking is needed, take_fast will recompute the mask
-
- mask, need_masking = self.mask_info
-
if self.indexer is None:
result = block.copy() if copy else block
else:
- result = block.reindex_axis(self.indexer, mask, need_masking,
- axis=axis)
-
+ result = block.reindex_axis(self.indexer, axis=axis,
+ mask_info=self.mask_info)
result.ref_items = ref_items
return result
I've consolidated `take_nd`, `ndtake`, and `take_fast` in `common` into a single function `take_nd` with cleaner semantics (documented in a docstring). It at least preserves the existing performance properties in all cases and improves them (sometimes significantly) in others; in particular, computation of intermediate arrays like boolean masks is still short-circuited in the same way it was before, in the appropriate situations. The operation that used to be `take_fast` was also broken for non-NA `fill_value`; this is fixed too.
In addition, I've optimized the Cython implementations of 2-D takes to use row-wise or column-wise ~~memcpys~~ `memmove`s automatically where appropriate (same input and output type, non-object type, both C-contiguous if the slice axis is 0, both F-contiguous if the slice axis is 1, ...). In theory Cython and/or the C compiler could do this automatically, but apparently neither does, because performance does improve when this path is triggered, at least with gcc on 32-bit Linux.
Tests with measurably improved performance (this is on top of the performance improvements in #2819)
```
Results:
t_head t_baseline ratio
name
frame_reindex_axis0 0.6845 3.2839 0.2084
frame_reindex_axis1 4.0088 5.4203 0.7396
frame_boolean_row_select 0.2932 0.3939 0.7443
```
All other ratios are apparently just statistical noise within the ±10% range that varies from run to run. (Was hoping it would help more, but I guess this is OK)
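The take-with-fill semantics described above can be approximated in plain NumPy. This is a hypothetical, simplified sketch (the function name and upcasting rule are illustrative, not pandas' actual `take_nd` internals): indices of -1 mark missing positions, the result is upcast so the fill value fits, and the mask work is skipped entirely when no -1 is present.

```python
import numpy as np

def take_nd_sketch(arr, indexer, axis=0, fill_value=np.nan):
    """Take along `axis`; -1 in `indexer` means 'missing', filled with fill_value.

    Hypothetical sketch of the semantics described above, not pandas internals.
    Only handles the NaN fill case for simplicity.
    """
    indexer = np.asarray(indexer, dtype=np.intp)
    mask = indexer == -1
    needs_masking = mask.any()          # short-circuit: no mask work if no -1
    if needs_masking and not np.issubdtype(arr.dtype, np.floating):
        arr = arr.astype(np.float64)    # upcast so the NaN fill value fits
    # clamp -1 to a valid index; those slots get overwritten below
    out = np.take(arr, indexer.clip(min=0), axis=axis)
    if needs_masking:
        # build an index selecting the masked positions along `axis`
        idx = [slice(None)] * out.ndim
        idx[axis] = mask
        out[tuple(idx)] = fill_value
    return out
```

For example, `take_nd_sketch(np.array([10, 20, 30]), [2, -1, 0])` upcasts to float and yields `[30., nan, 10.]`.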
| https://api.github.com/repos/pandas-dev/pandas/pulls/2867 | 2013-02-13T17:36:19Z | 2013-02-14T23:55:50Z | 2013-02-14T23:55:50Z | 2014-06-24T10:12:51Z |
BLD: bring back build_ext warning message cython is missing #2439 | diff --git a/setup.py b/setup.py
index 3c45d16f4024d..ad39ad7ac74d7 100755
--- a/setup.py
+++ b/setup.py
@@ -552,14 +552,14 @@ def run(self):
if cython:
suffix = '.pyx'
- cmdclass['build_ext'] = build_ext
+ cmdclass['build_ext'] = CheckingBuildExt
if BUILD_CACHE_DIR: # use the cache
cmdclass['build_ext'] = CachingBuildExt
cmdclass['cython'] = CythonCommand
else:
suffix = '.c'
cmdclass['build_src'] = DummyBuildSrc
- cmdclass['build_ext'] = build_ext
+ cmdclass['build_ext'] = CheckingBuildExt
lib_depends = ['reduce', 'inference', 'properties']
| #2439
| https://api.github.com/repos/pandas-dev/pandas/pulls/2860 | 2013-02-13T00:13:49Z | 2013-03-15T20:38:22Z | 2013-03-15T20:38:22Z | 2014-06-18T17:25:34Z |
TST: avoid platform dependency, close #2854 | diff --git a/pandas/tests/test_graphics.py b/pandas/tests/test_graphics.py
index 27ac43100a9ed..89cc407daf3f4 100644
--- a/pandas/tests/test_graphics.py
+++ b/pandas/tests/test_graphics.py
@@ -308,7 +308,7 @@ def test_unsorted_index(self):
ax = df.plot()
l = ax.get_lines()[0]
rs = l.get_xydata()
- rs = Series(rs[:, 1], rs[:, 0], dtype=int)
+ rs = Series(rs[:, 1], rs[:, 0], dtype=np.int64)
tm.assert_series_equal(rs, df.y)
def _check_data(self, xp, rs):
| https://api.github.com/repos/pandas-dev/pandas/pulls/2857 | 2013-02-12T21:37:28Z | 2013-02-12T22:38:09Z | 2013-02-12T22:38:09Z | 2013-02-12T22:38:09Z | |
BUG: proper conversion of string to bool HDFStore selection | diff --git a/pandas/io/pytables.py b/pandas/io/pytables.py
index 516b6c211fc7f..84c2ef4957529 100644
--- a/pandas/io/pytables.py
+++ b/pandas/io/pytables.py
@@ -250,10 +250,14 @@ def __repr__(self):
values = []
for k in self.keys():
- s = self.get_storer(k)
- if s is not None:
- keys.append(str(s.pathname or k))
- values.append(str(s or 'invalid_HDFStore node'))
+ try:
+ s = self.get_storer(k)
+ if s is not None:
+ keys.append(str(s.pathname or k))
+ values.append(str(s or 'invalid_HDFStore node'))
+ except (Exception), detail:
+ keys.append(k)
+ values.append("[invalid_HDFStore node: %s]" % str(detail))
output += adjoin(12, keys, values)
else:
@@ -3060,7 +3064,10 @@ def convert_value(self, v):
v = float(v)
return [v, v]
elif self.kind == 'bool':
- v = bool(v)
+ if isinstance(v, basestring):
+ v = not str(v).strip().lower() in ["false", "f", "no", "n", "none", "0", "[]", "{}", ""]
+ else:
+ v = bool(v)
return [v, v]
elif not isinstance(v, basestring):
return [str(v), None]
diff --git a/pandas/io/tests/test_pytables.py b/pandas/io/tests/test_pytables.py
index 8260180138752..8b77cee3730fc 100644
--- a/pandas/io/tests/test_pytables.py
+++ b/pandas/io/tests/test_pytables.py
@@ -1649,15 +1649,20 @@ def test_select_dtypes(self):
df['bool'] = df['A'] > 0
store.remove('df')
store.append('df', df, data_columns = True)
- result = store.select('df', Term('bool == True'), columns = ['A','bool'])
+
expected = df[df.bool == True].reindex(columns=['A','bool'])
- tm.assert_frame_equal(expected, result)
+ for v in [True,'true',1]:
+ result = store.select('df', Term('bool == %s' % str(v)), columns = ['A','bool'])
+ tm.assert_frame_equal(expected, result)
- result = store.select('df', Term('bool == 1'), columns = ['A','bool'])
- tm.assert_frame_equal(expected, result)
+ expected = df[df.bool == False ].reindex(columns=['A','bool'])
+ for v in [False,'false',0]:
+ result = store.select('df', Term('bool == %s' % str(v)), columns = ['A','bool'])
+ tm.assert_frame_equal(expected, result)
# integer index
df = DataFrame(dict(A=np.random.rand(20), B=np.random.rand(20)))
+ store.remove('df_int')
store.append('df_int', df)
result = store.select(
'df_int', [Term("index<10"), Term("columns", "=", ["A"])])
@@ -1667,6 +1672,7 @@ def test_select_dtypes(self):
# float index
df = DataFrame(dict(A=np.random.rand(
20), B=np.random.rand(20), index=np.arange(20, dtype='f8')))
+ store.remove('df_float')
store.append('df_float', df)
result = store.select(
'df_float', [Term("index<10.0"), Term("columns", "=", ["A"])])
| https://api.github.com/repos/pandas-dev/pandas/pulls/2855 | 2013-02-12T20:31:56Z | 2013-02-12T20:49:07Z | 2013-02-12T20:49:07Z | 2013-02-12T20:49:08Z | |
DOC: prepend missing >>> to docstring Examples in set_index. | diff --git a/pandas/core/frame.py b/pandas/core/frame.py
index ecf2f8ba482f6..9f92136f2af4d 100644
--- a/pandas/core/frame.py
+++ b/pandas/core/frame.py
@@ -2746,9 +2746,9 @@ def set_index(self, keys, drop=True, append=False, inplace=False,
Examples
--------
- indexed_df = df.set_index(['A', 'B'])
- indexed_df2 = df.set_index(['A', [0, 1, 2, 0, 1, 2]])
- indexed_df3 = df.set_index([[0, 1, 2, 0, 1, 2]])
+ >>> indexed_df = df.set_index(['A', 'B'])
+ >>> indexed_df2 = df.set_index(['A', [0, 1, 2, 0, 1, 2]])
+ >>> indexed_df3 = df.set_index([[0, 1, 2, 0, 1, 2]])
Returns
-------
| https://api.github.com/repos/pandas-dev/pandas/pulls/2851 | 2013-02-12T17:48:04Z | 2013-03-09T01:00:22Z | 2013-03-09T01:00:22Z | 2014-06-18T17:55:41Z | |
BUG: HDFStore - missing implementation of bool columns in selection | diff --git a/pandas/io/pytables.py b/pandas/io/pytables.py
index ae9aa3c4f201c..516b6c211fc7f 100644
--- a/pandas/io/pytables.py
+++ b/pandas/io/pytables.py
@@ -1076,6 +1076,8 @@ def set_kind(self):
self.kind = 'integer'
elif self.dtype.startswith('date'):
self.kind = 'datetime'
+ elif self.dtype.startswith('bool'):
+ self.kind = 'bool'
def set_atom(self, block, existing_col, min_itemsize, nan_rep, **kwargs):
""" create and setup my atom from the block b """
@@ -3057,6 +3059,9 @@ def convert_value(self, v):
elif self.kind == 'float':
v = float(v)
return [v, v]
+ elif self.kind == 'bool':
+ v = bool(v)
+ return [v, v]
elif not isinstance(v, basestring):
return [str(v), None]
diff --git a/pandas/io/tests/test_pytables.py b/pandas/io/tests/test_pytables.py
index a4df428d60d90..8260180138752 100644
--- a/pandas/io/tests/test_pytables.py
+++ b/pandas/io/tests/test_pytables.py
@@ -953,19 +953,21 @@ def test_table_values_dtypes_roundtrip(self):
assert df1.dtypes == store['df_f4'].dtypes
assert df1.dtypes[0] == 'float32'
- # check with mixed dtypes (but not multi float types)
- df1 = DataFrame(np.array([[1],[2],[3]],dtype='f4'),columns = ['float32'])
+ # check with mixed dtypes
+ df1 = DataFrame(dict([ (c,Series(np.random.randn(5),dtype=c)) for c in
+ ['float32','float64','int32','int64','int16','int8'] ]))
df1['string'] = 'foo'
- store.append('df_mixed_dtypes1', df1)
- assert (df1.dtypes == store['df_mixed_dtypes1'].dtypes).all() == True
- assert df1.dtypes[0] == 'float32'
- assert df1.dtypes[1] == 'object'
+ df1['float322'] = 1.
+ df1['float322'] = df1['float322'].astype('float32')
+ df1['bool'] = df1['float32'] > 0
- ### this is not supported, e.g. mixed float32/float64 blocks ###
- #df1 = DataFrame(np.array([[1],[2],[3]],dtype='f4'),columns = ['float32'])
- #df1['float64'] = 1.0
- #store.append('df_mixed_dtypes2', df1)
- #assert df1.dtypes == store['df_mixed_dtypes2'].dtypes).all() == True
+ store.append('df_mixed_dtypes1', df1)
+ result = store.select('df_mixed_dtypes1').get_dtype_counts()
+ expected = Series({ 'float32' : 2, 'float64' : 1,'int32' : 1, 'bool' : 1,
+ 'int16' : 1, 'int8' : 1, 'int64' : 1, 'object' : 1 })
+ result.sort()
+ expected.sort()
+ tm.assert_series_equal(result,expected)
def test_table_mixed_dtypes(self):
@@ -1628,6 +1630,10 @@ def test_select(self):
expected = df[df.A > 0].reindex(columns=['C', 'D'])
tm.assert_frame_equal(expected, result)
+ def test_select_dtypes(self):
+
+ with ensure_clean(self.path) as store:
+
# with a Timestamp data column (GH #2637)
df = DataFrame(dict(ts=bdate_range('2012-01-01', periods=300), A=np.random.randn(300)))
store.remove('df')
@@ -1636,6 +1642,37 @@ def test_select(self):
expected = df[df.ts >= Timestamp('2012-02-01')]
tm.assert_frame_equal(expected, result)
+ # bool columns
+ df = DataFrame(np.random.randn(5,2), columns =['A','B'])
+ df['object'] = 'foo'
+ df.ix[4:5,'object'] = 'bar'
+ df['bool'] = df['A'] > 0
+ store.remove('df')
+ store.append('df', df, data_columns = True)
+ result = store.select('df', Term('bool == True'), columns = ['A','bool'])
+ expected = df[df.bool == True].reindex(columns=['A','bool'])
+ tm.assert_frame_equal(expected, result)
+
+ result = store.select('df', Term('bool == 1'), columns = ['A','bool'])
+ tm.assert_frame_equal(expected, result)
+
+ # integer index
+ df = DataFrame(dict(A=np.random.rand(20), B=np.random.rand(20)))
+ store.append('df_int', df)
+ result = store.select(
+ 'df_int', [Term("index<10"), Term("columns", "=", ["A"])])
+ expected = df.reindex(index=list(df.index)[0:10],columns=['A'])
+ tm.assert_frame_equal(expected, result)
+
+ # float index
+ df = DataFrame(dict(A=np.random.rand(
+ 20), B=np.random.rand(20), index=np.arange(20, dtype='f8')))
+ store.append('df_float', df)
+ result = store.select(
+ 'df_float', [Term("index<10.0"), Term("columns", "=", ["A"])])
+ expected = df.reindex(index=list(df.index)[0:10],columns=['A'])
+ tm.assert_frame_equal(expected, result)
+
def test_panel_select(self):
wp = tm.makePanel()
@@ -1676,20 +1713,6 @@ def test_frame_select(self):
expected = df.ix[:, ['A']]
tm.assert_frame_equal(result, expected)
- # other indicies for a frame
-
- # integer
- df = DataFrame(dict(A=np.random.rand(20), B=np.random.rand(20)))
- store.append('df_int', df)
- store.select(
- 'df_int', [Term("index<10"), Term("columns", "=", ["A"])])
-
- df = DataFrame(dict(A=np.random.rand(
- 20), B=np.random.rand(20), index=np.arange(20, dtype='f8')))
- store.append('df_float', df)
- store.select(
- 'df_float', [Term("index<10.0"), Term("columns", "=", ["A"])])
-
# invalid terms
df = tm.makeTimeDataFrame()
store.append('df_time', df)
- added an implementation of bool columns in selection (was missing)
- moved dtype-related tests into a single method
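The string-to-bool coercion that complements this fix (see the `convert_value` change in the diff for PR 2855 above) boils down to treating a small set of strings as falsy. A standalone Python 3 sketch, with the falsy list mirroring the one in that diff (not the actual pandas implementation):

```python
def coerce_query_bool(v):
    """Coerce a query term value to bool, treating common 'false' strings as False.

    Sketch of the behavior in the convert_value patch; the falsy list mirrors
    the diff above. Anything else falls back to plain bool().
    """
    if isinstance(v, str):
        # "false", "no", "0", "", etc. are False; any other string is True
        return v.strip().lower() not in ["false", "f", "no", "n", "none",
                                         "0", "[]", "{}", ""]
    return bool(v)
```

This lets query terms like `Term('bool == true')` and `Term('bool == 1')` select the same rows, as the new tests exercise.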
| https://api.github.com/repos/pandas-dev/pandas/pulls/2849 | 2013-02-12T13:13:37Z | 2013-02-12T13:38:22Z | 2013-02-12T13:38:22Z | 2014-06-18T09:32:12Z |
Typo fix: propgate -> propagate. | diff --git a/doc/source/dsintro.rst b/doc/source/dsintro.rst
index ddf8c678b766a..1ce40ea74a6bb 100644
--- a/doc/source/dsintro.rst
+++ b/doc/source/dsintro.rst
@@ -483,7 +483,7 @@ each type:
df.get_dtype_counts()
-Numeric dtypes will propgate and can coexist in DataFrames (starting in v0.10.2).
+Numeric dtypes will propagate and can coexist in DataFrames (starting in v0.11.0).
If a dtype is passed (either directly via the ``dtype`` keyword, a passed ``ndarray``,
or a passed ``Series``, then it will be preserved in DataFrame operations. Furthermore, different numeric dtypes will **NOT** be combined. The following example will give you a taste.
diff --git a/doc/source/v0.11.0.txt b/doc/source/v0.11.0.txt
index 5254f0e1d6c93..d2648cbdb5a44 100644
--- a/doc/source/v0.11.0.txt
+++ b/doc/source/v0.11.0.txt
@@ -11,7 +11,7 @@ to.
API changes
~~~~~~~~~~~
-Numeric dtypes will propgate and can coexist in DataFrames. If a dtype is passed (either directly via the ``dtype`` keyword, a passed ``ndarray``, or a passed ``Series``, then it will be preserved in DataFrame operations. Furthermore, different numeric dtypes will **NOT** be combined. The following example will give you a taste.
+Numeric dtypes will propagate and can coexist in DataFrames. If a dtype is passed (either directly via the ``dtype`` keyword, a passed ``ndarray``, or a passed ``Series``, then it will be preserved in DataFrame operations. Furthermore, different numeric dtypes will **NOT** be combined. The following example will give you a taste.
**Dtype Specification**
| Few tiny typo fixes.
| https://api.github.com/repos/pandas-dev/pandas/pulls/2848 | 2013-02-12T10:37:21Z | 2013-02-12T20:49:40Z | 2013-02-12T20:49:40Z | 2013-02-12T20:49:40Z |
BUG: Various issues with maybe_convert_objects (GH #2845) | diff --git a/pandas/src/inference.pyx b/pandas/src/inference.pyx
index 6044955d8188f..082ebf4f5b3c2 100644
--- a/pandas/src/inference.pyx
+++ b/pandas/src/inference.pyx
@@ -403,6 +403,7 @@ def maybe_convert_objects(ndarray[object] objects, bint try_float=0,
bint seen_bool = 0
bint seen_object = 0
bint seen_null = 0
+ bint seen_numeric = 0
object val, onan
float64_t fval, fnan
@@ -437,12 +438,17 @@ def maybe_convert_objects(ndarray[object] objects, bint try_float=0,
else:
seen_object = 1
# objects[i] = val.astype('O')
+ break
elif util.is_integer_object(val):
seen_int = 1
floats[i] = <float64_t> val
complexes[i] = <double complex> val
if not seen_null:
- ints[i] = val
+ try:
+ ints[i] = val
+ except OverflowError:
+ seen_object = 1
+ break
elif util.is_complex_object(val):
complexes[i] = val
seen_complex = 1
@@ -452,6 +458,7 @@ def maybe_convert_objects(ndarray[object] objects, bint try_float=0,
idatetimes[i] = convert_to_tsobject(val, None).value
else:
seen_object = 1
+ break
elif try_float and not util.is_string_object(val):
# this will convert Decimal objects
try:
@@ -460,72 +467,65 @@ def maybe_convert_objects(ndarray[object] objects, bint try_float=0,
seen_float = 1
except Exception:
seen_object = 1
+ break
else:
seen_object = 1
+ break
- if not safe:
- if seen_null:
- if (seen_float or seen_int) and not seen_object:
- if seen_complex:
- return complexes
- else:
- return floats
- else:
- return objects
- else:
- if seen_object:
- return objects
- elif not seen_bool:
- if seen_datetime:
- if seen_complex or seen_float or seen_int:
- return objects
- else:
- return datetimes
- else:
+ seen_numeric = seen_complex or seen_float or seen_int
+
+ if not seen_object:
+
+ if not safe:
+ if seen_null:
+ if not seen_bool and not seen_datetime:
if seen_complex:
return complexes
- elif seen_float:
+ elif seen_float or seen_int:
return floats
- elif seen_int:
- return ints
else:
- if not seen_float and not seen_int:
+ if not seen_bool:
+ if seen_datetime:
+ if not seen_numeric:
+ return datetimes
+ else:
+ if seen_complex:
+ return complexes
+ elif seen_float:
+ return floats
+ elif seen_int:
+ return ints
+ elif not seen_datetime and not seen_numeric:
return bools.view(np.bool_)
- return objects
- else:
- # don't cast int to float, etc.
- if seen_null:
- if (seen_float or seen_int) and not seen_object:
- if seen_complex:
- return complexes
- else:
- return floats
- else:
- return objects
else:
- if seen_object:
- return objects
- elif not seen_bool:
- if seen_datetime:
- if seen_complex or seen_float or seen_int:
- return objects
- else:
- return datetimes
- else:
- if seen_int and seen_float:
- return objects
- elif seen_complex:
- return complexes
+ # don't cast int to float, etc.
+ if seen_null:
+ if not seen_bool and not seen_datetime:
+ if seen_complex:
+ if not seen_int:
+ return complexes
elif seen_float:
- return floats
- elif seen_int:
- return ints
+ if not seen_int:
+ return floats
else:
- if not seen_float and not seen_int:
+ if not seen_bool:
+ if seen_datetime:
+ if not seen_numeric:
+ return datetimes
+ else:
+ if seen_complex:
+ if not seen_int:
+ return complexes
+ elif seen_float:
+ if not seen_int:
+ return floats
+ elif seen_int:
+ return ints
+ elif not seen_datetime and not seen_numeric:
return bools.view(np.bool_)
- return objects
+ return objects
def convert_sql_column(x):
diff --git a/pandas/tests/test_frame.py b/pandas/tests/test_frame.py
index c628bf3f0df97..77bf23ea946e1 100644
--- a/pandas/tests/test_frame.py
+++ b/pandas/tests/test_frame.py
@@ -8099,6 +8099,69 @@ def test_as_matrix_lcd(self):
values = self.mixed_int.as_matrix(['C'])
self.assert_(values.dtype == np.uint8)
+ def test_constructor_with_convert(self):
+ # this is actually mostly a test of lib.maybe_convert_objects
+ # #2845
+ df = DataFrame({'A' : [2**63-1] })
+ result = df['A']
+ expected = Series(np.asarray([2**63-1], np.int64))
+ assert_series_equal(result, expected)
+
+ df = DataFrame({'A' : [2**63] })
+ result = df['A']
+ expected = Series(np.asarray([2**63], np.object_))
+ assert_series_equal(result, expected)
+
+ df = DataFrame({'A' : [datetime(2005, 1, 1), True] })
+ result = df['A']
+ expected = Series(np.asarray([datetime(2005, 1, 1), True], np.object_))
+ assert_series_equal(result, expected)
+
+ df = DataFrame({'A' : [None, 1] })
+ result = df['A']
+ expected = Series(np.asarray([np.nan, 1], np.float_))
+ assert_series_equal(result, expected)
+
+ df = DataFrame({'A' : [1.0, 2] })
+ result = df['A']
+ expected = Series(np.asarray([1.0, 2], np.float_))
+ assert_series_equal(result, expected)
+
+ df = DataFrame({'A' : [1.0+2.0j, 3] })
+ result = df['A']
+ expected = Series(np.asarray([1.0+2.0j, 3], np.complex_))
+ assert_series_equal(result, expected)
+
+ df = DataFrame({'A' : [1.0+2.0j, 3.0] })
+ result = df['A']
+ expected = Series(np.asarray([1.0+2.0j, 3.0], np.complex_))
+ assert_series_equal(result, expected)
+
+ df = DataFrame({'A' : [1.0+2.0j, True] })
+ result = df['A']
+ expected = Series(np.asarray([1.0+2.0j, True], np.object_))
+ assert_series_equal(result, expected)
+
+ df = DataFrame({'A' : [1.0, None] })
+ result = df['A']
+ expected = Series(np.asarray([1.0, np.nan], np.float_))
+ assert_series_equal(result, expected)
+
+ df = DataFrame({'A' : [1.0+2.0j, None] })
+ result = df['A']
+ expected = Series(np.asarray([1.0+2.0j, np.nan], np.complex_))
+ assert_series_equal(result, expected)
+
+ df = DataFrame({'A' : [2.0, 1, True, None] })
+ result = df['A']
+ expected = Series(np.asarray([2.0, 1, True, None], np.object_))
+ assert_series_equal(result, expected)
+
+ df = DataFrame({'A' : [2.0, 1, datetime(2006, 1, 1), None] })
+ result = df['A']
+ expected = Series(np.asarray([2.0, 1, datetime(2006, 1, 1),
+ None], np.object_))
+ assert_series_equal(result, expected)
def test_constructor_with_datetimes(self):
intname = np.dtype(np.int_).name
diff --git a/pandas/tests/test_index.py b/pandas/tests/test_index.py
index 95d85d44f2ceb..a5732f252d617 100644
--- a/pandas/tests/test_index.py
+++ b/pandas/tests/test_index.py
@@ -348,6 +348,17 @@ def test_format(self):
expected = [str(index[0])]
self.assertEquals(formatted, expected)
+ # 2845
+ index = Index([1, 2.0+3.0j, np.nan])
+ formatted = index.format()
+ expected = [str(index[0]), str(index[1]), str(index[2])]
+ self.assertEquals(formatted, expected)
+
+ index = Index([1, 2.0+3.0j, None])
+ formatted = index.format()
+ expected = [str(index[0]), str(index[1]), '']
+ self.assertEquals(formatted, expected)
+
self.strIndex[:0].format()
def test_format_with_name_time_info(self):
| fixes #2845
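The decision rules in the Cython patch above can be approximated in plain Python. This is a simplified, hypothetical sketch (the seen-flag names mirror the diff's `seen_*` variables) of the non-"safe" path of `maybe_convert_objects`, not the real implementation:

```python
import numpy as np
from datetime import datetime

def infer_dtype_sketch(values):
    """Decide a result dtype for a list of mixed Python objects.

    Simplified sketch of maybe_convert_objects' decision rules (non-safe path).
    """
    seen = {"bool": False, "int": False, "float": False,
            "complex": False, "datetime": False, "null": False}
    for v in values:
        if v is None or (isinstance(v, float) and np.isnan(v)):
            seen["null"] = True
        elif isinstance(v, bool):          # check bool before int (bool is an int subclass)
            seen["bool"] = True
        elif isinstance(v, int):
            if not (-2**63 <= v < 2**63):  # int64 overflow -> fall back to object
                return np.object_
            seen["int"] = True
        elif isinstance(v, complex):
            seen["complex"] = True
        elif isinstance(v, float):
            seen["float"] = True
        elif isinstance(v, datetime):
            seen["datetime"] = True
        else:
            return np.object_
    numeric = seen["complex"] or seen["float"] or seen["int"]
    if seen["bool"]:
        # bools only mix with other bools
        return np.object_ if (numeric or seen["datetime"] or seen["null"]) else np.bool_
    if seen["datetime"]:
        # datetimes only mix with other datetimes
        return np.object_ if (numeric or seen["null"]) else np.datetime64
    if seen["complex"]:
        return np.complex128
    if seen["float"] or (seen["int"] and seen["null"]):
        return np.float64                  # null promotes int to float
    if seen["int"]:
        return np.int64
    return np.object_
```

For instance, `[2**63 - 1]` stays `int64` while `[2**63]` overflows to `object`, matching the new `test_constructor_with_convert` cases.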
| https://api.github.com/repos/pandas-dev/pandas/pulls/2846 | 2013-02-11T21:48:45Z | 2013-02-14T22:17:49Z | 2013-02-14T22:17:49Z | 2014-06-24T15:33:33Z |
DOC: migrated 0.10.2 remaining changes to 0.11.0 | diff --git a/doc/source/io.rst b/doc/source/io.rst
index a2f30dc14e29f..ebd6f5c77c658 100644
--- a/doc/source/io.rst
+++ b/doc/source/io.rst
@@ -1166,7 +1166,7 @@ storing/selecting from homogeneous index DataFrames.
store.append('df_mi',df_mi)
store.select('df_mi')
- # the levels are automatically included as data columns
+ # the levels are automatically included as data columns
store.select('df_mi', Term('foo=bar'))
diff --git a/doc/source/v0.10.2.txt b/doc/source/v0.10.2.txt
deleted file mode 100644
index e9fed5b36f3cd..0000000000000
--- a/doc/source/v0.10.2.txt
+++ /dev/null
@@ -1,18 +0,0 @@
-.. _whatsnew_0102:
-
-v0.10.2 (February ??, 2013)
----------------------------
-
-This is a minor release from 0.10.1 and includes many new features and
-enhancements along with a large number of bug fixes. There are also a number of
-important API changes that long-time pandas users should pay close attention
-to.
-
-**Enhancements**
-
- - In ``HDFStore``, provide dotted attribute access to ``get`` from stores (e.g. store.df == store['df'])
-
-See the `full release notes
-<https://github.com/pydata/pandas/blob/master/RELEASE.rst>`__ or issue tracker
-on GitHub for a complete list.
-
diff --git a/doc/source/v0.11.0.txt b/doc/source/v0.11.0.txt
index 539933d872fe3..5254f0e1d6c93 100644
--- a/doc/source/v0.11.0.txt
+++ b/doc/source/v0.11.0.txt
@@ -87,6 +87,8 @@ New features
**Enhancements**
+ - In ``HDFStore``, provide dotted attribute access to ``get`` from stores (e.g. store.df == store['df'])
+
**Bug Fixes**
See the `full release notes
| https://api.github.com/repos/pandas-dev/pandas/pulls/2840 | 2013-02-11T13:18:09Z | 2013-02-11T13:54:17Z | 2013-02-11T13:54:17Z | 2013-02-11T13:54:18Z |