DEPR: Change boxplot return_type kwarg
diff --git a/doc/source/visualization.rst b/doc/source/visualization.rst index 16ef76638ec5b..6e05c3ff0457a 100644 --- a/doc/source/visualization.rst +++ b/doc/source/visualization.rst @@ -456,28 +456,29 @@ columns: .. _visualization.box.return: -Basically, plot functions return :class:`matplotlib Axes <matplotlib.axes.Axes>` as a return value. -In ``boxplot``, the return type can be changed by argument ``return_type``, and whether the subplots is enabled (``subplots=True`` in ``plot`` or ``by`` is specified in ``boxplot``). +.. warning:: -When ``subplots=False`` / ``by`` is ``None``: + The default changed from ``'dict'`` to ``'axes'`` in version 0.19.0. -* if ``return_type`` is ``'dict'``, a dictionary containing the :class:`matplotlib Lines <matplotlib.lines.Line2D>` is returned. The keys are "boxes", "caps", "fliers", "medians", and "whiskers". - This is the default of ``boxplot`` in historical reason. - Note that ``plot.box()`` returns ``Axes`` by default same as other plots. -* if ``return_type`` is ``'axes'``, a :class:`matplotlib Axes <matplotlib.axes.Axes>` containing the boxplot is returned. -* if ``return_type`` is ``'both'`` a namedtuple containing the :class:`matplotlib Axes <matplotlib.axes.Axes>` - and :class:`matplotlib Lines <matplotlib.lines.Line2D>` is returned +In ``boxplot``, the return type can be controlled by the ``return_type``, keyword. The valid choices are ``{"axes", "dict", "both", None}``. +Faceting, created by ``DataFrame.boxplot`` with the ``by`` +keyword, will affect the output type as well: -When ``subplots=True`` / ``by`` is some column of the DataFrame: +================ ======= ========================== +``return_type=`` Faceted Output type +---------------- ------- -------------------------- -* A dict of ``return_type`` is returned, where the keys are the columns - of the DataFrame. The plot has a facet for each column of - the DataFrame, with a separate box for each value of ``by``. 
+``None`` No axes +``None`` Yes 2-D ndarray of axes +``'axes'`` No axes +``'axes'`` Yes Series of axes +``'dict'`` No dict of artists +``'dict'`` Yes Series of dicts of artists +``'both'`` No namedtuple +``'both'`` Yes Series of namedtuples +================ ======= ========================== -Finally, when calling boxplot on a :class:`Groupby` object, a dict of ``return_type`` -is returned, where the keys are the same as the Groupby object. The plot has a -facet for each key, with each facet containing a box for each column of the -DataFrame. +``Groupby.boxplot`` always returns a Series of ``return_type``. .. ipython:: python :okwarning: diff --git a/doc/source/whatsnew/v0.19.0.txt b/doc/source/whatsnew/v0.19.0.txt index a422e667e32a7..f02367a49d44d 100644 --- a/doc/source/whatsnew/v0.19.0.txt +++ b/doc/source/whatsnew/v0.19.0.txt @@ -494,6 +494,7 @@ API changes - ``__setitem__`` will no longer apply a callable rhs as a function instead of storing it. Call ``where`` directly to get the previous behavior. (:issue:`13299`) - Passing ``Period`` with multiple frequencies to normal ``Index`` now returns ``Index`` with ``object`` dtype (:issue:`13664`) - ``PeriodIndex.fillna`` with ``Period`` has different freq now coerces to ``object`` dtype (:issue:`13664`) +- Faceted boxplots from ``DataFrame.boxplot(by=col)`` now return a ``Series`` when ``return_type`` is not None. Previously these returned an ``OrderedDict``. Note that when ``return_type=None``, the default, these still return a 2-D NumPy array. (:issue:`12216`, :issue:`7096`) - More informative exceptions are passed through the csv parser. The exception type would now be the original exception type instead of ``CParserError``. (:issue:`13652`) - ``astype()`` will now accept a dict of column name to data types mapping as the ``dtype`` argument. 
(:issue:`12086`) - The ``pd.read_json`` and ``DataFrame.to_json`` has gained support for reading and writing json lines with ``lines`` option see :ref:`Line delimited json <io.jsonl>` (:issue:`9180`) @@ -1282,9 +1283,9 @@ Removal of prior version deprecations/changes Now legacy time rules raises ``ValueError``. For the list of currently supported offsets, see :ref:`here <timeseries.offset_aliases>` +- The default value for the ``return_type`` parameter for ``DataFrame.plot.box`` and ``DataFrame.boxplot`` changed from ``None`` to ``"axes"``. These methods will now return a matplotlib axes by default instead of a dictionary of artists. See :ref:`here <visualization.box.return>` (:issue:`6581`). - The ``tquery`` and ``uquery`` functions in the ``pandas.io.sql`` module are removed (:issue:`5950`). - .. _whatsnew_0190.performance: Performance Improvements diff --git a/pandas/tests/plotting/common.py b/pandas/tests/plotting/common.py index 7dcc3d6e5734f..9fe1d7cacd38f 100644 --- a/pandas/tests/plotting/common.py +++ b/pandas/tests/plotting/common.py @@ -5,8 +5,8 @@ import os import warnings -from pandas import DataFrame -from pandas.compat import zip, iteritems, OrderedDict +from pandas import DataFrame, Series +from pandas.compat import zip, iteritems from pandas.util.decorators import cache_readonly from pandas.types.api import is_list_like import pandas.util.testing as tm @@ -445,7 +445,8 @@ def _check_box_return_type(self, returned, return_type, expected_keys=None, self.assertIsInstance(r, Axes) return - self.assertTrue(isinstance(returned, OrderedDict)) + self.assertTrue(isinstance(returned, Series)) + self.assertEqual(sorted(returned.keys()), sorted(expected_keys)) for key, value in iteritems(returned): self.assertTrue(isinstance(value, types[return_type])) diff --git a/pandas/tests/plotting/test_boxplot_method.py b/pandas/tests/plotting/test_boxplot_method.py index d499540827ab0..333792c5ffdb2 100644 --- a/pandas/tests/plotting/test_boxplot_method.py +++ 
b/pandas/tests/plotting/test_boxplot_method.py @@ -92,6 +92,12 @@ def test_boxplot_legacy(self): lines = list(itertools.chain.from_iterable(d.values())) self.assertEqual(len(ax.get_lines()), len(lines)) + @slow + def test_boxplot_return_type_none(self): + # GH 12216; return_type=None & by=None -> axes + result = self.hist_df.boxplot() + self.assertTrue(isinstance(result, self.plt.Axes)) + @slow def test_boxplot_return_type_legacy(self): # API change in https://github.com/pydata/pandas/pull/7096 @@ -103,10 +109,8 @@ def test_boxplot_return_type_legacy(self): with tm.assertRaises(ValueError): df.boxplot(return_type='NOTATYPE') - with tm.assert_produces_warning(FutureWarning): - result = df.boxplot() - # change to Axes in future - self._check_box_return_type(result, 'dict') + result = df.boxplot() + self._check_box_return_type(result, 'axes') with tm.assert_produces_warning(False): result = df.boxplot(return_type='dict') @@ -140,6 +144,7 @@ def _check_ax_limits(col, ax): p = df.boxplot(['height', 'weight', 'age'], by='category') height_ax, weight_ax, age_ax = p[0, 0], p[0, 1], p[1, 0] dummy_ax = p[1, 1] + _check_ax_limits(df['height'], height_ax) _check_ax_limits(df['weight'], weight_ax) _check_ax_limits(df['age'], age_ax) @@ -163,8 +168,7 @@ def test_boxplot_legacy(self): grouped = self.hist_df.groupby(by='gender') with tm.assert_produces_warning(UserWarning): axes = _check_plot_works(grouped.boxplot, return_type='axes') - self._check_axes_shape(list(axes.values()), axes_num=2, layout=(1, 2)) - + self._check_axes_shape(list(axes.values), axes_num=2, layout=(1, 2)) axes = _check_plot_works(grouped.boxplot, subplots=False, return_type='axes') self._check_axes_shape(axes, axes_num=1, layout=(1, 1)) @@ -175,7 +179,7 @@ def test_boxplot_legacy(self): grouped = df.groupby(level=1) with tm.assert_produces_warning(UserWarning): axes = _check_plot_works(grouped.boxplot, return_type='axes') - self._check_axes_shape(list(axes.values()), axes_num=10, layout=(4, 3)) + 
self._check_axes_shape(list(axes.values), axes_num=10, layout=(4, 3)) axes = _check_plot_works(grouped.boxplot, subplots=False, return_type='axes') @@ -184,8 +188,7 @@ def test_boxplot_legacy(self): grouped = df.unstack(level=1).groupby(level=0, axis=1) with tm.assert_produces_warning(UserWarning): axes = _check_plot_works(grouped.boxplot, return_type='axes') - self._check_axes_shape(list(axes.values()), axes_num=3, layout=(2, 2)) - + self._check_axes_shape(list(axes.values), axes_num=3, layout=(2, 2)) axes = _check_plot_works(grouped.boxplot, subplots=False, return_type='axes') self._check_axes_shape(axes, axes_num=1, layout=(1, 1)) @@ -226,8 +229,7 @@ def test_grouped_box_return_type(self): expected_keys=['height', 'weight', 'category']) # now for groupby - with tm.assert_produces_warning(FutureWarning, check_stacklevel=False): - result = df.groupby('gender').boxplot() + result = df.groupby('gender').boxplot(return_type='dict') self._check_box_return_type( result, 'dict', expected_keys=['Male', 'Female']) @@ -347,7 +349,7 @@ def test_grouped_box_multiple_axes(self): with tm.assert_produces_warning(UserWarning): returned = df.boxplot(column=['height', 'weight', 'category'], by='gender', return_type='axes', ax=axes[0]) - returned = np.array(list(returned.values())) + returned = np.array(list(returned.values)) self._check_axes_shape(returned, axes_num=3, layout=(1, 3)) self.assert_numpy_array_equal(returned, axes[0]) self.assertIs(returned[0].figure, fig) @@ -357,7 +359,7 @@ def test_grouped_box_multiple_axes(self): returned = df.groupby('classroom').boxplot( column=['height', 'weight', 'category'], return_type='axes', ax=axes[1]) - returned = np.array(list(returned.values())) + returned = np.array(list(returned.values)) self._check_axes_shape(returned, axes_num=3, layout=(1, 3)) self.assert_numpy_array_equal(returned, axes[1]) self.assertIs(returned[0].figure, fig) diff --git a/pandas/tests/plotting/test_frame.py b/pandas/tests/plotting/test_frame.py index 
91be0a7a73e35..4d0c1e9213b17 100644 --- a/pandas/tests/plotting/test_frame.py +++ b/pandas/tests/plotting/test_frame.py @@ -1221,6 +1221,9 @@ def test_boxplot_return_type(self): result = df.plot.box(return_type='axes') self._check_box_return_type(result, 'axes') + result = df.plot.box() # default axes + self._check_box_return_type(result, 'axes') + result = df.plot.box(return_type='both') self._check_box_return_type(result, 'both') @@ -1230,7 +1233,7 @@ def test_boxplot_subplots_return_type(self): # normal style: return_type=None result = df.plot.box(subplots=True) - self.assertIsInstance(result, np.ndarray) + self.assertIsInstance(result, Series) self._check_box_return_type(result, None, expected_keys=[ 'height', 'weight', 'category']) diff --git a/pandas/tools/plotting.py b/pandas/tools/plotting.py index 1abd11017dbfe..7fd0b1044f9d7 100644 --- a/pandas/tools/plotting.py +++ b/pandas/tools/plotting.py @@ -2247,7 +2247,7 @@ class BoxPlot(LinePlot): # namedtuple to hold results BP = namedtuple("Boxplot", ['ax', 'lines']) - def __init__(self, data, return_type=None, **kwargs): + def __init__(self, data, return_type='axes', **kwargs): # Do not call LinePlot.__init__ which may fill nan if return_type not in self._valid_return_types: raise ValueError( @@ -2266,7 +2266,7 @@ def _args_adjust(self): self.sharey = False @classmethod - def _plot(cls, ax, y, column_num=None, return_type=None, **kwds): + def _plot(cls, ax, y, column_num=None, return_type='axes', **kwds): if y.ndim == 2: y = [remove_na(v) for v in y] # Boxplot fails with empty arrays, so need to add a NaN @@ -2339,7 +2339,7 @@ def maybe_color_bp(self, bp): def _make_plot(self): if self.subplots: - self._return_obj = compat.OrderedDict() + self._return_obj = Series() for i, (label, y) in enumerate(self._iter_data()): ax = self._get_ax(i) @@ -2691,14 +2691,17 @@ def plot_series(data, kind='line', ax=None, # Series unique grid : Setting this to True will show the grid layout : tuple (optional) (rows, columns) for 
the layout of the plot - return_type : {'axes', 'dict', 'both'}, default 'dict' - The kind of object to return. 'dict' returns a dictionary - whose values are the matplotlib Lines of the boxplot; + return_type : {None, 'axes', 'dict', 'both'}, default None + The kind of object to return. The default is ``axes`` 'axes' returns the matplotlib axes the boxplot is drawn on; + 'dict' returns a dictionary whose values are the matplotlib + Lines of the boxplot; 'both' returns a namedtuple with the axes and dict. - When grouping with ``by``, a dict mapping columns to ``return_type`` - is returned. + When grouping with ``by``, a Series mapping columns to ``return_type`` + is returned, unless ``return_type`` is None, in which case a NumPy + array of axes is returned with the same shape as ``layout``. + See the prose documentation for more. kwds : other plotting keyword arguments to be passed to matplotlib boxplot function @@ -2724,7 +2727,7 @@ def boxplot(data, column=None, by=None, ax=None, fontsize=None, # validate return_type: if return_type not in BoxPlot._valid_return_types: - raise ValueError("return_type must be {None, 'axes', 'dict', 'both'}") + raise ValueError("return_type must be {'axes', 'dict', 'both'}") from pandas import Series, DataFrame if isinstance(data, Series): @@ -2769,23 +2772,19 @@ def plot_group(keys, values, ax): columns = [column] if by is not None: + # Prefer array return type for 2-D plots to match the subplot layout + # https://github.com/pydata/pandas/pull/12216#issuecomment-241175580 result = _grouped_plot_by_column(plot_group, data, columns=columns, by=by, grid=grid, figsize=figsize, ax=ax, layout=layout, return_type=return_type) else: + if return_type is None: + return_type = 'axes' if layout is not None: raise ValueError("The 'layout' keyword is not supported when " "'by' is None") - if return_type is None: - msg = ("\nThe default value for 'return_type' will change to " - "'axes' in a future release.\n To use the future behavior " - "now, 
set return_type='axes'.\n To keep the previous " - "behavior and silence this warning, set " - "return_type='dict'.") - warnings.warn(msg, FutureWarning, stacklevel=3) - return_type = 'dict' if ax is None: ax = _gca() data = data._get_numeric_data() @@ -3104,12 +3103,12 @@ def boxplot_frame_groupby(grouped, subplots=True, column=None, fontsize=None, figsize=figsize, layout=layout) axes = _flatten(axes) - ret = compat.OrderedDict() + ret = Series() for (key, group), ax in zip(grouped, axes): d = group.boxplot(ax=ax, column=column, fontsize=fontsize, rot=rot, grid=grid, **kwds) ax.set_title(pprint_thing(key)) - ret[key] = d + ret.loc[key] = d fig.subplots_adjust(bottom=0.15, top=0.9, left=0.1, right=0.9, wspace=0.2) else: @@ -3175,7 +3174,9 @@ def _grouped_plot_by_column(plotf, data, columns=None, by=None, _axes = _flatten(axes) - result = compat.OrderedDict() + result = Series() + ax_values = [] + for i, col in enumerate(columns): ax = _axes[i] gp_col = grouped[col] @@ -3183,9 +3184,11 @@ def _grouped_plot_by_column(plotf, data, columns=None, by=None, re_plotf = plotf(keys, values, ax, **kwargs) ax.set_title(col) ax.set_xlabel(pprint_thing(by)) - result[col] = re_plotf + ax_values.append(re_plotf) ax.grid(grid) + result = Series(ax_values, index=columns) + # Return axes in multiplot case, maybe revisit later # 985 if return_type is None: result = axes diff --git a/pandas/util/testing.py b/pandas/util/testing.py index f5a93d1f17d00..57bb01e5e0406 100644 --- a/pandas/util/testing.py +++ b/pandas/util/testing.py @@ -880,12 +880,12 @@ def assert_attr_equal(attr, left, right, obj='Attributes'): def assert_is_valid_plot_return_object(objs): import matplotlib.pyplot as plt - if isinstance(objs, np.ndarray): - for el in objs.flat: - assert isinstance(el, plt.Axes), ('one of \'objs\' is not a ' - 'matplotlib Axes instance, ' - 'type encountered {0!r}' - ''.format(el.__class__.__name__)) + if isinstance(objs, (pd.Series, np.ndarray)): + for el in objs.ravel(): + msg = ('one 
of \'objs\' is not a matplotlib Axes instance, ' + 'type encountered {0!r}') + assert isinstance(el, (plt.Axes, dict)), msg.format( + el.__class__.__name__) else: assert isinstance(objs, (plt.Artist, tuple, dict)), \ ('objs is neither an ndarray of Artist instances nor a '
Part of https://github.com/pydata/pandas/issues/6581. Deprecation started in https://github.com/pydata/pandas/pull/7096.

Changes the default value of `return_type` in `DataFrame.boxplot` and `DataFrame.plot.box` from `None` to `'axes'`.
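The behavior change described above can be illustrated with a small sketch (column names are made up for the example; pandas >= 0.19 with matplotlib installed is assumed):

```python
# Sketch of the new default: boxplot now returns an Axes like other plot methods.
import matplotlib
matplotlib.use("Agg")  # headless backend so no display is needed
import matplotlib.pyplot as plt
import pandas as pd

df = pd.DataFrame({"height": [1.5, 1.7, 1.6], "weight": [60, 72, 65]})

# New default (return_type='axes'): a matplotlib Axes is returned.
ax = df.boxplot()
print(type(ax))

# The previous default is still available explicitly: a dict of matplotlib
# Lines keyed by artist group ("boxes", "caps", "fliers", "medians", ...).
d = df.boxplot(return_type="dict")
print(sorted(d.keys()))
```

Code that relied on the old dict default keeps working by passing `return_type='dict'` explicitly; only the implicit default changed.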
url: https://api.github.com/repos/pandas-dev/pandas/pulls/12216
created_at: 2016-02-03T02:51:32Z
closed_at: 2016-09-04T10:21:01Z
merged_at: 2016-09-04T10:21:01Z
updated_at: 2017-04-05T02:08:35Z
STYLE: final flake8 fixes, add back check for travis-ci
diff --git a/.travis.yml b/.travis.yml index 7dbc2fb821162..565d52184d0f1 100644 --- a/.travis.yml +++ b/.travis.yml @@ -164,11 +164,10 @@ script: - echo "script" - ci/run_build_docs.sh - ci/script.sh + - ci/lint.sh # nothing here, or failed tests won't fail travis after_script: - ci/install_test.sh - source activate pandas && ci/print_versions.py - ci/print_skipped.py /tmp/nosetests.xml - - ci/lint.sh - - ci/lint_ok_for_now.sh diff --git a/ci/lint.sh b/ci/lint.sh index 97d318b48469e..4350ecd8b11ed 100755 --- a/ci/lint.sh +++ b/ci/lint.sh @@ -4,17 +4,14 @@ echo "inside $0" source activate pandas -for path in 'core' +RET=0 +for path in 'core' 'io' 'stats' 'compat' 'sparse' 'tools' 'tseries' 'tests' 'computation' 'util' do echo "linting -> pandas/$path" - flake8 pandas/$path --filename '*.py' --statistics -q + flake8 pandas/$path --filename '*.py' + if [ $? -ne "0" ]; then + RET=1 + fi done -RET="$?" - -# we are disabling the return code for now -# to have Travis-CI pass. When the code -# passes linting, re-enable -#exit "$RET" - -exit 0 +exit $RET diff --git a/ci/lint_ok_for_now.sh b/ci/lint_ok_for_now.sh deleted file mode 100755 index eba667fadde06..0000000000000 --- a/ci/lint_ok_for_now.sh +++ /dev/null @@ -1,20 +0,0 @@ -#!/bin/bash - -echo "inside $0" - -source activate pandas - -for path in 'io' 'stats' 'computation' 'tseries' 'util' 'compat' 'tools' 'sparse' 'tests' -do - echo "linting [ok_for_now] -> pandas/$path" - flake8 pandas/$path --filename '*.py' --statistics -q -done - -RET="$?" - -# we are disabling the return code for now -# to have Travis-CI pass. 
When the code -# passes linting, re-enable -#exit "$RET" - -exit 0 diff --git a/pandas/compat/numpy_compat.py b/pandas/compat/numpy_compat.py index f7f5da40d01c5..e4aeb05177aa4 100644 --- a/pandas/compat/numpy_compat.py +++ b/pandas/compat/numpy_compat.py @@ -19,7 +19,8 @@ _np_version_under1p11 = LooseVersion(_np_version) < '1.11' if LooseVersion(_np_version) < '1.7.0': - raise ImportError('this version of pandas is incompatible with numpy < 1.7.0\n' + raise ImportError('this version of pandas is incompatible with ' + 'numpy < 1.7.0\n' 'your numpy version is {0}.\n' 'Please upgrade numpy to >= 1.7.0 to use ' 'this pandas version'.format(_np_version)) @@ -61,7 +62,7 @@ def np_array_datetime64_compat(arr, *args, **kwargs): isinstance(arr, string_and_binary_types): arr = [tz_replacer(s) for s in arr] else: - arr = tz_replacer(s) + arr = tz_replacer(arr) return np.array(arr, *args, **kwargs) diff --git a/pandas/core/frame.py b/pandas/core/frame.py index 33c3a3638ee72..27e932cb54b95 100644 --- a/pandas/core/frame.py +++ b/pandas/core/frame.py @@ -51,7 +51,6 @@ from pandas.tseries.index import DatetimeIndex from pandas.tseries.tdi import TimedeltaIndex -import pandas.core.algorithms as algos import pandas.core.base as base import pandas.core.common as com import pandas.core.format as fmt diff --git a/pandas/core/groupby.py b/pandas/core/groupby.py index fbc25e7fdb98d..698bbcb2538b9 100644 --- a/pandas/core/groupby.py +++ b/pandas/core/groupby.py @@ -2310,10 +2310,9 @@ def _get_grouper(obj, key=None, axis=0, level=None, sort=True): except Exception: all_in_columns = False - if (not any_callable and not all_in_columns - and not any_arraylike and not any_groupers - and match_axis_length - and level is None): + if not any_callable and not all_in_columns and \ + not any_arraylike and not any_groupers and \ + match_axis_length and level is None: keys = [com._asarray_tuplesafe(keys)] if isinstance(level, (tuple, list)): @@ -3695,7 +3694,7 @@ def count(self): return 
self._wrap_agged_blocks(data.items, list(blk)) -from pandas.tools.plotting import boxplot_frame_groupby +from pandas.tools.plotting import boxplot_frame_groupby # noqa DataFrameGroupBy.boxplot = boxplot_frame_groupby diff --git a/pandas/core/internals.py b/pandas/core/internals.py index 6e9005395281c..10053d33d6b51 100644 --- a/pandas/core/internals.py +++ b/pandas/core/internals.py @@ -3981,7 +3981,7 @@ def form_blocks(arrays, names, axes): klass=DatetimeTZBlock, fastpath=True, placement=[i], ) - for i, names, array in datetime_tz_items] + for i, _, array in datetime_tz_items] blocks.extend(dttz_blocks) if len(bool_items): @@ -3999,7 +3999,7 @@ def form_blocks(arrays, names, axes): if len(cat_items) > 0: cat_blocks = [make_block(array, klass=CategoricalBlock, fastpath=True, placement=[i]) - for i, names, array in cat_items] + for i, _, array in cat_items] blocks.extend(cat_blocks) if len(extra_locs): diff --git a/pandas/core/reshape.py b/pandas/core/reshape.py index 05257dd0ac625..a31efc63269b6 100644 --- a/pandas/core/reshape.py +++ b/pandas/core/reshape.py @@ -282,7 +282,7 @@ def _unstack_multiple(data, clocs): for i in range(len(clocs)): val = clocs[i] result = result.unstack(val) - clocs = [val if i > val else val - 1 for val in clocs] + clocs = [v if i > v else v - 1 for v in clocs] return result diff --git a/pandas/sparse/panel.py b/pandas/sparse/panel.py index 4bacaadd915d1..524508b828980 100644 --- a/pandas/sparse/panel.py +++ b/pandas/sparse/panel.py @@ -520,10 +520,10 @@ def _convert_frames(frames, index, columns, fill_value=np.nan, kind='block'): output[item] = df if index is None: - all_indexes = [df.index for df in output.values()] + all_indexes = [x.index for x in output.values()] index = _get_combined_index(all_indexes) if columns is None: - all_columns = [df.columns for df in output.values()] + all_columns = [x.columns for x in output.values()] columns = _get_combined_index(all_columns) index = _ensure_index(index) diff --git 
a/pandas/tests/frame/test_apply.py b/pandas/tests/frame/test_apply.py index 818e2fb89008d..e68b94342985d 100644 --- a/pandas/tests/frame/test_apply.py +++ b/pandas/tests/frame/test_apply.py @@ -262,8 +262,8 @@ def transform(row): return row def transform2(row): - if (notnull(row['C']) and row['C'].startswith('shin') - and row['A'] == 'foo'): + if (notnull(row['C']) and row['C'].startswith('shin') and + row['A'] == 'foo'): row['D'] = 7 return row diff --git a/pandas/tests/frame/test_indexing.py b/pandas/tests/frame/test_indexing.py index ff9567c8a40b1..6077c8e6f63ee 100644 --- a/pandas/tests/frame/test_indexing.py +++ b/pandas/tests/frame/test_indexing.py @@ -2178,8 +2178,8 @@ def test_where(self): def _safe_add(df): # only add to the numeric items def is_ok(s): - return (issubclass(s.dtype.type, (np.integer, np.floating)) - and s.dtype != 'uint8') + return (issubclass(s.dtype.type, (np.integer, np.floating)) and + s.dtype != 'uint8') return DataFrame(dict([(c, s + 1) if is_ok(s) else (c, s) for c, s in compat.iteritems(df)])) diff --git a/pandas/tests/frame/test_query_eval.py b/pandas/tests/frame/test_query_eval.py index 52594b982a0d0..6db507f0e4151 100644 --- a/pandas/tests/frame/test_query_eval.py +++ b/pandas/tests/frame/test_query_eval.py @@ -581,8 +581,7 @@ def test_chained_cmp_and_in(self): df = DataFrame(randn(100, len(cols)), columns=cols) res = df.query('a < b < c and a not in b not in c', engine=engine, parser=parser) - ind = ((df.a < df.b) & (df.b < df.c) & ~df.b.isin(df.a) & - ~df.c.isin(df.b)) + ind = (df.a < df.b) & (df.b < df.c) & ~df.b.isin(df.a) & ~df.c.isin(df.b) # noqa expec = df[ind] assert_frame_equal(res, expec) diff --git a/pandas/tests/frame/test_repr_info.py b/pandas/tests/frame/test_repr_info.py index c5c005beeb69e..2dda4a37e6449 100644 --- a/pandas/tests/frame/test_repr_info.py +++ b/pandas/tests/frame/test_repr_info.py @@ -328,13 +328,13 @@ def test_info_memory_usage(self): res = buf.getvalue().splitlines() 
self.assertTrue(re.match(r"memory usage: [^+]+$", res[-1])) - self.assertTrue(df_with_object_index.memory_usage(index=True, - deep=True).sum() - > df_with_object_index.memory_usage(index=True).sum()) + self.assertGreater(df_with_object_index.memory_usage(index=True, + deep=True).sum(), + df_with_object_index.memory_usage(index=True).sum()) df_object = pd.DataFrame({'a': ['a']}) - self.assertTrue(df_object.memory_usage(deep=True).sum() - > df_object.memory_usage().sum()) + self.assertGreater(df_object.memory_usage(deep=True).sum(), + df_object.memory_usage().sum()) # Test a DataFrame with duplicate columns dtypes = ['int64', 'int64', 'int64', 'float64'] diff --git a/pandas/tests/frame/test_reshape.py b/pandas/tests/frame/test_reshape.py index c0963d885a08d..e7d64324e6590 100644 --- a/pandas/tests/frame/test_reshape.py +++ b/pandas/tests/frame/test_reshape.py @@ -10,7 +10,8 @@ import numpy as np from pandas.compat import u -from pandas import DataFrame, Index, Series, MultiIndex, date_range, Timedelta, Period +from pandas import (DataFrame, Index, Series, MultiIndex, date_range, + Timedelta, Period) import pandas as pd from pandas.util.testing import (assert_series_equal, diff --git a/pandas/tests/test_base.py b/pandas/tests/test_base.py index 10a5b9dbefe02..99f894bfd3320 100644 --- a/pandas/tests/test_base.py +++ b/pandas/tests/test_base.py @@ -332,7 +332,7 @@ def test_none_comparison(self): self.assertTrue(result.iat[0]) self.assertTrue(result.iat[1]) - result = None == o + result = None == o # noqa self.assertFalse(result.iat[0]) self.assertFalse(result.iat[1]) diff --git a/pandas/tests/test_groupby.py b/pandas/tests/test_groupby.py index 0f99d367de6fd..c175630748b38 100644 --- a/pandas/tests/test_groupby.py +++ b/pandas/tests/test_groupby.py @@ -3858,8 +3858,8 @@ def test_groupby_categorical(self): np.arange(4).repeat(8), levels, ordered=True) exp = CategoricalIndex(expc) self.assert_index_equal(desc_result.index.get_level_values(0), exp) - exp = Index(['count', 
'mean', 'std', 'min', '25%', '50%', '75%', 'max'] - * 4) + exp = Index(['count', 'mean', 'std', 'min', '25%', '50%', + '75%', 'max'] * 4) self.assert_index_equal(desc_result.index.get_level_values(1), exp) def test_groupby_datetime_categorical(self): @@ -3899,8 +3899,8 @@ def test_groupby_datetime_categorical(self): np.arange(4).repeat(8), levels, ordered=True) exp = CategoricalIndex(expc) self.assert_index_equal(desc_result.index.get_level_values(0), exp) - exp = Index(['count', 'mean', 'std', 'min', '25%', '50%', '75%', 'max'] - * 4) + exp = Index(['count', 'mean', 'std', 'min', '25%', '50%', + '75%', 'max'] * 4) self.assert_index_equal(desc_result.index.get_level_values(1), exp) def test_groupby_categorical_index(self): diff --git a/pandas/tests/test_multilevel.py b/pandas/tests/test_multilevel.py index cd9f44317da49..9f8d672723954 100644 --- a/pandas/tests/test_multilevel.py +++ b/pandas/tests/test_multilevel.py @@ -105,8 +105,8 @@ def test_append_index(self): expected = Index._simple_new( np.array([(1.1, datetime.datetime(2011, 1, 1, tzinfo=tz), 'A'), (1.2, datetime.datetime(2011, 1, 2, tzinfo=tz), 'B'), - (1.3, datetime.datetime(2011, 1, 3, tzinfo=tz), 'C')] - + expected_tuples), None) + (1.3, datetime.datetime(2011, 1, 3, tzinfo=tz), 'C')] + + expected_tuples), None) self.assertTrue(result.equals(expected)) def test_dataframe_constructor(self): diff --git a/pandas/tests/test_style.py b/pandas/tests/test_style.py index b9ca3f331711d..9a427cb26520c 100644 --- a/pandas/tests/test_style.py +++ b/pandas/tests/test_style.py @@ -1,6 +1,13 @@ import os from nose import SkipTest +import copy +import numpy as np +import pandas as pd +from pandas import DataFrame +from pandas.util.testing import TestCase +import pandas.util.testing as tm + # this is a mess. Getting failures on a python 2.7 build with # whenever we try to import jinja, whether it's installed or not. 
# so we're explicitly skipping that one *before* we try to import @@ -14,14 +21,6 @@ except ImportError: raise SkipTest("No Jinja2") -import copy - -import numpy as np -import pandas as pd -from pandas import DataFrame -from pandas.util.testing import TestCase -import pandas.util.testing as tm - class TestStyler(TestCase): @@ -196,8 +195,8 @@ def test_apply_subset(self): expected = dict(((r, c), ['color: baz']) for r, row in enumerate(self.df.index) for c, col in enumerate(self.df.columns) - if row in self.df.loc[slice_].index - and col in self.df.loc[slice_].columns) + if row in self.df.loc[slice_].index and + col in self.df.loc[slice_].columns) self.assertEqual(result, expected) def test_applymap_subset(self): @@ -213,8 +212,8 @@ def f(x): expected = dict(((r, c), ['foo: bar']) for r, row in enumerate(self.df.index) for c, col in enumerate(self.df.columns) - if row in self.df.loc[slice_].index - and col in self.df.loc[slice_].columns) + if row in self.df.loc[slice_].index and + col in self.df.loc[slice_].columns) self.assertEqual(result, expected) def test_empty(self): diff --git a/pandas/tests/test_window.py b/pandas/tests/test_window.py index 65f90c320bb68..0b8de24a1bd42 100644 --- a/pandas/tests/test_window.py +++ b/pandas/tests/test_window.py @@ -1002,32 +1002,31 @@ def simple_wma(s, w): return (s.multiply(w).cumsum() / w.cumsum()).fillna(method='ffill') for (s, adjust, ignore_na, w) in [ - (s0, True, False, [np.nan, (1. - alpha), 1.]), - (s0, True, True, [np.nan, (1. - alpha), 1.]), - (s0, False, False, [np.nan, (1. - alpha), alpha]), - (s0, False, True, [np.nan, (1. - alpha), alpha]), - (s1, True, False, [(1. - alpha) ** 2, np.nan, 1.]), - (s1, True, True, [(1. - alpha), np.nan, 1.]), - (s1, False, False, [(1. - alpha) ** 2, np.nan, alpha]), - (s1, False, True, [(1. - alpha), np.nan, alpha]), - (s2, True, False, [np.nan, (1. - alpha) - ** 3, np.nan, np.nan, 1., np.nan]), - (s2, True, True, [np.nan, (1. 
- alpha), - np.nan, np.nan, 1., np.nan]), - (s2, False, False, [np.nan, (1. - alpha) - ** 3, np.nan, np.nan, alpha, np.nan]), - (s2, False, True, [np.nan, (1. - alpha), - np.nan, np.nan, alpha, np.nan]), - (s3, True, False, [(1. - alpha) - ** 3, np.nan, (1. - alpha), 1.]), - (s3, True, True, [(1. - alpha) ** - 2, np.nan, (1. - alpha), 1.]), - (s3, False, False, [(1. - alpha) ** 3, np.nan, - (1. - alpha) * alpha, - alpha * ((1. - alpha) ** 2 + alpha)]), - (s3, False, True, [(1. - alpha) ** 2, - np.nan, (1. - alpha) * alpha, alpha]), - ]: + (s0, True, False, [np.nan, (1. - alpha), 1.]), + (s0, True, True, [np.nan, (1. - alpha), 1.]), + (s0, False, False, [np.nan, (1. - alpha), alpha]), + (s0, False, True, [np.nan, (1. - alpha), alpha]), + (s1, True, False, [(1. - alpha) ** 2, np.nan, 1.]), + (s1, True, True, [(1. - alpha), np.nan, 1.]), + (s1, False, False, [(1. - alpha) ** 2, np.nan, alpha]), + (s1, False, True, [(1. - alpha), np.nan, alpha]), + (s2, True, False, [np.nan, (1. - alpha) ** + 3, np.nan, np.nan, 1., np.nan]), + (s2, True, True, [np.nan, (1. - alpha), + np.nan, np.nan, 1., np.nan]), + (s2, False, False, [np.nan, (1. - alpha) ** + 3, np.nan, np.nan, alpha, np.nan]), + (s2, False, True, [np.nan, (1. - alpha), + np.nan, np.nan, alpha, np.nan]), + (s3, True, False, [(1. - alpha) ** + 3, np.nan, (1. - alpha), 1.]), + (s3, True, True, [(1. - alpha) ** + 2, np.nan, (1. - alpha), 1.]), + (s3, False, False, [(1. - alpha) ** 3, np.nan, + (1. - alpha) * alpha, + alpha * ((1. - alpha) ** 2 + alpha)]), + (s3, False, True, [(1. - alpha) ** 2, + np.nan, (1. 
- alpha) * alpha, alpha])]: expected = simple_wma(s, Series(w)) result = s.ewm(com=com, adjust=adjust, ignore_na=ignore_na).mean() diff --git a/pandas/tools/merge.py b/pandas/tools/merge.py index cac5d8ae3cf51..82fdf0a3d3b46 100644 --- a/pandas/tools/merge.py +++ b/pandas/tools/merge.py @@ -470,8 +470,7 @@ def _get_merge_keys(self): def _validate_specification(self): # Hm, any way to make this logic less complicated?? - if (self.on is None and self.left_on is None - and self.right_on is None): + if self.on is None and self.left_on is None and self.right_on is None: if self.left_index and self.right_index: self.left_on, self.right_on = (), () @@ -1185,7 +1184,7 @@ def _make_concat_multiindex(indexes, keys, levels=None, names=None): names = list(names) else: # make sure that all of the passed indices have the same nlevels - if not len(set([i.nlevels for i in indexes])) == 1: + if not len(set([idx.nlevels for idx in indexes])) == 1: raise AssertionError("Cannot concat indices that do" " not have the same number of levels") diff --git a/pandas/tools/pivot.py b/pandas/tools/pivot.py index 120a30199e522..8e920e0a99a7a 100644 --- a/pandas/tools/pivot.py +++ b/pandas/tools/pivot.py @@ -357,8 +357,8 @@ def _convert_by(by): if by is None: by = [] elif (np.isscalar(by) or isinstance(by, (np.ndarray, Index, - Series, Grouper)) - or hasattr(by, '__call__')): + Series, Grouper)) or + hasattr(by, '__call__')): by = [by] else: by = list(by) diff --git a/pandas/tools/plotting.py b/pandas/tools/plotting.py index 8f7c0a2b1be9a..03d9fe75da8cc 100644 --- a/pandas/tools/plotting.py +++ b/pandas/tools/plotting.py @@ -108,8 +108,8 @@ def _mpl_ge_1_3_1(): import matplotlib # The or v[0] == '0' is because their versioneer is # messed up on dev - return (matplotlib.__version__ >= LooseVersion('1.3.1') - or matplotlib.__version__[0] == '0') + return (matplotlib.__version__ >= LooseVersion('1.3.1') or + matplotlib.__version__[0] == '0') except ImportError: return False @@ -117,8 +117,8 @@ def 
_mpl_ge_1_3_1(): def _mpl_ge_1_4_0(): try: import matplotlib - return (matplotlib.__version__ >= LooseVersion('1.4') - or matplotlib.__version__[0] == '0') + return (matplotlib.__version__ >= LooseVersion('1.4') or + matplotlib.__version__[0] == '0') except ImportError: return False @@ -126,8 +126,8 @@ def _mpl_ge_1_4_0(): def _mpl_ge_1_5_0(): try: import matplotlib - return (matplotlib.__version__ >= LooseVersion('1.5') - or matplotlib.__version__[0] == '0') + return (matplotlib.__version__ >= LooseVersion('1.5') or + matplotlib.__version__[0] == '0') except ImportError: return False @@ -1789,10 +1789,10 @@ def _update_stacker(cls, ax, stacking_id, values): ax._stacker_neg_prior[stacking_id] += values def _post_plot_logic(self, ax, data): - condition = (not self._use_dynamic_x() - and data.index.is_all_dates - and not self.subplots - or (self.subplots and self.sharex)) + condition = (not self._use_dynamic_x() and + data.index.is_all_dates and + not self.subplots or + (self.subplots and self.sharex)) index_name = self._get_index_name() @@ -2186,8 +2186,8 @@ def blank_labeler(label, value): # Blank out labels for values of 0 so they don't overlap # with nonzero wedges if labels is not None: - blabels = [blank_labeler(label, value) for - label, value in zip(labels, y)] + blabels = [blank_labeler(l, value) for + l, value in zip(labels, y)] else: blabels = None results = ax.pie(y, labels=blabels, **kwds) @@ -2331,7 +2331,7 @@ def _make_plot(self): self.maybe_color_bp(bp) self._return_obj = ret - labels = [l for l, y in self._iter_data()] + labels = [l for l, _ in self._iter_data()] labels = [com.pprint_thing(l) for l in labels] if not self.use_index: labels = [com.pprint_thing(key) for key in range(len(labels))] diff --git a/pandas/tseries/frequencies.py b/pandas/tseries/frequencies.py index d83b0e3f250ca..a150e55b06ff3 100644 --- a/pandas/tseries/frequencies.py +++ b/pandas/tseries/frequencies.py @@ -6,6 +6,7 @@ import numpy as np +import pandas.core.algorithms as 
algos from pandas.core.algorithms import unique from pandas.tseries.offsets import DateOffset from pandas.util.decorators import cache_readonly @@ -1100,8 +1101,6 @@ def _get_wom_rule(self): return 'WOM-%d%s' % (week, wd) -import pandas.core.algorithms as algos - class _TimedeltaFrequencyInferer(_FrequencyInferer): diff --git a/pandas/tseries/index.py b/pandas/tseries/index.py index a632913fbe4fe..77aa05bc1189d 100644 --- a/pandas/tseries/index.py +++ b/pandas/tseries/index.py @@ -1079,9 +1079,9 @@ def _maybe_utc_convert(self, other): def _wrap_joined_index(self, joined, other): name = self.name if self.name == other.name else None - if (isinstance(other, DatetimeIndex) - and self.offset == other.offset - and self._can_fast_union(other)): + if (isinstance(other, DatetimeIndex) and + self.offset == other.offset and + self._can_fast_union(other)): joined = self._shallow_copy(joined) joined.name = name return joined diff --git a/pandas/tseries/offsets.py b/pandas/tseries/offsets.py index 50c0a1ab7f336..1a666f5ed012b 100644 --- a/pandas/tseries/offsets.py +++ b/pandas/tseries/offsets.py @@ -13,6 +13,7 @@ from pandas.tslib import Timestamp, OutOfBoundsDatetime, Timedelta import functools +import operator __all__ = ['Day', 'BusinessDay', 'BDay', 'CustomBusinessDay', 'CDay', 'CBMonthEnd', 'CBMonthBegin', @@ -111,8 +112,8 @@ def wrapper(self, other): def _is_normalized(dt): - if (dt.hour != 0 or dt.minute != 0 or dt.second != 0 - or dt.microsecond != 0 or getattr(dt, 'nanosecond', 0) != 0): + if (dt.hour != 0 or dt.minute != 0 or dt.second != 0 or + dt.microsecond != 0 or getattr(dt, 'nanosecond', 0) != 0): return False return True @@ -268,8 +269,8 @@ def apply_index(self, i): if (self._use_relativedelta and set(self.kwds).issubset(relativedelta_fast)): - months = ((self.kwds.get('years', 0) * 12 - + self.kwds.get('months', 0)) * self.n) + months = ((self.kwds.get('years', 0) * 12 + + self.kwds.get('months', 0)) * self.n) if months: shifted = tslib.shift_months(i.asi8, 
months) i = i._shallow_copy(shifted) @@ -321,8 +322,8 @@ def __repr__(self): exclude = set(['n', 'inc', 'normalize']) attrs = [] for attr in sorted(self.__dict__): - if ((attr == 'kwds' and len(self.kwds) == 0) - or attr.startswith('_')): + if ((attr == 'kwds' and len(self.kwds) == 0) or + attr.startswith('_')): continue elif attr == 'kwds': kwds_new = {} @@ -2437,8 +2438,6 @@ def onOffset(self, dt): # --------------------------------------------------------------------- # Ticks -import operator - def _tick_comp(op): def f(self, other): diff --git a/pandas/tseries/tdi.py b/pandas/tseries/tdi.py index fafe13e1f2c09..9129a156848a9 100644 --- a/pandas/tseries/tdi.py +++ b/pandas/tseries/tdi.py @@ -522,8 +522,8 @@ def join(self, other, how='left', level=None, return_indexers=False): def _wrap_joined_index(self, joined, other): name = self.name if self.name == other.name else None - if (isinstance(other, TimedeltaIndex) and self.freq == other.freq - and self._can_fast_union(other)): + if (isinstance(other, TimedeltaIndex) and self.freq == other.freq and + self._can_fast_union(other)): joined = self._shallow_copy(joined, name=name) return joined else: diff --git a/pandas/tseries/tests/test_offsets.py b/pandas/tseries/tests/test_offsets.py index c46d21d2a8759..901d9f41e3949 100644 --- a/pandas/tseries/tests/test_offsets.py +++ b/pandas/tseries/tests/test_offsets.py @@ -4209,8 +4209,9 @@ def _test_offset(self, offset_name, offset_n, tstart, expected_utc_offset): self.assertTrue(timedelta(offset.kwds['days']) + tstart.date() == t.date()) # expect the same hour of day, minute, second, ... 
- self.assertTrue(t.hour == tstart.hour and t.minute == tstart.minute - and t.second == tstart.second) + self.assertTrue(t.hour == tstart.hour and + t.minute == tstart.minute and + t.second == tstart.second) elif offset_name in self.valid_date_offsets_singular: # expect the signular offset value to match between tstart and t datepart_offset = getattr(t, offset_name @@ -4223,8 +4224,10 @@ def _test_offset(self, offset_name, offset_n, tstart, expected_utc_offset): ).tz_convert('US/Pacific')) def _make_timestamp(self, string, hrs_offset, tz): - offset_string = '{hrs:02d}00'.format(hrs=hrs_offset) if hrs_offset >= 0 else \ - '-{hrs:02d}00'.format(hrs=-1 * hrs_offset) + if hrs_offset >= 0: + offset_string = '{hrs:02d}00'.format(hrs=hrs_offset) + else: + offset_string = '-{hrs:02d}00'.format(hrs=-1 * hrs_offset) return Timestamp(string + offset_string).tz_convert(tz) def test_fallback_plural(self): diff --git a/pandas/tseries/tests/test_resample.py b/pandas/tseries/tests/test_resample.py index b9326aa8e1c60..c761a35649ba9 100644 --- a/pandas/tseries/tests/test_resample.py +++ b/pandas/tseries/tests/test_resample.py @@ -1276,9 +1276,9 @@ def test_resample_anchored_multiday(self): s = pd.Series(np.random.randn(5), index=pd.date_range('2014-10-14 23:06:23.206', - periods=3, freq='400L') - | pd.date_range('2014-10-15 23:00:00', - periods=2, freq='2200L')) + periods=3, freq='400L') | + pd.date_range('2014-10-15 23:00:00', + periods=2, freq='2200L')) # Ensure left closing works result = s.resample('2200L').mean() diff --git a/pandas/tseries/tests/test_timeseries.py b/pandas/tseries/tests/test_timeseries.py index 9d80489904eb5..5665e502b8558 100644 --- a/pandas/tseries/tests/test_timeseries.py +++ b/pandas/tseries/tests/test_timeseries.py @@ -2451,7 +2451,7 @@ def test_constructor_datetime64_tzformat(self): self.assert_numpy_array_equal(idx.asi8, expected_i8.asi8) tm._skip_if_no_dateutil() - from dateutil.tz import tzoffset + # Non ISO 8601 format results in 
dateutil.tz.tzoffset for freq in ['AS', 'W-SUN']: idx = date_range('2013/1/1 0:00:00-5:00', '2016/1/1 23:59:59-5:00', diff --git a/pandas/tseries/tests/test_timeseries_legacy.py b/pandas/tseries/tests/test_timeseries_legacy.py index 96a9fd67733a1..086f23cd2d4fd 100644 --- a/pandas/tseries/tests/test_timeseries_legacy.py +++ b/pandas/tseries/tests/test_timeseries_legacy.py @@ -2,11 +2,8 @@ from datetime import datetime import sys import os - import nose - import numpy as np -randn = np.random.randn from pandas import (Index, Series, date_range, Timestamp, DatetimeIndex, Int64Index, to_datetime) @@ -24,6 +21,8 @@ import pandas.compat as compat from pandas.core.datetools import BDay +randn = np.random.randn + # infortunately, too much has changed to handle these legacy pickles # class TestLegacySupport(unittest.TestCase): diff --git a/pandas/tseries/tests/test_tslib.py b/pandas/tseries/tests/test_tslib.py index 5f3f1f09729be..4c6ec91ad1f18 100644 --- a/pandas/tseries/tests/test_tslib.py +++ b/pandas/tseries/tests/test_tslib.py @@ -29,8 +29,8 @@ def test_constructor(self): # confirm base representation is correct import calendar - self.assertEqual(calendar.timegm(base_dt.timetuple()) - * 1000000000, base_expected) + self.assertEqual(calendar.timegm(base_dt.timetuple()) * 1000000000, + base_expected) tests = [(base_str, base_dt, base_expected), ('2014-07-01 10:00', datetime.datetime(2014, 7, 1, 10), @@ -89,8 +89,8 @@ def test_constructor_with_stringoffset(self): # confirm base representation is correct import calendar - self.assertEqual(calendar.timegm(base_dt.timetuple()) - * 1000000000, base_expected) + self.assertEqual(calendar.timegm(base_dt.timetuple()) * 1000000000, + base_expected) tests = [(base_str, base_expected), ('2014-07-01 12:00:00+02:00', diff --git a/pandas/util/nosetester.py b/pandas/util/nosetester.py index fdee9be20afa3..445cb79978fc1 100644 --- a/pandas/util/nosetester.py +++ b/pandas/util/nosetester.py @@ -138,7 +138,8 @@ def test(self, 
label='fast', verbose=1, extra_argv=None, * 'full' - fast (as above) and slow tests as in the 'no -A' option to nosetests - this is the same as ''. * None or '' - run all tests. - * attribute_identifier - string passed directly to nosetests as '-A'. + * attribute_identifier - string passed directly to nosetests + as '-A'. verbose : int, optional Verbosity value for test outputs, in the range 1-10. Default is 1. extra_argv : list, optional @@ -200,8 +201,9 @@ def test(self, label='fast', verbose=1, extra_argv=None, # Reset the warning filters to the default state, # so that running the tests is more repeatable. warnings.resetwarnings() - # Set all warnings to 'warn', this is because the default 'once' - # has the bad property of possibly shadowing later warnings. + # Set all warnings to 'warn', this is because the default + # 'once' has the bad property of possibly shadowing later + # warnings. warnings.filterwarnings('always') # Force the requested warnings to raise for warningtype in raise_warnings:
closes #11928
https://api.github.com/repos/pandas-dev/pandas/pulls/12208
2016-02-02T17:27:08Z
2016-02-03T14:06:28Z
2016-02-03T14:06:28Z
2016-02-03T14:06:28Z
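The ewm test fragment at the top of this diff checks ``Series.ewm(...).mean()`` against a hand-rolled weighted average (``simple_wma``). A minimal sketch of that equivalence — the ``com``/``adjust`` values here are illustrative, not the test's full parameter matrix:

```python
import numpy as np
import pandas as pd

s = pd.Series([1.0, 2.0, 3.0, 4.0])

# com=1.0 gives alpha = 1 / (1 + com) = 0.5; with adjust=True the mean at
# position t is a plain weighted average with weights (1 - alpha) ** i,
# newest observation first
result = s.ewm(com=1.0, adjust=True).mean()

alpha = 0.5
# oldest observation gets the smallest weight
weights = (1 - alpha) ** np.arange(len(s))[::-1]
expected_last = (weights * s.to_numpy()).sum() / weights.sum()
```

With ``adjust=False`` the recursive form ``y_t = (1 - alpha) * y_{t-1} + alpha * x_t`` is used instead, which is what the ``(1 - alpha) * alpha`` weight terms in the test exercise.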
CLN: remove core/matrix.py, not imported into the pandas namespace
diff --git a/doc/source/whatsnew/v0.18.0.txt b/doc/source/whatsnew/v0.18.0.txt index 2be438dd7890e..193d8f83ded79 100644 --- a/doc/source/whatsnew/v0.18.0.txt +++ b/doc/source/whatsnew/v0.18.0.txt @@ -445,7 +445,7 @@ Removal of prior version deprecations/changes - Removal of ``rolling_corr_pairwise`` in favor of ``.rolling().corr(pairwise=True)`` (:issue:`4950`) - Removal of ``expanding_corr_pairwise`` in favor of ``.expanding().corr(pairwise=True)`` (:issue:`4950`) - +- Removal of ``DataMatrix`` module. This was not imported into the pandas namespace in any event (:issue:`12111`) diff --git a/pandas/core/matrix.py b/pandas/core/matrix.py deleted file mode 100644 index 15842464cfda8..0000000000000 --- a/pandas/core/matrix.py +++ /dev/null @@ -1,3 +0,0 @@ -# flake8: noqa - -from pandas.core.frame import DataFrame as DataMatrix
https://api.github.com/repos/pandas-dev/pandas/pulls/12111
2016-01-21T12:38:14Z
2016-01-21T15:37:16Z
2016-01-21T15:37:15Z
2016-01-22T15:40:43Z
ENH: GH12034 RangeIndex.union with RangeIndex returns RangeIndex if possible
diff --git a/doc/source/whatsnew/v0.18.0.txt b/doc/source/whatsnew/v0.18.0.txt index 2706cb200dd54..2be438dd7890e 100644 --- a/doc/source/whatsnew/v0.18.0.txt +++ b/doc/source/whatsnew/v0.18.0.txt @@ -110,7 +110,7 @@ Range Index A ``RangeIndex`` has been added to the ``Int64Index`` sub-classes to support a memory saving alternative for common use cases. This has a similar implementation to the python ``range`` object (``xrange`` in python 2), in that it only stores the start, stop, and step values for the index. It will transparently interact with the user API, converting to ``Int64Index`` if needed. -This will now be the default constructed index for ``NDFrame`` objects, rather than previous an ``Int64Index``. (:issue:`939`, :issue:`12070`, :issue:`12071`) +This will now be the default constructed index for ``NDFrame`` objects, rather than previous an ``Int64Index``. (:issue:`939`, :issue:`12070`, :issue:`12071`, :issue:`12109`) Previous Behavior: diff --git a/pandas/core/index.py b/pandas/core/index.py index 1fbb717bf76d8..558da897b241e 100644 --- a/pandas/core/index.py +++ b/pandas/core/index.py @@ -4307,7 +4307,48 @@ def union(self, other): ------- union : Index """ - # note: could return a RangeIndex in some circumstances + self._assert_can_do_setop(other) + if len(other) == 0 or self.equals(other): + return self + if len(self) == 0: + return other + if isinstance(other, RangeIndex): + start_s, step_s = self._start, self._step + end_s = self._start + self._step * (len(self) - 1) + start_o, step_o = other._start, other._step + end_o = other._start + other._step * (len(other) - 1) + if self._step < 0: + start_s, step_s, end_s = end_s, -step_s, start_s + if other._step < 0: + start_o, step_o, end_o = end_o, -step_o, start_o + if len(self) == 1 and len(other) == 1: + step_s = step_o = abs(self._start - other._start) + elif len(self) == 1: + step_s = step_o + elif len(other) == 1: + step_o = step_s + start_r = min(start_s, start_o) + end_r = max(end_s, end_o) + if 
step_o == step_s: + if ((start_s - start_o) % step_s == 0 and + (start_s - end_o) <= step_s and + (start_o - end_s) <= step_s): + return RangeIndex(start_r, end_r + step_s, step_s) + if ((step_s % 2 == 0) and + (abs(start_s - start_o) <= step_s / 2) and + (abs(end_s - end_o) <= step_s / 2)): + return RangeIndex(start_r, end_r + step_s / 2, step_s / 2) + elif step_o % step_s == 0: + if ((start_o - start_s) % step_s == 0 and + (start_o + step_s >= start_s) and + (end_o - step_s <= end_s)): + return RangeIndex(start_r, end_r + step_s, step_s) + elif step_s % step_o == 0: + if ((start_s - start_o) % step_o == 0 and + (start_s + step_o >= start_o) and + (end_s - step_o <= end_o)): + return RangeIndex(start_r, end_r + step_o, step_o) + return self._int64index.union(other) def join(self, other, how='left', level=None, return_indexers=False): diff --git a/pandas/tests/test_index.py b/pandas/tests/test_index.py index b0210c9fde2e9..68150bfbca3f9 100644 --- a/pandas/tests/test_index.py +++ b/pandas/tests/test_index.py @@ -4130,6 +4130,39 @@ def test_union_noncomparable(self): expected = np.concatenate((other, self.index)) self.assert_numpy_array_equal(result, expected) + def test_union(self): + RI = RangeIndex + I64 = Int64Index + cases = [(RI(0, 10, 1), RI(0, 10, 1), RI(0, 10, 1)), + (RI(0, 10, 1), RI(5, 20, 1), RI(0, 20, 1)), + (RI(0, 10, 1), RI(10, 20, 1), RI(0, 20, 1)), + (RI(0, -10, -1), RI(0, -10, -1), RI(0, -10, -1)), + (RI(0, -10, -1), RI(-10, -20, -1), RI(-19, 1, 1)), + (RI(0, 10, 2), RI(1, 10, 2), RI(0, 10, 1)), + (RI(0, 11, 2), RI(1, 12, 2), RI(0, 12, 1)), + (RI(0, 21, 4), RI(-2, 24, 4), RI(-2, 24, 2)), + (RI(0, -20, -2), RI(-1, -21, -2), RI(-19, 1, 1)), + (RI(0, 100, 5), RI(0, 100, 20), RI(0, 100, 5)), + (RI(0, -100, -5), RI(5, -100, -20), RI(-95, 10, 5)), + (RI(0, -11, -1), RI(1, -12, -4), RI(-11, 2, 1)), + (RI(), RI(), RI()), + (RI(0, -10, -2), RI(), RI(0, -10, -2)), + (RI(0, 100, 2), RI(100, 150, 200), RI(0, 102, 2)), + (RI(0, -100, -2), RI(-100, 50, 102), 
RI(-100, 4, 2)), + (RI(0, -100, -1), RI(0, -50, -3), RI(-99, 1, 1)), + (RI(0, 1, 1), RI(5, 6, 10), RI(0, 6, 5)), + (RI(0, 10, 5), RI(-5, -6, -20), RI(-5, 10, 5)), + (RI(0, 3, 1), RI(4, 5, 1), I64([0, 1, 2, 4])), + (RI(0, 10, 1), I64([]), RI(0, 10, 1)), + (RI(), I64([1, 5, 6]), I64([1, 5, 6]))] + for idx1, idx2, expected in cases: + res1 = idx1.union(idx2) + res2 = idx2.union(idx1) + res3 = idx1._int64index.union(idx2) + tm.assert_index_equal(res1, expected, exact=True) + tm.assert_index_equal(res2, expected, exact=True) + tm.assert_index_equal(res3, expected) + def test_nbytes(self): # memory savings vs int index
xref #12034
https://api.github.com/repos/pandas-dev/pandas/pulls/12109
2016-01-21T12:02:01Z
2016-01-21T12:26:50Z
2016-01-21T12:26:50Z
2016-01-21T12:26:59Z
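The union cases in the test table above can be exercised directly. A quick sketch of the interleaved-range branch (the ``step_s % 2 == 0`` half-step case), using illustrative values from the test matrix:

```python
import pandas as pd

evens = pd.RangeIndex(0, 10, 2)  # 0, 2, 4, 6, 8
odds = pd.RangeIndex(1, 10, 2)   # 1, 3, 5, 7, 9

# same step, offset by half a step: the union collapses to a single
# range with step 1 instead of materializing a plain integer index
combined = evens.union(odds)

# when no arithmetic progression covers both operands, union falls
# back to an ordinary integer Index
fallback = pd.RangeIndex(0, 3).union(pd.RangeIndex(4, 5))
```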
Show Index Headers on DataFrames with Style
diff --git a/doc/source/whatsnew/v0.18.0.txt b/doc/source/whatsnew/v0.18.0.txt index db9cf5ae86d39..3fce0939445d7 100644 --- a/doc/source/whatsnew/v0.18.0.txt +++ b/doc/source/whatsnew/v0.18.0.txt @@ -538,3 +538,4 @@ of columns didn't match the number of series provided (:issue:`12039`). - Bug in ``Series`` constructor with read-only data (:issue:`11502`) - Bug in ``.loc`` setitem indexer preventing the use of a TZ-aware DatetimeIndex (:issue:`12050`) +- Bug in ``.style`` indexes and multi-indexes not appearing (:issue:`11655`) diff --git a/pandas/core/style.py b/pandas/core/style.py index d8cb53e04ea03..203eda4fdf338 100644 --- a/pandas/core/style.py +++ b/pandas/core/style.py @@ -206,6 +206,24 @@ def _translate(self): "class": " ".join(cs)}) head.append(row_es) + if self.data.index.names: + index_header_row = [] + + for c, name in enumerate(self.data.index.names): + cs = [COL_HEADING_CLASS, + "level%s" % (n_clvls + 1), + "col%s" % c] + index_header_row.append({"type": "th", "value": name, + "class": " ".join(cs)}) + + index_header_row.extend( + [{"type": "th", + "value": BLANK_VALUE, + "class": " ".join([BLANK_CLASS]) + }] * len(clabels[0])) + + head.append(index_header_row) + body = [] for r, idx in enumerate(self.data.index): cs = [ROW_HEADING_CLASS, "level%s" % c, "row%s" % r] diff --git a/pandas/tests/test_style.py b/pandas/tests/test_style.py index fd8540fdf9c0a..b9ca3f331711d 100644 --- a/pandas/tests/test_style.py +++ b/pandas/tests/test_style.py @@ -130,6 +130,40 @@ def test_set_properties_subset(self): expected = {(0, 0): ['color: white']} self.assertEqual(result, expected) + def test_index_name(self): + # https://github.com/pydata/pandas/issues/11655 + df = pd.DataFrame({'A': [1, 2], 'B': [3, 4], 'C': [5, 6]}) + result = df.set_index('A').style._translate() + + expected = [[{'class': 'blank', 'type': 'th', 'value': ''}, + {'class': 'col_heading level0 col0', 'type': 'th', + 'value': 'B'}, + {'class': 'col_heading level0 col1', 'type': 'th', + 'value':
'C'}], + [{'class': 'col_heading level2 col0', 'type': 'th', + 'value': 'A'}, + {'class': 'blank', 'type': 'th', 'value': ''}, + {'class': 'blank', 'type': 'th', 'value': ''}]] + + self.assertEqual(result['head'], expected) + + def test_multiindex_name(self): + # https://github.com/pydata/pandas/issues/11655 + df = pd.DataFrame({'A': [1, 2], 'B': [3, 4], 'C': [5, 6]}) + result = df.set_index(['A', 'B']).style._translate() + + expected = [[{'class': 'blank', 'type': 'th', 'value': ''}, + {'class': 'blank', 'type': 'th', 'value': ''}, + {'class': 'col_heading level0 col0', 'type': 'th', + 'value': 'C'}], + [{'class': 'col_heading level2 col0', 'type': 'th', + 'value': 'A'}, + {'class': 'col_heading level2 col1', 'type': 'th', + 'value': 'B'}, + {'class': 'blank', 'type': 'th', 'value': ''}]] + + self.assertEqual(result['head'], expected) + def test_apply_axis(self): df = pd.DataFrame({'A': [0, 0], 'B': [1, 1]}) f = lambda x: ['val: %s' % x.max() for v in x]
Partial solution for https://github.com/pydata/pandas/issues/11655 (**Conditional HTML styling hides MultiIndex structure**) and https://github.com/pydata/pandas/issues/11610 (**Followup to Conditional HTML Styling**). The Style API is inconsistent with `DataFrame.to_html()` when printing index names. Currently the style API doesn't print any index names but `to_html` and the standard notebook repr functions do. This PR adds a row for index headings in the `Styler._translate` method. This PR does not cause multi-index rownames to span multiple rows, but I can work on adding that if people want. ## Reproducing Output: ``` python import pandas as pd import numpy as np np.random.seed(24) df = pd.DataFrame({'A': np.linspace(1, 10, 10)}) df = pd.concat([df, pd.DataFrame(np.random.randn(10, 4), columns=list('BCDE'))], axis=1) ``` ``` python df.set_index(['A', 'B']).style ``` ##### Current Output <img width="506" alt="screen shot 2016-01-19 at 10 19 08 am" src="https://cloud.githubusercontent.com/assets/3064019/12422922/32254fb2-be97-11e5-8096-fcadbd902daa.png"> ##### Patched Output: <img width="441" alt="screen shot 2016-01-19 at 10 19 19 am" src="https://cloud.githubusercontent.com/assets/3064019/12422947/46affb6c-be97-11e5-83c7-acd276bdb88f.png"> ## TODO: - [X] Add Tests
https://api.github.com/repos/pandas-dev/pandas/pulls/12090
2016-01-19T15:57:13Z
2016-01-20T23:55:54Z
2016-01-20T23:55:54Z
2021-03-31T20:21:14Z
CI: use wheels on all numpy dev builds
diff --git a/.travis.yml b/.travis.yml index 959a1f7e11e41..049a5c056928c 100644 --- a/.travis.yml +++ b/.travis.yml @@ -42,7 +42,6 @@ matrix: - NOSE_ARGS="not slow and not disabled" - FULL_DEPS=true - CLIPBOARD_GUI=gtk2 - - BUILD_TYPE=conda - DOC_BUILD=true # if rst files were changed, build docs in parallel with tests - python: 3.4 env: @@ -57,14 +56,12 @@ matrix: - NOSE_ARGS="not slow and not network and not disabled" - FULL_DEPS=true - CLIPBOARD=xsel - - BUILD_TYPE=conda - python: 2.7 env: - JOB_NAME: "27_slow" - JOB_TAG=_SLOW - NOSE_ARGS="slow and not network and not disabled" - FULL_DEPS=true - - BUILD_TYPE=conda - python: 3.4 env: - JOB_NAME: "34_slow" @@ -72,37 +69,24 @@ matrix: - NOSE_ARGS="slow and not network and not disabled" - FULL_DEPS=true - CLIPBOARD=xsel - - BUILD_TYPE=conda - python: 2.7 env: - JOB_NAME: "27_build_test_conda" - JOB_TAG=_BUILD_TEST - NOSE_ARGS="not slow and not disabled" - FULL_DEPS=true - - BUILD_TYPE=conda - - BUILD_TEST=true - - python: 2.7 - env: - - JOB_NAME: "27_build_test_pydata" - - JOB_TAG=_BUILD_TEST - - NOSE_ARGS="not slow and not disabled" - - FULL_DEPS=true - - BUILD_TYPE=pydata - BUILD_TEST=true - python: 2.7 env: - - JOB_NAME: "27_numpy_master" - - JOB_TAG=_NUMPY_DEV_master + - JOB_NAME: "27_numpy_dev" + - JOB_TAG=_NUMPY_DEV - NOSE_ARGS="not slow and not network and not disabled" - - NUMPY_BUILD=master - - BUILD_TYPE=pydata - PANDAS_TESTING_MODE="deprecate" - python: 3.5 env: - JOB_NAME: "35_numpy_dev" - JOB_TAG=_NUMPY_DEV - NOSE_ARGS="not slow and not network and not disabled" - - BUILD_TYPE=conda - PANDAS_TESTING_MODE="deprecate" allow_failures: - python: 2.7 @@ -111,7 +95,6 @@ matrix: - JOB_TAG=_SLOW - NOSE_ARGS="slow and not network and not disabled" - FULL_DEPS=true - - BUILD_TYPE=conda - python: 3.4 env: - JOB_NAME: "34_slow" @@ -119,14 +102,11 @@ matrix: - NOSE_ARGS="slow and not network and not disabled" - FULL_DEPS=true - CLIPBOARD=xsel - - BUILD_TYPE=conda - python: 2.7 env: - - JOB_NAME: "27_numpy_master" 
- - JOB_TAG=_NUMPY_DEV_master + - JOB_NAME: "27_numpy_dev" + - JOB_TAG=_NUMPY_DEV - NOSE_ARGS="not slow and not network and not disabled" - - NUMPY_BUILD=master - - BUILD_TYPE=pydata - PANDAS_TESTING_MODE="deprecate" - python: 2.7 env: @@ -134,22 +114,12 @@ matrix: - JOB_TAG=_BUILD_TEST - NOSE_ARGS="not slow and not disabled" - FULL_DEPS=true - - BUILD_TYPE=conda - - BUILD_TEST=true - - python: 2.7 - env: - - JOB_NAME: "27_build_test_pydata" - - JOB_TAG=_BUILD_TEST - - NOSE_ARGS="not slow and not disabled" - - FULL_DEPS=true - - BUILD_TYPE=pydata - BUILD_TEST=true - python: 3.5 env: - JOB_NAME: "35_numpy_dev" - JOB_TAG=_NUMPY_DEV - NOSE_ARGS="not slow and not network and not disabled" - - BUILD_TYPE=conda - PANDAS_TESTING_MODE="deprecate" before_install: @@ -162,7 +132,7 @@ before_install: - pwd - uname -a - python -V - - ci/before_install.sh + - ci/before_install_travis.sh # Xvfb stuff for clipboard functionality; see the travis-ci documentation - export DISPLAY=:99.0 - sh -e /etc/init.d/xvfb start @@ -170,7 +140,7 @@ before_install: install: - echo "install" - ci/prep_ccache.sh - - ci/install_${BUILD_TYPE}.sh + - ci/install_travis.sh - ci/submit_ccache.sh before_script: diff --git a/ci/before_install.sh b/ci/before_install_travis.sh similarity index 100% rename from ci/before_install.sh rename to ci/before_install_travis.sh diff --git a/ci/install-2.7_NUMPY_DEV.sh b/ci/install-2.7_NUMPY_DEV.sh new file mode 100644 index 0000000000000..00b6255daf70f --- /dev/null +++ b/ci/install-2.7_NUMPY_DEV.sh @@ -0,0 +1,20 @@ +#!/bin/bash + +source activate pandas + +echo "install numpy master wheel" + +# remove the system installed numpy +pip uninstall numpy -y + +# we need these for numpy + +# these wheels don't play nice with the conda libgfortran / openblas +# time conda install -n pandas libgfortran openblas || exit 1 + +time sudo apt-get $APT_ARGS install libatlas-base-dev gfortran + +# install numpy wheel from master +pip install --pre --upgrade --no-index --timeout=60 
--trusted-host travis-dev-wheels.scipy.org -f http://travis-dev-wheels.scipy.org/ numpy + +true diff --git a/ci/install_pydata.sh b/ci/install_pydata.sh deleted file mode 100755 index 667b57897be7e..0000000000000 --- a/ci/install_pydata.sh +++ /dev/null @@ -1,159 +0,0 @@ -#!/bin/bash - -# There are 2 distinct pieces that get zipped and cached -# - The venv site-packages dir including the installed dependencies -# - The pandas build artifacts, using the build cache support via -# scripts/use_build_cache.py -# -# if the user opted in to use the cache and we're on a whitelisted fork -# - if the server doesn't hold a cached version of venv/pandas build, -# do things the slow way, and put the results on the cache server -# for the next time. -# - if the cache files are available, instal some necessaries via apt -# (no compiling needed), then directly goto script and collect 200$. -# - -function edit_init() -{ - if [ -n "$LOCALE_OVERRIDE" ]; then - echo "Adding locale to the first line of pandas/__init__.py" - rm -f pandas/__init__.pyc - sedc="3iimport locale\nlocale.setlocale(locale.LC_ALL, '$LOCALE_OVERRIDE')\n" - sed -i "$sedc" pandas/__init__.py - echo "head -4 pandas/__init__.py" - head -4 pandas/__init__.py - echo - fi -} - -edit_init - -python_major_version="${TRAVIS_PYTHON_VERSION:0:1}" -[ "$python_major_version" == "2" ] && python_major_version="" - -home_dir=$(pwd) -echo "home_dir: [$home_dir]" - -# known working -# pip==1.5.1 -# setuptools==2.2 -# wheel==0.22 -# nose==1.3.3 - -pip install -I -U pip -pip install -I -U setuptools -pip install wheel==0.22 -#pip install nose==1.3.3 -pip install nose==1.3.4 - -# comment this line to disable the fetching of wheel files -base_url=http://pandas.pydata.org/pandas-build/dev/wheels - -wheel_box=${TRAVIS_PYTHON_VERSION}${JOB_TAG} -PIP_ARGS+=" -I --use-wheel --find-links=$base_url/$wheel_box/ --allow-external --allow-insecure" - -if [ -n "$LOCALE_OVERRIDE" ]; then - # make sure the locale is available - # probably useless, 
since you would need to relogin - time sudo locale-gen "$LOCALE_OVERRIDE" -fi - -# we need these for numpy -time sudo apt-get $APT_ARGS install libatlas-base-dev gfortran - -if [ -n "$NUMPY_BUILD" ]; then - # building numpy - - cd $home_dir - echo "cloning numpy" - - rm -Rf /tmp/numpy - cd /tmp - - # remove the system installed numpy - pip uninstall numpy -y - - # install cython - pip install --find-links http://wheels.astropy.org/ --find-links http://wheels2.astropy.org/ --use-wheel Cython - - # clone & install - git clone --branch $NUMPY_BUILD https://github.com/numpy/numpy.git numpy - cd numpy - time pip install . - pip uninstall cython -y - - cd $home_dir - numpy_version=$(python -c 'import numpy; print(numpy.__version__)') - echo "[$home_dir] numpy current: $numpy_version" -fi - -# Force virtualenv to accept system_site_packages -rm -f $VIRTUAL_ENV/lib/python$TRAVIS_PYTHON_VERSION/no-global-site-packages.txt - -# build deps -time pip install $PIP_ARGS -r ci/requirements-${wheel_box}.build - -# Need to enable for locale testing. The location of the locale file(s) is -# distro specific. 
For example, on Arch Linux all of the locales are in a -# commented file--/etc/locale.gen--that must be commented in to be used -# whereas Ubuntu looks in /var/lib/locales/supported.d/* and generates locales -# based on what's in the files in that folder -time echo 'it_CH.UTF-8 UTF-8' | sudo tee -a /var/lib/locales/supported.d/it -time sudo locale-gen - - -# install gui for clipboard testing -if [ -n "$CLIPBOARD_GUI" ]; then - echo "Using CLIPBOARD_GUI: $CLIPBOARD_GUI" - [ -n "$python_major_version" ] && py="py" - python_cb_gui_pkg=python${python_major_version}-${py}${CLIPBOARD_GUI} - time sudo apt-get $APT_ARGS install $python_cb_gui_pkg -fi - - -# install a clipboard if $CLIPBOARD is not empty -if [ -n "$CLIPBOARD" ]; then - echo "Using clipboard: $CLIPBOARD" - time sudo apt-get $APT_ARGS install $CLIPBOARD -fi - - -# Optional Deps -if [ -n "$FULL_DEPS" ]; then - echo "Installing FULL_DEPS" - - # need libhdf5 for PyTables - time sudo apt-get $APT_ARGS install libhdf5-serial-dev -fi - - -# set the compiler cache to work -if [ "$IRON_TOKEN" ]; then - export PATH=/usr/lib/ccache:/usr/lib64/ccache:$PATH - gcc=$(which gcc) - echo "gcc: $gcc" - ccache=$(which ccache) - echo "ccache: $ccache" - export CC='ccache gcc' -fi - -# build pandas -if [ "$BUILD_TEST" ]; then - pip uninstall --yes cython - pip install cython==0.15.1 - ( python setup.py build_ext --inplace ) || true - ( python setup.py develop ) || true -else - python setup.py build_ext --inplace - python setup.py develop -fi - -# install the run libs -time pip install $PIP_ARGS -r ci/requirements-${wheel_box}.run - -# restore cython (if not numpy building) -if [ -z "$NUMPY_BUILD" ]; then - time pip install $PIP_ARGS $(cat ci/requirements-${wheel_box}.txt | grep -i cython) -fi - -true diff --git a/ci/install_conda.sh b/ci/install_travis.sh similarity index 100% rename from ci/install_conda.sh rename to ci/install_travis.sh diff --git a/ci/requirements-2.7_NUMPY_DEV_master.build 
b/ci/requirements-2.7_NUMPY_DEV.build similarity index 58% rename from ci/requirements-2.7_NUMPY_DEV_master.build rename to ci/requirements-2.7_NUMPY_DEV.build index 7d1d11daf9eeb..d15edbfa3d2c1 100644 --- a/ci/requirements-2.7_NUMPY_DEV_master.build +++ b/ci/requirements-2.7_NUMPY_DEV.build @@ -1,3 +1,3 @@ python-dateutil pytz -cython==0.19.1 +cython diff --git a/ci/requirements-2.7_NUMPY_DEV.run b/ci/requirements-2.7_NUMPY_DEV.run new file mode 100644 index 0000000000000..0aa987baefb1d --- /dev/null +++ b/ci/requirements-2.7_NUMPY_DEV.run @@ -0,0 +1,2 @@ +python-dateutil +pytz diff --git a/ci/requirements-2.7_NUMPY_DEV_master.run b/ci/requirements-2.7_NUMPY_DEV_master.run deleted file mode 100644 index e69de29bb2d1d..0000000000000
- remove pydata 2.7 test build
- change 2.7 numpy_dev build to use wheels
- remove need for BUILD_TYPE, all builds are now conda

closes #12057
https://api.github.com/repos/pandas-dev/pandas/pulls/12068
2016-01-17T03:55:02Z
2016-01-17T16:21:48Z
2016-01-17T16:21:48Z
2016-01-17T16:21:48Z
CI: add 3.5 build with numpy_dev installed by wheel from master, #12057
diff --git a/.travis.yml b/.travis.yml index 9fdb98c0124b8..959a1f7e11e41 100644 --- a/.travis.yml +++ b/.travis.yml @@ -97,6 +97,13 @@ matrix: - NUMPY_BUILD=master - BUILD_TYPE=pydata - PANDAS_TESTING_MODE="deprecate" + - python: 3.5 + env: + - JOB_NAME: "35_numpy_dev" + - JOB_TAG=_NUMPY_DEV + - NOSE_ARGS="not slow and not network and not disabled" + - BUILD_TYPE=conda + - PANDAS_TESTING_MODE="deprecate" allow_failures: - python: 2.7 env: @@ -137,6 +144,13 @@ matrix: - FULL_DEPS=true - BUILD_TYPE=pydata - BUILD_TEST=true + - python: 3.5 + env: + - JOB_NAME: "35_numpy_dev" + - JOB_TAG=_NUMPY_DEV + - NOSE_ARGS="not slow and not network and not disabled" + - BUILD_TYPE=conda + - PANDAS_TESTING_MODE="deprecate" before_install: - echo "before_install" diff --git a/ci/install-3.5_NUMPY_DEV.sh b/ci/install-3.5_NUMPY_DEV.sh new file mode 100644 index 0000000000000..00b6255daf70f --- /dev/null +++ b/ci/install-3.5_NUMPY_DEV.sh @@ -0,0 +1,20 @@ +#!/bin/bash + +source activate pandas + +echo "install numpy master wheel" + +# remove the system installed numpy +pip uninstall numpy -y + +# we need these for numpy + +# these wheels don't play nice with the conda libgfortran / openblas +# time conda install -n pandas libgfortran openblas || exit 1 + +time sudo apt-get $APT_ARGS install libatlas-base-dev gfortran + +# install numpy wheel from master +pip install --pre --upgrade --no-index --timeout=60 --trusted-host travis-dev-wheels.scipy.org -f http://travis-dev-wheels.scipy.org/ numpy + +true diff --git a/ci/install_conda.sh b/ci/install_conda.sh index 465a4e3f63142..335286d7d1676 100755 --- a/ci/install_conda.sh +++ b/ci/install_conda.sh @@ -81,6 +81,14 @@ conda info -a || exit 1 # build deps REQ="ci/requirements-${TRAVIS_PYTHON_VERSION}${JOB_TAG}.build" time conda create -n pandas python=$TRAVIS_PYTHON_VERSION nose flake8 || exit 1 + +# may have additional installation instructions for this build +INSTALL="ci/install-${TRAVIS_PYTHON_VERSION}${JOB_TAG}.sh" +if [ -e ${INSTALL} 
]; then + time bash $INSTALL || exit 1 +fi + +# install deps time conda install -n pandas --file=${REQ} || exit 1 source activate pandas diff --git a/ci/requirements-3.5.run b/ci/requirements-3.5.run index 64ed11b744ffd..2401a0fc11673 100644 --- a/ci/requirements-3.5.run +++ b/ci/requirements-3.5.run @@ -13,12 +13,10 @@ html5lib lxml matplotlib jinja2 +bottleneck +sqlalchemy +pymysql +psycopg2 -# currently causing some warnings -#sqlalchemy -#pymysql -#psycopg2 - -# not available from conda -#beautiful-soup -#bottleneck +# incompat with conda ATM +# beautiful-soup diff --git a/ci/requirements-3.5_NUMPY_DEV.build b/ci/requirements-3.5_NUMPY_DEV.build new file mode 100644 index 0000000000000..d15edbfa3d2c1 --- /dev/null +++ b/ci/requirements-3.5_NUMPY_DEV.build @@ -0,0 +1,3 @@ +python-dateutil +pytz +cython diff --git a/ci/requirements-3.5_NUMPY_DEV.run b/ci/requirements-3.5_NUMPY_DEV.run new file mode 100644 index 0000000000000..0aa987baefb1d --- /dev/null +++ b/ci/requirements-3.5_NUMPY_DEV.run @@ -0,0 +1,2 @@ +python-dateutil +pytz
- adds back some 3.5 deps previously missing in conda - closes #12057 (adds a 3.5 build against the numpy master wheel)
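The `PANDAS_TESTING_MODE="deprecate"` entries in the build matrix make the test run fail on deprecation warnings, so a dev build of numpy that deprecates something pandas uses breaks CI loudly instead of silently. A minimal sketch of that idea (the exact mechanism pandas uses may differ):

```python
import warnings

# Escalate DeprecationWarning to an error, so any new deprecation
# introduced by a dev build of a dependency fails the test run.
warnings.simplefilter("error", DeprecationWarning)

caught = False
try:
    warnings.warn("this API is deprecated", DeprecationWarning)
except DeprecationWarning:
    caught = True
```
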
https://api.github.com/repos/pandas-dev/pandas/pulls/12065
2016-01-16T19:37:45Z
2016-01-17T03:20:24Z
2016-01-17T03:20:24Z
2016-01-17T03:20:24Z
DOC: read_csv() ignores quotes when a regex is used in sep
diff --git a/doc/source/io.rst b/doc/source/io.rst index 041daaeb3b12f..e301e353071d9 100644 --- a/doc/source/io.rst +++ b/doc/source/io.rst @@ -87,7 +87,7 @@ They can take a number of arguments: on. With ``sep=None``, ``read_csv`` will try to infer the delimiter automatically in some cases by "sniffing". The separator may be specified as a regular expression; for instance - you may use '\|\\s*' to indicate a pipe plus arbitrary whitespace. + you may use '\|\\s*' to indicate a pipe plus arbitrary whitespace; note that quotes in the data are ignored when a regex separator is used. - ``delim_whitespace``: Parse whitespace-delimited (spaces or tabs) file (much faster than using a regular expression) - ``compression``: decompress ``'gzip'`` and ``'bz2'`` formats on the fly.
Made doc changes regarding #11989: document that ``read_csv`` ignores quotes in the data when a regex separator is used.
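A small illustration of the behavior this doc change records. The exact column/index layout of the regex parse is version-dependent, so the sketch only checks that the quoted field survives the single-character parse and is split apart by the regex one:

```python
import io
import pandas as pd

data = 'a|b\n"x|y"|2\n'

# Single-character separator: default quoting keeps "x|y" together.
plain = pd.read_csv(io.StringIO(data), sep='|')

# Regex separator: lines are split with re.split, so the pipe inside
# the quotes is treated as a delimiter too.
rx = pd.read_csv(io.StringIO(data), sep=r'\|\s*', engine='python')

assert plain.iloc[0, 0] == 'x|y'
assert not any('x|y' in str(v) for v in rx.to_numpy().ravel())
```
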
https://api.github.com/repos/pandas-dev/pandas/pulls/12059
2016-01-16T00:25:30Z
2016-01-16T17:33:32Z
2016-01-16T17:33:32Z
2016-01-16T17:33:36Z
COMPAT: numpy compat with NaT != NaT, #12049
diff --git a/doc/source/whatsnew/v0.18.0.txt b/doc/source/whatsnew/v0.18.0.txt index c1f14ce6703a0..ce324e8a2dab1 100644 --- a/doc/source/whatsnew/v0.18.0.txt +++ b/doc/source/whatsnew/v0.18.0.txt @@ -445,7 +445,7 @@ Bug Fixes - Accept unicode in ``Timedelta`` constructor (:issue:`11995`) - Bug in value label reading for ``StataReader`` when reading incrementally (:issue:`12014`) - Bug in vectorized ``DateOffset`` when ``n`` parameter is ``0`` (:issue:`11370`) - +- Compat for numpy 1.11 w.r.t. ``NaT`` comparison changes (:issue:`12049`) diff --git a/pandas/core/common.py b/pandas/core/common.py index b80b7eecaeb11..0326352ef3444 100644 --- a/pandas/core/common.py +++ b/pandas/core/common.py @@ -379,12 +379,13 @@ def array_equivalent(left, right, strict_nan=False): """ left, right = np.asarray(left), np.asarray(right) + + # shape compat if left.shape != right.shape: return False # Object arrays can contain None, NaN and NaT. - if (issubclass(left.dtype.type, np.object_) or - issubclass(right.dtype.type, np.object_)): + if is_object_dtype(left) or is_object_dtype(right): if not strict_nan: # pd.isnull considers NaN and None to be equivalent. @@ -405,13 +406,21 @@ def array_equivalent(left, right, strict_nan=False): return True # NaNs can occur in float and complex arrays. - if issubclass(left.dtype.type, (np.floating, np.complexfloating)): + if is_float_dtype(left) or is_complex_dtype(left): return ((left == right) | (np.isnan(left) & np.isnan(right))).all() # numpy will will not allow this type of datetimelike vs integer comparison elif is_datetimelike_v_numeric(left, right): return False + # M8/m8 + elif needs_i8_conversion(left) and needs_i8_conversion(right): + if not is_dtype_equal(left.dtype, right.dtype): + return False + + left = left.view('i8') + right = right.view('i8') + # NaNs cannot occur otherwise. 
return np.array_equal(left, right) diff --git a/pandas/tests/test_common.py b/pandas/tests/test_common.py index bc204740567de..a22d8f11c9a75 100644 --- a/pandas/tests/test_common.py +++ b/pandas/tests/test_common.py @@ -8,7 +8,9 @@ import numpy as np import pandas as pd from pandas.tslib import iNaT, NaT -from pandas import Series, DataFrame, date_range, DatetimeIndex, Timestamp, Float64Index +from pandas import (Series, DataFrame, date_range, + DatetimeIndex, TimedeltaIndex, + Timestamp, Float64Index) from pandas import compat from pandas.compat import range, long, lrange, lmap, u from pandas.core.common import notnull, isnull, array_equivalent @@ -322,20 +324,40 @@ def test_array_equivalent(): np.array([np.nan, 1, np.nan])) assert array_equivalent(np.array([np.nan, None], dtype='object'), np.array([np.nan, None], dtype='object')) - assert array_equivalent(np.array([np.nan, 1+1j], dtype='complex'), - np.array([np.nan, 1+1j], dtype='complex')) - assert not array_equivalent(np.array([np.nan, 1+1j], dtype='complex'), - np.array([np.nan, 1+2j], dtype='complex')) + assert array_equivalent(np.array([np.nan, 1 + 1j], dtype='complex'), + np.array([np.nan, 1 + 1j], dtype='complex')) + assert not array_equivalent(np.array([np.nan, 1 + 1j], dtype='complex'), + np.array([np.nan, 1 + 2j], dtype='complex')) assert not array_equivalent(np.array([np.nan, 1, np.nan]), np.array([np.nan, 2, np.nan])) - assert not array_equivalent(np.array(['a', 'b', 'c', 'd']), np.array(['e', 'e'])) - assert array_equivalent(Float64Index([0, np.nan]), Float64Index([0, np.nan])) - assert not array_equivalent(Float64Index([0, np.nan]), Float64Index([1, np.nan])) - assert array_equivalent(DatetimeIndex([0, np.nan]), DatetimeIndex([0, np.nan])) - assert not array_equivalent(DatetimeIndex([0, np.nan]), DatetimeIndex([1, np.nan])) + assert not array_equivalent(np.array(['a', 'b', 'c', 'd']), + np.array(['e', 'e'])) + assert array_equivalent(Float64Index([0, np.nan]), + Float64Index([0, np.nan])) + assert 
not array_equivalent(Float64Index([0, np.nan]), + Float64Index([1, np.nan])) + assert array_equivalent(DatetimeIndex([0, np.nan]), + DatetimeIndex([0, np.nan])) + assert not array_equivalent(DatetimeIndex([0, np.nan]), + DatetimeIndex([1, np.nan])) + assert array_equivalent(TimedeltaIndex([0, np.nan]), + TimedeltaIndex([0, np.nan])) + assert not array_equivalent(TimedeltaIndex([0, np.nan]), + TimedeltaIndex([1, np.nan])) + assert array_equivalent(DatetimeIndex([0, np.nan], tz='US/Eastern'), + DatetimeIndex([0, np.nan], tz='US/Eastern')) + assert not array_equivalent(DatetimeIndex([0, np.nan], tz='US/Eastern'), + DatetimeIndex([1, np.nan], tz='US/Eastern')) + assert not array_equivalent(DatetimeIndex([0, np.nan]), + DatetimeIndex([0, np.nan], tz='US/Eastern')) + assert not array_equivalent(DatetimeIndex([0, np.nan], tz='CET'), + DatetimeIndex([0, np.nan], tz='US/Eastern')) + assert not array_equivalent(DatetimeIndex([0, np.nan]), + TimedeltaIndex([0, np.nan])) + def test_datetimeindex_from_empty_datetime64_array(): - for unit in [ 'ms', 'us', 'ns' ]: + for unit in ['ms', 'us', 'ns']: idx = DatetimeIndex(np.array([], dtype='datetime64[%s]' % unit)) assert(len(idx) == 0) diff --git a/pandas/tseries/tests/test_tslib.py b/pandas/tseries/tests/test_tslib.py index ba43537ee609b..55a6bf6f13b63 100644 --- a/pandas/tseries/tests/test_tslib.py +++ b/pandas/tseries/tests/test_tslib.py @@ -399,8 +399,10 @@ def test_asm8(self): 1000, ] for n in ns: - self.assertEqual(Timestamp(n).asm8, np.datetime64(n, 'ns'), n) - self.assertEqual(Timestamp('nat').asm8, np.datetime64('nat', 'ns')) + self.assertEqual(Timestamp(n).asm8.view('i8'), + np.datetime64(n, 'ns').view('i8'), n) + self.assertEqual(Timestamp('nat').asm8.view('i8'), + np.datetime64('nat', 'ns').view('i8')) def test_fields(self): @@ -752,13 +754,11 @@ def test_coercing_dates_outside_of_datetime64_ns_bounds(self): np.array([invalid_date], dtype='object'), errors='raise', ) - self.assertTrue( - np.array_equal( - 
tslib.array_to_datetime( - np.array([invalid_date], dtype='object'), errors='coerce', - ), - np.array([tslib.iNaT], dtype='M8[ns]') - ) + self.assert_numpy_array_equal( + tslib.array_to_datetime( + np.array([invalid_date], dtype='object'), + errors='coerce'), + np.array([tslib.iNaT], dtype='M8[ns]') ) arr = np.array(['1/1/1000', '1/1/2000'], dtype=object)
closes #12049
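The core of the compat fix: under numpy >= 1.11 ``NaT != NaT``, so elementwise comparison of datetime64 arrays containing ``NaT`` is version-dependent, while the ``i8`` view maps ``NaT`` to the ``iNaT`` sentinel, an ordinary integer that compares equal to itself. A minimal sketch:

```python
import numpy as np

a = np.array(['2016-01-15', 'NaT'], dtype='datetime64[ns]')
b = a.copy()

# np.array_equal(a, b) flips with the numpy version once NaT != NaT;
# comparing the i8 views is stable across versions.
assert np.array_equal(a.view('i8'), b.view('i8'))
```
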
https://api.github.com/repos/pandas-dev/pandas/pulls/12058
2016-01-15T20:59:07Z
2016-01-16T16:30:52Z
2016-01-16T16:30:52Z
2016-01-16T16:30:52Z
DOC: Updated indexer_between_time documentation
diff --git a/pandas/tseries/index.py b/pandas/tseries/index.py index 96efa317ce612..0dae564352967 100644 --- a/pandas/tseries/index.py +++ b/pandas/tseries/index.py @@ -1804,7 +1804,6 @@ def indexer_between_time(self, start_time, end_time, include_start=True, "%I%M%S%p") include_start : boolean, default True include_end : boolean, default True - tz : string or pytz.timezone or dateutil.tz.tzfile, default None Returns -------
A minor documentation update to the [DatetimeIndex.indexer_between_time](http://pandas.pydata.org/pandas-docs/version/0.17.1/generated/pandas.DatetimeIndex.indexer_between_time.html) function. Closes #12038
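With the stray ``tz`` entry removed, the documented kwargs match what the method actually takes. A small usage sketch:

```python
import pandas as pd

idx = pd.date_range('2016-01-15', periods=24, freq='h')

# Positions of timestamps between 09:00 and 12:00; both endpoints are
# included by default (include_start=True, include_end=True).
locs = idx.indexer_between_time('09:00', '12:00')
assert list(locs) == [9, 10, 11, 12]
```
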
https://api.github.com/repos/pandas-dev/pandas/pulls/12043
2016-01-15T08:44:51Z
2016-01-15T09:47:45Z
2016-01-15T09:47:45Z
2016-01-15T12:19:09Z
BUG: .plot modifing `colors` input
diff --git a/doc/source/whatsnew/v0.18.0.txt b/doc/source/whatsnew/v0.18.0.txt index 4ce2ce5b69cb4..9effa186017fd 100644 --- a/doc/source/whatsnew/v0.18.0.txt +++ b/doc/source/whatsnew/v0.18.0.txt @@ -489,7 +489,8 @@ Bug Fixes - Bug in ``DataFrame`` when masking an empty ``DataFrame`` (:issue:`11859`) - +- Bug in ``.plot`` potentially modifying the ``colors`` input when the number +of columns didn't match the number of series provided (:issue:`12039`). diff --git a/pandas/tests/test_graphics.py b/pandas/tests/test_graphics.py index ff69068a3495c..0fc5916676dd3 100644 --- a/pandas/tests/test_graphics.py +++ b/pandas/tests/test_graphics.py @@ -2717,6 +2717,12 @@ def test_line_colors(self): # Forced show plot _check_plot_works(df.plot, color=custom_colors) + @slow + def test_dont_modify_colors(self): + colors = ['r', 'g', 'b'] + pd.DataFrame(np.random.rand(10, 2)).plot(color=colors) + self.assertEqual(len(colors), 3) + @slow def test_line_colors_and_styles_subplots(self): # GH 9894 diff --git a/pandas/tools/plotting.py b/pandas/tools/plotting.py index 3e9c788914a5a..43bcd2373df69 100644 --- a/pandas/tools/plotting.py +++ b/pandas/tools/plotting.py @@ -158,7 +158,7 @@ def _get_standard_colors(num_colors=None, colormap=None, color_type='default', if colormap is not None: warnings.warn("'color' and 'colormap' cannot be used " "simultaneously. Using 'color'") - colors = color + colors = list(color) if com.is_list_like(color) else color else: if color_type == 'default': # need to call list() on the result to copy so we don't
Closes https://github.com/pydata/pandas/issues/12039 Just added a defensive copy in `_get_standard_colors` on list-likes.
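The aliasing bug is easy to reproduce outside of plotting code. A hypothetical `assign_colors` stand-in for `_get_standard_colors` (names are illustrative, not the pandas internals) shows why the defensive copy matters:

```python
def assign_colors(color, num_series):
    # Defensive copy: without list(color), the while-loop below would
    # append padding entries onto the caller's own list (GH 12039).
    colors = list(color)
    while len(colors) < num_series:
        colors.append(colors[len(colors) % len(color)])
    return colors

user_colors = ['r', 'g', 'b']
result = assign_colors(user_colors, 5)
assert result == ['r', 'g', 'b', 'r', 'g']
assert user_colors == ['r', 'g', 'b']  # caller's list is untouched
```
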
https://api.github.com/repos/pandas-dev/pandas/pulls/12040
2016-01-15T02:57:40Z
2016-01-16T12:49:06Z
2016-01-16T12:49:06Z
2016-01-16T12:49:09Z
CI: lint for rest of pandas
diff --git a/.travis.yml b/.travis.yml index 087d7f1565707..9fdb98c0124b8 100644 --- a/.travis.yml +++ b/.travis.yml @@ -175,3 +175,4 @@ after_script: - source activate pandas && ci/print_versions.py - ci/print_skipped.py /tmp/nosetests.xml - ci/lint.sh + - ci/lint_ok_for_now.sh diff --git a/ci/lint.sh b/ci/lint.sh index 1795451f7ace4..97d318b48469e 100755 --- a/ci/lint.sh +++ b/ci/lint.sh @@ -4,8 +4,11 @@ echo "inside $0" source activate pandas -echo flake8 pandas/core --statistics -flake8 pandas/core --statistics +for path in 'core' +do + echo "linting -> pandas/$path" + flake8 pandas/$path --filename '*.py' --statistics -q +done RET="$?" diff --git a/ci/lint_ok_for_now.sh b/ci/lint_ok_for_now.sh new file mode 100755 index 0000000000000..eba667fadde06 --- /dev/null +++ b/ci/lint_ok_for_now.sh @@ -0,0 +1,20 @@ +#!/bin/bash + +echo "inside $0" + +source activate pandas + +for path in 'io' 'stats' 'computation' 'tseries' 'util' 'compat' 'tools' 'sparse' 'tests' +do + echo "linting [ok_for_now] -> pandas/$path" + flake8 pandas/$path --filename '*.py' --statistics -q +done + +RET="$?" + +# we are disabling the return code for now +# to have Travis-CI pass. When the code +# passes linting, re-enable +#exit "$RET" + +exit 0
see output at the end: https://travis-ci.org/jreback/pandas/jobs/102213682 this only shows the summary stats. Once these are much lower we can remove the `-q` and show the actual errors. As more files are fixed, their checks can be moved to `lint.sh`
https://api.github.com/repos/pandas-dev/pandas/pulls/12035
2016-01-13T23:35:30Z
2016-01-14T01:20:02Z
2016-01-14T01:20:02Z
2016-01-14T01:20:02Z
Break apart test_frame.py and fix all flake8 warnings
diff --git a/pandas/io/tests/test_packers.py b/pandas/io/tests/test_packers.py index 3434afc4129c4..d6a9feb1bd8f4 100644 --- a/pandas/io/tests/test_packers.py +++ b/pandas/io/tests/test_packers.py @@ -12,9 +12,9 @@ date_range, period_range, Index, SparseSeries, SparseDataFrame, SparsePanel) import pandas.util.testing as tm -from pandas.util.testing import ensure_clean, assert_index_equal -from pandas.tests.test_series import assert_series_equal -from pandas.tests.test_frame import assert_frame_equal +from pandas.util.testing import (ensure_clean, assert_index_equal, + assert_series_equal, + assert_frame_equal) from pandas.tests.test_panel import assert_panel_equal import pandas diff --git a/pandas/sparse/tests/test_sparse.py b/pandas/sparse/tests/test_sparse.py index 2f148c89ccbe9..64ffd7482ee34 100644 --- a/pandas/sparse/tests/test_sparse.py +++ b/pandas/sparse/tests/test_sparse.py @@ -33,7 +33,9 @@ from pandas.sparse.api import (SparseSeries, SparseDataFrame, SparsePanel, SparseArray) -import pandas.tests.test_frame as test_frame +from pandas.tests.frame.test_misc_api import ( + SafeForSparse as SparseFrameTests) + import pandas.tests.test_panel as test_panel import pandas.tests.test_series as test_series @@ -922,7 +924,7 @@ class TestSparseTimeSeries(tm.TestCase): pass -class TestSparseDataFrame(tm.TestCase, test_frame.SafeForSparse): +class TestSparseDataFrame(tm.TestCase, SparseFrameTests): klass = SparseDataFrame _multiprocess_can_split_ = True diff --git a/pandas/tests/frame/__init__.py b/pandas/tests/frame/__init__.py new file mode 100644 index 0000000000000..e69de29bb2d1d diff --git a/pandas/tests/frame/common.py b/pandas/tests/frame/common.py new file mode 100644 index 0000000000000..37f67712e1b58 --- /dev/null +++ b/pandas/tests/frame/common.py @@ -0,0 +1,141 @@ +import numpy as np + +from pandas import compat +from pandas.util.decorators import cache_readonly +import pandas.util.testing as tm +import pandas as pd + +_seriesd = tm.getSeriesData() +_tsd = 
tm.getTimeSeriesData() + +_frame = pd.DataFrame(_seriesd) +_frame2 = pd.DataFrame(_seriesd, columns=['D', 'C', 'B', 'A']) +_intframe = pd.DataFrame(dict((k, v.astype(int)) + for k, v in compat.iteritems(_seriesd))) + +_tsframe = pd.DataFrame(_tsd) + +_mixed_frame = _frame.copy() +_mixed_frame['foo'] = 'bar' + + +class TestData(object): + + @cache_readonly + def frame(self): + return _frame.copy() + + @cache_readonly + def frame2(self): + return _frame2.copy() + + @cache_readonly + def intframe(self): + # force these all to int64 to avoid platform testing issues + return pd.DataFrame(dict([(c, s) for c, s in + compat.iteritems(_intframe)]), + dtype=np.int64) + + @cache_readonly + def tsframe(self): + return _tsframe.copy() + + @cache_readonly + def mixed_frame(self): + return _mixed_frame.copy() + + @cache_readonly + def mixed_float(self): + return pd.DataFrame({'A': _frame['A'].copy().astype('float32'), + 'B': _frame['B'].copy().astype('float32'), + 'C': _frame['C'].copy().astype('float16'), + 'D': _frame['D'].copy().astype('float64')}) + + @cache_readonly + def mixed_float2(self): + return pd.DataFrame({'A': _frame2['A'].copy().astype('float32'), + 'B': _frame2['B'].copy().astype('float32'), + 'C': _frame2['C'].copy().astype('float16'), + 'D': _frame2['D'].copy().astype('float64')}) + + @cache_readonly + def mixed_int(self): + return pd.DataFrame({'A': _intframe['A'].copy().astype('int32'), + 'B': np.ones(len(_intframe['B']), dtype='uint64'), + 'C': _intframe['C'].copy().astype('uint8'), + 'D': _intframe['D'].copy().astype('int64')}) + + @cache_readonly + def all_mixed(self): + return pd.DataFrame({'a': 1., 'b': 2, 'c': 'foo', + 'float32': np.array([1.] 
* 10, dtype='float32'), + 'int32': np.array([1] * 10, dtype='int32')}, + index=np.arange(10)) + + @cache_readonly + def tzframe(self): + result = pd.DataFrame({'A': pd.date_range('20130101', periods=3), + 'B': pd.date_range('20130101', periods=3, + tz='US/Eastern'), + 'C': pd.date_range('20130101', periods=3, + tz='CET')}) + result.iloc[1, 1] = pd.NaT + result.iloc[1, 2] = pd.NaT + return result + + @cache_readonly + def empty(self): + return pd.DataFrame({}) + + @cache_readonly + def ts1(self): + return tm.makeTimeSeries() + + @cache_readonly + def ts2(self): + return tm.makeTimeSeries()[5:] + + @cache_readonly + def simple(self): + arr = np.array([[1., 2., 3.], + [4., 5., 6.], + [7., 8., 9.]]) + + return pd.DataFrame(arr, columns=['one', 'two', 'three'], + index=['a', 'b', 'c']) + +# self.ts3 = tm.makeTimeSeries()[-5:] +# self.ts4 = tm.makeTimeSeries()[1:-1] + + +def _check_mixed_float(df, dtype=None): + # float16 are most likely to be upcasted to float32 + dtypes = dict(A='float32', B='float32', C='float16', D='float64') + if isinstance(dtype, compat.string_types): + dtypes = dict([(k, dtype) for k, v in dtypes.items()]) + elif isinstance(dtype, dict): + dtypes.update(dtype) + if dtypes.get('A'): + assert(df.dtypes['A'] == dtypes['A']) + if dtypes.get('B'): + assert(df.dtypes['B'] == dtypes['B']) + if dtypes.get('C'): + assert(df.dtypes['C'] == dtypes['C']) + if dtypes.get('D'): + assert(df.dtypes['D'] == dtypes['D']) + + +def _check_mixed_int(df, dtype=None): + dtypes = dict(A='int32', B='uint64', C='uint8', D='int64') + if isinstance(dtype, compat.string_types): + dtypes = dict([(k, dtype) for k, v in dtypes.items()]) + elif isinstance(dtype, dict): + dtypes.update(dtype) + if dtypes.get('A'): + assert(df.dtypes['A'] == dtypes['A']) + if dtypes.get('B'): + assert(df.dtypes['B'] == dtypes['B']) + if dtypes.get('C'): + assert(df.dtypes['C'] == dtypes['C']) + if dtypes.get('D'): + assert(df.dtypes['D'] == dtypes['D']) diff --git 
a/pandas/tests/frame/test_alter_axes.py b/pandas/tests/frame/test_alter_axes.py new file mode 100644 index 0000000000000..bea8eab932be3 --- /dev/null +++ b/pandas/tests/frame/test_alter_axes.py @@ -0,0 +1,620 @@ +# -*- coding: utf-8 -*- + +from __future__ import print_function + +from datetime import datetime, timedelta + +import numpy as np + +from pandas.compat import lrange +from pandas import DataFrame, Series, Index, MultiIndex +import pandas as pd + +from pandas.util.testing import (assert_almost_equal, + assert_series_equal, + assert_frame_equal, + assertRaisesRegexp) + +import pandas.util.testing as tm + +from pandas.tests.frame.common import TestData + + +class TestDataFrameAlterAxes(tm.TestCase, TestData): + + _multiprocess_can_split_ = True + + def test_set_index(self): + idx = Index(np.arange(len(self.mixed_frame))) + + # cache it + _ = self.mixed_frame['foo'] # noqa + self.mixed_frame.index = idx + self.assertIs(self.mixed_frame['foo'].index, idx) + with assertRaisesRegexp(ValueError, 'Length mismatch'): + self.mixed_frame.index = idx[::2] + + def test_set_index_cast(self): + + # issue casting an index then set_index + df = DataFrame({'A': [1.1, 2.2, 3.3], 'B': [5.0, 6.1, 7.2]}, + index=[2010, 2011, 2012]) + expected = df.ix[2010] + new_index = df.index.astype(np.int32) + df.index = new_index + result = df.ix[2010] + assert_series_equal(result, expected) + + def test_set_index2(self): + df = DataFrame({'A': ['foo', 'foo', 'foo', 'bar', 'bar'], + 'B': ['one', 'two', 'three', 'one', 'two'], + 'C': ['a', 'b', 'c', 'd', 'e'], + 'D': np.random.randn(5), + 'E': np.random.randn(5)}) + + # new object, single-column + result = df.set_index('C') + result_nodrop = df.set_index('C', drop=False) + + index = Index(df['C'], name='C') + + expected = df.ix[:, ['A', 'B', 'D', 'E']] + expected.index = index + + expected_nodrop = df.copy() + expected_nodrop.index = index + + assert_frame_equal(result, expected) + assert_frame_equal(result_nodrop, expected_nodrop) + 
self.assertEqual(result.index.name, index.name) + + # inplace, single + df2 = df.copy() + + df2.set_index('C', inplace=True) + + assert_frame_equal(df2, expected) + + df3 = df.copy() + df3.set_index('C', drop=False, inplace=True) + + assert_frame_equal(df3, expected_nodrop) + + # create new object, multi-column + result = df.set_index(['A', 'B']) + result_nodrop = df.set_index(['A', 'B'], drop=False) + + index = MultiIndex.from_arrays([df['A'], df['B']], names=['A', 'B']) + + expected = df.ix[:, ['C', 'D', 'E']] + expected.index = index + + expected_nodrop = df.copy() + expected_nodrop.index = index + + assert_frame_equal(result, expected) + assert_frame_equal(result_nodrop, expected_nodrop) + self.assertEqual(result.index.names, index.names) + + # inplace + df2 = df.copy() + df2.set_index(['A', 'B'], inplace=True) + assert_frame_equal(df2, expected) + + df3 = df.copy() + df3.set_index(['A', 'B'], drop=False, inplace=True) + assert_frame_equal(df3, expected_nodrop) + + # corner case + with assertRaisesRegexp(ValueError, 'Index has duplicate keys'): + df.set_index('A', verify_integrity=True) + + # append + result = df.set_index(['A', 'B'], append=True) + xp = df.reset_index().set_index(['index', 'A', 'B']) + xp.index.names = [None, 'A', 'B'] + assert_frame_equal(result, xp) + + # append to existing multiindex + rdf = df.set_index(['A'], append=True) + rdf = rdf.set_index(['B', 'C'], append=True) + expected = df.set_index(['A', 'B', 'C'], append=True) + assert_frame_equal(rdf, expected) + + # Series + result = df.set_index(df.C) + self.assertEqual(result.index.name, 'C') + + def test_set_index_nonuniq(self): + df = DataFrame({'A': ['foo', 'foo', 'foo', 'bar', 'bar'], + 'B': ['one', 'two', 'three', 'one', 'two'], + 'C': ['a', 'b', 'c', 'd', 'e'], + 'D': np.random.randn(5), + 'E': np.random.randn(5)}) + with assertRaisesRegexp(ValueError, 'Index has duplicate keys'): + df.set_index('A', verify_integrity=True, inplace=True) + self.assertIn('A', df) + + def 
test_set_index_bug(self): + # GH1590 + df = DataFrame({'val': [0, 1, 2], 'key': ['a', 'b', 'c']}) + df2 = df.select(lambda indx: indx >= 1) + rs = df2.set_index('key') + xp = DataFrame({'val': [1, 2]}, + Index(['b', 'c'], name='key')) + assert_frame_equal(rs, xp) + + def test_set_index_pass_arrays(self): + df = DataFrame({'A': ['foo', 'bar', 'foo', 'bar', + 'foo', 'bar', 'foo', 'foo'], + 'B': ['one', 'one', 'two', 'three', + 'two', 'two', 'one', 'three'], + 'C': np.random.randn(8), + 'D': np.random.randn(8)}) + + # multiple columns + result = df.set_index(['A', df['B'].values], drop=False) + expected = df.set_index(['A', 'B'], drop=False) + + # TODO should set_index check_names ? + assert_frame_equal(result, expected, check_names=False) + + def test_construction_with_categorical_index(self): + + ci = tm.makeCategoricalIndex(10) + + # with Categorical + df = DataFrame({'A': np.random.randn(10), + 'B': ci.values}) + idf = df.set_index('B') + str(idf) + tm.assert_index_equal(idf.index, ci, check_names=False) + self.assertEqual(idf.index.name, 'B') + + # from a CategoricalIndex + df = DataFrame({'A': np.random.randn(10), + 'B': ci}) + idf = df.set_index('B') + str(idf) + tm.assert_index_equal(idf.index, ci, check_names=False) + self.assertEqual(idf.index.name, 'B') + + idf = df.set_index('B').reset_index().set_index('B') + str(idf) + tm.assert_index_equal(idf.index, ci, check_names=False) + self.assertEqual(idf.index.name, 'B') + + new_df = idf.reset_index() + new_df.index = df.B + tm.assert_index_equal(new_df.index, ci, check_names=False) + self.assertEqual(idf.index.name, 'B') + + def test_set_index_cast_datetimeindex(self): + df = DataFrame({'A': [datetime(2000, 1, 1) + timedelta(i) + for i in range(1000)], + 'B': np.random.randn(1000)}) + + idf = df.set_index('A') + tm.assertIsInstance(idf.index, pd.DatetimeIndex) + + # don't cast a DatetimeIndex WITH a tz, leave as object + # GH 6032 + i = (pd.DatetimeIndex( + pd.tseries.tools.to_datetime(['2013-1-1 13:00', + 
'2013-1-2 14:00'], errors="raise")) + .tz_localize('US/Pacific')) + df = DataFrame(np.random.randn(2, 1), columns=['A']) + + expected = Series(np.array([pd.Timestamp('2013-01-01 13:00:00-0800', + tz='US/Pacific'), + pd.Timestamp('2013-01-02 14:00:00-0800', + tz='US/Pacific')], + dtype="object")) + + # convert index to series + result = Series(i) + assert_series_equal(result, expected) + + # assignt to frame + df['B'] = i + result = df['B'] + assert_series_equal(result, expected, check_names=False) + self.assertEqual(result.name, 'B') + + # keep the timezone + result = i.to_series(keep_tz=True) + assert_series_equal(result.reset_index(drop=True), expected) + + # convert to utc + df['C'] = i.to_series().reset_index(drop=True) + result = df['C'] + comp = pd.DatetimeIndex(expected.values).copy() + comp.tz = None + self.assert_numpy_array_equal(result.values, comp.values) + + # list of datetimes with a tz + df['D'] = i.to_pydatetime() + result = df['D'] + assert_series_equal(result, expected, check_names=False) + self.assertEqual(result.name, 'D') + + # GH 6785 + # set the index manually + import pytz + df = DataFrame( + [{'ts': datetime(2014, 4, 1, tzinfo=pytz.utc), 'foo': 1}]) + expected = df.set_index('ts') + df.index = df['ts'] + df.pop('ts') + assert_frame_equal(df, expected) + + # GH 3950 + # reset_index with single level + for tz in ['UTC', 'Asia/Tokyo', 'US/Eastern']: + idx = pd.date_range('1/1/2011', periods=5, + freq='D', tz=tz, name='idx') + df = pd.DataFrame( + {'a': range(5), 'b': ['A', 'B', 'C', 'D', 'E']}, index=idx) + + expected = pd.DataFrame({'idx': [datetime(2011, 1, 1), + datetime(2011, 1, 2), + datetime(2011, 1, 3), + datetime(2011, 1, 4), + datetime(2011, 1, 5)], + 'a': range(5), + 'b': ['A', 'B', 'C', 'D', 'E']}, + columns=['idx', 'a', 'b']) + expected['idx'] = expected['idx'].apply( + lambda d: pd.Timestamp(d, tz=tz)) + assert_frame_equal(df.reset_index(), expected) + + def test_set_index_multiindexcolumns(self): + columns = 
MultiIndex.from_tuples([('foo', 1), ('foo', 2), ('bar', 1)]) + df = DataFrame(np.random.randn(3, 3), columns=columns) + rs = df.set_index(df.columns[0]) + xp = df.ix[:, 1:] + xp.index = df.ix[:, 0].values + xp.index.names = [df.columns[0]] + assert_frame_equal(rs, xp) + + def test_set_index_empty_column(self): + # #1971 + df = DataFrame([ + dict(a=1, p=0), + dict(a=2, m=10), + dict(a=3, m=11, p=20), + dict(a=4, m=12, p=21) + ], columns=('a', 'm', 'p', 'x')) + + # it works! + result = df.set_index(['a', 'x']) + repr(result) + + def test_set_columns(self): + cols = Index(np.arange(len(self.mixed_frame.columns))) + self.mixed_frame.columns = cols + with assertRaisesRegexp(ValueError, 'Length mismatch'): + self.mixed_frame.columns = cols[::2] + + # Renaming + + def test_rename(self): + mapping = { + 'A': 'a', + 'B': 'b', + 'C': 'c', + 'D': 'd' + } + + renamed = self.frame.rename(columns=mapping) + renamed2 = self.frame.rename(columns=str.lower) + + assert_frame_equal(renamed, renamed2) + assert_frame_equal(renamed2.rename(columns=str.upper), + self.frame, check_names=False) + + # index + data = { + 'A': {'foo': 0, 'bar': 1} + } + + # gets sorted alphabetical + df = DataFrame(data) + renamed = df.rename(index={'foo': 'bar', 'bar': 'foo'}) + self.assert_numpy_array_equal(renamed.index, ['foo', 'bar']) + + renamed = df.rename(index=str.upper) + self.assert_numpy_array_equal(renamed.index, ['BAR', 'FOO']) + + # have to pass something + self.assertRaises(TypeError, self.frame.rename) + + # partial columns + renamed = self.frame.rename(columns={'C': 'foo', 'D': 'bar'}) + self.assert_numpy_array_equal( + renamed.columns, ['A', 'B', 'foo', 'bar']) + + # other axis + renamed = self.frame.T.rename(index={'C': 'foo', 'D': 'bar'}) + self.assert_numpy_array_equal(renamed.index, ['A', 'B', 'foo', 'bar']) + + # index with name + index = Index(['foo', 'bar'], name='name') + renamer = DataFrame(data, index=index) + renamed = renamer.rename(index={'foo': 'bar', 'bar': 'foo'}) + 
self.assert_numpy_array_equal(renamed.index, ['bar', 'foo']) + self.assertEqual(renamed.index.name, renamer.index.name) + + # MultiIndex + tuples_index = [('foo1', 'bar1'), ('foo2', 'bar2')] + tuples_columns = [('fizz1', 'buzz1'), ('fizz2', 'buzz2')] + index = MultiIndex.from_tuples(tuples_index, names=['foo', 'bar']) + columns = MultiIndex.from_tuples( + tuples_columns, names=['fizz', 'buzz']) + renamer = DataFrame([(0, 0), (1, 1)], index=index, columns=columns) + renamed = renamer.rename(index={'foo1': 'foo3', 'bar2': 'bar3'}, + columns={'fizz1': 'fizz3', 'buzz2': 'buzz3'}) + new_index = MultiIndex.from_tuples( + [('foo3', 'bar1'), ('foo2', 'bar3')]) + new_columns = MultiIndex.from_tuples( + [('fizz3', 'buzz1'), ('fizz2', 'buzz3')]) + self.assert_numpy_array_equal(renamed.index, new_index) + self.assert_numpy_array_equal(renamed.columns, new_columns) + self.assertEqual(renamed.index.names, renamer.index.names) + self.assertEqual(renamed.columns.names, renamer.columns.names) + + def test_rename_nocopy(self): + renamed = self.frame.rename(columns={'C': 'foo'}, copy=False) + renamed['foo'] = 1. 
+        self.assertTrue((self.frame['C'] == 1.).all())
+
+    def test_rename_inplace(self):
+        self.frame.rename(columns={'C': 'foo'})
+        self.assertIn('C', self.frame)
+        self.assertNotIn('foo', self.frame)
+
+        c_id = id(self.frame['C'])
+        frame = self.frame.copy()
+        frame.rename(columns={'C': 'foo'}, inplace=True)
+
+        self.assertNotIn('C', frame)
+        self.assertIn('foo', frame)
+        self.assertNotEqual(id(frame['foo']), c_id)
+
+    def test_rename_bug(self):
+        # GH 5344
+        # rename set ref_locs, and set_index was not resetting
+        df = DataFrame({0: ['foo', 'bar'], 1: ['bah', 'bas'], 2: [1, 2]})
+        df = df.rename(columns={0: 'a'})
+        df = df.rename(columns={1: 'b'})
+        df = df.set_index(['a', 'b'])
+        df.columns = ['2001-01-01']
+        expected = DataFrame([[1], [2]],
+                             index=MultiIndex.from_tuples(
+                                 [('foo', 'bah'), ('bar', 'bas')],
+                                 names=['a', 'b']),
+                             columns=['2001-01-01'])
+        assert_frame_equal(df, expected)
+
+    def test_reorder_levels(self):
+        index = MultiIndex(levels=[['bar'], ['one', 'two', 'three'], [0, 1]],
+                           labels=[[0, 0, 0, 0, 0, 0],
+                                   [0, 1, 2, 0, 1, 2],
+                                   [0, 1, 0, 1, 0, 1]],
+                           names=['L0', 'L1', 'L2'])
+        df = DataFrame({'A': np.arange(6), 'B': np.arange(6)}, index=index)
+
+        # no change, position
+        result = df.reorder_levels([0, 1, 2])
+        assert_frame_equal(df, result)
+
+        # no change, labels
+        result = df.reorder_levels(['L0', 'L1', 'L2'])
+        assert_frame_equal(df, result)
+
+        # rotate, position
+        result = df.reorder_levels([1, 2, 0])
+        e_idx = MultiIndex(levels=[['one', 'two', 'three'], [0, 1], ['bar']],
+                           labels=[[0, 1, 2, 0, 1, 2],
+                                   [0, 1, 0, 1, 0, 1],
+                                   [0, 0, 0, 0, 0, 0]],
+                           names=['L1', 'L2', 'L0'])
+        expected = DataFrame({'A': np.arange(6), 'B': np.arange(6)},
+                             index=e_idx)
+        assert_frame_equal(result, expected)
+
+        result = df.reorder_levels([0, 0, 0])
+        e_idx = MultiIndex(levels=[['bar'], ['bar'], ['bar']],
+                           labels=[[0, 0, 0, 0, 0, 0],
+                                   [0, 0, 0, 0, 0, 0],
+                                   [0, 0, 0, 0, 0, 0]],
+                           names=['L0', 'L0', 'L0'])
+        expected = DataFrame({'A': np.arange(6), 'B': np.arange(6)},
+                             index=e_idx)
+        assert_frame_equal(result, expected)
+
+        result = df.reorder_levels(['L0', 'L0', 'L0'])
+        assert_frame_equal(result, expected)
+
+    def test_reset_index(self):
+        stacked = self.frame.stack()[::2]
+        stacked = DataFrame({'foo': stacked, 'bar': stacked})
+
+        names = ['first', 'second']
+        stacked.index.names = names
+        deleveled = stacked.reset_index()
+        for i, (lev, lab) in enumerate(zip(stacked.index.levels,
+                                           stacked.index.labels)):
+            values = lev.take(lab)
+            name = names[i]
+            assert_almost_equal(values, deleveled[name])
+
+        stacked.index.names = [None, None]
+        deleveled2 = stacked.reset_index()
+        self.assert_numpy_array_equal(deleveled['first'],
+                                      deleveled2['level_0'])
+        self.assert_numpy_array_equal(deleveled['second'],
+                                      deleveled2['level_1'])
+
+        # default name assigned
+        rdf = self.frame.reset_index()
+        self.assert_numpy_array_equal(rdf['index'], self.frame.index.values)
+
+        # default name assigned, corner case
+        df = self.frame.copy()
+        df['index'] = 'foo'
+        rdf = df.reset_index()
+        self.assert_numpy_array_equal(rdf['level_0'], self.frame.index.values)
+
+        # but this is ok
+        self.frame.index.name = 'index'
+        deleveled = self.frame.reset_index()
+        self.assert_numpy_array_equal(deleveled['index'],
+                                      self.frame.index.values)
+        self.assert_numpy_array_equal(deleveled.index,
+                                      np.arange(len(deleveled)))
+
+        # preserve column names
+        self.frame.columns.name = 'columns'
+        resetted = self.frame.reset_index()
+        self.assertEqual(resetted.columns.name, 'columns')
+
+        # only remove certain columns
+        frame = self.frame.reset_index().set_index(['index', 'A', 'B'])
+        rs = frame.reset_index(['A', 'B'])
+
+        # TODO should reset_index check_names ?
+        assert_frame_equal(rs, self.frame, check_names=False)
+
+        rs = frame.reset_index(['index', 'A', 'B'])
+        assert_frame_equal(rs, self.frame.reset_index(), check_names=False)
+
+        rs = frame.reset_index(['index', 'A', 'B'])
+        assert_frame_equal(rs, self.frame.reset_index(), check_names=False)
+
+        rs = frame.reset_index('A')
+        xp = self.frame.reset_index().set_index(['index', 'B'])
+        assert_frame_equal(rs, xp, check_names=False)
+
+        # test resetting in place
+        df = self.frame.copy()
+        resetted = self.frame.reset_index()
+        df.reset_index(inplace=True)
+        assert_frame_equal(df, resetted, check_names=False)
+
+        frame = self.frame.reset_index().set_index(['index', 'A', 'B'])
+        rs = frame.reset_index('A', drop=True)
+        xp = self.frame.copy()
+        del xp['A']
+        xp = xp.set_index(['B'], append=True)
+        assert_frame_equal(rs, xp, check_names=False)
+
+    def test_reset_index_right_dtype(self):
+        time = np.arange(0.0, 10, np.sqrt(2) / 2)
+        s1 = Series((9.81 * time ** 2) / 2,
+                    index=Index(time, name='time'),
+                    name='speed')
+        df = DataFrame(s1)
+
+        resetted = s1.reset_index()
+        self.assertEqual(resetted['time'].dtype, np.float64)
+
+        resetted = df.reset_index()
+        self.assertEqual(resetted['time'].dtype, np.float64)
+
+    def test_reset_index_multiindex_col(self):
+        vals = np.random.randn(3, 3).astype(object)
+        idx = ['x', 'y', 'z']
+        full = np.hstack(([[x] for x in idx], vals))
+        df = DataFrame(vals, Index(idx, name='a'),
+                       columns=[['b', 'b', 'c'], ['mean', 'median', 'mean']])
+        rs = df.reset_index()
+        xp = DataFrame(full, columns=[['a', 'b', 'b', 'c'],
+                                      ['', 'mean', 'median', 'mean']])
+        assert_frame_equal(rs, xp)
+
+        rs = df.reset_index(col_fill=None)
+        xp = DataFrame(full, columns=[['a', 'b', 'b', 'c'],
+                                      ['a', 'mean', 'median', 'mean']])
+        assert_frame_equal(rs, xp)
+
+        rs = df.reset_index(col_level=1, col_fill='blah')
+        xp = DataFrame(full, columns=[['blah', 'b', 'b', 'c'],
+                                      ['a', 'mean', 'median', 'mean']])
+        assert_frame_equal(rs, xp)
+
+        df = DataFrame(vals,
+                       MultiIndex.from_arrays([[0, 1, 2], ['x', 'y', 'z']],
+                                              names=['d', 'a']),
+                       columns=[['b', 'b', 'c'], ['mean', 'median', 'mean']])
+        rs = df.reset_index('a', )
+        xp = DataFrame(full, Index([0, 1, 2], name='d'),
+                       columns=[['a', 'b', 'b', 'c'],
+                                ['', 'mean', 'median', 'mean']])
+        assert_frame_equal(rs, xp)
+
+        rs = df.reset_index('a', col_fill=None)
+        xp = DataFrame(full, Index(lrange(3), name='d'),
+                       columns=[['a', 'b', 'b', 'c'],
+                                ['a', 'mean', 'median', 'mean']])
+        assert_frame_equal(rs, xp)
+
+        rs = df.reset_index('a', col_fill='blah', col_level=1)
+        xp = DataFrame(full, Index(lrange(3), name='d'),
+                       columns=[['blah', 'b', 'b', 'c'],
+                                ['a', 'mean', 'median', 'mean']])
+        assert_frame_equal(rs, xp)
+
+    def test_reset_index_with_datetimeindex_cols(self):
+        # GH5818
+        #
+        df = pd.DataFrame([[1, 2], [3, 4]],
+                          columns=pd.date_range('1/1/2013', '1/2/2013'),
+                          index=['A', 'B'])
+
+        result = df.reset_index()
+        expected = pd.DataFrame([['A', 1, 2], ['B', 3, 4]],
+                                columns=['index', datetime(2013, 1, 1),
+                                         datetime(2013, 1, 2)])
+        assert_frame_equal(result, expected)
+
+    def test_set_index_names(self):
+        df = pd.util.testing.makeDataFrame()
+        df.index.name = 'name'
+
+        self.assertEqual(df.set_index(df.index).index.names, ['name'])
+
+        mi = MultiIndex.from_arrays(df[['A', 'B']].T.values, names=['A', 'B'])
+        mi2 = MultiIndex.from_arrays(df[['A', 'B', 'A', 'B']].T.values,
+                                     names=['A', 'B', 'A', 'B'])
+
+        df = df.set_index(['A', 'B'])
+
+        self.assertEqual(df.set_index(df.index).index.names, ['A', 'B'])
+
+        # Check that set_index isn't converting a MultiIndex into an Index
+        self.assertTrue(isinstance(df.set_index(df.index).index, MultiIndex))
+
+        # Check actual equality
+        tm.assert_index_equal(df.set_index(df.index).index, mi)
+
+        # Check that [MultiIndex, MultiIndex] yields a MultiIndex rather
+        # than a pair of tuples
+        self.assertTrue(isinstance(df.set_index(
+            [df.index, df.index]).index, MultiIndex))
+
+        # Check equality
+        tm.assert_index_equal(df.set_index([df.index, df.index]).index, mi2)
+
+    def test_rename_objects(self):
+        renamed = self.mixed_frame.rename(columns=str.upper)
+        self.assertIn('FOO', renamed)
+        self.assertNotIn('foo', renamed)
+
+    def test_assign_columns(self):
+        self.frame['hi'] = 'there'
+
+        frame = self.frame.copy()
+        frame.columns = ['foo', 'bar', 'baz', 'quux', 'foo2']
+        assert_series_equal(self.frame['C'], frame['baz'], check_names=False)
+        assert_series_equal(self.frame['hi'], frame['foo2'], check_names=False)
diff --git a/pandas/tests/frame/test_analytics.py b/pandas/tests/frame/test_analytics.py
new file mode 100644
index 0000000000000..f68faf99d3143
--- /dev/null
+++ b/pandas/tests/frame/test_analytics.py
@@ -0,0 +1,2268 @@
+# -*- coding: utf-8 -*-
+
+from __future__ import print_function
+
+from datetime import timedelta, datetime
+from distutils.version import LooseVersion
+import sys
+import nose
+
+from numpy import nan
+from numpy.random import randn
+import numpy as np
+
+from pandas.compat import lrange
+from pandas import (compat, isnull, notnull, DataFrame, Series,
+                    MultiIndex, date_range, Timestamp)
+import pandas as pd
+import pandas.core.common as com
+import pandas.core.nanops as nanops
+
+from pandas.util.testing import (assert_almost_equal,
+                                 assert_equal,
+                                 assert_series_equal,
+                                 assert_frame_equal,
+                                 assertRaisesRegexp)
+
+import pandas.util.testing as tm
+from pandas import _np_version_under1p9
+
+from pandas.tests.frame.common import TestData
+
+
+class TestDataFrameAnalytics(tm.TestCase, TestData):
+
+    _multiprocess_can_split_ = True
+
+    # ---------------------------------------------------------------------
+    # Correlation and covariance
+
+    def test_corr_pearson(self):
+        tm._skip_if_no_scipy()
+        self.frame['A'][:5] = nan
+        self.frame['B'][5:10] = nan
+
+        self._check_method('pearson')
+
+    def test_corr_kendall(self):
+        tm._skip_if_no_scipy()
+        self.frame['A'][:5] = nan
+        self.frame['B'][5:10] = nan
+
+        self._check_method('kendall')
+
+    def test_corr_spearman(self):
+        tm._skip_if_no_scipy()
+        self.frame['A'][:5] = nan
+        self.frame['B'][5:10] = nan
+
+        self._check_method('spearman')
+
+    def _check_method(self, method='pearson', check_minp=False):
+        if not check_minp:
+            correls = self.frame.corr(method=method)
+            exp = self.frame['A'].corr(self.frame['C'], method=method)
+            assert_almost_equal(correls['A']['C'], exp)
+        else:
+            result = self.frame.corr(min_periods=len(self.frame) - 8)
+            expected = self.frame.corr()
+            expected.ix['A', 'B'] = expected.ix['B', 'A'] = nan
+            assert_frame_equal(result, expected)
+
+    def test_corr_non_numeric(self):
+        tm._skip_if_no_scipy()
+        self.frame['A'][:5] = nan
+        self.frame['B'][5:10] = nan
+
+        # exclude non-numeric types
+        result = self.mixed_frame.corr()
+        expected = self.mixed_frame.ix[:, ['A', 'B', 'C', 'D']].corr()
+        assert_frame_equal(result, expected)
+
+    def test_corr_nooverlap(self):
+        tm._skip_if_no_scipy()
+
+        # nothing in common
+        for meth in ['pearson', 'kendall', 'spearman']:
+            df = DataFrame({'A': [1, 1.5, 1, np.nan, np.nan, np.nan],
+                            'B': [np.nan, np.nan, np.nan, 1, 1.5, 1],
+                            'C': [np.nan, np.nan, np.nan, np.nan,
+                                  np.nan, np.nan]})
+            rs = df.corr(meth)
+            self.assertTrue(isnull(rs.ix['A', 'B']))
+            self.assertTrue(isnull(rs.ix['B', 'A']))
+            self.assertEqual(rs.ix['A', 'A'], 1)
+            self.assertEqual(rs.ix['B', 'B'], 1)
+            self.assertTrue(isnull(rs.ix['C', 'C']))
+
+    def test_corr_constant(self):
+        tm._skip_if_no_scipy()
+
+        # constant --> all NA
+
+        for meth in ['pearson', 'spearman']:
+            df = DataFrame({'A': [1, 1, 1, np.nan, np.nan, np.nan],
+                            'B': [np.nan, np.nan, np.nan, 1, 1, 1]})
+            rs = df.corr(meth)
+            self.assertTrue(isnull(rs.values).all())
+
+    def test_corr_int(self):
+        # dtypes other than float64 #1761
+        df3 = DataFrame({"a": [1, 2, 3, 4], "b": [1, 2, 3, 4]})
+
+        # it works!
+        df3.cov()
+        df3.corr()
+
+    def test_corr_int_and_boolean(self):
+        tm._skip_if_no_scipy()
+
+        # when dtypes of pandas series are different
+        # then ndarray will have dtype=object,
+        # so it needs to be properly handled
+        df = DataFrame({"a": [True, False], "b": [1, 0]})
+
+        expected = DataFrame(np.ones((2, 2)), index=[
+            'a', 'b'], columns=['a', 'b'])
+        for meth in ['pearson', 'kendall', 'spearman']:
+            assert_frame_equal(df.corr(meth), expected)
+
+    def test_cov(self):
+        # min_periods no NAs (corner case)
+        expected = self.frame.cov()
+        result = self.frame.cov(min_periods=len(self.frame))
+
+        assert_frame_equal(expected, result)
+
+        result = self.frame.cov(min_periods=len(self.frame) + 1)
+        self.assertTrue(isnull(result.values).all())
+
+        # with NAs
+        frame = self.frame.copy()
+        frame['A'][:5] = nan
+        frame['B'][5:10] = nan
+        result = self.frame.cov(min_periods=len(self.frame) - 8)
+        expected = self.frame.cov()
+        expected.ix['A', 'B'] = np.nan
+        expected.ix['B', 'A'] = np.nan
+
+        # regular
+        self.frame['A'][:5] = nan
+        self.frame['B'][:10] = nan
+        cov = self.frame.cov()
+
+        assert_almost_equal(cov['A']['C'],
+                            self.frame['A'].cov(self.frame['C']))
+
+        # exclude non-numeric types
+        result = self.mixed_frame.cov()
+        expected = self.mixed_frame.ix[:, ['A', 'B', 'C', 'D']].cov()
+        assert_frame_equal(result, expected)
+
+        # Single column frame
+        df = DataFrame(np.linspace(0.0, 1.0, 10))
+        result = df.cov()
+        expected = DataFrame(np.cov(df.values.T).reshape((1, 1)),
+                             index=df.columns, columns=df.columns)
+        assert_frame_equal(result, expected)
+        df.ix[0] = np.nan
+        result = df.cov()
+        expected = DataFrame(np.cov(df.values[1:].T).reshape((1, 1)),
+                             index=df.columns, columns=df.columns)
+        assert_frame_equal(result, expected)
+
+    def test_corrwith(self):
+        a = self.tsframe
+        noise = Series(randn(len(a)), index=a.index)
+
+        b = self.tsframe + noise
+
+        # make sure order does not matter
+        b = b.reindex(columns=b.columns[::-1], index=b.index[::-1][10:])
+        del b['B']
+
+        colcorr = a.corrwith(b, axis=0)
+        assert_almost_equal(colcorr['A'], a['A'].corr(b['A']))
+
+        rowcorr = a.corrwith(b, axis=1)
+        assert_series_equal(rowcorr, a.T.corrwith(b.T, axis=0))
+
+        dropped = a.corrwith(b, axis=0, drop=True)
+        assert_almost_equal(dropped['A'], a['A'].corr(b['A']))
+        self.assertNotIn('B', dropped)
+
+        dropped = a.corrwith(b, axis=1, drop=True)
+        self.assertNotIn(a.index[-1], dropped.index)
+
+        # non time-series data
+        index = ['a', 'b', 'c', 'd', 'e']
+        columns = ['one', 'two', 'three', 'four']
+        df1 = DataFrame(randn(5, 4), index=index, columns=columns)
+        df2 = DataFrame(randn(4, 4), index=index[:4], columns=columns)
+        correls = df1.corrwith(df2, axis=1)
+        for row in index[:4]:
+            assert_almost_equal(correls[row], df1.ix[row].corr(df2.ix[row]))
+
+    def test_corrwith_with_objects(self):
+        df1 = tm.makeTimeDataFrame()
+        df2 = tm.makeTimeDataFrame()
+        cols = ['A', 'B', 'C', 'D']
+
+        df1['obj'] = 'foo'
+        df2['obj'] = 'bar'
+
+        result = df1.corrwith(df2)
+        expected = df1.ix[:, cols].corrwith(df2.ix[:, cols])
+        assert_series_equal(result, expected)
+
+        result = df1.corrwith(df2, axis=1)
+        expected = df1.ix[:, cols].corrwith(df2.ix[:, cols], axis=1)
+        assert_series_equal(result, expected)
+
+    def test_corrwith_series(self):
+        result = self.tsframe.corrwith(self.tsframe['A'])
+        expected = self.tsframe.apply(self.tsframe['A'].corr)
+
+        assert_series_equal(result, expected)
+
+    def test_corrwith_matches_corrcoef(self):
+        df1 = DataFrame(np.arange(10000), columns=['a'])
+        df2 = DataFrame(np.arange(10000) ** 2, columns=['a'])
+        c1 = df1.corrwith(df2)['a']
+        c2 = np.corrcoef(df1['a'], df2['a'])[0][1]
+
+        assert_almost_equal(c1, c2)
+        self.assertTrue(c1 < 1)
+
+    def test_bool_describe_in_mixed_frame(self):
+        df = DataFrame({
+            'string_data': ['a', 'b', 'c', 'd', 'e'],
+            'bool_data': [True, True, False, False, False],
+            'int_data': [10, 20, 30, 40, 50],
+        })
+
+        # Boolean data and integer data is included in .describe() output,
+        # string data isn't
+        self.assert_numpy_array_equal(df.describe().columns, [
+            'bool_data', 'int_data'])
+
+        bool_describe = df.describe()['bool_data']
+
+        # Both the min and the max values should stay booleans
+        self.assertEqual(bool_describe['min'].dtype, np.bool_)
+        self.assertEqual(bool_describe['max'].dtype, np.bool_)
+
+        self.assertFalse(bool_describe['min'])
+        self.assertTrue(bool_describe['max'])
+
+        # For numeric operations, like mean or median, the values True/False
+        # are cast to the integer values 1 and 0
+        assert_almost_equal(bool_describe['mean'], 0.4)
+        assert_almost_equal(bool_describe['50%'], 0)
+
+    def test_reduce_mixed_frame(self):
+        # GH 6806
+        df = DataFrame({
+            'bool_data': [True, True, False, False, False],
+            'int_data': [10, 20, 30, 40, 50],
+            'string_data': ['a', 'b', 'c', 'd', 'e'],
+        })
+        df.reindex(columns=['bool_data', 'int_data', 'string_data'])
+        test = df.sum(axis=0)
+        assert_almost_equal(test.values, [2, 150, 'abcde'])
+        assert_series_equal(test, df.T.sum(axis=1))
+
+    def test_count(self):
+        f = lambda s: notnull(s).sum()
+        self._check_stat_op('count', f,
+                            has_skipna=False,
+                            has_numeric_only=True,
+                            check_dtype=False,
+                            check_dates=True)
+
+        # corner case
+        frame = DataFrame()
+        ct1 = frame.count(1)
+        tm.assertIsInstance(ct1, Series)
+
+        ct2 = frame.count(0)
+        tm.assertIsInstance(ct2, Series)
+
+        # GH #423
+        df = DataFrame(index=lrange(10))
+        result = df.count(1)
+        expected = Series(0, index=df.index)
+        assert_series_equal(result, expected)
+
+        df = DataFrame(columns=lrange(10))
+        result = df.count(0)
+        expected = Series(0, index=df.columns)
+        assert_series_equal(result, expected)
+
+        df = DataFrame()
+        result = df.count()
+        expected = Series(0, index=[])
+        assert_series_equal(result, expected)
+
+    def test_sum(self):
+        self._check_stat_op('sum', np.sum, has_numeric_only=True)
+
+        # mixed types (with upcasting happening)
+        self._check_stat_op('sum', np.sum,
+                            frame=self.mixed_float.astype('float32'),
+                            has_numeric_only=True, check_dtype=False,
+                            check_less_precise=True)
+
+    def test_stat_operators_attempt_obj_array(self):
+        data = {
+            'a': [-0.00049987540199591344, -0.0016467257772919831,
+                  0.00067695870775883013],
+            'b': [-0, -0, 0.0],
+            'c': [0.00031111847529610595, 0.0014902627951905339,
+                  -0.00094099200035979691]
+        }
+        df1 = DataFrame(data, index=['foo', 'bar', 'baz'],
+                        dtype='O')
+        methods = ['sum', 'mean', 'prod', 'var', 'std', 'skew', 'min', 'max']
+
+        # GH #676
+        df2 = DataFrame({0: [np.nan, 2], 1: [np.nan, 3],
+                         2: [np.nan, 4]}, dtype=object)
+
+        for df in [df1, df2]:
+            for meth in methods:
+                self.assertEqual(df.values.dtype, np.object_)
+                result = getattr(df, meth)(1)
+                expected = getattr(df.astype('f8'), meth)(1)
+
+                if not tm._incompat_bottleneck_version(meth):
+                    assert_series_equal(result, expected)
+
+    def test_mean(self):
+        self._check_stat_op('mean', np.mean, check_dates=True)
+
+    def test_product(self):
+        self._check_stat_op('product', np.prod)
+
+    def test_median(self):
+        def wrapper(x):
+            if isnull(x).any():
+                return np.nan
+            return np.median(x)
+
+        self._check_stat_op('median', wrapper, check_dates=True)
+
+    def test_min(self):
+        self._check_stat_op('min', np.min, check_dates=True)
+        self._check_stat_op('min', np.min, frame=self.intframe)
+
+    def test_cummin(self):
+        self.tsframe.ix[5:10, 0] = nan
+        self.tsframe.ix[10:15, 1] = nan
+        self.tsframe.ix[15:, 2] = nan
+
+        # axis = 0
+        cummin = self.tsframe.cummin()
+        expected = self.tsframe.apply(Series.cummin)
+        assert_frame_equal(cummin, expected)
+
+        # axis = 1
+        cummin = self.tsframe.cummin(axis=1)
+        expected = self.tsframe.apply(Series.cummin, axis=1)
+        assert_frame_equal(cummin, expected)
+
+        # it works
+        df = DataFrame({'A': np.arange(20)}, index=np.arange(20))
+        result = df.cummin()  # noqa
+
+        # fix issue
+        cummin_xs = self.tsframe.cummin(axis=1)
+        self.assertEqual(np.shape(cummin_xs), np.shape(self.tsframe))
+
+    def test_cummax(self):
+        self.tsframe.ix[5:10, 0] = nan
+        self.tsframe.ix[10:15, 1] = nan
+        self.tsframe.ix[15:, 2] = nan
+
+        # axis = 0
+        cummax = self.tsframe.cummax()
+        expected = self.tsframe.apply(Series.cummax)
+        assert_frame_equal(cummax, expected)
+
+        # axis = 1
+        cummax = self.tsframe.cummax(axis=1)
+        expected = self.tsframe.apply(Series.cummax, axis=1)
+        assert_frame_equal(cummax, expected)
+
+        # it works
+        df = DataFrame({'A': np.arange(20)}, index=np.arange(20))
+        result = df.cummax()  # noqa
+
+        # fix issue
+        cummax_xs = self.tsframe.cummax(axis=1)
+        self.assertEqual(np.shape(cummax_xs), np.shape(self.tsframe))
+
+    def test_max(self):
+        self._check_stat_op('max', np.max, check_dates=True)
+        self._check_stat_op('max', np.max, frame=self.intframe)
+
+    def test_mad(self):
+        f = lambda x: np.abs(x - x.mean()).mean()
+        self._check_stat_op('mad', f)
+
+    def test_var_std(self):
+        alt = lambda x: np.var(x, ddof=1)
+        self._check_stat_op('var', alt)
+
+        alt = lambda x: np.std(x, ddof=1)
+        self._check_stat_op('std', alt)
+
+        result = self.tsframe.std(ddof=4)
+        expected = self.tsframe.apply(lambda x: x.std(ddof=4))
+        assert_almost_equal(result, expected)
+
+        result = self.tsframe.var(ddof=4)
+        expected = self.tsframe.apply(lambda x: x.var(ddof=4))
+        assert_almost_equal(result, expected)
+
+        arr = np.repeat(np.random.random((1, 1000)), 1000, 0)
+        result = nanops.nanvar(arr, axis=0)
+        self.assertFalse((result < 0).any())
+        if nanops._USE_BOTTLENECK:
+            nanops._USE_BOTTLENECK = False
+            result = nanops.nanvar(arr, axis=0)
+            self.assertFalse((result < 0).any())
+            nanops._USE_BOTTLENECK = True
+
+    def test_numeric_only_flag(self):
+        # GH #9201
+        methods = ['sem', 'var', 'std']
+        df1 = DataFrame(np.random.randn(5, 3), columns=['foo', 'bar', 'baz'])
+        # set one entry to a number in str format
+        df1.ix[0, 'foo'] = '100'
+
+        df2 = DataFrame(np.random.randn(5, 3), columns=['foo', 'bar', 'baz'])
+        # set one entry to a non-number str
+        df2.ix[0, 'foo'] = 'a'
+
+        for meth in methods:
+            result = getattr(df1, meth)(axis=1, numeric_only=True)
+            expected = getattr(df1[['bar', 'baz']], meth)(axis=1)
+            assert_series_equal(expected, result)
+
+            result = getattr(df2, meth)(axis=1, numeric_only=True)
+            expected = getattr(df2[['bar', 'baz']], meth)(axis=1)
+            assert_series_equal(expected, result)
+
+            # df1 has all numbers, df2 has a letter inside
+            self.assertRaises(TypeError, lambda: getattr(df1, meth)
+                              (axis=1, numeric_only=False))
+            self.assertRaises(TypeError, lambda: getattr(df2, meth)
+                              (axis=1, numeric_only=False))
+
+    def test_quantile(self):
+        from numpy import percentile
+
+        q = self.tsframe.quantile(0.1, axis=0)
+        self.assertEqual(q['A'], percentile(self.tsframe['A'], 10))
+        q = self.tsframe.quantile(0.9, axis=1)
+        q = self.intframe.quantile(0.1)
+        self.assertEqual(q['A'], percentile(self.intframe['A'], 10))
+
+        # test degenerate case
+        q = DataFrame({'x': [], 'y': []}).quantile(0.1, axis=0)
+        assert(np.isnan(q['x']) and np.isnan(q['y']))
+
+        # non-numeric exclusion
+        df = DataFrame({'col1': ['A', 'A', 'B', 'B'], 'col2': [1, 2, 3, 4]})
+        rs = df.quantile(0.5)
+        xp = df.median()
+        assert_series_equal(rs, xp)
+
+        # axis
+        df = DataFrame({"A": [1, 2, 3], "B": [2, 3, 4]}, index=[1, 2, 3])
+        result = df.quantile(.5, axis=1)
+        expected = Series([1.5, 2.5, 3.5], index=[1, 2, 3])
+        assert_series_equal(result, expected)
+
+        result = df.quantile([.5, .75], axis=1)
+        expected = DataFrame({1: [1.5, 1.75], 2: [2.5, 2.75],
+                              3: [3.5, 3.75]}, index=[0.5, 0.75])
+        assert_frame_equal(result, expected, check_index_type=True)
+
+        # We may want to break API in the future to change this
+        # so that we exclude non-numeric along the same axis
+        # See GH #7312
+        df = DataFrame([[1, 2, 3],
+                        ['a', 'b', 4]])
+        result = df.quantile(.5, axis=1)
+        expected = Series([3., 4.], index=[0, 1])
+        assert_series_equal(result, expected)
+
+    def test_quantile_axis_parameter(self):
+        # GH 9543/9544
+
+        df = DataFrame({"A": [1, 2, 3], "B": [2, 3, 4]}, index=[1, 2, 3])
+
+        result = df.quantile(.5, axis=0)
+
+        expected = Series([2., 3.], index=["A", "B"])
+        assert_series_equal(result, expected)
+
+        expected = df.quantile(.5, axis="index")
+        assert_series_equal(result, expected)
+
+        result = df.quantile(.5, axis=1)
+
+        expected = Series([1.5, 2.5, 3.5], index=[1, 2, 3])
+        assert_series_equal(result, expected)
+
+        result = df.quantile(.5, axis="columns")
+        assert_series_equal(result, expected)
+
+        self.assertRaises(ValueError, df.quantile, 0.1, axis=-1)
+        self.assertRaises(ValueError, df.quantile, 0.1, axis="column")
+
+    def test_quantile_interpolation(self):
+        # GH #10174
+        if _np_version_under1p9:
+            raise nose.SkipTest("Numpy version under 1.9")
+
+        from numpy import percentile
+
+        # interpolation = linear (default case)
+        q = self.tsframe.quantile(0.1, axis=0, interpolation='linear')
+        self.assertEqual(q['A'], percentile(self.tsframe['A'], 10))
+        q = self.intframe.quantile(0.1)
+        self.assertEqual(q['A'], percentile(self.intframe['A'], 10))
+
+        # test with and without interpolation keyword
+        q1 = self.intframe.quantile(0.1)
+        self.assertEqual(q1['A'], np.percentile(self.intframe['A'], 10))
+        assert_series_equal(q, q1)
+
+        # interpolation method other than default linear
+        df = DataFrame({"A": [1, 2, 3], "B": [2, 3, 4]}, index=[1, 2, 3])
+        result = df.quantile(.5, axis=1, interpolation='nearest')
+        expected = Series([1., 2., 3.], index=[1, 2, 3])
+        assert_series_equal(result, expected)
+
+        # axis
+        result = df.quantile([.5, .75], axis=1, interpolation='lower')
+        expected = DataFrame({1: [1., 1.], 2: [2., 2.],
+                              3: [3., 3.]}, index=[0.5, 0.75])
+        assert_frame_equal(result, expected)
+
+        # test degenerate case
+        df = DataFrame({'x': [], 'y': []})
+        q = df.quantile(0.1, axis=0, interpolation='higher')
+        assert(np.isnan(q['x']) and np.isnan(q['y']))
+
+        # multi
+        df = DataFrame([[1, 1, 1], [2, 2, 2], [3, 3, 3]],
+                       columns=['a', 'b', 'c'])
+        result = df.quantile([.25, .5], interpolation='midpoint')
+        expected = DataFrame([[1.5, 1.5, 1.5], [2.5, 2.5, 2.5]],
+                             index=[.25, .5], columns=['a', 'b', 'c'])
+        assert_frame_equal(result, expected)
+
+    def test_quantile_interpolation_np_lt_1p9(self):
+        # GH #10174
+        if not _np_version_under1p9:
+            raise nose.SkipTest("Numpy version is greater than 1.9")
+
+        from numpy import percentile
+
+        # interpolation = linear (default case)
+        q = self.tsframe.quantile(0.1, axis=0, interpolation='linear')
+        self.assertEqual(q['A'], percentile(self.tsframe['A'], 10))
+        q = self.intframe.quantile(0.1)
+        self.assertEqual(q['A'], percentile(self.intframe['A'], 10))
+
+        # test with and without interpolation keyword
+        q1 = self.intframe.quantile(0.1)
+        self.assertEqual(q1['A'], np.percentile(self.intframe['A'], 10))
+        assert_series_equal(q, q1)
+
+        # interpolation method other than default linear
+        expErrMsg = "Interpolation methods other than linear"
+        df = DataFrame({"A": [1, 2, 3], "B": [2, 3, 4]}, index=[1, 2, 3])
+        with assertRaisesRegexp(ValueError, expErrMsg):
+            df.quantile(.5, axis=1, interpolation='nearest')
+
+        with assertRaisesRegexp(ValueError, expErrMsg):
+            df.quantile([.5, .75], axis=1, interpolation='lower')
+
+        # test degenerate case
+        df = DataFrame({'x': [], 'y': []})
+        with assertRaisesRegexp(ValueError, expErrMsg):
+            q = df.quantile(0.1, axis=0, interpolation='higher')
+
+        # multi
+        df = DataFrame([[1, 1, 1], [2, 2, 2], [3, 3, 3]],
+                       columns=['a', 'b', 'c'])
+        with assertRaisesRegexp(ValueError, expErrMsg):
+            df.quantile([.25, .5], interpolation='midpoint')
+
+    def test_quantile_multi(self):
+        df = DataFrame([[1, 1, 1], [2, 2, 2], [3, 3, 3]],
+                       columns=['a', 'b', 'c'])
+        result = df.quantile([.25, .5])
+        expected = DataFrame([[1.5, 1.5, 1.5], [2., 2., 2.]],
+                             index=[.25, .5], columns=['a', 'b', 'c'])
+        assert_frame_equal(result, expected)
+
+        # axis = 1
+        result = df.quantile([.25, .5], axis=1)
+        expected = DataFrame([[1.5, 1.5, 1.5], [2., 2., 2.]],
+                             index=[.25, .5], columns=[0, 1, 2])
+
+        # empty
+        result = DataFrame({'x': [], 'y': []}).quantile([0.1, .9], axis=0)
+        expected = DataFrame({'x': [np.nan, np.nan], 'y': [np.nan, np.nan]},
+                             index=[.1, .9])
+        assert_frame_equal(result, expected)
+
+    def test_quantile_datetime(self):
+        df = DataFrame({'a': pd.to_datetime(['2010', '2011']), 'b': [0, 5]})
+
+        # exclude datetime
+        result = df.quantile(.5)
+        expected = Series([2.5], index=['b'])
+
+        # datetime
+        result = df.quantile(.5, numeric_only=False)
+        expected = Series([Timestamp('2010-07-02 12:00:00'), 2.5],
+                          index=['a', 'b'])
+        assert_series_equal(result, expected)
+
+        # datetime w/ multi
+        result = df.quantile([.5], numeric_only=False)
+        expected = DataFrame([[Timestamp('2010-07-02 12:00:00'), 2.5]],
+                             index=[.5], columns=['a', 'b'])
+        assert_frame_equal(result, expected)
+
+        # axis = 1
+        df['c'] = pd.to_datetime(['2011', '2012'])
+        result = df[['a', 'c']].quantile(.5, axis=1, numeric_only=False)
+        expected = Series([Timestamp('2010-07-02 12:00:00'),
+                           Timestamp('2011-07-02 12:00:00')],
+                          index=[0, 1])
+        assert_series_equal(result, expected)
+
+        result = df[['a', 'c']].quantile([.5], axis=1, numeric_only=False)
+        expected = DataFrame([[Timestamp('2010-07-02 12:00:00'),
+                               Timestamp('2011-07-02 12:00:00')]],
+                             index=[0.5], columns=[0, 1])
+        assert_frame_equal(result, expected)
+
+    def test_quantile_invalid(self):
+        msg = 'percentiles should all be in the interval \\[0, 1\\]'
+        for invalid in [-1, 2, [0.5, -1], [0.5, 2]]:
+            with tm.assertRaisesRegexp(ValueError, msg):
+                self.tsframe.quantile(invalid)
+
+    def test_cumsum(self):
+        self.tsframe.ix[5:10, 0] = nan
+        self.tsframe.ix[10:15, 1] = nan
+        self.tsframe.ix[15:, 2] = nan
+
+        # axis = 0
+        cumsum = self.tsframe.cumsum()
+        expected = self.tsframe.apply(Series.cumsum)
+        assert_frame_equal(cumsum, expected)
+
+        # axis = 1
+        cumsum = self.tsframe.cumsum(axis=1)
+        expected = self.tsframe.apply(Series.cumsum, axis=1)
+        assert_frame_equal(cumsum, expected)
+
+        # works
+        df = DataFrame({'A': np.arange(20)}, index=np.arange(20))
+        result = df.cumsum()  # noqa
+
+        # fix issue
+        cumsum_xs = self.tsframe.cumsum(axis=1)
+        self.assertEqual(np.shape(cumsum_xs), np.shape(self.tsframe))
+
+    def test_cumprod(self):
+        self.tsframe.ix[5:10, 0] = nan
+        self.tsframe.ix[10:15, 1] = nan
+        self.tsframe.ix[15:, 2] = nan
+
+        # axis = 0
+        cumprod = self.tsframe.cumprod()
+        expected = self.tsframe.apply(Series.cumprod)
+        assert_frame_equal(cumprod, expected)
+
+        # axis = 1
+        cumprod = self.tsframe.cumprod(axis=1)
+        expected = self.tsframe.apply(Series.cumprod, axis=1)
+        assert_frame_equal(cumprod, expected)
+
+        # fix issue
+        cumprod_xs = self.tsframe.cumprod(axis=1)
+        self.assertEqual(np.shape(cumprod_xs), np.shape(self.tsframe))
+
+        # ints
+        df = self.tsframe.fillna(0).astype(int)
+        df.cumprod(0)
+        df.cumprod(1)
+
+        # ints32
+        df = self.tsframe.fillna(0).astype(np.int32)
+        df.cumprod(0)
+        df.cumprod(1)
+
+    def test_rank(self):
+        tm._skip_if_no_scipy()
+        from scipy.stats import rankdata
+
+        self.frame['A'][::2] = np.nan
+        self.frame['B'][::3] = np.nan
+        self.frame['C'][::4] = np.nan
+        self.frame['D'][::5] = np.nan
+
+        ranks0 = self.frame.rank()
+        ranks1 = self.frame.rank(1)
+        mask = np.isnan(self.frame.values)
+
+        fvals = self.frame.fillna(np.inf).values
+
+        exp0 = np.apply_along_axis(rankdata, 0, fvals)
+        exp0[mask] = np.nan
+
+        exp1 = np.apply_along_axis(rankdata, 1, fvals)
+        exp1[mask] = np.nan
+
+        assert_almost_equal(ranks0.values, exp0)
+        assert_almost_equal(ranks1.values, exp1)
+
+        # integers
+        df = DataFrame(np.random.randint(0, 5, size=40).reshape((10, 4)))
+
+        result = df.rank()
+        exp = df.astype(float).rank()
+        assert_frame_equal(result, exp)
+
+        result = df.rank(1)
+        exp = df.astype(float).rank(1)
+        assert_frame_equal(result, exp)
+
+    def test_rank2(self):
+        df = DataFrame([[1, 3, 2], [1, 2, 3]])
+        expected = DataFrame([[1.0, 3.0, 2.0], [1, 2, 3]]) / 3.0
+        result = df.rank(1, pct=True)
+        assert_frame_equal(result, expected)
+
+        df = DataFrame([[1, 3, 2], [1, 2, 3]])
+        expected = df.rank(0) / 2.0
+        result = df.rank(0, pct=True)
+        assert_frame_equal(result, expected)
+
+        df = DataFrame([['b', 'c', 'a'], ['a', 'c', 'b']])
+        expected = DataFrame([[2.0, 3.0, 1.0], [1, 3, 2]])
+        result = df.rank(1, numeric_only=False)
+        assert_frame_equal(result, expected)
+
+        expected = DataFrame([[2.0, 1.5, 1.0], [1, 1.5, 2]])
+        result = df.rank(0, numeric_only=False)
+        assert_frame_equal(result, expected)
+
+        df = DataFrame([['b', np.nan, 'a'], ['a', 'c', 'b']])
+        expected = DataFrame([[2.0, nan, 1.0], [1.0, 3.0, 2.0]])
+        result = df.rank(1, numeric_only=False)
+        assert_frame_equal(result, expected)
+
+        expected = DataFrame([[2.0, nan, 1.0], [1.0, 1.0, 2.0]])
+        result = df.rank(0, numeric_only=False)
+        assert_frame_equal(result, expected)
+
+        # f7u12, this does not work without extensive workaround
+        data = [[datetime(2001, 1, 5), nan, datetime(2001, 1, 2)],
+                [datetime(2000, 1, 2), datetime(2000, 1, 3),
+                 datetime(2000, 1, 1)]]
+        df = DataFrame(data)
+
+        # check the rank
+        expected = DataFrame([[2., nan, 1.],
+                              [2., 3., 1.]])
+        result = df.rank(1, numeric_only=False)
+        assert_frame_equal(result, expected)
+
+        # mixed-type frames
+        self.mixed_frame['datetime'] = datetime.now()
+        self.mixed_frame['timedelta'] = timedelta(days=1, seconds=1)
+
+        result = self.mixed_frame.rank(1)
+        expected = self.mixed_frame.rank(1, numeric_only=True)
+        assert_frame_equal(result, expected)
+
+        df = DataFrame({"a": [1e-20, -5, 1e-20 + 1e-40, 10,
+                              1e60, 1e80, 1e-30]})
+        exp = DataFrame({"a": [3.5, 1., 3.5, 5., 6., 7., 2.]})
+        assert_frame_equal(df.rank(), exp)
+
+    def test_rank_na_option(self):
+        tm._skip_if_no_scipy()
+        from scipy.stats import rankdata
+
+        self.frame['A'][::2] = np.nan
+        self.frame['B'][::3] = np.nan
+        self.frame['C'][::4] = np.nan
+        self.frame['D'][::5] = np.nan
+
+        # bottom
+        ranks0 = self.frame.rank(na_option='bottom')
+        ranks1 = self.frame.rank(1, na_option='bottom')
+
+        fvals = self.frame.fillna(np.inf).values
+
+        exp0 = np.apply_along_axis(rankdata, 0, fvals)
+        exp1 = np.apply_along_axis(rankdata, 1, fvals)
+
+        assert_almost_equal(ranks0.values, exp0)
+        assert_almost_equal(ranks1.values, exp1)
+
+        # top
+        ranks0 = self.frame.rank(na_option='top')
+        ranks1 = self.frame.rank(1, na_option='top')
+
+        fval0 = self.frame.fillna((self.frame.min() - 1).to_dict()).values
+        fval1 = self.frame.T
+        fval1 = fval1.fillna((fval1.min() - 1).to_dict()).T
+        fval1 = fval1.fillna(np.inf).values
+
+        exp0 = np.apply_along_axis(rankdata, 0, fval0)
+        exp1 = np.apply_along_axis(rankdata, 1, fval1)
+
+        assert_almost_equal(ranks0.values, exp0)
+        assert_almost_equal(ranks1.values, exp1)
+
+        # descending
+
+        # bottom
+        ranks0 = self.frame.rank(na_option='top', ascending=False)
+        ranks1 = self.frame.rank(1, na_option='top', ascending=False)
+
+        fvals = self.frame.fillna(np.inf).values
+
+        exp0 = np.apply_along_axis(rankdata, 0, -fvals)
+        exp1 = np.apply_along_axis(rankdata, 1, -fvals)
+
+        assert_almost_equal(ranks0.values, exp0)
+        assert_almost_equal(ranks1.values, exp1)
+
+        # descending
+
+        # top
+        ranks0 = self.frame.rank(na_option='bottom', ascending=False)
+        ranks1 = self.frame.rank(1, na_option='bottom', ascending=False)
+
+        fval0 = self.frame.fillna((self.frame.min() - 1).to_dict()).values
+        fval1 = self.frame.T
+        fval1 = fval1.fillna((fval1.min() - 1).to_dict()).T
+        fval1 = fval1.fillna(np.inf).values
+
+        exp0 = np.apply_along_axis(rankdata, 0, -fval0)
+        exp1 = np.apply_along_axis(rankdata, 1, -fval1)
+
+        assert_almost_equal(ranks0.values, exp0)
+        assert_almost_equal(ranks1.values, exp1)
+
+    def test_sem(self):
+        alt = lambda x: np.std(x, ddof=1) / np.sqrt(len(x))
+        self._check_stat_op('sem', alt)
+
+        result = self.tsframe.sem(ddof=4)
+        expected = self.tsframe.apply(
+            lambda x: x.std(ddof=4) / np.sqrt(len(x)))
+        assert_almost_equal(result, expected)
+
+        arr = np.repeat(np.random.random((1, 1000)), 1000, 0)
+        result = nanops.nansem(arr, axis=0)
+        self.assertFalse((result < 0).any())
+        if nanops._USE_BOTTLENECK:
+            nanops._USE_BOTTLENECK = False
+            result = nanops.nansem(arr, axis=0)
+            self.assertFalse((result < 0).any())
+            nanops._USE_BOTTLENECK = True
+
+    def test_skew(self):
+        tm._skip_if_no_scipy()
+        from scipy.stats import skew
+
+        def alt(x):
+            if len(x) < 3:
+                return np.nan
+            return skew(x, bias=False)
+
+        self._check_stat_op('skew', alt)
+
+    def test_kurt(self):
+        tm._skip_if_no_scipy()
+
+        from scipy.stats import kurtosis
+
+        def alt(x):
+            if len(x) < 4:
+                return np.nan
+            return kurtosis(x, bias=False)
+
+        self._check_stat_op('kurt', alt)
+
+        index = MultiIndex(levels=[['bar'], ['one', 'two', 'three'], [0, 1]],
+                           labels=[[0, 0, 0, 0, 0, 0],
+                                   [0, 1, 2, 0, 1, 2],
+                                   [0, 1, 0, 1, 0, 1]])
+        df = DataFrame(np.random.randn(6, 3), index=index)
+
+        kurt = df.kurt()
+        kurt2 = df.kurt(level=0).xs('bar')
+        assert_series_equal(kurt, kurt2, check_names=False)
+        self.assertTrue(kurt.name is None)
+        self.assertEqual(kurt2.name, 'bar')
+
+    def _check_stat_op(self, name, alternative, frame=None, has_skipna=True,
+                       has_numeric_only=False, check_dtype=True,
+                       check_dates=False, check_less_precise=False):
+        if frame is None:
+            frame = self.frame
+            # set some NAs
+            frame.ix[5:10] = np.nan
+            frame.ix[15:20, -2:] = np.nan
+
+        f = getattr(frame, name)
+
+        if check_dates:
+            df = DataFrame({'b': date_range('1/1/2001', periods=2)})
+            _f = getattr(df, name)
+            result = _f()
+            self.assertIsInstance(result, Series)
+
+            df['a'] = lrange(len(df))
+            result = getattr(df, name)()
+            self.assertIsInstance(result, Series)
+            self.assertTrue(len(result))
+
+        if has_skipna:
+            def skipna_wrapper(x):
+                nona = x.dropna()
+                if len(nona) == 0:
+                    return np.nan
+                return alternative(nona)
+
+            def wrapper(x):
+                return alternative(x.values)
+
+            result0 = f(axis=0, skipna=False)
+            result1 = f(axis=1, skipna=False)
+            assert_series_equal(result0, frame.apply(wrapper),
+                                check_dtype=check_dtype,
+                                check_less_precise=check_less_precise)
+            # HACK: win32
+            assert_series_equal(result1, frame.apply(wrapper, axis=1),
+                                check_dtype=False,
+                                check_less_precise=check_less_precise)
+        else:
+            skipna_wrapper = alternative
+            wrapper = alternative
+
+        result0 = f(axis=0)
+        result1 = f(axis=1)
+        assert_series_equal(result0, frame.apply(skipna_wrapper),
+                            check_dtype=check_dtype,
+                            check_less_precise=check_less_precise)
+        if not tm._incompat_bottleneck_version(name):
+            assert_series_equal(result1, frame.apply(skipna_wrapper, axis=1),
+                                check_dtype=False,
+                                check_less_precise=check_less_precise)
+
+        # check dtypes
+        if check_dtype:
+            lcd_dtype = frame.values.dtype
+            self.assertEqual(lcd_dtype, result0.dtype)
+            self.assertEqual(lcd_dtype, result1.dtype)
+
+        # result = f(axis=1)
+        # comp = frame.apply(alternative, axis=1).reindex(result.index)
+        # assert_series_equal(result, comp)
+
+        # bad axis
+        assertRaisesRegexp(ValueError, 'No axis named 2', f, axis=2)
+        # make sure works on mixed-type frame
+        getattr(self.mixed_frame, name)(axis=0)
+        getattr(self.mixed_frame, name)(axis=1)
+
+        if has_numeric_only:
+            getattr(self.mixed_frame, name)(axis=0, numeric_only=True)
+            getattr(self.mixed_frame, name)(axis=1, numeric_only=True)
+            getattr(self.frame, name)(axis=0, numeric_only=False)
+            getattr(self.frame, name)(axis=1, numeric_only=False)
+
+        # all NA case
+        if has_skipna:
+            all_na = self.frame * np.NaN
+            r0 = getattr(all_na, name)(axis=0)
+            r1 = getattr(all_na, name)(axis=1)
+            if not tm._incompat_bottleneck_version(name):
+                self.assertTrue(np.isnan(r0).all())
+                self.assertTrue(np.isnan(r1).all())
+
+    def test_mode(self):
+        df = pd.DataFrame({"A": [12, 12, 11, 12, 19, 11],
+                           "B": [10, 10, 10, np.nan, 3, 4],
+                           "C": [8, 8, 8, 9, 9, 9],
+                           "D": np.arange(6, dtype='int64'),
+                           "E": [8, 8, 1, 1, 3, 3]})
+        assert_frame_equal(df[["A"]].mode(),
+                           pd.DataFrame({"A": [12]}))
+        expected = pd.Series([], dtype='int64', name='D').to_frame()
+        assert_frame_equal(df[["D"]].mode(), expected)
+        expected = pd.Series([1, 3, 8], dtype='int64', name='E').to_frame()
+        assert_frame_equal(df[["E"]].mode(), expected)
+        assert_frame_equal(df[["A", "B"]].mode(),
+                           pd.DataFrame({"A": [12], "B": [10.]}))
+        assert_frame_equal(df.mode(),
+                           pd.DataFrame({"A": [12, np.nan, np.nan],
+                                         "B": [10, np.nan,
np.nan], + "C": [8, 9, np.nan], + "D": [np.nan, np.nan, np.nan], + "E": [1, 3, 8]})) + + # outputs in sorted order + df["C"] = list(reversed(df["C"])) + com.pprint_thing(df["C"]) + com.pprint_thing(df["C"].mode()) + a, b = (df[["A", "B", "C"]].mode(), + pd.DataFrame({"A": [12, np.nan], + "B": [10, np.nan], + "C": [8, 9]})) + com.pprint_thing(a) + com.pprint_thing(b) + assert_frame_equal(a, b) + # should work with heterogeneous types + df = pd.DataFrame({"A": np.arange(6, dtype='int64'), + "B": pd.date_range('2011', periods=6), + "C": list('abcdef')}) + exp = pd.DataFrame({"A": pd.Series([], dtype=df["A"].dtype), + "B": pd.Series([], dtype=df["B"].dtype), + "C": pd.Series([], dtype=df["C"].dtype)}) + assert_frame_equal(df.mode(), exp) + + # and also when not empty + df.loc[1, "A"] = 0 + df.loc[4, "B"] = df.loc[3, "B"] + df.loc[5, "C"] = 'e' + exp = pd.DataFrame({"A": pd.Series([0], dtype=df["A"].dtype), + "B": pd.Series([df.loc[3, "B"]], + dtype=df["B"].dtype), + "C": pd.Series(['e'], dtype=df["C"].dtype)}) + + assert_frame_equal(df.mode(), exp) + + def test_operators_timedelta64(self): + from datetime import timedelta + df = DataFrame(dict(A=date_range('2012-1-1', periods=3, freq='D'), + B=date_range('2012-1-2', periods=3, freq='D'), + C=Timestamp('20120101') - + timedelta(minutes=5, seconds=5))) + + diffs = DataFrame(dict(A=df['A'] - df['C'], + B=df['A'] - df['B'])) + + # min + result = diffs.min() + self.assertEqual(result[0], diffs.ix[0, 'A']) + self.assertEqual(result[1], diffs.ix[0, 'B']) + + result = diffs.min(axis=1) + self.assertTrue((result == diffs.ix[0, 'B']).all()) + + # max + result = diffs.max() + self.assertEqual(result[0], diffs.ix[2, 'A']) + self.assertEqual(result[1], diffs.ix[2, 'B']) + + result = diffs.max(axis=1) + self.assertTrue((result == diffs['A']).all()) + + # abs + result = diffs.abs() + result2 = abs(diffs) + expected = DataFrame(dict(A=df['A'] - df['C'], + B=df['B'] - df['A'])) + assert_frame_equal(result, expected) + 
assert_frame_equal(result2, expected) + + # mixed frame + mixed = diffs.copy() + mixed['C'] = 'foo' + mixed['D'] = 1 + mixed['E'] = 1. + mixed['F'] = Timestamp('20130101') + + # results in an object array + from pandas.tseries.timedeltas import ( + _coerce_scalar_to_timedelta_type as _coerce) + + result = mixed.min() + expected = Series([_coerce(timedelta(seconds=5 * 60 + 5)), + _coerce(timedelta(days=-1)), + 'foo', 1, 1.0, + Timestamp('20130101')], + index=mixed.columns) + assert_series_equal(result, expected) + + # excludes numeric + result = mixed.min(axis=1) + expected = Series([1, 1, 1.], index=[0, 1, 2]) + assert_series_equal(result, expected) + + # works when only those columns are selected + result = mixed[['A', 'B']].min(1) + expected = Series([timedelta(days=-1)] * 3) + assert_series_equal(result, expected) + + result = mixed[['A', 'B']].min() + expected = Series([timedelta(seconds=5 * 60 + 5), + timedelta(days=-1)], index=['A', 'B']) + assert_series_equal(result, expected) + + # GH 3106 + df = DataFrame({'time': date_range('20130102', periods=5), + 'time2': date_range('20130105', periods=5)}) + df['off1'] = df['time2'] - df['time'] + self.assertEqual(df['off1'].dtype, 'timedelta64[ns]') + + df['off2'] = df['time'] - df['time2'] + df._consolidate_inplace() + self.assertTrue(df['off1'].dtype == 'timedelta64[ns]') + self.assertTrue(df['off2'].dtype == 'timedelta64[ns]') + + def test_sum_corner(self): + axis0 = self.empty.sum(0) + axis1 = self.empty.sum(1) + tm.assertIsInstance(axis0, Series) + tm.assertIsInstance(axis1, Series) + self.assertEqual(len(axis0), 0) + self.assertEqual(len(axis1), 0) + + def test_sum_object(self): + values = self.frame.values.astype(int) + frame = DataFrame(values, index=self.frame.index, + columns=self.frame.columns) + deltas = frame * timedelta(1) + deltas.sum() + + def test_sum_bool(self): + # ensure this works, bug report + bools = np.isnan(self.frame) + bools.sum(1) + bools.sum(0) + + def test_mean_corner(self): + # unit 
test when have object data + the_mean = self.mixed_frame.mean(axis=0) + the_sum = self.mixed_frame.sum(axis=0, numeric_only=True) + self.assertTrue(the_sum.index.equals(the_mean.index)) + self.assertTrue(len(the_mean.index) < len(self.mixed_frame.columns)) + + # xs sum mixed type, just want to know it works... + the_mean = self.mixed_frame.mean(axis=1) + the_sum = self.mixed_frame.sum(axis=1, numeric_only=True) + self.assertTrue(the_sum.index.equals(the_mean.index)) + + # take mean of boolean column + self.frame['bool'] = self.frame['A'] > 0 + means = self.frame.mean(0) + self.assertEqual(means['bool'], self.frame['bool'].values.mean()) + + def test_stats_mixed_type(self): + # don't blow up + self.mixed_frame.std(1) + self.mixed_frame.var(1) + self.mixed_frame.mean(1) + self.mixed_frame.skew(1) + + def test_median_corner(self): + def wrapper(x): + if isnull(x).any(): + return np.nan + return np.median(x) + + self._check_stat_op('median', wrapper, frame=self.intframe, + check_dtype=False, check_dates=True) + + # Miscellanea + + def test_count_objects(self): + dm = DataFrame(self.mixed_frame._series) + df = DataFrame(self.mixed_frame._series) + + assert_series_equal(dm.count(), df.count()) + assert_series_equal(dm.count(1), df.count(1)) + + def test_cumsum_corner(self): + dm = DataFrame(np.arange(20).reshape(4, 5), + index=lrange(4), columns=lrange(5)) + # ?(wesm) + result = dm.cumsum() # noqa + + def test_sum_bools(self): + df = DataFrame(index=lrange(1), columns=lrange(10)) + bools = isnull(df) + self.assertEqual(bools.sum(axis=1)[0], 10) + + # Index of max / min + + def test_idxmin(self): + frame = self.frame + frame.ix[5:10] = np.nan + frame.ix[15:20, -2:] = np.nan + for skipna in [True, False]: + for axis in [0, 1]: + for df in [frame, self.intframe]: + result = df.idxmin(axis=axis, skipna=skipna) + expected = df.apply( + Series.idxmin, axis=axis, skipna=skipna) + assert_series_equal(result, expected) + + self.assertRaises(ValueError, frame.idxmin, axis=2) + + 
def test_idxmax(self): + frame = self.frame + frame.ix[5:10] = np.nan + frame.ix[15:20, -2:] = np.nan + for skipna in [True, False]: + for axis in [0, 1]: + for df in [frame, self.intframe]: + result = df.idxmax(axis=axis, skipna=skipna) + expected = df.apply( + Series.idxmax, axis=axis, skipna=skipna) + assert_series_equal(result, expected) + + self.assertRaises(ValueError, frame.idxmax, axis=2) + + # ---------------------------------------------------------------------- + # Logical reductions + + def test_any_all(self): + self._check_bool_op('any', np.any, has_skipna=True, has_bool_only=True) + self._check_bool_op('all', np.all, has_skipna=True, has_bool_only=True) + + df = DataFrame(randn(10, 4)) > 0 + df.any(1) + df.all(1) + df.any(1, bool_only=True) + df.all(1, bool_only=True) + + # skip pathological failure cases + # class CantNonzero(object): + + # def __nonzero__(self): + # raise ValueError + + # df[4] = CantNonzero() + + # it works! + # df.any(1) + # df.all(1) + # df.any(1, bool_only=True) + # df.all(1, bool_only=True) + + # df[4][4] = np.nan + # df.any(1) + # df.all(1) + # df.any(1, bool_only=True) + # df.all(1, bool_only=True) + + def _check_bool_op(self, name, alternative, frame=None, has_skipna=True, + has_bool_only=False): + if frame is None: + frame = self.frame > 0 + # set some NAs + frame = DataFrame(frame.values.astype(object), frame.index, + frame.columns) + frame.ix[5:10] = np.nan + frame.ix[15:20, -2:] = np.nan + + f = getattr(frame, name) + + if has_skipna: + def skipna_wrapper(x): + nona = x.dropna().values + return alternative(nona) + + def wrapper(x): + return alternative(x.values) + + result0 = f(axis=0, skipna=False) + result1 = f(axis=1, skipna=False) + assert_series_equal(result0, frame.apply(wrapper)) + assert_series_equal(result1, frame.apply(wrapper, axis=1), + check_dtype=False) # HACK: win32 + else: + skipna_wrapper = alternative + wrapper = alternative + + result0 = f(axis=0) + result1 = f(axis=1) + assert_series_equal(result0, 
frame.apply(skipna_wrapper)) + assert_series_equal(result1, frame.apply(skipna_wrapper, axis=1), + check_dtype=False) + + # result = f(axis=1) + # comp = frame.apply(alternative, axis=1).reindex(result.index) + # assert_series_equal(result, comp) + + # bad axis + self.assertRaises(ValueError, f, axis=2) + + # make sure works on mixed-type frame + mixed = self.mixed_frame + mixed['_bool_'] = np.random.randn(len(mixed)) > 0 + getattr(mixed, name)(axis=0) + getattr(mixed, name)(axis=1) + + class NonzeroFail: + + def __nonzero__(self): + raise ValueError + + mixed['_nonzero_fail_'] = NonzeroFail() + + if has_bool_only: + getattr(mixed, name)(axis=0, bool_only=True) + getattr(mixed, name)(axis=1, bool_only=True) + getattr(frame, name)(axis=0, bool_only=False) + getattr(frame, name)(axis=1, bool_only=False) + + # all NA case + if has_skipna: + all_na = frame * np.NaN + r0 = getattr(all_na, name)(axis=0) + r1 = getattr(all_na, name)(axis=1) + if name == 'any': + self.assertFalse(r0.any()) + self.assertFalse(r1.any()) + else: + self.assertTrue(r0.all()) + self.assertTrue(r1.all()) + + # ---------------------------------------------------------------------- + # Top / bottom + + def test_nlargest(self): + # GH10393 + from string import ascii_lowercase + df = pd.DataFrame({'a': np.random.permutation(10), + 'b': list(ascii_lowercase[:10])}) + result = df.nlargest(5, 'a') + expected = df.sort_values('a', ascending=False).head(5) + assert_frame_equal(result, expected) + + def test_nlargest_multiple_columns(self): + from string import ascii_lowercase + df = pd.DataFrame({'a': np.random.permutation(10), + 'b': list(ascii_lowercase[:10]), + 'c': np.random.permutation(10).astype('float64')}) + result = df.nlargest(5, ['a', 'b']) + expected = df.sort_values(['a', 'b'], ascending=False).head(5) + assert_frame_equal(result, expected) + + def test_nsmallest(self): + from string import ascii_lowercase + df = pd.DataFrame({'a': np.random.permutation(10), + 'b': 
list(ascii_lowercase[:10])}) + result = df.nsmallest(5, 'a') + expected = df.sort_values('a').head(5) + assert_frame_equal(result, expected) + + def test_nsmallest_multiple_columns(self): + from string import ascii_lowercase + df = pd.DataFrame({'a': np.random.permutation(10), + 'b': list(ascii_lowercase[:10]), + 'c': np.random.permutation(10).astype('float64')}) + result = df.nsmallest(5, ['a', 'c']) + expected = df.sort_values(['a', 'c']).head(5) + assert_frame_equal(result, expected) + + # ---------------------------------------------------------------------- + # Isin + + def test_isin(self): + # GH #4211 + df = DataFrame({'vals': [1, 2, 3, 4], 'ids': ['a', 'b', 'f', 'n'], + 'ids2': ['a', 'n', 'c', 'n']}, + index=['foo', 'bar', 'baz', 'qux']) + other = ['a', 'b', 'c'] + + result = df.isin(other) + expected = DataFrame([df.loc[s].isin(other) for s in df.index]) + assert_frame_equal(result, expected) + + def test_isin_empty(self): + df = DataFrame({'A': ['a', 'b', 'c'], 'B': ['a', 'e', 'f']}) + result = df.isin([]) + expected = pd.DataFrame(False, df.index, df.columns) + assert_frame_equal(result, expected) + + def test_isin_dict(self): + df = DataFrame({'A': ['a', 'b', 'c'], 'B': ['a', 'e', 'f']}) + d = {'A': ['a']} + + expected = DataFrame(False, df.index, df.columns) + expected.loc[0, 'A'] = True + + result = df.isin(d) + assert_frame_equal(result, expected) + + # non unique columns + df = DataFrame({'A': ['a', 'b', 'c'], 'B': ['a', 'e', 'f']}) + df.columns = ['A', 'A'] + expected = DataFrame(False, df.index, df.columns) + expected.loc[0, 'A'] = True + result = df.isin(d) + assert_frame_equal(result, expected) + + def test_isin_with_string_scalar(self): + # GH4763 + df = DataFrame({'vals': [1, 2, 3, 4], 'ids': ['a', 'b', 'f', 'n'], + 'ids2': ['a', 'n', 'c', 'n']}, + index=['foo', 'bar', 'baz', 'qux']) + with tm.assertRaises(TypeError): + df.isin('a') + + with tm.assertRaises(TypeError): + df.isin('aaa') + + def test_isin_df(self): + df1 = DataFrame({'A': [1, 2, 
3, 4], 'B': [2, np.nan, 4, 4]}) + df2 = DataFrame({'A': [0, 2, 12, 4], 'B': [2, np.nan, 4, 5]}) + expected = DataFrame(False, df1.index, df1.columns) + result = df1.isin(df2) + expected['A'].loc[[1, 3]] = True + expected['B'].loc[[0, 2]] = True + assert_frame_equal(result, expected) + + # partial overlapping columns + df2.columns = ['A', 'C'] + result = df1.isin(df2) + expected['B'] = False + assert_frame_equal(result, expected) + + def test_isin_df_dupe_values(self): + df1 = DataFrame({'A': [1, 2, 3, 4], 'B': [2, np.nan, 4, 4]}) + # just cols duped + df2 = DataFrame([[0, 2], [12, 4], [2, np.nan], [4, 5]], + columns=['B', 'B']) + with tm.assertRaises(ValueError): + df1.isin(df2) + + # just index duped + df2 = DataFrame([[0, 2], [12, 4], [2, np.nan], [4, 5]], + columns=['A', 'B'], index=[0, 0, 1, 1]) + with tm.assertRaises(ValueError): + df1.isin(df2) + + # cols and index: + df2.columns = ['B', 'B'] + with tm.assertRaises(ValueError): + df1.isin(df2) + + def test_isin_dupe_self(self): + other = DataFrame({'A': [1, 0, 1, 0], 'B': [1, 1, 0, 0]}) + df = DataFrame([[1, 1], [1, 0], [0, 0]], columns=['A', 'A']) + result = df.isin(other) + expected = DataFrame(False, index=df.index, columns=df.columns) + expected.loc[0] = True + expected.iloc[1, 1] = True + assert_frame_equal(result, expected) + + def test_isin_against_series(self): + df = pd.DataFrame({'A': [1, 2, 3, 4], 'B': [2, np.nan, 4, 4]}, + index=['a', 'b', 'c', 'd']) + s = pd.Series([1, 3, 11, 4], index=['a', 'b', 'c', 'd']) + expected = DataFrame(False, index=df.index, columns=df.columns) + expected['A'].loc['a'] = True + expected.loc['d'] = True + result = df.isin(s) + assert_frame_equal(result, expected) + + def test_isin_multiIndex(self): + idx = MultiIndex.from_tuples([(0, 'a', 'foo'), (0, 'a', 'bar'), + (0, 'b', 'bar'), (0, 'b', 'baz'), + (2, 'a', 'foo'), (2, 'a', 'bar'), + (2, 'c', 'bar'), (2, 'c', 'baz'), + (1, 'b', 'foo'), (1, 'b', 'bar'), + (1, 'c', 'bar'), (1, 'c', 'baz')]) + df1 = DataFrame({'A': 
np.ones(12), + 'B': np.zeros(12)}, index=idx) + df2 = DataFrame({'A': [1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1], + 'B': [1, 1, 0, 1, 1, 0, 0, 1, 1, 1, 1, 1]}) + # against regular index + expected = DataFrame(False, index=df1.index, columns=df1.columns) + result = df1.isin(df2) + assert_frame_equal(result, expected) + + df2.index = idx + expected = df2.values.astype(np.bool) + expected[:, 1] = ~expected[:, 1] + expected = DataFrame(expected, columns=['A', 'B'], index=idx) + + result = df1.isin(df2) + assert_frame_equal(result, expected) + + # ---------------------------------------------------------------------- + # Row deduplication + + def test_drop_duplicates(self): + df = DataFrame({'AAA': ['foo', 'bar', 'foo', 'bar', + 'foo', 'bar', 'bar', 'foo'], + 'B': ['one', 'one', 'two', 'two', + 'two', 'two', 'one', 'two'], + 'C': [1, 1, 2, 2, 2, 2, 1, 2], + 'D': lrange(8)}) + + # single column + result = df.drop_duplicates('AAA') + expected = df[:2] + assert_frame_equal(result, expected) + + result = df.drop_duplicates('AAA', keep='last') + expected = df.ix[[6, 7]] + assert_frame_equal(result, expected) + + result = df.drop_duplicates('AAA', keep=False) + expected = df.ix[[]] + assert_frame_equal(result, expected) + self.assertEqual(len(result), 0) + + # deprecate take_last + with tm.assert_produces_warning(FutureWarning): + result = df.drop_duplicates('AAA', take_last=True) + expected = df.ix[[6, 7]] + assert_frame_equal(result, expected) + + # multi column + expected = df.ix[[0, 1, 2, 3]] + result = df.drop_duplicates(np.array(['AAA', 'B'])) + assert_frame_equal(result, expected) + result = df.drop_duplicates(['AAA', 'B']) + assert_frame_equal(result, expected) + + result = df.drop_duplicates(('AAA', 'B'), keep='last') + expected = df.ix[[0, 5, 6, 7]] + assert_frame_equal(result, expected) + + result = df.drop_duplicates(('AAA', 'B'), keep=False) + expected = df.ix[[0]] + assert_frame_equal(result, expected) + + # deprecate take_last + with 
tm.assert_produces_warning(FutureWarning): + result = df.drop_duplicates(('AAA', 'B'), take_last=True) + expected = df.ix[[0, 5, 6, 7]] + assert_frame_equal(result, expected) + + # consider everything + df2 = df.ix[:, ['AAA', 'B', 'C']] + + result = df2.drop_duplicates() + # in this case only + expected = df2.drop_duplicates(['AAA', 'B']) + assert_frame_equal(result, expected) + + result = df2.drop_duplicates(keep='last') + expected = df2.drop_duplicates(['AAA', 'B'], keep='last') + assert_frame_equal(result, expected) + + result = df2.drop_duplicates(keep=False) + expected = df2.drop_duplicates(['AAA', 'B'], keep=False) + assert_frame_equal(result, expected) + + # deprecate take_last + with tm.assert_produces_warning(FutureWarning): + result = df2.drop_duplicates(take_last=True) + with tm.assert_produces_warning(FutureWarning): + expected = df2.drop_duplicates(['AAA', 'B'], take_last=True) + assert_frame_equal(result, expected) + + # integers + result = df.drop_duplicates('C') + expected = df.iloc[[0, 2]] + assert_frame_equal(result, expected) + result = df.drop_duplicates('C', keep='last') + expected = df.iloc[[-2, -1]] + assert_frame_equal(result, expected) + + df['E'] = df['C'].astype('int8') + result = df.drop_duplicates('E') + expected = df.iloc[[0, 2]] + assert_frame_equal(result, expected) + result = df.drop_duplicates('E', keep='last') + expected = df.iloc[[-2, -1]] + assert_frame_equal(result, expected) + + # GH 11376 + df = pd.DataFrame({'x': [7, 6, 3, 3, 4, 8, 0], + 'y': [0, 6, 5, 5, 9, 1, 2]}) + expected = df.loc[df.index != 3] + assert_frame_equal(df.drop_duplicates(), expected) + + df = pd.DataFrame([[1, 0], [0, 2]]) + assert_frame_equal(df.drop_duplicates(), df) + + df = pd.DataFrame([[-2, 0], [0, -4]]) + assert_frame_equal(df.drop_duplicates(), df) + + x = np.iinfo(np.int64).max / 3 * 2 + df = pd.DataFrame([[-x, x], [0, x + 4]]) + assert_frame_equal(df.drop_duplicates(), df) + + df = pd.DataFrame([[-x, x], [x, x + 4]]) + 
assert_frame_equal(df.drop_duplicates(), df) + + # GH 11864 + df = pd.DataFrame([i] * 9 for i in range(16)) + df = df.append([[1] + [0] * 8], ignore_index=True) + + for keep in ['first', 'last', False]: + assert_equal(df.duplicated(keep=keep).sum(), 0) + + def test_drop_duplicates_for_take_all(self): + df = DataFrame({'AAA': ['foo', 'bar', 'baz', 'bar', + 'foo', 'bar', 'qux', 'foo'], + 'B': ['one', 'one', 'two', 'two', + 'two', 'two', 'one', 'two'], + 'C': [1, 1, 2, 2, 2, 2, 1, 2], + 'D': lrange(8)}) + + # single column + result = df.drop_duplicates('AAA') + expected = df.iloc[[0, 1, 2, 6]] + assert_frame_equal(result, expected) + + result = df.drop_duplicates('AAA', keep='last') + expected = df.iloc[[2, 5, 6, 7]] + assert_frame_equal(result, expected) + + result = df.drop_duplicates('AAA', keep=False) + expected = df.iloc[[2, 6]] + assert_frame_equal(result, expected) + + # multiple columns + result = df.drop_duplicates(['AAA', 'B']) + expected = df.iloc[[0, 1, 2, 3, 4, 6]] + assert_frame_equal(result, expected) + + result = df.drop_duplicates(['AAA', 'B'], keep='last') + expected = df.iloc[[0, 1, 2, 5, 6, 7]] + assert_frame_equal(result, expected) + + result = df.drop_duplicates(['AAA', 'B'], keep=False) + expected = df.iloc[[0, 1, 2, 6]] + assert_frame_equal(result, expected) + + def test_drop_duplicates_deprecated_warning(self): + df = DataFrame({'AAA': ['foo', 'bar', 'foo', 'bar', + 'foo', 'bar', 'bar', 'foo'], + 'B': ['one', 'one', 'two', 'two', + 'two', 'two', 'one', 'two'], + 'C': [1, 1, 2, 2, 2, 2, 1, 2], + 'D': lrange(8)}) + expected = df[:2] + + # Raises warning + with tm.assert_produces_warning(False): + result = df.drop_duplicates(subset='AAA') + assert_frame_equal(result, expected) + + with tm.assert_produces_warning(FutureWarning): + result = df.drop_duplicates(cols='AAA') + assert_frame_equal(result, expected) + + # Does not allow both subset and cols + self.assertRaises(TypeError, df.drop_duplicates, + kwargs={'cols': 'AAA', 'subset': 'B'}) + + # 
Does not allow unknown kwargs + self.assertRaises(TypeError, df.drop_duplicates, + kwargs={'subset': 'AAA', 'bad_arg': True}) + + # deprecate take_last + # Raises warning + with tm.assert_produces_warning(FutureWarning): + result = df.drop_duplicates(take_last=False, subset='AAA') + assert_frame_equal(result, expected) + + self.assertRaises(ValueError, df.drop_duplicates, keep='invalid_name') + + def test_drop_duplicates_tuple(self): + df = DataFrame({('AA', 'AB'): ['foo', 'bar', 'foo', 'bar', + 'foo', 'bar', 'bar', 'foo'], + 'B': ['one', 'one', 'two', 'two', + 'two', 'two', 'one', 'two'], + 'C': [1, 1, 2, 2, 2, 2, 1, 2], + 'D': lrange(8)}) + + # single column + result = df.drop_duplicates(('AA', 'AB')) + expected = df[:2] + assert_frame_equal(result, expected) + + result = df.drop_duplicates(('AA', 'AB'), keep='last') + expected = df.ix[[6, 7]] + assert_frame_equal(result, expected) + + result = df.drop_duplicates(('AA', 'AB'), keep=False) + expected = df.ix[[]] # empty df + self.assertEqual(len(result), 0) + assert_frame_equal(result, expected) + + # deprecate take_last + with tm.assert_produces_warning(FutureWarning): + result = df.drop_duplicates(('AA', 'AB'), take_last=True) + expected = df.ix[[6, 7]] + assert_frame_equal(result, expected) + + # multi column + expected = df.ix[[0, 1, 2, 3]] + result = df.drop_duplicates((('AA', 'AB'), 'B')) + assert_frame_equal(result, expected) + + def test_drop_duplicates_NA(self): + # none + df = DataFrame({'A': [None, None, 'foo', 'bar', + 'foo', 'bar', 'bar', 'foo'], + 'B': ['one', 'one', 'two', 'two', + 'two', 'two', 'one', 'two'], + 'C': [1.0, np.nan, np.nan, np.nan, 1., 1., 1, 1.], + 'D': lrange(8)}) + + # single column + result = df.drop_duplicates('A') + expected = df.ix[[0, 2, 3]] + assert_frame_equal(result, expected) + + result = df.drop_duplicates('A', keep='last') + expected = df.ix[[1, 6, 7]] + assert_frame_equal(result, expected) + + result = df.drop_duplicates('A', keep=False) + expected = df.ix[[]] # empty 
df + assert_frame_equal(result, expected) + self.assertEqual(len(result), 0) + + # deprecate take_last + with tm.assert_produces_warning(FutureWarning): + result = df.drop_duplicates('A', take_last=True) + expected = df.ix[[1, 6, 7]] + assert_frame_equal(result, expected) + + # multi column + result = df.drop_duplicates(['A', 'B']) + expected = df.ix[[0, 2, 3, 6]] + assert_frame_equal(result, expected) + + result = df.drop_duplicates(['A', 'B'], keep='last') + expected = df.ix[[1, 5, 6, 7]] + assert_frame_equal(result, expected) + + result = df.drop_duplicates(['A', 'B'], keep=False) + expected = df.ix[[6]] + assert_frame_equal(result, expected) + + # deprecate take_last + with tm.assert_produces_warning(FutureWarning): + result = df.drop_duplicates(['A', 'B'], take_last=True) + expected = df.ix[[1, 5, 6, 7]] + assert_frame_equal(result, expected) + + # nan + df = DataFrame({'A': ['foo', 'bar', 'foo', 'bar', + 'foo', 'bar', 'bar', 'foo'], + 'B': ['one', 'one', 'two', 'two', + 'two', 'two', 'one', 'two'], + 'C': [1.0, np.nan, np.nan, np.nan, 1., 1., 1, 1.], + 'D': lrange(8)}) + + # single column + result = df.drop_duplicates('C') + expected = df[:2] + assert_frame_equal(result, expected) + + result = df.drop_duplicates('C', keep='last') + expected = df.ix[[3, 7]] + assert_frame_equal(result, expected) + + result = df.drop_duplicates('C', keep=False) + expected = df.ix[[]] # empty df + assert_frame_equal(result, expected) + self.assertEqual(len(result), 0) + + # deprecate take_last + with tm.assert_produces_warning(FutureWarning): + result = df.drop_duplicates('C', take_last=True) + expected = df.ix[[3, 7]] + assert_frame_equal(result, expected) + + # multi column + result = df.drop_duplicates(['C', 'B']) + expected = df.ix[[0, 1, 2, 4]] + assert_frame_equal(result, expected) + + result = df.drop_duplicates(['C', 'B'], keep='last') + expected = df.ix[[1, 3, 6, 7]] + assert_frame_equal(result, expected) + + result = df.drop_duplicates(['C', 'B'], keep=False) + 
expected = df.ix[[1]] + assert_frame_equal(result, expected) + + # deprecate take_last + with tm.assert_produces_warning(FutureWarning): + result = df.drop_duplicates(['C', 'B'], take_last=True) + expected = df.ix[[1, 3, 6, 7]] + assert_frame_equal(result, expected) + + def test_drop_duplicates_NA_for_take_all(self): + # none + df = DataFrame({'A': [None, None, 'foo', 'bar', + 'foo', 'baz', 'bar', 'qux'], + 'C': [1.0, np.nan, np.nan, np.nan, 1., 2., 3, 1.]}) + + # single column + result = df.drop_duplicates('A') + expected = df.iloc[[0, 2, 3, 5, 7]] + assert_frame_equal(result, expected) + + result = df.drop_duplicates('A', keep='last') + expected = df.iloc[[1, 4, 5, 6, 7]] + assert_frame_equal(result, expected) + + result = df.drop_duplicates('A', keep=False) + expected = df.iloc[[5, 7]] + assert_frame_equal(result, expected) + + # nan + + # single column + result = df.drop_duplicates('C') + expected = df.iloc[[0, 1, 5, 6]] + assert_frame_equal(result, expected) + + result = df.drop_duplicates('C', keep='last') + expected = df.iloc[[3, 5, 6, 7]] + assert_frame_equal(result, expected) + + result = df.drop_duplicates('C', keep=False) + expected = df.iloc[[5, 6]] + assert_frame_equal(result, expected) + + def test_drop_duplicates_inplace(self): + orig = DataFrame({'A': ['foo', 'bar', 'foo', 'bar', + 'foo', 'bar', 'bar', 'foo'], + 'B': ['one', 'one', 'two', 'two', + 'two', 'two', 'one', 'two'], + 'C': [1, 1, 2, 2, 2, 2, 1, 2], + 'D': lrange(8)}) + + # single column + df = orig.copy() + df.drop_duplicates('A', inplace=True) + expected = orig[:2] + result = df + assert_frame_equal(result, expected) + + df = orig.copy() + df.drop_duplicates('A', keep='last', inplace=True) + expected = orig.ix[[6, 7]] + result = df + assert_frame_equal(result, expected) + + df = orig.copy() + df.drop_duplicates('A', keep=False, inplace=True) + expected = orig.ix[[]] + result = df + assert_frame_equal(result, expected) + self.assertEqual(len(df), 0) + + # deprecate take_last + df = 
orig.copy() + with tm.assert_produces_warning(FutureWarning): + df.drop_duplicates('A', take_last=True, inplace=True) + expected = orig.ix[[6, 7]] + result = df + assert_frame_equal(result, expected) + + # multi column + df = orig.copy() + df.drop_duplicates(['A', 'B'], inplace=True) + expected = orig.ix[[0, 1, 2, 3]] + result = df + assert_frame_equal(result, expected) + + df = orig.copy() + df.drop_duplicates(['A', 'B'], keep='last', inplace=True) + expected = orig.ix[[0, 5, 6, 7]] + result = df + assert_frame_equal(result, expected) + + df = orig.copy() + df.drop_duplicates(['A', 'B'], keep=False, inplace=True) + expected = orig.ix[[0]] + result = df + assert_frame_equal(result, expected) + + # deprecate take_last + df = orig.copy() + with tm.assert_produces_warning(FutureWarning): + df.drop_duplicates(['A', 'B'], take_last=True, inplace=True) + expected = orig.ix[[0, 5, 6, 7]] + result = df + assert_frame_equal(result, expected) + + # consider everything + orig2 = orig.ix[:, ['A', 'B', 'C']].copy() + + df2 = orig2.copy() + df2.drop_duplicates(inplace=True) + # in this case only + expected = orig2.drop_duplicates(['A', 'B']) + result = df2 + assert_frame_equal(result, expected) + + df2 = orig2.copy() + df2.drop_duplicates(keep='last', inplace=True) + expected = orig2.drop_duplicates(['A', 'B'], keep='last') + result = df2 + assert_frame_equal(result, expected) + + df2 = orig2.copy() + df2.drop_duplicates(keep=False, inplace=True) + expected = orig2.drop_duplicates(['A', 'B'], keep=False) + result = df2 + assert_frame_equal(result, expected) + + # deprecate take_last + df2 = orig2.copy() + with tm.assert_produces_warning(FutureWarning): + df2.drop_duplicates(take_last=True, inplace=True) + with tm.assert_produces_warning(FutureWarning): + expected = orig2.drop_duplicates(['A', 'B'], take_last=True) + result = df2 + assert_frame_equal(result, expected) + + def test_duplicated_deprecated_warning(self): + df = DataFrame({'AAA': ['foo', 'bar', 'foo', 'bar', + 'foo', 
'bar', 'bar', 'foo'],
+                        'B': ['one', 'one', 'two', 'two',
+                              'two', 'two', 'one', 'two'],
+                        'C': [1, 1, 2, 2, 2, 2, 1, 2],
+                        'D': lrange(8)})
+
+        # Raises warning
+        with tm.assert_produces_warning(False):
+            result = df.duplicated(subset='AAA')
+
+        with tm.assert_produces_warning(FutureWarning):
+            result = df.duplicated(cols='AAA')  # noqa
+
+        # Does not allow both subset and cols
+        self.assertRaises(TypeError, df.duplicated,
+                          kwargs={'cols': 'AAA', 'subset': 'B'})
+
+        # Does not allow unknown kwargs
+        self.assertRaises(TypeError, df.duplicated,
+                          kwargs={'subset': 'AAA', 'bad_arg': True})
+
+    # Rounding
+
+    def test_round(self):
+        # GH 2665
+
+        # Test that rounding an empty DataFrame does nothing
+        df = DataFrame()
+        assert_frame_equal(df, df.round())
+
+        # Here's the test frame we'll be working with
+        df = DataFrame(
+            {'col1': [1.123, 2.123, 3.123], 'col2': [1.234, 2.234, 3.234]})
+
+        # Default round to integer (i.e. decimals=0)
+        expected_rounded = DataFrame(
+            {'col1': [1., 2., 3.], 'col2': [1., 2., 3.]})
+        assert_frame_equal(df.round(), expected_rounded)
+
+        # Round with an integer
+        decimals = 2
+        expected_rounded = DataFrame(
+            {'col1': [1.12, 2.12, 3.12], 'col2': [1.23, 2.23, 3.23]})
+        assert_frame_equal(df.round(decimals), expected_rounded)
+
+        # This should also work with np.round (since np.round dispatches to
+        # df.round)
+        assert_frame_equal(np.round(df, decimals), expected_rounded)
+
+        # Round with a list
+        round_list = [1, 2]
+        with self.assertRaises(TypeError):
+            df.round(round_list)
+
+        # Round with a dictionary
+        expected_rounded = DataFrame(
+            {'col1': [1.1, 2.1, 3.1], 'col2': [1.23, 2.23, 3.23]})
+        round_dict = {'col1': 1, 'col2': 2}
+        assert_frame_equal(df.round(round_dict), expected_rounded)
+
+        # Incomplete dict
+        expected_partially_rounded = DataFrame(
+            {'col1': [1.123, 2.123, 3.123], 'col2': [1.2, 2.2, 3.2]})
+        partial_round_dict = {'col2': 1}
+        assert_frame_equal(
+            df.round(partial_round_dict), expected_partially_rounded)
+
+        # Dict with unknown elements
+        wrong_round_dict = {'col3': 2, 'col2': 1}
+        assert_frame_equal(
+            df.round(wrong_round_dict), expected_partially_rounded)
+
+        # float input to `decimals`
+        non_int_round_dict = {'col1': 1, 'col2': 0.5}
+        with self.assertRaises(TypeError):
+            df.round(non_int_round_dict)
+
+        # String input
+        non_int_round_dict = {'col1': 1, 'col2': 'foo'}
+        with self.assertRaises(TypeError):
+            df.round(non_int_round_dict)
+
+        non_int_round_Series = Series(non_int_round_dict)
+        with self.assertRaises(TypeError):
+            df.round(non_int_round_Series)
+
+        # List input
+        non_int_round_dict = {'col1': 1, 'col2': [1, 2]}
+        with self.assertRaises(TypeError):
+            df.round(non_int_round_dict)
+
+        non_int_round_Series = Series(non_int_round_dict)
+        with self.assertRaises(TypeError):
+            df.round(non_int_round_Series)
+
+        # Non integer Series inputs
+        non_int_round_Series = Series(non_int_round_dict)
+        with self.assertRaises(TypeError):
+            df.round(non_int_round_Series)
+
+        non_int_round_Series = Series(non_int_round_dict)
+        with self.assertRaises(TypeError):
+            df.round(non_int_round_Series)
+
+        # Negative numbers
+        negative_round_dict = {'col1': -1, 'col2': -2}
+        big_df = df * 100
+        expected_neg_rounded = DataFrame(
+            {'col1': [110., 210, 310], 'col2': [100., 200, 300]})
+        assert_frame_equal(
+            big_df.round(negative_round_dict), expected_neg_rounded)
+
+        # nan in Series round
+        nan_round_Series = Series({'col1': nan, 'col2': 1})
+
+        # TODO(wesm): unused?
+        expected_nan_round = DataFrame({  # noqa
+            'col1': [1.123, 2.123, 3.123],
+            'col2': [1.2, 2.2, 3.2]})
+
+        if sys.version < LooseVersion('2.7'):
+            # Rounding with decimal is a ValueError in Python < 2.7
+            with self.assertRaises(ValueError):
+                df.round(nan_round_Series)
+        else:
+            with self.assertRaises(TypeError):
+                df.round(nan_round_Series)
+
+        # Make sure this doesn't break existing Series.round
+        assert_series_equal(df['col1'].round(1), expected_rounded['col1'])
+
+        # named columns
+        # GH 11986
+        decimals = 2
+        expected_rounded = DataFrame(
+            {'col1': [1.12, 2.12, 3.12], 'col2': [1.23, 2.23, 3.23]})
+        df.columns.name = "cols"
+        expected_rounded.columns.name = "cols"
+        assert_frame_equal(df.round(decimals), expected_rounded)
+
+        # interaction of named columns & series
+        assert_series_equal(df['col1'].round(decimals),
+                            expected_rounded['col1'])
+        assert_series_equal(df.round(decimals)['col1'],
+                            expected_rounded['col1'])
+
+    def test_round_mixed_type(self):
+        # GH11885
+        df = DataFrame({'col1': [1.1, 2.2, 3.3, 4.4],
+                        'col2': ['1', 'a', 'c', 'f'],
+                        'col3': date_range('20111111', periods=4)})
+        round_0 = DataFrame({'col1': [1., 2., 3., 4.],
+                             'col2': ['1', 'a', 'c', 'f'],
+                             'col3': date_range('20111111', periods=4)})
+        assert_frame_equal(df.round(), round_0)
+        assert_frame_equal(df.round(1), df)
+        assert_frame_equal(df.round({'col1': 1}), df)
+        assert_frame_equal(df.round({'col1': 0}), round_0)
+        assert_frame_equal(df.round({'col1': 0, 'col2': 1}), round_0)
+        assert_frame_equal(df.round({'col3': 1}), df)
+
+    def test_round_issue(self):
+        # GH11611
+
+        df = pd.DataFrame(np.random.random([3, 3]), columns=['A', 'B', 'C'],
+                          index=['first', 'second', 'third'])
+
+        dfs = pd.concat((df, df), axis=1)
+        rounded = dfs.round()
+        self.assertTrue(rounded.index.equals(dfs.index))
+
+        decimals = pd.Series([1, 0, 2], index=['A', 'B', 'A'])
+        self.assertRaises(ValueError, df.round, decimals)
+
+    def test_built_in_round(self):
+        if not compat.PY3:
+            raise nose.SkipTest("built-in round cannot be overridden "
+                                "prior to Python 3")
+
+        # GH11763
+        # Here's the test frame we'll be working with
+        df = DataFrame(
+            {'col1': [1.123, 2.123, 3.123], 'col2': [1.234, 2.234, 3.234]})
+
+        # Default round to integer (i.e. decimals=0)
+        expected_rounded = DataFrame(
+            {'col1': [1., 2., 3.], 'col2': [1., 2., 3.]})
+        assert_frame_equal(round(df), expected_rounded)
+
+    # Clip
+
+    def test_clip(self):
+        median = self.frame.median().median()
+
+        capped = self.frame.clip_upper(median)
+        self.assertFalse((capped.values > median).any())
+
+        floored = self.frame.clip_lower(median)
+        self.assertFalse((floored.values < median).any())
+
+        double = self.frame.clip(upper=median, lower=median)
+        self.assertFalse((double.values != median).any())
+
+    def test_dataframe_clip(self):
+        # GH #2747
+        df = DataFrame(np.random.randn(1000, 2))
+
+        for lb, ub in [(-1, 1), (1, -1)]:
+            clipped_df = df.clip(lb, ub)
+
+            lb, ub = min(lb, ub), max(ub, lb)
+            lb_mask = df.values <= lb
+            ub_mask = df.values >= ub
+            mask = ~lb_mask & ~ub_mask
+            self.assertTrue((clipped_df.values[lb_mask] == lb).all())
+            self.assertTrue((clipped_df.values[ub_mask] == ub).all())
+            self.assertTrue((clipped_df.values[mask] ==
+                             df.values[mask]).all())
+
+    def test_clip_against_series(self):
+        # GH #6966
+
+        df = DataFrame(np.random.randn(1000, 2))
+        lb = Series(np.random.randn(1000))
+        ub = lb + 1
+
+        clipped_df = df.clip(lb, ub, axis=0)
+
+        for i in range(2):
+            lb_mask = df.iloc[:, i] <= lb
+            ub_mask = df.iloc[:, i] >= ub
+            mask = ~lb_mask & ~ub_mask
+
+            result = clipped_df.loc[lb_mask, i]
+            assert_series_equal(result, lb[lb_mask], check_names=False)
+            self.assertEqual(result.name, i)
+
+            result = clipped_df.loc[ub_mask, i]
+            assert_series_equal(result, ub[ub_mask], check_names=False)
+            self.assertEqual(result.name, i)
+
+            assert_series_equal(clipped_df.loc[mask, i], df.loc[mask, i])
+
+    def test_clip_against_frame(self):
+        df = DataFrame(np.random.randn(1000, 2))
+        lb = DataFrame(np.random.randn(1000, 2))
+        ub = lb + 1
+
+        clipped_df = df.clip(lb, ub)
+
+        lb_mask = df <= lb
+        ub_mask = df >= ub
+        mask = ~lb_mask & ~ub_mask
+
+        assert_frame_equal(clipped_df[lb_mask], lb[lb_mask])
+        assert_frame_equal(clipped_df[ub_mask], ub[ub_mask])
+        assert_frame_equal(clipped_df[mask], df[mask])
+
+    # Matrix-like
+
+    def test_dot(self):
+        a = DataFrame(np.random.randn(3, 4), index=['a', 'b', 'c'],
+                      columns=['p', 'q', 'r', 's'])
+        b = DataFrame(np.random.randn(4, 2), index=['p', 'q', 'r', 's'],
+                      columns=['one', 'two'])
+
+        result = a.dot(b)
+        expected = DataFrame(np.dot(a.values, b.values),
+                             index=['a', 'b', 'c'],
+                             columns=['one', 'two'])
+        # Check alignment
+        b1 = b.reindex(index=reversed(b.index))
+        result = a.dot(b1)
+        assert_frame_equal(result, expected)
+
+        # Check series argument
+        result = a.dot(b['one'])
+        assert_series_equal(result, expected['one'], check_names=False)
+        self.assertTrue(result.name is None)
+
+        result = a.dot(b1['one'])
+        assert_series_equal(result, expected['one'], check_names=False)
+        self.assertTrue(result.name is None)
+
+        # can pass correct-length arrays
+        row = a.ix[0].values
+
+        result = a.dot(row)
+        exp = a.dot(a.ix[0])
+        assert_series_equal(result, exp)
+
+        with assertRaisesRegexp(ValueError, 'Dot product shape mismatch'):
+            a.dot(row[:-1])
+
+        a = np.random.rand(1, 5)
+        b = np.random.rand(5, 1)
+        A = DataFrame(a)
+
+        # TODO(wesm): unused
+        B = DataFrame(b)  # noqa
+
+        # it works
+        result = A.dot(b)
+
+        # unaligned
+        df = DataFrame(randn(3, 4), index=[1, 2, 3], columns=lrange(4))
+        df2 = DataFrame(randn(5, 3), index=lrange(5), columns=[1, 2, 3])
+
+        assertRaisesRegexp(ValueError, 'aligned', df.dot, df2)
diff --git a/pandas/tests/frame/test_apply.py b/pandas/tests/frame/test_apply.py
new file mode 100644
index 0000000000000..818e2fb89008d
--- /dev/null
+++ b/pandas/tests/frame/test_apply.py
@@ -0,0 +1,402 @@
+# -*- coding: utf-8 -*-
+
+from __future__ import print_function
+
+from datetime import datetime
+
+import numpy as np
+
+from pandas import (notnull, DataFrame, Series, MultiIndex, date_range,
+                    Timestamp, compat)
+import pandas as pd
+import pandas.core.common as com
+from pandas.util.testing import (assert_series_equal,
+                                 assert_frame_equal)
+import pandas.util.testing as tm
+from pandas.tests.frame.common import TestData
+
+
+class TestDataFrameApply(tm.TestCase, TestData):
+
+    _multiprocess_can_split_ = True
+
+    def test_apply(self):
+        # ufunc
+        applied = self.frame.apply(np.sqrt)
+        assert_series_equal(np.sqrt(self.frame['A']), applied['A'])
+
+        # aggregator
+        applied = self.frame.apply(np.mean)
+        self.assertEqual(applied['A'], np.mean(self.frame['A']))
+
+        d = self.frame.index[0]
+        applied = self.frame.apply(np.mean, axis=1)
+        self.assertEqual(applied[d], np.mean(self.frame.xs(d)))
+        self.assertIs(applied.index, self.frame.index)  # want this
+
+        # invalid axis
+        df = DataFrame(
+            [[1, 2, 3], [4, 5, 6], [7, 8, 9]], index=['a', 'a', 'c'])
+        self.assertRaises(ValueError, df.apply, lambda x: x, 2)
+
+        # GH9573
+        df = DataFrame({'c0': ['A', 'A', 'B', 'B'],
+                        'c1': ['C', 'C', 'D', 'D']})
+        df = df.apply(lambda ts: ts.astype('category'))
+        self.assertEqual(df.shape, (4, 2))
+        self.assertTrue(isinstance(df['c0'].dtype, com.CategoricalDtype))
+        self.assertTrue(isinstance(df['c1'].dtype, com.CategoricalDtype))
+
+    def test_apply_mixed_datetimelike(self):
+        # mixed datetimelike
+        # GH 7778
+        df = DataFrame({'A': date_range('20130101', periods=3),
+                        'B': pd.to_timedelta(np.arange(3), unit='s')})
+        result = df.apply(lambda x: x, axis=1)
+        assert_frame_equal(result, df)
+
+    def test_apply_empty(self):
+        # empty
+        applied = self.empty.apply(np.sqrt)
+        self.assertTrue(applied.empty)
+
+        applied = self.empty.apply(np.mean)
+        self.assertTrue(applied.empty)
+
+        no_rows = self.frame[:0]
+        result = no_rows.apply(lambda x: x.mean())
+        expected = Series(np.nan, index=self.frame.columns)
+        assert_series_equal(result, expected)
+
+        no_cols = self.frame.ix[:, []]
+        result = no_cols.apply(lambda x: x.mean(), axis=1)
+        expected = Series(np.nan, index=self.frame.index)
+        assert_series_equal(result, expected)
+
+        # 2476
+        xp = DataFrame(index=['a'])
+        rs = xp.apply(lambda x: x['a'], axis=1)
+        assert_frame_equal(xp, rs)
+
+        # reduce with an empty DataFrame
+        x = []
+        result = self.empty.apply(x.append, axis=1, reduce=False)
+        assert_frame_equal(result, self.empty)
+        result = self.empty.apply(x.append, axis=1, reduce=True)
+        assert_series_equal(result, Series(
+            [], index=pd.Index([], dtype=object)))
+
+        empty_with_cols = DataFrame(columns=['a', 'b', 'c'])
+        result = empty_with_cols.apply(x.append, axis=1, reduce=False)
+        assert_frame_equal(result, empty_with_cols)
+        result = empty_with_cols.apply(x.append, axis=1, reduce=True)
+        assert_series_equal(result, Series(
+            [], index=pd.Index([], dtype=object)))
+
+        # Ensure that x.append hasn't been called
+        self.assertEqual(x, [])
+
+    def test_apply_standard_nonunique(self):
+        df = DataFrame(
+            [[1, 2, 3], [4, 5, 6], [7, 8, 9]], index=['a', 'a', 'c'])
+        rs = df.apply(lambda s: s[0], axis=1)
+        xp = Series([1, 4, 7], ['a', 'a', 'c'])
+        assert_series_equal(rs, xp)
+
+        rs = df.T.apply(lambda s: s[0], axis=0)
+        assert_series_equal(rs, xp)
+
+    def test_apply_broadcast(self):
+        broadcasted = self.frame.apply(np.mean, broadcast=True)
+        agged = self.frame.apply(np.mean)
+
+        for col, ts in compat.iteritems(broadcasted):
+            self.assertTrue((ts == agged[col]).all())
+
+        broadcasted = self.frame.apply(np.mean, axis=1, broadcast=True)
+        agged = self.frame.apply(np.mean, axis=1)
+        for idx in broadcasted.index:
+            self.assertTrue((broadcasted.xs(idx) == agged[idx]).all())
+
+    def test_apply_raw(self):
+        result0 = self.frame.apply(np.mean, raw=True)
+        result1 = self.frame.apply(np.mean, axis=1, raw=True)
+
+        expected0 = self.frame.apply(lambda x: x.values.mean())
+        expected1 = self.frame.apply(lambda x: x.values.mean(), axis=1)
+
+        assert_series_equal(result0, expected0)
+        assert_series_equal(result1, expected1)
+
+        # no reduction
+        result = self.frame.apply(lambda x: x * 2, raw=True)
+        expected = self.frame * 2
+        assert_frame_equal(result, expected)
+
+    def test_apply_axis1(self):
+        d = self.frame.index[0]
+        tapplied = self.frame.apply(np.mean, axis=1)
+        self.assertEqual(tapplied[d], np.mean(self.frame.xs(d)))
+
+    def test_apply_ignore_failures(self):
+        result = self.mixed_frame._apply_standard(np.mean, 0,
+                                                  ignore_failures=True)
+        expected = self.mixed_frame._get_numeric_data().apply(np.mean)
+        assert_series_equal(result, expected)
+
+    def test_apply_mixed_dtype_corner(self):
+        df = DataFrame({'A': ['foo'],
+                        'B': [1.]})
+        result = df[:0].apply(np.mean, axis=1)
+        # the result here is actually kind of ambiguous, should it be a Series
+        # or a DataFrame?
+        expected = Series(np.nan, index=pd.Index([], dtype='int64'))
+        assert_series_equal(result, expected)
+
+        df = DataFrame({'A': ['foo'],
+                        'B': [1.]})
+        result = df.apply(lambda x: x['A'], axis=1)
+        expected = Series(['foo'], index=[0])
+        assert_series_equal(result, expected)
+
+        result = df.apply(lambda x: x['B'], axis=1)
+        expected = Series([1.], index=[0])
+        assert_series_equal(result, expected)
+
+    def test_apply_empty_infer_type(self):
+        no_cols = DataFrame(index=['a', 'b', 'c'])
+        no_index = DataFrame(columns=['a', 'b', 'c'])
+
+        def _check(df, f):
+            test_res = f(np.array([], dtype='f8'))
+            is_reduction = not isinstance(test_res, np.ndarray)
+
+            def _checkit(axis=0, raw=False):
+                res = df.apply(f, axis=axis, raw=raw)
+                if is_reduction:
+                    agg_axis = df._get_agg_axis(axis)
+                    tm.assertIsInstance(res, Series)
+                    self.assertIs(res.index, agg_axis)
+                else:
+                    tm.assertIsInstance(res, DataFrame)
+
+            _checkit()
+            _checkit(axis=1)
+            _checkit(raw=True)
+            _checkit(axis=0, raw=True)
+
+        _check(no_cols, lambda x: x)
+        _check(no_cols, lambda x: x.mean())
+        _check(no_index, lambda x: x)
+        _check(no_index, lambda x: x.mean())
+
+        result = no_cols.apply(lambda x: x.mean(), broadcast=True)
+        tm.assertIsInstance(result, DataFrame)
+
+    def test_apply_with_args_kwds(self):
+        def add_some(x, howmuch=0):
+            return x + howmuch
+
+        def agg_and_add(x, howmuch=0):
+            return x.mean() + howmuch
+
+        def subtract_and_divide(x, sub, divide=1):
+            return (x - sub) / divide
+
+        result = self.frame.apply(add_some, howmuch=2)
+        exp = self.frame.apply(lambda x: x + 2)
+        assert_frame_equal(result, exp)
+
+        result = self.frame.apply(agg_and_add, howmuch=2)
+        exp = self.frame.apply(lambda x: x.mean() + 2)
+        assert_series_equal(result, exp)
+
+        res = self.frame.apply(subtract_and_divide, args=(2,), divide=2)
+        exp = self.frame.apply(lambda x: (x - 2.) / 2.)
+        assert_frame_equal(res, exp)
+
+    def test_apply_yield_list(self):
+        result = self.frame.apply(list)
+        assert_frame_equal(result, self.frame)
+
+    def test_apply_reduce_Series(self):
+        self.frame.ix[::2, 'A'] = np.nan
+        expected = self.frame.mean(1)
+        result = self.frame.apply(np.mean, axis=1)
+        assert_series_equal(result, expected)
+
+    def test_apply_differently_indexed(self):
+        df = DataFrame(np.random.randn(20, 10))
+
+        result0 = df.apply(Series.describe, axis=0)
+        expected0 = DataFrame(dict((i, v.describe())
+                                   for i, v in compat.iteritems(df)),
+                              columns=df.columns)
+        assert_frame_equal(result0, expected0)
+
+        result1 = df.apply(Series.describe, axis=1)
+        expected1 = DataFrame(dict((i, v.describe())
+                                   for i, v in compat.iteritems(df.T)),
+                              columns=df.index).T
+        assert_frame_equal(result1, expected1)
+
+    def test_apply_modify_traceback(self):
+        data = DataFrame({'A': ['foo', 'foo', 'foo', 'foo',
+                                'bar', 'bar', 'bar', 'bar',
+                                'foo', 'foo', 'foo'],
+                          'B': ['one', 'one', 'one', 'two',
+                                'one', 'one', 'one', 'two',
+                                'two', 'two', 'one'],
+                          'C': ['dull', 'dull', 'shiny', 'dull',
+                                'dull', 'shiny', 'shiny', 'dull',
+                                'shiny', 'shiny', 'shiny'],
+                          'D': np.random.randn(11),
+                          'E': np.random.randn(11),
+                          'F': np.random.randn(11)})
+
+        data.loc[4, 'C'] = np.nan
+
+        def transform(row):
+            if row['C'].startswith('shin') and row['A'] == 'foo':
+                row['D'] = 7
+            return row
+
+        def transform2(row):
+            if (notnull(row['C']) and row['C'].startswith('shin')
+                    and row['A'] == 'foo'):
+                row['D'] = 7
+            return row
+
+        try:
+            transformed = data.apply(transform, axis=1)  # noqa
+        except AttributeError as e:
+            self.assertEqual(len(e.args), 2)
+            self.assertEqual(e.args[1], 'occurred at index 4')
+            self.assertEqual(
+                e.args[0], "'float' object has no attribute 'startswith'")
+
+    def test_apply_bug(self):
+
+        # GH 6125
+        positions = pd.DataFrame([[1, 'ABC0', 50], [1, 'YUM0', 20],
+                                  [1, 'DEF0', 20], [2, 'ABC1', 50],
+                                  [2, 'YUM1', 20], [2, 'DEF1', 20]],
+                                 columns=['a', 'market', 'position'])

+        def f(r):
+            return r['market']
+        expected = positions.apply(f, axis=1)
+
+        positions = DataFrame([[datetime(2013, 1, 1), 'ABC0', 50],
+                               [datetime(2013, 1, 2), 'YUM0', 20],
+                               [datetime(2013, 1, 3), 'DEF0', 20],
+                               [datetime(2013, 1, 4), 'ABC1', 50],
+                               [datetime(2013, 1, 5), 'YUM1', 20],
+                               [datetime(2013, 1, 6), 'DEF1', 20]],
+                              columns=['a', 'market', 'position'])
+        result = positions.apply(f, axis=1)
+        assert_series_equal(result, expected)
+
+    def test_apply_convert_objects(self):
+        data = DataFrame({'A': ['foo', 'foo', 'foo', 'foo',
+                                'bar', 'bar', 'bar', 'bar',
+                                'foo', 'foo', 'foo'],
+                          'B': ['one', 'one', 'one', 'two',
+                                'one', 'one', 'one', 'two',
+                                'two', 'two', 'one'],
+                          'C': ['dull', 'dull', 'shiny', 'dull',
+                                'dull', 'shiny', 'shiny', 'dull',
+                                'shiny', 'shiny', 'shiny'],
+                          'D': np.random.randn(11),
+                          'E': np.random.randn(11),
+                          'F': np.random.randn(11)})
+
+        result = data.apply(lambda x: x, axis=1)
+        assert_frame_equal(result._convert(datetime=True), data)
+
+    def test_apply_attach_name(self):
+        result = self.frame.apply(lambda x: x.name)
+        expected = Series(self.frame.columns, index=self.frame.columns)
+        assert_series_equal(result, expected)
+
+        result = self.frame.apply(lambda x: x.name, axis=1)
+        expected = Series(self.frame.index, index=self.frame.index)
+        assert_series_equal(result, expected)
+
+        # non-reductions
+        result = self.frame.apply(lambda x: np.repeat(x.name, len(x)))
+        expected = DataFrame(np.tile(self.frame.columns,
+                                     (len(self.frame.index), 1)),
+                             index=self.frame.index,
+                             columns=self.frame.columns)
+        assert_frame_equal(result, expected)
+
+        result = self.frame.apply(lambda x: np.repeat(x.name, len(x)),
+                                  axis=1)
+        expected = DataFrame(np.tile(self.frame.index,
+                                     (len(self.frame.columns), 1)).T,
+                             index=self.frame.index,
+                             columns=self.frame.columns)
+        assert_frame_equal(result, expected)
+
+    def test_apply_multi_index(self):
+        s = DataFrame([[1, 2], [3, 4], [5, 6]])
+        s.index = MultiIndex.from_arrays([['a', 'a', 'b'], ['c', 'd', 'd']])
+        s.columns = ['col1', 'col2']
+        res = s.apply(lambda x: Series({'min': min(x), 'max': max(x)}), 1)
+        tm.assertIsInstance(res.index, MultiIndex)
+
+    def test_apply_dict(self):
+
+        # GH 8735
+        A = DataFrame([['foo', 'bar'], ['spam', 'eggs']])
+        A_dicts = pd.Series([dict([(0, 'foo'), (1, 'spam')]),
+                             dict([(0, 'bar'), (1, 'eggs')])])
+        B = DataFrame([[0, 1], [2, 3]])
+        B_dicts = pd.Series([dict([(0, 0), (1, 2)]), dict([(0, 1), (1, 3)])])
+        fn = lambda x: x.to_dict()
+
+        for df, dicts in [(A, A_dicts), (B, B_dicts)]:
+            reduce_true = df.apply(fn, reduce=True)
+            reduce_false = df.apply(fn, reduce=False)
+            reduce_none = df.apply(fn, reduce=None)
+
+            assert_series_equal(reduce_true, dicts)
+            assert_frame_equal(reduce_false, df)
+            assert_series_equal(reduce_none, dicts)
+
+    def test_applymap(self):
+        applied = self.frame.applymap(lambda x: x * 2)
+        assert_frame_equal(applied, self.frame * 2)
+        result = self.frame.applymap(type)
+
+        # GH #465, function returning tuples
+        result = self.frame.applymap(lambda x: (x, x))
+        tm.assertIsInstance(result['A'][0], tuple)
+
+        # GH 2909, object conversion to float in constructor?
+        df = DataFrame(data=[1, 'a'])
+        result = df.applymap(lambda x: x)
+        self.assertEqual(result.dtypes[0], object)
+
+        df = DataFrame(data=[1., 'a'])
+        result = df.applymap(lambda x: x)
+        self.assertEqual(result.dtypes[0], object)
+
+        # GH2786
+        df = DataFrame(np.random.random((3, 4)))
+        df2 = df.copy()
+        cols = ['a', 'a', 'a', 'a']
+        df.columns = cols
+
+        expected = df2.applymap(str)
+        expected.columns = cols
+        result = df.applymap(str)
+        assert_frame_equal(result, expected)
+
+        # datetime/timedelta
+        df['datetime'] = Timestamp('20130101')
+        df['timedelta'] = pd.Timedelta('1 min')
+        result = df.applymap(str)
+        for f in ['datetime', 'timedelta']:
+            self.assertEqual(result.loc[0, f], str(df.loc[0, f]))
diff --git a/pandas/tests/frame/test_axis_select_reindex.py b/pandas/tests/frame/test_axis_select_reindex.py
new file mode 100644
index 0000000000000..32df9fac42550
--- /dev/null
+++ b/pandas/tests/frame/test_axis_select_reindex.py
@@ -0,0 +1,808 @@
+# -*- coding: utf-8 -*-
+
+from __future__ import print_function
+
+from datetime import datetime
+
+from numpy import random
+import numpy as np
+
+from pandas.compat import lrange, lzip, u
+from pandas import (compat, DataFrame, Series, Index, MultiIndex,
+                    date_range, isnull)
+import pandas as pd
+
+from pandas.util.testing import (assert_series_equal,
+                                 assert_frame_equal,
+                                 assertRaisesRegexp)
+
+import pandas.util.testing as tm
+
+from pandas.tests.frame.common import TestData
+
+
+class TestDataFrameSelectReindex(tm.TestCase, TestData):
+    # These are specific reindex-based tests; other indexing tests should go in
+    # test_indexing
+
+    _multiprocess_can_split_ = True
+
+    def test_drop_names(self):
+        df = DataFrame([[1, 2, 3], [3, 4, 5], [5, 6, 7]],
+                       index=['a', 'b', 'c'],
+                       columns=['d', 'e', 'f'])
+        df.index.name, df.columns.name = 'first', 'second'
+        df_dropped_b = df.drop('b')
+        df_dropped_e = df.drop('e', axis=1)
+        df_inplace_b, df_inplace_e = df.copy(), df.copy()
+        df_inplace_b.drop('b', inplace=True)
+        df_inplace_e.drop('e', axis=1, inplace=True)
+        for obj in (df_dropped_b, df_dropped_e, df_inplace_b, df_inplace_e):
+            self.assertEqual(obj.index.name, 'first')
+            self.assertEqual(obj.columns.name, 'second')
+        self.assertEqual(list(df.columns), ['d', 'e', 'f'])
+
+        self.assertRaises(ValueError, df.drop, ['g'])
+        self.assertRaises(ValueError, df.drop, ['g'], 1)
+
+        # errors = 'ignore'
+        dropped = df.drop(['g'], errors='ignore')
+        expected = Index(['a', 'b', 'c'], name='first')
+        self.assert_index_equal(dropped.index, expected)
+
+        dropped = df.drop(['b', 'g'], errors='ignore')
+        expected = Index(['a', 'c'], name='first')
+        self.assert_index_equal(dropped.index, expected)
+
+        dropped = df.drop(['g'], axis=1, errors='ignore')
+        expected = Index(['d', 'e', 'f'], name='second')
+        self.assert_index_equal(dropped.columns, expected)
+
+        dropped = df.drop(['d', 'g'], axis=1, errors='ignore')
+        expected = Index(['e', 'f'], name='second')
+        self.assert_index_equal(dropped.columns, expected)
+
+    def test_drop_col_still_multiindex(self):
+        arrays = [['a', 'b', 'c', 'top'],
+                  ['', '', '', 'OD'],
+                  ['', '', '', 'wx']]
+
+        tuples = sorted(zip(*arrays))
+        index = MultiIndex.from_tuples(tuples)
+
+        df = DataFrame(np.random.randn(3, 4), columns=index)
+        del df[('a', '', '')]
+        assert(isinstance(df.columns, MultiIndex))
+
+    def test_drop(self):
+        simple = DataFrame({"A": [1, 2, 3, 4], "B": [0, 1, 2, 3]})
+        assert_frame_equal(simple.drop("A", axis=1), simple[['B']])
+        assert_frame_equal(simple.drop(["A", "B"], axis='columns'),
+                           simple[[]])
+        assert_frame_equal(simple.drop([0, 1, 3], axis=0), simple.ix[[2], :])
+        assert_frame_equal(simple.drop(
+            [0, 3], axis='index'), simple.ix[[1, 2], :])
+
+        self.assertRaises(ValueError, simple.drop, 5)
+        self.assertRaises(ValueError, simple.drop, 'C', 1)
+        self.assertRaises(ValueError, simple.drop, [1, 5])
+        self.assertRaises(ValueError, simple.drop, ['A', 'C'], 1)
+
+        # errors = 'ignore'
+        assert_frame_equal(simple.drop(5, errors='ignore'), simple)
+        assert_frame_equal(simple.drop([0, 5], errors='ignore'),
+                           simple.ix[[1, 2, 3], :])
+        assert_frame_equal(simple.drop('C', axis=1, errors='ignore'), simple)
+        assert_frame_equal(simple.drop(['A', 'C'], axis=1, errors='ignore'),
+                           simple[['B']])
+
+        # non-unique - wheee!
+        nu_df = DataFrame(lzip(range(3), range(-3, 1), list('abc')),
+                          columns=['a', 'a', 'b'])
+        assert_frame_equal(nu_df.drop('a', axis=1), nu_df[['b']])
+        assert_frame_equal(nu_df.drop('b', axis='columns'), nu_df['a'])
+
+        nu_df = nu_df.set_index(pd.Index(['X', 'Y', 'X']))
+        nu_df.columns = list('abc')
+        assert_frame_equal(nu_df.drop('X', axis='rows'), nu_df.ix[["Y"], :])
+        assert_frame_equal(nu_df.drop(['X', 'Y'], axis=0), nu_df.ix[[], :])
+
+        # inplace cache issue
+        # GH 5628
+        df = pd.DataFrame(np.random.randn(10, 3), columns=list('abc'))
+        expected = df[~(df.b > 0)]
+        df.drop(labels=df[df.b > 0].index, inplace=True)
+        assert_frame_equal(df, expected)
+
+    def test_reindex(self):
+        newFrame = self.frame.reindex(self.ts1.index)
+
+        for col in newFrame.columns:
+            for idx, val in compat.iteritems(newFrame[col]):
+                if idx in self.frame.index:
+                    if np.isnan(val):
+                        self.assertTrue(np.isnan(self.frame[col][idx]))
+                    else:
+                        self.assertEqual(val, self.frame[col][idx])
+                else:
+                    self.assertTrue(np.isnan(val))
+
+        for col, series in compat.iteritems(newFrame):
+            self.assertTrue(tm.equalContents(series.index, newFrame.index))
+        emptyFrame = self.frame.reindex(Index([]))
+        self.assertEqual(len(emptyFrame.index), 0)
+
+        # Cython code should be unit-tested directly
+        nonContigFrame = self.frame.reindex(self.ts1.index[::2])
+
+        for col in nonContigFrame.columns:
+            for idx, val in compat.iteritems(nonContigFrame[col]):
+                if idx in self.frame.index:
+                    if np.isnan(val):
+                        self.assertTrue(np.isnan(self.frame[col][idx]))
+                    else:
+                        self.assertEqual(val, self.frame[col][idx])
+                else:
+                    self.assertTrue(np.isnan(val))
+
+        for col, series in compat.iteritems(nonContigFrame):
+            self.assertTrue(tm.equalContents(series.index,
+                                             nonContigFrame.index))
+
+        # corner cases
+
+        # Same index, copies values but not index if copy=False
+        newFrame = self.frame.reindex(self.frame.index, copy=False)
+        self.assertIs(newFrame.index, self.frame.index)
+
+        # length zero
+        newFrame = self.frame.reindex([])
+        self.assertTrue(newFrame.empty)
+        self.assertEqual(len(newFrame.columns), len(self.frame.columns))
+
+        # length zero with columns reindexed with non-empty index
+        newFrame = self.frame.reindex([])
+        newFrame = newFrame.reindex(self.frame.index)
+        self.assertEqual(len(newFrame.index), len(self.frame.index))
+        self.assertEqual(len(newFrame.columns), len(self.frame.columns))
+
+        # pass non-Index
+        newFrame = self.frame.reindex(list(self.ts1.index))
+        self.assertTrue(newFrame.index.equals(self.ts1.index))
+
+        # copy with no axes
+        result = self.frame.reindex()
+        assert_frame_equal(result, self.frame)
+        self.assertFalse(result is self.frame)
+
+    def test_reindex_nan(self):
+        df = pd.DataFrame([[1, 2], [3, 5], [7, 11], [9, 23]],
+                          index=[2, np.nan, 1, 5],
+                          columns=['joe', 'jim'])
+
+        i, j = [np.nan, 5, 5, np.nan, 1, 2, np.nan], [1, 3, 3, 1, 2, 0, 1]
+        assert_frame_equal(df.reindex(i), df.iloc[j])
+
+        df.index = df.index.astype('object')
+        assert_frame_equal(df.reindex(i), df.iloc[j], check_index_type=False)
+
+        # GH10388
+        df = pd.DataFrame({'other': ['a', 'b', np.nan, 'c'],
+                           'date': ['2015-03-22', np.nan,
+                                    '2012-01-08', np.nan],
+                           'amount': [2, 3, 4, 5]})
+
+        df['date'] = pd.to_datetime(df.date)
+        df['delta'] = (pd.to_datetime('2015-06-18') - df['date']).shift(1)
+
+        left = df.set_index(['delta', 'other', 'date']).reset_index()
+        right = df.reindex(columns=['delta', 'other', 'date', 'amount'])
+        assert_frame_equal(left, right)
+
+    def test_reindex_name_remains(self):
+        s = Series(random.rand(10))
+        df = DataFrame(s, index=np.arange(len(s)))
+        i = Series(np.arange(10), name='iname')
+
+        df = df.reindex(i)
+        self.assertEqual(df.index.name, 'iname')
+
+        df = df.reindex(Index(np.arange(10), name='tmpname'))
+        self.assertEqual(df.index.name, 'tmpname')
+
+        s = Series(random.rand(10))
+        df = DataFrame(s.T, index=np.arange(len(s)))
+        i = Series(np.arange(10), name='iname')
+        df = df.reindex(columns=i)
+        self.assertEqual(df.columns.name, 'iname')
+
+    def test_reindex_int(self):
+        smaller = self.intframe.reindex(self.intframe.index[::2])
+
+        self.assertEqual(smaller['A'].dtype, np.int64)
+
+        bigger = smaller.reindex(self.intframe.index)
+        self.assertEqual(bigger['A'].dtype, np.float64)
+
+        smaller = self.intframe.reindex(columns=['A', 'B'])
+        self.assertEqual(smaller['A'].dtype, np.int64)
+
+    def test_reindex_like(self):
+        other = self.frame.reindex(index=self.frame.index[:10],
+                                   columns=['C', 'B'])
+
+        assert_frame_equal(other, self.frame.reindex_like(other))
+
+    def test_reindex_columns(self):
+        newFrame = self.frame.reindex(columns=['A', 'B', 'E'])
+
+        assert_series_equal(newFrame['B'], self.frame['B'])
+        self.assertTrue(np.isnan(newFrame['E']).all())
+        self.assertNotIn('C', newFrame)
+
+        # length zero
+        newFrame = self.frame.reindex(columns=[])
+        self.assertTrue(newFrame.empty)
+
+    def test_reindex_axes(self):
+        # GH 3317, reindexing by both axes loses freq of the index
+        df = DataFrame(np.ones((3, 3)),
+                       index=[datetime(2012, 1, 1),
+                              datetime(2012, 1, 2),
+                              datetime(2012, 1, 3)],
+                       columns=['a', 'b', 'c'])
+        time_freq = date_range('2012-01-01', '2012-01-03', freq='d')
+        some_cols = ['a', 'b']
+
+        index_freq = df.reindex(index=time_freq).index.freq
+        both_freq = df.reindex(index=time_freq, columns=some_cols).index.freq
+        seq_freq = df.reindex(index=time_freq).reindex(
+            columns=some_cols).index.freq
+        self.assertEqual(index_freq, both_freq)
+        self.assertEqual(index_freq, seq_freq)
+
+    def test_reindex_fill_value(self):
+        df = DataFrame(np.random.randn(10, 4))
+
+        # axis=0
+        result = df.reindex(lrange(15))
+        self.assertTrue(np.isnan(result.values[-5:]).all())
+
+        result = df.reindex(lrange(15), fill_value=0)
+        expected = df.reindex(lrange(15)).fillna(0)
+        assert_frame_equal(result, expected)
+
+        # axis=1
+        result = df.reindex(columns=lrange(5), fill_value=0.)
+        expected = df.copy()
+        expected[4] = 0.
+        assert_frame_equal(result, expected)
+
+        result = df.reindex(columns=lrange(5), fill_value=0)
+        expected = df.copy()
+        expected[4] = 0
+        assert_frame_equal(result, expected)
+
+        result = df.reindex(columns=lrange(5), fill_value='foo')
+        expected = df.copy()
+        expected[4] = 'foo'
+        assert_frame_equal(result, expected)
+
+        # reindex_axis
+        result = df.reindex_axis(lrange(15), fill_value=0., axis=0)
+        expected = df.reindex(lrange(15)).fillna(0)
+        assert_frame_equal(result, expected)
+
+        result = df.reindex_axis(lrange(5), fill_value=0., axis=1)
+        expected = df.reindex(columns=lrange(5)).fillna(0)
+        assert_frame_equal(result, expected)
+
+        # other dtypes
+        df['foo'] = 'foo'
+        result = df.reindex(lrange(15), fill_value=0)
+        expected = df.reindex(lrange(15)).fillna(0)
+        assert_frame_equal(result, expected)
+
+    def test_reindex_dups(self):
+
+        # GH4746, reindex on duplicate index error messages
+        arr = np.random.randn(10)
+        df = DataFrame(arr, index=[1, 2, 3, 4, 5, 1, 2, 3, 4, 5])
+
+        # set index is ok
+        result = df.copy()
+        result.index = list(range(len(df)))
+        expected = DataFrame(arr, index=list(range(len(df))))
+        assert_frame_equal(result, expected)
+
+        # reindex fails
+        self.assertRaises(ValueError, df.reindex, index=list(range(len(df))))
+
+    def test_align(self):
+        af, bf = self.frame.align(self.frame)
+        self.assertIsNot(af._data, self.frame._data)
+
+        af, bf = self.frame.align(self.frame, copy=False)
+        self.assertIs(af._data, self.frame._data)
+
+        # axis = 0
+        other = self.frame.ix[:-5, :3]
+        af, bf = self.frame.align(other, axis=0, fill_value=-1)
+        self.assertTrue(bf.columns.equals(other.columns))
+        # test fill value
+        join_idx = self.frame.index.join(other.index)
+        diff_a = self.frame.index.difference(join_idx)
+        diff_b = other.index.difference(join_idx)
+        diff_a_vals = af.reindex(diff_a).values
+        diff_b_vals = bf.reindex(diff_b).values
+        self.assertTrue((diff_a_vals == -1).all())
+
+        af, bf = self.frame.align(other, join='right', axis=0)
+        self.assertTrue(bf.columns.equals(other.columns))
+        self.assertTrue(bf.index.equals(other.index))
+        self.assertTrue(af.index.equals(other.index))
+
+        # axis = 1
+        other = self.frame.ix[:-5, :3].copy()
+        af, bf = self.frame.align(other, axis=1)
+        self.assertTrue(bf.columns.equals(self.frame.columns))
+        self.assertTrue(bf.index.equals(other.index))
+
+        # test fill value
+        join_idx = self.frame.index.join(other.index)
+        diff_a = self.frame.index.difference(join_idx)
+        diff_b = other.index.difference(join_idx)
+        diff_a_vals = af.reindex(diff_a).values
+
+        # TODO(wesm): unused?
+        diff_b_vals = bf.reindex(diff_b).values  # noqa
+
+        self.assertTrue((diff_a_vals == -1).all())
+
+        af, bf = self.frame.align(other, join='inner', axis=1)
+        self.assertTrue(bf.columns.equals(other.columns))
+
+        af, bf = self.frame.align(other, join='inner', axis=1, method='pad')
+        self.assertTrue(bf.columns.equals(other.columns))
+
+        # test other non-float types
+        af, bf = self.intframe.align(other, join='inner', axis=1, method='pad')
+        self.assertTrue(bf.columns.equals(other.columns))
+
+        af, bf = self.mixed_frame.align(self.mixed_frame,
+                                        join='inner', axis=1, method='pad')
+        self.assertTrue(bf.columns.equals(self.mixed_frame.columns))
+
+        af, bf = self.frame.align(other.ix[:, 0], join='inner', axis=1,
+                                  method=None, fill_value=None)
+        self.assertTrue(bf.index.equals(Index([])))
+
+        af, bf = self.frame.align(other.ix[:, 0], join='inner', axis=1,
+                                  method=None, fill_value=0)
+        self.assertTrue(bf.index.equals(Index([])))
+
+        # mixed floats/ints
+        af, bf = self.mixed_float.align(other.ix[:, 0], join='inner', axis=1,
+                                        method=None, fill_value=0)
+        self.assertTrue(bf.index.equals(Index([])))
+
+        af, bf = self.mixed_int.align(other.ix[:, 0], join='inner', axis=1,
+                                      method=None, fill_value=0)
+        self.assertTrue(bf.index.equals(Index([])))
+
+        # try to align dataframe to series along bad axis
+        self.assertRaises(ValueError, self.frame.align, af.ix[0, :3],
+                          join='inner', axis=2)
+
+        # align dataframe to series with broadcast or not
+        idx = self.frame.index
+        s = Series(range(len(idx)), index=idx)
+
+        left, right = self.frame.align(s, axis=0)
+        tm.assert_index_equal(left.index, self.frame.index)
+        tm.assert_index_equal(right.index, self.frame.index)
+        self.assertTrue(isinstance(right, Series))
+
+        left, right = self.frame.align(s, broadcast_axis=1)
+        tm.assert_index_equal(left.index, self.frame.index)
+        expected = {}
+        for c in self.frame.columns:
+            expected[c] = s
+        expected = DataFrame(expected, index=self.frame.index,
+                             columns=self.frame.columns)
+        assert_frame_equal(right, expected)
+
+        # GH 9558
+        df = DataFrame({'a': [1, 2, 3], 'b': [4, 5, 6]})
+        result = df[df['a'] == 2]
+        expected = DataFrame([[2, 5]], index=[1], columns=['a', 'b'])
+        assert_frame_equal(result, expected)
+
+        result = df.where(df['a'] == 2, 0)
+        expected = DataFrame({'a': [0, 2, 0], 'b': [0, 5, 0]})
+        assert_frame_equal(result, expected)
+
+    def _check_align(self, a, b, axis, fill_axis, how, method, limit=None):
+        aa, ab = a.align(b, axis=axis, join=how, method=method, limit=limit,
+                         fill_axis=fill_axis)
+
+        join_index, join_columns = None, None
+
+        ea, eb = a, b
+        if axis is None or axis == 0:
+            join_index = a.index.join(b.index, how=how)
+            ea = ea.reindex(index=join_index)
+            eb = eb.reindex(index=join_index)
+
+        if axis is None or axis == 1:
+            join_columns = a.columns.join(b.columns, how=how)
+            ea = ea.reindex(columns=join_columns)
+            eb = eb.reindex(columns=join_columns)
+
+        ea = ea.fillna(axis=fill_axis, method=method, limit=limit)
+        eb = eb.fillna(axis=fill_axis, method=method, limit=limit)
+
+        assert_frame_equal(aa, ea)
+        assert_frame_equal(ab, eb)
+
+    def test_align_fill_method_inner(self):
+        for meth in ['pad', 'bfill']:
+            for ax in [0, 1, None]:
+                for fax in [0, 1]:
+                    self._check_align_fill('inner', meth, ax, fax)
+
+    def test_align_fill_method_outer(self):
+        for meth in ['pad', 'bfill']:
+            for ax in [0, 1, None]:
+                for fax in [0, 1]:
+                    self._check_align_fill('outer', meth, ax, fax)
+
+    def test_align_fill_method_left(self):
+        for meth in ['pad', 'bfill']:
+            for ax in [0, 1, None]:
+                for fax in [0, 1]:
+                    self._check_align_fill('left', meth, ax, fax)
+
+    def test_align_fill_method_right(self):
+        for meth in ['pad', 'bfill']:
+            for ax in [0, 1, None]:
+                for fax in [0, 1]:
+                    self._check_align_fill('right', meth, ax, fax)
+
+    def _check_align_fill(self, kind, meth, ax, fax):
+        left = self.frame.ix[0:4, :10]
+        right = self.frame.ix[2:, 6:]
+        empty = self.frame.ix[:0, :0]
+
+        self._check_align(left, right, axis=ax, fill_axis=fax,
+                          how=kind, method=meth)
+        self._check_align(left, right, axis=ax, fill_axis=fax,
+                          how=kind, method=meth, limit=1)
+
+        # empty left
+        self._check_align(empty, right, axis=ax, fill_axis=fax,
+                          how=kind, method=meth)
+        self._check_align(empty, right, axis=ax, fill_axis=fax,
+                          how=kind, method=meth, limit=1)
+
+        # empty right
+        self._check_align(left, empty, axis=ax, fill_axis=fax,
+                          how=kind, method=meth)
+        self._check_align(left, empty, axis=ax, fill_axis=fax,
+                          how=kind, method=meth, limit=1)
+
+        # both empty
+        self._check_align(empty, empty, axis=ax, fill_axis=fax,
+                          how=kind, method=meth)
+        self._check_align(empty, empty, axis=ax, fill_axis=fax,
+                          how=kind, method=meth, limit=1)
+
+    def test_align_int_fill_bug(self):
+        # GH #910
+        X = np.arange(10 * 10, dtype='float64').reshape(10, 10)
+        Y = np.ones((10, 1), dtype=int)
+
+        df1 = DataFrame(X)
+        df1['0.X'] = Y.squeeze()
+
+        df2 = df1.astype(float)
+
+        result = df1 - df1.mean()
+        expected = df2 - df2.mean()
+        assert_frame_equal(result, expected)
+
+    def test_align_multiindex(self):
+        # GH 10665
+        # same test cases as test_align_multiindex in test_series.py
+
+        midx = pd.MultiIndex.from_product([range(2), range(3), range(2)],
+                                          names=('a', 'b', 'c'))
+        idx = pd.Index(range(2), name='b')
+        df1 = pd.DataFrame(np.arange(12, dtype='int64'), index=midx)
+        df2 = pd.DataFrame(np.arange(2, dtype='int64'), index=idx)
+
+        # these must be the same results (but flipped)
+        res1l, res1r = df1.align(df2, join='left')
+        res2l, res2r = df2.align(df1, join='right')
+
+        expl = df1
+        assert_frame_equal(expl, res1l)
+        assert_frame_equal(expl, res2r)
+        expr = pd.DataFrame([0, 0, 1, 1, np.nan, np.nan] * 2, index=midx)
+        assert_frame_equal(expr, res1r)
+        assert_frame_equal(expr, res2l)
+
+        res1l, res1r = df1.align(df2, join='right')
+        res2l, res2r = df2.align(df1, join='left')
+
+        exp_idx = pd.MultiIndex.from_product([range(2), range(2), range(2)],
+                                             names=('a', 'b', 'c'))
+        expl = pd.DataFrame([0, 1, 2, 3, 6, 7, 8, 9], index=exp_idx)
+        assert_frame_equal(expl, res1l)
+        assert_frame_equal(expl, res2r)
+        expr = pd.DataFrame([0, 0, 1, 1] * 2, index=exp_idx)
+        assert_frame_equal(expr, res1r)
+        assert_frame_equal(expr, res2l)
+
+    def test_filter(self):
+        # items
+        filtered = self.frame.filter(['A', 'B', 'E'])
+        self.assertEqual(len(filtered.columns), 2)
+        self.assertNotIn('E', filtered)
+
+        filtered = self.frame.filter(['A', 'B', 'E'], axis='columns')
+        self.assertEqual(len(filtered.columns), 2)
+        self.assertNotIn('E', filtered)
+
+        # other axis
+        idx = self.frame.index[0:4]
+        filtered = self.frame.filter(idx, axis='index')
+        expected = self.frame.reindex(index=idx)
+        assert_frame_equal(filtered, expected)
+
+        # like
+        fcopy = self.frame.copy()
+        fcopy['AA'] = 1
+
+        filtered = fcopy.filter(like='A')
+        self.assertEqual(len(filtered.columns), 2)
+        self.assertIn('AA', filtered)
+
+        # like with ints in column names
+        df = DataFrame(0., index=[0, 1, 2], columns=[0, 1, '_A', '_B'])
+        filtered = df.filter(like='_')
+        self.assertEqual(len(filtered.columns), 2)
+
+        # regex with ints in column names
+        # from PR #10384
+        df = DataFrame(0., index=[0, 1, 2], columns=['A1', 1, 'B', 2, 'C'])
+        expected = DataFrame(
+            0., index=[0, 1, 2], columns=pd.Index([1,
2], dtype=object)) + filtered = df.filter(regex='^[0-9]+$') + assert_frame_equal(filtered, expected) + + expected = DataFrame(0., index=[0, 1, 2], columns=[0, '0', 1, '1']) + # shouldn't remove anything + filtered = expected.filter(regex='^[0-9]+$') + assert_frame_equal(filtered, expected) + + # pass in None + with assertRaisesRegexp(TypeError, 'Must pass'): + self.frame.filter(items=None) + + # objects + filtered = self.mixed_frame.filter(like='foo') + self.assertIn('foo', filtered) + + # unicode columns, won't ascii-encode + df = self.frame.rename(columns={'B': u('\u2202')}) + filtered = df.filter(like='C') + self.assertTrue('C' in filtered) + + def test_filter_regex_search(self): + fcopy = self.frame.copy() + fcopy['AA'] = 1 + + # regex + filtered = fcopy.filter(regex='[A]+') + self.assertEqual(len(filtered.columns), 2) + self.assertIn('AA', filtered) + + # doesn't have to be at beginning + df = DataFrame({'aBBa': [1, 2], + 'BBaBB': [1, 2], + 'aCCa': [1, 2], + 'aCCaBB': [1, 2]}) + + result = df.filter(regex='BB') + exp = df[[x for x in df.columns if 'BB' in x]] + assert_frame_equal(result, exp) + + def test_filter_corner(self): + empty = DataFrame() + + result = empty.filter([]) + assert_frame_equal(result, empty) + + result = empty.filter(like='foo') + assert_frame_equal(result, empty) + + def test_select(self): + f = lambda x: x.weekday() == 2 + result = self.tsframe.select(f, axis=0) + expected = self.tsframe.reindex( + index=self.tsframe.index[[f(x) for x in self.tsframe.index]]) + assert_frame_equal(result, expected) + + result = self.frame.select(lambda x: x in ('B', 'D'), axis=1) + expected = self.frame.reindex(columns=['B', 'D']) + + # TODO should reindex check_names? 
+ assert_frame_equal(result, expected, check_names=False) + + def test_take(self): + # homogeneous + order = [3, 1, 2, 0] + for df in [self.frame]: + + result = df.take(order, axis=0) + expected = df.reindex(df.index.take(order)) + assert_frame_equal(result, expected) + + # axis = 1 + result = df.take(order, axis=1) + expected = df.ix[:, ['D', 'B', 'C', 'A']] + assert_frame_equal(result, expected, check_names=False) + + # neg indices + order = [2, 1, -1] + for df in [self.frame]: + + result = df.take(order, axis=0) + expected = df.reindex(df.index.take(order)) + assert_frame_equal(result, expected) + + # axis = 1 + result = df.take(order, axis=1) + expected = df.ix[:, ['C', 'B', 'D']] + assert_frame_equal(result, expected, check_names=False) + + # illegal indices + self.assertRaises(IndexError, df.take, [3, 1, 2, 30], axis=0) + self.assertRaises(IndexError, df.take, [3, 1, 2, -31], axis=0) + self.assertRaises(IndexError, df.take, [3, 1, 2, 5], axis=1) + self.assertRaises(IndexError, df.take, [3, 1, 2, -5], axis=1) + + # mixed-dtype + order = [4, 1, 2, 0, 3] + for df in [self.mixed_frame]: + + result = df.take(order, axis=0) + expected = df.reindex(df.index.take(order)) + assert_frame_equal(result, expected) + + # axis = 1 + result = df.take(order, axis=1) + expected = df.ix[:, ['foo', 'B', 'C', 'A', 'D']] + assert_frame_equal(result, expected) + + # neg indices + order = [4, 1, -2] + for df in [self.mixed_frame]: + + result = df.take(order, axis=0) + expected = df.reindex(df.index.take(order)) + assert_frame_equal(result, expected) + + # axis = 1 + result = df.take(order, axis=1) + expected = df.ix[:, ['foo', 'B', 'D']] + assert_frame_equal(result, expected) + + # by dtype + order = [1, 2, 0, 3] + for df in [self.mixed_float, self.mixed_int]: + + result = df.take(order, axis=0) + expected = df.reindex(df.index.take(order)) + assert_frame_equal(result, expected) + + # axis = 1 + result = df.take(order, axis=1) + expected = df.ix[:, ['B', 'C', 'A', 'D']] + 
assert_frame_equal(result, expected) + + def test_reindex_boolean(self): + frame = DataFrame(np.ones((10, 2), dtype=bool), + index=np.arange(0, 20, 2), + columns=[0, 2]) + + reindexed = frame.reindex(np.arange(10)) + self.assertEqual(reindexed.values.dtype, np.object_) + self.assertTrue(isnull(reindexed[0][1])) + + reindexed = frame.reindex(columns=lrange(3)) + self.assertEqual(reindexed.values.dtype, np.object_) + self.assertTrue(isnull(reindexed[1]).all()) + + def test_reindex_objects(self): + reindexed = self.mixed_frame.reindex(columns=['foo', 'A', 'B']) + self.assertIn('foo', reindexed) + + reindexed = self.mixed_frame.reindex(columns=['A', 'B']) + self.assertNotIn('foo', reindexed) + + def test_reindex_corner(self): + index = Index(['a', 'b', 'c']) + dm = self.empty.reindex(index=[1, 2, 3]) + reindexed = dm.reindex(columns=index) + self.assertTrue(reindexed.columns.equals(index)) + + # ints are weird + + smaller = self.intframe.reindex(columns=['A', 'B', 'E']) + self.assertEqual(smaller['E'].dtype, np.float64) + + def test_reindex_axis(self): + cols = ['A', 'B', 'E'] + reindexed1 = self.intframe.reindex_axis(cols, axis=1) + reindexed2 = self.intframe.reindex(columns=cols) + assert_frame_equal(reindexed1, reindexed2) + + rows = self.intframe.index[0:5] + reindexed1 = self.intframe.reindex_axis(rows, axis=0) + reindexed2 = self.intframe.reindex(index=rows) + assert_frame_equal(reindexed1, reindexed2) + + self.assertRaises(ValueError, self.intframe.reindex_axis, rows, axis=2) + + # no-op case + cols = self.frame.columns.copy() + newFrame = self.frame.reindex_axis(cols, axis=1) + assert_frame_equal(newFrame, self.frame) + + def test_reindex_with_nans(self): + df = DataFrame([[1, 2], [3, 4], [np.nan, np.nan], [7, 8], [9, 10]], + columns=['a', 'b'], + index=[100.0, 101.0, np.nan, 102.0, 103.0]) + + result = df.reindex(index=[101.0, 102.0, 103.0]) + expected = df.iloc[[1, 3, 4]] + assert_frame_equal(result, expected) + + result = df.reindex(index=[103.0]) + expected 
= df.iloc[[4]] + assert_frame_equal(result, expected) + + result = df.reindex(index=[101.0]) + expected = df.iloc[[1]] + assert_frame_equal(result, expected) + + def test_reindex_multi(self): + df = DataFrame(np.random.randn(3, 3)) + + result = df.reindex(lrange(4), lrange(4)) + expected = df.reindex(lrange(4)).reindex(columns=lrange(4)) + + assert_frame_equal(result, expected) + + df = DataFrame(np.random.randint(0, 10, (3, 3))) + + result = df.reindex(lrange(4), lrange(4)) + expected = df.reindex(lrange(4)).reindex(columns=lrange(4)) + + assert_frame_equal(result, expected) + + df = DataFrame(np.random.randint(0, 10, (3, 3))) + + result = df.reindex(lrange(2), lrange(2)) + expected = df.reindex(lrange(2)).reindex(columns=lrange(2)) + + assert_frame_equal(result, expected) + + df = DataFrame(np.random.randn(5, 3) + 1j, columns=['a', 'b', 'c']) + + result = df.reindex(index=[0, 1], columns=['a', 'b']) + expected = df.reindex([0, 1]).reindex(columns=['a', 'b']) + + assert_frame_equal(result, expected) diff --git a/pandas/tests/frame/test_block_internals.py b/pandas/tests/frame/test_block_internals.py new file mode 100644 index 0000000000000..f337bf48c05ee --- /dev/null +++ b/pandas/tests/frame/test_block_internals.py @@ -0,0 +1,532 @@ +# -*- coding: utf-8 -*- + +from __future__ import print_function + +from datetime import datetime, timedelta +import itertools + +from numpy import nan +import numpy as np + +from pandas import (DataFrame, Series, Timestamp, date_range, compat, + option_context) +from pandas.compat import StringIO +import pandas as pd + +from pandas.util.testing import (assert_almost_equal, + assert_series_equal, + assert_frame_equal, + assertRaisesRegexp) + +import pandas.util.testing as tm + +from pandas.tests.frame.common import TestData + + +# Segregated collection of methods that require the BlockManager internal data +# structure + + +class TestDataFrameBlockInternals(tm.TestCase, TestData): + + _multiprocess_can_split_ = True + + def 
test_cast_internals(self): + casted = DataFrame(self.frame._data, dtype=int) + expected = DataFrame(self.frame._series, dtype=int) + assert_frame_equal(casted, expected) + + casted = DataFrame(self.frame._data, dtype=np.int32) + expected = DataFrame(self.frame._series, dtype=np.int32) + assert_frame_equal(casted, expected) + + def test_consolidate(self): + self.frame['E'] = 7. + consolidated = self.frame.consolidate() + self.assertEqual(len(consolidated._data.blocks), 1) + + # Ensure copy, do I want this? + recons = consolidated.consolidate() + self.assertIsNot(recons, consolidated) + assert_frame_equal(recons, consolidated) + + self.frame['F'] = 8. + self.assertEqual(len(self.frame._data.blocks), 3) + self.frame.consolidate(inplace=True) + self.assertEqual(len(self.frame._data.blocks), 1) + + def test_consolidate_inplace(self): + frame = self.frame.copy() # noqa + + # triggers in-place consolidation + for letter in range(ord('A'), ord('Z')): + self.frame[chr(letter)] = chr(letter) + + def test_as_matrix_consolidate(self): + self.frame['E'] = 7. + self.assertFalse(self.frame._data.is_consolidated()) + _ = self.frame.as_matrix() # noqa + self.assertTrue(self.frame._data.is_consolidated()) + + def test_modify_values(self): + self.frame.values[5] = 5 + self.assertTrue((self.frame.values[5] == 5).all()) + + # unconsolidated + self.frame['E'] = 7. + self.frame.values[6] = 6 + self.assertTrue((self.frame.values[6] == 6).all()) + + def test_boolean_set_uncons(self): + self.frame['E'] = 7. 
+ + expected = self.frame.values.copy() + expected[expected > 1] = 2 + + self.frame[self.frame > 1] = 2 + assert_almost_equal(expected, self.frame.values) + + def test_as_matrix_numeric_cols(self): + self.frame['foo'] = 'bar' + + values = self.frame.as_matrix(['A', 'B', 'C', 'D']) + self.assertEqual(values.dtype, np.float64) + + def test_as_matrix_lcd(self): + + # mixed lcd + values = self.mixed_float.as_matrix(['A', 'B', 'C', 'D']) + self.assertEqual(values.dtype, np.float64) + + values = self.mixed_float.as_matrix(['A', 'B', 'C']) + self.assertEqual(values.dtype, np.float32) + + values = self.mixed_float.as_matrix(['C']) + self.assertEqual(values.dtype, np.float16) + + values = self.mixed_int.as_matrix(['A', 'B', 'C', 'D']) + self.assertEqual(values.dtype, np.int64) + + values = self.mixed_int.as_matrix(['A', 'D']) + self.assertEqual(values.dtype, np.int64) + + # guess all ints are cast to uints.... + values = self.mixed_int.as_matrix(['A', 'B', 'C']) + self.assertEqual(values.dtype, np.int64) + + values = self.mixed_int.as_matrix(['A', 'C']) + self.assertEqual(values.dtype, np.int32) + + values = self.mixed_int.as_matrix(['C', 'D']) + self.assertEqual(values.dtype, np.int64) + + values = self.mixed_int.as_matrix(['A']) + self.assertEqual(values.dtype, np.int32) + + values = self.mixed_int.as_matrix(['C']) + self.assertEqual(values.dtype, np.uint8) + + def test_constructor_with_convert(self): + # this is actually mostly a test of lib.maybe_convert_objects + # #2845 + df = DataFrame({'A': [2 ** 63 - 1]}) + result = df['A'] + expected = Series(np.asarray([2 ** 63 - 1], np.int64), name='A') + assert_series_equal(result, expected) + + df = DataFrame({'A': [2 ** 63]}) + result = df['A'] + expected = Series(np.asarray([2 ** 63], np.object_), name='A') + assert_series_equal(result, expected) + + df = DataFrame({'A': [datetime(2005, 1, 1), True]}) + result = df['A'] + expected = Series(np.asarray([datetime(2005, 1, 1), True], np.object_), + name='A') + 
assert_series_equal(result, expected) + + df = DataFrame({'A': [None, 1]}) + result = df['A'] + expected = Series(np.asarray([np.nan, 1], np.float_), name='A') + assert_series_equal(result, expected) + + df = DataFrame({'A': [1.0, 2]}) + result = df['A'] + expected = Series(np.asarray([1.0, 2], np.float_), name='A') + assert_series_equal(result, expected) + + df = DataFrame({'A': [1.0 + 2.0j, 3]}) + result = df['A'] + expected = Series(np.asarray([1.0 + 2.0j, 3], np.complex_), name='A') + assert_series_equal(result, expected) + + df = DataFrame({'A': [1.0 + 2.0j, 3.0]}) + result = df['A'] + expected = Series(np.asarray([1.0 + 2.0j, 3.0], np.complex_), name='A') + assert_series_equal(result, expected) + + df = DataFrame({'A': [1.0 + 2.0j, True]}) + result = df['A'] + expected = Series(np.asarray([1.0 + 2.0j, True], np.object_), name='A') + assert_series_equal(result, expected) + + df = DataFrame({'A': [1.0, None]}) + result = df['A'] + expected = Series(np.asarray([1.0, np.nan], np.float_), name='A') + assert_series_equal(result, expected) + + df = DataFrame({'A': [1.0 + 2.0j, None]}) + result = df['A'] + expected = Series(np.asarray( + [1.0 + 2.0j, np.nan], np.complex_), name='A') + assert_series_equal(result, expected) + + df = DataFrame({'A': [2.0, 1, True, None]}) + result = df['A'] + expected = Series(np.asarray( + [2.0, 1, True, None], np.object_), name='A') + assert_series_equal(result, expected) + + df = DataFrame({'A': [2.0, 1, datetime(2006, 1, 1), None]}) + result = df['A'] + expected = Series(np.asarray([2.0, 1, datetime(2006, 1, 1), + None], np.object_), name='A') + assert_series_equal(result, expected) + + def test_construction_with_mixed(self): + # test construction edge cases with mixed types + + # f7u12, this does not work without extensive workaround + data = [[datetime(2001, 1, 5), nan, datetime(2001, 1, 2)], + [datetime(2000, 1, 2), datetime(2000, 1, 3), + datetime(2000, 1, 1)]] + df = DataFrame(data) + + # check dtypes + result = 
df.get_dtype_counts().sort_values() + expected = Series({'datetime64[ns]': 3}) + assert_series_equal(result, expected) + + # mixed-type frames + self.mixed_frame['datetime'] = datetime.now() + self.mixed_frame['timedelta'] = timedelta(days=1, seconds=1) + self.assertEqual(self.mixed_frame['datetime'].dtype, 'M8[ns]') + self.assertEqual(self.mixed_frame['timedelta'].dtype, 'm8[ns]') + result = self.mixed_frame.get_dtype_counts().sort_values() + expected = Series({'float64': 4, + 'object': 1, + 'datetime64[ns]': 1, + 'timedelta64[ns]': 1}).sort_values() + assert_series_equal(result, expected) + + def test_construction_with_conversions(self): + + # convert from a numpy array of non-ns timedelta64 + arr = np.array([1, 2, 3], dtype='timedelta64[s]') + s = Series(arr) + expected = Series(pd.timedelta_range('00:00:01', periods=3, freq='s')) + assert_series_equal(s, expected) + + df = DataFrame(index=range(3)) + df['A'] = arr + expected = DataFrame({'A': pd.timedelta_range('00:00:01', periods=3, + freq='s')}, + index=range(3)) + assert_frame_equal(df, expected) + + # convert from a numpy array of non-ns datetime64 + # note that creating a numpy datetime64 is in LOCAL time!!!! 
+ # seems to work for M8[D], but not for M8[s] + + s = Series(np.array(['2013-01-01', '2013-01-02', + '2013-01-03'], dtype='datetime64[D]')) + assert_series_equal(s, Series(date_range('20130101', periods=3, + freq='D'))) + + # s = Series(np.array(['2013-01-01 00:00:01','2013-01-01 + # 00:00:02','2013-01-01 00:00:03'],dtype='datetime64[s]')) + + # assert_series_equal(s,date_range('20130101 + # 00:00:01',period=3,freq='s')) + + expected = DataFrame({ + 'dt1': Timestamp('20130101'), + 'dt2': date_range('20130101', periods=3), + # 'dt3' : date_range('20130101 00:00:01',periods=3,freq='s'), + }, index=range(3)) + + df = DataFrame(index=range(3)) + df['dt1'] = np.datetime64('2013-01-01') + df['dt2'] = np.array(['2013-01-01', '2013-01-02', '2013-01-03'], + dtype='datetime64[D]') + + # df['dt3'] = np.array(['2013-01-01 00:00:01','2013-01-01 + # 00:00:02','2013-01-01 00:00:03'],dtype='datetime64[s]') + + assert_frame_equal(df, expected) + + def test_constructor_compound_dtypes(self): + # GH 5191 + # compound dtypes should raise NotImplementedError + + def f(dtype): + data = list(itertools.repeat((datetime(2001, 1, 1), + "aa", 20), 9)) + return DataFrame(data=data, + columns=["A", "B", "C"], + dtype=dtype) + + self.assertRaises(NotImplementedError, f, + [("A", "datetime64[h]"), + ("B", "str"), + ("C", "int32")]) + + # these work (though results may be unexpected) + f('int64') + f('float64') + + # 10822 + # invalid error message on dt inference + if not compat.is_platform_windows(): + f('M8[ns]') + + def test_equals_different_blocks(self): + # GH 9330 + df0 = pd.DataFrame({"A": ["x", "y"], "B": [1, 2], + "C": ["w", "z"]}) + df1 = df0.reset_index()[["A", "B", "C"]] + # this assert verifies that the above operations have + # induced a block rearrangement + self.assertTrue(df0._data.blocks[0].dtype != + df1._data.blocks[0].dtype) + # do the real tests + assert_frame_equal(df0, df1) + self.assertTrue(df0.equals(df1)) + self.assertTrue(df1.equals(df0)) + + def 
test_copy_blocks(self): + # API/ENH 9607 + df = DataFrame(self.frame, copy=True) + column = df.columns[0] + + # use the default copy=True, change a column + blocks = df.as_blocks() + for dtype, _df in blocks.items(): + if column in _df: + _df.ix[:, column] = _df[column] + 1 + + # make sure we did not change the original DataFrame + self.assertFalse(_df[column].equals(df[column])) + + def test_no_copy_blocks(self): + # API/ENH 9607 + df = DataFrame(self.frame, copy=True) + column = df.columns[0] + + # use the copy=False, change a column + blocks = df.as_blocks(copy=False) + for dtype, _df in blocks.items(): + if column in _df: + _df.ix[:, column] = _df[column] + 1 + + # make sure we did change the original DataFrame + self.assertTrue(_df[column].equals(df[column])) + + def test_copy(self): + cop = self.frame.copy() + cop['E'] = cop['A'] + self.assertNotIn('E', self.frame) + + # copy objects + copy = self.mixed_frame.copy() + self.assertIsNot(copy._data, self.mixed_frame._data) + + def test_pickle(self): + unpickled = self.round_trip_pickle(self.mixed_frame) + assert_frame_equal(self.mixed_frame, unpickled) + + # buglet + self.mixed_frame._data.ndim + + # empty + unpickled = self.round_trip_pickle(self.empty) + repr(unpickled) + + # tz frame + unpickled = self.round_trip_pickle(self.tzframe) + assert_frame_equal(self.tzframe, unpickled) + + def test_consolidate_datetime64(self): + # numpy vstack bug + + data = """\ +starting,ending,measure +2012-06-21 00:00,2012-06-23 07:00,77 +2012-06-23 07:00,2012-06-23 16:30,65 +2012-06-23 16:30,2012-06-25 08:00,77 +2012-06-25 08:00,2012-06-26 12:00,0 +2012-06-26 12:00,2012-06-27 08:00,77 +""" + df = pd.read_csv(StringIO(data), parse_dates=[0, 1]) + + ser_starting = df.starting + ser_starting.index = ser_starting.values + ser_starting = ser_starting.tz_localize('US/Eastern') + ser_starting = ser_starting.tz_convert('UTC') + + ser_ending = df.ending + ser_ending.index = ser_ending.values + ser_ending = 
ser_ending.tz_localize('US/Eastern') + ser_ending = ser_ending.tz_convert('UTC') + + df.starting = ser_starting.index + df.ending = ser_ending.index + + tm.assert_index_equal(pd.DatetimeIndex( + df.starting), ser_starting.index) + tm.assert_index_equal(pd.DatetimeIndex(df.ending), ser_ending.index) + + def test_is_mixed_type(self): + self.assertFalse(self.frame._is_mixed_type) + self.assertTrue(self.mixed_frame._is_mixed_type) + + def test_get_numeric_data(self): + # TODO(wesm): unused? + intname = np.dtype(np.int_).name # noqa + floatname = np.dtype(np.float_).name # noqa + + datetime64name = np.dtype('M8[ns]').name + objectname = np.dtype(np.object_).name + + df = DataFrame({'a': 1., 'b': 2, 'c': 'foo', + 'f': Timestamp('20010102')}, + index=np.arange(10)) + result = df.get_dtype_counts() + expected = Series({'int64': 1, 'float64': 1, + datetime64name: 1, objectname: 1}) + result.sort_index() + expected.sort_index() + assert_series_equal(result, expected) + + df = DataFrame({'a': 1., 'b': 2, 'c': 'foo', + 'd': np.array([1.] 
* 10, dtype='float32'), + 'e': np.array([1] * 10, dtype='int32'), + 'f': np.array([1] * 10, dtype='int16'), + 'g': Timestamp('20010102')}, + index=np.arange(10)) + + result = df._get_numeric_data() + expected = df.ix[:, ['a', 'b', 'd', 'e', 'f']] + assert_frame_equal(result, expected) + + only_obj = df.ix[:, ['c', 'g']] + result = only_obj._get_numeric_data() + expected = df.ix[:, []] + assert_frame_equal(result, expected) + + df = DataFrame.from_dict( + {'a': [1, 2], 'b': ['foo', 'bar'], 'c': [np.pi, np.e]}) + result = df._get_numeric_data() + expected = DataFrame.from_dict({'a': [1, 2], 'c': [np.pi, np.e]}) + assert_frame_equal(result, expected) + + df = result.copy() + result = df._get_numeric_data() + expected = df + assert_frame_equal(result, expected) + + def test_convert_objects(self): + + oops = self.mixed_frame.T.T + converted = oops._convert(datetime=True) + assert_frame_equal(converted, self.mixed_frame) + self.assertEqual(converted['A'].dtype, np.float64) + + # force numeric conversion + self.mixed_frame['H'] = '1.' + self.mixed_frame['I'] = '1' + + # add in some items that will be nan + l = len(self.mixed_frame) + self.mixed_frame['J'] = '1.' 
+ self.mixed_frame['K'] = '1' + self.mixed_frame.ix[0:5, ['J', 'K']] = 'garbled' + converted = self.mixed_frame._convert(datetime=True, numeric=True) + self.assertEqual(converted['H'].dtype, 'float64') + self.assertEqual(converted['I'].dtype, 'int64') + self.assertEqual(converted['J'].dtype, 'float64') + self.assertEqual(converted['K'].dtype, 'float64') + self.assertEqual(len(converted['J'].dropna()), l - 5) + self.assertEqual(len(converted['K'].dropna()), l - 5) + + # via astype + converted = self.mixed_frame.copy() + converted['H'] = converted['H'].astype('float64') + converted['I'] = converted['I'].astype('int64') + self.assertEqual(converted['H'].dtype, 'float64') + self.assertEqual(converted['I'].dtype, 'int64') + + # via astype, but errors + converted = self.mixed_frame.copy() + with assertRaisesRegexp(ValueError, 'invalid literal'): + converted['H'].astype('int32') + + # mixed in a single column + df = DataFrame(dict(s=Series([1, 'na', 3, 4]))) + result = df._convert(datetime=True, numeric=True) + expected = DataFrame(dict(s=Series([1, np.nan, 3, 4]))) + assert_frame_equal(result, expected) + + def test_convert_objects_no_conversion(self): + mixed1 = DataFrame( + {'a': [1, 2, 3], 'b': [4.0, 5, 6], 'c': ['x', 'y', 'z']}) + mixed2 = mixed1._convert(datetime=True) + assert_frame_equal(mixed1, mixed2) + + def test_stale_cached_series_bug_473(self): + + # this is chained, but ok + with option_context('chained_assignment', None): + Y = DataFrame(np.random.random((4, 4)), index=('a', 'b', 'c', 'd'), + columns=('e', 'f', 'g', 'h')) + repr(Y) + Y['e'] = Y['e'].astype('object') + Y['g']['c'] = np.NaN + repr(Y) + result = Y.sum() # noqa + exp = Y['g'].sum() # noqa + self.assertTrue(pd.isnull(Y['g']['c'])) + + def test_get_X_columns(self): + # numeric and object columns + + df = DataFrame({'a': [1, 2, 3], + 'b': [True, False, True], + 'c': ['foo', 'bar', 'baz'], + 'd': [None, None, None], + 'e': [3.14, 0.577, 2.773]}) + + 
self.assert_numpy_array_equal(df._get_numeric_data().columns, + ['a', 'b', 'e']) + + def test_strange_column_corruption_issue(self): + # (wesm) Unclear how exactly this is related to internal matters + df = DataFrame(index=[0, 1]) + df[0] = nan + wasCol = {} + # uncommenting these makes the results match + # for col in xrange(100, 200): + # wasCol[col] = 1 + # df[col] = nan + + for i, dt in enumerate(df.index): + for col in range(100, 200): + if col not in wasCol: + wasCol[col] = 1 + df[col] = nan + df[col][dt] = i + + myid = 100 + + first = len(df.ix[pd.isnull(df[myid]), [myid]]) + second = len(df.ix[pd.isnull(df[myid]), [myid]]) + self.assertTrue(first == second == 0) diff --git a/pandas/tests/frame/test_combine_concat.py b/pandas/tests/frame/test_combine_concat.py new file mode 100644 index 0000000000000..77077440ea301 --- /dev/null +++ b/pandas/tests/frame/test_combine_concat.py @@ -0,0 +1,455 @@ +# -*- coding: utf-8 -*- + +from __future__ import print_function + +from datetime import datetime + +from numpy import nan +import numpy as np + +from pandas.compat import lrange +from pandas import DataFrame, Series, Index, Timestamp +import pandas as pd + +from pandas.util.testing import (assert_series_equal, + assert_frame_equal, + assertRaisesRegexp) + +import pandas.util.testing as tm + +from pandas.tests.frame.common import TestData + + +class TestDataFrameCombineConcat(tm.TestCase, TestData): + + _multiprocess_can_split_ = True + + def test_combine_first_mixed(self): + a = Series(['a', 'b'], index=lrange(2)) + b = Series(lrange(2), index=lrange(2)) + f = DataFrame({'A': a, 'B': b}) + + a = Series(['a', 'b'], index=lrange(5, 7)) + b = Series(lrange(2), index=lrange(5, 7)) + g = DataFrame({'A': a, 'B': b}) + + # TODO(wesm): no verification? 
+ combined = f.combine_first(g) # noqa + + def test_combine_multiple_frames_dtypes(self): + + # GH 2759 + A = DataFrame(data=np.ones((10, 2)), columns=[ + 'foo', 'bar'], dtype=np.float64) + B = DataFrame(data=np.ones((10, 2)), dtype=np.float32) + results = pd.concat((A, B), axis=1).get_dtype_counts() + expected = Series(dict(float64=2, float32=2)) + assert_series_equal(results, expected) + + def test_append_series_dict(self): + df = DataFrame(np.random.randn(5, 4), + columns=['foo', 'bar', 'baz', 'qux']) + + series = df.ix[4] + with assertRaisesRegexp(ValueError, 'Indexes have overlapping values'): + df.append(series, verify_integrity=True) + series.name = None + with assertRaisesRegexp(TypeError, 'Can only append a Series if ' + 'ignore_index=True'): + df.append(series, verify_integrity=True) + + result = df.append(series[::-1], ignore_index=True) + expected = df.append(DataFrame({0: series[::-1]}, index=df.columns).T, + ignore_index=True) + assert_frame_equal(result, expected) + + # dict + result = df.append(series.to_dict(), ignore_index=True) + assert_frame_equal(result, expected) + + result = df.append(series[::-1][:3], ignore_index=True) + expected = df.append(DataFrame({0: series[::-1][:3]}).T, + ignore_index=True) + assert_frame_equal(result, expected.ix[:, result.columns]) + + # can append when name set + row = df.ix[4] + row.name = 5 + result = df.append(row) + expected = df.append(df[-1:], ignore_index=True) + assert_frame_equal(result, expected) + + def test_append_list_of_series_dicts(self): + df = DataFrame(np.random.randn(5, 4), + columns=['foo', 'bar', 'baz', 'qux']) + + dicts = [x.to_dict() for idx, x in df.iterrows()] + + result = df.append(dicts, ignore_index=True) + expected = df.append(df, ignore_index=True) + assert_frame_equal(result, expected) + + # different columns + dicts = [{'foo': 1, 'bar': 2, 'baz': 3, 'peekaboo': 4}, + {'foo': 5, 'bar': 6, 'baz': 7, 'peekaboo': 8}] + result = df.append(dicts, ignore_index=True) + expected = 
df.append(DataFrame(dicts), ignore_index=True) + assert_frame_equal(result, expected) + + def test_append_empty_dataframe(self): + + # Empty df append empty df + df1 = DataFrame([]) + df2 = DataFrame([]) + result = df1.append(df2) + expected = df1.copy() + assert_frame_equal(result, expected) + + # Non-empty df append empty df + df1 = DataFrame(np.random.randn(5, 2)) + df2 = DataFrame() + result = df1.append(df2) + expected = df1.copy() + assert_frame_equal(result, expected) + + # Empty df with columns append empty df + df1 = DataFrame(columns=['bar', 'foo']) + df2 = DataFrame() + result = df1.append(df2) + expected = df1.copy() + assert_frame_equal(result, expected) + + # Non-Empty df with columns append empty df + df1 = DataFrame(np.random.randn(5, 2), columns=['bar', 'foo']) + df2 = DataFrame() + result = df1.append(df2) + expected = df1.copy() + assert_frame_equal(result, expected) + + def test_append_dtypes(self): + + # GH 5754 + # row appends of different dtypes (so need to do by-item) + # can sometimes infer the correct type + + df1 = DataFrame({'bar': Timestamp('20130101')}, index=lrange(5)) + df2 = DataFrame() + result = df1.append(df2) + expected = df1.copy() + assert_frame_equal(result, expected) + + df1 = DataFrame({'bar': Timestamp('20130101')}, index=lrange(1)) + df2 = DataFrame({'bar': 'foo'}, index=lrange(1, 2)) + result = df1.append(df2) + expected = DataFrame({'bar': [Timestamp('20130101'), 'foo']}) + assert_frame_equal(result, expected) + + df1 = DataFrame({'bar': Timestamp('20130101')}, index=lrange(1)) + df2 = DataFrame({'bar': np.nan}, index=lrange(1, 2)) + result = df1.append(df2) + expected = DataFrame( + {'bar': Series([Timestamp('20130101'), np.nan], dtype='M8[ns]')}) + assert_frame_equal(result, expected) + + df1 = DataFrame({'bar': Timestamp('20130101')}, index=lrange(1)) + df2 = DataFrame({'bar': np.nan}, index=lrange(1, 2), dtype=object) + result = df1.append(df2) + expected = DataFrame( + {'bar': Series([Timestamp('20130101'), 
np.nan], dtype='M8[ns]')}) + assert_frame_equal(result, expected) + + df1 = DataFrame({'bar': np.nan}, index=lrange(1)) + df2 = DataFrame({'bar': Timestamp('20130101')}, index=lrange(1, 2)) + result = df1.append(df2) + expected = DataFrame( + {'bar': Series([np.nan, Timestamp('20130101')], dtype='M8[ns]')}) + assert_frame_equal(result, expected) + + df1 = DataFrame({'bar': Timestamp('20130101')}, index=lrange(1)) + df2 = DataFrame({'bar': 1}, index=lrange(1, 2), dtype=object) + result = df1.append(df2) + expected = DataFrame({'bar': Series([Timestamp('20130101'), 1])}) + assert_frame_equal(result, expected) + + def test_combine_first(self): + # disjoint + head, tail = self.frame[:5], self.frame[5:] + + combined = head.combine_first(tail) + reordered_frame = self.frame.reindex(combined.index) + assert_frame_equal(combined, reordered_frame) + self.assertTrue(tm.equalContents(combined.columns, self.frame.columns)) + assert_series_equal(combined['A'], reordered_frame['A']) + + # same index + fcopy = self.frame.copy() + fcopy['A'] = 1 + del fcopy['C'] + + fcopy2 = self.frame.copy() + fcopy2['B'] = 0 + del fcopy2['D'] + + combined = fcopy.combine_first(fcopy2) + + self.assertTrue((combined['A'] == 1).all()) + assert_series_equal(combined['B'], fcopy['B']) + assert_series_equal(combined['C'], fcopy2['C']) + assert_series_equal(combined['D'], fcopy['D']) + + # overlap + head, tail = reordered_frame[:10].copy(), reordered_frame + head['A'] = 1 + + combined = head.combine_first(tail) + self.assertTrue((combined['A'][:10] == 1).all()) + + # reverse overlap + tail['A'][:10] = 0 + combined = tail.combine_first(head) + self.assertTrue((combined['A'][:10] == 0).all()) + + # no overlap + f = self.frame[:10] + g = self.frame[10:] + combined = f.combine_first(g) + assert_series_equal(combined['A'].reindex(f.index), f['A']) + assert_series_equal(combined['A'].reindex(g.index), g['A']) + + # corner cases + comb = self.frame.combine_first(self.empty) + assert_frame_equal(comb, 
self.frame) + + comb = self.empty.combine_first(self.frame) + assert_frame_equal(comb, self.frame) + + comb = self.frame.combine_first(DataFrame(index=["faz", "boo"])) + self.assertTrue("faz" in comb.index) + + # #2525 + df = DataFrame({'a': [1]}, index=[datetime(2012, 1, 1)]) + df2 = DataFrame({}, columns=['b']) + result = df.combine_first(df2) + self.assertTrue('b' in result) + + def test_combine_first_mixed_bug(self): + idx = Index(['a', 'b', 'c', 'e']) + ser1 = Series([5.0, -9.0, 4.0, 100.], index=idx) + ser2 = Series(['a', 'b', 'c', 'e'], index=idx) + ser3 = Series([12, 4, 5, 97], index=idx) + + frame1 = DataFrame({"col0": ser1, + "col2": ser2, + "col3": ser3}) + + idx = Index(['a', 'b', 'c', 'f']) + ser1 = Series([5.0, -9.0, 4.0, 100.], index=idx) + ser2 = Series(['a', 'b', 'c', 'f'], index=idx) + ser3 = Series([12, 4, 5, 97], index=idx) + + frame2 = DataFrame({"col1": ser1, + "col2": ser2, + "col5": ser3}) + + combined = frame1.combine_first(frame2) + self.assertEqual(len(combined.columns), 5) + + # gh 3016 (same as in update) + df = DataFrame([[1., 2., False, True], [4., 5., True, False]], + columns=['A', 'B', 'bool1', 'bool2']) + + other = DataFrame([[45, 45]], index=[0], columns=['A', 'B']) + result = df.combine_first(other) + assert_frame_equal(result, df) + + df.ix[0, 'A'] = np.nan + result = df.combine_first(other) + df.ix[0, 'A'] = 45 + assert_frame_equal(result, df) + + # doc example + df1 = DataFrame({'A': [1., np.nan, 3., 5., np.nan], + 'B': [np.nan, 2., 3., np.nan, 6.]}) + + df2 = DataFrame({'A': [5., 2., 4., np.nan, 3., 7.], + 'B': [np.nan, np.nan, 3., 4., 6., 8.]}) + + result = df1.combine_first(df2) + expected = DataFrame( + {'A': [1, 2, 3, 5, 3, 7.], 'B': [np.nan, 2, 3, 4, 6, 8]}) + assert_frame_equal(result, expected) + + # GH3552, return object dtype with bools + df1 = DataFrame( + [[np.nan, 3., True], [-4.6, np.nan, True], [np.nan, 7., False]]) + df2 = DataFrame( + [[-42.6, np.nan, True], [-5., 1.6, False]], index=[1, 2]) + + result = 
df1.combine_first(df2)[2] + expected = Series([True, True, False], name=2) + assert_series_equal(result, expected) + + # GH 3593, converting datetime64[ns] incorrectly + df0 = DataFrame({"a": [datetime(2000, 1, 1), + datetime(2000, 1, 2), + datetime(2000, 1, 3)]}) + df1 = DataFrame({"a": [None, None, None]}) + df2 = df1.combine_first(df0) + assert_frame_equal(df2, df0) + + df2 = df0.combine_first(df1) + assert_frame_equal(df2, df0) + + df0 = DataFrame({"a": [datetime(2000, 1, 1), + datetime(2000, 1, 2), + datetime(2000, 1, 3)]}) + df1 = DataFrame({"a": [datetime(2000, 1, 2), None, None]}) + df2 = df1.combine_first(df0) + result = df0.copy() + result.iloc[0, :] = df1.iloc[0, :] + assert_frame_equal(df2, result) + + df2 = df0.combine_first(df1) + assert_frame_equal(df2, df0) + + def test_update(self): + df = DataFrame([[1.5, nan, 3.], + [1.5, nan, 3.], + [1.5, nan, 3], + [1.5, nan, 3]]) + + other = DataFrame([[3.6, 2., np.nan], + [np.nan, np.nan, 7]], index=[1, 3]) + + df.update(other) + + expected = DataFrame([[1.5, nan, 3], + [3.6, 2, 3], + [1.5, nan, 3], + [1.5, nan, 7.]]) + assert_frame_equal(df, expected) + + def test_update_dtypes(self): + + # gh 3016 + df = DataFrame([[1., 2., False, True], [4., 5., True, False]], + columns=['A', 'B', 'bool1', 'bool2']) + + other = DataFrame([[45, 45]], index=[0], columns=['A', 'B']) + df.update(other) + + expected = DataFrame([[45., 45., False, True], [4., 5., True, False]], + columns=['A', 'B', 'bool1', 'bool2']) + assert_frame_equal(df, expected) + + def test_update_nooverwrite(self): + df = DataFrame([[1.5, nan, 3.], + [1.5, nan, 3.], + [1.5, nan, 3], + [1.5, nan, 3]]) + + other = DataFrame([[3.6, 2., np.nan], + [np.nan, np.nan, 7]], index=[1, 3]) + + df.update(other, overwrite=False) + + expected = DataFrame([[1.5, nan, 3], + [1.5, 2, 3], + [1.5, nan, 3], + [1.5, nan, 3.]]) + assert_frame_equal(df, expected) + + def test_update_filtered(self): + df = DataFrame([[1.5, nan, 3.], + [1.5, nan, 3.], + [1.5, nan, 3], + [1.5,
nan, 3]]) + + other = DataFrame([[3.6, 2., np.nan], + [np.nan, np.nan, 7]], index=[1, 3]) + + df.update(other, filter_func=lambda x: x > 2) + + expected = DataFrame([[1.5, nan, 3], + [1.5, nan, 3], + [1.5, nan, 3], + [1.5, nan, 7.]]) + assert_frame_equal(df, expected) + + def test_update_raise(self): + df = DataFrame([[1.5, 1, 3.], + [1.5, nan, 3.], + [1.5, nan, 3], + [1.5, nan, 3]]) + + other = DataFrame([[2., nan], + [nan, 7]], index=[1, 3], columns=[1, 2]) + with assertRaisesRegexp(ValueError, "Data overlaps"): + df.update(other, raise_conflict=True) + + def test_update_from_non_df(self): + d = {'a': Series([1, 2, 3, 4]), 'b': Series([5, 6, 7, 8])} + df = DataFrame(d) + + d['a'] = Series([5, 6, 7, 8]) + df.update(d) + + expected = DataFrame(d) + + assert_frame_equal(df, expected) + + d = {'a': [1, 2, 3, 4], 'b': [5, 6, 7, 8]} + df = DataFrame(d) + + d['a'] = [5, 6, 7, 8] + df.update(d) + + expected = DataFrame(d) + + assert_frame_equal(df, expected) + + def test_join_str_datetime(self): + str_dates = ['20120209', '20120222'] + dt_dates = [datetime(2012, 2, 9), datetime(2012, 2, 22)] + + A = DataFrame(str_dates, index=lrange(2), columns=['aa']) + C = DataFrame([[1, 2], [3, 4]], index=str_dates, columns=dt_dates) + + tst = A.join(C, on='aa') + + self.assertEqual(len(tst.columns), 3) + + def test_join_multiindex_leftright(self): + # GH 10741 + df1 = (pd.DataFrame([['a', 'x', 0.471780], ['a', 'y', 0.774908], + ['a', 'z', 0.563634], ['b', 'x', -0.353756], + ['b', 'y', 0.368062], ['b', 'z', -1.721840], + ['c', 'x', 1], ['c', 'y', 2], ['c', 'z', 3]], + columns=['first', 'second', 'value1']) + .set_index(['first', 'second'])) + + df2 = (pd.DataFrame([['a', 10], ['b', 20]], + columns=['first', 'value2']) + .set_index(['first'])) + + exp = pd.DataFrame([[0.471780, 10], [0.774908, 10], [0.563634, 10], + [-0.353756, 20], [0.368062, 20], + [-1.721840, 20], + [1.000000, np.nan], [2.000000, np.nan], + [3.000000, np.nan]], + index=df1.index, columns=['value1', 'value2']) + + # 
these must be the same results (but columns are flipped) + assert_frame_equal(df1.join(df2, how='left'), exp) + assert_frame_equal(df2.join(df1, how='right'), + exp[['value2', 'value1']]) + + exp_idx = pd.MultiIndex.from_product([['a', 'b'], ['x', 'y', 'z']], + names=['first', 'second']) + exp = pd.DataFrame([[0.471780, 10], [0.774908, 10], [0.563634, 10], + [-0.353756, 20], [0.368062, 20], [-1.721840, 20]], + index=exp_idx, columns=['value1', 'value2']) + + assert_frame_equal(df1.join(df2, how='right'), exp) + assert_frame_equal(df2.join(df1, how='left'), + exp[['value2', 'value1']]) diff --git a/pandas/tests/frame/test_constructors.py b/pandas/tests/frame/test_constructors.py new file mode 100644 index 0000000000000..87c263e129361 --- /dev/null +++ b/pandas/tests/frame/test_constructors.py @@ -0,0 +1,1997 @@ +# -*- coding: utf-8 -*- + +from __future__ import print_function + +from datetime import datetime, timedelta +import functools +import itertools + +import nose + +from numpy.random import randn + +import numpy as np +import numpy.ma as ma +import numpy.ma.mrecords as mrecords + +from pandas.compat import (lmap, long, zip, range, lrange, lzip, + OrderedDict) +from pandas import compat +from pandas import (DataFrame, Index, Series, notnull, isnull, + MultiIndex, Timedelta, Timestamp, + date_range) +from pandas.util.misc import is_little_endian +from pandas.core.common import PandasError +import pandas as pd +import pandas.core.common as com +import pandas.lib as lib + +from pandas.core.dtypes import DatetimeTZDtype + +from pandas.util.testing import (assert_almost_equal, + assert_numpy_array_equal, + assert_series_equal, + assert_frame_equal, + assertRaisesRegexp) + +import pandas.util.testing as tm + +from pandas.tests.frame.common import TestData + + +MIXED_FLOAT_DTYPES = ['float16', 'float32', 'float64'] +MIXED_INT_DTYPES = ['uint8', 'uint16', 'uint32', 'uint64', 'int8', 'int16', + 'int32', 'int64'] + + +class TestDataFrameConstructors(tm.TestCase, 
TestData): + + _multiprocess_can_split_ = True + + def test_constructor(self): + df = DataFrame() + self.assertEqual(len(df.index), 0) + + df = DataFrame(data={}) + self.assertEqual(len(df.index), 0) + + def test_constructor_mixed(self): + index, data = tm.getMixedTypeDict() + + # TODO(wesm), incomplete test? + indexed_frame = DataFrame(data, index=index) # noqa + unindexed_frame = DataFrame(data) # noqa + + self.assertEqual(self.mixed_frame['foo'].dtype, np.object_) + + def test_constructor_cast_failure(self): + foo = DataFrame({'a': ['a', 'b', 'c']}, dtype=np.float64) + self.assertEqual(foo['a'].dtype, object) + + # GH 3010, constructing with odd arrays + df = DataFrame(np.ones((4, 2))) + + # this is ok + df['foo'] = np.ones((4, 2)).tolist() + + # this is not ok + self.assertRaises(ValueError, df.__setitem__, tuple(['test']), + np.ones((4, 2))) + + # this is ok + df['foo2'] = np.ones((4, 2)).tolist() + + def test_constructor_dtype_copy(self): + orig_df = DataFrame({ + 'col1': [1.], + 'col2': [2.], + 'col3': [3.]}) + + new_df = pd.DataFrame(orig_df, dtype=float, copy=True) + + new_df['col1'] = 200. + self.assertEqual(orig_df['col1'][0], 1.) 
+ + def test_constructor_dtype_nocast_view(self): + df = DataFrame([[1, 2]]) + should_be_view = DataFrame(df, dtype=df[0].dtype) + should_be_view[0][0] = 99 + self.assertEqual(df.values[0, 0], 99) + + should_be_view = DataFrame(df.values, dtype=df[0].dtype) + should_be_view[0][0] = 97 + self.assertEqual(df.values[0, 0], 97) + + def test_constructor_dtype_list_data(self): + df = DataFrame([[1, '2'], + [None, 'a']], dtype=object) + self.assertIsNone(df.ix[1, 0]) + self.assertEqual(df.ix[0, 1], '2') + + def test_constructor_list_frames(self): + + # GH 3243 + result = DataFrame([DataFrame([])]) + self.assertEqual(result.shape, (1, 0)) + + result = DataFrame([DataFrame(dict(A=lrange(5)))]) + tm.assertIsInstance(result.iloc[0, 0], DataFrame) + + def test_constructor_mixed_dtypes(self): + + def _make_mixed_dtypes_df(typ, ad=None): + + if typ == 'int': + dtypes = MIXED_INT_DTYPES + arrays = [np.array(np.random.rand(10), dtype=d) + for d in dtypes] + elif typ == 'float': + dtypes = MIXED_FLOAT_DTYPES + arrays = [np.array(np.random.randint( + 10, size=10), dtype=d) for d in dtypes] + + zipper = lzip(dtypes, arrays) + for d, a in zipper: + assert(a.dtype == d) + if ad is None: + ad = dict() + ad.update(dict([(d, a) for d, a in zipper])) + return DataFrame(ad) + + def _check_mixed_dtypes(df, dtypes=None): + if dtypes is None: + dtypes = MIXED_FLOAT_DTYPES + MIXED_INT_DTYPES + for d in dtypes: + if d in df: + assert(df.dtypes[d] == d) + + # mixed floating and integer coexist in the same frame + df = _make_mixed_dtypes_df('float') + _check_mixed_dtypes(df) + + # add lots of types + df = _make_mixed_dtypes_df('float', dict(A=1, B='foo', C='bar')) + _check_mixed_dtypes(df) + + # GH 622 + df = _make_mixed_dtypes_df('int') + _check_mixed_dtypes(df) + + def test_constructor_complex_dtypes(self): + # GH10952 + a = np.random.rand(10).astype(np.complex64) + b = np.random.rand(10).astype(np.complex128) + + df = DataFrame({'a': a, 'b': b}) + self.assertEqual(a.dtype, df.a.dtype) +
self.assertEqual(b.dtype, df.b.dtype) + + def test_constructor_rec(self): + rec = self.frame.to_records(index=False) + + # Assigning causes segfault in NumPy < 1.5.1 + # rec.dtype.names = list(rec.dtype.names)[::-1] + + index = self.frame.index + + df = DataFrame(rec) + self.assert_numpy_array_equal(df.columns, rec.dtype.names) + + df2 = DataFrame(rec, index=index) + self.assert_numpy_array_equal(df2.columns, rec.dtype.names) + self.assertTrue(df2.index.equals(index)) + + rng = np.arange(len(rec))[::-1] + df3 = DataFrame(rec, index=rng, columns=['C', 'B']) + expected = DataFrame(rec, index=rng).reindex(columns=['C', 'B']) + assert_frame_equal(df3, expected) + + def test_constructor_bool(self): + df = DataFrame({0: np.ones(10, dtype=bool), + 1: np.zeros(10, dtype=bool)}) + self.assertEqual(df.values.dtype, np.bool_) + + def test_constructor_overflow_int64(self): + values = np.array([2 ** 64 - i for i in range(1, 10)], + dtype=np.uint64) + + result = DataFrame({'a': values}) + self.assertEqual(result['a'].dtype, object) + + # #2355 + data_scores = [(6311132704823138710, 273), (2685045978526272070, 23), + (8921811264899370420, 45), + (long(17019687244989530680), 270), + (long(9930107427299601010), 273)] + dtype = [('uid', 'u8'), ('score', 'u8')] + data = np.zeros((len(data_scores),), dtype=dtype) + data[:] = data_scores + df_crawls = DataFrame(data) + self.assertEqual(df_crawls['uid'].dtype, object) + + def test_constructor_ordereddict(self): + import random + nitems = 100 + nums = lrange(nitems) + random.shuffle(nums) + expected = ['A%d' % i for i in nums] + df = DataFrame(OrderedDict(zip(expected, [[0]] * nitems))) + self.assertEqual(expected, list(df.columns)) + + def test_constructor_dict(self): + frame = DataFrame({'col1': self.ts1, + 'col2': self.ts2}) + + tm.assert_dict_equal(self.ts1, frame['col1'], compare_keys=False) + tm.assert_dict_equal(self.ts2, frame['col2'], compare_keys=False) + + frame = DataFrame({'col1': self.ts1, + 'col2': self.ts2}, + 
columns=['col2', 'col3', 'col4']) + + self.assertEqual(len(frame), len(self.ts2)) + self.assertNotIn('col1', frame) + self.assertTrue(isnull(frame['col3']).all()) + + # Corner cases + self.assertEqual(len(DataFrame({})), 0) + + # mix dict and array, wrong size - no spec for which error should raise + # first + with tm.assertRaises(ValueError): + DataFrame({'A': {'a': 'a', 'b': 'b'}, 'B': ['a', 'b', 'c']}) + + # Length-one dict micro-optimization + frame = DataFrame({'A': {'1': 1, '2': 2}}) + self.assert_numpy_array_equal(frame.index, ['1', '2']) + + # empty dict plus index + idx = Index([0, 1, 2]) + frame = DataFrame({}, index=idx) + self.assertIs(frame.index, idx) + + # empty with index and columns + idx = Index([0, 1, 2]) + frame = DataFrame({}, index=idx, columns=idx) + self.assertIs(frame.index, idx) + self.assertIs(frame.columns, idx) + self.assertEqual(len(frame._series), 3) + + # with dict of empty list and Series + frame = DataFrame({'A': [], 'B': []}, columns=['A', 'B']) + self.assertTrue(frame.index.equals(Index([]))) + + # GH10856 + # dict with scalar values should raise error, even if columns passed + with tm.assertRaises(ValueError): + DataFrame({'a': 0.7}) + + with tm.assertRaises(ValueError): + DataFrame({'a': 0.7}, columns=['a']) + + with tm.assertRaises(ValueError): + DataFrame({'a': 0.7}, columns=['b']) + + def test_constructor_multi_index(self): + # GH 4078 + # construction error with mi and all-nan frame + tuples = [(2, 3), (3, 3), (3, 3)] + mi = MultiIndex.from_tuples(tuples) + df = DataFrame(index=mi, columns=mi) + self.assertTrue(pd.isnull(df).values.ravel().all()) + + tuples = [(3, 3), (2, 3), (3, 3)] + mi = MultiIndex.from_tuples(tuples) + df = DataFrame(index=mi, columns=mi) + self.assertTrue(pd.isnull(df).values.ravel().all()) + + def test_constructor_error_msgs(self): + msg = "Mixing dicts with non-Series may lead to ambiguous ordering." 
+ # mix dict and array, wrong size + with assertRaisesRegexp(ValueError, msg): + DataFrame({'A': {'a': 'a', 'b': 'b'}, + 'B': ['a', 'b', 'c']}) + + # wrong size ndarray, GH 3105 + msg = "Shape of passed values is \(3, 4\), indices imply \(3, 3\)" + with assertRaisesRegexp(ValueError, msg): + DataFrame(np.arange(12).reshape((4, 3)), + columns=['foo', 'bar', 'baz'], + index=pd.date_range('2000-01-01', periods=3)) + + # higher dim raise exception + with assertRaisesRegexp(ValueError, 'Must pass 2-d input'): + DataFrame(np.zeros((3, 3, 3)), columns=['A', 'B', 'C'], index=[1]) + + # wrong size axis labels + with assertRaisesRegexp(ValueError, "Shape of passed values is " + "\(3, 2\), indices imply \(3, 1\)"): + DataFrame(np.random.rand(2, 3), columns=['A', 'B', 'C'], index=[1]) + + with assertRaisesRegexp(ValueError, "Shape of passed values is " + "\(3, 2\), indices imply \(2, 2\)"): + DataFrame(np.random.rand(2, 3), columns=['A', 'B'], index=[1, 2]) + + with assertRaisesRegexp(ValueError, 'If using all scalar values, you ' + 'must pass an index'): + DataFrame({'a': False, 'b': True}) + + def test_constructor_with_embedded_frames(self): + + # embedded data frames + df1 = DataFrame({'a': [1, 2, 3], 'b': [3, 4, 5]}) + df2 = DataFrame([df1, df1 + 10]) + + df2.dtypes + str(df2) + + result = df2.loc[0, 0] + assert_frame_equal(result, df1) + + result = df2.loc[1, 0] + assert_frame_equal(result, df1 + 10) + + def test_constructor_subclass_dict(self): + # Test for passing dict subclass to constructor + data = {'col1': tm.TestSubDict((x, 10.0 * x) for x in range(10)), + 'col2': tm.TestSubDict((x, 20.0 * x) for x in range(10))} + df = DataFrame(data) + refdf = DataFrame(dict((col, dict(compat.iteritems(val))) + for col, val in compat.iteritems(data))) + assert_frame_equal(refdf, df) + + data = tm.TestSubDict(compat.iteritems(data)) + df = DataFrame(data) + assert_frame_equal(refdf, df) + + # try with defaultdict + from collections import defaultdict + data = {} + 
self.frame['B'][:10] = np.nan + for k, v in compat.iteritems(self.frame): + dct = defaultdict(dict) + dct.update(v.to_dict()) + data[k] = dct + frame = DataFrame(data) + assert_frame_equal(self.frame.sort_index(), frame) + + def test_constructor_dict_block(self): + expected = [[4., 3., 2., 1.]] + df = DataFrame({'d': [4.], 'c': [3.], 'b': [2.], 'a': [1.]}, + columns=['d', 'c', 'b', 'a']) + assert_almost_equal(df.values, expected) + + def test_constructor_dict_cast(self): + # cast float tests + test_data = { + 'A': {'1': 1, '2': 2}, + 'B': {'1': '1', '2': '2', '3': '3'}, + } + frame = DataFrame(test_data, dtype=float) + self.assertEqual(len(frame), 3) + self.assertEqual(frame['B'].dtype, np.float64) + self.assertEqual(frame['A'].dtype, np.float64) + + frame = DataFrame(test_data) + self.assertEqual(len(frame), 3) + self.assertEqual(frame['B'].dtype, np.object_) + self.assertEqual(frame['A'].dtype, np.float64) + + # can't cast to float + test_data = { + 'A': dict(zip(range(20), tm.makeStringIndex(20))), + 'B': dict(zip(range(15), randn(15))) + } + frame = DataFrame(test_data, dtype=float) + self.assertEqual(len(frame), 20) + self.assertEqual(frame['A'].dtype, np.object_) + self.assertEqual(frame['B'].dtype, np.float64) + + def test_constructor_dict_dont_upcast(self): + d = {'Col1': {'Row1': 'A String', 'Row2': np.nan}} + df = DataFrame(d) + tm.assertIsInstance(df['Col1']['Row2'], float) + + dm = DataFrame([[1, 2], ['a', 'b']], index=[1, 2], columns=[1, 2]) + tm.assertIsInstance(dm[1][1], int) + + def test_constructor_dict_of_tuples(self): + # GH #1491 + data = {'a': (1, 2, 3), 'b': (4, 5, 6)} + + result = DataFrame(data) + expected = DataFrame(dict((k, list(v)) + for k, v in compat.iteritems(data))) + assert_frame_equal(result, expected, check_dtype=False) + + def test_constructor_dict_multiindex(self): + check = lambda result, expected: assert_frame_equal( + result, expected, check_dtype=True, check_index_type=True, + check_column_type=True, check_names=True) + d = 
{('a', 'a'): {('i', 'i'): 0, ('i', 'j'): 1, ('j', 'i'): 2}, + ('b', 'a'): {('i', 'i'): 6, ('i', 'j'): 5, ('j', 'i'): 4}, + ('b', 'c'): {('i', 'i'): 7, ('i', 'j'): 8, ('j', 'i'): 9}} + _d = sorted(d.items()) + df = DataFrame(d) + expected = DataFrame( + [x[1] for x in _d], + index=MultiIndex.from_tuples([x[0] for x in _d])).T + expected.index = MultiIndex.from_tuples(expected.index) + check(df, expected) + + d['z'] = {'y': 123., ('i', 'i'): 111, ('i', 'j'): 111, ('j', 'i'): 111} + _d.insert(0, ('z', d['z'])) + expected = DataFrame( + [x[1] for x in _d], + index=Index([x[0] for x in _d], tupleize_cols=False)).T + expected.index = Index(expected.index, tupleize_cols=False) + df = DataFrame(d) + df = df.reindex(columns=expected.columns, index=expected.index) + check(df, expected) + + def test_constructor_dict_datetime64_index(self): + # GH 10160 + dates_as_str = ['1984-02-19', '1988-11-06', '1989-12-03', '1990-03-15'] + + def create_data(constructor): + return dict((i, {constructor(s): 2 * i}) + for i, s in enumerate(dates_as_str)) + + data_datetime64 = create_data(np.datetime64) + data_datetime = create_data(lambda x: datetime.strptime(x, '%Y-%m-%d')) + data_Timestamp = create_data(Timestamp) + + expected = DataFrame([{0: 0, 1: None, 2: None, 3: None}, + {0: None, 1: 2, 2: None, 3: None}, + {0: None, 1: None, 2: 4, 3: None}, + {0: None, 1: None, 2: None, 3: 6}], + index=[Timestamp(dt) for dt in dates_as_str]) + + result_datetime64 = DataFrame(data_datetime64) + result_datetime = DataFrame(data_datetime) + result_Timestamp = DataFrame(data_Timestamp) + assert_frame_equal(result_datetime64, expected) + assert_frame_equal(result_datetime, expected) + assert_frame_equal(result_Timestamp, expected) + + def test_constructor_dict_timedelta64_index(self): + # GH 10160 + td_as_int = [1, 2, 3, 4] + + def create_data(constructor): + return dict((i, {constructor(s): 2 * i}) + for i, s in enumerate(td_as_int)) + + data_timedelta64 = create_data(lambda x: np.timedelta64(x, 'D')) + 
data_timedelta = create_data(lambda x: timedelta(days=x)) + data_Timedelta = create_data(lambda x: Timedelta(x, 'D')) + + expected = DataFrame([{0: 0, 1: None, 2: None, 3: None}, + {0: None, 1: 2, 2: None, 3: None}, + {0: None, 1: None, 2: 4, 3: None}, + {0: None, 1: None, 2: None, 3: 6}], + index=[Timedelta(td, 'D') for td in td_as_int]) + + result_timedelta64 = DataFrame(data_timedelta64) + result_timedelta = DataFrame(data_timedelta) + result_Timedelta = DataFrame(data_Timedelta) + assert_frame_equal(result_timedelta64, expected) + assert_frame_equal(result_timedelta, expected) + assert_frame_equal(result_Timedelta, expected) + + def test_nested_dict_frame_constructor(self): + rng = pd.period_range('1/1/2000', periods=5) + df = DataFrame(randn(10, 5), columns=rng) + + data = {} + for col in df.columns: + for row in df.index: + data.setdefault(col, {})[row] = df.get_value(row, col) + + result = DataFrame(data, columns=rng) + assert_frame_equal(result, df) + + data = {} + for col in df.columns: + for row in df.index: + data.setdefault(row, {})[col] = df.get_value(row, col) + + result = DataFrame(data, index=rng).T + assert_frame_equal(result, df) + + def _check_basic_constructor(self, empty): + # mat: 2d matrix with shape (3, 2) to input.
empty - makes sized + # objects + mat = empty((2, 3), dtype=float) + # 2-D input + frame = DataFrame(mat, columns=['A', 'B', 'C'], index=[1, 2]) + + self.assertEqual(len(frame.index), 2) + self.assertEqual(len(frame.columns), 3) + + # 1-D input + frame = DataFrame(empty((3,)), columns=['A'], index=[1, 2, 3]) + self.assertEqual(len(frame.index), 3) + self.assertEqual(len(frame.columns), 1) + + # cast type + frame = DataFrame(mat, columns=['A', 'B', 'C'], + index=[1, 2], dtype=np.int64) + self.assertEqual(frame.values.dtype, np.int64) + + # wrong size axis labels + msg = r'Shape of passed values is \(3, 2\), indices imply \(3, 1\)' + with assertRaisesRegexp(ValueError, msg): + DataFrame(mat, columns=['A', 'B', 'C'], index=[1]) + msg = r'Shape of passed values is \(3, 2\), indices imply \(2, 2\)' + with assertRaisesRegexp(ValueError, msg): + DataFrame(mat, columns=['A', 'B'], index=[1, 2]) + + # higher dim raise exception + with assertRaisesRegexp(ValueError, 'Must pass 2-d input'): + DataFrame(empty((3, 3, 3)), columns=['A', 'B', 'C'], + index=[1]) + + # automatic labeling + frame = DataFrame(mat) + self.assert_numpy_array_equal(frame.index, lrange(2)) + self.assert_numpy_array_equal(frame.columns, lrange(3)) + + frame = DataFrame(mat, index=[1, 2]) + self.assert_numpy_array_equal(frame.columns, lrange(3)) + + frame = DataFrame(mat, columns=['A', 'B', 'C']) + self.assert_numpy_array_equal(frame.index, lrange(2)) + + # 0-length axis + frame = DataFrame(empty((0, 3))) + self.assertEqual(len(frame.index), 0) + + frame = DataFrame(empty((3, 0))) + self.assertEqual(len(frame.columns), 0) + + def test_constructor_ndarray(self): + self._check_basic_constructor(np.ones) + + frame = DataFrame(['foo', 'bar'], index=[0, 1], columns=['A']) + self.assertEqual(len(frame), 2) + + def test_constructor_maskedarray(self): + self._check_basic_constructor(ma.masked_all) + + # Check non-masked values + mat = ma.masked_all((2, 3), dtype=float) + mat[0, 0] = 1.0 + mat[1, 2] = 2.0 + frame = 
DataFrame(mat, columns=['A', 'B', 'C'], index=[1, 2]) + self.assertEqual(1.0, frame['A'][1]) + self.assertEqual(2.0, frame['C'][2]) + + # what is this even checking?? + mat = ma.masked_all((2, 3), dtype=float) + frame = DataFrame(mat, columns=['A', 'B', 'C'], index=[1, 2]) + self.assertTrue(np.all(~np.asarray(frame == frame))) + + def test_constructor_maskedarray_nonfloat(self): + # masked int promoted to float + mat = ma.masked_all((2, 3), dtype=int) + # 2-D input + frame = DataFrame(mat, columns=['A', 'B', 'C'], index=[1, 2]) + + self.assertEqual(len(frame.index), 2) + self.assertEqual(len(frame.columns), 3) + self.assertTrue(np.all(~np.asarray(frame == frame))) + + # cast type + frame = DataFrame(mat, columns=['A', 'B', 'C'], + index=[1, 2], dtype=np.float64) + self.assertEqual(frame.values.dtype, np.float64) + + # Check non-masked values + mat2 = ma.copy(mat) + mat2[0, 0] = 1 + mat2[1, 2] = 2 + frame = DataFrame(mat2, columns=['A', 'B', 'C'], index=[1, 2]) + self.assertEqual(1, frame['A'][1]) + self.assertEqual(2, frame['C'][2]) + + # masked np.datetime64 stays (use lib.NaT as null) + mat = ma.masked_all((2, 3), dtype='M8[ns]') + # 2-D input + frame = DataFrame(mat, columns=['A', 'B', 'C'], index=[1, 2]) + + self.assertEqual(len(frame.index), 2) + self.assertEqual(len(frame.columns), 3) + self.assertTrue(isnull(frame).values.all()) + + # cast type + frame = DataFrame(mat, columns=['A', 'B', 'C'], + index=[1, 2], dtype=np.int64) + self.assertEqual(frame.values.dtype, np.int64) + + # Check non-masked values + mat2 = ma.copy(mat) + mat2[0, 0] = 1 + mat2[1, 2] = 2 + frame = DataFrame(mat2, columns=['A', 'B', 'C'], index=[1, 2]) + self.assertEqual(1, frame['A'].view('i8')[1]) + self.assertEqual(2, frame['C'].view('i8')[2]) + + # masked bool promoted to object + mat = ma.masked_all((2, 3), dtype=bool) + # 2-D input + frame = DataFrame(mat, columns=['A', 'B', 'C'], index=[1, 2]) + + self.assertEqual(len(frame.index), 2) + self.assertEqual(len(frame.columns), 3) + 
self.assertTrue(np.all(~np.asarray(frame == frame))) + + # cast type + frame = DataFrame(mat, columns=['A', 'B', 'C'], + index=[1, 2], dtype=object) + self.assertEqual(frame.values.dtype, object) + + # Check non-masked values + mat2 = ma.copy(mat) + mat2[0, 0] = True + mat2[1, 2] = False + frame = DataFrame(mat2, columns=['A', 'B', 'C'], index=[1, 2]) + self.assertEqual(True, frame['A'][1]) + self.assertEqual(False, frame['C'][2]) + + def test_constructor_mrecarray(self): + # Ensure mrecarray produces frame identical to dict of masked arrays + # from GH3479 + + assert_fr_equal = functools.partial(assert_frame_equal, + check_index_type=True, + check_column_type=True, + check_frame_type=True) + arrays = [ + ('float', np.array([1.5, 2.0])), + ('int', np.array([1, 2])), + ('str', np.array(['abc', 'def'])), + ] + for name, arr in arrays[:]: + arrays.append(('masked1_' + name, + np.ma.masked_array(arr, mask=[False, True]))) + arrays.append(('masked_all', np.ma.masked_all((2,)))) + arrays.append(('masked_none', + np.ma.masked_array([1.0, 2.5], mask=False))) + + # call assert_frame_equal for all selections of 3 arrays + for comb in itertools.combinations(arrays, 3): + names, data = zip(*comb) + mrecs = mrecords.fromarrays(data, names=names) + + # fill the comb + comb = dict([(k, v.filled()) if hasattr( + v, 'filled') else (k, v) for k, v in comb]) + + expected = DataFrame(comb, columns=names) + result = DataFrame(mrecs) + assert_fr_equal(result, expected) + + # specify columns + expected = DataFrame(comb, columns=names[::-1]) + result = DataFrame(mrecs, columns=names[::-1]) + assert_fr_equal(result, expected) + + # specify index + expected = DataFrame(comb, columns=names, index=[1, 2]) + result = DataFrame(mrecs, index=[1, 2]) + assert_fr_equal(result, expected) + + def test_constructor_corner(self): + df = DataFrame(index=[]) + self.assertEqual(df.values.shape, (0, 0)) + + # empty but with specified dtype + df = DataFrame(index=lrange(10), columns=['a', 'b'], 
dtype=object) + self.assertEqual(df.values.dtype, np.object_) + + # does not error but ends up float + df = DataFrame(index=lrange(10), columns=['a', 'b'], dtype=int) + self.assertEqual(df.values.dtype, np.object_) + + # #1783 empty dtype object + df = DataFrame({}, columns=['foo', 'bar']) + self.assertEqual(df.values.dtype, np.object_) + + df = DataFrame({'b': 1}, index=lrange(10), columns=list('abc'), + dtype=int) + self.assertEqual(df.values.dtype, np.object_) + + def test_constructor_scalar_inference(self): + data = {'int': 1, 'bool': True, + 'float': 3., 'complex': 4j, 'object': 'foo'} + df = DataFrame(data, index=np.arange(10)) + + self.assertEqual(df['int'].dtype, np.int64) + self.assertEqual(df['bool'].dtype, np.bool_) + self.assertEqual(df['float'].dtype, np.float64) + self.assertEqual(df['complex'].dtype, np.complex128) + self.assertEqual(df['object'].dtype, np.object_) + + def test_constructor_arrays_and_scalars(self): + df = DataFrame({'a': randn(10), 'b': True}) + exp = DataFrame({'a': df['a'].values, 'b': [True] * 10}) + + assert_frame_equal(df, exp) + with tm.assertRaisesRegexp(ValueError, 'must pass an index'): + DataFrame({'a': False, 'b': True}) + + def test_constructor_DataFrame(self): + df = DataFrame(self.frame) + assert_frame_equal(df, self.frame) + + df_casted = DataFrame(self.frame, dtype=np.int64) + self.assertEqual(df_casted.values.dtype, np.int64) + + def test_constructor_more(self): + # used to be in test_matrix.py + arr = randn(10) + dm = DataFrame(arr, columns=['A'], index=np.arange(10)) + self.assertEqual(dm.values.ndim, 2) + + arr = randn(0) + dm = DataFrame(arr) + self.assertEqual(dm.values.ndim, 2) + self.assertEqual(dm.values.ndim, 2) + + # no data specified + dm = DataFrame(columns=['A', 'B'], index=np.arange(10)) + self.assertEqual(dm.values.shape, (10, 2)) + + dm = DataFrame(columns=['A', 'B']) + self.assertEqual(dm.values.shape, (0, 2)) + + dm = DataFrame(index=np.arange(10)) + self.assertEqual(dm.values.shape, (10, 0)) + + # 
corner, silly + # TODO: Fix this Exception to be better... + with assertRaisesRegexp(PandasError, 'constructor not ' + 'properly called'): + DataFrame((1, 2, 3)) + + # can't cast + mat = np.array(['foo', 'bar'], dtype=object).reshape(2, 1) + with assertRaisesRegexp(ValueError, 'cast'): + DataFrame(mat, index=[0, 1], columns=[0], dtype=float) + + dm = DataFrame(DataFrame(self.frame._series)) + assert_frame_equal(dm, self.frame) + + # int cast + dm = DataFrame({'A': np.ones(10, dtype=int), + 'B': np.ones(10, dtype=np.float64)}, + index=np.arange(10)) + + self.assertEqual(len(dm.columns), 2) + self.assertEqual(dm.values.dtype, np.float64) + + def test_constructor_empty_list(self): + df = DataFrame([], index=[]) + expected = DataFrame(index=[]) + assert_frame_equal(df, expected) + + # GH 9939 + df = DataFrame([], columns=['A', 'B']) + expected = DataFrame({}, columns=['A', 'B']) + assert_frame_equal(df, expected) + + # Empty generator: list(empty_gen()) == [] + def empty_gen(): + return + yield + + df = DataFrame(empty_gen(), columns=['A', 'B']) + assert_frame_equal(df, expected) + + def test_constructor_list_of_lists(self): + # GH #484 + l = [[1, 'a'], [2, 'b']] + df = DataFrame(data=l, columns=["num", "str"]) + self.assertTrue(com.is_integer_dtype(df['num'])) + self.assertEqual(df['str'].dtype, np.object_) + + # GH 4851 + # list of 0-dim ndarrays + expected = DataFrame({0: range(10)}) + data = [np.array(x) for x in range(10)] + result = DataFrame(data) + assert_frame_equal(result, expected) + + def test_constructor_sequence_like(self): + # GH 3783 + # collections.Sequence like + import collections + + class DummyContainer(collections.Sequence): + + def __init__(self, lst): + self._lst = lst + + def __getitem__(self, n): + return self._lst.__getitem__(n) + + def __len__(self): + return self._lst.__len__() + + l = [DummyContainer([1, 'a']), DummyContainer([2, 'b'])] + columns = ["num", "str"] + result = DataFrame(l, columns=columns) + expected = DataFrame([[1, 'a'], 
[2, 'b']], columns=columns) + assert_frame_equal(result, expected, check_dtype=False) + + # GH 4297 + # support Array + import array + result = DataFrame.from_items([('A', array.array('i', range(10)))]) + expected = DataFrame({'A': list(range(10))}) + assert_frame_equal(result, expected, check_dtype=False) + + expected = DataFrame([list(range(10)), list(range(10))]) + result = DataFrame([array.array('i', range(10)), + array.array('i', range(10))]) + assert_frame_equal(result, expected, check_dtype=False) + + def test_constructor_iterator(self): + + expected = DataFrame([list(range(10)), list(range(10))]) + result = DataFrame([range(10), range(10)]) + assert_frame_equal(result, expected) + + def test_constructor_generator(self): + # related #2305 + + gen1 = (i for i in range(10)) + gen2 = (i for i in range(10)) + + expected = DataFrame([list(range(10)), list(range(10))]) + result = DataFrame([gen1, gen2]) + assert_frame_equal(result, expected) + + gen = ([i, 'a'] for i in range(10)) + result = DataFrame(gen) + expected = DataFrame({0: range(10), 1: 'a'}) + assert_frame_equal(result, expected, check_dtype=False) + + def test_constructor_list_of_dicts(self): + data = [OrderedDict([['a', 1.5], ['b', 3], ['c', 4], ['d', 6]]), + OrderedDict([['a', 1.5], ['b', 3], ['d', 6]]), + OrderedDict([['a', 1.5], ['d', 6]]), + OrderedDict(), + OrderedDict([['a', 1.5], ['b', 3], ['c', 4]]), + OrderedDict([['b', 3], ['c', 4], ['d', 6]])] + + result = DataFrame(data) + expected = DataFrame.from_dict(dict(zip(range(len(data)), data)), + orient='index') + assert_frame_equal(result, expected.reindex(result.index)) + + result = DataFrame([{}]) + expected = DataFrame(index=[0]) + assert_frame_equal(result, expected) + + def test_constructor_list_of_series(self): + data = [OrderedDict([['a', 1.5], ['b', 3.0], ['c', 4.0]]), + OrderedDict([['a', 1.5], ['b', 3.0], ['c', 6.0]])] + sdict = OrderedDict(zip(['x', 'y'], data)) + idx = Index(['a', 'b', 'c']) + + # all named + data2 = [Series([1.5, 3, 
4], idx, dtype='O', name='x'), + Series([1.5, 3, 6], idx, name='y')] + result = DataFrame(data2) + expected = DataFrame.from_dict(sdict, orient='index') + assert_frame_equal(result, expected) + + # some unnamed + data2 = [Series([1.5, 3, 4], idx, dtype='O', name='x'), + Series([1.5, 3, 6], idx)] + result = DataFrame(data2) + + sdict = OrderedDict(zip(['x', 'Unnamed 0'], data)) + expected = DataFrame.from_dict(sdict, orient='index') + assert_frame_equal(result.sort_index(), expected) + + # none named + data = [OrderedDict([['a', 1.5], ['b', 3], ['c', 4], ['d', 6]]), + OrderedDict([['a', 1.5], ['b', 3], ['d', 6]]), + OrderedDict([['a', 1.5], ['d', 6]]), + OrderedDict(), + OrderedDict([['a', 1.5], ['b', 3], ['c', 4]]), + OrderedDict([['b', 3], ['c', 4], ['d', 6]])] + data = [Series(d) for d in data] + + result = DataFrame(data) + sdict = OrderedDict(zip(range(len(data)), data)) + expected = DataFrame.from_dict(sdict, orient='index') + assert_frame_equal(result, expected.reindex(result.index)) + + result2 = DataFrame(data, index=np.arange(6)) + assert_frame_equal(result, result2) + + result = DataFrame([Series({})]) + expected = DataFrame(index=[0]) + assert_frame_equal(result, expected) + + data = [OrderedDict([['a', 1.5], ['b', 3.0], ['c', 4.0]]), + OrderedDict([['a', 1.5], ['b', 3.0], ['c', 6.0]])] + sdict = OrderedDict(zip(range(len(data)), data)) + + idx = Index(['a', 'b', 'c']) + data2 = [Series([1.5, 3, 4], idx, dtype='O'), + Series([1.5, 3, 6], idx)] + result = DataFrame(data2) + expected = DataFrame.from_dict(sdict, orient='index') + assert_frame_equal(result, expected) + + def test_constructor_list_of_derived_dicts(self): + class CustomDict(dict): + pass + d = {'a': 1.5, 'b': 3} + + data_custom = [CustomDict(d)] + data = [d] + + result_custom = DataFrame(data_custom) + result = DataFrame(data) + assert_frame_equal(result, result_custom) + + def test_constructor_ragged(self): + data = {'A': randn(10), + 'B': randn(8)} + with assertRaisesRegexp(ValueError, 
'arrays must all be same length'): + DataFrame(data) + + def test_constructor_scalar(self): + idx = Index(lrange(3)) + df = DataFrame({"a": 0}, index=idx) + expected = DataFrame({"a": [0, 0, 0]}, index=idx) + assert_frame_equal(df, expected, check_dtype=False) + + def test_constructor_Series_copy_bug(self): + df = DataFrame(self.frame['A'], index=self.frame.index, columns=['A']) + df.copy() + + def test_constructor_mixed_dict_and_Series(self): + data = {} + data['A'] = {'foo': 1, 'bar': 2, 'baz': 3} + data['B'] = Series([4, 3, 2, 1], index=['bar', 'qux', 'baz', 'foo']) + + result = DataFrame(data) + self.assertTrue(result.index.is_monotonic) + + # ordering ambiguous, raise exception + with assertRaisesRegexp(ValueError, 'ambiguous ordering'): + DataFrame({'A': ['a', 'b'], 'B': {'a': 'a', 'b': 'b'}}) + + # this is OK though + result = DataFrame({'A': ['a', 'b'], + 'B': Series(['a', 'b'], index=['a', 'b'])}) + expected = DataFrame({'A': ['a', 'b'], 'B': ['a', 'b']}, + index=['a', 'b']) + assert_frame_equal(result, expected) + + def test_constructor_tuples(self): + result = DataFrame({'A': [(1, 2), (3, 4)]}) + expected = DataFrame({'A': Series([(1, 2), (3, 4)])}) + assert_frame_equal(result, expected) + + def test_constructor_namedtuples(self): + # GH11181 + from collections import namedtuple + named_tuple = namedtuple("Pandas", list('ab')) + tuples = [named_tuple(1, 3), named_tuple(2, 4)] + expected = DataFrame({'a': [1, 2], 'b': [3, 4]}) + result = DataFrame(tuples) + assert_frame_equal(result, expected) + + # with columns + expected = DataFrame({'y': [1, 2], 'z': [3, 4]}) + result = DataFrame(tuples, columns=['y', 'z']) + assert_frame_equal(result, expected) + + def test_constructor_orient(self): + data_dict = self.mixed_frame.T._series + recons = DataFrame.from_dict(data_dict, orient='index') + expected = self.mixed_frame.sort_index() + assert_frame_equal(recons, expected) + + # dict of sequence + a = {'hi': [32, 3, 3], + 'there': [3, 5, 3]} + rs = 
DataFrame.from_dict(a, orient='index') + xp = DataFrame.from_dict(a).T.reindex(list(a.keys())) + assert_frame_equal(rs, xp) + + def test_constructor_Series_named(self): + a = Series([1, 2, 3], index=['a', 'b', 'c'], name='x') + df = DataFrame(a) + self.assertEqual(df.columns[0], 'x') + self.assertTrue(df.index.equals(a.index)) + + # ndarray like + arr = np.random.randn(10) + s = Series(arr, name='x') + df = DataFrame(s) + expected = DataFrame(dict(x=s)) + assert_frame_equal(df, expected) + + s = Series(arr, index=range(3, 13)) + df = DataFrame(s) + expected = DataFrame({0: s}) + assert_frame_equal(df, expected) + + self.assertRaises(ValueError, DataFrame, s, columns=[1, 2]) + + # #2234 + a = Series([], name='x') + df = DataFrame(a) + self.assertEqual(df.columns[0], 'x') + + # series with name and w/o + s1 = Series(arr, name='x') + df = DataFrame([s1, arr]).T + expected = DataFrame({'x': s1, 'Unnamed 0': arr}, + columns=['x', 'Unnamed 0']) + assert_frame_equal(df, expected) + + # this is a bit non-intuitive here; the series collapse down to arrays + df = DataFrame([arr, s1]).T + expected = DataFrame({1: s1, 0: arr}, columns=[0, 1]) + assert_frame_equal(df, expected) + + def test_constructor_Series_differently_indexed(self): + # name + s1 = Series([1, 2, 3], index=['a', 'b', 'c'], name='x') + + # no name + s2 = Series([1, 2, 3], index=['a', 'b', 'c']) + + other_index = Index(['a', 'b']) + + df1 = DataFrame(s1, index=other_index) + exp1 = DataFrame(s1.reindex(other_index)) + self.assertEqual(df1.columns[0], 'x') + assert_frame_equal(df1, exp1) + + df2 = DataFrame(s2, index=other_index) + exp2 = DataFrame(s2.reindex(other_index)) + self.assertEqual(df2.columns[0], 0) + self.assertTrue(df2.index.equals(other_index)) + assert_frame_equal(df2, exp2) + + def test_constructor_manager_resize(self): + index = list(self.frame.index[:5]) + columns = list(self.frame.columns[:3]) + + result = DataFrame(self.frame._data, index=index, + columns=columns) + 
self.assert_numpy_array_equal(result.index, index) + self.assert_numpy_array_equal(result.columns, columns) + + def test_constructor_from_items(self): + items = [(c, self.frame[c]) for c in self.frame.columns] + recons = DataFrame.from_items(items) + assert_frame_equal(recons, self.frame) + + # pass some columns + recons = DataFrame.from_items(items, columns=['C', 'B', 'A']) + assert_frame_equal(recons, self.frame.ix[:, ['C', 'B', 'A']]) + + # orient='index' + + row_items = [(idx, self.mixed_frame.xs(idx)) + for idx in self.mixed_frame.index] + + recons = DataFrame.from_items(row_items, + columns=self.mixed_frame.columns, + orient='index') + assert_frame_equal(recons, self.mixed_frame) + self.assertEqual(recons['A'].dtype, np.float64) + + with tm.assertRaisesRegexp(TypeError, + "Must pass columns with orient='index'"): + DataFrame.from_items(row_items, orient='index') + + # orient='index', but thar be tuples + arr = lib.list_to_object_array( + [('bar', 'baz')] * len(self.mixed_frame)) + self.mixed_frame['foo'] = arr + row_items = [(idx, list(self.mixed_frame.xs(idx))) + for idx in self.mixed_frame.index] + recons = DataFrame.from_items(row_items, + columns=self.mixed_frame.columns, + orient='index') + assert_frame_equal(recons, self.mixed_frame) + tm.assertIsInstance(recons['foo'][0], tuple) + + rs = DataFrame.from_items([('A', [1, 2, 3]), ('B', [4, 5, 6])], + orient='index', + columns=['one', 'two', 'three']) + xp = DataFrame([[1, 2, 3], [4, 5, 6]], index=['A', 'B'], + columns=['one', 'two', 'three']) + assert_frame_equal(rs, xp) + + def test_constructor_mix_series_nonseries(self): + df = DataFrame({'A': self.frame['A'], + 'B': list(self.frame['B'])}, columns=['A', 'B']) + assert_frame_equal(df, self.frame.ix[:, ['A', 'B']]) + + with tm.assertRaisesRegexp(ValueError, 'does not match index length'): + DataFrame({'A': self.frame['A'], 'B': list(self.frame['B'])[:-2]}) + + def test_constructor_miscast_na_int_dtype(self): + df = DataFrame([[np.nan, 1], [1, 0]], 
dtype=np.int64) + expected = DataFrame([[np.nan, 1], [1, 0]]) + assert_frame_equal(df, expected) + + def test_constructor_iterator_failure(self): + with assertRaisesRegexp(TypeError, 'iterator'): + df = DataFrame(iter([1, 2, 3])) # noqa + + def test_constructor_column_duplicates(self): + # it works! #2079 + df = DataFrame([[8, 5]], columns=['a', 'a']) + edf = DataFrame([[8, 5]]) + edf.columns = ['a', 'a'] + + assert_frame_equal(df, edf) + + idf = DataFrame.from_items( + [('a', [8]), ('a', [5])], columns=['a', 'a']) + assert_frame_equal(idf, edf) + + self.assertRaises(ValueError, DataFrame.from_items, + [('a', [8]), ('a', [5]), ('b', [6])], + columns=['b', 'a', 'a']) + + def test_constructor_empty_with_string_dtype(self): + # GH 9428 + expected = DataFrame(index=[0, 1], columns=[0, 1], dtype=object) + + df = DataFrame(index=[0, 1], columns=[0, 1], dtype=str) + assert_frame_equal(df, expected) + df = DataFrame(index=[0, 1], columns=[0, 1], dtype=np.str_) + assert_frame_equal(df, expected) + df = DataFrame(index=[0, 1], columns=[0, 1], dtype=np.unicode_) + assert_frame_equal(df, expected) + df = DataFrame(index=[0, 1], columns=[0, 1], dtype='U5') + assert_frame_equal(df, expected) + + def test_constructor_single_value(self): + # expecting single value upcasting here + df = DataFrame(0., index=[1, 2, 3], columns=['a', 'b', 'c']) + assert_frame_equal(df, DataFrame(np.zeros(df.shape).astype('float64'), + df.index, df.columns)) + + df = DataFrame(0, index=[1, 2, 3], columns=['a', 'b', 'c']) + assert_frame_equal(df, DataFrame(np.zeros(df.shape).astype('int64'), + df.index, df.columns)) + + df = DataFrame('a', index=[1, 2], columns=['a', 'c']) + assert_frame_equal(df, DataFrame(np.array([['a', 'a'], + ['a', 'a']], + dtype=object), + index=[1, 2], + columns=['a', 'c'])) + + self.assertRaises(com.PandasError, DataFrame, 'a', [1, 2]) + self.assertRaises(com.PandasError, DataFrame, 'a', columns=['a', 'c']) + with tm.assertRaisesRegexp(TypeError, 'incompatible data and dtype'): 
+ DataFrame('a', [1, 2], ['a', 'c'], float) + + def test_constructor_with_datetimes(self): + intname = np.dtype(np.int_).name + floatname = np.dtype(np.float_).name + datetime64name = np.dtype('M8[ns]').name + objectname = np.dtype(np.object_).name + + # single item + df = DataFrame({'A': 1, 'B': 'foo', 'C': 'bar', + 'D': Timestamp("20010101"), + 'E': datetime(2001, 1, 2, 0, 0)}, + index=np.arange(10)) + result = df.get_dtype_counts() + expected = Series({'int64': 1, datetime64name: 2, objectname: 2}) + result.sort_index() + expected.sort_index() + assert_series_equal(result, expected) + + # check with ndarray construction ndim==0 (e.g. we are passing a ndim 0 + # ndarray with a dtype specified) + df = DataFrame({'a': 1., 'b': 2, 'c': 'foo', + floatname: np.array(1., dtype=floatname), + intname: np.array(1, dtype=intname)}, + index=np.arange(10)) + result = df.get_dtype_counts() + expected = {objectname: 1} + if intname == 'int64': + expected['int64'] = 2 + else: + expected['int64'] = 1 + expected[intname] = 1 + if floatname == 'float64': + expected['float64'] = 2 + else: + expected['float64'] = 1 + expected[floatname] = 1 + + result.sort_index() + expected = Series(expected) + expected.sort_index() + assert_series_equal(result, expected) + + # check with ndarray construction ndim>0 + df = DataFrame({'a': 1., 'b': 2, 'c': 'foo', + floatname: np.array([1.] 
* 10, dtype=floatname), + intname: np.array([1] * 10, dtype=intname)}, + index=np.arange(10)) + result = df.get_dtype_counts() + result.sort_index() + assert_series_equal(result, expected) + + # GH 2809 + ind = date_range(start="2000-01-01", freq="D", periods=10) + datetimes = [ts.to_pydatetime() for ts in ind] + datetime_s = Series(datetimes) + self.assertEqual(datetime_s.dtype, 'M8[ns]') + df = DataFrame({'datetime_s': datetime_s}) + result = df.get_dtype_counts() + expected = Series({datetime64name: 1}) + result.sort_index() + expected.sort_index() + assert_series_equal(result, expected) + + # GH 2810 + ind = date_range(start="2000-01-01", freq="D", periods=10) + datetimes = [ts.to_pydatetime() for ts in ind] + dates = [ts.date() for ts in ind] + df = DataFrame({'datetimes': datetimes, 'dates': dates}) + result = df.get_dtype_counts() + expected = Series({datetime64name: 1, objectname: 1}) + result.sort_index() + expected.sort_index() + assert_series_equal(result, expected) + + # GH 7594 + # don't coerce tz-aware + import pytz + tz = pytz.timezone('US/Eastern') + dt = tz.localize(datetime(2012, 1, 1)) + + df = DataFrame({'End Date': dt}, index=[0]) + self.assertEqual(df.iat[0, 0], dt) + assert_series_equal(df.dtypes, Series( + {'End Date': 'datetime64[ns, US/Eastern]'})) + + df = DataFrame([{'End Date': dt}]) + self.assertEqual(df.iat[0, 0], dt) + assert_series_equal(df.dtypes, Series( + {'End Date': 'datetime64[ns, US/Eastern]'})) + + # tz-aware (UTC and other tz's) + # GH 8411 + dr = date_range('20130101', periods=3) + df = DataFrame({'value': dr}) + self.assertTrue(df.iat[0, 0].tz is None) + dr = date_range('20130101', periods=3, tz='UTC') + df = DataFrame({'value': dr}) + self.assertTrue(str(df.iat[0, 0].tz) == 'UTC') + dr = date_range('20130101', periods=3, tz='US/Eastern') + df = DataFrame({'value': dr}) + self.assertTrue(str(df.iat[0, 0].tz) == 'US/Eastern') + + # GH 7822 + # preserve an index with a tz on dict construction + i = date_range('1/1/2011', 
periods=5, freq='10s', tz='US/Eastern') + + expected = DataFrame( + {'a': i.to_series(keep_tz=True).reset_index(drop=True)}) + df = DataFrame() + df['a'] = i + assert_frame_equal(df, expected) + + df = DataFrame({'a': i}) + assert_frame_equal(df, expected) + + # multiples + i_no_tz = date_range('1/1/2011', periods=5, freq='10s') + df = DataFrame({'a': i, 'b': i_no_tz}) + expected = DataFrame({'a': i.to_series(keep_tz=True) + .reset_index(drop=True), 'b': i_no_tz}) + assert_frame_equal(df, expected) + + def test_constructor_with_datetime_tz(self): + + # 8260 + # support datetime64 with tz + + idx = Index(date_range('20130101', periods=3, tz='US/Eastern'), + name='foo') + dr = date_range('20130110', periods=3) + + # construction + df = DataFrame({'A': idx, 'B': dr}) + self.assertTrue(df['A'].dtype, 'M8[ns, US/Eastern') + self.assertTrue(df['A'].name == 'A') + assert_series_equal(df['A'], Series(idx, name='A')) + assert_series_equal(df['B'], Series(dr, name='B')) + + # construction from dict + df2 = DataFrame(dict(A=Timestamp('20130102', tz='US/Eastern'), + B=Timestamp('20130603', tz='CET')), + index=range(5)) + assert_series_equal(df2.dtypes, Series(['datetime64[ns, US/Eastern]', + 'datetime64[ns, CET]'], + index=['A', 'B'])) + + # dtypes + tzframe = DataFrame({'A': date_range('20130101', periods=3), + 'B': date_range('20130101', periods=3, + tz='US/Eastern'), + 'C': date_range('20130101', periods=3, tz='CET')}) + tzframe.iloc[1, 1] = pd.NaT + tzframe.iloc[1, 2] = pd.NaT + result = tzframe.dtypes.sort_index() + expected = Series([np.dtype('datetime64[ns]'), + DatetimeTZDtype('datetime64[ns, US/Eastern]'), + DatetimeTZDtype('datetime64[ns, CET]')], + ['A', 'B', 'C']) + + # concat + df3 = pd.concat([df2.A.to_frame(), df2.B.to_frame()], axis=1) + assert_frame_equal(df2, df3) + + # select_dtypes + result = df3.select_dtypes(include=['datetime64[ns]']) + expected = df3.reindex(columns=[]) + assert_frame_equal(result, expected) + + # this will select based on issubclass, 
and these are the same class + result = df3.select_dtypes(include=['datetime64[ns, CET]']) + expected = df3 + assert_frame_equal(result, expected) + + # from index + idx2 = date_range('20130101', periods=3, tz='US/Eastern', name='foo') + df2 = DataFrame(idx2) + assert_series_equal(df2['foo'], Series(idx2, name='foo')) + df2 = DataFrame(Series(idx2)) + assert_series_equal(df2['foo'], Series(idx2, name='foo')) + + idx2 = date_range('20130101', periods=3, tz='US/Eastern') + df2 = DataFrame(idx2) + assert_series_equal(df2[0], Series(idx2, name=0)) + df2 = DataFrame(Series(idx2)) + assert_series_equal(df2[0], Series(idx2, name=0)) + + # interleave with object + result = self.tzframe.assign(D='foo').values + expected = np.array([[Timestamp('2013-01-01 00:00:00'), + Timestamp('2013-01-02 00:00:00'), + Timestamp('2013-01-03 00:00:00')], + [Timestamp('2013-01-01 00:00:00-0500', + tz='US/Eastern'), + pd.NaT, + Timestamp('2013-01-03 00:00:00-0500', + tz='US/Eastern')], + [Timestamp('2013-01-01 00:00:00+0100', tz='CET'), + pd.NaT, + Timestamp('2013-01-03 00:00:00+0100', tz='CET')], + ['foo', 'foo', 'foo']], dtype=object).T + self.assert_numpy_array_equal(result, expected) + + # interleave with only datetime64[ns] + result = self.tzframe.values + expected = np.array([[Timestamp('2013-01-01 00:00:00'), + Timestamp('2013-01-02 00:00:00'), + Timestamp('2013-01-03 00:00:00')], + [Timestamp('2013-01-01 00:00:00-0500', + tz='US/Eastern'), + pd.NaT, + Timestamp('2013-01-03 00:00:00-0500', + tz='US/Eastern')], + [Timestamp('2013-01-01 00:00:00+0100', tz='CET'), + pd.NaT, + Timestamp('2013-01-03 00:00:00+0100', + tz='CET')]], dtype=object).T + self.assert_numpy_array_equal(result, expected) + + # astype + expected = np.array([[Timestamp('2013-01-01 00:00:00'), + Timestamp('2013-01-02 00:00:00'), + Timestamp('2013-01-03 00:00:00')], + [Timestamp('2013-01-01 00:00:00-0500', + tz='US/Eastern'), + pd.NaT, + Timestamp('2013-01-03 00:00:00-0500', + tz='US/Eastern')], + [Timestamp('2013-01-01 
00:00:00+0100', tz='CET'), + pd.NaT, + Timestamp('2013-01-03 00:00:00+0100', + tz='CET')]], + dtype=object).T + result = self.tzframe.astype(object) + assert_frame_equal(result, DataFrame( + expected, index=self.tzframe.index, columns=self.tzframe.columns)) + + result = self.tzframe.astype('datetime64[ns]') + expected = DataFrame({'A': date_range('20130101', periods=3), + 'B': (date_range('20130101', periods=3, + tz='US/Eastern') + .tz_convert('UTC') + .tz_localize(None)), + 'C': (date_range('20130101', periods=3, + tz='CET') + .tz_convert('UTC') + .tz_localize(None))}) + expected.iloc[1, 1] = pd.NaT + expected.iloc[1, 2] = pd.NaT + assert_frame_equal(result, expected) + + # str formatting + result = self.tzframe.astype(str) + expected = np.array([['2013-01-01', '2013-01-01 00:00:00-05:00', + '2013-01-01 00:00:00+01:00'], + ['2013-01-02', 'NaT', 'NaT'], + ['2013-01-03', '2013-01-03 00:00:00-05:00', + '2013-01-03 00:00:00+01:00']], dtype=object) + self.assert_numpy_array_equal(result, expected) + + result = str(self.tzframe) + self.assertTrue('0 2013-01-01 2013-01-01 00:00:00-05:00 ' + '2013-01-01 00:00:00+01:00' in result) + self.assertTrue('1 2013-01-02 ' + 'NaT NaT' in result) + self.assertTrue('2 2013-01-03 2013-01-03 00:00:00-05:00 ' + '2013-01-03 00:00:00+01:00' in result) + + # setitem + df['C'] = idx + assert_series_equal(df['C'], Series(idx, name='C')) + + df['D'] = 'foo' + df['D'] = idx + assert_series_equal(df['D'], Series(idx, name='D')) + del df['D'] + + # assert that A & C are not sharing the same base (e.g. 
they + # are copies) + b1 = df._data.blocks[1] + b2 = df._data.blocks[2] + self.assertTrue(b1.values.equals(b2.values)) + self.assertFalse(id(b1.values.values.base) == + id(b2.values.values.base)) + + # with nan + df2 = df.copy() + df2.iloc[1, 1] = pd.NaT + df2.iloc[1, 2] = pd.NaT + result = df2['B'] + assert_series_equal(notnull(result), Series( + [True, False, True], name='B')) + assert_series_equal(df2.dtypes, df.dtypes) + + # set/reset + df = DataFrame({'A': [0, 1, 2]}, index=idx) + result = df.reset_index() + self.assertTrue(result['foo'].dtype, 'M8[ns, US/Eastern') + + result = result.set_index('foo') + tm.assert_index_equal(df.index, idx) + + def test_constructor_for_list_with_dtypes(self): + # TODO(wesm): unused + intname = np.dtype(np.int_).name # noqa + floatname = np.dtype(np.float_).name # noqa + datetime64name = np.dtype('M8[ns]').name + objectname = np.dtype(np.object_).name + + # test list of lists/ndarrays + df = DataFrame([np.arange(5) for x in range(5)]) + result = df.get_dtype_counts() + expected = Series({'int64': 5}) + + df = DataFrame([np.array(np.arange(5), dtype='int32') + for x in range(5)]) + result = df.get_dtype_counts() + expected = Series({'int32': 5}) + + # overflow issue? 
(we always expect int64 upcasting here) + df = DataFrame({'a': [2 ** 31, 2 ** 31 + 1]}) + result = df.get_dtype_counts() + expected = Series({'int64': 1}) + assert_series_equal(result, expected) + + # GH #2751 (construction with no index specified), make sure we cast to + # platform values + df = DataFrame([1, 2]) + result = df.get_dtype_counts() + expected = Series({'int64': 1}) + assert_series_equal(result, expected) + + df = DataFrame([1., 2.]) + result = df.get_dtype_counts() + expected = Series({'float64': 1}) + assert_series_equal(result, expected) + + df = DataFrame({'a': [1, 2]}) + result = df.get_dtype_counts() + expected = Series({'int64': 1}) + assert_series_equal(result, expected) + + df = DataFrame({'a': [1., 2.]}) + result = df.get_dtype_counts() + expected = Series({'float64': 1}) + assert_series_equal(result, expected) + + df = DataFrame({'a': 1}, index=lrange(3)) + result = df.get_dtype_counts() + expected = Series({'int64': 1}) + assert_series_equal(result, expected) + + df = DataFrame({'a': 1.}, index=lrange(3)) + result = df.get_dtype_counts() + expected = Series({'float64': 1}) + assert_series_equal(result, expected) + + # with object list + df = DataFrame({'a': [1, 2, 4, 7], 'b': [1.2, 2.3, 5.1, 6.3], + 'c': list('abcd'), + 'd': [datetime(2000, 1, 1) for i in range(4)], + 'e': [1., 2, 4., 7]}) + result = df.get_dtype_counts() + expected = Series( + {'int64': 1, 'float64': 2, datetime64name: 1, objectname: 1}) + result.sort_index() + expected.sort_index() + assert_series_equal(result, expected) + + def test_constructor_frame_copy(self): + cop = DataFrame(self.frame, copy=True) + cop['A'] = 5 + self.assertTrue((cop['A'] == 5).all()) + self.assertFalse((self.frame['A'] == 5).all()) + + def test_constructor_ndarray_copy(self): + df = DataFrame(self.frame.values) + + self.frame.values[5] = 5 + self.assertTrue((df.values[5] == 5).all()) + + df = DataFrame(self.frame.values, copy=True) + self.frame.values[6] = 6 + self.assertFalse((df.values[6] == 
6).all()) + + def test_constructor_series_copy(self): + series = self.frame._series + + df = DataFrame({'A': series['A']}) + df['A'][:] = 5 + + self.assertFalse((series['A'] == 5).all()) + + def test_constructor_with_nas(self): + # GH 5016 + # na's in indices + + def check(df): + for i in range(len(df.columns)): + df.iloc[:, i] + + # allow single nans to succeed + indexer = np.arange(len(df.columns))[isnull(df.columns)] + + if len(indexer) == 1: + assert_series_equal(df.iloc[:, indexer[0]], df.loc[:, np.nan]) + + # multiple nans should fail + else: + + def f(): + df.loc[:, np.nan] + self.assertRaises(TypeError, f) + + df = DataFrame([[1, 2, 3], [4, 5, 6]], index=[1, np.nan]) + check(df) + + df = DataFrame([[1, 2, 3], [4, 5, 6]], columns=[1.1, 2.2, np.nan]) + check(df) + + df = DataFrame([[0, 1, 2, 3], [4, 5, 6, 7]], + columns=[np.nan, 1.1, 2.2, np.nan]) + check(df) + + df = DataFrame([[0.0, 1, 2, 3.0], [4, 5, 6, 7]], + columns=[np.nan, 1.1, 2.2, np.nan]) + check(df) + + def test_constructor_lists_to_object_dtype(self): + # from #1074 + d = DataFrame({'a': [np.nan, False]}) + self.assertEqual(d['a'].dtype, np.object_) + self.assertFalse(d['a'][1]) + + def test_from_records_to_records(self): + # from numpy documentation + arr = np.zeros((2,), dtype=('i4,f4,a10')) + arr[:] = [(1, 2., 'Hello'), (2, 3., "World")] + + # TODO(wesm): unused + frame = DataFrame.from_records(arr) # noqa + + index = np.arange(len(arr))[::-1] + indexed_frame = DataFrame.from_records(arr, index=index) + self.assert_numpy_array_equal(indexed_frame.index, index) + + # without names, it should go to last ditch + arr2 = np.zeros((2, 3)) + assert_frame_equal(DataFrame.from_records(arr2), DataFrame(arr2)) + + # wrong length + msg = r'Shape of passed values is \(3, 2\), indices imply \(3, 1\)' + with assertRaisesRegexp(ValueError, msg): + DataFrame.from_records(arr, index=index[:-1]) + + indexed_frame = DataFrame.from_records(arr, index='f1') + + # what to do? 
+ records = indexed_frame.to_records() + self.assertEqual(len(records.dtype.names), 3) + + records = indexed_frame.to_records(index=False) + self.assertEqual(len(records.dtype.names), 2) + self.assertNotIn('index', records.dtype.names) + + def test_from_records_nones(self): + tuples = [(1, 2, None, 3), + (1, 2, None, 3), + (None, 2, 5, 3)] + + df = DataFrame.from_records(tuples, columns=['a', 'b', 'c', 'd']) + self.assertTrue(np.isnan(df['c'][0])) + + def test_from_records_iterator(self): + arr = np.array([(1.0, 1.0, 2, 2), (3.0, 3.0, 4, 4), (5., 5., 6, 6), + (7., 7., 8, 8)], + dtype=[('x', np.float64), ('u', np.float32), + ('y', np.int64), ('z', np.int32)]) + df = DataFrame.from_records(iter(arr), nrows=2) + xp = DataFrame({'x': np.array([1.0, 3.0], dtype=np.float64), + 'u': np.array([1.0, 3.0], dtype=np.float32), + 'y': np.array([2, 4], dtype=np.int64), + 'z': np.array([2, 4], dtype=np.int32)}) + assert_frame_equal(df.reindex_like(xp), xp) + + # no dtypes specified here, so just compare with the default + arr = [(1.0, 2), (3.0, 4), (5., 6), (7., 8)] + df = DataFrame.from_records(iter(arr), columns=['x', 'y'], + nrows=2) + assert_frame_equal(df, xp.reindex( + columns=['x', 'y']), check_dtype=False) + + def test_from_records_tuples_generator(self): + def tuple_generator(length): + for i in range(length): + letters = 'ABCDEFGHIJKLMNOPQRSTUVWXYZ' + yield (i, letters[i % len(letters)], i / length) + + columns_names = ['Integer', 'String', 'Float'] + columns = [[i[j] for i in tuple_generator( + 10)] for j in range(len(columns_names))] + data = {'Integer': columns[0], + 'String': columns[1], 'Float': columns[2]} + expected = DataFrame(data, columns=columns_names) + + generator = tuple_generator(10) + result = DataFrame.from_records(generator, columns=columns_names) + assert_frame_equal(result, expected) + + def test_from_records_lists_generator(self): + def list_generator(length): + for i in range(length): + letters = 'ABCDEFGHIJKLMNOPQRSTUVWXYZ' + yield [i, letters[i % 
len(letters)], i / length] + + columns_names = ['Integer', 'String', 'Float'] + columns = [[i[j] for i in list_generator( + 10)] for j in range(len(columns_names))] + data = {'Integer': columns[0], + 'String': columns[1], 'Float': columns[2]} + expected = DataFrame(data, columns=columns_names) + + generator = list_generator(10) + result = DataFrame.from_records(generator, columns=columns_names) + assert_frame_equal(result, expected) + + def test_from_records_columns_not_modified(self): + tuples = [(1, 2, 3), + (1, 2, 3), + (2, 5, 3)] + + columns = ['a', 'b', 'c'] + original_columns = list(columns) + + df = DataFrame.from_records(tuples, columns=columns, index='a') # noqa + + self.assertEqual(columns, original_columns) + + def test_from_records_decimal(self): + from decimal import Decimal + + tuples = [(Decimal('1.5'),), (Decimal('2.5'),), (None,)] + + df = DataFrame.from_records(tuples, columns=['a']) + self.assertEqual(df['a'].dtype, object) + + df = DataFrame.from_records(tuples, columns=['a'], coerce_float=True) + self.assertEqual(df['a'].dtype, np.float64) + self.assertTrue(np.isnan(df['a'].values[-1])) + + def test_from_records_duplicates(self): + result = DataFrame.from_records([(1, 2, 3), (4, 5, 6)], + columns=['a', 'b', 'a']) + + expected = DataFrame([(1, 2, 3), (4, 5, 6)], + columns=['a', 'b', 'a']) + + assert_frame_equal(result, expected) + + def test_from_records_set_index_name(self): + def create_dict(order_id): + return {'order_id': order_id, 'quantity': np.random.randint(1, 10), + 'price': np.random.randint(1, 10)} + documents = [create_dict(i) for i in range(10)] + # demo missing data + documents.append({'order_id': 10, 'quantity': 5}) + + result = DataFrame.from_records(documents, index='order_id') + self.assertEqual(result.index.name, 'order_id') + + # MultiIndex + result = DataFrame.from_records(documents, + index=['order_id', 'quantity']) + self.assertEqual(result.index.names, ('order_id', 'quantity')) + + def 
test_from_records_misc_brokenness(self):
+        # #2179
+
+        data = {1: ['foo'], 2: ['bar']}
+
+        result = DataFrame.from_records(data, columns=['a', 'b'])
+        exp = DataFrame(data, columns=['a', 'b'])
+        assert_frame_equal(result, exp)
+
+        # overlap in index/index_names
+
+        data = {'a': [1, 2, 3], 'b': [4, 5, 6]}
+
+        result = DataFrame.from_records(data, index=['a', 'b', 'c'])
+        exp = DataFrame(data, index=['a', 'b', 'c'])
+        assert_frame_equal(result, exp)
+
+        # GH 2623
+        rows = []
+        rows.append([datetime(2010, 1, 1), 1])
+        rows.append([datetime(2010, 1, 2), 'hi'])  # test col upconverts to obj
+        df2_obj = DataFrame.from_records(rows, columns=['date', 'test'])
+        results = df2_obj.get_dtype_counts()
+        expected = Series({'datetime64[ns]': 1, 'object': 1})
+        assert_series_equal(results, expected)
+
+        rows = []
+        rows.append([datetime(2010, 1, 1), 1])
+        rows.append([datetime(2010, 1, 2), 1])
+        df2_obj = DataFrame.from_records(rows, columns=['date', 'test'])
+        results = df2_obj.get_dtype_counts()
+        expected = Series({'datetime64[ns]': 1, 'int64': 1})
+        assert_series_equal(results, expected)
+
+    def test_from_records_empty(self):
+        # 3562
+        result = DataFrame.from_records([], columns=['a', 'b', 'c'])
+        expected = DataFrame(columns=['a', 'b', 'c'])
+        assert_frame_equal(result, expected)
+
+        result = DataFrame.from_records([], columns=['a', 'b', 'b'])
+        expected = DataFrame(columns=['a', 'b', 'b'])
+        assert_frame_equal(result, expected)
+
+    def test_from_records_empty_with_nonempty_fields_gh3682(self):
+        a = np.array([(1, 2)], dtype=[('id', np.int64), ('value', np.int64)])
+        df = DataFrame.from_records(a, index='id')
+        assert_numpy_array_equal(df.index, Index([1], name='id'))
+        self.assertEqual(df.index.name, 'id')
+        assert_numpy_array_equal(df.columns, Index(['value']))
+
+        b = np.array([], dtype=[('id', np.int64), ('value', np.int64)])
+        df = DataFrame.from_records(b, index='id')
+        assert_numpy_array_equal(df.index, Index([], name='id'))
+        self.assertEqual(df.index.name, 'id')
+
+    def
test_from_records_with_datetimes(self):
+
+        # this may fail on certain platforms because of a numpy issue
+        # related GH6140
+        if not is_little_endian():
+            raise nose.SkipTest("known failure of test on non-little endian")
+
+        # construction with a null in a recarray
+        # GH 6140
+        expected = DataFrame({'EXPIRY': [datetime(2005, 3, 1, 0, 0), None]})
+
+        arrdata = [np.array([datetime(2005, 3, 1, 0, 0), None])]
+        dtypes = [('EXPIRY', '<M8[ns]')]
+
+        try:
+            recarray = np.core.records.fromarrays(arrdata, dtype=dtypes)
+        except ValueError:
+            raise nose.SkipTest("known failure of numpy rec array creation")
+
+        result = DataFrame.from_records(recarray)
+        assert_frame_equal(result, expected)
+
+        # coercion should work too
+        arrdata = [np.array([datetime(2005, 3, 1, 0, 0), None])]
+        dtypes = [('EXPIRY', '<M8[m]')]
+        recarray = np.core.records.fromarrays(arrdata, dtype=dtypes)
+        result = DataFrame.from_records(recarray)
+        assert_frame_equal(result, expected)
+
+    def test_from_records_sequencelike(self):
+        df = DataFrame({'A': np.array(np.random.randn(6), dtype=np.float64),
+                        'A1': np.array(np.random.randn(6), dtype=np.float64),
+                        'B': np.array(np.arange(6), dtype=np.int64),
+                        'C': ['foo'] * 6,
+                        'D': np.array([True, False] * 3, dtype=bool),
+                        'E': np.array(np.random.randn(6), dtype=np.float32),
+                        'E1': np.array(np.random.randn(6), dtype=np.float32),
+                        'F': np.array(np.arange(6), dtype=np.int32)})
+
+        # this is actually tricky to create the recordlike arrays and
+        # have the dtypes be intact
+        blocks = df.blocks
+        tuples = []
+        columns = []
+        dtypes = []
+        for dtype, b in compat.iteritems(blocks):
+            columns.extend(b.columns)
+            dtypes.extend([(c, np.dtype(dtype).descr[0][1])
+                           for c in b.columns])
+        for i in range(len(df.index)):
+            tup = []
+            for _, b in compat.iteritems(blocks):
+                tup.extend(b.iloc[i].values)
+            tuples.append(tuple(tup))
+
+        recarray = np.array(tuples, dtype=dtypes).view(np.recarray)
+        recarray2 = df.to_records()
+        lists = [list(x) for x in tuples]
+
+        #
tuples (lose the dtype info)
+        result = (DataFrame.from_records(tuples, columns=columns)
+                  .reindex(columns=df.columns))
+
+        # created recarray and with to_records recarray (have dtype info)
+        result2 = (DataFrame.from_records(recarray, columns=columns)
+                   .reindex(columns=df.columns))
+        result3 = (DataFrame.from_records(recarray2, columns=columns)
+                   .reindex(columns=df.columns))
+
+        # list of tuples (no dtype info)
+        result4 = (DataFrame.from_records(lists, columns=columns)
+                   .reindex(columns=df.columns))
+
+        assert_frame_equal(result, df, check_dtype=False)
+        assert_frame_equal(result2, df)
+        assert_frame_equal(result3, df)
+        assert_frame_equal(result4, df, check_dtype=False)
+
+        # tuples is in the order of the columns
+        result = DataFrame.from_records(tuples)
+        self.assert_numpy_array_equal(result.columns, lrange(8))
+
+        # test exclude parameter & we are casting the results here (as we don't
+        # have dtype info to recover)
+        columns_to_test = [columns.index('C'), columns.index('E1')]
+
+        exclude = list(set(range(8)) - set(columns_to_test))
+        result = DataFrame.from_records(tuples, exclude=exclude)
+        result.columns = [columns[i] for i in sorted(columns_to_test)]
+        assert_series_equal(result['C'], df['C'])
+        assert_series_equal(result['E1'], df['E1'].astype('float64'))
+
+        # empty case
+        result = DataFrame.from_records([], columns=['foo', 'bar', 'baz'])
+        self.assertEqual(len(result), 0)
+        self.assert_numpy_array_equal(result.columns, ['foo', 'bar', 'baz'])
+
+        result = DataFrame.from_records([])
+        self.assertEqual(len(result), 0)
+        self.assertEqual(len(result.columns), 0)
+
+    def test_from_records_dictlike(self):
+
+        # test the dict methods
+        df = DataFrame({'A': np.array(np.random.randn(6), dtype=np.float64),
+                        'A1': np.array(np.random.randn(6), dtype=np.float64),
+                        'B': np.array(np.arange(6), dtype=np.int64),
+                        'C': ['foo'] * 6,
+                        'D': np.array([True, False] * 3, dtype=bool),
+                        'E': np.array(np.random.randn(6), dtype=np.float32),
+                        'E1':
np.array(np.random.randn(6), dtype=np.float32), + 'F': np.array(np.arange(6), dtype=np.int32)}) + + # columns is in a different order here than the actual items iterated + # from the dict + columns = [] + for dtype, b in compat.iteritems(df.blocks): + columns.extend(b.columns) + + asdict = dict((x, y) for x, y in compat.iteritems(df)) + asdict2 = dict((x, y.values) for x, y in compat.iteritems(df)) + + # dict of series & dict of ndarrays (have dtype info) + results = [] + results.append(DataFrame.from_records( + asdict).reindex(columns=df.columns)) + results.append(DataFrame.from_records(asdict, columns=columns) + .reindex(columns=df.columns)) + results.append(DataFrame.from_records(asdict2, columns=columns) + .reindex(columns=df.columns)) + + for r in results: + assert_frame_equal(r, df) + + def test_from_records_with_index_data(self): + df = DataFrame(np.random.randn(10, 3), columns=['A', 'B', 'C']) + + data = np.random.randn(10) + df1 = DataFrame.from_records(df, index=data) + assert(df1.index.equals(Index(data))) + + def test_from_records_bad_index_column(self): + df = DataFrame(np.random.randn(10, 3), columns=['A', 'B', 'C']) + + # should pass + df1 = DataFrame.from_records(df, index=['C']) + assert(df1.index.equals(Index(df.C))) + + df1 = DataFrame.from_records(df, index='C') + assert(df1.index.equals(Index(df.C))) + + # should fail + self.assertRaises(ValueError, DataFrame.from_records, df, index=[2]) + self.assertRaises(KeyError, DataFrame.from_records, df, index=2) + + def test_from_records_non_tuple(self): + class Record(object): + + def __init__(self, *args): + self.args = args + + def __getitem__(self, i): + return self.args[i] + + def __iter__(self): + return iter(self.args) + + recs = [Record(1, 2, 3), Record(4, 5, 6), Record(7, 8, 9)] + tups = lmap(tuple, recs) + + result = DataFrame.from_records(recs) + expected = DataFrame.from_records(tups) + assert_frame_equal(result, expected) + + def test_from_records_len0_with_columns(self): + # #2633 + result 
= DataFrame.from_records([], index='foo', + columns=['foo', 'bar']) + + self.assertTrue(np.array_equal(result.columns, ['bar'])) + self.assertEqual(len(result), 0) + self.assertEqual(result.index.name, 'foo') diff --git a/pandas/tests/frame/test_convert_to.py b/pandas/tests/frame/test_convert_to.py new file mode 100644 index 0000000000000..8bb253e17fd06 --- /dev/null +++ b/pandas/tests/frame/test_convert_to.py @@ -0,0 +1,174 @@ +# -*- coding: utf-8 -*- + +from __future__ import print_function + +from numpy import nan +import numpy as np + +from pandas import compat +from pandas import (DataFrame, Series, MultiIndex, Timestamp, + date_range) + +import pandas.util.testing as tm + +from pandas.tests.frame.common import TestData + + +class TestDataFrameConvertTo(tm.TestCase, TestData): + + _multiprocess_can_split_ = True + + def test_to_dict(self): + test_data = { + 'A': {'1': 1, '2': 2}, + 'B': {'1': '1', '2': '2', '3': '3'}, + } + recons_data = DataFrame(test_data).to_dict() + + for k, v in compat.iteritems(test_data): + for k2, v2 in compat.iteritems(v): + self.assertEqual(v2, recons_data[k][k2]) + + recons_data = DataFrame(test_data).to_dict("l") + + for k, v in compat.iteritems(test_data): + for k2, v2 in compat.iteritems(v): + self.assertEqual(v2, recons_data[k][int(k2) - 1]) + + recons_data = DataFrame(test_data).to_dict("s") + + for k, v in compat.iteritems(test_data): + for k2, v2 in compat.iteritems(v): + self.assertEqual(v2, recons_data[k][k2]) + + recons_data = DataFrame(test_data).to_dict("sp") + + expected_split = {'columns': ['A', 'B'], 'index': ['1', '2', '3'], + 'data': [[1.0, '1'], [2.0, '2'], [nan, '3']]} + + tm.assert_almost_equal(recons_data, expected_split) + + recons_data = DataFrame(test_data).to_dict("r") + + expected_records = [{'A': 1.0, 'B': '1'}, + {'A': 2.0, 'B': '2'}, + {'A': nan, 'B': '3'}] + + tm.assert_almost_equal(recons_data, expected_records) + + # GH10844 + recons_data = DataFrame(test_data).to_dict("i") + + for k, v in 
compat.iteritems(test_data): + for k2, v2 in compat.iteritems(v): + self.assertEqual(v2, recons_data[k2][k]) + + def test_to_dict_timestamp(self): + + # GH11247 + # split/records producing np.datetime64 rather than Timestamps + # on datetime64[ns] dtypes only + + tsmp = Timestamp('20130101') + test_data = DataFrame({'A': [tsmp, tsmp], 'B': [tsmp, tsmp]}) + test_data_mixed = DataFrame({'A': [tsmp, tsmp], 'B': [1, 2]}) + + expected_records = [{'A': tsmp, 'B': tsmp}, + {'A': tsmp, 'B': tsmp}] + expected_records_mixed = [{'A': tsmp, 'B': 1}, + {'A': tsmp, 'B': 2}] + + tm.assert_almost_equal(test_data.to_dict( + orient='records'), expected_records) + tm.assert_almost_equal(test_data_mixed.to_dict( + orient='records'), expected_records_mixed) + + expected_series = { + 'A': Series([tsmp, tsmp]), + 'B': Series([tsmp, tsmp]), + } + expected_series_mixed = { + 'A': Series([tsmp, tsmp]), + 'B': Series([1, 2]), + } + + tm.assert_almost_equal(test_data.to_dict( + orient='series'), expected_series) + tm.assert_almost_equal(test_data_mixed.to_dict( + orient='series'), expected_series_mixed) + + expected_split = { + 'index': [0, 1], + 'data': [[tsmp, tsmp], + [tsmp, tsmp]], + 'columns': ['A', 'B'] + } + expected_split_mixed = { + 'index': [0, 1], + 'data': [[tsmp, 1], + [tsmp, 2]], + 'columns': ['A', 'B'] + } + + tm.assert_almost_equal(test_data.to_dict( + orient='split'), expected_split) + tm.assert_almost_equal(test_data_mixed.to_dict( + orient='split'), expected_split_mixed) + + def test_to_dict_invalid_orient(self): + df = DataFrame({'A': [0, 1]}) + self.assertRaises(ValueError, df.to_dict, orient='xinvalid') + + def test_to_records_dt64(self): + df = DataFrame([["one", "two", "three"], + ["four", "five", "six"]], + index=date_range("2012-01-01", "2012-01-02")) + self.assertEqual(df.to_records()['index'][0], df.index[0]) + + rs = df.to_records(convert_datetime64=False) + self.assertEqual(rs['index'][0], df.index.values[0]) + + def test_to_records_with_multindex(self): + # 
GH3189 + index = [['bar', 'bar', 'baz', 'baz', 'foo', 'foo', 'qux', 'qux'], + ['one', 'two', 'one', 'two', 'one', 'two', 'one', 'two']] + data = np.zeros((8, 4)) + df = DataFrame(data, index=index) + r = df.to_records(index=True)['level_0'] + self.assertTrue('bar' in r) + self.assertTrue('one' not in r) + + def test_to_records_with_Mapping_type(self): + import email + from email.parser import Parser + import collections + + collections.Mapping.register(email.message.Message) + + headers = Parser().parsestr('From: <user@example.com>\n' + 'To: <someone_else@example.com>\n' + 'Subject: Test message\n' + '\n' + 'Body would go here\n') + + frame = DataFrame.from_records([headers]) + all(x in frame for x in ['Type', 'Subject', 'From']) + + def test_to_records_floats(self): + df = DataFrame(np.random.rand(10, 10)) + df.to_records() + + def test_to_records_index_name(self): + df = DataFrame(np.random.randn(3, 3)) + df.index.name = 'X' + rs = df.to_records() + self.assertIn('X', rs.dtype.fields) + + df = DataFrame(np.random.randn(3, 3)) + rs = df.to_records() + self.assertIn('index', rs.dtype.fields) + + df.index = MultiIndex.from_tuples([('a', 'x'), ('a', 'y'), ('b', 'z')]) + df.index.names = ['A', None] + rs = df.to_records() + self.assertIn('level_0', rs.dtype.fields) diff --git a/pandas/tests/frame/test_dtypes.py b/pandas/tests/frame/test_dtypes.py new file mode 100644 index 0000000000000..97ca8238b78f9 --- /dev/null +++ b/pandas/tests/frame/test_dtypes.py @@ -0,0 +1,396 @@ +# -*- coding: utf-8 -*- + +from __future__ import print_function + +from datetime import timedelta + +import numpy as np + +from pandas import (DataFrame, Series, date_range, Timedelta, Timestamp, + compat, option_context) +from pandas.compat import u +from pandas.tests.frame.common import TestData +from pandas.util.testing import (assert_series_equal, + assert_frame_equal, + makeCustomDataframe as mkdf) +import pandas.util.testing as tm +import pandas as pd + + +class 
TestDataFrameDataTypes(tm.TestCase, TestData): + + _multiprocess_can_split_ = True + + def test_concat_empty_dataframe_dtypes(self): + df = DataFrame(columns=list("abc")) + df['a'] = df['a'].astype(np.bool_) + df['b'] = df['b'].astype(np.int32) + df['c'] = df['c'].astype(np.float64) + + result = pd.concat([df, df]) + self.assertEqual(result['a'].dtype, np.bool_) + self.assertEqual(result['b'].dtype, np.int32) + self.assertEqual(result['c'].dtype, np.float64) + + result = pd.concat([df, df.astype(np.float64)]) + self.assertEqual(result['a'].dtype, np.object_) + self.assertEqual(result['b'].dtype, np.float64) + self.assertEqual(result['c'].dtype, np.float64) + + def test_empty_frame_dtypes_ftypes(self): + empty_df = pd.DataFrame() + assert_series_equal(empty_df.dtypes, pd.Series(dtype=np.object)) + assert_series_equal(empty_df.ftypes, pd.Series(dtype=np.object)) + + nocols_df = pd.DataFrame(index=[1, 2, 3]) + assert_series_equal(nocols_df.dtypes, pd.Series(dtype=np.object)) + assert_series_equal(nocols_df.ftypes, pd.Series(dtype=np.object)) + + norows_df = pd.DataFrame(columns=list("abc")) + assert_series_equal(norows_df.dtypes, pd.Series( + np.object, index=list("abc"))) + assert_series_equal(norows_df.ftypes, pd.Series( + 'object:dense', index=list("abc"))) + + norows_int_df = pd.DataFrame(columns=list("abc")).astype(np.int32) + assert_series_equal(norows_int_df.dtypes, pd.Series( + np.dtype('int32'), index=list("abc"))) + assert_series_equal(norows_int_df.ftypes, pd.Series( + 'int32:dense', index=list("abc"))) + + odict = compat.OrderedDict + df = pd.DataFrame(odict([('a', 1), ('b', True), ('c', 1.0)]), + index=[1, 2, 3]) + ex_dtypes = pd.Series(odict([('a', np.int64), + ('b', np.bool), + ('c', np.float64)])) + ex_ftypes = pd.Series(odict([('a', 'int64:dense'), + ('b', 'bool:dense'), + ('c', 'float64:dense')])) + assert_series_equal(df.dtypes, ex_dtypes) + assert_series_equal(df.ftypes, ex_ftypes) + + # same but for empty slice of df + 
assert_series_equal(df[:0].dtypes, ex_dtypes) + assert_series_equal(df[:0].ftypes, ex_ftypes) + + def test_dtypes_are_correct_after_column_slice(self): + # GH6525 + df = pd.DataFrame(index=range(5), columns=list("abc"), dtype=np.float_) + odict = compat.OrderedDict + assert_series_equal(df.dtypes, + pd.Series(odict([('a', np.float_), + ('b', np.float_), + ('c', np.float_)]))) + assert_series_equal(df.iloc[:, 2:].dtypes, + pd.Series(odict([('c', np.float_)]))) + assert_series_equal(df.dtypes, + pd.Series(odict([('a', np.float_), + ('b', np.float_), + ('c', np.float_)]))) + + def test_select_dtypes_include(self): + df = DataFrame({'a': list('abc'), + 'b': list(range(1, 4)), + 'c': np.arange(3, 6).astype('u1'), + 'd': np.arange(4.0, 7.0, dtype='float64'), + 'e': [True, False, True], + 'f': pd.Categorical(list('abc'))}) + ri = df.select_dtypes(include=[np.number]) + ei = df[['b', 'c', 'd']] + assert_frame_equal(ri, ei) + + ri = df.select_dtypes(include=[np.number, 'category']) + ei = df[['b', 'c', 'd', 'f']] + assert_frame_equal(ri, ei) + + def test_select_dtypes_exclude(self): + df = DataFrame({'a': list('abc'), + 'b': list(range(1, 4)), + 'c': np.arange(3, 6).astype('u1'), + 'd': np.arange(4.0, 7.0, dtype='float64'), + 'e': [True, False, True]}) + re = df.select_dtypes(exclude=[np.number]) + ee = df[['a', 'e']] + assert_frame_equal(re, ee) + + def test_select_dtypes_exclude_include(self): + df = DataFrame({'a': list('abc'), + 'b': list(range(1, 4)), + 'c': np.arange(3, 6).astype('u1'), + 'd': np.arange(4.0, 7.0, dtype='float64'), + 'e': [True, False, True], + 'f': pd.date_range('now', periods=3).values}) + exclude = np.datetime64, + include = np.bool_, 'integer' + r = df.select_dtypes(include=include, exclude=exclude) + e = df[['b', 'c', 'e']] + assert_frame_equal(r, e) + + exclude = 'datetime', + include = 'bool', 'int64', 'int32' + r = df.select_dtypes(include=include, exclude=exclude) + e = df[['b', 'e']] + assert_frame_equal(r, e) + + def 
test_select_dtypes_not_an_attr_but_still_valid_dtype(self): + df = DataFrame({'a': list('abc'), + 'b': list(range(1, 4)), + 'c': np.arange(3, 6).astype('u1'), + 'd': np.arange(4.0, 7.0, dtype='float64'), + 'e': [True, False, True], + 'f': pd.date_range('now', periods=3).values}) + df['g'] = df.f.diff() + assert not hasattr(np, 'u8') + r = df.select_dtypes(include=['i8', 'O'], exclude=['timedelta']) + e = df[['a', 'b']] + assert_frame_equal(r, e) + + r = df.select_dtypes(include=['i8', 'O', 'timedelta64[ns]']) + e = df[['a', 'b', 'g']] + assert_frame_equal(r, e) + + def test_select_dtypes_empty(self): + df = DataFrame({'a': list('abc'), 'b': list(range(1, 4))}) + with tm.assertRaisesRegexp(ValueError, 'at least one of include or ' + 'exclude must be nonempty'): + df.select_dtypes() + + def test_select_dtypes_raises_on_string(self): + df = DataFrame({'a': list('abc'), 'b': list(range(1, 4))}) + with tm.assertRaisesRegexp(TypeError, 'include and exclude .+ non-'): + df.select_dtypes(include='object') + with tm.assertRaisesRegexp(TypeError, 'include and exclude .+ non-'): + df.select_dtypes(exclude='object') + with tm.assertRaisesRegexp(TypeError, 'include and exclude .+ non-'): + df.select_dtypes(include=int, exclude='object') + + def test_select_dtypes_bad_datetime64(self): + df = DataFrame({'a': list('abc'), + 'b': list(range(1, 4)), + 'c': np.arange(3, 6).astype('u1'), + 'd': np.arange(4.0, 7.0, dtype='float64'), + 'e': [True, False, True], + 'f': pd.date_range('now', periods=3).values}) + with tm.assertRaisesRegexp(ValueError, '.+ is too specific'): + df.select_dtypes(include=['datetime64[D]']) + + with tm.assertRaisesRegexp(ValueError, '.+ is too specific'): + df.select_dtypes(exclude=['datetime64[as]']) + + def test_select_dtypes_str_raises(self): + df = DataFrame({'a': list('abc'), + 'g': list(u('abc')), + 'b': list(range(1, 4)), + 'c': np.arange(3, 6).astype('u1'), + 'd': np.arange(4.0, 7.0, dtype='float64'), + 'e': [True, False, True], + 'f': 
pd.date_range('now', periods=3).values}) + string_dtypes = set((str, 'str', np.string_, 'S1', + 'unicode', np.unicode_, 'U1')) + try: + string_dtypes.add(unicode) + except NameError: + pass + for dt in string_dtypes: + with tm.assertRaisesRegexp(TypeError, + 'string dtypes are not allowed'): + df.select_dtypes(include=[dt]) + with tm.assertRaisesRegexp(TypeError, + 'string dtypes are not allowed'): + df.select_dtypes(exclude=[dt]) + + def test_select_dtypes_bad_arg_raises(self): + df = DataFrame({'a': list('abc'), + 'g': list(u('abc')), + 'b': list(range(1, 4)), + 'c': np.arange(3, 6).astype('u1'), + 'd': np.arange(4.0, 7.0, dtype='float64'), + 'e': [True, False, True], + 'f': pd.date_range('now', periods=3).values}) + with tm.assertRaisesRegexp(TypeError, 'data type.*not understood'): + df.select_dtypes(['blargy, blarg, blarg']) + + def test_select_dtypes_typecodes(self): + # GH 11990 + df = mkdf(30, 3, data_gen_f=lambda x, y: np.random.random()) + expected = df + FLOAT_TYPES = list(np.typecodes['AllFloat']) + assert_frame_equal(df.select_dtypes(FLOAT_TYPES), expected) + + def test_dtypes_gh8722(self): + self.mixed_frame['bool'] = self.mixed_frame['A'] > 0 + result = self.mixed_frame.dtypes + expected = Series(dict((k, v.dtype) + for k, v in compat.iteritems(self.mixed_frame)), + index=result.index) + assert_series_equal(result, expected) + + # compat, GH 8722 + with option_context('use_inf_as_null', True): + df = DataFrame([[1]]) + result = df.dtypes + assert_series_equal(result, Series({0: np.dtype('int64')})) + + def test_ftypes(self): + frame = self.mixed_float + expected = Series(dict(A='float32:dense', + B='float32:dense', + C='float16:dense', + D='float64:dense')).sort_values() + result = frame.ftypes.sort_values() + assert_series_equal(result, expected) + + def test_astype(self): + casted = self.frame.astype(int) + expected = DataFrame(self.frame.values.astype(int), + index=self.frame.index, + columns=self.frame.columns) + assert_frame_equal(casted, 
expected) + + casted = self.frame.astype(np.int32) + expected = DataFrame(self.frame.values.astype(np.int32), + index=self.frame.index, + columns=self.frame.columns) + assert_frame_equal(casted, expected) + + self.frame['foo'] = '5' + casted = self.frame.astype(int) + expected = DataFrame(self.frame.values.astype(int), + index=self.frame.index, + columns=self.frame.columns) + assert_frame_equal(casted, expected) + + # mixed casting + def _check_cast(df, v): + self.assertEqual( + list(set([s.dtype.name + for _, s in compat.iteritems(df)]))[0], v) + + mn = self.all_mixed._get_numeric_data().copy() + mn['little_float'] = np.array(12345., dtype='float16') + mn['big_float'] = np.array(123456789101112., dtype='float64') + + casted = mn.astype('float64') + _check_cast(casted, 'float64') + + casted = mn.astype('int64') + _check_cast(casted, 'int64') + + casted = self.mixed_float.reindex(columns=['A', 'B']).astype('float32') + _check_cast(casted, 'float32') + + casted = mn.reindex(columns=['little_float']).astype('float16') + _check_cast(casted, 'float16') + + casted = self.mixed_float.reindex(columns=['A', 'B']).astype('float16') + _check_cast(casted, 'float16') + + casted = mn.astype('float32') + _check_cast(casted, 'float32') + + casted = mn.astype('int32') + _check_cast(casted, 'int32') + + # to object + casted = mn.astype('O') + _check_cast(casted, 'object') + + def test_astype_with_exclude_string(self): + df = self.frame.copy() + expected = self.frame.astype(int) + df['string'] = 'foo' + casted = df.astype(int, raise_on_error=False) + + expected['string'] = 'foo' + assert_frame_equal(casted, expected) + + df = self.frame.copy() + expected = self.frame.astype(np.int32) + df['string'] = 'foo' + casted = df.astype(np.int32, raise_on_error=False) + + expected['string'] = 'foo' + assert_frame_equal(casted, expected) + + def test_astype_with_view(self): + + tf = self.mixed_float.reindex(columns=['A', 'B', 'C']) + + casted = tf.astype(np.int64) + + casted = 
tf.astype(np.float32) + + # this is the only real reason to do it this way + tf = np.round(self.frame).astype(np.int32) + casted = tf.astype(np.float32, copy=False) + + # TODO(wesm): verification? + tf = self.frame.astype(np.float64) + casted = tf.astype(np.int64, copy=False) # noqa + + def test_astype_cast_nan_int(self): + df = DataFrame(data={"Values": [1.0, 2.0, 3.0, np.nan]}) + self.assertRaises(ValueError, df.astype, np.int64) + + def test_astype_str(self): + # GH9757 + a = Series(date_range('2010-01-04', periods=5)) + b = Series(date_range('3/6/2012 00:00', periods=5, tz='US/Eastern')) + c = Series([Timedelta(x, unit='d') for x in range(5)]) + d = Series(range(5)) + e = Series([0.0, 0.2, 0.4, 0.6, 0.8]) + + df = DataFrame({'a': a, 'b': b, 'c': c, 'd': d, 'e': e}) + + # datetimelike + # Test str and unicode on python 2.x and just str on python 3.x + for tt in set([str, compat.text_type]): + result = df.astype(tt) + + expected = DataFrame({ + 'a': list(map(tt, map(lambda x: Timestamp(x)._date_repr, + a._values))), + 'b': list(map(tt, map(Timestamp, b._values))), + 'c': list(map(tt, map(lambda x: Timedelta(x) + ._repr_base(format='all'), c._values))), + 'd': list(map(tt, d._values)), + 'e': list(map(tt, e._values)), + }) + + assert_frame_equal(result, expected) + + # float/nan + # 11302 + # consistency in astype(str) + for tt in set([str, compat.text_type]): + result = DataFrame([np.NaN]).astype(tt) + expected = DataFrame(['nan']) + assert_frame_equal(result, expected) + + result = DataFrame([1.12345678901234567890]).astype(tt) + expected = DataFrame(['1.12345678901']) + assert_frame_equal(result, expected) + + def test_timedeltas(self): + df = DataFrame(dict(A=Series(date_range('2012-1-1', periods=3, + freq='D')), + B=Series([timedelta(days=i) for i in range(3)]))) + result = df.get_dtype_counts().sort_values() + expected = Series( + {'datetime64[ns]': 1, 'timedelta64[ns]': 1}).sort_values() + assert_series_equal(result, expected) + + df['C'] = df['A'] + 
df['B'] + expected = Series( + {'datetime64[ns]': 2, 'timedelta64[ns]': 1}).sort_values() + result = df.get_dtype_counts().sort_values() + assert_series_equal(result, expected) + + # mixed int types + df['D'] = 1 + expected = Series({'datetime64[ns]': 2, + 'timedelta64[ns]': 1, + 'int64': 1}).sort_values() + result = df.get_dtype_counts().sort_values() + assert_series_equal(result, expected) diff --git a/pandas/tests/frame/test_indexing.py b/pandas/tests/frame/test_indexing.py new file mode 100644 index 0000000000000..ff9567c8a40b1 --- /dev/null +++ b/pandas/tests/frame/test_indexing.py @@ -0,0 +1,2600 @@ +# -*- coding: utf-8 -*- + +from __future__ import print_function + +from datetime import datetime, date, timedelta, time + +from pandas.compat import map, zip, range, lrange, lzip, long +from pandas import compat + +from numpy import nan +from numpy.random import randn +import numpy as np + +import pandas.core.common as com +from pandas import (DataFrame, Index, Series, notnull, isnull, + MultiIndex, DatetimeIndex, Timestamp, + date_range) +import pandas as pd + +from pandas.util.testing import (assert_almost_equal, + assert_numpy_array_equal, + assert_series_equal, + assert_frame_equal, + assertRaisesRegexp, + assertRaises) +from pandas.core.indexing import IndexingError + +import pandas.util.testing as tm + +from pandas.tests.frame.common import TestData + + +class TestDataFrameIndexing(tm.TestCase, TestData): + + _multiprocess_can_split_ = True + + def test_getitem(self): + # slicing + sl = self.frame[:20] + self.assertEqual(20, len(sl.index)) + + # column access + + for _, series in compat.iteritems(sl): + self.assertEqual(20, len(series.index)) + self.assertTrue(tm.equalContents(series.index, sl.index)) + + for key, _ in compat.iteritems(self.frame._series): + self.assertIsNotNone(self.frame[key]) + + self.assertNotIn('random', self.frame) + with assertRaisesRegexp(KeyError, 'random'): + self.frame['random'] + + df = self.frame.copy() + df['$10'] = 
randn(len(df)) + ad = randn(len(df)) + df['@awesome_domain'] = ad + self.assertRaises(KeyError, df.__getitem__, 'df["$10"]') + res = df['@awesome_domain'] + assert_numpy_array_equal(ad, res.values) + + def test_getitem_dupe_cols(self): + df = DataFrame([[1, 2, 3], [4, 5, 6]], columns=['a', 'a', 'b']) + try: + df[['baf']] + except KeyError: + pass + else: + self.fail("Dataframe failed to raise KeyError") + + def test_get(self): + b = self.frame.get('B') + assert_series_equal(b, self.frame['B']) + + self.assertIsNone(self.frame.get('foo')) + assert_series_equal(self.frame.get('foo', self.frame['B']), + self.frame['B']) + # None + # GH 5652 + for df in [DataFrame(), DataFrame(columns=list('AB')), + DataFrame(columns=list('AB'), index=range(3))]: + result = df.get(None) + self.assertIsNone(result) + + def test_getitem_iterator(self): + idx = iter(['A', 'B', 'C']) + result = self.frame.ix[:, idx] + expected = self.frame.ix[:, ['A', 'B', 'C']] + assert_frame_equal(result, expected) + + def test_getitem_list(self): + self.frame.columns.name = 'foo' + + result = self.frame[['B', 'A']] + result2 = self.frame[Index(['B', 'A'])] + + expected = self.frame.ix[:, ['B', 'A']] + expected.columns.name = 'foo' + + assert_frame_equal(result, expected) + assert_frame_equal(result2, expected) + + self.assertEqual(result.columns.name, 'foo') + + with assertRaisesRegexp(KeyError, 'not in index'): + self.frame[['B', 'A', 'food']] + with assertRaisesRegexp(KeyError, 'not in index'): + self.frame[Index(['B', 'A', 'foo'])] + + # tuples + df = DataFrame(randn(8, 3), + columns=Index([('foo', 'bar'), ('baz', 'qux'), + ('peek', 'aboo')], name=['sth', 'sth2'])) + + result = df[[('foo', 'bar'), ('baz', 'qux')]] + expected = df.ix[:, :2] + assert_frame_equal(result, expected) + self.assertEqual(result.columns.names, ['sth', 'sth2']) + + def test_setitem_list(self): + + self.frame['E'] = 'foo' + data = self.frame[['A', 'B']] + self.frame[['B', 'A']] = data + + assert_series_equal(self.frame['B'], 
data['A'], check_names=False)
+        assert_series_equal(self.frame['A'], data['B'], check_names=False)
+
+        with assertRaisesRegexp(ValueError,
+                                'Columns must be same length as key'):
+            data[['A']] = self.frame[['A', 'B']]
+
+        with assertRaisesRegexp(ValueError, 'Length of values does not match '
+                                'length of index'):
+            data['A'] = range(len(data.index) - 1)
+
+        df = DataFrame(0, lrange(3), ['tt1', 'tt2'], dtype=np.int_)
+        df.ix[1, ['tt1', 'tt2']] = [1, 2]
+
+        result = df.ix[1, ['tt1', 'tt2']]
+        expected = Series([1, 2], df.columns, dtype=np.int_, name=1)
+        assert_series_equal(result, expected)
+
+        df['tt1'] = df['tt2'] = '0'
+        df.ix[1, ['tt1', 'tt2']] = ['1', '2']
+        result = df.ix[1, ['tt1', 'tt2']]
+        expected = Series(['1', '2'], df.columns, name=1)
+        assert_series_equal(result, expected)
+
+    def test_setitem_list_not_dataframe(self):
+        data = np.random.randn(len(self.frame), 2)
+        self.frame[['A', 'B']] = data
+        assert_almost_equal(self.frame[['A', 'B']].values, data)
+
+    def test_setitem_list_of_tuples(self):
+        tuples = lzip(self.frame['A'], self.frame['B'])
+        self.frame['tuples'] = tuples
+
+        result = self.frame['tuples']
+        expected = Series(tuples, index=self.frame.index, name='tuples')
+        assert_series_equal(result, expected)
+
+    def test_setitem_multi_index(self):
+        # GH7655, test that assigning to a sub-frame of a frame
+        # with multi-index columns aligns both rows and columns
+        it = ['jim', 'joe', 'jolie'], ['first', 'last'], \
+            ['left', 'center', 'right']
+
+        cols = MultiIndex.from_product(it)
+        index = pd.date_range('20141006', periods=20)
+        vals = np.random.randint(1, 1000, (len(index), len(cols)))
+        df = pd.DataFrame(vals, columns=cols, index=index)
+
+        i, j = df.index.values.copy(), it[-1][:]
+
+        np.random.shuffle(i)
+        df['jim'] = df['jolie'].loc[i, ::-1]
+        assert_frame_equal(df['jim'], df['jolie'])
+
+        np.random.shuffle(j)
+        df[('joe', 'first')] = df[('jolie', 'last')].loc[i, j]
+        assert_frame_equal(df[('joe', 'first')], df[('jolie', 'last')])
+
np.random.shuffle(j) + df[('joe', 'last')] = df[('jolie', 'first')].loc[i, j] + assert_frame_equal(df[('joe', 'last')], df[('jolie', 'first')]) + + def test_getitem_boolean(self): + # boolean indexing + d = self.tsframe.index[10] + indexer = self.tsframe.index > d + indexer_obj = indexer.astype(object) + + subindex = self.tsframe.index[indexer] + subframe = self.tsframe[indexer] + + self.assert_numpy_array_equal(subindex, subframe.index) + with assertRaisesRegexp(ValueError, 'Item wrong length'): + self.tsframe[indexer[:-1]] + + subframe_obj = self.tsframe[indexer_obj] + assert_frame_equal(subframe_obj, subframe) + + with tm.assertRaisesRegexp(ValueError, 'boolean values only'): + self.tsframe[self.tsframe] + + # test that Series work + indexer_obj = Series(indexer_obj, self.tsframe.index) + + subframe_obj = self.tsframe[indexer_obj] + assert_frame_equal(subframe_obj, subframe) + + # test that Series indexers reindex + with tm.assert_produces_warning(UserWarning): + indexer_obj = indexer_obj.reindex(self.tsframe.index[::-1]) + + subframe_obj = self.tsframe[indexer_obj] + assert_frame_equal(subframe_obj, subframe) + + # test df[df > 0] + for df in [self.tsframe, self.mixed_frame, + self.mixed_float, self.mixed_int]: + + data = df._get_numeric_data() + bif = df[df > 0] + bifw = DataFrame(dict([(c, np.where(data[c] > 0, data[c], np.nan)) + for c in data.columns]), + index=data.index, columns=data.columns) + + # add back other columns to compare + for c in df.columns: + if c not in bifw: + bifw[c] = df[c] + bifw = bifw.reindex(columns=df.columns) + + assert_frame_equal(bif, bifw, check_dtype=False) + for c in df.columns: + if bif[c].dtype != bifw[c].dtype: + self.assertEqual(bif[c].dtype, df[c].dtype) + + def test_getitem_boolean_casting(self): + + # don't upcast if we don't need to + df = self.tsframe.copy() + df['E'] = 1 + df['E'] = df['E'].astype('int32') + df['E1'] = df['E'].copy() + df['F'] = 1 + df['F'] = df['F'].astype('int64') + df['F1'] = df['F'].copy() + + 
casted = df[df > 0] + result = casted.get_dtype_counts() + expected = Series({'float64': 4, 'int32': 2, 'int64': 2}) + assert_series_equal(result, expected) + + # int block splitting + df.ix[1:3, ['E1', 'F1']] = 0 + casted = df[df > 0] + result = casted.get_dtype_counts() + expected = Series({'float64': 6, 'int32': 1, 'int64': 1}) + assert_series_equal(result, expected) + + # where dtype conversions + # GH 3733 + df = DataFrame(data=np.random.randn(100, 50)) + df = df.where(df > 0) # create nans + bools = df > 0 + mask = isnull(df) + expected = bools.astype(float).mask(mask) + result = bools.mask(mask) + assert_frame_equal(result, expected) + + def test_getitem_boolean_list(self): + df = DataFrame(np.arange(12).reshape(3, 4)) + + def _checkit(lst): + result = df[lst] + expected = df.ix[df.index[lst]] + assert_frame_equal(result, expected) + + _checkit([True, False, True]) + _checkit([True, True, True]) + _checkit([False, False, False]) + + def test_getitem_boolean_iadd(self): + arr = randn(5, 5) + + df = DataFrame(arr.copy(), columns=['A', 'B', 'C', 'D', 'E']) + + df[df < 0] += 1 + arr[arr < 0] += 1 + + assert_almost_equal(df.values, arr) + + def test_boolean_index_empty_corner(self): + # #2096 + blah = DataFrame(np.empty([0, 1]), columns=['A'], + index=DatetimeIndex([])) + + # both of these should succeed trivially + k = np.array([], bool) + + blah[k] + blah[k] = 0 + + def test_getitem_ix_mixed_integer(self): + df = DataFrame(np.random.randn(4, 3), + index=[1, 10, 'C', 'E'], columns=[1, 2, 3]) + + result = df.ix[:-1] + expected = df.ix[df.index[:-1]] + assert_frame_equal(result, expected) + + result = df.ix[[1, 10]] + expected = df.ix[Index([1, 10], dtype=object)] + assert_frame_equal(result, expected) + + # 11320 + df = pd.DataFrame({"rna": (1.5, 2.2, 3.2, 4.5), + -1000: [11, 21, 36, 40], + 0: [10, 22, 43, 34], + 1000: [0, 10, 20, 30]}, + columns=['rna', -1000, 0, 1000]) + result = df[[1000]] + expected = df.iloc[:, [3]] + assert_frame_equal(result, expected) + 
result = df[[-1000]] + expected = df.iloc[:, [1]] + assert_frame_equal(result, expected) + + def test_getitem_setitem_ix_negative_integers(self): + result = self.frame.ix[:, -1] + assert_series_equal(result, self.frame['D']) + + result = self.frame.ix[:, [-1]] + assert_frame_equal(result, self.frame[['D']]) + + result = self.frame.ix[:, [-1, -2]] + assert_frame_equal(result, self.frame[['D', 'C']]) + + self.frame.ix[:, [-1]] = 0 + self.assertTrue((self.frame['D'] == 0).all()) + + df = DataFrame(np.random.randn(8, 4)) + self.assertTrue(isnull(df.ix[:, [-1]].values).all()) + + # #1942 + a = DataFrame(randn(20, 2), index=[chr(x + 65) for x in range(20)]) + a.ix[-1] = a.ix[-2] + + assert_series_equal(a.ix[-1], a.ix[-2], check_names=False) + self.assertEqual(a.ix[-1].name, 'T') + self.assertEqual(a.ix[-2].name, 'S') + + def test_getattr(self): + assert_series_equal(self.frame.A, self.frame['A']) + self.assertRaises(AttributeError, getattr, self.frame, + 'NONEXISTENT_NAME') + + def test_setattr_column(self): + df = DataFrame({'foobar': 1}, index=lrange(10)) + + df.foobar = 5 + self.assertTrue((df.foobar == 5).all()) + + def test_setitem(self): + # not sure what else to do here + series = self.frame['A'][::2] + self.frame['col5'] = series + self.assertIn('col5', self.frame) + tm.assert_dict_equal(series, self.frame['col5'], + compare_keys=False) + + series = self.frame['A'] + self.frame['col6'] = series + tm.assert_dict_equal(series, self.frame['col6'], + compare_keys=False) + + with tm.assertRaises(KeyError): + self.frame[randn(len(self.frame) + 1)] = 1 + + # set ndarray + arr = randn(len(self.frame)) + self.frame['col9'] = arr + self.assertTrue((self.frame['col9'] == arr).all()) + + self.frame['col7'] = 5 + assert((self.frame['col7'] == 5).all()) + + self.frame['col0'] = 3.14 + assert((self.frame['col0'] == 3.14).all()) + + self.frame['col8'] = 'foo' + assert((self.frame['col8'] == 'foo').all()) + + # this is partially a view (e.g. 
some blocks are view) + # so raise/warn + smaller = self.frame[:2] + + def f(): + smaller['col10'] = ['1', '2'] + self.assertRaises(com.SettingWithCopyError, f) + self.assertEqual(smaller['col10'].dtype, np.object_) + self.assertTrue((smaller['col10'] == ['1', '2']).all()) + + # with a dtype + for dtype in ['int32', 'int64', 'float32', 'float64']: + self.frame[dtype] = np.array(arr, dtype=dtype) + self.assertEqual(self.frame[dtype].dtype.name, dtype) + + # dtype changing GH4204 + df = DataFrame([[0, 0]]) + df.iloc[0] = np.nan + expected = DataFrame([[np.nan, np.nan]]) + assert_frame_equal(df, expected) + + df = DataFrame([[0, 0]]) + df.loc[0] = np.nan + assert_frame_equal(df, expected) + + def test_setitem_tuple(self): + self.frame['A', 'B'] = self.frame['A'] + assert_series_equal(self.frame['A', 'B'], self.frame[ + 'A'], check_names=False) + + def test_setitem_always_copy(self): + s = self.frame['A'].copy() + self.frame['E'] = s + + self.frame['E'][5:10] = nan + self.assertTrue(notnull(s[5:10]).all()) + + def test_setitem_boolean(self): + df = self.frame.copy() + values = self.frame.values + + df[df['A'] > 0] = 4 + values[values[:, 0] > 0] = 4 + assert_almost_equal(df.values, values) + + # test that column reindexing works + series = df['A'] == 4 + series = series.reindex(df.index[::-1]) + df[series] = 1 + values[values[:, 0] == 4] = 1 + assert_almost_equal(df.values, values) + + df[df > 0] = 5 + values[values > 0] = 5 + assert_almost_equal(df.values, values) + + df[df == 5] = 0 + values[values == 5] = 0 + assert_almost_equal(df.values, values) + + # a df that needs alignment first + df[df[:-1] < 0] = 2 + np.putmask(values[:-1], values[:-1] < 0, 2) + assert_almost_equal(df.values, values) + + # indexed with same shape but rows-reversed df + df[df[::-1] == 2] = 3 + values[values == 2] = 3 + assert_almost_equal(df.values, values) + + with assertRaisesRegexp(TypeError, 'Must pass DataFrame with boolean ' + 'values only'): + df[df * 0] = 2 + + # index with DataFrame + 
mask = df > np.abs(df) + expected = df.copy() + df[df > np.abs(df)] = nan + expected.values[mask.values] = nan + assert_frame_equal(df, expected) + + # set from DataFrame + expected = df.copy() + df[df > np.abs(df)] = df * 2 + np.putmask(expected.values, mask.values, df.values * 2) + assert_frame_equal(df, expected) + + def test_setitem_cast(self): + self.frame['D'] = self.frame['D'].astype('i8') + self.assertEqual(self.frame['D'].dtype, np.int64) + + # #669, should not cast? + # this is now set to int64, which means a replacement of the column to + # the value dtype (and nothing to do with the existing dtype) + self.frame['B'] = 0 + self.assertEqual(self.frame['B'].dtype, np.int64) + + # cast if pass array of course + self.frame['B'] = np.arange(len(self.frame)) + self.assertTrue(issubclass(self.frame['B'].dtype.type, np.integer)) + + self.frame['foo'] = 'bar' + self.frame['foo'] = 0 + self.assertEqual(self.frame['foo'].dtype, np.int64) + + self.frame['foo'] = 'bar' + self.frame['foo'] = 2.5 + self.assertEqual(self.frame['foo'].dtype, np.float64) + + self.frame['something'] = 0 + self.assertEqual(self.frame['something'].dtype, np.int64) + self.frame['something'] = 2 + self.assertEqual(self.frame['something'].dtype, np.int64) + self.frame['something'] = 2.5 + self.assertEqual(self.frame['something'].dtype, np.float64) + + # GH 7704 + # dtype conversion on setting + df = DataFrame(np.random.rand(30, 3), columns=tuple('ABC')) + df['event'] = np.nan + df.loc[10, 'event'] = 'foo' + result = df.get_dtype_counts().sort_values() + expected = Series({'float64': 3, 'object': 1}).sort_values() + assert_series_equal(result, expected) + + # Test that data type is preserved . 
#5782 + df = DataFrame({'one': np.arange(6, dtype=np.int8)}) + df.loc[1, 'one'] = 6 + self.assertEqual(df.dtypes.one, np.dtype(np.int8)) + df.one = np.int8(7) + self.assertEqual(df.dtypes.one, np.dtype(np.int8)) + + def test_setitem_boolean_column(self): + expected = self.frame.copy() + mask = self.frame['A'] > 0 + + self.frame.ix[mask, 'B'] = 0 + expected.values[mask.values, 1] = 0 + + assert_frame_equal(self.frame, expected) + + def test_setitem_corner(self): + # corner case + df = DataFrame({'B': [1., 2., 3.], + 'C': ['a', 'b', 'c']}, + index=np.arange(3)) + del df['B'] + df['B'] = [1., 2., 3.] + self.assertIn('B', df) + self.assertEqual(len(df.columns), 2) + + df['A'] = 'beginning' + df['E'] = 'foo' + df['D'] = 'bar' + df[datetime.now()] = 'date' + df[datetime.now()] = 5. + + # what to do when empty frame with index + dm = DataFrame(index=self.frame.index) + dm['A'] = 'foo' + dm['B'] = 'bar' + self.assertEqual(len(dm.columns), 2) + self.assertEqual(dm.values.dtype, np.object_) + + # upcast + dm['C'] = 1 + self.assertEqual(dm['C'].dtype, np.int64) + + dm['E'] = 1. 
+ self.assertEqual(dm['E'].dtype, np.float64) + + # set existing column + dm['A'] = 'bar' + self.assertEqual('bar', dm['A'][0]) + + dm = DataFrame(index=np.arange(3)) + dm['A'] = 1 + dm['foo'] = 'bar' + del dm['foo'] + dm['foo'] = 'bar' + self.assertEqual(dm['foo'].dtype, np.object_) + + dm['coercable'] = ['1', '2', '3'] + self.assertEqual(dm['coercable'].dtype, np.object_) + + def test_setitem_corner2(self): + data = {"title": ['foobar', 'bar', 'foobar'] + ['foobar'] * 17, + "cruft": np.random.random(20)} + + df = DataFrame(data) + ix = df[df['title'] == 'bar'].index + + df.ix[ix, ['title']] = 'foobar' + df.ix[ix, ['cruft']] = 0 + + assert(df.ix[1, 'title'] == 'foobar') + assert(df.ix[1, 'cruft'] == 0) + + def test_setitem_ambig(self): + # difficulties with mixed-type data + from decimal import Decimal + + # created as float type + dm = DataFrame(index=lrange(3), columns=lrange(3)) + + coercable_series = Series([Decimal(1) for _ in range(3)], + index=lrange(3)) + uncoercable_series = Series(['foo', 'bzr', 'baz'], index=lrange(3)) + + dm[0] = np.ones(3) + self.assertEqual(len(dm.columns), 3) + # self.assertIsNone(dm.objects) + + dm[1] = coercable_series + self.assertEqual(len(dm.columns), 3) + # self.assertIsNone(dm.objects) + + dm[2] = uncoercable_series + self.assertEqual(len(dm.columns), 3) + # self.assertIsNotNone(dm.objects) + self.assertEqual(dm[2].dtype, np.object_) + + def test_setitem_clear_caches(self): + # GH #304 + df = DataFrame({'x': [1.1, 2.1, 3.1, 4.1], 'y': [5.1, 6.1, 7.1, 8.1]}, + index=[0, 1, 2, 3]) + df.insert(2, 'z', np.nan) + + # cache it + foo = df['z'] + + df.ix[2:, 'z'] = 42 + + expected = Series([np.nan, np.nan, 42, 42], index=df.index, name='z') + self.assertIsNot(df['z'], foo) + assert_series_equal(df['z'], expected) + + def test_setitem_None(self): + # GH #766 + self.frame[None] = self.frame['A'] + assert_series_equal( + self.frame.iloc[:, -1], self.frame['A'], check_names=False) + assert_series_equal(self.frame.loc[:, None], 
self.frame[ + 'A'], check_names=False) + assert_series_equal(self.frame[None], self.frame[ + 'A'], check_names=False) + repr(self.frame) + + def test_setitem_empty(self): + # GH 9596 + df = pd.DataFrame({'a': ['1', '2', '3'], + 'b': ['11', '22', '33'], + 'c': ['111', '222', '333']}) + + result = df.copy() + result.loc[result.b.isnull(), 'a'] = result.a + assert_frame_equal(result, df) + + def test_setitem_empty_frame_with_boolean(self): + # Test for issue #10126 + + for dtype in ('float', 'int64'): + for df in [ + pd.DataFrame(dtype=dtype), + pd.DataFrame(dtype=dtype, index=[1]), + pd.DataFrame(dtype=dtype, columns=['A']), + ]: + df2 = df.copy() + df[df > df2] = 47 + assert_frame_equal(df, df2) + + def test_getitem_empty_frame_with_boolean(self): + # Test for issue #11859 + + df = pd.DataFrame() + df2 = df[df > 0] + assert_frame_equal(df, df2) + + def test_delitem_corner(self): + f = self.frame.copy() + del f['D'] + self.assertEqual(len(f.columns), 3) + self.assertRaises(KeyError, f.__delitem__, 'D') + del f['B'] + self.assertEqual(len(f.columns), 2) + + def test_getitem_fancy_2d(self): + f = self.frame + ix = f.ix + + assert_frame_equal(ix[:, ['B', 'A']], f.reindex(columns=['B', 'A'])) + + subidx = self.frame.index[[5, 4, 1]] + assert_frame_equal(ix[subidx, ['B', 'A']], + f.reindex(index=subidx, columns=['B', 'A'])) + + # slicing rows, etc. + assert_frame_equal(ix[5:10], f[5:10]) + assert_frame_equal(ix[5:10, :], f[5:10]) + assert_frame_equal(ix[:5, ['A', 'B']], + f.reindex(index=f.index[:5], columns=['A', 'B'])) + + # slice rows with labels, inclusive! 
+ expected = ix[5:11] + result = ix[f.index[5]:f.index[10]] + assert_frame_equal(expected, result) + + # slice columns + assert_frame_equal(ix[:, :2], f.reindex(columns=['A', 'B'])) + + # get view + exp = f.copy() + ix[5:10].values[:] = 5 + exp.values[5:10] = 5 + assert_frame_equal(f, exp) + + self.assertRaises(ValueError, ix.__getitem__, f > 0.5) + + def test_slice_floats(self): + index = [52195.504153, 52196.303147, 52198.369883] + df = DataFrame(np.random.rand(3, 2), index=index) + + s1 = df.ix[52195.1:52196.5] + self.assertEqual(len(s1), 2) + + s1 = df.ix[52195.1:52196.6] + self.assertEqual(len(s1), 2) + + s1 = df.ix[52195.1:52198.9] + self.assertEqual(len(s1), 3) + + def test_getitem_fancy_slice_integers_step(self): + df = DataFrame(np.random.randn(10, 5)) + + # this is OK + result = df.ix[:8:2] # noqa + df.ix[:8:2] = np.nan + self.assertTrue(isnull(df.ix[:8:2]).values.all()) + + def test_getitem_setitem_integer_slice_keyerrors(self): + df = DataFrame(np.random.randn(10, 5), index=lrange(0, 20, 2)) + + # this is OK + cp = df.copy() + cp.ix[4:10] = 0 + self.assertTrue((cp.ix[4:10] == 0).values.all()) + + # so is this + cp = df.copy() + cp.ix[3:11] = 0 + self.assertTrue((cp.ix[3:11] == 0).values.all()) + + result = df.ix[4:10] + result2 = df.ix[3:11] + expected = df.reindex([4, 6, 8, 10]) + + assert_frame_equal(result, expected) + assert_frame_equal(result2, expected) + + # non-monotonic, raise KeyError + df2 = df.iloc[lrange(5) + lrange(5, 10)[::-1]] + self.assertRaises(KeyError, df2.ix.__getitem__, slice(3, 11)) + self.assertRaises(KeyError, df2.ix.__setitem__, slice(3, 11), 0) + + def test_setitem_fancy_2d(self): + f = self.frame + + ix = f.ix # noqa + + # case 1 + frame = self.frame.copy() + expected = frame.copy() + frame.ix[:, ['B', 'A']] = 1 + expected['B'] = 1. + expected['A'] = 1. 
+ assert_frame_equal(frame, expected) + + # case 2 + frame = self.frame.copy() + frame2 = self.frame.copy() + + expected = frame.copy() + + subidx = self.frame.index[[5, 4, 1]] + values = randn(3, 2) + + frame.ix[subidx, ['B', 'A']] = values + frame2.ix[[5, 4, 1], ['B', 'A']] = values + + expected['B'].ix[subidx] = values[:, 0] + expected['A'].ix[subidx] = values[:, 1] + + assert_frame_equal(frame, expected) + assert_frame_equal(frame2, expected) + + # case 3: slicing rows, etc. + frame = self.frame.copy() + + expected1 = self.frame.copy() + frame.ix[5:10] = 1. + expected1.values[5:10] = 1. + assert_frame_equal(frame, expected1) + + expected2 = self.frame.copy() + arr = randn(5, len(frame.columns)) + frame.ix[5:10] = arr + expected2.values[5:10] = arr + assert_frame_equal(frame, expected2) + + # case 4 + frame = self.frame.copy() + frame.ix[5:10, :] = 1. + assert_frame_equal(frame, expected1) + frame.ix[5:10, :] = arr + assert_frame_equal(frame, expected2) + + # case 5 + frame = self.frame.copy() + frame2 = self.frame.copy() + + expected = self.frame.copy() + values = randn(5, 2) + + frame.ix[:5, ['A', 'B']] = values + expected['A'][:5] = values[:, 0] + expected['B'][:5] = values[:, 1] + assert_frame_equal(frame, expected) + + frame2.ix[:5, [0, 1]] = values + assert_frame_equal(frame2, expected) + + # case 6: slice rows with labels, inclusive! + frame = self.frame.copy() + expected = self.frame.copy() + + frame.ix[frame.index[5]:frame.index[10]] = 5. + expected.values[5:11] = 5 + assert_frame_equal(frame, expected) + + # case 7: slice columns + frame = self.frame.copy() + frame2 = self.frame.copy() + expected = self.frame.copy() + + # slice indices + frame.ix[:, 1:3] = 4. + expected.values[:, 1:3] = 4. + assert_frame_equal(frame, expected) + + # slice with labels + frame.ix[:, 'B':'C'] = 4. 
+ assert_frame_equal(frame, expected) + + # new corner case of boolean slicing / setting + frame = DataFrame(lzip([2, 3, 9, 6, 7], [np.nan] * 5), + columns=['a', 'b']) + lst = [100] + lst.extend([np.nan] * 4) + expected = DataFrame(lzip([100, 3, 9, 6, 7], lst), + columns=['a', 'b']) + frame[frame['a'] == 2] = 100 + assert_frame_equal(frame, expected) + + def test_fancy_getitem_slice_mixed(self): + sliced = self.mixed_frame.ix[:, -3:] + self.assertEqual(sliced['D'].dtype, np.float64) + + # get view with single block + # setting it triggers setting with copy + sliced = self.frame.ix[:, -3:] + + def f(): + sliced['C'] = 4. + self.assertRaises(com.SettingWithCopyError, f) + self.assertTrue((self.frame['C'] == 4).all()) + + def test_fancy_setitem_int_labels(self): + # integer index defers to label-based indexing + + df = DataFrame(np.random.randn(10, 5), index=np.arange(0, 20, 2)) + + tmp = df.copy() + exp = df.copy() + tmp.ix[[0, 2, 4]] = 5 + exp.values[:3] = 5 + assert_frame_equal(tmp, exp) + + tmp = df.copy() + exp = df.copy() + tmp.ix[6] = 5 + exp.values[3] = 5 + assert_frame_equal(tmp, exp) + + tmp = df.copy() + exp = df.copy() + tmp.ix[:, 2] = 5 + + # tmp correctly sets the dtype + # so match the exp way + exp[2] = 5 + assert_frame_equal(tmp, exp) + + def test_fancy_getitem_int_labels(self): + df = DataFrame(np.random.randn(10, 5), index=np.arange(0, 20, 2)) + + result = df.ix[[4, 2, 0], [2, 0]] + expected = df.reindex(index=[4, 2, 0], columns=[2, 0]) + assert_frame_equal(result, expected) + + result = df.ix[[4, 2, 0]] + expected = df.reindex(index=[4, 2, 0]) + assert_frame_equal(result, expected) + + result = df.ix[4] + expected = df.xs(4) + assert_series_equal(result, expected) + + result = df.ix[:, 3] + expected = df[3] + assert_series_equal(result, expected) + + def test_fancy_index_int_labels_exceptions(self): + df = DataFrame(np.random.randn(10, 5), index=np.arange(0, 20, 2)) + + # labels that aren't contained + self.assertRaises(KeyError, df.ix.__setitem__, 
+ ([0, 1, 2], [2, 3, 4]), 5) + + # try to set indices not contained in frame + self.assertRaises(KeyError, + self.frame.ix.__setitem__, + ['foo', 'bar', 'baz'], 1) + self.assertRaises(KeyError, + self.frame.ix.__setitem__, + (slice(None, None), ['E']), 1) + + # partial setting now allows this GH2578 + # self.assertRaises(KeyError, + # self.frame.ix.__setitem__, + # (slice(None, None), 'E'), 1) + + def test_setitem_fancy_mixed_2d(self): + self.mixed_frame.ix[:5, ['C', 'B', 'A']] = 5 + result = self.mixed_frame.ix[:5, ['C', 'B', 'A']] + self.assertTrue((result.values == 5).all()) + + self.mixed_frame.ix[5] = np.nan + self.assertTrue(isnull(self.mixed_frame.ix[5]).all()) + + self.mixed_frame.ix[5] = self.mixed_frame.ix[6] + assert_series_equal(self.mixed_frame.ix[5], self.mixed_frame.ix[6], + check_names=False) + + # #1432 + df = DataFrame({1: [1., 2., 3.], + 2: [3, 4, 5]}) + self.assertTrue(df._is_mixed_type) + + df.ix[1] = [5, 10] + + expected = DataFrame({1: [1., 5., 3.], + 2: [3, 10, 5]}) + + assert_frame_equal(df, expected) + + def test_ix_align(self): + b = Series(randn(10), name=0).sort_values() + df_orig = DataFrame(randn(10, 4)) + df = df_orig.copy() + + df.ix[:, 0] = b + assert_series_equal(df.ix[:, 0].reindex(b.index), b) + + dft = df_orig.T + dft.ix[0, :] = b + assert_series_equal(dft.ix[0, :].reindex(b.index), b) + + df = df_orig.copy() + df.ix[:5, 0] = b + s = df.ix[:5, 0] + assert_series_equal(s, b.reindex(s.index)) + + dft = df_orig.T + dft.ix[0, :5] = b + s = dft.ix[0, :5] + assert_series_equal(s, b.reindex(s.index)) + + df = df_orig.copy() + idx = [0, 1, 3, 5] + df.ix[idx, 0] = b + s = df.ix[idx, 0] + assert_series_equal(s, b.reindex(s.index)) + + dft = df_orig.T + dft.ix[0, idx] = b + s = dft.ix[0, idx] + assert_series_equal(s, b.reindex(s.index)) + + def test_ix_frame_align(self): + b = DataFrame(np.random.randn(3, 4)) + df_orig = DataFrame(randn(10, 4)) + df = df_orig.copy() + + df.ix[:3] = b + out = b.ix[:3] + assert_frame_equal(out, b) + + 
b.sort_index(inplace=True) + + df = df_orig.copy() + df.ix[[0, 1, 2]] = b + out = df.ix[[0, 1, 2]].reindex(b.index) + assert_frame_equal(out, b) + + df = df_orig.copy() + df.ix[:3] = b + out = df.ix[:3] + assert_frame_equal(out, b.reindex(out.index)) + + def test_getitem_setitem_non_ix_labels(self): + df = tm.makeTimeDataFrame() + + start, end = df.index[[5, 10]] + + result = df.ix[start:end] + result2 = df[start:end] + expected = df[5:11] + assert_frame_equal(result, expected) + assert_frame_equal(result2, expected) + + result = df.copy() + result.ix[start:end] = 0 + result2 = df.copy() + result2[start:end] = 0 + expected = df.copy() + expected[5:11] = 0 + assert_frame_equal(result, expected) + assert_frame_equal(result2, expected) + + def test_ix_multi_take(self): + df = DataFrame(np.random.randn(3, 2)) + rs = df.ix[df.index == 0, :] + xp = df.reindex([0]) + assert_frame_equal(rs, xp) + + """ #1321 + df = DataFrame(np.random.randn(3, 2)) + rs = df.ix[df.index==0, df.columns==1] + xp = df.reindex([0], [1]) + assert_frame_equal(rs, xp) + """ + + def test_ix_multi_take_nonint_index(self): + df = DataFrame(np.random.randn(3, 2), index=['x', 'y', 'z'], + columns=['a', 'b']) + rs = df.ix[[0], [0]] + xp = df.reindex(['x'], columns=['a']) + assert_frame_equal(rs, xp) + + def test_ix_multi_take_multiindex(self): + df = DataFrame(np.random.randn(3, 2), index=['x', 'y', 'z'], + columns=[['a', 'b'], ['1', '2']]) + rs = df.ix[[0], [0]] + xp = df.reindex(['x'], columns=[('a', '1')]) + assert_frame_equal(rs, xp) + + def test_ix_dup(self): + idx = Index(['a', 'a', 'b', 'c', 'd', 'd']) + df = DataFrame(np.random.randn(len(idx), 3), idx) + + sub = df.ix[:'d'] + assert_frame_equal(sub, df) + + sub = df.ix['a':'c'] + assert_frame_equal(sub, df.ix[0:4]) + + sub = df.ix['b':'d'] + assert_frame_equal(sub, df.ix[2:]) + + def test_getitem_fancy_1d(self): + f = self.frame + ix = f.ix + + # return self if no slicing...for now + self.assertIs(ix[:, :], f) + + # low dimensional slice + xs1 = 
ix[2, ['C', 'B', 'A']] + xs2 = f.xs(f.index[2]).reindex(['C', 'B', 'A']) + assert_series_equal(xs1, xs2) + + ts1 = ix[5:10, 2] + ts2 = f[f.columns[2]][5:10] + assert_series_equal(ts1, ts2) + + # positional xs + xs1 = ix[0] + xs2 = f.xs(f.index[0]) + assert_series_equal(xs1, xs2) + + xs1 = ix[f.index[5]] + xs2 = f.xs(f.index[5]) + assert_series_equal(xs1, xs2) + + # single column + assert_series_equal(ix[:, 'A'], f['A']) + + # return view + exp = f.copy() + exp.values[5] = 4 + ix[5][:] = 4 + assert_frame_equal(exp, f) + + exp.values[:, 1] = 6 + ix[:, 1][:] = 6 + assert_frame_equal(exp, f) + + # slice of mixed-frame + xs = self.mixed_frame.ix[5] + exp = self.mixed_frame.xs(self.mixed_frame.index[5]) + assert_series_equal(xs, exp) + + def test_setitem_fancy_1d(self): + + # case 1: set cross-section for indices + frame = self.frame.copy() + expected = self.frame.copy() + + frame.ix[2, ['C', 'B', 'A']] = [1., 2., 3.] + expected['C'][2] = 1. + expected['B'][2] = 2. + expected['A'][2] = 3. + assert_frame_equal(frame, expected) + + frame2 = self.frame.copy() + frame2.ix[2, [3, 2, 1]] = [1., 2., 3.] + assert_frame_equal(frame, expected) + + # case 2, set a section of a column + frame = self.frame.copy() + expected = self.frame.copy() + + vals = randn(5) + expected.values[5:10, 2] = vals + frame.ix[5:10, 2] = vals + assert_frame_equal(frame, expected) + + frame2 = self.frame.copy() + frame2.ix[5:10, 'B'] = vals + assert_frame_equal(frame, expected) + + # case 3: full xs + frame = self.frame.copy() + expected = self.frame.copy() + + frame.ix[4] = 5. + expected.values[4] = 5. + assert_frame_equal(frame, expected) + + frame.ix[frame.index[4]] = 6. + expected.values[4] = 6. + assert_frame_equal(frame, expected) + + # single column + frame = self.frame.copy() + expected = self.frame.copy() + + frame.ix[:, 'A'] = 7. + expected['A'] = 7. 
+ assert_frame_equal(frame, expected) + + def test_getitem_fancy_scalar(self): + f = self.frame + ix = f.ix + # individual value + for col in f.columns: + ts = f[col] + for idx in f.index[::5]: + assert_almost_equal(ix[idx, col], ts[idx]) + + def test_setitem_fancy_scalar(self): + f = self.frame + expected = self.frame.copy() + ix = f.ix + # individual value + for j, col in enumerate(f.columns): + ts = f[col] # noqa + for idx in f.index[::5]: + i = f.index.get_loc(idx) + val = randn() + expected.values[i, j] = val + ix[idx, col] = val + assert_frame_equal(f, expected) + + def test_getitem_fancy_boolean(self): + f = self.frame + ix = f.ix + + expected = f.reindex(columns=['B', 'D']) + result = ix[:, [False, True, False, True]] + assert_frame_equal(result, expected) + + expected = f.reindex(index=f.index[5:10], columns=['B', 'D']) + result = ix[5:10, [False, True, False, True]] + assert_frame_equal(result, expected) + + boolvec = f.index > f.index[7] + expected = f.reindex(index=f.index[boolvec]) + result = ix[boolvec] + assert_frame_equal(result, expected) + result = ix[boolvec, :] + assert_frame_equal(result, expected) + + result = ix[boolvec, 2:] + expected = f.reindex(index=f.index[boolvec], + columns=['C', 'D']) + assert_frame_equal(result, expected) + + def test_setitem_fancy_boolean(self): + # from 2d, set with booleans + frame = self.frame.copy() + expected = self.frame.copy() + + mask = frame['A'] > 0 + frame.ix[mask] = 0. + expected.values[mask.values] = 0. + assert_frame_equal(frame, expected) + + frame = self.frame.copy() + expected = self.frame.copy() + frame.ix[mask, ['A', 'B']] = 0. + expected.values[mask.values, :2] = 0. 
+ assert_frame_equal(frame, expected) + + def test_getitem_fancy_ints(self): + result = self.frame.ix[[1, 4, 7]] + expected = self.frame.ix[self.frame.index[[1, 4, 7]]] + assert_frame_equal(result, expected) + + result = self.frame.ix[:, [2, 0, 1]] + expected = self.frame.ix[:, self.frame.columns[[2, 0, 1]]] + assert_frame_equal(result, expected) + + def test_getitem_setitem_fancy_exceptions(self): + ix = self.frame.ix + with assertRaisesRegexp(IndexingError, 'Too many indexers'): + ix[:, :, :] + + with assertRaises(IndexingError): + ix[:, :, :] = 1 + + def test_getitem_setitem_boolean_misaligned(self): + # boolean index misaligned labels + mask = self.frame['A'][::-1] > 1 + + result = self.frame.ix[mask] + expected = self.frame.ix[mask[::-1]] + assert_frame_equal(result, expected) + + cp = self.frame.copy() + expected = self.frame.copy() + cp.ix[mask] = 0 + expected.ix[mask] = 0 + assert_frame_equal(cp, expected) + + def test_getitem_setitem_boolean_multi(self): + df = DataFrame(np.random.randn(3, 2)) + + # get + k1 = np.array([True, False, True]) + k2 = np.array([False, True]) + result = df.ix[k1, k2] + expected = df.ix[[0, 2], [1]] + assert_frame_equal(result, expected) + + expected = df.copy() + df.ix[np.array([True, False, True]), + np.array([False, True])] = 5 + expected.ix[[0, 2], [1]] = 5 + assert_frame_equal(df, expected) + + def test_getitem_setitem_float_labels(self): + index = Index([1.5, 2, 3, 4, 5]) + df = DataFrame(np.random.randn(5, 5), index=index) + + result = df.ix[1.5:4] + expected = df.reindex([1.5, 2, 3, 4]) + assert_frame_equal(result, expected) + self.assertEqual(len(result), 4) + + result = df.ix[4:5] + expected = df.reindex([4, 5]) # reindex with int + assert_frame_equal(result, expected, check_index_type=False) + self.assertEqual(len(result), 2) + + result = df.ix[4:5] + expected = df.reindex([4.0, 5.0]) # reindex with float + assert_frame_equal(result, expected) + self.assertEqual(len(result), 2) + + # loc_float changes this to work 
properly + result = df.ix[1:2] + expected = df.iloc[0:2] + assert_frame_equal(result, expected) + + df.ix[1:2] = 0 + result = df[1:2] + self.assertTrue((result == 0).all().all()) + + # #2727 + index = Index([1.0, 2.5, 3.5, 4.5, 5.0]) + df = DataFrame(np.random.randn(5, 5), index=index) + + # positional slicing only via iloc! + # stacklevel=False -> needed stacklevel depends on index type + with tm.assert_produces_warning(FutureWarning, check_stacklevel=False): + result = df.iloc[1.0:5] + + expected = df.reindex([2.5, 3.5, 4.5, 5.0]) + assert_frame_equal(result, expected) + self.assertEqual(len(result), 4) + + result = df.iloc[4:5] + expected = df.reindex([5.0]) + assert_frame_equal(result, expected) + self.assertEqual(len(result), 1) + + # GH 4892, float indexers in iloc are deprecated + import warnings + warnings.filterwarnings(action='error', category=FutureWarning) + + cp = df.copy() + + def f(): + cp.iloc[1.0:5] = 0 + self.assertRaises(FutureWarning, f) + + def f(): + result = cp.iloc[1.0:5] == 0 # noqa + + self.assertRaises(FutureWarning, f) + self.assertTrue(result.values.all()) + self.assertTrue((cp.iloc[0:1] == df.iloc[0:1]).values.all()) + + warnings.filterwarnings(action='default', category=FutureWarning) + + cp = df.copy() + cp.iloc[4:5] = 0 + self.assertTrue((cp.iloc[4:5] == 0).values.all()) + self.assertTrue((cp.iloc[0:4] == df.iloc[0:4]).values.all()) + + # float slicing + result = df.ix[1.0:5] + expected = df + assert_frame_equal(result, expected) + self.assertEqual(len(result), 5) + + result = df.ix[1.1:5] + expected = df.reindex([2.5, 3.5, 4.5, 5.0]) + assert_frame_equal(result, expected) + self.assertEqual(len(result), 4) + + result = df.ix[4.51:5] + expected = df.reindex([5.0]) + assert_frame_equal(result, expected) + self.assertEqual(len(result), 1) + + result = df.ix[1.0:5.0] + expected = df.reindex([1.0, 2.5, 3.5, 4.5, 5.0]) + assert_frame_equal(result, expected) + self.assertEqual(len(result), 5) + + cp = df.copy() + cp.ix[1.0:5.0] = 0 + 
+        result = cp.ix[1.0:5.0]
+        self.assertTrue((result == 0).values.all())
+
+    def test_setitem_single_column_mixed(self):
+        df = DataFrame(randn(5, 3), index=['a', 'b', 'c', 'd', 'e'],
+                       columns=['foo', 'bar', 'baz'])
+        df['str'] = 'qux'
+        df.ix[::2, 'str'] = nan
+        expected = [nan, 'qux', nan, 'qux', nan]
+        assert_almost_equal(df['str'].values, expected)
+
+    def test_setitem_single_column_mixed_datetime(self):
+        df = DataFrame(randn(5, 3), index=['a', 'b', 'c', 'd', 'e'],
+                       columns=['foo', 'bar', 'baz'])
+
+        df['timestamp'] = Timestamp('20010102')
+
+        # check our dtypes
+        result = df.get_dtype_counts()
+        expected = Series({'float64': 3, 'datetime64[ns]': 1})
+        assert_series_equal(result, expected)
+
+        # set an allowable datetime64 type
+        from pandas import tslib
+        df.ix['b', 'timestamp'] = tslib.iNaT
+        self.assertTrue(com.isnull(df.ix['b', 'timestamp']))
+
+        # allow this syntax
+        df.ix['c', 'timestamp'] = nan
+        self.assertTrue(com.isnull(df.ix['c', 'timestamp']))
+
+        # allow this syntax
+        df.ix['d', :] = nan
+        self.assertTrue(com.isnull(df.ix['c', :]).all() == False)  # noqa
+
+        # as of GH 3216 this will now work!
+        # try to set with a list like item
+        # self.assertRaises(
+        #    Exception, df.ix.__setitem__, ('d', 'timestamp'), [nan])
+
+    def test_setitem_frame(self):
+        piece = self.frame.ix[:2, ['A', 'B']]
+        self.frame.ix[-2:, ['A', 'B']] = piece.values
+        assert_almost_equal(self.frame.ix[-2:, ['A', 'B']].values,
+                            piece.values)
+
+        # GH 3216
+
+        # already aligned
+        f = self.mixed_frame.copy()
+        piece = DataFrame([[1, 2], [3, 4]], index=f.index[
+            0:2], columns=['A', 'B'])
+        key = (slice(None, 2), ['A', 'B'])
+        f.ix[key] = piece
+        assert_almost_equal(f.ix[0:2, ['A', 'B']].values,
+                            piece.values)
+
+        # rows unaligned
+        f = self.mixed_frame.copy()
+        piece = DataFrame([[1, 2], [3, 4], [5, 6], [7, 8]], index=list(
+            f.index[0:2]) + ['foo', 'bar'], columns=['A', 'B'])
+        key = (slice(None, 2), ['A', 'B'])
+        f.ix[key] = piece
+        assert_almost_equal(f.ix[0:2:, ['A', 'B']].values,
+                            piece.values[0:2])
+
+        # key is unaligned with values
+        f = self.mixed_frame.copy()
+        piece = f.ix[:2, ['A']]
+        piece.index = f.index[-2:]
+        key = (slice(-2, None), ['A', 'B'])
+        f.ix[key] = piece
+        piece['B'] = np.nan
+        assert_almost_equal(f.ix[-2:, ['A', 'B']].values,
+                            piece.values)
+
+        # ndarray
+        f = self.mixed_frame.copy()
+        piece = self.mixed_frame.ix[:2, ['A', 'B']]
+        key = (slice(-2, None), ['A', 'B'])
+        f.ix[key] = piece.values
+        assert_almost_equal(f.ix[-2:, ['A', 'B']].values,
+                            piece.values)
+
+        # needs upcasting
+        df = DataFrame([[1, 2, 'foo'], [3, 4, 'bar']], columns=['A', 'B', 'C'])
+        df2 = df.copy()
+        df2.ix[:, ['A', 'B']] = df.ix[:, ['A', 'B']] + 0.5
+        expected = df.reindex(columns=['A', 'B'])
+        expected += 0.5
+        expected['C'] = df['C']
+        assert_frame_equal(df2, expected)
+
+    def test_setitem_frame_align(self):
+        piece = self.frame.ix[:2, ['A', 'B']]
+        piece.index = self.frame.index[-2:]
+        piece.columns = ['A', 'B']
+        self.frame.ix[-2:, ['A', 'B']] = piece
+        assert_almost_equal(self.frame.ix[-2:, ['A', 'B']].values,
+                            piece.values)
+
+    def test_getitem_setitem_ix_duplicates(self):
+        # #1201
+        df = DataFrame(np.random.randn(5, 3),
+                       index=['foo', 'foo', 'bar', 'baz', 'bar'])
+
+        result = df.ix['foo']
+        expected = df[:2]
+        assert_frame_equal(result, expected)
+
+        result = df.ix['bar']
+        expected = df.ix[[2, 4]]
+        assert_frame_equal(result, expected)
+
+        result = df.ix['baz']
+        expected = df.ix[3]
+        assert_series_equal(result, expected)
+
+    def test_getitem_ix_boolean_duplicates_multiple(self):
+        # #1201
+        df = DataFrame(np.random.randn(5, 3),
+                       index=['foo', 'foo', 'bar', 'baz', 'bar'])
+
+        result = df.ix[['bar']]
+        exp = df.ix[[2, 4]]
+        assert_frame_equal(result, exp)
+
+        result = df.ix[df[1] > 0]
+        exp = df[df[1] > 0]
+        assert_frame_equal(result, exp)
+
+        result = df.ix[df[0] > 0]
+        exp = df[df[0] > 0]
+        assert_frame_equal(result, exp)
+
+    def test_getitem_setitem_ix_bool_keyerror(self):
+        # #2199
+        df = DataFrame({'a': [1, 2, 3]})
+
+        self.assertRaises(KeyError, df.ix.__getitem__, False)
+        self.assertRaises(KeyError, df.ix.__getitem__, True)
+
+        self.assertRaises(KeyError, df.ix.__setitem__, False, 0)
+        self.assertRaises(KeyError, df.ix.__setitem__, True, 0)
+
+    def test_getitem_list_duplicates(self):
+        # #1943
+        df = DataFrame(np.random.randn(4, 4), columns=list('AABC'))
+        df.columns.name = 'foo'
+
+        result = df[['B', 'C']]
+        self.assertEqual(result.columns.name, 'foo')
+
+        expected = df.ix[:, 2:]
+        assert_frame_equal(result, expected)
+
+    def test_get_value(self):
+        for idx in self.frame.index:
+            for col in self.frame.columns:
+                result = self.frame.get_value(idx, col)
+                expected = self.frame[col][idx]
+                assert_almost_equal(result, expected)
+
+    def test_lookup(self):
+        def alt(df, rows, cols):
+            result = []
+            for r, c in zip(rows, cols):
+                result.append(df.get_value(r, c))
+            return result
+
+        def testit(df):
+            rows = list(df.index) * len(df.columns)
+            cols = list(df.columns) * len(df.index)
+            result = df.lookup(rows, cols)
+            expected = alt(df, rows, cols)
+            assert_almost_equal(result, expected)
+
+        testit(self.mixed_frame)
+        testit(self.frame)
+
+        df = DataFrame({'label': ['a', 'b', 'a', 'c'],
+                        'mask_a': [True, True, False, True],
+                        'mask_b': [True, False, False, False],
+                        'mask_c': [False, True, False, True]})
+        df['mask'] = df.lookup(df.index, 'mask_' + df['label'])
+        exp_mask = alt(df, df.index, 'mask_' + df['label'])
+        assert_almost_equal(df['mask'], exp_mask)
+        self.assertEqual(df['mask'].dtype, np.bool_)
+
+        with tm.assertRaises(KeyError):
+            self.frame.lookup(['xyz'], ['A'])
+
+        with tm.assertRaises(KeyError):
+            self.frame.lookup([self.frame.index[0]], ['xyz'])
+
+        with tm.assertRaisesRegexp(ValueError, 'same size'):
+            self.frame.lookup(['a', 'b', 'c'], ['a'])
+
+    def test_set_value(self):
+        for idx in self.frame.index:
+            for col in self.frame.columns:
+                self.frame.set_value(idx, col, 1)
+                assert_almost_equal(self.frame[col][idx], 1)
+
+    def test_set_value_resize(self):
+
+        res = self.frame.set_value('foobar', 'B', 0)
+        self.assertIs(res, self.frame)
+        self.assertEqual(res.index[-1], 'foobar')
+        self.assertEqual(res.get_value('foobar', 'B'), 0)
+
+        self.frame.loc['foobar', 'qux'] = 0
+        self.assertEqual(self.frame.get_value('foobar', 'qux'), 0)
+
+        res = self.frame.copy()
+        res3 = res.set_value('foobar', 'baz', 'sam')
+        self.assertEqual(res3['baz'].dtype, np.object_)
+
+        res = self.frame.copy()
+        res3 = res.set_value('foobar', 'baz', True)
+        self.assertEqual(res3['baz'].dtype, np.object_)
+
+        res = self.frame.copy()
+        res3 = res.set_value('foobar', 'baz', 5)
+        self.assertTrue(com.is_float_dtype(res3['baz']))
+        self.assertTrue(isnull(res3['baz'].drop(['foobar'])).all())
+        self.assertRaises(ValueError, res3.set_value, 'foobar', 'baz', 'sam')
+
+    def test_set_value_with_index_dtype_change(self):
+        df_orig = DataFrame(randn(3, 3), index=lrange(3), columns=list('ABC'))
+
+        # this is actually ambiguous as the 2 is interpreted as a positional
+        # so column is not created
+        df = df_orig.copy()
+        df.set_value('C', 2, 1.0)
+        self.assertEqual(list(df.index), list(df_orig.index) + ['C'])
+        # self.assertEqual(list(df.columns), list(df_orig.columns) + [2])
+
+        df = df_orig.copy()
+        df.loc['C', 2] = 1.0
+        self.assertEqual(list(df.index), list(df_orig.index) + ['C'])
+        # self.assertEqual(list(df.columns), list(df_orig.columns) + [2])
+
+        # create both new
+        df = df_orig.copy()
+        df.set_value('C', 'D', 1.0)
+        self.assertEqual(list(df.index), list(df_orig.index) + ['C'])
+        self.assertEqual(list(df.columns), list(df_orig.columns) + ['D'])
+
+        df = df_orig.copy()
+        df.loc['C', 'D'] = 1.0
+        self.assertEqual(list(df.index), list(df_orig.index) + ['C'])
+        self.assertEqual(list(df.columns), list(df_orig.columns) + ['D'])
+
+    def test_get_set_value_no_partial_indexing(self):
+        # partial w/ MultiIndex raise exception
+        index = MultiIndex.from_tuples([(0, 1), (0, 2), (1, 1), (1, 2)])
+        df = DataFrame(index=index, columns=lrange(4))
+        self.assertRaises(KeyError, df.get_value, 0, 1)
+        # self.assertRaises(KeyError, df.set_value, 0, 1, 0)
+
+    def test_single_element_ix_dont_upcast(self):
+        self.frame['E'] = 1
+        self.assertTrue(issubclass(self.frame['E'].dtype.type,
+                                   (int, np.integer)))
+
+        result = self.frame.ix[self.frame.index[5], 'E']
+        self.assertTrue(com.is_integer(result))
+
+    def test_irow(self):
+        df = DataFrame(np.random.randn(10, 4), index=lrange(0, 20, 2))
+
+        # 10711, deprecated
+        with tm.assert_produces_warning(FutureWarning):
+            df.irow(1)
+
+        result = df.iloc[1]
+        exp = df.ix[2]
+        assert_series_equal(result, exp)
+
+        result = df.iloc[2]
+        exp = df.ix[4]
+        assert_series_equal(result, exp)
+
+        # slice
+        result = df.iloc[slice(4, 8)]
+        expected = df.ix[8:14]
+        assert_frame_equal(result, expected)
+
+        # verify slice is view
+        # setting it makes it raise/warn
+        def f():
+            result[2] = 0.
+        self.assertRaises(com.SettingWithCopyError, f)
+        exp_col = df[2].copy()
+        exp_col[4:8] = 0.
+        assert_series_equal(df[2], exp_col)
+
+        # list of integers
+        result = df.iloc[[1, 2, 4, 6]]
+        expected = df.reindex(df.index[[1, 2, 4, 6]])
+        assert_frame_equal(result, expected)
+
+    def test_icol(self):
+
+        df = DataFrame(np.random.randn(4, 10), columns=lrange(0, 20, 2))
+
+        # 10711, deprecated
+        with tm.assert_produces_warning(FutureWarning):
+            df.icol(1)
+
+        result = df.iloc[:, 1]
+        exp = df.ix[:, 2]
+        assert_series_equal(result, exp)
+
+        result = df.iloc[:, 2]
+        exp = df.ix[:, 4]
+        assert_series_equal(result, exp)
+
+        # slice
+        result = df.iloc[:, slice(4, 8)]
+        expected = df.ix[:, 8:14]
+        assert_frame_equal(result, expected)
+
+        # verify slice is view
+        # and that we are setting a copy
+        def f():
+            result[8] = 0.
+        self.assertRaises(com.SettingWithCopyError, f)
+        self.assertTrue((df[8] == 0).all())
+
+        # list of integers
+        result = df.iloc[:, [1, 2, 4, 6]]
+        expected = df.reindex(columns=df.columns[[1, 2, 4, 6]])
+        assert_frame_equal(result, expected)
+
+    def test_irow_icol_duplicates(self):
+        # 10711, deprecated
+
+        df = DataFrame(np.random.rand(3, 3), columns=list('ABC'),
+                       index=list('aab'))
+
+        result = df.iloc[0]
+        result2 = df.ix[0]
+        tm.assertIsInstance(result, Series)
+        assert_almost_equal(result.values, df.values[0])
+        assert_series_equal(result, result2)
+
+        result = df.T.iloc[:, 0]
+        result2 = df.T.ix[:, 0]
+        tm.assertIsInstance(result, Series)
+        assert_almost_equal(result.values, df.values[0])
+        assert_series_equal(result, result2)
+
+        # multiindex
+        df = DataFrame(np.random.randn(3, 3),
+                       columns=[['i', 'i', 'j'], ['A', 'A', 'B']],
+                       index=[['i', 'i', 'j'], ['X', 'X', 'Y']])
+
+        rs = df.iloc[0]
+        xp = df.ix[0]
+        assert_series_equal(rs, xp)
+
+        rs = df.iloc[:, 0]
+        xp = df.T.ix[0]
+        assert_series_equal(rs, xp)
+
+        rs = df.iloc[:, [0]]
+        xp = df.ix[:, [0]]
+        assert_frame_equal(rs, xp)
+
+        # #2259
+        df = DataFrame([[1, 2, 3], [4, 5, 6]], columns=[1, 1, 2])
+        result = df.iloc[:, [0]]
+        expected = df.take([0], axis=1)
+        assert_frame_equal(result, expected)
+
+    def test_icol_sparse_propegate_fill_value(self):
+        from pandas.sparse.api import SparseDataFrame
+        df = SparseDataFrame({'A': [999, 1]}, default_fill_value=999)
+        self.assertTrue(len(df['A'].sp_values) == len(df.iloc[:, 0].sp_values))
+
+    def test_iget_value(self):
+        # 10711 deprecated
+
+        with tm.assert_produces_warning(FutureWarning):
+            self.frame.iget_value(0, 0)
+
+        for i, row in enumerate(self.frame.index):
+            for j, col in enumerate(self.frame.columns):
+                result = self.frame.iat[i, j]
+                expected = self.frame.at[row, col]
+                assert_almost_equal(result, expected)
+
+    def test_nested_exception(self):
+        # Ignore the strange way of triggering the problem
+        # (which may get fixed), it's just a way to trigger
+        # the issue or reraising an outer exception without
+        # a named argument
+        df = DataFrame({"a": [1, 2, 3], "b": [4, 5, 6],
+                        "c": [7, 8, 9]}).set_index(["a", "b"])
+        l = list(df.index)
+        l[0] = ["a", "b"]
+        df.index = l
+
+        try:
+            repr(df)
+        except Exception as e:
+            self.assertNotEqual(type(e), UnboundLocalError)
+
+    def test_reindex_methods(self):
+        df = pd.DataFrame({'x': list(range(5))})
+        target = np.array([-0.1, 0.9, 1.1, 1.5])
+
+        for method, expected_values in [('nearest', [0, 1, 1, 2]),
+                                        ('pad', [np.nan, 0, 1, 1]),
+                                        ('backfill', [0, 1, 2, 2])]:
+            expected = pd.DataFrame({'x': expected_values}, index=target)
+            actual = df.reindex(target, method=method)
+            assert_frame_equal(expected, actual)
+
+            actual = df.reindex_like(df, method=method, tolerance=0)
+            assert_frame_equal(df, actual)
+
+            actual = df.reindex(target, method=method, tolerance=1)
+            assert_frame_equal(expected, actual)
+
+            e2 = expected[::-1]
+            actual = df.reindex(target[::-1], method=method)
+            assert_frame_equal(e2, actual)
+
+            new_order = [3, 0, 2, 1]
+            e2 = expected.iloc[new_order]
+            actual = df.reindex(target[new_order], method=method)
+            assert_frame_equal(e2, actual)
+
+            switched_method = ('pad' if method == 'backfill'
+                               else 'backfill' if method == 'pad'
+                               else method)
+            actual = df[::-1].reindex(target, method=switched_method)
+            assert_frame_equal(expected, actual)
+
+        expected = pd.DataFrame({'x': [0, 1, 1, np.nan]}, index=target)
+        actual = df.reindex(target, method='nearest', tolerance=0.2)
+        assert_frame_equal(expected, actual)
+
+    def test_non_monotonic_reindex_methods(self):
+        dr = pd.date_range('2013-08-01', periods=6, freq='B')
+        data = np.random.randn(6, 1)
+        df = pd.DataFrame(data, index=dr, columns=list('A'))
+        df_rev = pd.DataFrame(data, index=dr[[3, 4, 5] + [0, 1, 2]],
+                              columns=list('A'))
+        # index is not monotonic increasing or decreasing
+        self.assertRaises(ValueError, df_rev.reindex, df.index, method='pad')
+        self.assertRaises(ValueError, df_rev.reindex, df.index, method='ffill')
+        self.assertRaises(ValueError, df_rev.reindex, df.index, method='bfill')
+        self.assertRaises(ValueError, df_rev.reindex,
+                          df.index, method='nearest')
+
+    def test_reindex_level(self):
+        from itertools import permutations
+        icol = ['jim', 'joe', 'jolie']
+
+        def verify_first_level(df, level, idx, check_index_type=True):
+            f = lambda val: np.nonzero(df[level] == val)[0]
+            i = np.concatenate(list(map(f, idx)))
+            left = df.set_index(icol).reindex(idx, level=level)
+            right = df.iloc[i].set_index(icol)
+            assert_frame_equal(left, right, check_index_type=check_index_type)
+
+        def verify(df, level, idx, indexer, check_index_type=True):
+            left = df.set_index(icol).reindex(idx, level=level)
+            right = df.iloc[indexer].set_index(icol)
+            assert_frame_equal(left, right, check_index_type=check_index_type)
+
+        df = pd.DataFrame({'jim': list('B' * 4 + 'A' * 2 + 'C' * 3),
+                           'joe': list('abcdeabcd')[::-1],
+                           'jolie': [10, 20, 30] * 3,
+                           'joline': np.random.randint(0, 1000, 9)})
+
+        target = [['C', 'B', 'A'], ['F', 'C', 'A', 'D'], ['A'],
+                  ['A', 'B', 'C'], ['C', 'A', 'B'], ['C', 'B'], ['C', 'A'],
+                  ['A', 'B'], ['B', 'A', 'C']]
+
+        for idx in target:
+            verify_first_level(df, 'jim', idx)
+
+        # reindex by these causes different MultiIndex levels
+        for idx in [['D', 'F'], ['A', 'C', 'B']]:
+            verify_first_level(df, 'jim', idx, check_index_type=False)
+
+        verify(df, 'joe', list('abcde'), [3, 2, 1, 0, 5, 4, 8, 7, 6])
+        verify(df, 'joe', list('abcd'), [3, 2, 1, 0, 5, 8, 7, 6])
+        verify(df, 'joe', list('abc'), [3, 2, 1, 8, 7, 6])
+        verify(df, 'joe', list('eca'), [1, 3, 4, 6, 8])
+        verify(df, 'joe', list('edc'), [0, 1, 4, 5, 6])
+        verify(df, 'joe', list('eadbc'), [3, 0, 2, 1, 4, 5, 8, 7, 6])
+        verify(df, 'joe', list('edwq'), [0, 4, 5])
+        verify(df, 'joe', list('wq'), [], check_index_type=False)
+
+        df = DataFrame({'jim': ['mid'] * 5 + ['btm'] * 8 + ['top'] * 7,
+                        'joe': ['3rd'] * 2 + ['1st'] * 3 + ['2nd'] * 3 +
+                        ['1st'] * 2 + ['3rd'] * 3 + ['1st'] * 2 +
+                        ['3rd'] * 3 + ['2nd'] * 2,
+                        # this needs to be jointly unique with jim and joe or
+                        # reindexing will fail ~1.5% of the time, this works
+                        # out to needing unique groups of same size as joe
+                        'jolie': np.concatenate([
+                            np.random.choice(1000, x, replace=False)
+                            for x in [2, 3, 3, 2, 3, 2, 3, 2]]),
+                        'joline': np.random.randn(20).round(3) * 10})
+
+        for idx in permutations(df['jim'].unique()):
+            for i in range(3):
+                verify_first_level(df, 'jim', idx[:i + 1])
+
+        i = [2, 3, 4, 0, 1, 8, 9, 5, 6, 7, 10,
+             11, 12, 13, 14, 18, 19, 15, 16, 17]
+        verify(df, 'joe', ['1st', '2nd', '3rd'], i)
+
+        i = [0, 1, 2, 3, 4, 10, 11, 12, 5, 6,
+             7, 8, 9, 15, 16, 17, 18, 19, 13, 14]
+        verify(df, 'joe', ['3rd', '2nd', '1st'], i)
+
+        i = [0, 1, 5, 6, 7, 10, 11, 12, 18, 19, 15, 16, 17]
+        verify(df, 'joe', ['2nd', '3rd'], i)
+
+        i = [0, 1, 2, 3, 4, 10, 11, 12, 8, 9, 15, 16, 17, 13, 14]
+        verify(df, 'joe', ['3rd', '1st'], i)
+
+    def test_getitem_ix_float_duplicates(self):
+        df = pd.DataFrame(np.random.randn(3, 3),
+                          index=[0.1, 0.2, 0.2], columns=list('abc'))
+        expect = df.iloc[1:]
+        assert_frame_equal(df.loc[0.2], expect)
+        assert_frame_equal(df.ix[0.2], expect)
+
+        expect = df.iloc[1:, 0]
+        assert_series_equal(df.loc[0.2, 'a'], expect)
+
+        df.index = [1, 0.2, 0.2]
+        expect = df.iloc[1:]
+        assert_frame_equal(df.loc[0.2], expect)
+        assert_frame_equal(df.ix[0.2], expect)
+
+        expect = df.iloc[1:, 0]
+        assert_series_equal(df.loc[0.2, 'a'], expect)
+
+        df = pd.DataFrame(np.random.randn(4, 3),
+                          index=[1, 0.2, 0.2, 1], columns=list('abc'))
+        expect = df.iloc[1:-1]
+        assert_frame_equal(df.loc[0.2], expect)
+        assert_frame_equal(df.ix[0.2], expect)
+
+        expect = df.iloc[1:-1, 0]
+        assert_series_equal(df.loc[0.2, 'a'], expect)
+
+        df.index = [0.1, 0.2, 2, 0.2]
+        expect = df.iloc[[1, -1]]
+        assert_frame_equal(df.loc[0.2], expect)
+        assert_frame_equal(df.ix[0.2], expect)
+
+        expect = df.iloc[[1, -1], 0]
+        assert_series_equal(df.loc[0.2, 'a'], expect)
+
+    def test_setitem_with_sparse_value(self):
+        # GH8131
+        df = pd.DataFrame({'c_1': ['a', 'b', 'c'], 'n_1': [1., 2., 3.]})
+        sp_series = pd.Series([0, 0, 1]).to_sparse(fill_value=0)
+        df['new_column'] = sp_series
+        assert_series_equal(df['new_column'], sp_series, check_names=False)
+
+    def test_setitem_with_unaligned_sparse_value(self):
+        df = pd.DataFrame({'c_1': ['a', 'b', 'c'], 'n_1': [1., 2., 3.]})
+        sp_series = (pd.Series([0, 0, 1], index=[2, 1, 0])
+                     .to_sparse(fill_value=0))
+        df['new_column'] = sp_series
+        exp = pd.Series([1, 0, 0], name='new_column')
+        assert_series_equal(df['new_column'], exp)
+
+    def test_setitem_datetime_coercion(self):
+        # GH 1048
+        df = pd.DataFrame({'c': [pd.Timestamp('2010-10-01')] * 3})
+        df.loc[0:1, 'c'] = np.datetime64('2008-08-08')
+        self.assertEqual(pd.Timestamp('2008-08-08'), df.loc[0, 'c'])
+        self.assertEqual(pd.Timestamp('2008-08-08'), df.loc[1, 'c'])
+        df.loc[2, 'c'] = date(2005, 5, 5)
+        self.assertEqual(pd.Timestamp('2005-05-05'), df.loc[2, 'c'])
+
+    def test_datetimelike_setitem_with_inference(self):
+        # GH 7592
+        # assignment of timedeltas with NaT
+
+        one_hour = timedelta(hours=1)
+        df = DataFrame(index=date_range('20130101', periods=4))
+        df['A'] = np.array([1 * one_hour] * 4, dtype='m8[ns]')
+        df.loc[:, 'B'] = np.array([2 * one_hour] * 4, dtype='m8[ns]')
+        df.loc[:3, 'C'] = np.array([3 * one_hour] * 3, dtype='m8[ns]')
+        df.ix[:, 'D'] = np.array([4 * one_hour] * 4, dtype='m8[ns]')
+        df.ix[:3, 'E'] = np.array([5 * one_hour] * 3, dtype='m8[ns]')
+        df['F'] = np.timedelta64('NaT')
+        df.ix[:-1, 'F'] = np.array([6 * one_hour] * 3, dtype='m8[ns]')
+        df.ix[-3:, 'G'] = date_range('20130101', periods=3)
+        df['H'] = np.datetime64('NaT')
+        result = df.dtypes
+        expected = Series([np.dtype('timedelta64[ns]')] * 6 +
+                          [np.dtype('datetime64[ns]')] * 2,
+                          index=list('ABCDEFGH'))
+        assert_series_equal(result, expected)
+
+    def test_at_time_between_time_datetimeindex(self):
+        index = date_range("2012-01-01", "2012-01-05", freq='30min')
+        df = DataFrame(randn(len(index), 5), index=index)
+        akey = time(12, 0, 0)
+        bkey = slice(time(13, 0, 0), time(14, 0, 0))
+        ainds = [24, 72, 120, 168]
+        binds = [26, 27, 28, 74, 75, 76, 122, 123, 124, 170, 171, 172]
+
+        result = df.at_time(akey)
+        expected = df.ix[akey]
+        expected2 = df.ix[ainds]
+        assert_frame_equal(result, expected)
+        assert_frame_equal(result, expected2)
+        self.assertEqual(len(result), 4)
+
+        result = df.between_time(bkey.start, bkey.stop)
+        expected = df.ix[bkey]
+        expected2 = df.ix[binds]
+        assert_frame_equal(result, expected)
+        assert_frame_equal(result, expected2)
+        self.assertEqual(len(result), 12)
+
+        result = df.copy()
+        result.ix[akey] = 0
+        result = result.ix[akey]
+        expected = df.ix[akey].copy()
+        expected.ix[:] = 0
+        assert_frame_equal(result, expected)
+
+        result = df.copy()
+        result.ix[akey] = 0
+        result.ix[akey] = df.ix[ainds]
+        assert_frame_equal(result, df)
+
+        result = df.copy()
+        result.ix[bkey] = 0
+        result = result.ix[bkey]
+        expected = df.ix[bkey].copy()
+        expected.ix[:] = 0
+        assert_frame_equal(result, expected)
+
+        result = df.copy()
+        result.ix[bkey] = 0
+        result.ix[bkey] = df.ix[binds]
+        assert_frame_equal(result, df)
+
+    def test_xs(self):
+        from pandas.core.datetools import bday
+
+        idx = self.frame.index[5]
+        xs = self.frame.xs(idx)
+        for item, value in compat.iteritems(xs):
+            if np.isnan(value):
+                self.assertTrue(np.isnan(self.frame[item][idx]))
+            else:
+                self.assertEqual(value, self.frame[item][idx])
+
+        # mixed-type xs
+        test_data = {
+            'A': {'1': 1, '2': 2},
+            'B': {'1': '1', '2': '2', '3': '3'},
+        }
+        frame = DataFrame(test_data)
+        xs = frame.xs('1')
+        self.assertEqual(xs.dtype, np.object_)
+        self.assertEqual(xs['A'], 1)
+        self.assertEqual(xs['B'], '1')
+
+        with tm.assertRaises(KeyError):
+            self.tsframe.xs(self.tsframe.index[0] - bday)
+
+        # xs get column
+        series = self.frame.xs('A', axis=1)
+        expected = self.frame['A']
+        assert_series_equal(series, expected)
+
+        # view is returned if possible
+        series = self.frame.xs('A', axis=1)
+        series[:] = 5
+        self.assertTrue((expected == 5).all())
+
+    def test_xs_corner(self):
+        # pathological mixed-type reordering case
+        df = DataFrame(index=[0])
+        df['A'] = 1.
+        df['B'] = 'foo'
+        df['C'] = 2.
+        df['D'] = 'bar'
+        df['E'] = 3.
+
+        xs = df.xs(0)
+        assert_almost_equal(xs, [1., 'foo', 2., 'bar', 3.])
+
+        # no columns but Index(dtype=object)
+        df = DataFrame(index=['a', 'b', 'c'])
+        result = df.xs('a')
+        expected = Series([], name='a', index=pd.Index([], dtype=object))
+        assert_series_equal(result, expected)
+
+    def test_xs_duplicates(self):
+        df = DataFrame(randn(5, 2), index=['b', 'b', 'c', 'b', 'a'])
+
+        cross = df.xs('c')
+        exp = df.iloc[2]
+        assert_series_equal(cross, exp)
+
+    def test_xs_keep_level(self):
+        df = (DataFrame({'day': {0: 'sat', 1: 'sun'},
+                         'flavour': {0: 'strawberry', 1: 'strawberry'},
+                         'sales': {0: 10, 1: 12},
+                         'year': {0: 2008, 1: 2008}})
+              .set_index(['year', 'flavour', 'day']))
+        result = df.xs('sat', level='day', drop_level=False)
+        expected = df[:1]
+        assert_frame_equal(result, expected)
+
+        result = df.xs([2008, 'sat'], level=['year', 'day'], drop_level=False)
+        assert_frame_equal(result, expected)
+
+    def test_xs_view(self):
+        # in 0.14 this will return a view if possible a copy otherwise, but
+        # this is numpy dependent
+
+        dm = DataFrame(np.arange(20.).reshape(4, 5),
+                       index=lrange(4), columns=lrange(5))
+
+        dm.xs(2)[:] = 10
+        self.assertTrue((dm.xs(2) == 10).all())
+
+    def test_index_namedtuple(self):
+        from collections import namedtuple
+        IndexType = namedtuple("IndexType", ["a", "b"])
+        idx1 = IndexType("foo", "bar")
+        idx2 = IndexType("baz", "bof")
+        index = Index([idx1, idx2],
+                      name="composite_index", tupleize_cols=False)
+        df = DataFrame([(1, 2), (3, 4)], index=index, columns=["A", "B"])
+        result = df.ix[IndexType("foo", "bar")]["A"]
+        self.assertEqual(result, 1)
+
+    def test_boolean_indexing(self):
+        idx = lrange(3)
+        cols = ['A', 'B', 'C']
+        df1 = DataFrame(index=idx, columns=cols,
+                        data=np.array([[0.0, 0.5, 1.0],
+                                       [1.5, 2.0, 2.5],
+                                       [3.0, 3.5, 4.0]],
+                                      dtype=float))
+        df2 = DataFrame(index=idx, columns=cols,
+                        data=np.ones((len(idx), len(cols))))
+
+        expected = DataFrame(index=idx, columns=cols,
+                             data=np.array([[0.0, 0.5, 1.0],
+                                            [1.5, 2.0, -1],
+                                            [-1, -1, -1]], dtype=float))
+
+        df1[df1 > 2.0 * df2] = -1
+        assert_frame_equal(df1, expected)
+        with assertRaisesRegexp(ValueError, 'Item wrong length'):
+            df1[df1.index[:-1] > 2] = -1
+
+    def test_boolean_indexing_mixed(self):
+        df = DataFrame({
+            long(0): {35: np.nan, 40: np.nan, 43: np.nan,
+                      49: np.nan, 50: np.nan},
+            long(1): {35: np.nan,
+                      40: 0.32632316859446198,
+                      43: np.nan,
+                      49: 0.32632316859446198,
+                      50: 0.39114724480578139},
+            long(2): {35: np.nan, 40: np.nan, 43: 0.29012581014105987,
+                      49: np.nan, 50: np.nan},
+            long(3): {35: np.nan, 40: np.nan, 43: np.nan, 49: np.nan,
+                      50: np.nan},
+            long(4): {35: 0.34215328467153283, 40: np.nan, 43: np.nan,
+                      49: np.nan, 50: np.nan},
+            'y': {35: 0, 40: 0, 43: 0, 49: 0, 50: 1}})
+
+        # mixed int/float ok
+        df2 = df.copy()
+        df2[df2 > 0.3] = 1
+        expected = df.copy()
+        expected.loc[40, 1] = 1
+        expected.loc[49, 1] = 1
+        expected.loc[50, 1] = 1
+        expected.loc[35, 4] = 1
+        assert_frame_equal(df2, expected)
+
+        df['foo'] = 'test'
+        with tm.assertRaisesRegexp(TypeError, 'boolean setting on mixed-type'):
+            df[df > 0.3] = 1
+
+    def test_where(self):
+        default_frame = DataFrame(np.random.randn(5, 3),
+                                  columns=['A', 'B', 'C'])
+
+        def _safe_add(df):
+            # only add to the numeric items
+            def is_ok(s):
+                return (issubclass(s.dtype.type, (np.integer, np.floating))
+                        and s.dtype != 'uint8')
+
+            return DataFrame(dict([(c, s + 1) if is_ok(s) else (c, s)
+                                   for c, s in compat.iteritems(df)]))
+
+        def _check_get(df, cond, check_dtypes=True):
+            other1 = _safe_add(df)
+            rs = df.where(cond, other1)
+            rs2 = df.where(cond.values, other1)
+            for k, v in rs.iteritems():
+                exp = Series(
+                    np.where(cond[k], df[k], other1[k]), index=v.index)
+                assert_series_equal(v, exp, check_names=False)
+            assert_frame_equal(rs, rs2)
+
+            # dtypes
+            if check_dtypes:
+                self.assertTrue((rs.dtypes == df.dtypes).all())
+
+        # check getting
+        for df in [default_frame, self.mixed_frame,
+                   self.mixed_float, self.mixed_int]:
+            cond = df > 0
+            _check_get(df, cond)
+
+        # upcasting case (GH # 2794)
+        df = DataFrame(dict([(c, Series([1] * 3, dtype=c))
+                             for c in ['int64', 'int32',
+                                       'float32', 'float64']]))
+        df.ix[1, :] = 0
+        result = df.where(df >= 0).get_dtype_counts()
+
+        # when we don't preserve boolean casts
+        #
+        # expected = Series({ 'float32' : 1, 'float64' : 3 })
+
+        expected = Series({'float32': 1, 'float64': 1, 'int32': 1, 'int64': 1})
+        assert_series_equal(result, expected)
+
+        # aligning
+        def _check_align(df, cond, other, check_dtypes=True):
+            rs = df.where(cond, other)
+            for i, k in enumerate(rs.columns):
+                result = rs[k]
+                d = df[k].values
+                c = cond[k].reindex(df[k].index).fillna(False).values
+
+                if np.isscalar(other):
+                    o = other
+                else:
+                    if isinstance(other, np.ndarray):
+                        o = Series(other[:, i], index=result.index).values
+                    else:
+                        o = other[k].values
+
+                new_values = d if c.all() else np.where(c, d, o)
+                expected = Series(new_values, index=result.index, name=k)
+
+                # since we can't always have the correct numpy dtype
+                # as numpy doesn't know how to downcast, don't check
+                assert_series_equal(result, expected, check_dtype=False)
+
+            # dtypes
+            # can't check dtype when other is an ndarray
+
+            if check_dtypes and not isinstance(other, np.ndarray):
+                self.assertTrue((rs.dtypes == df.dtypes).all())
+
+        for df in [self.mixed_frame, self.mixed_float, self.mixed_int]:
+
+            # other is a frame
+            cond = (df > 0)[1:]
+            _check_align(df, cond, _safe_add(df))
+
+            # check other is ndarray
+            cond = df > 0
+            _check_align(df, cond, (_safe_add(df).values))
+
+            # integers are upcast, so don't check the dtypes
+            cond = df > 0
+            check_dtypes = all([not issubclass(s.type, np.integer)
+                                for s in df.dtypes])
+            _check_align(df, cond, np.nan, check_dtypes=check_dtypes)
+
+        # invalid conditions
+        df = default_frame
+        err1 = (df + 1).values[0:2, :]
+        self.assertRaises(ValueError, df.where, cond, err1)
+
+        err2 = cond.ix[:2, :].values
+        other1 = _safe_add(df)
+        self.assertRaises(ValueError, df.where, err2, other1)
+
+        self.assertRaises(ValueError, df.mask, True)
+        self.assertRaises(ValueError, df.mask, 0)
+
+        # where inplace
+        def _check_set(df, cond, check_dtypes=True):
+            dfi = df.copy()
+            econd = cond.reindex_like(df).fillna(True)
+            expected = dfi.mask(~econd)
+
+            dfi.where(cond, np.nan, inplace=True)
+            assert_frame_equal(dfi, expected)
+
+            # dtypes (and confirm upcasts)
+            if check_dtypes:
+                for k, v in compat.iteritems(df.dtypes):
+                    if issubclass(v.type, np.integer) and not cond[k].all():
+                        v = np.dtype('float64')
+                    self.assertEqual(dfi[k].dtype, v)
+
+        for df in [default_frame, self.mixed_frame, self.mixed_float,
+                   self.mixed_int]:
+
+            cond = df > 0
+            _check_set(df, cond)
+
+            cond = df >= 0
+            _check_set(df, cond)
+
+            # aligning
+            cond = (df >= 0)[1:]
+            _check_set(df, cond)
+
+        # GH 10218
+        # test DataFrame.where with Series slicing
+        df = DataFrame({'a': range(3), 'b': range(4, 7)})
+        result = df.where(df['a'] == 1)
+        expected = df[df['a'] == 1].reindex(df.index)
+        assert_frame_equal(result, expected)
+
+    def test_where_bug(self):
+
+        # GH 2793
+
+        df = DataFrame({'a': [1.0, 2.0, 3.0, 4.0], 'b': [
+            4.0, 3.0, 2.0, 1.0]}, dtype='float64')
+        expected = DataFrame({'a': [np.nan, np.nan, 3.0, 4.0], 'b': [
+            4.0, 3.0, np.nan, np.nan]}, dtype='float64')
+        result = df.where(df > 2, np.nan)
+        assert_frame_equal(result, expected)
+
+        result = df.copy()
+        result.where(result > 2, np.nan, inplace=True)
+        assert_frame_equal(result, expected)
+
+        # mixed
+        for dtype in ['int16', 'int8', 'int32', 'int64']:
+            df = DataFrame({'a': np.array([1, 2, 3, 4], dtype=dtype),
+                            'b': np.array([4.0, 3.0, 2.0, 1.0],
+                                          dtype='float64')})
+
+            expected = DataFrame({'a': [np.nan, np.nan, 3.0, 4.0],
+                                  'b': [4.0, 3.0, np.nan, np.nan]},
+                                 dtype='float64')
+
+            result = df.where(df > 2, np.nan)
+            assert_frame_equal(result, expected)
+
+            result = df.copy()
+            result.where(result > 2, np.nan, inplace=True)
+            assert_frame_equal(result, expected)
+
+        # transpositional issue
+        # GH7506
+        a = DataFrame({0: [1, 2], 1: [3, 4], 2: [5, 6]})
+        b = DataFrame({0: [np.nan, 8], 1: [9, np.nan], 2: [np.nan, np.nan]})
+        do_not_replace = b.isnull() | (a > b)
+
+        expected = a.copy()
+        expected[~do_not_replace] = b
+
+        result = a.where(do_not_replace, b)
+        assert_frame_equal(result, expected)
+
+        a = DataFrame({0: [4, 6], 1: [1, 0]})
+        b = DataFrame({0: [np.nan, 3], 1: [3, np.nan]})
+        do_not_replace = b.isnull() | (a > b)
+
+        expected = a.copy()
+        expected[~do_not_replace] = b
+
+        result = a.where(do_not_replace, b)
+        assert_frame_equal(result, expected)
+
+    def test_where_datetime(self):
+
+        # GH 3311
+        df = DataFrame(dict(A=date_range('20130102', periods=5),
+                            B=date_range('20130104', periods=5),
+                            C=np.random.randn(5)))
+
+        stamp = datetime(2013, 1, 3)
+        result = df[df > stamp]
+        expected = df.copy()
+        expected.loc[[0, 1], 'A'] = np.nan
+        assert_frame_equal(result, expected)
+
+    def test_where_none(self):
+        # GH 4667
+        # setting with None changes dtype
+        df = DataFrame({'series': Series(range(10))}).astype(float)
+        df[df > 7] = None
+        expected = DataFrame(
+            {'series': Series([0, 1, 2, 3, 4, 5, 6, 7, np.nan, np.nan])})
+        assert_frame_equal(df, expected)
+
+        # GH 7656
+        df = DataFrame([{'A': 1, 'B': np.nan, 'C': 'Test'}, {
+            'A': np.nan, 'B': 'Test', 'C': np.nan}])
+        expected = df.where(~isnull(df), None)
+        with tm.assertRaisesRegexp(TypeError, 'boolean setting on mixed-type'):
+            df.where(~isnull(df), None, inplace=True)
+
+    def test_where_align(self):
+
+        def create():
+            df = DataFrame(np.random.randn(10, 3))
+            df.iloc[3:5, 0] = np.nan
+            df.iloc[4:6, 1] = np.nan
+            df.iloc[5:8, 2] = np.nan
+            return df
+
+        # series
+        df = create()
+        expected = df.fillna(df.mean())
+        result = df.where(pd.notnull(df), df.mean(), axis='columns')
+        assert_frame_equal(result, expected)
+
+        df.where(pd.notnull(df), df.mean(), inplace=True, axis='columns')
+        assert_frame_equal(df, expected)
+
+        df = create().fillna(0)
+        expected = df.apply(lambda x, y: x.where(x > 0, y), y=df[0])
+        result = df.where(df > 0, df[0], axis='index')
+        assert_frame_equal(result, expected)
+        result = df.where(df > 0, df[0], axis='rows')
+        assert_frame_equal(result, expected)
+
+        # frame
+        df = create()
+        expected = df.fillna(1)
+        result = df.where(pd.notnull(df), DataFrame(
+            1, index=df.index, columns=df.columns))
+        assert_frame_equal(result, expected)
+
+    def test_where_complex(self):
+        # GH 6345
+        expected = DataFrame(
+            [[1 + 1j, 2], [np.nan, 4 + 1j]], columns=['a', 'b'])
+        df = DataFrame([[1 + 1j, 2], [5 + 1j, 4 + 1j]], columns=['a', 'b'])
+        df[df.abs() >= 5] = np.nan
+        assert_frame_equal(df, expected)
+
+    def test_where_axis(self):
+        # GH 9736
+        df = DataFrame(np.random.randn(2, 2))
+        mask = DataFrame([[False, False], [False, False]])
+        s = Series([0, 1])
+
+        expected = DataFrame([[0, 0], [1, 1]], dtype='float64')
+        result = df.where(mask, s, axis='index')
+        assert_frame_equal(result, expected)
+
+        result = df.copy()
+        result.where(mask, s, axis='index', inplace=True)
+        assert_frame_equal(result, expected)
+
+        expected = DataFrame([[0, 1], [0, 1]], dtype='float64')
+        result = df.where(mask, s, axis='columns')
+        assert_frame_equal(result, expected)
+
+        result = df.copy()
+        result.where(mask, s, axis='columns', inplace=True)
+        assert_frame_equal(result, expected)
+
+        # Upcast needed
+        df = DataFrame([[1, 2], [3, 4]], dtype='int64')
+        mask = DataFrame([[False, False], [False, False]])
+        s = Series([0, np.nan])
+
+        expected = DataFrame([[0, 0], [np.nan, np.nan]], dtype='float64')
+        result = df.where(mask, s, axis='index')
+        assert_frame_equal(result, expected)
+
+        result = df.copy()
+        result.where(mask, s, axis='index', inplace=True)
+        assert_frame_equal(result, expected)
+
+        expected = DataFrame([[0, np.nan], [0, np.nan]], dtype='float64')
+        result = df.where(mask, s, axis='columns')
+        assert_frame_equal(result, expected)
+
+        expected = DataFrame({0: np.array([0, 0], dtype='int64'),
+                              1: np.array([np.nan, np.nan], dtype='float64')})
+        result = df.copy()
+        result.where(mask, s, axis='columns', inplace=True)
+        assert_frame_equal(result, expected)
+
+        # Multiple dtypes (=> multiple Blocks)
+        df = pd.concat([DataFrame(np.random.randn(10, 2)),
+                        DataFrame(np.random.randint(0, 10, size=(10, 2)))],
+                       ignore_index=True, axis=1)
+        mask = DataFrame(False, columns=df.columns, index=df.index)
+        s1 = Series(1, index=df.columns)
+        s2 = Series(2, index=df.index)
+
+        result = df.where(mask, s1, axis='columns')
+        expected = DataFrame(1.0, columns=df.columns, index=df.index)
+        expected[2] = expected[2].astype(int)
+        expected[3] = expected[3].astype(int)
+        assert_frame_equal(result, expected)
+
+        result = df.copy()
+        result.where(mask, s1, axis='columns', inplace=True)
+        assert_frame_equal(result, expected)
+
+        result = df.where(mask, s2, axis='index')
+        expected = DataFrame(2.0, columns=df.columns, index=df.index)
+        expected[2] = expected[2].astype(int)
+        expected[3] = expected[3].astype(int)
+        assert_frame_equal(result, expected)
+
+        result = df.copy()
+        result.where(mask, s2, axis='index', inplace=True)
+        assert_frame_equal(result, expected)
+
+        # DataFrame vs DataFrame
+        d1 = df.copy().drop(1, axis=0)
+        expected = df.copy()
+        expected.loc[1, :] = np.nan
+
+        result = df.where(mask, d1)
+        assert_frame_equal(result, expected)
+        result = df.where(mask, d1, axis='index')
+        assert_frame_equal(result, expected)
+        result = df.copy()
+        result.where(mask, d1, inplace=True)
+        assert_frame_equal(result, expected)
+        result = df.copy()
+        result.where(mask, d1, inplace=True, axis='index')
+        assert_frame_equal(result, expected)
+
+        d2 = df.copy().drop(1, axis=1)
+        expected = df.copy()
+        expected.loc[:, 1] = np.nan
+
+        result = df.where(mask, d2)
+        assert_frame_equal(result, expected)
+        result = df.where(mask, d2, axis='columns')
+        assert_frame_equal(result, expected)
+        result = df.copy()
+        result.where(mask, d2, inplace=True)
+        assert_frame_equal(result, expected)
+        result = df.copy()
+        result.where(mask, d2, inplace=True, axis='columns')
+        assert_frame_equal(result, expected)
+
+    def test_mask(self):
+        df = DataFrame(np.random.randn(5, 3))
+        cond = df > 0
+
+        rs = df.where(cond, np.nan)
+        assert_frame_equal(rs, df.mask(df <= 0))
+        assert_frame_equal(rs, df.mask(~cond))
+
+        other = DataFrame(np.random.randn(5, 3))
+        rs = df.where(cond, other)
+        assert_frame_equal(rs, df.mask(df <= 0, other))
+        assert_frame_equal(rs, df.mask(~cond, other))
+
+    def test_mask_inplace(self):
+        # GH8801
+        df = DataFrame(np.random.randn(5, 3))
+        cond = df > 0
+
+        rdf = df.copy()
+
+        rdf.where(cond, inplace=True)
+        assert_frame_equal(rdf, df.where(cond))
+        assert_frame_equal(rdf, df.mask(~cond))
+
+        rdf = df.copy()
+        rdf.where(cond, -df, inplace=True)
+        assert_frame_equal(rdf, df.where(cond, -df))
+        assert_frame_equal(rdf, df.mask(~cond, -df))
+
+    def test_mask_edge_case_1xN_frame(self):
+        # GH4071
+        df = DataFrame([[1, 2]])
+        res = df.mask(DataFrame([[True, False]]))
+        expec = DataFrame([[nan, 2]])
+        assert_frame_equal(res, expec)
+
+    def test_head_tail(self):
+        assert_frame_equal(self.frame.head(), self.frame[:5])
+        assert_frame_equal(self.frame.tail(), self.frame[-5:])
+
+        assert_frame_equal(self.frame.head(0), self.frame[0:0])
+        assert_frame_equal(self.frame.tail(0), self.frame[0:0])
+
+        assert_frame_equal(self.frame.head(-1), self.frame[:-1])
+        assert_frame_equal(self.frame.tail(-1), self.frame[1:])
+        assert_frame_equal(self.frame.head(1), self.frame[:1])
+        assert_frame_equal(self.frame.tail(1), self.frame[-1:])
+        # with a float index
+        df = self.frame.copy()
+        df.index = np.arange(len(self.frame)) + 0.1
+        assert_frame_equal(df.head(), df.iloc[:5])
+        assert_frame_equal(df.tail(), df.iloc[-5:])
+        assert_frame_equal(df.head(0), df[0:0])
+        assert_frame_equal(df.tail(0), df[0:0])
+        assert_frame_equal(df.head(-1), df.iloc[:-1])
+        assert_frame_equal(df.tail(-1), df.iloc[1:])
+        # test empty dataframe
+        empty_df = DataFrame()
+        assert_frame_equal(empty_df.tail(), empty_df)
+        assert_frame_equal(empty_df.head(), empty_df)
diff --git a/pandas/tests/frame/test_misc_api.py b/pandas/tests/frame/test_misc_api.py
new file mode 100644
index 0000000000000..ade1895ece14f
--- /dev/null
+++ b/pandas/tests/frame/test_misc_api.py
@@ -0,0 +1,487 @@
+# -*- coding: utf-8 -*-
+
+from __future__ import print_function
+# pylint: disable-msg=W0612,E1101
+from copy import deepcopy
+import sys
+import nose
+from distutils.version import LooseVersion
+
+from pandas.compat import range, lrange
+from pandas import compat
+
+from numpy.random import randn
+import numpy as np
+
+from pandas import DataFrame, Series
+import pandas as pd
+
+from pandas.util.testing import (assert_almost_equal,
+                                 assert_series_equal,
+                                 assert_frame_equal,
+                                 assertRaisesRegexp)
+
+import pandas.util.testing as tm
+
+from pandas.tests.frame.common import TestData
+
+
+class SafeForSparse(object):
+
+    _multiprocess_can_split_ = True
+
+    def test_copy_index_name_checking(self):
+        # don't want to be able to modify the index stored elsewhere after
+        # making a copy
+        for attr in ('index', 'columns'):
+            ind = getattr(self.frame, attr)
ind.name = None + cp = self.frame.copy() + getattr(cp, attr).name = 'foo' + self.assertIsNone(getattr(self.frame, attr).name) + + def test_getitem_pop_assign_name(self): + s = self.frame['A'] + self.assertEqual(s.name, 'A') + + s = self.frame.pop('A') + self.assertEqual(s.name, 'A') + + s = self.frame.ix[:, 'B'] + self.assertEqual(s.name, 'B') + + s2 = s.ix[:] + self.assertEqual(s2.name, 'B') + + def test_get_value(self): + for idx in self.frame.index: + for col in self.frame.columns: + result = self.frame.get_value(idx, col) + expected = self.frame[col][idx] + assert_almost_equal(result, expected) + + def test_join_index(self): + # left / right + + f = self.frame.reindex(columns=['A', 'B'])[:10] + f2 = self.frame.reindex(columns=['C', 'D']) + + joined = f.join(f2) + self.assertTrue(f.index.equals(joined.index)) + self.assertEqual(len(joined.columns), 4) + + joined = f.join(f2, how='left') + self.assertTrue(joined.index.equals(f.index)) + self.assertEqual(len(joined.columns), 4) + + joined = f.join(f2, how='right') + self.assertTrue(joined.index.equals(f2.index)) + self.assertEqual(len(joined.columns), 4) + + # inner + + f = self.frame.reindex(columns=['A', 'B'])[:10] + f2 = self.frame.reindex(columns=['C', 'D']) + + joined = f.join(f2, how='inner') + self.assertTrue(joined.index.equals(f.index.intersection(f2.index))) + self.assertEqual(len(joined.columns), 4) + + # outer + + f = self.frame.reindex(columns=['A', 'B'])[:10] + f2 = self.frame.reindex(columns=['C', 'D']) + + joined = f.join(f2, how='outer') + self.assertTrue(tm.equalContents(self.frame.index, joined.index)) + self.assertEqual(len(joined.columns), 4) + + assertRaisesRegexp(ValueError, 'join method', f.join, f2, how='foo') + + # corner case - overlapping columns + for how in ('outer', 'left', 'inner'): + with assertRaisesRegexp(ValueError, 'columns overlap but ' + 'no suffix'): + self.frame.join(self.frame, how=how) + + def test_join_index_more(self): + af = self.frame.ix[:, ['A', 'B']] + bf = 
self.frame.ix[::2, ['C', 'D']] + + expected = af.copy() + expected['C'] = self.frame['C'][::2] + expected['D'] = self.frame['D'][::2] + + result = af.join(bf) + assert_frame_equal(result, expected) + + result = af.join(bf, how='right') + assert_frame_equal(result, expected[::2]) + + result = bf.join(af, how='right') + assert_frame_equal(result, expected.ix[:, result.columns]) + + def test_join_index_series(self): + df = self.frame.copy() + s = df.pop(self.frame.columns[-1]) + joined = df.join(s) + + # TODO should this check_names ? + assert_frame_equal(joined, self.frame, check_names=False) + + s.name = None + assertRaisesRegexp(ValueError, 'must have a name', df.join, s) + + def test_join_overlap(self): + df1 = self.frame.ix[:, ['A', 'B', 'C']] + df2 = self.frame.ix[:, ['B', 'C', 'D']] + + joined = df1.join(df2, lsuffix='_df1', rsuffix='_df2') + df1_suf = df1.ix[:, ['B', 'C']].add_suffix('_df1') + df2_suf = df2.ix[:, ['B', 'C']].add_suffix('_df2') + + no_overlap = self.frame.ix[:, ['A', 'D']] + expected = df1_suf.join(df2_suf).join(no_overlap) + + # column order not necessarily sorted + assert_frame_equal(joined, expected.ix[:, joined.columns]) + + def test_add_prefix_suffix(self): + with_prefix = self.frame.add_prefix('foo#') + expected = ['foo#%s' % c for c in self.frame.columns] + self.assert_numpy_array_equal(with_prefix.columns, expected) + + with_suffix = self.frame.add_suffix('#foo') + expected = ['%s#foo' % c for c in self.frame.columns] + self.assert_numpy_array_equal(with_suffix.columns, expected) + + +class TestDataFrameMisc(tm.TestCase, SafeForSparse, TestData): + + klass = DataFrame + + _multiprocess_can_split_ = True + + def test_get_axis(self): + f = self.frame + self.assertEqual(f._get_axis_number(0), 0) + self.assertEqual(f._get_axis_number(1), 1) + self.assertEqual(f._get_axis_number('index'), 0) + self.assertEqual(f._get_axis_number('rows'), 0) + self.assertEqual(f._get_axis_number('columns'), 1) + + self.assertEqual(f._get_axis_name(0), 
'index') + self.assertEqual(f._get_axis_name(1), 'columns') + self.assertEqual(f._get_axis_name('index'), 'index') + self.assertEqual(f._get_axis_name('rows'), 'index') + self.assertEqual(f._get_axis_name('columns'), 'columns') + + self.assertIs(f._get_axis(0), f.index) + self.assertIs(f._get_axis(1), f.columns) + + assertRaisesRegexp(ValueError, 'No axis named', f._get_axis_number, 2) + assertRaisesRegexp(ValueError, 'No axis.*foo', f._get_axis_name, 'foo') + assertRaisesRegexp(ValueError, 'No axis.*None', f._get_axis_name, None) + assertRaisesRegexp(ValueError, 'No axis named', f._get_axis_number, + None) + + def test_keys(self): + getkeys = self.frame.keys + self.assertIs(getkeys(), self.frame.columns) + + def test_column_contains_typeerror(self): + try: + self.frame.columns in self.frame + except TypeError: + pass + + def test_not_hashable(self): + df = pd.DataFrame([1]) + self.assertRaises(TypeError, hash, df) + self.assertRaises(TypeError, hash, self.empty) + + def test_new_empty_index(self): + df1 = DataFrame(randn(0, 3)) + df2 = DataFrame(randn(0, 3)) + df1.index.name = 'foo' + self.assertIsNone(df2.index.name) + + def test_array_interface(self): + result = np.sqrt(self.frame) + tm.assertIsInstance(result, type(self.frame)) + self.assertIs(result.index, self.frame.index) + self.assertIs(result.columns, self.frame.columns) + + assert_frame_equal(result, self.frame.apply(np.sqrt)) + + def test_get_agg_axis(self): + cols = self.frame._get_agg_axis(0) + self.assertIs(cols, self.frame.columns) + + idx = self.frame._get_agg_axis(1) + self.assertIs(idx, self.frame.index) + + self.assertRaises(ValueError, self.frame._get_agg_axis, 2) + + def test_nonzero(self): + self.assertTrue(self.empty.empty) + + self.assertFalse(self.frame.empty) + self.assertFalse(self.mixed_frame.empty) + + # corner case + df = DataFrame({'A': [1., 2., 3.], + 'B': ['a', 'b', 'c']}, + index=np.arange(3)) + del df['A'] + self.assertFalse(df.empty) + + def test_iteritems(self): + df = 
DataFrame([[1, 2, 3], [4, 5, 6]], columns=['a', 'a', 'b']) + for k, v in compat.iteritems(df): + self.assertEqual(type(v), Series) + + def test_iter(self): + self.assertTrue(tm.equalContents(list(self.frame), self.frame.columns)) + + def test_iterrows(self): + for i, (k, v) in enumerate(self.frame.iterrows()): + exp = self.frame.xs(self.frame.index[i]) + assert_series_equal(v, exp) + + for i, (k, v) in enumerate(self.mixed_frame.iterrows()): + exp = self.mixed_frame.xs(self.mixed_frame.index[i]) + assert_series_equal(v, exp) + + def test_itertuples(self): + for i, tup in enumerate(self.frame.itertuples()): + s = Series(tup[1:]) + s.name = tup[0] + expected = self.frame.ix[i, :].reset_index(drop=True) + assert_series_equal(s, expected) + + df = DataFrame({'floats': np.random.randn(5), + 'ints': lrange(5)}, columns=['floats', 'ints']) + + for tup in df.itertuples(index=False): + tm.assertIsInstance(tup[1], np.integer) + + df = DataFrame(data={"a": [1, 2, 3], "b": [4, 5, 6]}) + dfaa = df[['a', 'a']] + self.assertEqual(list(dfaa.itertuples()), [ + (0, 1, 1), (1, 2, 2), (2, 3, 3)]) + + self.assertEqual(repr(list(df.itertuples(name=None))), + '[(0, 1, 4), (1, 2, 5), (2, 3, 6)]') + + tup = next(df.itertuples(name='TestName')) + + # no support for field renaming in Python 2.6, regular tuples are + # returned + if sys.version >= LooseVersion('2.7'): + self.assertEqual(tup._fields, ('Index', 'a', 'b')) + self.assertEqual((tup.Index, tup.a, tup.b), tup) + self.assertEqual(type(tup).__name__, 'TestName') + + df.columns = ['def', 'return'] + tup2 = next(df.itertuples(name='TestName')) + self.assertEqual(tup2, (0, 1, 4)) + + if sys.version >= LooseVersion('2.7'): + self.assertEqual(tup2._fields, ('Index', '_1', '_2')) + + df3 = DataFrame(dict(('f' + str(i), [i]) for i in range(1024))) + # will raise SyntaxError if trying to create namedtuple + tup3 = next(df3.itertuples()) + self.assertFalse(hasattr(tup3, '_fields')) + self.assertIsInstance(tup3, tuple) + + def test_len(self): + 
self.assertEqual(len(self.frame), len(self.frame.index)) + + def test_as_matrix(self): + frame = self.frame + mat = frame.as_matrix() + + frameCols = frame.columns + for i, row in enumerate(mat): + for j, value in enumerate(row): + col = frameCols[j] + if np.isnan(value): + self.assertTrue(np.isnan(frame[col][i])) + else: + self.assertEqual(value, frame[col][i]) + + # mixed type + mat = self.mixed_frame.as_matrix(['foo', 'A']) + self.assertEqual(mat[0, 0], 'bar') + + df = DataFrame({'real': [1, 2, 3], 'complex': [1j, 2j, 3j]}) + mat = df.as_matrix() + self.assertEqual(mat[0, 0], 1j) + + # single block corner case + mat = self.frame.as_matrix(['A', 'B']) + expected = self.frame.reindex(columns=['A', 'B']).values + assert_almost_equal(mat, expected) + + def test_values(self): + self.frame.values[:, 0] = 5. + self.assertTrue((self.frame.values[:, 0] == 5).all()) + + def test_deepcopy(self): + cp = deepcopy(self.frame) + series = cp['A'] + series[:] = 10 + for idx, value in compat.iteritems(series): + self.assertNotEqual(self.frame['A'][idx], value) + + # --------------------------------------------------------------------- + # Transposing + + def test_transpose(self): + frame = self.frame + dft = frame.T + for idx, series in compat.iteritems(dft): + for col, value in compat.iteritems(series): + if np.isnan(value): + self.assertTrue(np.isnan(frame[col][idx])) + else: + self.assertEqual(value, frame[col][idx]) + + # mixed type + index, data = tm.getMixedTypeDict() + mixed = DataFrame(data, index=index) + + mixed_T = mixed.T + for col, s in compat.iteritems(mixed_T): + self.assertEqual(s.dtype, np.object_) + + def test_transpose_get_view(self): + dft = self.frame.T + dft.values[:, 5:10] = 5 + + self.assertTrue((self.frame.values[5:10] == 5).all()) + + def test_swapaxes(self): + df = DataFrame(np.random.randn(10, 5)) + assert_frame_equal(df.T, df.swapaxes(0, 1)) + assert_frame_equal(df.T, df.swapaxes(1, 0)) + assert_frame_equal(df, df.swapaxes(0, 0)) + 
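`test_transpose` above checks the mixed-dtype case: once a frame holding floats and strings is transposed, every column mixes types, so each comes back as `object`. A sketch with a hypothetical frame:

```python
import pandas as pd

mixed = pd.DataFrame({'num': [1.5, 2.5], 'txt': ['a', 'b']})
mixed_T = mixed.T

# each transposed column now holds one float and one string
dtypes = [s.dtype for _, s in mixed_T.items()]
```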
self.assertRaises(ValueError, df.swapaxes, 2, 5) + + def test_axis_aliases(self): + f = self.frame + + # reg name + expected = f.sum(axis=0) + result = f.sum(axis='index') + assert_series_equal(result, expected) + + expected = f.sum(axis=1) + result = f.sum(axis='columns') + assert_series_equal(result, expected) + + def test_more_asMatrix(self): + values = self.mixed_frame.as_matrix() + self.assertEqual(values.shape[1], len(self.mixed_frame.columns)) + + def test_repr_with_mi_nat(self): + df = DataFrame({'X': [1, 2]}, + index=[[pd.NaT, pd.Timestamp('20130101')], ['a', 'b']]) + res = repr(df) + exp = ' X\nNaT a 1\n2013-01-01 b 2' + nose.tools.assert_equal(res, exp) + + def test_iterkv_deprecation(self): + with tm.assert_produces_warning(FutureWarning): + self.mixed_float.iterkv() + + def test_iterkv_names(self): + for k, v in compat.iteritems(self.mixed_frame): + self.assertEqual(v.name, k) + + def test_series_put_names(self): + series = self.mixed_frame._series + for k, v in compat.iteritems(series): + self.assertEqual(v.name, k) + + def test_empty_nonzero(self): + df = DataFrame([1, 2, 3]) + self.assertFalse(df.empty) + df = DataFrame(index=['a', 'b'], columns=['c', 'd']).dropna() + self.assertTrue(df.empty) + self.assertTrue(df.T.empty) + + def test_inplace_return_self(self): + # re #1893 + + data = DataFrame({'a': ['foo', 'bar', 'baz', 'qux'], + 'b': [0, 0, 1, 1], + 'c': [1, 2, 3, 4]}) + + def _check_f(base, f): + result = f(base) + self.assertTrue(result is None) + + # -----DataFrame----- + + # set_index + f = lambda x: x.set_index('a', inplace=True) + _check_f(data.copy(), f) + + # reset_index + f = lambda x: x.reset_index(inplace=True) + _check_f(data.set_index('a'), f) + + # drop_duplicates + f = lambda x: x.drop_duplicates(inplace=True) + _check_f(data.copy(), f) + + # sort + f = lambda x: x.sort_values('b', inplace=True) + _check_f(data.copy(), f) + + # sort_index + f = lambda x: x.sort_index(inplace=True) + _check_f(data.copy(), f) + + # sortlevel + f = 
lambda x: x.sortlevel(0, inplace=True) + _check_f(data.set_index(['a', 'b']), f) + + # fillna + f = lambda x: x.fillna(0, inplace=True) + _check_f(data.copy(), f) + + # replace + f = lambda x: x.replace(1, 0, inplace=True) + _check_f(data.copy(), f) + + # rename + f = lambda x: x.rename({1: 'foo'}, inplace=True) + _check_f(data.copy(), f) + + # -----Series----- + d = data.copy()['c'] + + # reset_index + f = lambda x: x.reset_index(inplace=True, drop=True) + _check_f(data.set_index('a')['c'], f) + + # fillna + f = lambda x: x.fillna(0, inplace=True) + _check_f(d.copy(), f) + + # replace + f = lambda x: x.replace(1, 0, inplace=True) + _check_f(d.copy(), f) + + # rename + f = lambda x: x.rename({1: 'foo'}, inplace=True) + _check_f(d.copy(), f) + + +if __name__ == '__main__': + nose.runmodule(argv=[__file__, '-vvs', '-x', '--pdb', '--pdb-failure'], + exit=False) diff --git a/pandas/tests/frame/test_missing.py b/pandas/tests/frame/test_missing.py new file mode 100644 index 0000000000000..fd212664b5b9b --- /dev/null +++ b/pandas/tests/frame/test_missing.py @@ -0,0 +1,427 @@ +# -*- coding: utf-8 -*- + +from __future__ import print_function + +from numpy import nan, random +import numpy as np + +from pandas.compat import lrange +from pandas import (DataFrame, Series, Timestamp, + date_range) +import pandas as pd + +from pandas.util.testing import (assert_series_equal, + assert_frame_equal, + assertRaisesRegexp) + +import pandas.util.testing as tm +from pandas.tests.frame.common import TestData, _check_mixed_float + + +class TestDataFrameMissingData(tm.TestCase, TestData): + + _multiprocess_can_split_ = True + + def test_dropEmptyRows(self): + N = len(self.frame.index) + mat = random.randn(N) + mat[:5] = nan + + frame = DataFrame({'foo': mat}, index=self.frame.index) + original = Series(mat, index=self.frame.index, name='foo') + expected = original.dropna() + inplace_frame1, inplace_frame2 = frame.copy(), frame.copy() + + smaller_frame = frame.dropna(how='all') + # check 
that original was preserved + assert_series_equal(frame['foo'], original) + inplace_frame1.dropna(how='all', inplace=True) + assert_series_equal(smaller_frame['foo'], expected) + assert_series_equal(inplace_frame1['foo'], expected) + + smaller_frame = frame.dropna(how='all', subset=['foo']) + inplace_frame2.dropna(how='all', subset=['foo'], inplace=True) + assert_series_equal(smaller_frame['foo'], expected) + assert_series_equal(inplace_frame2['foo'], expected) + + def test_dropIncompleteRows(self): + N = len(self.frame.index) + mat = random.randn(N) + mat[:5] = nan + + frame = DataFrame({'foo': mat}, index=self.frame.index) + frame['bar'] = 5 + original = Series(mat, index=self.frame.index, name='foo') + inp_frame1, inp_frame2 = frame.copy(), frame.copy() + + smaller_frame = frame.dropna() + assert_series_equal(frame['foo'], original) + inp_frame1.dropna(inplace=True) + self.assert_numpy_array_equal(smaller_frame['foo'], mat[5:]) + self.assert_numpy_array_equal(inp_frame1['foo'], mat[5:]) + + samesize_frame = frame.dropna(subset=['bar']) + assert_series_equal(frame['foo'], original) + self.assertTrue((frame['bar'] == 5).all()) + inp_frame2.dropna(subset=['bar'], inplace=True) + self.assertTrue(samesize_frame.index.equals(self.frame.index)) + self.assertTrue(inp_frame2.index.equals(self.frame.index)) + + def test_dropna(self): + df = DataFrame(np.random.randn(6, 4)) + df[2][:2] = nan + + dropped = df.dropna(axis=1) + expected = df.ix[:, [0, 1, 3]] + inp = df.copy() + inp.dropna(axis=1, inplace=True) + assert_frame_equal(dropped, expected) + assert_frame_equal(inp, expected) + + dropped = df.dropna(axis=0) + expected = df.ix[lrange(2, 6)] + inp = df.copy() + inp.dropna(axis=0, inplace=True) + assert_frame_equal(dropped, expected) + assert_frame_equal(inp, expected) + + # threshold + dropped = df.dropna(axis=1, thresh=5) + expected = df.ix[:, [0, 1, 3]] + inp = df.copy() + inp.dropna(axis=1, thresh=5, inplace=True) + assert_frame_equal(dropped, expected) + 
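The `dropna` tests above walk through `how`, `thresh` and `subset`; the core semantics can be sketched as follows (illustrative frame, not from the suite):

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({'a': [1.0, np.nan, 3.0],
                   'b': [np.nan, np.nan, np.nan],
                   'c': [1.0, 2.0, 3.0]})

# how='all' on axis=1 drops only columns that are entirely NaN
all_nan_dropped = df.dropna(axis=1, how='all')

# thresh=2 keeps rows with at least two non-NaN values
thresh_kept = df.dropna(thresh=2)
```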
assert_frame_equal(inp, expected) + + dropped = df.dropna(axis=0, thresh=4) + expected = df.ix[lrange(2, 6)] + inp = df.copy() + inp.dropna(axis=0, thresh=4, inplace=True) + assert_frame_equal(dropped, expected) + assert_frame_equal(inp, expected) + + dropped = df.dropna(axis=1, thresh=4) + assert_frame_equal(dropped, df) + + dropped = df.dropna(axis=1, thresh=3) + assert_frame_equal(dropped, df) + + # subset + dropped = df.dropna(axis=0, subset=[0, 1, 3]) + inp = df.copy() + inp.dropna(axis=0, subset=[0, 1, 3], inplace=True) + assert_frame_equal(dropped, df) + assert_frame_equal(inp, df) + + # all + dropped = df.dropna(axis=1, how='all') + assert_frame_equal(dropped, df) + + df[2] = nan + dropped = df.dropna(axis=1, how='all') + expected = df.ix[:, [0, 1, 3]] + assert_frame_equal(dropped, expected) + + # bad input + self.assertRaises(ValueError, df.dropna, axis=3) + + def test_drop_and_dropna_caching(self): + # test that cacher updates + original = Series([1, 2, np.nan], name='A') + expected = Series([1, 2], dtype=original.dtype, name='A') + df = pd.DataFrame({'A': original.values.copy()}) + df2 = df.copy() + df['A'].dropna() + assert_series_equal(df['A'], original) + df['A'].dropna(inplace=True) + assert_series_equal(df['A'], expected) + df2['A'].drop([1]) + assert_series_equal(df2['A'], original) + df2['A'].drop([1], inplace=True) + assert_series_equal(df2['A'], original.drop([1])) + + def test_dropna_corner(self): + # bad input + self.assertRaises(ValueError, self.frame.dropna, how='foo') + self.assertRaises(TypeError, self.frame.dropna, how=None) + # non-existent column - 8303 + self.assertRaises(KeyError, self.frame.dropna, subset=['A', 'X']) + + def test_dropna_multiple_axes(self): + df = DataFrame([[1, np.nan, 2, 3], + [4, np.nan, 5, 6], + [np.nan, np.nan, np.nan, np.nan], + [7, np.nan, 8, 9]]) + cp = df.copy() + result = df.dropna(how='all', axis=[0, 1]) + result2 = df.dropna(how='all', axis=(0, 1)) + expected = df.dropna(how='all').dropna(how='all', 
axis=1) + + assert_frame_equal(result, expected) + assert_frame_equal(result2, expected) + assert_frame_equal(df, cp) + + inp = df.copy() + inp.dropna(how='all', axis=(0, 1), inplace=True) + assert_frame_equal(inp, expected) + + def test_fillna(self): + self.tsframe.ix[:5, 'A'] = nan + self.tsframe.ix[-5:, 'A'] = nan + + zero_filled = self.tsframe.fillna(0) + self.assertTrue((zero_filled.ix[:5, 'A'] == 0).all()) + + padded = self.tsframe.fillna(method='pad') + self.assertTrue(np.isnan(padded.ix[:5, 'A']).all()) + self.assertTrue((padded.ix[-5:, 'A'] == padded.ix[-5, 'A']).all()) + + # mixed type + self.mixed_frame.ix[5:20, 'foo'] = nan + self.mixed_frame.ix[-10:, 'A'] = nan + result = self.mixed_frame.fillna(value=0) + result = self.mixed_frame.fillna(method='pad') + + self.assertRaises(ValueError, self.tsframe.fillna) + self.assertRaises(ValueError, self.tsframe.fillna, 5, method='ffill') + + # mixed numeric (but no float16) + mf = self.mixed_float.reindex(columns=['A', 'B', 'D']) + mf.ix[-10:, 'A'] = nan + result = mf.fillna(value=0) + _check_mixed_float(result, dtype=dict(C=None)) + + result = mf.fillna(method='pad') + _check_mixed_float(result, dtype=dict(C=None)) + + # empty frame (GH #2778) + df = DataFrame(columns=['x']) + for m in ['pad', 'backfill']: + df.x.fillna(method=m, inplace=True) + df.x.fillna(method=m) + + # with different dtype (GH3386) + df = DataFrame([['a', 'a', np.nan, 'a'], [ + 'b', 'b', np.nan, 'b'], ['c', 'c', np.nan, 'c']]) + + result = df.fillna({2: 'foo'}) + expected = DataFrame([['a', 'a', 'foo', 'a'], + ['b', 'b', 'foo', 'b'], + ['c', 'c', 'foo', 'c']]) + assert_frame_equal(result, expected) + + df.fillna({2: 'foo'}, inplace=True) + assert_frame_equal(df, expected) + + # limit and value + df = DataFrame(np.random.randn(10, 3)) + df.iloc[2:7, 0] = np.nan + df.iloc[3:5, 2] = np.nan + + expected = df.copy() + expected.iloc[2, 0] = 999 + expected.iloc[3, 2] = 999 + result = df.fillna(999, limit=1) + assert_frame_equal(result, expected) + + # 
with datelike + # GH 6344 + df = DataFrame({ + 'Date': [pd.NaT, Timestamp("2014-1-1")], + 'Date2': [Timestamp("2013-1-1"), pd.NaT] + }) + + expected = df.copy() + expected['Date'] = expected['Date'].fillna(df.ix[0, 'Date2']) + result = df.fillna(value={'Date': df['Date2']}) + assert_frame_equal(result, expected) + + def test_fillna_dtype_conversion(self): + # make sure that fillna on an empty frame works + df = DataFrame(index=["A", "B", "C"], columns=[1, 2, 3, 4, 5]) + result = df.get_dtype_counts().sort_values() + expected = Series({'object': 5}) + assert_series_equal(result, expected) + + result = df.fillna(1) + expected = DataFrame(1, index=["A", "B", "C"], columns=[1, 2, 3, 4, 5]) + result = result.get_dtype_counts().sort_values() + expected = Series({'int64': 5}) + assert_series_equal(result, expected) + + # empty block + df = DataFrame(index=lrange(3), columns=['A', 'B'], dtype='float64') + result = df.fillna('nan') + expected = DataFrame('nan', index=lrange(3), columns=['A', 'B']) + assert_frame_equal(result, expected) + + # equiv of replace + df = DataFrame(dict(A=[1, np.nan], B=[1., 2.])) + for v in ['', 1, np.nan, 1.0]: + expected = df.replace(np.nan, v) + result = df.fillna(v) + assert_frame_equal(result, expected) + + def test_fillna_datetime_columns(self): + # GH 7095 + df = pd.DataFrame({'A': [-1, -2, np.nan], + 'B': date_range('20130101', periods=3), + 'C': ['foo', 'bar', None], + 'D': ['foo2', 'bar2', None]}, + index=date_range('20130110', periods=3)) + result = df.fillna('?') + expected = pd.DataFrame({'A': [-1, -2, '?'], + 'B': date_range('20130101', periods=3), + 'C': ['foo', 'bar', '?'], + 'D': ['foo2', 'bar2', '?']}, + index=date_range('20130110', periods=3)) + self.assert_frame_equal(result, expected) + + df = pd.DataFrame({'A': [-1, -2, np.nan], + 'B': [pd.Timestamp('2013-01-01'), + pd.Timestamp('2013-01-02'), pd.NaT], + 'C': ['foo', 'bar', None], + 'D': ['foo2', 'bar2', None]}, + index=date_range('20130110', periods=3)) + result = 
df.fillna('?') + expected = pd.DataFrame({'A': [-1, -2, '?'], + 'B': [pd.Timestamp('2013-01-01'), + pd.Timestamp('2013-01-02'), '?'], + 'C': ['foo', 'bar', '?'], + 'D': ['foo2', 'bar2', '?']}, + index=pd.date_range('20130110', periods=3)) + self.assert_frame_equal(result, expected) + + def test_ffill(self): + self.tsframe['A'][:5] = nan + self.tsframe['A'][-5:] = nan + + assert_frame_equal(self.tsframe.ffill(), + self.tsframe.fillna(method='ffill')) + + def test_bfill(self): + self.tsframe['A'][:5] = nan + self.tsframe['A'][-5:] = nan + + assert_frame_equal(self.tsframe.bfill(), + self.tsframe.fillna(method='bfill')) + + def test_fillna_skip_certain_blocks(self): + # don't try to fill boolean, int blocks + + df = DataFrame(np.random.randn(10, 4).astype(int)) + + # it works! + df.fillna(np.nan) + + def test_fillna_inplace(self): + df = DataFrame(np.random.randn(10, 4)) + df[1][:4] = np.nan + df[3][-4:] = np.nan + + expected = df.fillna(value=0) + self.assertIsNot(expected, df) + + df.fillna(value=0, inplace=True) + assert_frame_equal(df, expected) + + df[1][:4] = np.nan + df[3][-4:] = np.nan + expected = df.fillna(method='ffill') + self.assertIsNot(expected, df) + + df.fillna(method='ffill', inplace=True) + assert_frame_equal(df, expected) + + def test_fillna_dict_series(self): + df = DataFrame({'a': [nan, 1, 2, nan, nan], + 'b': [1, 2, 3, nan, nan], + 'c': [nan, 1, 2, 3, 4]}) + + result = df.fillna({'a': 0, 'b': 5}) + + expected = df.copy() + expected['a'] = expected['a'].fillna(0) + expected['b'] = expected['b'].fillna(5) + assert_frame_equal(result, expected) + + # it works + result = df.fillna({'a': 0, 'b': 5, 'd': 7}) + + # Series treated same as dict + result = df.fillna(df.max()) + expected = df.fillna(df.max().to_dict()) + assert_frame_equal(result, expected) + + # disable this for now + with assertRaisesRegexp(NotImplementedError, 'column by column'): + df.fillna(df.max(1), axis=1) + + def test_fillna_dataframe(self): + # GH 8377 + df = DataFrame({'a': 
[nan, 1, 2, nan, nan], + 'b': [1, 2, 3, nan, nan], + 'c': [nan, 1, 2, 3, 4]}, + index=list('VWXYZ')) + + # df2 may have different index and columns + df2 = DataFrame({'a': [nan, 10, 20, 30, 40], + 'b': [50, 60, 70, 80, 90], + 'foo': ['bar'] * 5}, + index=list('VWXuZ')) + + result = df.fillna(df2) + + # only those columns and indices which are shared get filled + expected = DataFrame({'a': [nan, 1, 2, nan, 40], + 'b': [1, 2, 3, nan, 90], + 'c': [nan, 1, 2, 3, 4]}, + index=list('VWXYZ')) + + assert_frame_equal(result, expected) + + def test_fillna_columns(self): + df = DataFrame(np.random.randn(10, 10)) + df.values[:, ::2] = np.nan + + result = df.fillna(method='ffill', axis=1) + expected = df.T.fillna(method='pad').T + assert_frame_equal(result, expected) + + df.insert(6, 'foo', 5) + result = df.fillna(method='ffill', axis=1) + expected = df.astype(float).fillna(method='ffill', axis=1) + assert_frame_equal(result, expected) + + def test_fillna_invalid_method(self): + with assertRaisesRegexp(ValueError, 'ffil'): + self.frame.fillna(method='ffil') + + def test_fillna_invalid_value(self): + # list + self.assertRaises(TypeError, self.frame.fillna, [1, 2]) + # tuple + self.assertRaises(TypeError, self.frame.fillna, (1, 2)) + # frame with series + self.assertRaises(ValueError, self.frame.iloc[:, 0].fillna, + self.frame) + + def test_fillna_col_reordering(self): + cols = ["COL." + str(i) for i in range(5, 0, -1)] + data = np.random.rand(20, 5) + df = DataFrame(index=lrange(20), columns=cols, data=data) + filled = df.fillna(method='ffill') + self.assertEqual(df.columns.tolist(), filled.columns.tolist()) + + def test_fill_corner(self): + self.mixed_frame.ix[5:20, 'foo'] = nan + self.mixed_frame.ix[-10:, 'A'] = nan + + filled = self.mixed_frame.fillna(value=0) + self.assertTrue((filled.ix[5:20, 'foo'] == 0).all()) + del self.mixed_frame['foo'] + + empty_float = self.frame.reindex(columns=[]) + + # TODO(wesm): unused? 
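The `fillna` tests above include dict-valued fills; keys name columns, each column gets its own fill value, and unknown keys are ignored. A quick sketch (made-up frame):

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({'a': [np.nan, 1.0], 'b': [2.0, np.nan]})

# per-column fill values; the unused 'zzz' key is silently ignored
filled = df.fillna({'a': 0, 'b': 5, 'zzz': 9})
```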
+ result = empty_float.fillna(value=0) # noqa diff --git a/pandas/tests/frame/test_mutate_columns.py b/pandas/tests/frame/test_mutate_columns.py new file mode 100644 index 0000000000000..1546d18a224cd --- /dev/null +++ b/pandas/tests/frame/test_mutate_columns.py @@ -0,0 +1,221 @@ +# -*- coding: utf-8 -*- + +from __future__ import print_function + +from pandas.compat import range, lrange +import numpy as np + +from pandas import DataFrame, Series + +from pandas.util.testing import (assert_almost_equal, + assert_series_equal, + assert_frame_equal, + assertRaisesRegexp) + +import pandas.util.testing as tm + +from pandas.tests.frame.common import TestData + + +# Column add, remove, delete. + + +class TestDataFrameMutateColumns(tm.TestCase, TestData): + + _multiprocess_can_split_ = True + + def test_assign(self): + df = DataFrame({'A': [1, 2, 3], 'B': [4, 5, 6]}) + original = df.copy() + result = df.assign(C=df.B / df.A) + expected = df.copy() + expected['C'] = [4, 2.5, 2] + assert_frame_equal(result, expected) + + # lambda syntax + result = df.assign(C=lambda x: x.B / x.A) + assert_frame_equal(result, expected) + + # original is unmodified + assert_frame_equal(df, original) + + # Non-Series array-like + result = df.assign(C=[4, 2.5, 2]) + assert_frame_equal(result, expected) + # original is unmodified + assert_frame_equal(df, original) + + result = df.assign(B=df.B / df.A) + expected = expected.drop('B', axis=1).rename(columns={'C': 'B'}) + assert_frame_equal(result, expected) + + # overwrite + result = df.assign(A=df.A + df.B) + expected = df.copy() + expected['A'] = [5, 7, 9] + assert_frame_equal(result, expected) + + # lambda + result = df.assign(A=lambda x: x.A + x.B) + assert_frame_equal(result, expected) + + def test_assign_multiple(self): + df = DataFrame([[1, 4], [2, 5], [3, 6]], columns=['A', 'B']) + result = df.assign(C=[7, 8, 9], D=df.A, E=lambda x: x.B) + expected = DataFrame([[1, 4, 7, 1, 4], [2, 5, 8, 2, 5], + [3, 6, 9, 3, 6]], columns=list('ABCDE')) + 
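`test_assign` above depends on `assign` returning a new frame and evaluating callables against that frame, leaving the original untouched; a minimal sketch:

```python
import pandas as pd

df = pd.DataFrame({'A': [1, 2, 3], 'B': [4, 5, 6]})

# the callable is evaluated against df; df itself is not modified
result = df.assign(C=lambda x: x.B / x.A)
```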
assert_frame_equal(result, expected) + + def test_assign_alphabetical(self): + # GH 9818 + df = DataFrame([[1, 2], [3, 4]], columns=['A', 'B']) + result = df.assign(D=df.A + df.B, C=df.A - df.B) + expected = DataFrame([[1, 2, -1, 3], [3, 4, -1, 7]], + columns=list('ABCD')) + assert_frame_equal(result, expected) + result = df.assign(C=df.A - df.B, D=df.A + df.B) + assert_frame_equal(result, expected) + + def test_assign_bad(self): + df = DataFrame({'A': [1, 2, 3], 'B': [4, 5, 6]}) + # non-keyword argument + with tm.assertRaises(TypeError): + df.assign(lambda x: x.A) + with tm.assertRaises(AttributeError): + df.assign(C=df.A, D=df.A + df.C) + with tm.assertRaises(KeyError): + df.assign(C=lambda df: df.A, D=lambda df: df['A'] + df['C']) + with tm.assertRaises(KeyError): + df.assign(C=df.A, D=lambda x: x['A'] + x['C']) + + def test_insert_error_msgs(self): + + # GH 7432 + df = DataFrame({'foo': ['a', 'b', 'c'], 'bar': [ + 1, 2, 3], 'baz': ['d', 'e', 'f']}).set_index('foo') + s = DataFrame({'foo': ['a', 'b', 'c', 'a'], 'fiz': [ + 'g', 'h', 'i', 'j']}).set_index('foo') + msg = 'cannot reindex from a duplicate axis' + with assertRaisesRegexp(ValueError, msg): + df['newcol'] = s + + # GH 4107, more descriptive error message + df = DataFrame(np.random.randint(0, 2, (4, 4)), + columns=['a', 'b', 'c', 'd']) + + msg = 'incompatible index of inserted column with frame index' + with assertRaisesRegexp(TypeError, msg): + df['gr'] = df.groupby(['b', 'c']).count() + + def test_insert_benchmark(self): + # from the vb_suite/frame_methods/frame_insert_columns + N = 10 + K = 5 + df = DataFrame(index=lrange(N)) + new_col = np.random.randn(N) + for i in range(K): + df[i] = new_col + expected = DataFrame(np.repeat(new_col, K).reshape(N, K), + index=lrange(N)) + assert_frame_equal(df, expected) + + def test_insert(self): + df = DataFrame(np.random.randn(5, 3), index=np.arange(5), + columns=['c', 'b', 'a']) + + df.insert(0, 'foo', df['a']) + self.assert_numpy_array_equal(df.columns, 
['foo', 'c', 'b', 'a']) + assert_almost_equal(df['a'], df['foo']) + + df.insert(2, 'bar', df['c']) + self.assert_numpy_array_equal(df.columns, + ['foo', 'c', 'bar', 'b', 'a']) + assert_almost_equal(df['c'], df['bar']) + + # diff dtype + + # new item + df['x'] = df['a'].astype('float32') + result = Series(dict(float64=5, float32=1)) + self.assertTrue((df.get_dtype_counts() == result).all()) + + # replacing current (in different block) + df['a'] = df['a'].astype('float32') + result = Series(dict(float64=4, float32=2)) + self.assertTrue((df.get_dtype_counts() == result).all()) + + df['y'] = df['a'].astype('int32') + result = Series(dict(float64=4, float32=2, int32=1)) + self.assertTrue((df.get_dtype_counts() == result).all()) + + with assertRaisesRegexp(ValueError, 'already exists'): + df.insert(1, 'a', df['b']) + self.assertRaises(ValueError, df.insert, 1, 'c', df['b']) + + df.columns.name = 'some_name' + # preserve columns name field + df.insert(0, 'baz', df['c']) + self.assertEqual(df.columns.name, 'some_name') + + def test_delitem(self): + del self.frame['A'] + self.assertNotIn('A', self.frame) + + def test_pop(self): + self.frame.columns.name = 'baz' + + self.frame.pop('A') + self.assertNotIn('A', self.frame) + + self.frame['foo'] = 'bar' + self.frame.pop('foo') + self.assertNotIn('foo', self.frame) + # TODO self.assertEqual(self.frame.columns.name, 'baz') + + # 10912 + # inplace ops cause caching issue + a = DataFrame([[1, 2, 3], [4, 5, 6]], columns=[ + 'A', 'B', 'C'], index=['X', 'Y']) + b = a.pop('B') + b += 1 + + # original frame + expected = DataFrame([[1, 3], [4, 6]], columns=[ + 'A', 'C'], index=['X', 'Y']) + assert_frame_equal(a, expected) + + # result + expected = Series([2, 5], index=['X', 'Y'], name='B') + 1 + assert_series_equal(b, expected) + + def test_pop_non_unique_cols(self): + df = DataFrame({0: [0, 1], 1: [0, 1], 2: [4, 5]}) + df.columns = ["a", "b", "a"] + + res = df.pop("a") + self.assertEqual(type(res), DataFrame) + 
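The pop tests above (GH 10912) guard against a caching bug: mutating the popped Series must not write back into the frame. Sketched with illustrative data:

```python
import pandas as pd

df = pd.DataFrame({'A': [1, 4], 'B': [2, 5], 'C': [3, 6]}, index=['X', 'Y'])
b = df.pop('B')

# mutating the popped column must not resurrect it in df
b += 1
```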
self.assertEqual(len(res), 2) + self.assertEqual(len(df.columns), 1) + self.assertTrue("b" in df.columns) + self.assertFalse("a" in df.columns) + self.assertEqual(len(df.index), 2) + + def test_insert_column_bug_4032(self): + + # GH4032, inserting a column and renaming causing errors + df = DataFrame({'b': [1.1, 2.2]}) + df = df.rename(columns={}) + df.insert(0, 'a', [1, 2]) + + result = df.rename(columns={}) + str(result) + expected = DataFrame([[1, 1.1], [2, 2.2]], columns=['a', 'b']) + assert_frame_equal(result, expected) + df.insert(0, 'c', [1.3, 2.3]) + + result = df.rename(columns={}) + str(result) + + expected = DataFrame([[1.3, 1, 1.1], [2.3, 2, 2.2]], + columns=['c', 'a', 'b']) + assert_frame_equal(result, expected) diff --git a/pandas/tests/frame/test_nonunique_indexes.py b/pandas/tests/frame/test_nonunique_indexes.py new file mode 100644 index 0000000000000..1b24e829088f2 --- /dev/null +++ b/pandas/tests/frame/test_nonunique_indexes.py @@ -0,0 +1,454 @@ +# -*- coding: utf-8 -*- + +from __future__ import print_function + +import numpy as np + +from pandas.compat import lrange, u +from pandas import DataFrame, Series, MultiIndex, date_range +import pandas as pd + +from pandas.util.testing import (assert_series_equal, + assert_frame_equal, + assertRaisesRegexp) + +import pandas.util.testing as tm + +from pandas.tests.frame.common import TestData + + +class TestDataFrameNonuniqueIndexes(tm.TestCase, TestData): + + _multiprocess_can_split_ = True + + def test_column_dups_operations(self): + + def check(result, expected=None): + if expected is not None: + assert_frame_equal(result, expected) + result.dtypes + str(result) + + # assignment + # GH 3687 + arr = np.random.randn(3, 2) + idx = lrange(2) + df = DataFrame(arr, columns=['A', 'A']) + df.columns = idx + expected = DataFrame(arr, columns=idx) + check(df, expected) + + idx = date_range('20130101', periods=4, freq='Q-NOV') + df = DataFrame([[1, 1, 1, 5], [1, 1, 2, 5], [2, 1, 3, 5]], + columns=['a', 'a', 'a', 
'a']) + df.columns = idx + expected = DataFrame( + [[1, 1, 1, 5], [1, 1, 2, 5], [2, 1, 3, 5]], columns=idx) + check(df, expected) + + # insert + df = DataFrame([[1, 1, 1, 5], [1, 1, 2, 5], [2, 1, 3, 5]], + columns=['foo', 'bar', 'foo', 'hello']) + df['string'] = 'bah' + expected = DataFrame([[1, 1, 1, 5, 'bah'], [1, 1, 2, 5, 'bah'], + [2, 1, 3, 5, 'bah']], + columns=['foo', 'bar', 'foo', 'hello', 'string']) + check(df, expected) + with assertRaisesRegexp(ValueError, 'Length of value'): + df.insert(0, 'AnotherColumn', range(len(df.index) - 1)) + + # insert same dtype + df['foo2'] = 3 + expected = DataFrame([[1, 1, 1, 5, 'bah', 3], [1, 1, 2, 5, 'bah', 3], + [2, 1, 3, 5, 'bah', 3]], + columns=['foo', 'bar', 'foo', 'hello', + 'string', 'foo2']) + check(df, expected) + + # set (non-dup) + df['foo2'] = 4 + expected = DataFrame([[1, 1, 1, 5, 'bah', 4], [1, 1, 2, 5, 'bah', 4], + [2, 1, 3, 5, 'bah', 4]], + columns=['foo', 'bar', 'foo', 'hello', + 'string', 'foo2']) + check(df, expected) + df['foo2'] = 3 + + # delete (non dup) + del df['bar'] + expected = DataFrame([[1, 1, 5, 'bah', 3], [1, 2, 5, 'bah', 3], + [2, 3, 5, 'bah', 3]], + columns=['foo', 'foo', 'hello', 'string', 'foo2']) + check(df, expected) + + # try to delete again (its not consolidated) + del df['hello'] + expected = DataFrame([[1, 1, 'bah', 3], [1, 2, 'bah', 3], + [2, 3, 'bah', 3]], + columns=['foo', 'foo', 'string', 'foo2']) + check(df, expected) + + # consolidate + df = df.consolidate() + expected = DataFrame([[1, 1, 'bah', 3], [1, 2, 'bah', 3], + [2, 3, 'bah', 3]], + columns=['foo', 'foo', 'string', 'foo2']) + check(df, expected) + + # insert + df.insert(2, 'new_col', 5.) + expected = DataFrame([[1, 1, 5., 'bah', 3], [1, 2, 5., 'bah', 3], + [2, 3, 5., 'bah', 3]], + columns=['foo', 'foo', 'new_col', 'string', + 'foo2']) + check(df, expected) + + # insert a dup + assertRaisesRegexp(ValueError, 'cannot insert', + df.insert, 2, 'new_col', 4.) 
+ df.insert(2, 'new_col', 4., allow_duplicates=True) + expected = DataFrame([[1, 1, 4., 5., 'bah', 3], + [1, 2, 4., 5., 'bah', 3], + [2, 3, 4., 5., 'bah', 3]], + columns=['foo', 'foo', 'new_col', + 'new_col', 'string', 'foo2']) + check(df, expected) + + # delete (dup) + del df['foo'] + expected = DataFrame([[4., 5., 'bah', 3], [4., 5., 'bah', 3], + [4., 5., 'bah', 3]], + columns=['new_col', 'new_col', 'string', 'foo2']) + assert_frame_equal(df, expected) + + # dup across dtypes + df = DataFrame([[1, 1, 1., 5], [1, 1, 2., 5], [2, 1, 3., 5]], + columns=['foo', 'bar', 'foo', 'hello']) + check(df) + + df['foo2'] = 7. + expected = DataFrame([[1, 1, 1., 5, 7.], [1, 1, 2., 5, 7.], + [2, 1, 3., 5, 7.]], + columns=['foo', 'bar', 'foo', 'hello', 'foo2']) + check(df, expected) + + result = df['foo'] + expected = DataFrame([[1, 1.], [1, 2.], [2, 3.]], + columns=['foo', 'foo']) + check(result, expected) + + # multiple replacements + df['foo'] = 'string' + expected = DataFrame([['string', 1, 'string', 5, 7.], + ['string', 1, 'string', 5, 7.], + ['string', 1, 'string', 5, 7.]], + columns=['foo', 'bar', 'foo', 'hello', 'foo2']) + check(df, expected) + + del df['foo'] + expected = DataFrame([[1, 5, 7.], [1, 5, 7.], [1, 5, 7.]], columns=[ + 'bar', 'hello', 'foo2']) + check(df, expected) + + # values + df = DataFrame([[1, 2.5], [3, 4.5]], index=[1, 2], columns=['x', 'x']) + result = df.values + expected = np.array([[1, 2.5], [3, 4.5]]) + self.assertTrue((result == expected).all().all()) + + # rename, GH 4403 + df4 = DataFrame( + {'TClose': [22.02], + 'RT': [0.0454], + 'TExg': [0.0422]}, + index=MultiIndex.from_tuples([(600809, 20130331)], + names=['STK_ID', 'RPT_Date'])) + + df5 = DataFrame({'STK_ID': [600809] * 3, + 'RPT_Date': [20120930, 20121231, 20130331], + 'STK_Name': [u('饡驦'), u('饡驦'), u('饡驦')], + 'TClose': [38.05, 41.66, 30.01]}, + index=MultiIndex.from_tuples( + [(600809, 20120930), + (600809, 20121231), + (600809, 20130331)], + names=['STK_ID', 'RPT_Date'])) + + k = 
pd.merge(df4, df5, how='inner', left_index=True, right_index=True) + result = k.rename( + columns={'TClose_x': 'TClose', 'TClose_y': 'QT_Close'}) + str(result) + result.dtypes + + expected = (DataFrame([[0.0454, 22.02, 0.0422, 20130331, 600809, + u('饡驦'), 30.01]], + columns=['RT', 'TClose', 'TExg', + 'RPT_Date', 'STK_ID', 'STK_Name', + 'QT_Close']) + .set_index(['STK_ID', 'RPT_Date'], drop=False)) + assert_frame_equal(result, expected) + + # reindex is invalid! + df = DataFrame([[1, 5, 7.], [1, 5, 7.], [1, 5, 7.]], + columns=['bar', 'a', 'a']) + self.assertRaises(ValueError, df.reindex, columns=['bar']) + self.assertRaises(ValueError, df.reindex, columns=['bar', 'foo']) + + # drop + df = DataFrame([[1, 5, 7.], [1, 5, 7.], [1, 5, 7.]], + columns=['bar', 'a', 'a']) + result = df.drop(['a'], axis=1) + expected = DataFrame([[1], [1], [1]], columns=['bar']) + check(result, expected) + result = df.drop('a', axis=1) + check(result, expected) + + # describe + df = DataFrame([[1, 1, 1], [2, 2, 2], [3, 3, 3]], + columns=['bar', 'a', 'a'], dtype='float64') + result = df.describe() + s = df.iloc[:, 0].describe() + expected = pd.concat([s, s, s], keys=df.columns, axis=1) + check(result, expected) + + # check column dups with index equal and not equal to df's index + df = DataFrame(np.random.randn(5, 3), index=['a', 'b', 'c', 'd', 'e'], + columns=['A', 'B', 'A']) + for index in [df.index, pd.Index(list('edcba'))]: + this_df = df.copy() + expected_ser = pd.Series(index.values, index=this_df.index) + expected_df = DataFrame.from_items([('A', expected_ser), + ('B', this_df['B']), + ('A', expected_ser)]) + this_df['A'] = index + check(this_df, expected_df) + + # operations + for op in ['__add__', '__mul__', '__sub__', '__truediv__']: + df = DataFrame(dict(A=np.arange(10), B=np.random.rand(10))) + expected = getattr(df, op)(df) + expected.columns = ['A', 'A'] + df.columns = ['A', 'A'] + result = getattr(df, op)(df) + check(result, expected) + + # multiple assignments that change 
dtypes + # the location indexer is a slice + # GH 6120 + df = DataFrame(np.random.randn(5, 2), columns=['that', 'that']) + expected = DataFrame(1.0, index=range(5), columns=['that', 'that']) + + df['that'] = 1.0 + check(df, expected) + + df = DataFrame(np.random.rand(5, 2), columns=['that', 'that']) + expected = DataFrame(1, index=range(5), columns=['that', 'that']) + + df['that'] = 1 + check(df, expected) + + def test_column_dups2(self): + + # drop buggy GH 6240 + df = DataFrame({'A': np.random.randn(5), + 'B': np.random.randn(5), + 'C': np.random.randn(5), + 'D': ['a', 'b', 'c', 'd', 'e']}) + + expected = df.take([0, 1, 1], axis=1) + df2 = df.take([2, 0, 1, 2, 1], axis=1) + result = df2.drop('C', axis=1) + assert_frame_equal(result, expected) + + # dropna + df = DataFrame({'A': np.random.randn(5), + 'B': np.random.randn(5), + 'C': np.random.randn(5), + 'D': ['a', 'b', 'c', 'd', 'e']}) + df.iloc[2, [0, 1, 2]] = np.nan + df.iloc[0, 0] = np.nan + df.iloc[1, 1] = np.nan + df.iloc[:, 3] = np.nan + expected = df.dropna(subset=['A', 'B', 'C'], how='all') + expected.columns = ['A', 'A', 'B', 'C'] + + df.columns = ['A', 'A', 'B', 'C'] + + result = df.dropna(subset=['A', 'C'], how='all') + assert_frame_equal(result, expected) + + def test_column_dups_indexing(self): + def check(result, expected=None): + if expected is not None: + assert_frame_equal(result, expected) + result.dtypes + str(result) + + # boolean indexing + # GH 4879 + dups = ['A', 'A', 'C', 'D'] + df = DataFrame(np.arange(12).reshape(3, 4), columns=[ + 'A', 'B', 'C', 'D'], dtype='float64') + expected = df[df.C > 6] + expected.columns = dups + df = DataFrame(np.arange(12).reshape(3, 4), + columns=dups, dtype='float64') + result = df[df.C > 6] + check(result, expected) + + # where + df = DataFrame(np.arange(12).reshape(3, 4), columns=[ + 'A', 'B', 'C', 'D'], dtype='float64') + expected = df[df > 6] + expected.columns = dups + df = DataFrame(np.arange(12).reshape(3, 4), + columns=dups, dtype='float64') + result 
= df[df > 6] + check(result, expected) + + # boolean with the duplicate raises + df = DataFrame(np.arange(12).reshape(3, 4), + columns=dups, dtype='float64') + self.assertRaises(ValueError, lambda: df[df.A > 6]) + + # dup aligining operations should work + # GH 5185 + df1 = DataFrame([1, 2, 3, 4, 5], index=[1, 2, 1, 2, 3]) + df2 = DataFrame([1, 2, 3], index=[1, 2, 3]) + expected = DataFrame([0, 2, 0, 2, 2], index=[1, 1, 2, 2, 3]) + result = df1.sub(df2) + assert_frame_equal(result, expected) + + # equality + df1 = DataFrame([[1, 2], [2, np.nan], [3, 4], [4, 4]], + columns=['A', 'B']) + df2 = DataFrame([[0, 1], [2, 4], [2, np.nan], [4, 5]], + columns=['A', 'A']) + + # not-comparing like-labelled + self.assertRaises(ValueError, lambda: df1 == df2) + + df1r = df1.reindex_like(df2) + result = df1r == df2 + expected = DataFrame([[False, True], [True, False], [False, False], [ + True, False]], columns=['A', 'A']) + assert_frame_equal(result, expected) + + # mixed column selection + # GH 5639 + dfbool = DataFrame({'one': Series([True, True, False], + index=['a', 'b', 'c']), + 'two': Series([False, False, True, False], + index=['a', 'b', 'c', 'd']), + 'three': Series([False, True, True, True], + index=['a', 'b', 'c', 'd'])}) + expected = pd.concat( + [dfbool['one'], dfbool['three'], dfbool['one']], axis=1) + result = dfbool[['one', 'three', 'one']] + check(result, expected) + + # multi-axis dups + # GH 6121 + df = DataFrame(np.arange(25.).reshape(5, 5), + index=['a', 'b', 'c', 'd', 'e'], + columns=['A', 'B', 'C', 'D', 'E']) + z = df[['A', 'C', 'A']].copy() + expected = z.ix[['a', 'c', 'a']] + + df = DataFrame(np.arange(25.).reshape(5, 5), + index=['a', 'b', 'c', 'd', 'e'], + columns=['A', 'B', 'C', 'D', 'E']) + z = df[['A', 'C', 'A']] + result = z.ix[['a', 'c', 'a']] + check(result, expected) + + def test_column_dups_indexing2(self): + + # GH 8363 + # datetime ops with a non-unique index + df = DataFrame({'A': np.arange(5, dtype='int64'), + 'B': np.arange(1, 6, 
dtype='int64')}, + index=[2, 2, 3, 3, 4]) + result = df.B - df.A + expected = Series(1, index=[2, 2, 3, 3, 4]) + assert_series_equal(result, expected) + + df = DataFrame({'A': date_range('20130101', periods=5), + 'B': date_range('20130101 09:00:00', periods=5)}, + index=[2, 2, 3, 3, 4]) + result = df.B - df.A + expected = Series(pd.Timedelta('9 hours'), index=[2, 2, 3, 3, 4]) + assert_series_equal(result, expected) + + def test_columns_with_dups(self): + # GH 3468 related + + # basic + df = DataFrame([[1, 2]], columns=['a', 'a']) + df.columns = ['a', 'a.1'] + str(df) + expected = DataFrame([[1, 2]], columns=['a', 'a.1']) + assert_frame_equal(df, expected) + + df = DataFrame([[1, 2, 3]], columns=['b', 'a', 'a']) + df.columns = ['b', 'a', 'a.1'] + str(df) + expected = DataFrame([[1, 2, 3]], columns=['b', 'a', 'a.1']) + assert_frame_equal(df, expected) + + # with a dup index + df = DataFrame([[1, 2]], columns=['a', 'a']) + df.columns = ['b', 'b'] + str(df) + expected = DataFrame([[1, 2]], columns=['b', 'b']) + assert_frame_equal(df, expected) + + # multi-dtype + df = DataFrame([[1, 2, 1., 2., 3., 'foo', 'bar']], + columns=['a', 'a', 'b', 'b', 'd', 'c', 'c']) + df.columns = list('ABCDEFG') + str(df) + expected = DataFrame( + [[1, 2, 1., 2., 3., 'foo', 'bar']], columns=list('ABCDEFG')) + assert_frame_equal(df, expected) + + # this is an error because we cannot disambiguate the dup columns + self.assertRaises(Exception, lambda x: DataFrame( + [[1, 2, 'foo', 'bar']], columns=['a', 'a', 'a', 'a'])) + + # dups across blocks + df_float = DataFrame(np.random.randn(10, 3), dtype='float64') + df_int = DataFrame(np.random.randn(10, 3), dtype='int64') + df_bool = DataFrame(True, index=df_float.index, + columns=df_float.columns) + df_object = DataFrame('foo', index=df_float.index, + columns=df_float.columns) + df_dt = DataFrame(pd.Timestamp('20010101'), + index=df_float.index, + columns=df_float.columns) + df = pd.concat([df_float, df_int, df_bool, df_object, df_dt], axis=1) + + 
self.assertEqual(len(df._data._blknos), len(df.columns)) + self.assertEqual(len(df._data._blklocs), len(df.columns)) + + # testing iget + for i in range(len(df.columns)): + df.iloc[:, i] + + # dup columns across dtype GH 2079/2194 + vals = [[1, -1, 2.], [2, -2, 3.]] + rs = DataFrame(vals, columns=['A', 'A', 'B']) + xp = DataFrame(vals) + xp.columns = ['A', 'A', 'B'] + assert_frame_equal(rs, xp) + + def test_as_matrix_duplicates(self): + df = DataFrame([[1, 2, 'a', 'b'], + [1, 2, 'a', 'b']], + columns=['one', 'one', 'two', 'two']) + + result = df.values + expected = np.array([[1, 2, 'a', 'b'], [1, 2, 'a', 'b']], + dtype=object) + + self.assertTrue(np.array_equal(result, expected)) diff --git a/pandas/tests/frame/test_operators.py b/pandas/tests/frame/test_operators.py new file mode 100644 index 0000000000000..9e48702ad2b0a --- /dev/null +++ b/pandas/tests/frame/test_operators.py @@ -0,0 +1,1171 @@ +# -*- coding: utf-8 -*- + +from __future__ import print_function + +from datetime import datetime +import operator + +import nose + +from numpy import nan, random +import numpy as np + +from pandas.compat import lrange +from pandas import compat +from pandas import (DataFrame, Series, MultiIndex, Timestamp, + date_range) +import pandas.core.common as com +import pandas as pd + +from pandas.util.testing import (assert_numpy_array_equal, + assert_series_equal, + assert_frame_equal, + assertRaisesRegexp) + +import pandas.util.testing as tm + +from pandas.tests.frame.common import (TestData, _check_mixed_float, + _check_mixed_int) + + +class TestDataFrameOperators(tm.TestCase, TestData): + + _multiprocess_can_split_ = True + + def test_operators(self): + garbage = random.random(4) + colSeries = Series(garbage, index=np.array(self.frame.columns)) + + idSum = self.frame + self.frame + seriesSum = self.frame + colSeries + + for col, series in compat.iteritems(idSum): + for idx, val in compat.iteritems(series): + origVal = self.frame[col][idx] * 2 + if not np.isnan(val): + 
self.assertEqual(val, origVal) + else: + self.assertTrue(np.isnan(origVal)) + + for col, series in compat.iteritems(seriesSum): + for idx, val in compat.iteritems(series): + origVal = self.frame[col][idx] + colSeries[col] + if not np.isnan(val): + self.assertEqual(val, origVal) + else: + self.assertTrue(np.isnan(origVal)) + + added = self.frame2 + self.frame2 + expected = self.frame2 * 2 + assert_frame_equal(added, expected) + + df = DataFrame({'a': ['a', None, 'b']}) + assert_frame_equal(df + df, DataFrame({'a': ['aa', np.nan, 'bb']})) + + # Test for issue #10181 + for dtype in ('float', 'int64'): + frames = [ + DataFrame(dtype=dtype), + DataFrame(columns=['A'], dtype=dtype), + DataFrame(index=[0], dtype=dtype), + ] + for df in frames: + self.assertTrue((df + df).equals(df)) + assert_frame_equal(df + df, df) + + def test_ops_np_scalar(self): + vals, xs = np.random.rand(5, 3), [nan, 7, -23, 2.718, -3.14, np.inf] + f = lambda x: DataFrame(x, index=list('ABCDE'), + columns=['jim', 'joe', 'jolie']) + + df = f(vals) + + for x in xs: + assert_frame_equal(df / np.array(x), f(vals / x)) + assert_frame_equal(np.array(x) * df, f(vals * x)) + assert_frame_equal(df + np.array(x), f(vals + x)) + assert_frame_equal(np.array(x) - df, f(x - vals)) + + def test_operators_boolean(self): + + # GH 5808 + # empty frames, non-mixed dtype + + result = DataFrame(index=[1]) & DataFrame(index=[1]) + assert_frame_equal(result, DataFrame(index=[1])) + + result = DataFrame(index=[1]) | DataFrame(index=[1]) + assert_frame_equal(result, DataFrame(index=[1])) + + result = DataFrame(index=[1]) & DataFrame(index=[1, 2]) + assert_frame_equal(result, DataFrame(index=[1, 2])) + + result = DataFrame(index=[1], columns=['A']) & DataFrame( + index=[1], columns=['A']) + assert_frame_equal(result, DataFrame(index=[1], columns=['A'])) + + result = DataFrame(True, index=[1], columns=['A']) & DataFrame( + True, index=[1], columns=['A']) + assert_frame_equal(result, DataFrame(True, index=[1], columns=['A'])) 
+ + result = DataFrame(True, index=[1], columns=['A']) | DataFrame( + True, index=[1], columns=['A']) + assert_frame_equal(result, DataFrame(True, index=[1], columns=['A'])) + + # boolean ops + result = DataFrame(1, index=[1], columns=['A']) | DataFrame( + True, index=[1], columns=['A']) + assert_frame_equal(result, DataFrame(1, index=[1], columns=['A'])) + + def f(): + DataFrame(1.0, index=[1], columns=['A']) | DataFrame( + True, index=[1], columns=['A']) + self.assertRaises(TypeError, f) + + def f(): + DataFrame('foo', index=[1], columns=['A']) | DataFrame( + True, index=[1], columns=['A']) + self.assertRaises(TypeError, f) + + def test_operators_none_as_na(self): + df = DataFrame({"col1": [2, 5.0, 123, None], + "col2": [1, 2, 3, 4]}, dtype=object) + + ops = [operator.add, operator.sub, operator.mul, operator.truediv] + + # since filling converts dtypes from object, changed expected to be + # object + for op in ops: + filled = df.fillna(np.nan) + result = op(df, 3) + expected = op(filled, 3).astype(object) + expected[com.isnull(expected)] = None + assert_frame_equal(result, expected) + + result = op(df, df) + expected = op(filled, filled).astype(object) + expected[com.isnull(expected)] = None + assert_frame_equal(result, expected) + + result = op(df, df.fillna(7)) + assert_frame_equal(result, expected) + + result = op(df.fillna(7), df) + assert_frame_equal(result, expected, check_dtype=False) + + def test_comparison_invalid(self): + + def check(df, df2): + + for (x, y) in [(df, df2), (df2, df)]: + self.assertRaises(TypeError, lambda: x == y) + self.assertRaises(TypeError, lambda: x != y) + self.assertRaises(TypeError, lambda: x >= y) + self.assertRaises(TypeError, lambda: x > y) + self.assertRaises(TypeError, lambda: x < y) + self.assertRaises(TypeError, lambda: x <= y) + + # GH4968 + # invalid date/int comparisons + df = DataFrame(np.random.randint(10, size=(10, 1)), columns=['a']) + df['dates'] = date_range('20010101', periods=len(df)) + + df2 = df.copy() + 
df2['dates'] = df['a'] + check(df, df2) + + df = DataFrame(np.random.randint(10, size=(10, 2)), columns=['a', 'b']) + df2 = DataFrame({'a': date_range('20010101', periods=len( + df)), 'b': date_range('20100101', periods=len(df))}) + check(df, df2) + + def test_timestamp_compare(self): + # make sure we can compare Timestamps on the right AND left hand side + # GH4982 + df = DataFrame({'dates1': date_range('20010101', periods=10), + 'dates2': date_range('20010102', periods=10), + 'intcol': np.random.randint(1000000000, size=10), + 'floatcol': np.random.randn(10), + 'stringcol': list(tm.rands(10))}) + df.loc[np.random.rand(len(df)) > 0.5, 'dates2'] = pd.NaT + ops = {'gt': 'lt', 'lt': 'gt', 'ge': 'le', 'le': 'ge', 'eq': 'eq', + 'ne': 'ne'} + for left, right in ops.items(): + left_f = getattr(operator, left) + right_f = getattr(operator, right) + + # no nats + expected = left_f(df, Timestamp('20010109')) + result = right_f(Timestamp('20010109'), df) + assert_frame_equal(result, expected) + + # nats + expected = left_f(df, Timestamp('nat')) + result = right_f(Timestamp('nat'), df) + assert_frame_equal(result, expected) + + def test_modulo(self): + # GH3590, modulo as ints + p = DataFrame({'first': [3, 4, 5, 8], 'second': [0, 0, 0, 3]}) + + # this is technically wrong as the integer portion is coerced to float + # ### + expected = DataFrame({'first': Series([0, 0, 0, 0], dtype='float64'), + 'second': Series([np.nan, np.nan, np.nan, 0])}) + result = p % p + assert_frame_equal(result, expected) + + # numpy has a slightly different (wrong) treatement + result2 = DataFrame(p.values % p.values, index=p.index, + columns=p.columns, dtype='float64') + result2.iloc[0:3, 1] = np.nan + assert_frame_equal(result2, expected) + + result = p % 0 + expected = DataFrame(np.nan, index=p.index, columns=p.columns) + assert_frame_equal(result, expected) + + # numpy has a slightly different (wrong) treatement + result2 = DataFrame(p.values.astype('float64') % + 0, index=p.index, 
columns=p.columns) + assert_frame_equal(result2, expected) + + # not commutative with series + p = DataFrame(np.random.randn(10, 5)) + s = p[0] + res = s % p + res2 = p % s + self.assertFalse(np.array_equal(res.fillna(0), res2.fillna(0))) + + def test_div(self): + + # integer div, but deal with the 0's (GH 9144) + p = DataFrame({'first': [3, 4, 5, 8], 'second': [0, 0, 0, 3]}) + result = p / p + + expected = DataFrame({'first': Series([1.0, 1.0, 1.0, 1.0]), + 'second': Series([nan, nan, nan, 1])}) + assert_frame_equal(result, expected) + + result2 = DataFrame(p.values.astype('float') / p.values, index=p.index, + columns=p.columns) + assert_frame_equal(result2, expected) + + result = p / 0 + expected = DataFrame(np.inf, index=p.index, columns=p.columns) + expected.iloc[0:3, 1] = nan + assert_frame_equal(result, expected) + + # numpy has a slightly different (wrong) treatement + result2 = DataFrame(p.values.astype('float64') / 0, index=p.index, + columns=p.columns) + assert_frame_equal(result2, expected) + + p = DataFrame(np.random.randn(10, 5)) + s = p[0] + res = s / p + res2 = p / s + self.assertFalse(np.array_equal(res.fillna(0), res2.fillna(0))) + + def test_logical_operators(self): + + def _check_bin_op(op): + result = op(df1, df2) + expected = DataFrame(op(df1.values, df2.values), index=df1.index, + columns=df1.columns) + self.assertEqual(result.values.dtype, np.bool_) + assert_frame_equal(result, expected) + + def _check_unary_op(op): + result = op(df1) + expected = DataFrame(op(df1.values), index=df1.index, + columns=df1.columns) + self.assertEqual(result.values.dtype, np.bool_) + assert_frame_equal(result, expected) + + df1 = {'a': {'a': True, 'b': False, 'c': False, 'd': True, 'e': True}, + 'b': {'a': False, 'b': True, 'c': False, + 'd': False, 'e': False}, + 'c': {'a': False, 'b': False, 'c': True, + 'd': False, 'e': False}, + 'd': {'a': True, 'b': False, 'c': False, 'd': True, 'e': True}, + 'e': {'a': True, 'b': False, 'c': False, 'd': True, 'e': True}} + 
+ df2 = {'a': {'a': True, 'b': False, 'c': True, 'd': False, 'e': False}, + 'b': {'a': False, 'b': True, 'c': False, + 'd': False, 'e': False}, + 'c': {'a': True, 'b': False, 'c': True, 'd': False, 'e': False}, + 'd': {'a': False, 'b': False, 'c': False, + 'd': True, 'e': False}, + 'e': {'a': False, 'b': False, 'c': False, + 'd': False, 'e': True}} + + df1 = DataFrame(df1) + df2 = DataFrame(df2) + + _check_bin_op(operator.and_) + _check_bin_op(operator.or_) + _check_bin_op(operator.xor) + + # operator.neg is deprecated in numpy >= 1.9 + _check_unary_op(operator.inv) + + def test_logical_typeerror(self): + if not compat.PY3: + self.assertRaises(TypeError, self.frame.__eq__, 'foo') + self.assertRaises(TypeError, self.frame.__lt__, 'foo') + self.assertRaises(TypeError, self.frame.__gt__, 'foo') + self.assertRaises(TypeError, self.frame.__ne__, 'foo') + else: + raise nose.SkipTest('test_logical_typeerror not tested on PY3') + + def test_logical_with_nas(self): + d = DataFrame({'a': [np.nan, False], 'b': [True, True]}) + + # GH4947 + # bool comparisons should return bool + result = d['a'] | d['b'] + expected = Series([False, True]) + assert_series_equal(result, expected) + + # GH4604, automatic casting here + result = d['a'].fillna(False) | d['b'] + expected = Series([True, True]) + assert_series_equal(result, expected) + + result = d['a'].fillna(False, downcast=False) | d['b'] + expected = Series([True, True]) + assert_series_equal(result, expected) + + def test_neg(self): + # what to do? 
+ assert_frame_equal(-self.frame, -1 * self.frame) + + def test_invert(self): + assert_frame_equal(-(self.frame < 0), ~(self.frame < 0)) + + def test_arith_flex_frame(self): + ops = ['add', 'sub', 'mul', 'div', 'truediv', 'pow', 'floordiv', 'mod'] + if not compat.PY3: + aliases = {} + else: + aliases = {'div': 'truediv'} + + for op in ops: + try: + alias = aliases.get(op, op) + f = getattr(operator, alias) + result = getattr(self.frame, op)(2 * self.frame) + exp = f(self.frame, 2 * self.frame) + assert_frame_equal(result, exp) + + # vs mix float + result = getattr(self.mixed_float, op)(2 * self.mixed_float) + exp = f(self.mixed_float, 2 * self.mixed_float) + assert_frame_equal(result, exp) + _check_mixed_float(result, dtype=dict(C=None)) + + # vs mix int + if op in ['add', 'sub', 'mul']: + result = getattr(self.mixed_int, op)(2 + self.mixed_int) + exp = f(self.mixed_int, 2 + self.mixed_int) + + # overflow in the uint + dtype = None + if op in ['sub']: + dtype = dict(B='object', C=None) + elif op in ['add', 'mul']: + dtype = dict(C=None) + assert_frame_equal(result, exp) + _check_mixed_int(result, dtype=dtype) + + # rops + r_f = lambda x, y: f(y, x) + result = getattr(self.frame, 'r' + op)(2 * self.frame) + exp = r_f(self.frame, 2 * self.frame) + assert_frame_equal(result, exp) + + # vs mix float + result = getattr(self.mixed_float, op)( + 2 * self.mixed_float) + exp = f(self.mixed_float, 2 * self.mixed_float) + assert_frame_equal(result, exp) + _check_mixed_float(result, dtype=dict(C=None)) + + result = getattr(self.intframe, op)(2 * self.intframe) + exp = f(self.intframe, 2 * self.intframe) + assert_frame_equal(result, exp) + + # vs mix int + if op in ['add', 'sub', 'mul']: + result = getattr(self.mixed_int, op)( + 2 + self.mixed_int) + exp = f(self.mixed_int, 2 + self.mixed_int) + + # overflow in the uint + dtype = None + if op in ['sub']: + dtype = dict(B='object', C=None) + elif op in ['add', 'mul']: + dtype = dict(C=None) + assert_frame_equal(result, exp) + 
_check_mixed_int(result, dtype=dtype) + except: + com.pprint_thing("Failing operation %r" % op) + raise + + # ndim >= 3 + ndim_5 = np.ones(self.frame.shape + (3, 4, 5)) + with assertRaisesRegexp(ValueError, 'shape'): + f(self.frame, ndim_5) + + with assertRaisesRegexp(ValueError, 'shape'): + getattr(self.frame, op)(ndim_5) + + # res_add = self.frame.add(self.frame) + # res_sub = self.frame.sub(self.frame) + # res_mul = self.frame.mul(self.frame) + # res_div = self.frame.div(2 * self.frame) + + # assert_frame_equal(res_add, self.frame + self.frame) + # assert_frame_equal(res_sub, self.frame - self.frame) + # assert_frame_equal(res_mul, self.frame * self.frame) + # assert_frame_equal(res_div, self.frame / (2 * self.frame)) + + const_add = self.frame.add(1) + assert_frame_equal(const_add, self.frame + 1) + + # corner cases + result = self.frame.add(self.frame[:0]) + assert_frame_equal(result, self.frame * np.nan) + + result = self.frame[:0].add(self.frame) + assert_frame_equal(result, self.frame * np.nan) + with assertRaisesRegexp(NotImplementedError, 'fill_value'): + self.frame.add(self.frame.iloc[0], fill_value=3) + with assertRaisesRegexp(NotImplementedError, 'fill_value'): + self.frame.add(self.frame.iloc[0], axis='index', fill_value=3) + + def test_binary_ops_align(self): + + # test aligning binary ops + + # GH 6681 + index = MultiIndex.from_product([list('abc'), + ['one', 'two', 'three'], + [1, 2, 3]], + names=['first', 'second', 'third']) + + df = DataFrame(np.arange(27 * 3).reshape(27, 3), + index=index, + columns=['value1', 'value2', 'value3']).sortlevel() + + idx = pd.IndexSlice + for op in ['add', 'sub', 'mul', 'div', 'truediv']: + opa = getattr(operator, op, None) + if opa is None: + continue + + x = Series([1.0, 10.0, 100.0], [1, 2, 3]) + result = getattr(df, op)(x, level='third', axis=0) + + expected = pd.concat([opa(df.loc[idx[:, :, i], :], v) + for i, v in x.iteritems()]).sortlevel() + assert_frame_equal(result, expected) + + x = Series([1.0, 10.0], 
['two', 'three']) + result = getattr(df, op)(x, level='second', axis=0) + + expected = (pd.concat([opa(df.loc[idx[:, i], :], v) + for i, v in x.iteritems()]) + .reindex_like(df).sortlevel()) + assert_frame_equal(result, expected) + + # GH9463 (alignment level of dataframe with series) + + midx = MultiIndex.from_product([['A', 'B'], ['a', 'b']]) + df = DataFrame(np.ones((2, 4), dtype='int64'), columns=midx) + s = pd.Series({'a': 1, 'b': 2}) + + df2 = df.copy() + df2.columns.names = ['lvl0', 'lvl1'] + s2 = s.copy() + s2.index.name = 'lvl1' + + # different cases of integer/string level names: + res1 = df.mul(s, axis=1, level=1) + res2 = df.mul(s2, axis=1, level=1) + res3 = df2.mul(s, axis=1, level=1) + res4 = df2.mul(s2, axis=1, level=1) + res5 = df2.mul(s, axis=1, level='lvl1') + res6 = df2.mul(s2, axis=1, level='lvl1') + + exp = DataFrame(np.array([[1, 2, 1, 2], [1, 2, 1, 2]], dtype='int64'), + columns=midx) + + for res in [res1, res2]: + assert_frame_equal(res, exp) + + exp.columns.names = ['lvl0', 'lvl1'] + for res in [res3, res4, res5, res6]: + assert_frame_equal(res, exp) + + def test_arith_mixed(self): + + left = DataFrame({'A': ['a', 'b', 'c'], + 'B': [1, 2, 3]}) + + result = left + left + expected = DataFrame({'A': ['aa', 'bb', 'cc'], + 'B': [2, 4, 6]}) + assert_frame_equal(result, expected) + + def test_arith_getitem_commute(self): + df = DataFrame({'A': [1.1, 3.3], 'B': [2.5, -3.9]}) + + self._test_op(df, operator.add) + self._test_op(df, operator.sub) + self._test_op(df, operator.mul) + self._test_op(df, operator.truediv) + self._test_op(df, operator.floordiv) + self._test_op(df, operator.pow) + + self._test_op(df, lambda x, y: y + x) + self._test_op(df, lambda x, y: y - x) + self._test_op(df, lambda x, y: y * x) + self._test_op(df, lambda x, y: y / x) + self._test_op(df, lambda x, y: y ** x) + + self._test_op(df, lambda x, y: x + y) + self._test_op(df, lambda x, y: x - y) + self._test_op(df, lambda x, y: x * y) + self._test_op(df, lambda x, y: x / y) + 
self._test_op(df, lambda x, y: x ** y) + + @staticmethod + def _test_op(df, op): + result = op(df, 1) + + if not df.columns.is_unique: + raise ValueError("Only unique columns supported by this test") + + for col in result.columns: + assert_series_equal(result[col], op(df[col], 1)) + + def test_bool_flex_frame(self): + data = np.random.randn(5, 3) + other_data = np.random.randn(5, 3) + df = DataFrame(data) + other = DataFrame(other_data) + ndim_5 = np.ones(df.shape + (1, 3)) + + # Unaligned + def _check_unaligned_frame(meth, op, df, other): + part_o = other.ix[3:, 1:].copy() + rs = meth(part_o) + xp = op(df, part_o.reindex(index=df.index, columns=df.columns)) + assert_frame_equal(rs, xp) + + # DataFrame + self.assertTrue(df.eq(df).values.all()) + self.assertFalse(df.ne(df).values.any()) + for op in ['eq', 'ne', 'gt', 'lt', 'ge', 'le']: + f = getattr(df, op) + o = getattr(operator, op) + # No NAs + assert_frame_equal(f(other), o(df, other)) + _check_unaligned_frame(f, o, df, other) + # ndarray + assert_frame_equal(f(other.values), o(df, other.values)) + # scalar + assert_frame_equal(f(0), o(df, 0)) + # NAs + assert_frame_equal(f(np.nan), o(df, np.nan)) + with assertRaisesRegexp(ValueError, 'shape'): + f(ndim_5) + + # Series + def _test_seq(df, idx_ser, col_ser): + idx_eq = df.eq(idx_ser, axis=0) + col_eq = df.eq(col_ser) + idx_ne = df.ne(idx_ser, axis=0) + col_ne = df.ne(col_ser) + assert_frame_equal(col_eq, df == Series(col_ser)) + assert_frame_equal(col_eq, -col_ne) + assert_frame_equal(idx_eq, -idx_ne) + assert_frame_equal(idx_eq, df.T.eq(idx_ser).T) + assert_frame_equal(col_eq, df.eq(list(col_ser))) + assert_frame_equal(idx_eq, df.eq(Series(idx_ser), axis=0)) + assert_frame_equal(idx_eq, df.eq(list(idx_ser), axis=0)) + + idx_gt = df.gt(idx_ser, axis=0) + col_gt = df.gt(col_ser) + idx_le = df.le(idx_ser, axis=0) + col_le = df.le(col_ser) + + assert_frame_equal(col_gt, df > Series(col_ser)) + assert_frame_equal(col_gt, -col_le) + assert_frame_equal(idx_gt, -idx_le) 
+ assert_frame_equal(idx_gt, df.T.gt(idx_ser).T) + + idx_ge = df.ge(idx_ser, axis=0) + col_ge = df.ge(col_ser) + idx_lt = df.lt(idx_ser, axis=0) + col_lt = df.lt(col_ser) + assert_frame_equal(col_ge, df >= Series(col_ser)) + assert_frame_equal(col_ge, -col_lt) + assert_frame_equal(idx_ge, -idx_lt) + assert_frame_equal(idx_ge, df.T.ge(idx_ser).T) + + idx_ser = Series(np.random.randn(5)) + col_ser = Series(np.random.randn(3)) + _test_seq(df, idx_ser, col_ser) + + # list/tuple + _test_seq(df, idx_ser.values, col_ser.values) + + # NA + df.ix[0, 0] = np.nan + rs = df.eq(df) + self.assertFalse(rs.ix[0, 0]) + rs = df.ne(df) + self.assertTrue(rs.ix[0, 0]) + rs = df.gt(df) + self.assertFalse(rs.ix[0, 0]) + rs = df.lt(df) + self.assertFalse(rs.ix[0, 0]) + rs = df.ge(df) + self.assertFalse(rs.ix[0, 0]) + rs = df.le(df) + self.assertFalse(rs.ix[0, 0]) + + # complex + arr = np.array([np.nan, 1, 6, np.nan]) + arr2 = np.array([2j, np.nan, 7, None]) + df = DataFrame({'a': arr}) + df2 = DataFrame({'a': arr2}) + rs = df.gt(df2) + self.assertFalse(rs.values.any()) + rs = df.ne(df2) + self.assertTrue(rs.values.all()) + + arr3 = np.array([2j, np.nan, None]) + df3 = DataFrame({'a': arr3}) + rs = df3.gt(2j) + self.assertFalse(rs.values.any()) + + # corner, dtype=object + df1 = DataFrame({'col': ['foo', np.nan, 'bar']}) + df2 = DataFrame({'col': ['foo', datetime.now(), 'bar']}) + result = df1.ne(df2) + exp = DataFrame({'col': [False, True, False]}) + assert_frame_equal(result, exp) + + def test_arith_flex_series(self): + df = self.simple + + row = df.xs('a') + col = df['two'] + # after arithmetic refactor, add truediv here + ops = ['add', 'sub', 'mul', 'mod'] + for op in ops: + f = getattr(df, op) + op = getattr(operator, op) + assert_frame_equal(f(row), op(df, row)) + assert_frame_equal(f(col, axis=0), op(df.T, col).T) + + # special case for some reason + assert_frame_equal(df.add(row, axis=None), df + row) + + # cases which will be refactored after big arithmetic refactor + 
assert_frame_equal(df.div(row), df / row) + assert_frame_equal(df.div(col, axis=0), (df.T / col).T) + + # broadcasting issue in GH7325 + df = DataFrame(np.arange(3 * 2).reshape((3, 2)), dtype='int64') + expected = DataFrame([[nan, np.inf], [1.0, 1.5], [1.0, 1.25]]) + result = df.div(df[0], axis='index') + assert_frame_equal(result, expected) + + df = DataFrame(np.arange(3 * 2).reshape((3, 2)), dtype='float64') + expected = DataFrame([[np.nan, np.inf], [1.0, 1.5], [1.0, 1.25]]) + result = df.div(df[0], axis='index') + assert_frame_equal(result, expected) + + def test_arith_non_pandas_object(self): + df = self.simple + + val1 = df.xs('a').values + added = DataFrame(df.values + val1, index=df.index, columns=df.columns) + assert_frame_equal(df + val1, added) + + added = DataFrame((df.values.T + val1).T, + index=df.index, columns=df.columns) + assert_frame_equal(df.add(val1, axis=0), added) + + val2 = list(df['two']) + + added = DataFrame(df.values + val2, index=df.index, columns=df.columns) + assert_frame_equal(df + val2, added) + + added = DataFrame((df.values.T + val2).T, index=df.index, + columns=df.columns) + assert_frame_equal(df.add(val2, axis='index'), added) + + val3 = np.random.rand(*df.shape) + added = DataFrame(df.values + val3, index=df.index, columns=df.columns) + assert_frame_equal(df.add(val3), added) + + def test_combineFrame(self): + frame_copy = self.frame.reindex(self.frame.index[::2]) + + del frame_copy['D'] + frame_copy['C'][:5] = nan + + added = self.frame + frame_copy + tm.assert_dict_equal(added['A'].valid(), + self.frame['A'] * 2, + compare_keys=False) + + self.assertTrue( + np.isnan(added['C'].reindex(frame_copy.index)[:5]).all()) + + # assert(False) + + self.assertTrue(np.isnan(added['D']).all()) + + self_added = self.frame + self.frame + self.assertTrue(self_added.index.equals(self.frame.index)) + + added_rev = frame_copy + self.frame + self.assertTrue(np.isnan(added['D']).all()) + self.assertTrue(np.isnan(added_rev['D']).all()) + + # corner 
cases + + # empty + plus_empty = self.frame + self.empty + self.assertTrue(np.isnan(plus_empty.values).all()) + + empty_plus = self.empty + self.frame + self.assertTrue(np.isnan(empty_plus.values).all()) + + empty_empty = self.empty + self.empty + self.assertTrue(empty_empty.empty) + + # out of order + reverse = self.frame.reindex(columns=self.frame.columns[::-1]) + + assert_frame_equal(reverse + self.frame, self.frame * 2) + + # mix vs float64, upcast + added = self.frame + self.mixed_float + _check_mixed_float(added, dtype='float64') + added = self.mixed_float + self.frame + _check_mixed_float(added, dtype='float64') + + # mix vs mix + added = self.mixed_float + self.mixed_float2 + _check_mixed_float(added, dtype=dict(C=None)) + added = self.mixed_float2 + self.mixed_float + _check_mixed_float(added, dtype=dict(C=None)) + + # with int + added = self.frame + self.mixed_int + _check_mixed_float(added, dtype='float64') + + def test_combineSeries(self): + + # Series + series = self.frame.xs(self.frame.index[0]) + + added = self.frame + series + + for key, s in compat.iteritems(added): + assert_series_equal(s, self.frame[key] + series[key]) + + larger_series = series.to_dict() + larger_series['E'] = 1 + larger_series = Series(larger_series) + larger_added = self.frame + larger_series + + for key, s in compat.iteritems(self.frame): + assert_series_equal(larger_added[key], s + series[key]) + self.assertIn('E', larger_added) + self.assertTrue(np.isnan(larger_added['E']).all()) + + # vs mix (upcast) as needed + added = self.mixed_float + series + _check_mixed_float(added, dtype='float64') + added = self.mixed_float + series.astype('float32') + _check_mixed_float(added, dtype=dict(C=None)) + added = self.mixed_float + series.astype('float16') + _check_mixed_float(added, dtype=dict(C=None)) + + # these raise with numexpr.....as we are adding an int64 to an + # uint64....weird vs int + + # added = self.mixed_int + (100*series).astype('int64') + # _check_mixed_int(added, 
dtype = dict(A = 'int64', B = 'float64', C =
+        # 'int64', D = 'int64'))
+        # added = self.mixed_int + (100*series).astype('int32')
+        # _check_mixed_int(added, dtype = dict(A = 'int32', B = 'float64', C =
+        # 'int32', D = 'int64'))
+
+        # TimeSeries
+        ts = self.tsframe['A']
+
+        # 10890
+        # we no longer allow auto timeseries broadcasting
+        # and require explicit broadcasting
+        added = self.tsframe.add(ts, axis='index')
+
+        for key, col in compat.iteritems(self.tsframe):
+            result = col + ts
+            assert_series_equal(added[key], result, check_names=False)
+            self.assertEqual(added[key].name, key)
+            if col.name == ts.name:
+                self.assertEqual(result.name, 'A')
+            else:
+                self.assertTrue(result.name is None)
+
+        smaller_frame = self.tsframe[:-5]
+        smaller_added = smaller_frame.add(ts, axis='index')
+
+        self.assertTrue(smaller_added.index.equals(self.tsframe.index))
+
+        smaller_ts = ts[:-5]
+        smaller_added2 = self.tsframe.add(smaller_ts, axis='index')
+        assert_frame_equal(smaller_added, smaller_added2)
+
+        # length 0, result is all-nan
+        result = self.tsframe.add(ts[:0], axis='index')
+        expected = DataFrame(np.nan, index=self.tsframe.index,
+                             columns=self.tsframe.columns)
+        assert_frame_equal(result, expected)
+
+        # Frame is all-nan
+        result = self.tsframe[:0].add(ts, axis='index')
+        expected = DataFrame(np.nan, index=self.tsframe.index,
+                             columns=self.tsframe.columns)
+        assert_frame_equal(result, expected)
+
+        # empty but with non-empty index
+        frame = self.tsframe[:1].reindex(columns=[])
+        result = frame.mul(ts, axis='index')
+        self.assertEqual(len(result), len(ts))
+
+    def test_combineFunc(self):
+        result = self.frame * 2
+        self.assert_numpy_array_equal(result.values, self.frame.values * 2)
+
+        # vs mix
+        result = self.mixed_float * 2
+        for c, s in compat.iteritems(result):
+            self.assert_numpy_array_equal(
+                s.values, self.mixed_float[c].values * 2)
+        _check_mixed_float(result, dtype=dict(C=None))
+
+        result = self.empty * 2
+        self.assertIs(result.index, self.empty.index)
+
self.assertEqual(len(result.columns), 0) + + def test_comparisons(self): + df1 = tm.makeTimeDataFrame() + df2 = tm.makeTimeDataFrame() + + row = self.simple.xs('a') + ndim_5 = np.ones(df1.shape + (1, 1, 1)) + + def test_comp(func): + result = func(df1, df2) + self.assert_numpy_array_equal(result.values, + func(df1.values, df2.values)) + with assertRaisesRegexp(ValueError, 'Wrong number of dimensions'): + func(df1, ndim_5) + + result2 = func(self.simple, row) + self.assert_numpy_array_equal(result2.values, + func(self.simple.values, row.values)) + + result3 = func(self.frame, 0) + self.assert_numpy_array_equal(result3.values, + func(self.frame.values, 0)) + + with assertRaisesRegexp(ValueError, 'Can only compare ' + 'identically-labeled DataFrame'): + func(self.simple, self.simple[:2]) + + test_comp(operator.eq) + test_comp(operator.ne) + test_comp(operator.lt) + test_comp(operator.gt) + test_comp(operator.ge) + test_comp(operator.le) + + def test_string_comparison(self): + df = DataFrame([{"a": 1, "b": "foo"}, {"a": 2, "b": "bar"}]) + mask_a = df.a > 1 + assert_frame_equal(df[mask_a], df.ix[1:1, :]) + assert_frame_equal(df[-mask_a], df.ix[0:0, :]) + + mask_b = df.b == "foo" + assert_frame_equal(df[mask_b], df.ix[0:0, :]) + assert_frame_equal(df[-mask_b], df.ix[1:1, :]) + + def test_float_none_comparison(self): + df = DataFrame(np.random.randn(8, 3), index=lrange(8), + columns=['A', 'B', 'C']) + + self.assertRaises(TypeError, df.__eq__, None) + + def test_boolean_comparison(self): + + # GH 4576 + # boolean comparisons with a tuple/list give unexpected results + df = DataFrame(np.arange(6).reshape((3, 2))) + b = np.array([2, 2]) + b_r = np.atleast_2d([2, 2]) + b_c = b_r.T + l = (2, 2, 2) + tup = tuple(l) + + # gt + expected = DataFrame([[False, False], [False, True], [True, True]]) + result = df > b + assert_frame_equal(result, expected) + + result = df.values > b + assert_numpy_array_equal(result, expected.values) + + result = df > l + assert_frame_equal(result, 
expected) + + result = df > tup + assert_frame_equal(result, expected) + + result = df > b_r + assert_frame_equal(result, expected) + + result = df.values > b_r + assert_numpy_array_equal(result, expected.values) + + self.assertRaises(ValueError, df.__gt__, b_c) + self.assertRaises(ValueError, df.values.__gt__, b_c) + + # == + expected = DataFrame([[False, False], [True, False], [False, False]]) + result = df == b + assert_frame_equal(result, expected) + + result = df == l + assert_frame_equal(result, expected) + + result = df == tup + assert_frame_equal(result, expected) + + result = df == b_r + assert_frame_equal(result, expected) + + result = df.values == b_r + assert_numpy_array_equal(result, expected.values) + + self.assertRaises(ValueError, lambda: df == b_c) + self.assertFalse((df.values == b_c)) + + # with alignment + df = DataFrame(np.arange(6).reshape((3, 2)), + columns=list('AB'), index=list('abc')) + expected.index = df.index + expected.columns = df.columns + + result = df == l + assert_frame_equal(result, expected) + + result = df == tup + assert_frame_equal(result, expected) + + # not shape compatible + self.assertRaises(ValueError, lambda: df == (2, 2)) + self.assertRaises(ValueError, lambda: df == [2, 2]) + + def test_combineAdd(self): + + with tm.assert_produces_warning(FutureWarning): + # trivial + comb = self.frame.combineAdd(self.frame) + assert_frame_equal(comb, self.frame * 2) + + # more rigorous + a = DataFrame([[1., nan, nan, 2., nan]], + columns=np.arange(5)) + b = DataFrame([[2., 3., nan, 2., 6., nan]], + columns=np.arange(6)) + expected = DataFrame([[3., 3., nan, 4., 6., nan]], + columns=np.arange(6)) + + result = a.combineAdd(b) + assert_frame_equal(result, expected) + result2 = a.T.combineAdd(b.T) + assert_frame_equal(result2, expected.T) + + expected2 = a.combine(b, operator.add, fill_value=0.) 
+ assert_frame_equal(expected, expected2) + + # corner cases + comb = self.frame.combineAdd(self.empty) + assert_frame_equal(comb, self.frame) + + comb = self.empty.combineAdd(self.frame) + assert_frame_equal(comb, self.frame) + + # integer corner case + df1 = DataFrame({'x': [5]}) + df2 = DataFrame({'x': [1]}) + df3 = DataFrame({'x': [6]}) + comb = df1.combineAdd(df2) + assert_frame_equal(comb, df3) + + # mixed type GH2191 + df1 = DataFrame({'A': [1, 2], 'B': [3, 4]}) + df2 = DataFrame({'A': [1, 2], 'C': [5, 6]}) + rs = df1.combineAdd(df2) + xp = DataFrame({'A': [2, 4], 'B': [3, 4.], 'C': [5, 6.]}) + assert_frame_equal(xp, rs) + + # TODO: test integer fill corner? + + def test_combineMult(self): + with tm.assert_produces_warning(FutureWarning): + # trivial + comb = self.frame.combineMult(self.frame) + + assert_frame_equal(comb, self.frame ** 2) + + # corner cases + comb = self.frame.combineMult(self.empty) + assert_frame_equal(comb, self.frame) + + comb = self.empty.combineMult(self.frame) + assert_frame_equal(comb, self.frame) + + def test_combine_generic(self): + df1 = self.frame + df2 = self.frame.ix[:-5, ['A', 'B', 'C']] + + combined = df1.combine(df2, np.add) + combined2 = df2.combine(df1, np.add) + self.assertTrue(combined['D'].isnull().all()) + self.assertTrue(combined2['D'].isnull().all()) + + chunk = combined.ix[:-5, ['A', 'B', 'C']] + chunk2 = combined2.ix[:-5, ['A', 'B', 'C']] + + exp = self.frame.ix[:-5, ['A', 'B', 'C']].reindex_like(chunk) * 2 + assert_frame_equal(chunk, exp) + assert_frame_equal(chunk2, exp) + + def test_inplace_ops_alignment(self): + + # inplace ops / ops alignment + # GH 8511 + + columns = list('abcdefg') + X_orig = DataFrame(np.arange(10 * len(columns)) + .reshape(-1, len(columns)), + columns=columns, index=range(10)) + Z = 100 * X_orig.iloc[:, 1:-1].copy() + block1 = list('bedcf') + subs = list('bcdef') + + # add + X = X_orig.copy() + result1 = (X[block1] + Z).reindex(columns=subs) + + X[block1] += Z + result2 = 
X.reindex(columns=subs) + + X = X_orig.copy() + result3 = (X[block1] + Z[block1]).reindex(columns=subs) + + X[block1] += Z[block1] + result4 = X.reindex(columns=subs) + + assert_frame_equal(result1, result2) + assert_frame_equal(result1, result3) + assert_frame_equal(result1, result4) + + # sub + X = X_orig.copy() + result1 = (X[block1] - Z).reindex(columns=subs) + + X[block1] -= Z + result2 = X.reindex(columns=subs) + + X = X_orig.copy() + result3 = (X[block1] - Z[block1]).reindex(columns=subs) + + X[block1] -= Z[block1] + result4 = X.reindex(columns=subs) + + assert_frame_equal(result1, result2) + assert_frame_equal(result1, result3) + assert_frame_equal(result1, result4) + + def test_inplace_ops_identity(self): + + # GH 5104 + # make sure that we are actually changing the object + s_orig = Series([1, 2, 3]) + df_orig = DataFrame(np.random.randint(0, 5, size=10).reshape(-1, 5)) + + # no dtype change + s = s_orig.copy() + s2 = s + s += 1 + assert_series_equal(s, s2) + assert_series_equal(s_orig + 1, s) + self.assertIs(s, s2) + self.assertIs(s._data, s2._data) + + df = df_orig.copy() + df2 = df + df += 1 + assert_frame_equal(df, df2) + assert_frame_equal(df_orig + 1, df) + self.assertIs(df, df2) + self.assertIs(df._data, df2._data) + + # dtype change + s = s_orig.copy() + s2 = s + s += 1.5 + assert_series_equal(s, s2) + assert_series_equal(s_orig + 1.5, s) + + df = df_orig.copy() + df2 = df + df += 1.5 + assert_frame_equal(df, df2) + assert_frame_equal(df_orig + 1.5, df) + self.assertIs(df, df2) + self.assertIs(df._data, df2._data) + + # mixed dtype + arr = np.random.randint(0, 10, size=5) + df_orig = DataFrame({'A': arr.copy(), 'B': 'foo'}) + df = df_orig.copy() + df2 = df + df['A'] += 1 + expected = DataFrame({'A': arr.copy() + 1, 'B': 'foo'}) + assert_frame_equal(df, expected) + assert_frame_equal(df2, expected) + self.assertIs(df._data, df2._data) + + df = df_orig.copy() + df2 = df + df['A'] += 1.5 + expected = DataFrame({'A': arr.copy() + 1.5, 'B': 'foo'}) + 
assert_frame_equal(df, expected)
+        assert_frame_equal(df2, expected)
+        self.assertIs(df._data, df2._data)
diff --git a/pandas/tests/frame/test_query_eval.py b/pandas/tests/frame/test_query_eval.py
new file mode 100644
index 0000000000000..52594b982a0d0
--- /dev/null
+++ b/pandas/tests/frame/test_query_eval.py
@@ -0,0 +1,1087 @@
+# -*- coding: utf-8 -*-
+
+from __future__ import print_function
+
+import operator
+import nose
+from itertools import product
+
+from pandas.compat import (zip, range, lrange, StringIO)
+from pandas import DataFrame, Series, Index, MultiIndex, date_range
+import pandas as pd
+import numpy as np
+
+from numpy.random import randn
+
+from pandas.util.testing import (assert_series_equal,
+                                 assert_frame_equal,
+                                 assertRaises,
+                                 makeCustomDataframe as mkdf)
+
+import pandas.util.testing as tm
+
+from pandas.tests.frame.common import TestData
+
+
+PARSERS = 'python', 'pandas'
+ENGINES = 'python', 'numexpr'
+
+
+def skip_if_no_pandas_parser(parser):
+    if parser != 'pandas':
+        raise nose.SkipTest("cannot evaluate with parser {0!r}".format(parser))
+
+
+def skip_if_no_ne(engine='numexpr'):
+    if engine == 'numexpr':
+        try:
+            import numexpr as ne  # noqa
+        except ImportError:
+            raise nose.SkipTest("cannot query engine numexpr when numexpr not "
+                                "installed")
+
+
+class TestDataFrameEval(tm.TestCase, TestData):
+
+    _multiprocess_can_split_ = True
+
+    def test_ops(self):
+
+        # test ops and reversed ops in evaluation
+        # GH7198
+
+        # smaller hits python, larger hits numexpr
+        for n in [4, 4000]:
+
+            df = DataFrame(1, index=range(n), columns=list('abcd'))
+            df.iloc[0] = 2
+            m = df.mean()
+
+            for op_str, op, rop in [('+', '__add__', '__radd__'),
+                                    ('-', '__sub__', '__rsub__'),
+                                    ('*', '__mul__', '__rmul__'),
+                                    ('/', '__truediv__', '__rtruediv__')]:
+
+                base = (DataFrame(np.tile(m.values, n)  # noqa
+                                  .reshape(n, -1),
+                                  columns=list('abcd')))
+
+                expected = eval("base{op}df".format(op=op_str))
+
+                # ops as strings
+                result = eval("m{op}df".format(op=op_str))
+                assert_frame_equal(result, expected)
+
+                # these are commutative; compare against the operator symbol,
+                # not the dunder name, so these branches actually run
+                if op_str in ['+', '*']:
+                    result = getattr(df, op)(m)
+                    assert_frame_equal(result, expected)
+
+                # these are not
+                elif op_str in ['-', '/']:
+                    result = getattr(df, rop)(m)
+                    assert_frame_equal(result, expected)
+
+        # GH7192
+        df = DataFrame(dict(A=np.random.randn(25000)))
+        df.iloc[0:5] = np.nan
+        expected = (1 - np.isnan(df.iloc[0:25]))
+        result = (1 - np.isnan(df)).iloc[0:25]
+        assert_frame_equal(result, expected)
+
+
+class TestDataFrameQueryWithMultiIndex(tm.TestCase):
+
+    _multiprocess_can_split_ = True
+
+    def check_query_with_named_multiindex(self, parser, engine):
+        tm.skip_if_no_ne(engine)
+        a = tm.choice(['red', 'green'], size=10)
+        b = tm.choice(['eggs', 'ham'], size=10)
+        index = MultiIndex.from_arrays([a, b], names=['color', 'food'])
+        df = DataFrame(randn(10, 2), index=index)
+        ind = Series(df.index.get_level_values('color').values, index=index,
+                     name='color')
+
+        # equality
+        res1 = df.query('color == "red"', parser=parser, engine=engine)
+        res2 = df.query('"red" == color', parser=parser, engine=engine)
+        exp = df[ind == 'red']
+        assert_frame_equal(res1, exp)
+        assert_frame_equal(res2, exp)
+
+        # inequality
+        res1 = df.query('color != "red"', parser=parser, engine=engine)
+        res2 = df.query('"red" != color', parser=parser, engine=engine)
+        exp = df[ind != 'red']
+        assert_frame_equal(res1, exp)
+        assert_frame_equal(res2, exp)
+
+        # list equality (really just set membership)
+        res1 = df.query('color == ["red"]', parser=parser, engine=engine)
+        res2 = df.query('["red"] == color', parser=parser, engine=engine)
+        exp = df[ind.isin(['red'])]
+        assert_frame_equal(res1, exp)
+        assert_frame_equal(res2, exp)
+
+        res1 = df.query('color != ["red"]', parser=parser, engine=engine)
+        res2 = df.query('["red"] != color', parser=parser, engine=engine)
+        exp = df[~ind.isin(['red'])]
+        assert_frame_equal(res1, exp)
+        assert_frame_equal(res2, exp)
+
+        # in/not in ops
+        res1 = df.query('["red"] in color',
parser=parser, engine=engine) + res2 = df.query('"red" in color', parser=parser, engine=engine) + exp = df[ind.isin(['red'])] + assert_frame_equal(res1, exp) + assert_frame_equal(res2, exp) + + res1 = df.query('["red"] not in color', parser=parser, engine=engine) + res2 = df.query('"red" not in color', parser=parser, engine=engine) + exp = df[~ind.isin(['red'])] + assert_frame_equal(res1, exp) + assert_frame_equal(res2, exp) + + def test_query_with_named_multiindex(self): + for parser, engine in product(['pandas'], ENGINES): + yield self.check_query_with_named_multiindex, parser, engine + + def check_query_with_unnamed_multiindex(self, parser, engine): + tm.skip_if_no_ne(engine) + a = tm.choice(['red', 'green'], size=10) + b = tm.choice(['eggs', 'ham'], size=10) + index = MultiIndex.from_arrays([a, b]) + df = DataFrame(randn(10, 2), index=index) + ind = Series(df.index.get_level_values(0).values, index=index) + + res1 = df.query('ilevel_0 == "red"', parser=parser, engine=engine) + res2 = df.query('"red" == ilevel_0', parser=parser, engine=engine) + exp = df[ind == 'red'] + assert_frame_equal(res1, exp) + assert_frame_equal(res2, exp) + + # inequality + res1 = df.query('ilevel_0 != "red"', parser=parser, engine=engine) + res2 = df.query('"red" != ilevel_0', parser=parser, engine=engine) + exp = df[ind != 'red'] + assert_frame_equal(res1, exp) + assert_frame_equal(res2, exp) + + # list equality (really just set membership) + res1 = df.query('ilevel_0 == ["red"]', parser=parser, engine=engine) + res2 = df.query('["red"] == ilevel_0', parser=parser, engine=engine) + exp = df[ind.isin(['red'])] + assert_frame_equal(res1, exp) + assert_frame_equal(res2, exp) + + res1 = df.query('ilevel_0 != ["red"]', parser=parser, engine=engine) + res2 = df.query('["red"] != ilevel_0', parser=parser, engine=engine) + exp = df[~ind.isin(['red'])] + assert_frame_equal(res1, exp) + assert_frame_equal(res2, exp) + + # in/not in ops + res1 = df.query('["red"] in ilevel_0', parser=parser, 
engine=engine) + res2 = df.query('"red" in ilevel_0', parser=parser, engine=engine) + exp = df[ind.isin(['red'])] + assert_frame_equal(res1, exp) + assert_frame_equal(res2, exp) + + res1 = df.query('["red"] not in ilevel_0', parser=parser, + engine=engine) + res2 = df.query('"red" not in ilevel_0', parser=parser, engine=engine) + exp = df[~ind.isin(['red'])] + assert_frame_equal(res1, exp) + assert_frame_equal(res2, exp) + + # ## LEVEL 1 + ind = Series(df.index.get_level_values(1).values, index=index) + res1 = df.query('ilevel_1 == "eggs"', parser=parser, engine=engine) + res2 = df.query('"eggs" == ilevel_1', parser=parser, engine=engine) + exp = df[ind == 'eggs'] + assert_frame_equal(res1, exp) + assert_frame_equal(res2, exp) + + # inequality + res1 = df.query('ilevel_1 != "eggs"', parser=parser, engine=engine) + res2 = df.query('"eggs" != ilevel_1', parser=parser, engine=engine) + exp = df[ind != 'eggs'] + assert_frame_equal(res1, exp) + assert_frame_equal(res2, exp) + + # list equality (really just set membership) + res1 = df.query('ilevel_1 == ["eggs"]', parser=parser, engine=engine) + res2 = df.query('["eggs"] == ilevel_1', parser=parser, engine=engine) + exp = df[ind.isin(['eggs'])] + assert_frame_equal(res1, exp) + assert_frame_equal(res2, exp) + + res1 = df.query('ilevel_1 != ["eggs"]', parser=parser, engine=engine) + res2 = df.query('["eggs"] != ilevel_1', parser=parser, engine=engine) + exp = df[~ind.isin(['eggs'])] + assert_frame_equal(res1, exp) + assert_frame_equal(res2, exp) + + # in/not in ops + res1 = df.query('["eggs"] in ilevel_1', parser=parser, engine=engine) + res2 = df.query('"eggs" in ilevel_1', parser=parser, engine=engine) + exp = df[ind.isin(['eggs'])] + assert_frame_equal(res1, exp) + assert_frame_equal(res2, exp) + + res1 = df.query('["eggs"] not in ilevel_1', parser=parser, + engine=engine) + res2 = df.query('"eggs" not in ilevel_1', parser=parser, engine=engine) + exp = df[~ind.isin(['eggs'])] + assert_frame_equal(res1, exp) + 
assert_frame_equal(res2, exp) + + def test_query_with_unnamed_multiindex(self): + for parser, engine in product(['pandas'], ENGINES): + yield self.check_query_with_unnamed_multiindex, parser, engine + + def check_query_with_partially_named_multiindex(self, parser, engine): + tm.skip_if_no_ne(engine) + a = tm.choice(['red', 'green'], size=10) + b = np.arange(10) + index = MultiIndex.from_arrays([a, b]) + index.names = [None, 'rating'] + df = DataFrame(randn(10, 2), index=index) + res = df.query('rating == 1', parser=parser, engine=engine) + ind = Series(df.index.get_level_values('rating').values, index=index, + name='rating') + exp = df[ind == 1] + assert_frame_equal(res, exp) + + res = df.query('rating != 1', parser=parser, engine=engine) + ind = Series(df.index.get_level_values('rating').values, index=index, + name='rating') + exp = df[ind != 1] + assert_frame_equal(res, exp) + + res = df.query('ilevel_0 == "red"', parser=parser, engine=engine) + ind = Series(df.index.get_level_values(0).values, index=index) + exp = df[ind == "red"] + assert_frame_equal(res, exp) + + res = df.query('ilevel_0 != "red"', parser=parser, engine=engine) + ind = Series(df.index.get_level_values(0).values, index=index) + exp = df[ind != "red"] + assert_frame_equal(res, exp) + + def test_query_with_partially_named_multiindex(self): + for parser, engine in product(['pandas'], ENGINES): + yield (self.check_query_with_partially_named_multiindex, + parser, engine) + + def test_query_multiindex_get_index_resolvers(self): + for parser, engine in product(['pandas'], ENGINES): + yield (self.check_query_multiindex_get_index_resolvers, parser, + engine) + + def check_query_multiindex_get_index_resolvers(self, parser, engine): + df = mkdf(10, 3, r_idx_nlevels=2, r_idx_names=['spam', 'eggs']) + resolvers = df._get_index_resolvers() + + def to_series(mi, level): + level_values = mi.get_level_values(level) + s = level_values.to_series() + s.index = mi + return s + + col_series = df.columns.to_series() 
+ expected = {'index': df.index, + 'columns': col_series, + 'spam': to_series(df.index, 'spam'), + 'eggs': to_series(df.index, 'eggs'), + 'C0': col_series} + for k, v in resolvers.items(): + if isinstance(v, Index): + assert v.is_(expected[k]) + elif isinstance(v, Series): + assert_series_equal(v, expected[k]) + else: + raise AssertionError("object must be a Series or Index") + + def test_raise_on_panel_with_multiindex(self): + for parser, engine in product(PARSERS, ENGINES): + yield self.check_raise_on_panel_with_multiindex, parser, engine + + def check_raise_on_panel_with_multiindex(self, parser, engine): + tm.skip_if_no_ne() + p = tm.makePanel(7) + p.items = tm.makeCustomIndex(len(p.items), nlevels=2) + with tm.assertRaises(NotImplementedError): + pd.eval('p + 1', parser=parser, engine=engine) + + def test_raise_on_panel4d_with_multiindex(self): + for parser, engine in product(PARSERS, ENGINES): + yield self.check_raise_on_panel4d_with_multiindex, parser, engine + + def check_raise_on_panel4d_with_multiindex(self, parser, engine): + tm.skip_if_no_ne() + p4d = tm.makePanel4D(7) + p4d.items = tm.makeCustomIndex(len(p4d.items), nlevels=2) + with tm.assertRaises(NotImplementedError): + pd.eval('p4d + 1', parser=parser, engine=engine) + + +class TestDataFrameQueryNumExprPandas(tm.TestCase): + + @classmethod + def setUpClass(cls): + super(TestDataFrameQueryNumExprPandas, cls).setUpClass() + cls.engine = 'numexpr' + cls.parser = 'pandas' + tm.skip_if_no_ne(cls.engine) + + @classmethod + def tearDownClass(cls): + super(TestDataFrameQueryNumExprPandas, cls).tearDownClass() + del cls.engine, cls.parser + + def test_date_query_with_attribute_access(self): + engine, parser = self.engine, self.parser + skip_if_no_pandas_parser(parser) + df = DataFrame(randn(5, 3)) + df['dates1'] = date_range('1/1/2012', periods=5) + df['dates2'] = date_range('1/1/2013', periods=5) + df['dates3'] = date_range('1/1/2014', periods=5) + res = df.query('@df.dates1 < 20130101 < @df.dates3', 
engine=engine, + parser=parser) + expec = df[(df.dates1 < '20130101') & ('20130101' < df.dates3)] + assert_frame_equal(res, expec) + + def test_date_query_no_attribute_access(self): + engine, parser = self.engine, self.parser + df = DataFrame(randn(5, 3)) + df['dates1'] = date_range('1/1/2012', periods=5) + df['dates2'] = date_range('1/1/2013', periods=5) + df['dates3'] = date_range('1/1/2014', periods=5) + res = df.query('dates1 < 20130101 < dates3', engine=engine, + parser=parser) + expec = df[(df.dates1 < '20130101') & ('20130101' < df.dates3)] + assert_frame_equal(res, expec) + + def test_date_query_with_NaT(self): + engine, parser = self.engine, self.parser + n = 10 + df = DataFrame(randn(n, 3)) + df['dates1'] = date_range('1/1/2012', periods=n) + df['dates2'] = date_range('1/1/2013', periods=n) + df['dates3'] = date_range('1/1/2014', periods=n) + df.loc[np.random.rand(n) > 0.5, 'dates1'] = pd.NaT + df.loc[np.random.rand(n) > 0.5, 'dates3'] = pd.NaT + res = df.query('dates1 < 20130101 < dates3', engine=engine, + parser=parser) + expec = df[(df.dates1 < '20130101') & ('20130101' < df.dates3)] + assert_frame_equal(res, expec) + + def test_date_index_query(self): + engine, parser = self.engine, self.parser + n = 10 + df = DataFrame(randn(n, 3)) + df['dates1'] = date_range('1/1/2012', periods=n) + df['dates3'] = date_range('1/1/2014', periods=n) + df.set_index('dates1', inplace=True, drop=True) + res = df.query('index < 20130101 < dates3', engine=engine, + parser=parser) + expec = df[(df.index < '20130101') & ('20130101' < df.dates3)] + assert_frame_equal(res, expec) + + def test_date_index_query_with_NaT(self): + engine, parser = self.engine, self.parser + n = 10 + df = DataFrame(randn(n, 3)) + df['dates1'] = date_range('1/1/2012', periods=n) + df['dates3'] = date_range('1/1/2014', periods=n) + df.iloc[0, 0] = pd.NaT + df.set_index('dates1', inplace=True, drop=True) + res = df.query('index < 20130101 < dates3', engine=engine, + parser=parser) + expec = 
df[(df.index < '20130101') & ('20130101' < df.dates3)] + assert_frame_equal(res, expec) + + def test_date_index_query_with_NaT_duplicates(self): + engine, parser = self.engine, self.parser + n = 10 + d = {} + d['dates1'] = date_range('1/1/2012', periods=n) + d['dates3'] = date_range('1/1/2014', periods=n) + df = DataFrame(d) + df.loc[np.random.rand(n) > 0.5, 'dates1'] = pd.NaT + df.set_index('dates1', inplace=True, drop=True) + res = df.query('index < 20130101 < dates3', engine=engine, + parser=parser) + expec = df[(df.index.to_series() < '20130101') & + ('20130101' < df.dates3)] + assert_frame_equal(res, expec) + + def test_date_query_with_non_date(self): + engine, parser = self.engine, self.parser + + n = 10 + df = DataFrame({'dates': date_range('1/1/2012', periods=n), + 'nondate': np.arange(n)}) + + ops = '==', '!=', '<', '>', '<=', '>=' + + for op in ops: + with tm.assertRaises(TypeError): + df.query('dates %s nondate' % op, parser=parser, engine=engine) + + def test_query_syntax_error(self): + engine, parser = self.engine, self.parser + df = DataFrame({"i": lrange(10), "+": lrange(3, 13), + "r": lrange(4, 14)}) + with tm.assertRaises(SyntaxError): + df.query('i - +', engine=engine, parser=parser) + + def test_query_scope(self): + from pandas.computation.ops import UndefinedVariableError + engine, parser = self.engine, self.parser + skip_if_no_pandas_parser(parser) + + df = DataFrame(np.random.randn(20, 2), columns=list('ab')) + + a, b = 1, 2 # noqa + res = df.query('a > b', engine=engine, parser=parser) + expected = df[df.a > df.b] + assert_frame_equal(res, expected) + + res = df.query('@a > b', engine=engine, parser=parser) + expected = df[a > df.b] + assert_frame_equal(res, expected) + + # no local variable c + with tm.assertRaises(UndefinedVariableError): + df.query('@a > b > @c', engine=engine, parser=parser) + + # no column named 'c' + with tm.assertRaises(UndefinedVariableError): + df.query('@a > b > c', engine=engine, parser=parser) + + def 
test_query_doesnt_pickup_local(self): + from pandas.computation.ops import UndefinedVariableError + + engine, parser = self.engine, self.parser + n = m = 10 + df = DataFrame(np.random.randint(m, size=(n, 3)), columns=list('abc')) + + # we don't pick up the local 'sin' + with tm.assertRaises(UndefinedVariableError): + df.query('sin > 5', engine=engine, parser=parser) + + def test_query_builtin(self): + from pandas.computation.engines import NumExprClobberingError + engine, parser = self.engine, self.parser + + n = m = 10 + df = DataFrame(np.random.randint(m, size=(n, 3)), columns=list('abc')) + + df.index.name = 'sin' + with tm.assertRaisesRegexp(NumExprClobberingError, + 'Variables in expression.+'): + df.query('sin > 5', engine=engine, parser=parser) + + def test_query(self): + engine, parser = self.engine, self.parser + df = DataFrame(np.random.randn(10, 3), columns=['a', 'b', 'c']) + + assert_frame_equal(df.query('a < b', engine=engine, parser=parser), + df[df.a < df.b]) + assert_frame_equal(df.query('a + b > b * c', engine=engine, + parser=parser), + df[df.a + df.b > df.b * df.c]) + + def test_query_index_with_name(self): + engine, parser = self.engine, self.parser + df = DataFrame(np.random.randint(10, size=(10, 3)), + index=Index(range(10), name='blob'), + columns=['a', 'b', 'c']) + res = df.query('(blob < 5) & (a < b)', engine=engine, parser=parser) + expec = df[(df.index < 5) & (df.a < df.b)] + assert_frame_equal(res, expec) + + res = df.query('blob < b', engine=engine, parser=parser) + expec = df[df.index < df.b] + + assert_frame_equal(res, expec) + + def test_query_index_without_name(self): + engine, parser = self.engine, self.parser + df = DataFrame(np.random.randint(10, size=(10, 3)), + index=range(10), columns=['a', 'b', 'c']) + + # "index" should refer to the index + res = df.query('index < b', engine=engine, parser=parser) + expec = df[df.index < df.b] + assert_frame_equal(res, expec) + + # test against a scalar + res = df.query('index < 5', 
engine=engine, parser=parser) + expec = df[df.index < 5] + assert_frame_equal(res, expec) + + def test_nested_scope(self): + engine = self.engine + parser = self.parser + + skip_if_no_pandas_parser(parser) + + df = DataFrame(np.random.randn(5, 3)) + df2 = DataFrame(np.random.randn(5, 3)) + expected = df[(df > 0) & (df2 > 0)] + + result = df.query('(@df > 0) & (@df2 > 0)', engine=engine, + parser=parser) + assert_frame_equal(result, expected) + + result = pd.eval('df[df > 0 and df2 > 0]', engine=engine, + parser=parser) + assert_frame_equal(result, expected) + + result = pd.eval('df[df > 0 and df2 > 0 and df[df > 0] > 0]', + engine=engine, parser=parser) + expected = df[(df > 0) & (df2 > 0) & (df[df > 0] > 0)] + assert_frame_equal(result, expected) + + result = pd.eval('df[(df>0) & (df2>0)]', engine=engine, parser=parser) + expected = df.query('(@df>0) & (@df2>0)', engine=engine, parser=parser) + assert_frame_equal(result, expected) + + def test_nested_raises_on_local_self_reference(self): + from pandas.computation.ops import UndefinedVariableError + + df = DataFrame(np.random.randn(5, 3)) + + # can't reference ourself b/c we're a local so @ is necessary + with tm.assertRaises(UndefinedVariableError): + df.query('df > 0', engine=self.engine, parser=self.parser) + + def test_local_syntax(self): + skip_if_no_pandas_parser(self.parser) + + engine, parser = self.engine, self.parser + df = DataFrame(randn(100, 10), columns=list('abcdefghij')) + b = 1 + expect = df[df.a < b] + result = df.query('a < @b', engine=engine, parser=parser) + assert_frame_equal(result, expect) + + expect = df[df.a < df.b] + result = df.query('a < b', engine=engine, parser=parser) + assert_frame_equal(result, expect) + + def test_chained_cmp_and_in(self): + skip_if_no_pandas_parser(self.parser) + engine, parser = self.engine, self.parser + cols = list('abc') + df = DataFrame(randn(100, len(cols)), columns=cols) + res = df.query('a < b < c and a not in b not in c', engine=engine, + parser=parser) 
+ ind = ((df.a < df.b) & (df.b < df.c) & ~df.b.isin(df.a) & + ~df.c.isin(df.b)) + expec = df[ind] + assert_frame_equal(res, expec) + + def test_local_variable_with_in(self): + engine, parser = self.engine, self.parser + skip_if_no_pandas_parser(parser) + a = Series(np.random.randint(3, size=15), name='a') + b = Series(np.random.randint(10, size=15), name='b') + df = DataFrame({'a': a, 'b': b}) + + expected = df.loc[(df.b - 1).isin(a)] + result = df.query('b - 1 in a', engine=engine, parser=parser) + assert_frame_equal(expected, result) + + b = Series(np.random.randint(10, size=15), name='b') + expected = df.loc[(b - 1).isin(a)] + result = df.query('@b - 1 in a', engine=engine, parser=parser) + assert_frame_equal(expected, result) + + def test_at_inside_string(self): + engine, parser = self.engine, self.parser + skip_if_no_pandas_parser(parser) + c = 1 # noqa + df = DataFrame({'a': ['a', 'a', 'b', 'b', '@c', '@c']}) + result = df.query('a == "@c"', engine=engine, parser=parser) + expected = df[df.a == "@c"] + assert_frame_equal(result, expected) + + def test_query_undefined_local(self): + from pandas.computation.ops import UndefinedVariableError + engine, parser = self.engine, self.parser + skip_if_no_pandas_parser(parser) + df = DataFrame(np.random.rand(10, 2), columns=list('ab')) + with tm.assertRaisesRegexp(UndefinedVariableError, + "local variable 'c' is not defined"): + df.query('a == @c', engine=engine, parser=parser) + + def test_index_resolvers_come_after_columns_with_the_same_name(self): + n = 1 # noqa + a = np.r_[20:101:20] + + df = DataFrame({'index': a, 'b': np.random.randn(a.size)}) + df.index.name = 'index' + result = df.query('index > 5', engine=self.engine, parser=self.parser) + expected = df[df['index'] > 5] + assert_frame_equal(result, expected) + + df = DataFrame({'index': a, + 'b': np.random.randn(a.size)}) + result = df.query('ilevel_0 > 5', engine=self.engine, + parser=self.parser) + expected = df.loc[df.index[df.index > 5]] + 
assert_frame_equal(result, expected) + + df = DataFrame({'a': a, 'b': np.random.randn(a.size)}) + df.index.name = 'a' + result = df.query('a > 5', engine=self.engine, parser=self.parser) + expected = df[df.a > 5] + assert_frame_equal(result, expected) + + result = df.query('index > 5', engine=self.engine, parser=self.parser) + expected = df.loc[df.index[df.index > 5]] + assert_frame_equal(result, expected) + + def test_inf(self): + n = 10 + df = DataFrame({'a': np.random.rand(n), 'b': np.random.rand(n)}) + # assign to the existing column 'a' (df.loc[::2, 0] would silently + # create a new column named 0, leaving column 'a' without any inf + # values and making the test pass vacuously) + df.loc[::2, 'a'] = np.inf + ops = '==', '!=' + d = dict(zip(ops, (operator.eq, operator.ne))) + for op, f in d.items(): + q = 'a %s inf' % op + expected = df[f(df.a, np.inf)] + result = df.query(q, engine=self.engine, parser=self.parser) + assert_frame_equal(result, expected) + + +class TestDataFrameQueryNumExprPython(TestDataFrameQueryNumExprPandas): + + @classmethod + def setUpClass(cls): + super(TestDataFrameQueryNumExprPython, cls).setUpClass() + cls.engine = 'numexpr' + cls.parser = 'python' + tm.skip_if_no_ne(cls.engine) + cls.frame = TestData().frame + + def test_date_query_no_attribute_access(self): + engine, parser = self.engine, self.parser + df = DataFrame(randn(5, 3)) + df['dates1'] = date_range('1/1/2012', periods=5) + df['dates2'] = date_range('1/1/2013', periods=5) + df['dates3'] = date_range('1/1/2014', periods=5) + res = df.query('(dates1 < 20130101) & (20130101 < dates3)', + engine=engine, parser=parser) + expec = df[(df.dates1 < '20130101') & ('20130101' < df.dates3)] + assert_frame_equal(res, expec) + + def test_date_query_with_NaT(self): + engine, parser = self.engine, self.parser + n = 10 + df = DataFrame(randn(n, 3)) + df['dates1'] = date_range('1/1/2012', periods=n) + df['dates2'] = date_range('1/1/2013', periods=n) + df['dates3'] = date_range('1/1/2014', periods=n) + df.loc[np.random.rand(n) > 0.5, 'dates1'] = pd.NaT + df.loc[np.random.rand(n) > 0.5, 'dates3'] = pd.NaT + res = df.query('(dates1 < 20130101) & (20130101 < dates3)', + 
engine=engine, parser=parser) + expec = df[(df.dates1 < '20130101') & ('20130101' < df.dates3)] + assert_frame_equal(res, expec) + + def test_date_index_query(self): + engine, parser = self.engine, self.parser + n = 10 + df = DataFrame(randn(n, 3)) + df['dates1'] = date_range('1/1/2012', periods=n) + df['dates3'] = date_range('1/1/2014', periods=n) + df.set_index('dates1', inplace=True, drop=True) + res = df.query('(index < 20130101) & (20130101 < dates3)', + engine=engine, parser=parser) + expec = df[(df.index < '20130101') & ('20130101' < df.dates3)] + assert_frame_equal(res, expec) + + def test_date_index_query_with_NaT(self): + engine, parser = self.engine, self.parser + n = 10 + df = DataFrame(randn(n, 3)) + df['dates1'] = date_range('1/1/2012', periods=n) + df['dates3'] = date_range('1/1/2014', periods=n) + df.iloc[0, 0] = pd.NaT + df.set_index('dates1', inplace=True, drop=True) + res = df.query('(index < 20130101) & (20130101 < dates3)', + engine=engine, parser=parser) + expec = df[(df.index < '20130101') & ('20130101' < df.dates3)] + assert_frame_equal(res, expec) + + def test_date_index_query_with_NaT_duplicates(self): + engine, parser = self.engine, self.parser + n = 10 + df = DataFrame(randn(n, 3)) + df['dates1'] = date_range('1/1/2012', periods=n) + df['dates3'] = date_range('1/1/2014', periods=n) + df.loc[np.random.rand(n) > 0.5, 'dates1'] = pd.NaT + df.set_index('dates1', inplace=True, drop=True) + with tm.assertRaises(NotImplementedError): + df.query('index < 20130101 < dates3', engine=engine, parser=parser) + + def test_nested_scope(self): + from pandas.computation.ops import UndefinedVariableError + engine = self.engine + parser = self.parser + # smoke test + x = 1 # noqa + result = pd.eval('x + 1', engine=engine, parser=parser) + self.assertEqual(result, 2) + + df = DataFrame(np.random.randn(5, 3)) + df2 = DataFrame(np.random.randn(5, 3)) + + # don't have the pandas parser + with tm.assertRaises(SyntaxError): + df.query('(@df>0) & (@df2>0)', 
engine=engine, parser=parser) + + with tm.assertRaises(UndefinedVariableError): + df.query('(df>0) & (df2>0)', engine=engine, parser=parser) + + expected = df[(df > 0) & (df2 > 0)] + result = pd.eval('df[(df > 0) & (df2 > 0)]', engine=engine, + parser=parser) + assert_frame_equal(expected, result) + + expected = df[(df > 0) & (df2 > 0) & (df[df > 0] > 0)] + result = pd.eval('df[(df > 0) & (df2 > 0) & (df[df > 0] > 0)]', + engine=engine, parser=parser) + assert_frame_equal(expected, result) + + +class TestDataFrameQueryPythonPandas(TestDataFrameQueryNumExprPandas): + + @classmethod + def setUpClass(cls): + super(TestDataFrameQueryPythonPandas, cls).setUpClass() + cls.engine = 'python' + cls.parser = 'pandas' + cls.frame = TestData().frame + + def test_query_builtin(self): + engine, parser = self.engine, self.parser + + n = m = 10 + df = DataFrame(np.random.randint(m, size=(n, 3)), columns=list('abc')) + + df.index.name = 'sin' + expected = df[df.index > 5] + result = df.query('sin > 5', engine=engine, parser=parser) + assert_frame_equal(expected, result) + + +class TestDataFrameQueryPythonPython(TestDataFrameQueryNumExprPython): + + @classmethod + def setUpClass(cls): + super(TestDataFrameQueryPythonPython, cls).setUpClass() + cls.engine = cls.parser = 'python' + cls.frame = TestData().frame + + def test_query_builtin(self): + engine, parser = self.engine, self.parser + + n = m = 10 + df = DataFrame(np.random.randint(m, size=(n, 3)), columns=list('abc')) + + df.index.name = 'sin' + expected = df[df.index > 5] + result = df.query('sin > 5', engine=engine, parser=parser) + assert_frame_equal(expected, result) + + +class TestDataFrameQueryStrings(tm.TestCase): + + def check_str_query_method(self, parser, engine): + tm.skip_if_no_ne(engine) + df = DataFrame(randn(10, 1), columns=['b']) + df['strings'] = Series(list('aabbccddee')) + expect = df[df.strings == 'a'] + + if parser != 'pandas': + col = 'strings' + lst = '"a"' + + lhs = [col] * 2 + [lst] * 2 + rhs = lhs[::-1] 
+ + eq, ne = '==', '!=' + ops = 2 * ([eq] + [ne]) + + for lhs, op, rhs in zip(lhs, ops, rhs): + ex = '{lhs} {op} {rhs}'.format(lhs=lhs, op=op, rhs=rhs) + assertRaises(NotImplementedError, df.query, ex, engine=engine, + parser=parser, local_dict={'strings': df.strings}) + else: + res = df.query('"a" == strings', engine=engine, parser=parser) + assert_frame_equal(res, expect) + + res = df.query('strings == "a"', engine=engine, parser=parser) + assert_frame_equal(res, expect) + assert_frame_equal(res, df[df.strings.isin(['a'])]) + + expect = df[df.strings != 'a'] + res = df.query('strings != "a"', engine=engine, parser=parser) + assert_frame_equal(res, expect) + + res = df.query('"a" != strings', engine=engine, parser=parser) + assert_frame_equal(res, expect) + assert_frame_equal(res, df[~df.strings.isin(['a'])]) + + def test_str_query_method(self): + for parser, engine in product(PARSERS, ENGINES): + yield self.check_str_query_method, parser, engine + + def test_str_list_query_method(self): + for parser, engine in product(PARSERS, ENGINES): + yield self.check_str_list_query_method, parser, engine + + def check_str_list_query_method(self, parser, engine): + tm.skip_if_no_ne(engine) + df = DataFrame(randn(10, 1), columns=['b']) + df['strings'] = Series(list('aabbccddee')) + expect = df[df.strings.isin(['a', 'b'])] + + if parser != 'pandas': + col = 'strings' + lst = '["a", "b"]' + + lhs = [col] * 2 + [lst] * 2 + rhs = lhs[::-1] + + eq, ne = '==', '!=' + ops = 2 * ([eq] + [ne]) + + for lhs, op, rhs in zip(lhs, ops, rhs): + ex = '{lhs} {op} {rhs}'.format(lhs=lhs, op=op, rhs=rhs) + with tm.assertRaises(NotImplementedError): + df.query(ex, engine=engine, parser=parser) + else: + res = df.query('strings == ["a", "b"]', engine=engine, + parser=parser) + assert_frame_equal(res, expect) + + res = df.query('["a", "b"] == strings', engine=engine, + parser=parser) + assert_frame_equal(res, expect) + + expect = df[~df.strings.isin(['a', 'b'])] + + res = df.query('strings != ["a", 
"b"]', engine=engine, + parser=parser) + assert_frame_equal(res, expect) + + res = df.query('["a", "b"] != strings', engine=engine, + parser=parser) + assert_frame_equal(res, expect) + + def check_query_with_string_columns(self, parser, engine): + tm.skip_if_no_ne(engine) + df = DataFrame({'a': list('aaaabbbbcccc'), + 'b': list('aabbccddeeff'), + 'c': np.random.randint(5, size=12), + 'd': np.random.randint(9, size=12)}) + if parser == 'pandas': + res = df.query('a in b', parser=parser, engine=engine) + expec = df[df.a.isin(df.b)] + assert_frame_equal(res, expec) + + res = df.query('a in b and c < d', parser=parser, engine=engine) + expec = df[df.a.isin(df.b) & (df.c < df.d)] + assert_frame_equal(res, expec) + else: + with assertRaises(NotImplementedError): + df.query('a in b', parser=parser, engine=engine) + + with assertRaises(NotImplementedError): + df.query('a in b and c < d', parser=parser, engine=engine) + + def test_query_with_string_columns(self): + for parser, engine in product(PARSERS, ENGINES): + yield self.check_query_with_string_columns, parser, engine + + def check_object_array_eq_ne(self, parser, engine): + tm.skip_if_no_ne(engine) + df = DataFrame({'a': list('aaaabbbbcccc'), + 'b': list('aabbccddeeff'), + 'c': np.random.randint(5, size=12), + 'd': np.random.randint(9, size=12)}) + res = df.query('a == b', parser=parser, engine=engine) + exp = df[df.a == df.b] + assert_frame_equal(res, exp) + + res = df.query('a != b', parser=parser, engine=engine) + exp = df[df.a != df.b] + assert_frame_equal(res, exp) + + def test_object_array_eq_ne(self): + for parser, engine in product(PARSERS, ENGINES): + yield self.check_object_array_eq_ne, parser, engine + + def check_query_with_nested_strings(self, parser, engine): + tm.skip_if_no_ne(engine) + skip_if_no_pandas_parser(parser) + raw = """id event timestamp + 1 "page 1 load" 1/1/2014 0:00:01 + 1 "page 1 exit" 1/1/2014 0:00:31 + 2 "page 2 load" 1/1/2014 0:01:01 + 2 "page 2 exit" 1/1/2014 0:01:31 + 3 "page 3 load" 
1/1/2014 0:02:01 + 3 "page 3 exit" 1/1/2014 0:02:31 + 4 "page 1 load" 2/1/2014 1:00:01 + 4 "page 1 exit" 2/1/2014 1:00:31 + 5 "page 2 load" 2/1/2014 1:01:01 + 5 "page 2 exit" 2/1/2014 1:01:31 + 6 "page 3 load" 2/1/2014 1:02:01 + 6 "page 3 exit" 2/1/2014 1:02:31 + """ + df = pd.read_csv(StringIO(raw), sep=r'\s{2,}', engine='python', + parse_dates=['timestamp']) + expected = df[df.event == '"page 1 load"'] + res = df.query("""'"page 1 load"' in event""", parser=parser, + engine=engine) + assert_frame_equal(expected, res) + + def test_query_with_nested_string(self): + for parser, engine in product(PARSERS, ENGINES): + yield self.check_query_with_nested_strings, parser, engine + + def check_query_with_nested_special_character(self, parser, engine): + skip_if_no_pandas_parser(parser) + tm.skip_if_no_ne(engine) + df = DataFrame({'a': ['a', 'b', 'test & test'], + 'b': [1, 2, 3]}) + res = df.query('a == "test & test"', parser=parser, engine=engine) + expec = df[df.a == 'test & test'] + assert_frame_equal(res, expec) + + def test_query_with_nested_special_character(self): + for parser, engine in product(PARSERS, ENGINES): + yield (self.check_query_with_nested_special_character, + parser, engine) + + def check_query_lex_compare_strings(self, parser, engine): + tm.skip_if_no_ne(engine=engine) + import operator as opr + + a = Series(tm.choice(list('abcde'), 20)) + b = Series(np.arange(a.size)) + df = DataFrame({'X': a, 'Y': b}) + + ops = {'<': opr.lt, '>': opr.gt, '<=': opr.le, '>=': opr.ge} + + for op, func in ops.items(): + res = df.query('X %s "d"' % op, engine=engine, parser=parser) + expected = df[func(df.X, 'd')] + assert_frame_equal(res, expected) + + def test_query_lex_compare_strings(self): + for parser, engine in product(PARSERS, ENGINES): + yield self.check_query_lex_compare_strings, parser, engine + + def check_query_single_element_booleans(self, parser, engine): + tm.skip_if_no_ne(engine) + columns = 'bid', 'bidsize', 'ask', 'asksize' + data = np.random.randint(2, 
size=(1, len(columns))).astype(bool) + df = DataFrame(data, columns=columns) + res = df.query('bid & ask', engine=engine, parser=parser) + expected = df[df.bid & df.ask] + assert_frame_equal(res, expected) + + def test_query_single_element_booleans(self): + for parser, engine in product(PARSERS, ENGINES): + yield self.check_query_single_element_booleans, parser, engine + + def check_query_string_scalar_variable(self, parser, engine): + tm.skip_if_no_ne(engine) + df = pd.DataFrame({'Symbol': ['BUD US', 'BUD US', 'IBM US', 'IBM US'], + 'Price': [109.70, 109.72, 183.30, 183.35]}) + e = df[df.Symbol == 'BUD US'] + symb = 'BUD US' # noqa + r = df.query('Symbol == @symb', parser=parser, engine=engine) + assert_frame_equal(e, r) + + def test_query_string_scalar_variable(self): + for parser, engine in product(['pandas'], ENGINES): + yield self.check_query_string_scalar_variable, parser, engine + + +class TestDataFrameEvalNumExprPandas(tm.TestCase): + + @classmethod + def setUpClass(cls): + super(TestDataFrameEvalNumExprPandas, cls).setUpClass() + cls.engine = 'numexpr' + cls.parser = 'pandas' + tm.skip_if_no_ne() + + def setUp(self): + self.frame = DataFrame(randn(10, 3), columns=list('abc')) + + def tearDown(self): + del self.frame + + def test_simple_expr(self): + res = self.frame.eval('a + b', engine=self.engine, parser=self.parser) + expect = self.frame.a + self.frame.b + assert_series_equal(res, expect) + + def test_bool_arith_expr(self): + res = self.frame.eval('a[a < 1] + b', engine=self.engine, + parser=self.parser) + expect = self.frame.a[self.frame.a < 1] + self.frame.b + assert_series_equal(res, expect) + + def test_invalid_type_for_operator_raises(self): + df = DataFrame({'a': [1, 2], 'b': ['c', 'd']}) + ops = '+', '-', '*', '/' + for op in ops: + with tm.assertRaisesRegexp(TypeError, + "unsupported operand type\(s\) for " + ".+: '.+' and '.+'"): + df.eval('a {0} b'.format(op), engine=self.engine, + parser=self.parser) + + +class 
TestDataFrameEvalNumExprPython(TestDataFrameEvalNumExprPandas): + + @classmethod + def setUpClass(cls): + super(TestDataFrameEvalNumExprPython, cls).setUpClass() + cls.engine = 'numexpr' + cls.parser = 'python' + tm.skip_if_no_ne(cls.engine) + + +class TestDataFrameEvalPythonPandas(TestDataFrameEvalNumExprPandas): + + @classmethod + def setUpClass(cls): + super(TestDataFrameEvalPythonPandas, cls).setUpClass() + cls.engine = 'python' + cls.parser = 'pandas' + + +class TestDataFrameEvalPythonPython(TestDataFrameEvalNumExprPython): + + @classmethod + def setUpClass(cls): + # must chain to the parent's setUpClass, not tearDownClass + super(TestDataFrameEvalPythonPython, cls).setUpClass() + cls.engine = cls.parser = 'python' + + +if __name__ == '__main__': + nose.runmodule(argv=[__file__, '-vvs', '-x', '--pdb', '--pdb-failure'], + exit=False) diff --git a/pandas/tests/frame/test_replace.py b/pandas/tests/frame/test_replace.py new file mode 100644 index 0000000000000..bed0e0623ace0 --- /dev/null +++ b/pandas/tests/frame/test_replace.py @@ -0,0 +1,1055 @@ +# -*- coding: utf-8 -*- + +from __future__ import print_function + +from datetime import datetime +import re + +from pandas.compat import (zip, range, lrange, StringIO) +from pandas import (DataFrame, Series, Index, date_range, compat, + Timestamp) +import pandas as pd + +from numpy import nan +import numpy as np + +from pandas.util.testing import (assert_series_equal, + assert_frame_equal) + +import pandas.util.testing as tm + +from pandas.tests.frame.common import TestData + + +class TestDataFrameReplace(tm.TestCase, TestData): + + _multiprocess_can_split_ = True + + def test_replace_inplace(self): + self.tsframe['A'][:5] = nan + self.tsframe['A'][-5:] = nan + + tsframe = self.tsframe.copy() + tsframe.replace(nan, 0, inplace=True) + assert_frame_equal(tsframe, self.tsframe.fillna(0)) + + self.assertRaises(TypeError, self.tsframe.replace, nan, inplace=True) + self.assertRaises(TypeError, self.tsframe.replace, nan) + + # mixed type + self.mixed_frame.ix[5:20, 'foo'] = nan + 
self.mixed_frame.ix[-10:, 'A'] = nan + + result = self.mixed_frame.replace(np.nan, 0) + expected = self.mixed_frame.fillna(value=0) + assert_frame_equal(result, expected) + + tsframe = self.tsframe.copy() + tsframe.replace([nan], [0], inplace=True) + assert_frame_equal(tsframe, self.tsframe.fillna(0)) + + def test_regex_replace_scalar(self): + obj = {'a': list('ab..'), 'b': list('efgh')} + dfobj = DataFrame(obj) + mix = {'a': lrange(4), 'b': list('ab..')} + dfmix = DataFrame(mix) + + # simplest cases + # regex -> value + # obj frame + res = dfobj.replace(r'\s*\.\s*', nan, regex=True) + assert_frame_equal(dfobj, res.fillna('.')) + + # mixed + res = dfmix.replace(r'\s*\.\s*', nan, regex=True) + assert_frame_equal(dfmix, res.fillna('.')) + + # regex -> regex + # obj frame + res = dfobj.replace(r'\s*(\.)\s*', r'\1\1\1', regex=True) + objc = obj.copy() + objc['a'] = ['a', 'b', '...', '...'] + expec = DataFrame(objc) + assert_frame_equal(res, expec) + + # with mixed + res = dfmix.replace(r'\s*(\.)\s*', r'\1\1\1', regex=True) + mixc = mix.copy() + mixc['b'] = ['a', 'b', '...', '...'] + expec = DataFrame(mixc) + assert_frame_equal(res, expec) + + # everything with compiled regexs as well + res = dfobj.replace(re.compile(r'\s*\.\s*'), nan, regex=True) + assert_frame_equal(dfobj, res.fillna('.')) + + # mixed + res = dfmix.replace(re.compile(r'\s*\.\s*'), nan, regex=True) + assert_frame_equal(dfmix, res.fillna('.')) + + # regex -> regex + # obj frame + res = dfobj.replace(re.compile(r'\s*(\.)\s*'), r'\1\1\1') + objc = obj.copy() + objc['a'] = ['a', 'b', '...', '...'] + expec = DataFrame(objc) + assert_frame_equal(res, expec) + + # with mixed + res = dfmix.replace(re.compile(r'\s*(\.)\s*'), r'\1\1\1') + mixc = mix.copy() + mixc['b'] = ['a', 'b', '...', '...'] + expec = DataFrame(mixc) + assert_frame_equal(res, expec) + + res = dfmix.replace(regex=re.compile(r'\s*(\.)\s*'), value=r'\1\1\1') + mixc = mix.copy() + mixc['b'] = ['a', 'b', '...', '...'] + expec = DataFrame(mixc) + 
assert_frame_equal(res, expec) + + res = dfmix.replace(regex=r'\s*(\.)\s*', value=r'\1\1\1') + mixc = mix.copy() + mixc['b'] = ['a', 'b', '...', '...'] + expec = DataFrame(mixc) + assert_frame_equal(res, expec) + + def test_regex_replace_scalar_inplace(self): + obj = {'a': list('ab..'), 'b': list('efgh')} + dfobj = DataFrame(obj) + mix = {'a': lrange(4), 'b': list('ab..')} + dfmix = DataFrame(mix) + + # simplest cases + # regex -> value + # obj frame + res = dfobj.copy() + res.replace(r'\s*\.\s*', nan, regex=True, inplace=True) + assert_frame_equal(dfobj, res.fillna('.')) + + # mixed + res = dfmix.copy() + res.replace(r'\s*\.\s*', nan, regex=True, inplace=True) + assert_frame_equal(dfmix, res.fillna('.')) + + # regex -> regex + # obj frame + res = dfobj.copy() + res.replace(r'\s*(\.)\s*', r'\1\1\1', regex=True, inplace=True) + objc = obj.copy() + objc['a'] = ['a', 'b', '...', '...'] + expec = DataFrame(objc) + assert_frame_equal(res, expec) + + # with mixed + res = dfmix.copy() + res.replace(r'\s*(\.)\s*', r'\1\1\1', regex=True, inplace=True) + mixc = mix.copy() + mixc['b'] = ['a', 'b', '...', '...'] + expec = DataFrame(mixc) + assert_frame_equal(res, expec) + + # everything with compiled regexs as well + res = dfobj.copy() + res.replace(re.compile(r'\s*\.\s*'), nan, regex=True, inplace=True) + assert_frame_equal(dfobj, res.fillna('.')) + + # mixed + res = dfmix.copy() + res.replace(re.compile(r'\s*\.\s*'), nan, regex=True, inplace=True) + assert_frame_equal(dfmix, res.fillna('.')) + + # regex -> regex + # obj frame + res = dfobj.copy() + res.replace(re.compile(r'\s*(\.)\s*'), r'\1\1\1', regex=True, + inplace=True) + objc = obj.copy() + objc['a'] = ['a', 'b', '...', '...'] + expec = DataFrame(objc) + assert_frame_equal(res, expec) + + # with mixed + res = dfmix.copy() + res.replace(re.compile(r'\s*(\.)\s*'), r'\1\1\1', regex=True, + inplace=True) + mixc = mix.copy() + mixc['b'] = ['a', 'b', '...', '...'] + expec = DataFrame(mixc) + assert_frame_equal(res, expec) + 
+ res = dfobj.copy() + res.replace(regex=r'\s*\.\s*', value=nan, inplace=True) + assert_frame_equal(dfobj, res.fillna('.')) + + # mixed + res = dfmix.copy() + res.replace(regex=r'\s*\.\s*', value=nan, inplace=True) + assert_frame_equal(dfmix, res.fillna('.')) + + # regex -> regex + # obj frame + res = dfobj.copy() + res.replace(regex=r'\s*(\.)\s*', value=r'\1\1\1', inplace=True) + objc = obj.copy() + objc['a'] = ['a', 'b', '...', '...'] + expec = DataFrame(objc) + assert_frame_equal(res, expec) + + # with mixed + res = dfmix.copy() + res.replace(regex=r'\s*(\.)\s*', value=r'\1\1\1', inplace=True) + mixc = mix.copy() + mixc['b'] = ['a', 'b', '...', '...'] + expec = DataFrame(mixc) + assert_frame_equal(res, expec) + + # everything with compiled regexs as well + res = dfobj.copy() + res.replace(regex=re.compile(r'\s*\.\s*'), value=nan, inplace=True) + assert_frame_equal(dfobj, res.fillna('.')) + + # mixed + res = dfmix.copy() + res.replace(regex=re.compile(r'\s*\.\s*'), value=nan, inplace=True) + assert_frame_equal(dfmix, res.fillna('.')) + + # regex -> regex + # obj frame + res = dfobj.copy() + res.replace(regex=re.compile(r'\s*(\.)\s*'), value=r'\1\1\1', + inplace=True) + objc = obj.copy() + objc['a'] = ['a', 'b', '...', '...'] + expec = DataFrame(objc) + assert_frame_equal(res, expec) + + # with mixed + res = dfmix.copy() + res.replace(regex=re.compile(r'\s*(\.)\s*'), value=r'\1\1\1', + inplace=True) + mixc = mix.copy() + mixc['b'] = ['a', 'b', '...', '...'] + expec = DataFrame(mixc) + assert_frame_equal(res, expec) + + def test_regex_replace_list_obj(self): + obj = {'a': list('ab..'), 'b': list('efgh'), 'c': list('helo')} + dfobj = DataFrame(obj) + + # lists of regexes and values + # list of [re1, re2, ..., reN] -> [v1, v2, ..., vN] + to_replace_res = [r'\s*\.\s*', r'e|f|g'] + values = [nan, 'crap'] + res = dfobj.replace(to_replace_res, values, regex=True) + expec = DataFrame({'a': ['a', 'b', nan, nan], 'b': ['crap'] * 3 + + ['h'], 'c': ['h', 'crap', 'l', 'o']}) + 
assert_frame_equal(res, expec) + + # list of [re1, re2, ..., reN] -> [re1, re2, .., reN] + to_replace_res = [r'\s*(\.)\s*', r'(e|f|g)'] + values = [r'\1\1', r'\1_crap'] + res = dfobj.replace(to_replace_res, values, regex=True) + expec = DataFrame({'a': ['a', 'b', '..', '..'], 'b': ['e_crap', + 'f_crap', + 'g_crap', 'h'], + 'c': ['h', 'e_crap', 'l', 'o']}) + + assert_frame_equal(res, expec) + + # list of [re1, re2, ..., reN] -> [(re1 or v1), (re2 or v2), ..., (reN + # or vN)] + to_replace_res = [r'\s*(\.)\s*', r'e'] + values = [r'\1\1', r'crap'] + res = dfobj.replace(to_replace_res, values, regex=True) + expec = DataFrame({'a': ['a', 'b', '..', '..'], 'b': ['crap', 'f', 'g', + 'h'], + 'c': ['h', 'crap', 'l', 'o']}) + assert_frame_equal(res, expec) + + to_replace_res = [r'\s*(\.)\s*', r'e'] + values = [r'\1\1', r'crap'] + res = dfobj.replace(value=values, regex=to_replace_res) + expec = DataFrame({'a': ['a', 'b', '..', '..'], 'b': ['crap', 'f', 'g', + 'h'], + 'c': ['h', 'crap', 'l', 'o']}) + assert_frame_equal(res, expec) + + def test_regex_replace_list_obj_inplace(self): + # same as above with inplace=True + # lists of regexes and values + obj = {'a': list('ab..'), 'b': list('efgh'), 'c': list('helo')} + dfobj = DataFrame(obj) + + # lists of regexes and values + # list of [re1, re2, ..., reN] -> [v1, v2, ..., vN] + to_replace_res = [r'\s*\.\s*', r'e|f|g'] + values = [nan, 'crap'] + res = dfobj.copy() + res.replace(to_replace_res, values, inplace=True, regex=True) + expec = DataFrame({'a': ['a', 'b', nan, nan], 'b': ['crap'] * 3 + + ['h'], 'c': ['h', 'crap', 'l', 'o']}) + assert_frame_equal(res, expec) + + # list of [re1, re2, ..., reN] -> [re1, re2, .., reN] + to_replace_res = [r'\s*(\.)\s*', r'(e|f|g)'] + values = [r'\1\1', r'\1_crap'] + res = dfobj.copy() + res.replace(to_replace_res, values, inplace=True, regex=True) + expec = DataFrame({'a': ['a', 'b', '..', '..'], 'b': ['e_crap', + 'f_crap', + 'g_crap', 'h'], + 'c': ['h', 'e_crap', 'l', 'o']}) + + 
assert_frame_equal(res, expec) + + # list of [re1, re2, ..., reN] -> [(re1 or v1), (re2 or v2), ..., (reN + # or vN)] + to_replace_res = [r'\s*(\.)\s*', r'e'] + values = [r'\1\1', r'crap'] + res = dfobj.copy() + res.replace(to_replace_res, values, inplace=True, regex=True) + expec = DataFrame({'a': ['a', 'b', '..', '..'], 'b': ['crap', 'f', 'g', + 'h'], + 'c': ['h', 'crap', 'l', 'o']}) + assert_frame_equal(res, expec) + + to_replace_res = [r'\s*(\.)\s*', r'e'] + values = [r'\1\1', r'crap'] + res = dfobj.copy() + res.replace(value=values, regex=to_replace_res, inplace=True) + expec = DataFrame({'a': ['a', 'b', '..', '..'], 'b': ['crap', 'f', 'g', + 'h'], + 'c': ['h', 'crap', 'l', 'o']}) + assert_frame_equal(res, expec) + + def test_regex_replace_list_mixed(self): + # mixed frame to make sure this doesn't break things + mix = {'a': lrange(4), 'b': list('ab..')} + dfmix = DataFrame(mix) + + # lists of regexes and values + # list of [re1, re2, ..., reN] -> [v1, v2, ..., vN] + to_replace_res = [r'\s*\.\s*', r'a'] + values = [nan, 'crap'] + mix2 = {'a': lrange(4), 'b': list('ab..'), 'c': list('halo')} + dfmix2 = DataFrame(mix2) + res = dfmix2.replace(to_replace_res, values, regex=True) + expec = DataFrame({'a': mix2['a'], 'b': ['crap', 'b', nan, nan], + 'c': ['h', 'crap', 'l', 'o']}) + assert_frame_equal(res, expec) + + # list of [re1, re2, ..., reN] -> [re1, re2, .., reN] + to_replace_res = [r'\s*(\.)\s*', r'(a|b)'] + values = [r'\1\1', r'\1_crap'] + res = dfmix.replace(to_replace_res, values, regex=True) + expec = DataFrame({'a': mix['a'], 'b': ['a_crap', 'b_crap', '..', + '..']}) + + assert_frame_equal(res, expec) + + # list of [re1, re2, ..., reN] -> [(re1 or v1), (re2 or v2), ..., (reN + # or vN)] + to_replace_res = [r'\s*(\.)\s*', r'a', r'(b)'] + values = [r'\1\1', r'crap', r'\1_crap'] + res = dfmix.replace(to_replace_res, values, regex=True) + expec = DataFrame({'a': mix['a'], 'b': ['crap', 'b_crap', '..', '..']}) + assert_frame_equal(res, expec) + + 
to_replace_res = [r'\s*(\.)\s*', r'a', r'(b)'] + values = [r'\1\1', r'crap', r'\1_crap'] + res = dfmix.replace(regex=to_replace_res, value=values) + expec = DataFrame({'a': mix['a'], 'b': ['crap', 'b_crap', '..', '..']}) + assert_frame_equal(res, expec) + + def test_regex_replace_list_mixed_inplace(self): + mix = {'a': lrange(4), 'b': list('ab..')} + dfmix = DataFrame(mix) + # the same inplace + # lists of regexes and values + # list of [re1, re2, ..., reN] -> [v1, v2, ..., vN] + to_replace_res = [r'\s*\.\s*', r'a'] + values = [nan, 'crap'] + res = dfmix.copy() + res.replace(to_replace_res, values, inplace=True, regex=True) + expec = DataFrame({'a': mix['a'], 'b': ['crap', 'b', nan, nan]}) + assert_frame_equal(res, expec) + + # list of [re1, re2, ..., reN] -> [re1, re2, .., reN] + to_replace_res = [r'\s*(\.)\s*', r'(a|b)'] + values = [r'\1\1', r'\1_crap'] + res = dfmix.copy() + res.replace(to_replace_res, values, inplace=True, regex=True) + expec = DataFrame({'a': mix['a'], 'b': ['a_crap', 'b_crap', '..', + '..']}) + + assert_frame_equal(res, expec) + + # list of [re1, re2, ..., reN] -> [(re1 or v1), (re2 or v2), ..., (reN + # or vN)] + to_replace_res = [r'\s*(\.)\s*', r'a', r'(b)'] + values = [r'\1\1', r'crap', r'\1_crap'] + res = dfmix.copy() + res.replace(to_replace_res, values, inplace=True, regex=True) + expec = DataFrame({'a': mix['a'], 'b': ['crap', 'b_crap', '..', '..']}) + assert_frame_equal(res, expec) + + to_replace_res = [r'\s*(\.)\s*', r'a', r'(b)'] + values = [r'\1\1', r'crap', r'\1_crap'] + res = dfmix.copy() + res.replace(regex=to_replace_res, value=values, inplace=True) + expec = DataFrame({'a': mix['a'], 'b': ['crap', 'b_crap', '..', '..']}) + assert_frame_equal(res, expec) + + def test_regex_replace_dict_mixed(self): + mix = {'a': lrange(4), 'b': list('ab..'), 'c': ['a', 'b', nan, 'd']} + dfmix = DataFrame(mix) + + # dicts + # single dict {re1: v1}, search the whole frame + # need test for this... 
+ + # list of dicts {re1: v1, re2: v2, ..., re3: v3}, search the whole + # frame + res = dfmix.replace({'b': r'\s*\.\s*'}, {'b': nan}, regex=True) + res2 = dfmix.copy() + res2.replace({'b': r'\s*\.\s*'}, {'b': nan}, inplace=True, regex=True) + expec = DataFrame({'a': mix['a'], 'b': ['a', 'b', nan, nan], 'c': + mix['c']}) + assert_frame_equal(res, expec) + assert_frame_equal(res2, expec) + + # list of dicts {re1: re11, re2: re12, ..., reN: re1N}, search the + # whole frame + res = dfmix.replace({'b': r'\s*(\.)\s*'}, {'b': r'\1ty'}, regex=True) + res2 = dfmix.copy() + res2.replace({'b': r'\s*(\.)\s*'}, {'b': r'\1ty'}, inplace=True, + regex=True) + expec = DataFrame({'a': mix['a'], 'b': ['a', 'b', '.ty', '.ty'], 'c': + mix['c']}) + assert_frame_equal(res, expec) + assert_frame_equal(res2, expec) + + res = dfmix.replace(regex={'b': r'\s*(\.)\s*'}, value={'b': r'\1ty'}) + res2 = dfmix.copy() + res2.replace(regex={'b': r'\s*(\.)\s*'}, value={'b': r'\1ty'}, + inplace=True) + expec = DataFrame({'a': mix['a'], 'b': ['a', 'b', '.ty', '.ty'], 'c': + mix['c']}) + assert_frame_equal(res, expec) + assert_frame_equal(res2, expec) + + # scalar -> dict + # to_replace regex, {value: value} + expec = DataFrame({'a': mix['a'], 'b': [nan, 'b', '.', '.'], 'c': + mix['c']}) + res = dfmix.replace('a', {'b': nan}, regex=True) + res2 = dfmix.copy() + res2.replace('a', {'b': nan}, regex=True, inplace=True) + assert_frame_equal(res, expec) + assert_frame_equal(res2, expec) + + res = dfmix.replace('a', {'b': nan}, regex=True) + res2 = dfmix.copy() + res2.replace(regex='a', value={'b': nan}, inplace=True) + expec = DataFrame({'a': mix['a'], 'b': [nan, 'b', '.', '.'], 'c': + mix['c']}) + assert_frame_equal(res, expec) + assert_frame_equal(res2, expec) + + def test_regex_replace_dict_nested(self): + # nested dicts will not work until this is implemented for Series + mix = {'a': lrange(4), 'b': list('ab..'), 'c': ['a', 'b', nan, 'd']} + dfmix = DataFrame(mix) + res = dfmix.replace({'b': 
{r'\s*\.\s*': nan}}, regex=True) + res2 = dfmix.copy() + res4 = dfmix.copy() + res2.replace({'b': {r'\s*\.\s*': nan}}, inplace=True, regex=True) + res3 = dfmix.replace(regex={'b': {r'\s*\.\s*': nan}}) + res4.replace(regex={'b': {r'\s*\.\s*': nan}}, inplace=True) + expec = DataFrame({'a': mix['a'], 'b': ['a', 'b', nan, nan], 'c': + mix['c']}) + assert_frame_equal(res, expec) + assert_frame_equal(res2, expec) + assert_frame_equal(res3, expec) + assert_frame_equal(res4, expec) + + def test_regex_replace_dict_nested_gh4115(self): + df = pd.DataFrame({'Type': ['Q', 'T', 'Q', 'Q', 'T'], 'tmp': 2}) + expected = DataFrame({'Type': [0, 1, 0, 0, 1], 'tmp': 2}) + result = df.replace({'Type': {'Q': 0, 'T': 1}}) + assert_frame_equal(result, expected) + + def test_regex_replace_list_to_scalar(self): + mix = {'a': lrange(4), 'b': list('ab..'), 'c': ['a', 'b', nan, 'd']} + df = DataFrame(mix) + expec = DataFrame({'a': mix['a'], 'b': np.array([nan] * 4), + 'c': [nan, nan, nan, 'd']}) + + res = df.replace([r'\s*\.\s*', 'a|b'], nan, regex=True) + res2 = df.copy() + res3 = df.copy() + res2.replace([r'\s*\.\s*', 'a|b'], nan, regex=True, inplace=True) + res3.replace(regex=[r'\s*\.\s*', 'a|b'], value=nan, inplace=True) + assert_frame_equal(res, expec) + assert_frame_equal(res2, expec) + assert_frame_equal(res3, expec) + + def test_regex_replace_str_to_numeric(self): + # what happens when you try to replace a numeric value with a regex? 
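The nested-dict form tested above (`{column: {pattern: value}}`) is worth a standalone illustration, covering both the regex case and the plain-value case from GH 4115. Again a reviewer-side sketch against a recent pandas, not part of the diff.

```python
import numpy as np
import pandas as pd

dfmix = pd.DataFrame({'a': range(4), 'b': list('ab..')})

# {column: {regex: replacement}} restricts the regex to a single column.
res = dfmix.replace({'b': {r'\s*\.\s*': np.nan}}, regex=True)
assert res['b'][:2].tolist() == ['a', 'b']
assert res['b'].isna().tolist() == [False, False, True, True]

# The same nesting works without regex (GH 4115): a per-column value mapping.
df = pd.DataFrame({'Type': ['Q', 'T', 'Q', 'Q', 'T'], 'tmp': 2})
result = df.replace({'Type': {'Q': 0, 'T': 1}})
assert result['Type'].tolist() == [0, 1, 0, 0, 1]
```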
+ mix = {'a': lrange(4), 'b': list('ab..'), 'c': ['a', 'b', nan, 'd']} + df = DataFrame(mix) + res = df.replace(r'\s*\.\s*', 0, regex=True) + res2 = df.copy() + res2.replace(r'\s*\.\s*', 0, inplace=True, regex=True) + res3 = df.copy() + res3.replace(regex=r'\s*\.\s*', value=0, inplace=True) + expec = DataFrame({'a': mix['a'], 'b': ['a', 'b', 0, 0], 'c': + mix['c']}) + assert_frame_equal(res, expec) + assert_frame_equal(res2, expec) + assert_frame_equal(res3, expec) + + def test_regex_replace_regex_list_to_numeric(self): + mix = {'a': lrange(4), 'b': list('ab..'), 'c': ['a', 'b', nan, 'd']} + df = DataFrame(mix) + res = df.replace([r'\s*\.\s*', 'b'], 0, regex=True) + res2 = df.copy() + res2.replace([r'\s*\.\s*', 'b'], 0, regex=True, inplace=True) + res3 = df.copy() + res3.replace(regex=[r'\s*\.\s*', 'b'], value=0, inplace=True) + expec = DataFrame({'a': mix['a'], 'b': ['a', 0, 0, 0], 'c': ['a', 0, + nan, + 'd']}) + assert_frame_equal(res, expec) + assert_frame_equal(res2, expec) + assert_frame_equal(res3, expec) + + def test_regex_replace_series_of_regexes(self): + mix = {'a': lrange(4), 'b': list('ab..'), 'c': ['a', 'b', nan, 'd']} + df = DataFrame(mix) + s1 = Series({'b': r'\s*\.\s*'}) + s2 = Series({'b': nan}) + res = df.replace(s1, s2, regex=True) + res2 = df.copy() + res2.replace(s1, s2, inplace=True, regex=True) + res3 = df.copy() + res3.replace(regex=s1, value=s2, inplace=True) + expec = DataFrame({'a': mix['a'], 'b': ['a', 'b', nan, nan], 'c': + mix['c']}) + assert_frame_equal(res, expec) + assert_frame_equal(res2, expec) + assert_frame_equal(res3, expec) + + def test_regex_replace_numeric_to_object_conversion(self): + mix = {'a': lrange(4), 'b': list('ab..'), 'c': ['a', 'b', nan, 'd']} + df = DataFrame(mix) + expec = DataFrame({'a': ['a', 1, 2, 3], 'b': mix['b'], 'c': mix['c']}) + res = df.replace(0, 'a') + assert_frame_equal(res, expec) + self.assertEqual(res.a.dtype, np.object_) + + def test_replace_regex_metachar(self): + metachars = '[]', '()', '\d', 
'\w', '\s' + + for metachar in metachars: + df = DataFrame({'a': [metachar, 'else']}) + result = df.replace({'a': {metachar: 'paren'}}) + expected = DataFrame({'a': ['paren', 'else']}) + assert_frame_equal(result, expected) + + def test_replace(self): + self.tsframe['A'][:5] = nan + self.tsframe['A'][-5:] = nan + + zero_filled = self.tsframe.replace(nan, -1e8) + assert_frame_equal(zero_filled, self.tsframe.fillna(-1e8)) + assert_frame_equal(zero_filled.replace(-1e8, nan), self.tsframe) + + self.tsframe['A'][:5] = nan + self.tsframe['A'][-5:] = nan + self.tsframe['B'][:5] = -1e8 + + # empty + df = DataFrame(index=['a', 'b']) + assert_frame_equal(df, df.replace(5, 7)) + + # GH 11698 + # test for mixed data types. + df = pd.DataFrame([('-', pd.to_datetime('20150101')), + ('a', pd.to_datetime('20150102'))]) + df1 = df.replace('-', np.nan) + expected_df = pd.DataFrame([(np.nan, pd.to_datetime('20150101')), + ('a', pd.to_datetime('20150102'))]) + assert_frame_equal(df1, expected_df) + + def test_replace_list(self): + obj = {'a': list('ab..'), 'b': list('efgh'), 'c': list('helo')} + dfobj = DataFrame(obj) + + # lists of regexes and values + # list of [v1, v2, ..., vN] -> [v1, v2, ..., vN] + to_replace_res = [r'.', r'e'] + values = [nan, 'crap'] + res = dfobj.replace(to_replace_res, values) + expec = DataFrame({'a': ['a', 'b', nan, nan], + 'b': ['crap', 'f', 'g', 'h'], 'c': ['h', 'crap', + 'l', 'o']}) + assert_frame_equal(res, expec) + + # list of [v1, v2, ..., vN] -> [v1, v2, .., vN] + to_replace_res = [r'.', r'f'] + values = [r'..', r'crap'] + res = dfobj.replace(to_replace_res, values) + expec = DataFrame({'a': ['a', 'b', '..', '..'], 'b': ['e', 'crap', 'g', + 'h'], + 'c': ['h', 'e', 'l', 'o']}) + + assert_frame_equal(res, expec) + + def test_replace_series_dict(self): + # from GH 3064 + df = DataFrame({'zero': {'a': 0.0, 'b': 1}, 'one': {'a': 2.0, 'b': 0}}) + result = df.replace(0, {'zero': 0.5, 'one': 1.0}) + expected = DataFrame( + {'zero': {'a': 0.5, 'b': 1}, 'one': 
{'a': 2.0, 'b': 1.0}}) + assert_frame_equal(result, expected) + + result = df.replace(0, df.mean()) + assert_frame_equal(result, expected) + + # series to series/dict + df = DataFrame({'zero': {'a': 0.0, 'b': 1}, 'one': {'a': 2.0, 'b': 0}}) + s = Series({'zero': 0.0, 'one': 2.0}) + result = df.replace(s, {'zero': 0.5, 'one': 1.0}) + expected = DataFrame( + {'zero': {'a': 0.5, 'b': 1}, 'one': {'a': 1.0, 'b': 0.0}}) + assert_frame_equal(result, expected) + + result = df.replace(s, df.mean()) + assert_frame_equal(result, expected) + + def test_replace_convert(self): + # gh 3907 + df = DataFrame([['foo', 'bar', 'bah'], ['bar', 'foo', 'bah']]) + m = {'foo': 1, 'bar': 2, 'bah': 3} + rep = df.replace(m) + expec = Series([np.int64] * 3) + res = rep.dtypes + assert_series_equal(expec, res) + + def test_replace_mixed(self): + self.mixed_frame.ix[5:20, 'foo'] = nan + self.mixed_frame.ix[-10:, 'A'] = nan + + result = self.mixed_frame.replace(np.nan, -18) + expected = self.mixed_frame.fillna(value=-18) + assert_frame_equal(result, expected) + assert_frame_equal(result.replace(-18, nan), self.mixed_frame) + + result = self.mixed_frame.replace(np.nan, -1e8) + expected = self.mixed_frame.fillna(value=-1e8) + assert_frame_equal(result, expected) + assert_frame_equal(result.replace(-1e8, nan), self.mixed_frame) + + # int block upcasting + df = DataFrame({'A': Series([1.0, 2.0], dtype='float64'), + 'B': Series([0, 1], dtype='int64')}) + expected = DataFrame({'A': Series([1.0, 2.0], dtype='float64'), + 'B': Series([0.5, 1], dtype='float64')}) + result = df.replace(0, 0.5) + assert_frame_equal(result, expected) + + df.replace(0, 0.5, inplace=True) + assert_frame_equal(df, expected) + + # int block splitting + df = DataFrame({'A': Series([1.0, 2.0], dtype='float64'), + 'B': Series([0, 1], dtype='int64'), + 'C': Series([1, 2], dtype='int64')}) + expected = DataFrame({'A': Series([1.0, 2.0], dtype='float64'), + 'B': Series([0.5, 1], dtype='float64'), + 'C': Series([1, 2], dtype='int64')}) 
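The int-block upcasting behavior covered above is easy to demonstrate in isolation; a minimal sketch against a recent pandas (not part of the diff):

```python
import pandas as pd

df = pd.DataFrame({'A': pd.Series([1.0, 2.0], dtype='float64'),
                   'B': pd.Series([0, 1], dtype='int64')})

# Replacing an int value with a float upcasts the int64 block to float64,
# leaving the float64 column untouched.
result = df.replace(0, 0.5)
assert str(result['B'].dtype) == 'float64'
assert result['B'].tolist() == [0.5, 1.0]
```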
+ result = df.replace(0, 0.5) + assert_frame_equal(result, expected) + + # to object block upcasting + df = DataFrame({'A': Series([1.0, 2.0], dtype='float64'), + 'B': Series([0, 1], dtype='int64')}) + expected = DataFrame({'A': Series([1, 'foo'], dtype='object'), + 'B': Series([0, 1], dtype='int64')}) + result = df.replace(2, 'foo') + assert_frame_equal(result, expected) + + expected = DataFrame({'A': Series(['foo', 'bar'], dtype='object'), + 'B': Series([0, 'foo'], dtype='object')}) + result = df.replace([1, 2], ['foo', 'bar']) + assert_frame_equal(result, expected) + + # test case from + df = DataFrame({'A': Series([3, 0], dtype='int64'), + 'B': Series([0, 3], dtype='int64')}) + result = df.replace(3, df.mean().to_dict()) + expected = df.copy().astype('float64') + m = df.mean() + expected.iloc[0, 0] = m[0] + expected.iloc[1, 1] = m[1] + assert_frame_equal(result, expected) + + def test_replace_simple_nested_dict(self): + df = DataFrame({'col': range(1, 5)}) + expected = DataFrame({'col': ['a', 2, 3, 'b']}) + + result = df.replace({'col': {1: 'a', 4: 'b'}}) + assert_frame_equal(expected, result) + + # in this case, should be the same as the not nested version + result = df.replace({1: 'a', 4: 'b'}) + assert_frame_equal(expected, result) + + def test_replace_simple_nested_dict_with_nonexistent_value(self): + df = DataFrame({'col': range(1, 5)}) + expected = DataFrame({'col': ['a', 2, 3, 'b']}) + + result = df.replace({-1: '-', 1: 'a', 4: 'b'}) + assert_frame_equal(expected, result) + + result = df.replace({'col': {-1: '-', 1: 'a', 4: 'b'}}) + assert_frame_equal(expected, result) + + def test_replace_value_is_none(self): + self.assertRaises(TypeError, self.tsframe.replace, nan) + orig_value = self.tsframe.iloc[0, 0] + orig2 = self.tsframe.iloc[1, 0] + + self.tsframe.iloc[0, 0] = nan + self.tsframe.iloc[1, 0] = 1 + + result = self.tsframe.replace(to_replace={nan: 0}) + expected = self.tsframe.T.replace(to_replace={nan: 0}).T + assert_frame_equal(result, expected) + 
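As `test_replace_simple_nested_dict` above notes, the nested and flat dict forms coincide when the keys do not collide with column labels; a quick reviewer-side check against a recent pandas (not part of the diff):

```python
import pandas as pd

df = pd.DataFrame({'col': range(1, 5)})

# Nested form: restrict the mapping to column 'col'.
nested = df.replace({'col': {1: 'a', 4: 'b'}})
# Flat form: 1 and 4 are not column labels, so this maps values frame-wide.
flat = df.replace({1: 'a', 4: 'b'})

pd.testing.assert_frame_equal(nested, flat)
assert nested['col'].tolist() == ['a', 2, 3, 'b']
```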
+ result = self.tsframe.replace(to_replace={nan: 0, 1: -1e8}) + tsframe = self.tsframe.copy() + tsframe.iloc[0, 0] = 0 + tsframe.iloc[1, 0] = -1e8 + expected = tsframe + assert_frame_equal(expected, result) + self.tsframe.iloc[0, 0] = orig_value + self.tsframe.iloc[1, 0] = orig2 + + def test_replace_for_new_dtypes(self): + + # dtypes + tsframe = self.tsframe.copy().astype(np.float32) + tsframe['A'][:5] = nan + tsframe['A'][-5:] = nan + + zero_filled = tsframe.replace(nan, -1e8) + assert_frame_equal(zero_filled, tsframe.fillna(-1e8)) + assert_frame_equal(zero_filled.replace(-1e8, nan), tsframe) + + tsframe['A'][:5] = nan + tsframe['A'][-5:] = nan + tsframe['B'][:5] = -1e8 + + b = tsframe['B'] + b[b == -1e8] = nan + tsframe['B'] = b + result = tsframe.fillna(method='bfill') + assert_frame_equal(result, tsframe.fillna(method='bfill')) + + def test_replace_dtypes(self): + # int + df = DataFrame({'ints': [1, 2, 3]}) + result = df.replace(1, 0) + expected = DataFrame({'ints': [0, 2, 3]}) + assert_frame_equal(result, expected) + + df = DataFrame({'ints': [1, 2, 3]}, dtype=np.int32) + result = df.replace(1, 0) + expected = DataFrame({'ints': [0, 2, 3]}, dtype=np.int32) + assert_frame_equal(result, expected) + + df = DataFrame({'ints': [1, 2, 3]}, dtype=np.int16) + result = df.replace(1, 0) + expected = DataFrame({'ints': [0, 2, 3]}, dtype=np.int16) + assert_frame_equal(result, expected) + + # bools + df = DataFrame({'bools': [True, False, True]}) + result = df.replace(False, True) + self.assertTrue(result.values.all()) + + # complex blocks + df = DataFrame({'complex': [1j, 2j, 3j]}) + result = df.replace(1j, 0j) + expected = DataFrame({'complex': [0j, 2j, 3j]}) + assert_frame_equal(result, expected) + + # datetime blocks + prev = datetime.today() + now = datetime.today() + df = DataFrame({'datetime64': Index([prev, now, prev])}) + result = df.replace(prev, now) + expected = DataFrame({'datetime64': Index([now] * 3)}) + assert_frame_equal(result, expected) + + def 
test_replace_input_formats(self): + # both dicts + to_rep = {'A': np.nan, 'B': 0, 'C': ''} + values = {'A': 0, 'B': -1, 'C': 'missing'} + df = DataFrame({'A': [np.nan, 0, np.inf], 'B': [0, 2, 5], + 'C': ['', 'asdf', 'fd']}) + filled = df.replace(to_rep, values) + expected = {} + for k, v in compat.iteritems(df): + expected[k] = v.replace(to_rep[k], values[k]) + assert_frame_equal(filled, DataFrame(expected)) + + result = df.replace([0, 2, 5], [5, 2, 0]) + expected = DataFrame({'A': [np.nan, 5, np.inf], 'B': [5, 2, 0], + 'C': ['', 'asdf', 'fd']}) + assert_frame_equal(result, expected) + + # dict to scalar + filled = df.replace(to_rep, 0) + expected = {} + for k, v in compat.iteritems(df): + expected[k] = v.replace(to_rep[k], 0) + assert_frame_equal(filled, DataFrame(expected)) + + self.assertRaises(TypeError, df.replace, to_rep, [np.nan, 0, '']) + + # scalar to dict + values = {'A': 0, 'B': -1, 'C': 'missing'} + df = DataFrame({'A': [np.nan, 0, np.nan], 'B': [0, 2, 5], + 'C': ['', 'asdf', 'fd']}) + filled = df.replace(np.nan, values) + expected = {} + for k, v in compat.iteritems(df): + expected[k] = v.replace(np.nan, values[k]) + assert_frame_equal(filled, DataFrame(expected)) + + # list to list + to_rep = [np.nan, 0, ''] + values = [-2, -1, 'missing'] + result = df.replace(to_rep, values) + expected = df.copy() + for i in range(len(to_rep)): + expected.replace(to_rep[i], values[i], inplace=True) + assert_frame_equal(result, expected) + + self.assertRaises(ValueError, df.replace, to_rep, values[1:]) + + # list to scalar + to_rep = [np.nan, 0, ''] + result = df.replace(to_rep, -1) + expected = df.copy() + for i in range(len(to_rep)): + expected.replace(to_rep[i], -1, inplace=True) + assert_frame_equal(result, expected) + + def test_replace_limit(self): + pass + + def test_replace_dict_no_regex(self): + answer = Series({0: 'Strongly Agree', 1: 'Agree', 2: 'Neutral', 3: + 'Disagree', 4: 'Strongly Disagree'}) + weights = {'Agree': 4, 'Disagree': 2, 'Neutral': 3, 
'Strongly Agree': + 5, 'Strongly Disagree': 1} + expected = Series({0: 5, 1: 4, 2: 3, 3: 2, 4: 1}) + result = answer.replace(weights) + assert_series_equal(result, expected) + + def test_replace_series_no_regex(self): + answer = Series({0: 'Strongly Agree', 1: 'Agree', 2: 'Neutral', 3: + 'Disagree', 4: 'Strongly Disagree'}) + weights = Series({'Agree': 4, 'Disagree': 2, 'Neutral': 3, + 'Strongly Agree': 5, 'Strongly Disagree': 1}) + expected = Series({0: 5, 1: 4, 2: 3, 3: 2, 4: 1}) + result = answer.replace(weights) + assert_series_equal(result, expected) + + def test_replace_dict_tuple_list_ordering_remains_the_same(self): + df = DataFrame(dict(A=[nan, 1])) + res1 = df.replace(to_replace={nan: 0, 1: -1e8}) + res2 = df.replace(to_replace=(1, nan), value=[-1e8, 0]) + res3 = df.replace(to_replace=[1, nan], value=[-1e8, 0]) + + expected = DataFrame({'A': [0, -1e8]}) + assert_frame_equal(res1, res2) + assert_frame_equal(res2, res3) + assert_frame_equal(res3, expected) + + def test_replace_doesnt_replace_without_regex(self): + raw = """fol T_opp T_Dir T_Enh + 0 1 0 0 vo + 1 2 vr 0 0 + 2 2 0 0 0 + 3 3 0 bt 0""" + df = pd.read_csv(StringIO(raw), sep=r'\s+') + res = df.replace({'\D': 1}) + assert_frame_equal(df, res) + + def test_replace_bool_with_string(self): + df = DataFrame({'a': [True, False], 'b': list('ab')}) + result = df.replace(True, 'a') + expected = DataFrame({'a': ['a', False], 'b': df.b}) + assert_frame_equal(result, expected) + + def test_replace_pure_bool_with_string_no_op(self): + df = DataFrame(np.random.rand(2, 2) > 0.5) + result = df.replace('asdf', 'fdsa') + assert_frame_equal(df, result) + + def test_replace_bool_with_bool(self): + df = DataFrame(np.random.rand(2, 2) > 0.5) + result = df.replace(False, True) + expected = DataFrame(np.ones((2, 2), dtype=bool)) + assert_frame_equal(result, expected) + + def test_replace_with_dict_with_bool_keys(self): + df = DataFrame({0: [True, False], 1: [False, True]}) + with tm.assertRaisesRegexp(TypeError, 'Cannot 
compare types .+'): + df.replace({'asdf': 'asdb', True: 'yes'}) + + def test_replace_truthy(self): + df = DataFrame({'a': [True, True]}) + r = df.replace([np.inf, -np.inf], np.nan) + e = df + assert_frame_equal(r, e) + + def test_replace_int_to_int_chain(self): + df = DataFrame({'a': lrange(1, 5)}) + with tm.assertRaisesRegexp(ValueError, "Replacement not allowed .+"): + df.replace({'a': dict(zip(range(1, 5), range(2, 6)))}) + + def test_replace_str_to_str_chain(self): + a = np.arange(1, 5) + astr = a.astype(str) + bstr = np.arange(2, 6).astype(str) + df = DataFrame({'a': astr}) + with tm.assertRaisesRegexp(ValueError, "Replacement not allowed .+"): + df.replace({'a': dict(zip(astr, bstr))}) + + def test_replace_swapping_bug(self): + df = pd.DataFrame({'a': [True, False, True]}) + res = df.replace({'a': {True: 'Y', False: 'N'}}) + expect = pd.DataFrame({'a': ['Y', 'N', 'Y']}) + assert_frame_equal(res, expect) + + df = pd.DataFrame({'a': [0, 1, 0]}) + res = df.replace({'a': {0: 'Y', 1: 'N'}}) + expect = pd.DataFrame({'a': ['Y', 'N', 'Y']}) + assert_frame_equal(res, expect) + + def test_replace_period(self): + d = { + 'fname': { + 'out_augmented_AUG_2011.json': + pd.Period(year=2011, month=8, freq='M'), + 'out_augmented_JAN_2011.json': + pd.Period(year=2011, month=1, freq='M'), + 'out_augmented_MAY_2012.json': + pd.Period(year=2012, month=5, freq='M'), + 'out_augmented_SUBSIDY_WEEK.json': + pd.Period(year=2011, month=4, freq='M'), + 'out_augmented_AUG_2012.json': + pd.Period(year=2012, month=8, freq='M'), + 'out_augmented_MAY_2011.json': + pd.Period(year=2011, month=5, freq='M'), + 'out_augmented_SEP_2013.json': + pd.Period(year=2013, month=9, freq='M')}} + + df = pd.DataFrame(['out_augmented_AUG_2012.json', + 'out_augmented_SEP_2013.json', + 'out_augmented_SUBSIDY_WEEK.json', + 'out_augmented_MAY_2012.json', + 'out_augmented_MAY_2011.json', + 'out_augmented_AUG_2011.json', + 'out_augmented_JAN_2011.json'], columns=['fname']) + tm.assert_equal(set(df.fname.values), 
set(d['fname'].keys())) + expected = DataFrame({'fname': [d['fname'][k] + for k in df.fname.values]}) + result = df.replace(d) + assert_frame_equal(result, expected) + + def test_replace_datetime(self): + d = {'fname': + {'out_augmented_AUG_2011.json': pd.Timestamp('2011-08'), + 'out_augmented_JAN_2011.json': pd.Timestamp('2011-01'), + 'out_augmented_MAY_2012.json': pd.Timestamp('2012-05'), + 'out_augmented_SUBSIDY_WEEK.json': pd.Timestamp('2011-04'), + 'out_augmented_AUG_2012.json': pd.Timestamp('2012-08'), + 'out_augmented_MAY_2011.json': pd.Timestamp('2011-05'), + 'out_augmented_SEP_2013.json': pd.Timestamp('2013-09')}} + + df = pd.DataFrame(['out_augmented_AUG_2012.json', + 'out_augmented_SEP_2013.json', + 'out_augmented_SUBSIDY_WEEK.json', + 'out_augmented_MAY_2012.json', + 'out_augmented_MAY_2011.json', + 'out_augmented_AUG_2011.json', + 'out_augmented_JAN_2011.json'], columns=['fname']) + tm.assert_equal(set(df.fname.values), set(d['fname'].keys())) + expected = DataFrame({'fname': [d['fname'][k] + for k in df.fname.values]}) + result = df.replace(d) + assert_frame_equal(result, expected) + + def test_replace_datetimetz(self): + + # GH 11326 + # behaving poorly when presented with a datetime64[ns, tz] + df = DataFrame({'A': date_range('20130101', periods=3, + tz='US/Eastern'), + 'B': [0, np.nan, 2]}) + result = df.replace(np.nan, 1) + expected = DataFrame({'A': date_range('20130101', periods=3, + tz='US/Eastern'), + 'B': Series([0, 1, 2], dtype='float64')}) + assert_frame_equal(result, expected) + + result = df.fillna(1) + assert_frame_equal(result, expected) + + result = df.replace(0, np.nan) + expected = DataFrame({'A': date_range('20130101', periods=3, + tz='US/Eastern'), + 'B': [np.nan, np.nan, 2]}) + assert_frame_equal(result, expected) + + result = df.replace(Timestamp('20130102', tz='US/Eastern'), + Timestamp('20130104', tz='US/Eastern')) + expected = DataFrame({'A': [Timestamp('20130101', tz='US/Eastern'), + Timestamp('20130104', tz='US/Eastern'), + 
Timestamp('20130103', tz='US/Eastern')], + 'B': [0, np.nan, 2]}) + assert_frame_equal(result, expected) + + result = df.copy() + result.iloc[1, 0] = np.nan + result = result.replace( + {'A': pd.NaT}, Timestamp('20130104', tz='US/Eastern')) + assert_frame_equal(result, expected) + + # coerce to object + result = df.copy() + result.iloc[1, 0] = np.nan + result = result.replace( + {'A': pd.NaT}, Timestamp('20130104', tz='US/Pacific')) + expected = DataFrame({'A': [Timestamp('20130101', tz='US/Eastern'), + Timestamp('20130104', tz='US/Pacific'), + Timestamp('20130103', tz='US/Eastern')], + 'B': [0, np.nan, 2]}) + assert_frame_equal(result, expected) + + result = df.copy() + result.iloc[1, 0] = np.nan + result = result.replace({'A': np.nan}, Timestamp('20130104')) + expected = DataFrame({'A': [Timestamp('20130101', tz='US/Eastern'), + Timestamp('20130104'), + Timestamp('20130103', tz='US/Eastern')], + 'B': [0, np.nan, 2]}) + assert_frame_equal(result, expected) diff --git a/pandas/tests/frame/test_repr_info.py b/pandas/tests/frame/test_repr_info.py new file mode 100644 index 0000000000000..a458445081be5 --- /dev/null +++ b/pandas/tests/frame/test_repr_info.py @@ -0,0 +1,377 @@ +# -*- coding: utf-8 -*- + +from __future__ import print_function + +from datetime import datetime, timedelta +import re +import sys + +from numpy import nan +import numpy as np + +from pandas import (DataFrame, compat, option_context) +from pandas.compat import StringIO, lrange, u +import pandas.core.format as fmt +import pandas as pd + +from numpy.testing.decorators import slow +import pandas.util.testing as tm + +from pandas.tests.frame.common import TestData + + +# Segregated collection of methods that require the BlockManager internal data +# structure + + +class TestDataFrameReprInfoEtc(tm.TestCase, TestData): + + _multiprocess_can_split_ = True + + def test_repr_empty(self): + # empty + foo = repr(self.empty) # noqa + + # empty with index + frame = DataFrame(index=np.arange(1000)) + foo = 
repr(frame) # noqa + + def test_repr_mixed(self): + buf = StringIO() + + # mixed + foo = repr(self.mixed_frame) # noqa + self.mixed_frame.info(verbose=False, buf=buf) + + @slow + def test_repr_mixed_big(self): + # big mixed + biggie = DataFrame({'A': np.random.randn(200), + 'B': tm.makeStringIndex(200)}, + index=lrange(200)) + biggie.loc[:20, 'A'] = nan + biggie.loc[:20, 'B'] = nan + + foo = repr(biggie) # noqa + + def test_repr(self): + buf = StringIO() + + # small one + foo = repr(self.frame) + self.frame.info(verbose=False, buf=buf) + + # even smaller + self.frame.reindex(columns=['A']).info(verbose=False, buf=buf) + self.frame.reindex(columns=['A', 'B']).info(verbose=False, buf=buf) + + # exhausting cases in DataFrame.info + + # columns but no index + no_index = DataFrame(columns=[0, 1, 3]) + foo = repr(no_index) # noqa + + # no columns or index + self.empty.info(buf=buf) + + df = DataFrame(["a\n\r\tb"], columns=["a\n\r\td"], index=["a\n\r\tf"]) + self.assertFalse("\t" in repr(df)) + self.assertFalse("\r" in repr(df)) + self.assertFalse("a\n" in repr(df)) + + def test_repr_dimensions(self): + df = DataFrame([[1, 2, ], [3, 4]]) + with option_context('display.show_dimensions', True): + self.assertTrue("2 rows x 2 columns" in repr(df)) + + with option_context('display.show_dimensions', False): + self.assertFalse("2 rows x 2 columns" in repr(df)) + + with option_context('display.show_dimensions', 'truncate'): + self.assertFalse("2 rows x 2 columns" in repr(df)) + + @slow + def test_repr_big(self): + # big one + biggie = DataFrame(np.zeros((200, 4)), columns=lrange(4), + index=lrange(200)) + repr(biggie) + + def test_repr_unsortable(self): + # columns are not sortable + import warnings + warn_filters = warnings.filters + warnings.filterwarnings('ignore', + category=FutureWarning, + module=".*format") + + unsortable = DataFrame({'foo': [1] * 50, + datetime.today(): [1] * 50, + 'bar': ['bar'] * 50, + datetime.today() + timedelta(1): ['bar'] * 50}, + 
index=np.arange(50)) + repr(unsortable) + + fmt.set_option('display.precision', 3, 'display.column_space', 10) + repr(self.frame) + + fmt.set_option('display.max_rows', 10, 'display.max_columns', 2) + repr(self.frame) + + fmt.set_option('display.max_rows', 1000, 'display.max_columns', 1000) + repr(self.frame) + + self.reset_display_options() + + warnings.filters = warn_filters + + def test_repr_unicode(self): + uval = u('\u03c3\u03c3\u03c3\u03c3') + + # TODO(wesm): is this supposed to be used? + bval = uval.encode('utf-8') # noqa + + df = DataFrame({'A': [uval, uval]}) + + result = repr(df) + ex_top = ' A' + self.assertEqual(result.split('\n')[0].rstrip(), ex_top) + + df = DataFrame({'A': [uval, uval]}) + result = repr(df) + self.assertEqual(result.split('\n')[0].rstrip(), ex_top) + + def test_unicode_string_with_unicode(self): + df = DataFrame({'A': [u("\u05d0")]}) + + if compat.PY3: + str(df) + else: + compat.text_type(df) + + def test_bytestring_with_unicode(self): + df = DataFrame({'A': [u("\u05d0")]}) + if compat.PY3: + bytes(df) + else: + str(df) + + def test_very_wide_info_repr(self): + df = DataFrame(np.random.randn(10, 20), + columns=tm.rands_array(10, 20)) + repr(df) + + def test_repr_column_name_unicode_truncation_bug(self): + # #1906 + df = DataFrame({'Id': [7117434], + 'StringCol': ('Is it possible to modify drop plot code' + ' so that the output graph is displayed ' + 'in iphone simulator, Is it possible to ' + 'modify drop plot code so that the ' + 'output graph is \xe2\x80\xa8displayed ' + 'in iphone simulator.Now we are adding ' + 'the CSV file externally. 
I want to Call' + ' the File through the code..')}) + + result = repr(df) + self.assertIn('StringCol', result) + + def test_latex_repr(self): + result = r"""\begin{tabular}{llll} +\toprule +{} & 0 & 1 & 2 \\ +\midrule +0 & $\alpha$ & b & c \\ +1 & 1 & 2 & 3 \\ +\bottomrule +\end{tabular} +""" + with option_context("display.latex.escape", False): + df = DataFrame([[r'$\alpha$', 'b', 'c'], [1, 2, 3]]) + self.assertEqual(result, df._repr_latex_()) + + def test_info(self): + io = StringIO() + self.frame.info(buf=io) + self.tsframe.info(buf=io) + + frame = DataFrame(np.random.randn(5, 3)) + + import sys + sys.stdout = StringIO() + frame.info() + frame.info(verbose=False) + sys.stdout = sys.__stdout__ + + def test_info_wide(self): + from pandas import set_option, reset_option + io = StringIO() + df = DataFrame(np.random.randn(5, 101)) + df.info(buf=io) + + io = StringIO() + df.info(buf=io, max_cols=101) + rs = io.getvalue() + self.assertTrue(len(rs.splitlines()) > 100) + xp = rs + + set_option('display.max_info_columns', 101) + io = StringIO() + df.info(buf=io) + self.assertEqual(rs, xp) + reset_option('display.max_info_columns') + + def test_info_duplicate_columns(self): + io = StringIO() + + # it works! 
+ frame = DataFrame(np.random.randn(1500, 4), + columns=['a', 'a', 'b', 'b']) + frame.info(buf=io) + + def test_info_duplicate_columns_shows_correct_dtypes(self): + # GH11761 + io = StringIO() + + frame = DataFrame([[1, 2.0]], + columns=['a', 'a']) + frame.info(buf=io) + io.seek(0) + lines = io.readlines() + self.assertEqual('a 1 non-null int64\n', lines[3]) + self.assertEqual('a 1 non-null float64\n', lines[4]) + + def test_info_shows_column_dtypes(self): + dtypes = ['int64', 'float64', 'datetime64[ns]', 'timedelta64[ns]', + 'complex128', 'object', 'bool'] + data = {} + n = 10 + for i, dtype in enumerate(dtypes): + data[i] = np.random.randint(2, size=n).astype(dtype) + df = DataFrame(data) + buf = StringIO() + df.info(buf=buf) + res = buf.getvalue() + for i, dtype in enumerate(dtypes): + name = '%d %d non-null %s' % (i, n, dtype) + assert name in res + + def test_info_max_cols(self): + df = DataFrame(np.random.randn(10, 5)) + for len_, verbose in [(5, None), (5, False), (10, True)]: + # For verbose always ^ setting ^ summarize ^ full output + with option_context('max_info_columns', 4): + buf = StringIO() + df.info(buf=buf, verbose=verbose) + res = buf.getvalue() + self.assertEqual(len(res.strip().split('\n')), len_) + + for len_, verbose in [(10, None), (5, False), (10, True)]: + + # max_cols not exceeded + with option_context('max_info_columns', 5): + buf = StringIO() + df.info(buf=buf, verbose=verbose) + res = buf.getvalue() + self.assertEqual(len(res.strip().split('\n')), len_) + + for len_, max_cols in [(10, 5), (5, 4)]: + # setting truncates + with option_context('max_info_columns', 4): + buf = StringIO() + df.info(buf=buf, max_cols=max_cols) + res = buf.getvalue() + self.assertEqual(len(res.strip().split('\n')), len_) + + # setting wouldn't truncate + with option_context('max_info_columns', 5): + buf = StringIO() + df.info(buf=buf, max_cols=max_cols) + res = buf.getvalue() + self.assertEqual(len(res.strip().split('\n')), len_) + + def 
test_info_memory_usage(self): + # Ensure memory usage is displayed, when asserted, on the last line + dtypes = ['int64', 'float64', 'datetime64[ns]', 'timedelta64[ns]', + 'complex128', 'object', 'bool'] + data = {} + n = 10 + for i, dtype in enumerate(dtypes): + data[i] = np.random.randint(2, size=n).astype(dtype) + df = DataFrame(data) + buf = StringIO() + # display memory usage case + df.info(buf=buf, memory_usage=True) + res = buf.getvalue().splitlines() + self.assertTrue("memory usage: " in res[-1]) + # do not display memory usage case + df.info(buf=buf, memory_usage=False) + res = buf.getvalue().splitlines() + self.assertTrue("memory usage: " not in res[-1]) + + df.info(buf=buf, memory_usage=True) + res = buf.getvalue().splitlines() + # memory usage is a lower bound, so print it as XYZ+ MB + self.assertTrue(re.match(r"memory usage: [^+]+\+", res[-1])) + + df.iloc[:, :5].info(buf=buf, memory_usage=True) + res = buf.getvalue().splitlines() + # excluded column with object dtype, so estimate is accurate + self.assertFalse(re.match(r"memory usage: [^+]+\+", res[-1])) + + df_with_object_index = pd.DataFrame({'a': [1]}, index=['foo']) + df_with_object_index.info(buf=buf, memory_usage=True) + res = buf.getvalue().splitlines() + self.assertTrue(re.match(r"memory usage: [^+]+\+", res[-1])) + + df_with_object_index.info(buf=buf, memory_usage='deep') + res = buf.getvalue().splitlines() + self.assertTrue(re.match(r"memory usage: [^+]+$", res[-1])) + + self.assertTrue(df_with_object_index.memory_usage(index=True, + deep=True).sum() + > df_with_object_index.memory_usage(index=True).sum()) + + df_object = pd.DataFrame({'a': ['a']}) + self.assertTrue(df_object.memory_usage(deep=True).sum() + > df_object.memory_usage().sum()) + + # Test a DataFrame with duplicate columns + dtypes = ['int64', 'int64', 'int64', 'float64'] + data = {} + n = 100 + for i, dtype in enumerate(dtypes): + data[i] = np.random.randint(2, size=n).astype(dtype) + df = DataFrame(data) + df.columns = dtypes + 
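The `memory_usage` assertions above hinge on `deep=True` actually inspecting object values rather than just counting pointers (8 bytes each on a 64-bit build). A minimal standalone check, a reviewer sketch against a recent pandas and not part of the diff:

```python
import pandas as pd

df_object = pd.DataFrame({'a': ['a']})

# Shallow usage counts only the object pointers; deep=True adds the string
# payloads, so it must be strictly larger for an object column.
shallow = df_object.memory_usage().sum()
deep = df_object.memory_usage(deep=True).sum()
assert deep > shallow

# An object index is likewise a lower bound unless deep=True is requested.
df_obj_index = pd.DataFrame({'a': [1]}, index=['foo'])
assert (df_obj_index.memory_usage(index=True, deep=True).sum()
        > df_obj_index.memory_usage(index=True).sum())
```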
# Ensure df size is as expected + df_size = df.memory_usage().sum() + exp_size = (len(dtypes) + 1) * n * 8 # (cols + index) * rows * bytes + self.assertEqual(df_size, exp_size) + # Ensure number of cols in memory_usage is the same as df + size_df = np.size(df.columns.values) + 1 # index=True; default + self.assertEqual(size_df, np.size(df.memory_usage())) + + # assert deep works only on object + self.assertEqual(df.memory_usage().sum(), + df.memory_usage(deep=True).sum()) + + # test for validity + DataFrame(1, index=['a'], columns=['A'] + ).memory_usage(index=True) + DataFrame(1, index=['a'], columns=['A'] + ).index.nbytes + df = DataFrame( + data=1, + index=pd.MultiIndex.from_product( + [['a'], range(1000)]), + columns=['A'] + ) + df.index.nbytes + df.memory_usage(index=True) + df.index.values.nbytes + + # sys.getsizeof will call the .memory_usage with + # deep=True, and add on some GC overhead + diff = df.memory_usage(deep=True).sum() - sys.getsizeof(df) + self.assertTrue(abs(diff) < 100) diff --git a/pandas/tests/frame/test_reshape.py b/pandas/tests/frame/test_reshape.py new file mode 100644 index 0000000000000..c030a6a71c7b8 --- /dev/null +++ b/pandas/tests/frame/test_reshape.py @@ -0,0 +1,570 @@ +# -*- coding: utf-8 -*- + +from __future__ import print_function + +from datetime import datetime +import itertools + +from numpy.random import randn +from numpy import nan +import numpy as np + +from pandas.compat import u +from pandas import DataFrame, Index, Series, MultiIndex, date_range +import pandas as pd + +from pandas.util.testing import (assert_series_equal, + assert_frame_equal, + assertRaisesRegexp) + +import pandas.util.testing as tm + +from pandas.tests.frame.common import TestData + + +class TestDataFrameReshape(tm.TestCase, TestData): + + _multiprocess_can_split_ = True + + def test_pivot(self): + data = { + 'index': ['A', 'B', 'C', 'C', 'B', 'A'], + 'columns': ['One', 'One', 'One', 'Two', 'Two', 'Two'], + 'values': [1., 2., 3., 3., 2., 1.] 
+ } + + frame = DataFrame(data) + pivoted = frame.pivot( + index='index', columns='columns', values='values') + + expected = DataFrame({ + 'One': {'A': 1., 'B': 2., 'C': 3.}, + 'Two': {'A': 1., 'B': 2., 'C': 3.} + }) + expected.index.name, expected.columns.name = 'index', 'columns' + + assert_frame_equal(pivoted, expected) + + # name tracking + self.assertEqual(pivoted.index.name, 'index') + self.assertEqual(pivoted.columns.name, 'columns') + + # don't specify values + pivoted = frame.pivot(index='index', columns='columns') + self.assertEqual(pivoted.index.name, 'index') + self.assertEqual(pivoted.columns.names, (None, 'columns')) + + # pivot multiple columns + wp = tm.makePanel() + lp = wp.to_frame() + df = lp.reset_index() + assert_frame_equal(df.pivot('major', 'minor'), lp.unstack()) + + def test_pivot_duplicates(self): + data = DataFrame({'a': ['bar', 'bar', 'foo', 'foo', 'foo'], + 'b': ['one', 'two', 'one', 'one', 'two'], + 'c': [1., 2., 3., 3., 4.]}) + with assertRaisesRegexp(ValueError, 'duplicate entries'): + data.pivot('a', 'b', 'c') + + def test_pivot_empty(self): + df = DataFrame({}, columns=['a', 'b', 'c']) + result = df.pivot('a', 'b', 'c') + expected = DataFrame({}) + assert_frame_equal(result, expected, check_names=False) + + def test_pivot_integer_bug(self): + df = DataFrame(data=[("A", "1", "A1"), ("B", "2", "B2")]) + + result = df.pivot(index=1, columns=0, values=2) + repr(result) + self.assert_numpy_array_equal(result.columns, ['A', 'B']) + + def test_pivot_index_none(self): + # gh-3962 + data = { + 'index': ['A', 'B', 'C', 'C', 'B', 'A'], + 'columns': ['One', 'One', 'One', 'Two', 'Two', 'Two'], + 'values': [1., 2., 3., 3., 2., 1.] 
+ } + + frame = DataFrame(data).set_index('index') + result = frame.pivot(columns='columns', values='values') + expected = DataFrame({ + 'One': {'A': 1., 'B': 2., 'C': 3.}, + 'Two': {'A': 1., 'B': 2., 'C': 3.} + }) + + expected.index.name, expected.columns.name = 'index', 'columns' + assert_frame_equal(result, expected) + + # omit values + result = frame.pivot(columns='columns') + + expected.columns = pd.MultiIndex.from_tuples([('values', 'One'), + ('values', 'Two')], + names=[None, 'columns']) + expected.index.name = 'index' + assert_frame_equal(result, expected, check_names=False) + self.assertEqual(result.index.name, 'index',) + self.assertEqual(result.columns.names, (None, 'columns')) + expected.columns = expected.columns.droplevel(0) + + data = { + 'index': range(7), + 'columns': ['One', 'One', 'One', 'Two', 'Two', 'Two'], + 'values': [1., 2., 3., 3., 2., 1.] + } + + result = frame.pivot(columns='columns', values='values') + + expected.columns.name = 'columns' + assert_frame_equal(result, expected) + + def test_stack_unstack(self): + stacked = self.frame.stack() + stacked_df = DataFrame({'foo': stacked, 'bar': stacked}) + + unstacked = stacked.unstack() + unstacked_df = stacked_df.unstack() + + assert_frame_equal(unstacked, self.frame) + assert_frame_equal(unstacked_df['bar'], self.frame) + + unstacked_cols = stacked.unstack(0) + unstacked_cols_df = stacked_df.unstack(0) + assert_frame_equal(unstacked_cols.T, self.frame) + assert_frame_equal(unstacked_cols_df['bar'].T, self.frame) + + def test_stack_ints(self): + df = DataFrame( + np.random.randn(30, 27), + columns=MultiIndex.from_tuples( + list(itertools.product(range(3), repeat=3)) + ) + ) + assert_frame_equal( + df.stack(level=[1, 2]), + df.stack(level=1).stack(level=1) + ) + assert_frame_equal( + df.stack(level=[-2, -1]), + df.stack(level=1).stack(level=1) + ) + + df_named = df.copy() + df_named.columns.set_names(range(3), inplace=True) + assert_frame_equal( + df_named.stack(level=[1, 2]), + 
df_named.stack(level=1).stack(level=1) + ) + + def test_stack_mixed_levels(self): + columns = MultiIndex.from_tuples( + [('A', 'cat', 'long'), ('B', 'cat', 'long'), + ('A', 'dog', 'short'), ('B', 'dog', 'short')], + names=['exp', 'animal', 'hair_length'] + ) + df = DataFrame(randn(4, 4), columns=columns) + + animal_hair_stacked = df.stack(level=['animal', 'hair_length']) + exp_hair_stacked = df.stack(level=['exp', 'hair_length']) + + # GH #8584: Need to check that stacking works when a number + # is passed that is both a level name and in the range of + # the level numbers + df2 = df.copy() + df2.columns.names = ['exp', 'animal', 1] + assert_frame_equal(df2.stack(level=['animal', 1]), + animal_hair_stacked, check_names=False) + assert_frame_equal(df2.stack(level=['exp', 1]), + exp_hair_stacked, check_names=False) + + # When mixed types are passed and the ints are not level + # names, raise + self.assertRaises(ValueError, df2.stack, level=['animal', 0]) + + # GH #8584: Having 0 in the level names could raise a + # strange error about lexsort depth + df3 = df.copy() + df3.columns.names = ['exp', 'animal', 0] + assert_frame_equal(df3.stack(level=['animal', 0]), + animal_hair_stacked, check_names=False) + + def test_stack_int_level_names(self): + columns = MultiIndex.from_tuples( + [('A', 'cat', 'long'), ('B', 'cat', 'long'), + ('A', 'dog', 'short'), ('B', 'dog', 'short')], + names=['exp', 'animal', 'hair_length'] + ) + df = DataFrame(randn(4, 4), columns=columns) + + exp_animal_stacked = df.stack(level=['exp', 'animal']) + animal_hair_stacked = df.stack(level=['animal', 'hair_length']) + exp_hair_stacked = df.stack(level=['exp', 'hair_length']) + + df2 = df.copy() + df2.columns.names = [0, 1, 2] + assert_frame_equal(df2.stack(level=[1, 2]), animal_hair_stacked, + check_names=False) + assert_frame_equal(df2.stack(level=[0, 1]), exp_animal_stacked, + check_names=False) + assert_frame_equal(df2.stack(level=[0, 2]), exp_hair_stacked, + check_names=False) + + # 
Out-of-order int column names + df3 = df.copy() + df3.columns.names = [2, 0, 1] + assert_frame_equal(df3.stack(level=[0, 1]), animal_hair_stacked, + check_names=False) + assert_frame_equal(df3.stack(level=[2, 0]), exp_animal_stacked, + check_names=False) + assert_frame_equal(df3.stack(level=[2, 1]), exp_hair_stacked, + check_names=False) + + def test_unstack_bool(self): + df = DataFrame([False, False], + index=MultiIndex.from_arrays([['a', 'b'], ['c', 'l']]), + columns=['col']) + rs = df.unstack() + xp = DataFrame(np.array([[False, np.nan], [np.nan, False]], + dtype=object), + index=['a', 'b'], + columns=MultiIndex.from_arrays([['col', 'col'], + ['c', 'l']])) + assert_frame_equal(rs, xp) + + def test_unstack_level_binding(self): + # GH9856 + mi = pd.MultiIndex( + levels=[[u('foo'), u('bar')], [u('one'), u('two')], + [u('a'), u('b')]], + labels=[[0, 0, 1, 1], [0, 1, 0, 1], [1, 0, 1, 0]], + names=[u('first'), u('second'), u('third')]) + s = pd.Series(0, index=mi) + result = s.unstack([1, 2]).stack(0) + + expected_mi = pd.MultiIndex( + levels=[['foo', 'bar'], ['one', 'two']], + labels=[[0, 0, 1, 1], [0, 1, 0, 1]], + names=['first', 'second']) + + expected = pd.DataFrame(np.array([[np.nan, 0], + [0, np.nan], + [np.nan, 0], + [0, np.nan]], + dtype=np.float64), + index=expected_mi, + columns=pd.Index(['a', 'b'], name='third')) + + assert_frame_equal(result, expected) + + def test_unstack_to_series(self): + # check reversibility + data = self.frame.unstack() + + self.assertTrue(isinstance(data, Series)) + undo = data.unstack().T + assert_frame_equal(undo, self.frame) + + # check NA handling + data = DataFrame({'x': [1, 2, np.NaN], 'y': [3.0, 4, np.NaN]}) + data.index = Index(['a', 'b', 'c']) + result = data.unstack() + + midx = MultiIndex(levels=[['x', 'y'], ['a', 'b', 'c']], + labels=[[0, 0, 0, 1, 1, 1], [0, 1, 2, 0, 1, 2]]) + expected = Series([1, 2, np.NaN, 3, 4, np.NaN], index=midx) + + assert_series_equal(result, expected) + + # check composability of unstack + 
old_data = data.copy() + for _ in range(4): + data = data.unstack() + assert_frame_equal(old_data, data) + + def test_unstack_dtypes(self): + + # GH 2929 + rows = [[1, 1, 3, 4], + [1, 2, 3, 4], + [2, 1, 3, 4], + [2, 2, 3, 4]] + + df = DataFrame(rows, columns=list('ABCD')) + result = df.get_dtype_counts() + expected = Series({'int64': 4}) + assert_series_equal(result, expected) + + # single dtype + df2 = df.set_index(['A', 'B']) + df3 = df2.unstack('B') + result = df3.get_dtype_counts() + expected = Series({'int64': 4}) + assert_series_equal(result, expected) + + # mixed + df2 = df.set_index(['A', 'B']) + df2['C'] = 3. + df3 = df2.unstack('B') + result = df3.get_dtype_counts() + expected = Series({'int64': 2, 'float64': 2}) + assert_series_equal(result, expected) + + df2['D'] = 'foo' + df3 = df2.unstack('B') + result = df3.get_dtype_counts() + expected = Series({'float64': 2, 'object': 2}) + assert_series_equal(result, expected) + + # GH7405 + for c, d in (np.zeros(5), np.zeros(5)), \ + (np.arange(5, dtype='f8'), np.arange(5, 10, dtype='f8')): + + df = DataFrame({'A': ['a'] * 5, 'C': c, 'D': d, + 'B': pd.date_range('2012-01-01', periods=5)}) + + right = df.iloc[:3].copy(deep=True) + + df = df.set_index(['A', 'B']) + df['D'] = df['D'].astype('int64') + + left = df.iloc[:3].unstack(0) + right = right.set_index(['A', 'B']).unstack(0) + right[('D', 'a')] = right[('D', 'a')].astype('int64') + + self.assertEqual(left.shape, (3, 2)) + assert_frame_equal(left, right) + + def test_unstack_non_unique_index_names(self): + idx = MultiIndex.from_tuples([('a', 'b'), ('c', 'd')], + names=['c1', 'c1']) + df = DataFrame([1, 2], index=idx) + with tm.assertRaises(ValueError): + df.unstack('c1') + + with tm.assertRaises(ValueError): + df.T.stack('c1') + + def test_unstack_nan_index(self): # GH7466 + cast = lambda val: '{0:1}'.format('' if val != val else val) + nan = np.nan + + def verify(df): + mk_list = lambda a: list(a) if isinstance(a, tuple) else [a] + rows, cols = 
df.notnull().values.nonzero() + for i, j in zip(rows, cols): + left = sorted(df.iloc[i, j].split('.')) + right = mk_list(df.index[i]) + mk_list(df.columns[j]) + right = sorted(list(map(cast, right))) + self.assertEqual(left, right) + + df = DataFrame({'jim': ['a', 'b', nan, 'd'], + 'joe': ['w', 'x', 'y', 'z'], + 'jolie': ['a.w', 'b.x', ' .y', 'd.z']}) + + left = df.set_index(['jim', 'joe']).unstack()['jolie'] + right = df.set_index(['joe', 'jim']).unstack()['jolie'].T + assert_frame_equal(left, right) + + for idx in itertools.permutations(df.columns[:2]): + mi = df.set_index(list(idx)) + for lev in range(2): + udf = mi.unstack(level=lev) + self.assertEqual(udf.notnull().values.sum(), len(df)) + verify(udf['jolie']) + + df = DataFrame({'1st': ['d'] * 3 + [nan] * 5 + ['a'] * 2 + + ['c'] * 3 + ['e'] * 2 + ['b'] * 5, + '2nd': ['y'] * 2 + ['w'] * 3 + [nan] * 3 + + ['z'] * 4 + [nan] * 3 + ['x'] * 3 + [nan] * 2, + '3rd': [67, 39, 53, 72, 57, 80, 31, 18, 11, 30, 59, + 50, 62, 59, 76, 52, 14, 53, 60, 51]}) + + df['4th'], df['5th'] = \ + df.apply(lambda r: '.'.join(map(cast, r)), axis=1), \ + df.apply(lambda r: '.'.join(map(cast, r.iloc[::-1])), axis=1) + + for idx in itertools.permutations(['1st', '2nd', '3rd']): + mi = df.set_index(list(idx)) + for lev in range(3): + udf = mi.unstack(level=lev) + self.assertEqual(udf.notnull().values.sum(), 2 * len(df)) + for col in ['4th', '5th']: + verify(udf[col]) + + # GH7403 + df = pd.DataFrame( + {'A': list('aaaabbbb'), 'B': range(8), 'C': range(8)}) + df.iloc[3, 1] = np.NaN + left = df.set_index(['A', 'B']).unstack(0) + + vals = [[3, 0, 1, 2, nan, nan, nan, nan], + [nan, nan, nan, nan, 4, 5, 6, 7]] + vals = list(map(list, zip(*vals))) + idx = Index([nan, 0, 1, 2, 4, 5, 6, 7], name='B') + cols = MultiIndex(levels=[['C'], ['a', 'b']], + labels=[[0, 0], [0, 1]], + names=[None, 'A']) + + right = DataFrame(vals, columns=cols, index=idx) + assert_frame_equal(left, right) + + df = DataFrame({'A': list('aaaabbbb'), 'B': list(range(4)) * 2, 
+ 'C': range(8)}) + df.iloc[2, 1] = np.NaN + left = df.set_index(['A', 'B']).unstack(0) + + vals = [[2, nan], [0, 4], [1, 5], [nan, 6], [3, 7]] + cols = MultiIndex(levels=[['C'], ['a', 'b']], + labels=[[0, 0], [0, 1]], + names=[None, 'A']) + idx = Index([nan, 0, 1, 2, 3], name='B') + right = DataFrame(vals, columns=cols, index=idx) + assert_frame_equal(left, right) + + df = pd.DataFrame({'A': list('aaaabbbb'), 'B': list(range(4)) * 2, + 'C': range(8)}) + df.iloc[3, 1] = np.NaN + left = df.set_index(['A', 'B']).unstack(0) + + vals = [[3, nan], [0, 4], [1, 5], [2, 6], [nan, 7]] + cols = MultiIndex(levels=[['C'], ['a', 'b']], + labels=[[0, 0], [0, 1]], + names=[None, 'A']) + idx = Index([nan, 0, 1, 2, 3], name='B') + right = DataFrame(vals, columns=cols, index=idx) + assert_frame_equal(left, right) + + # GH7401 + df = pd.DataFrame({'A': list('aaaaabbbbb'), 'C': np.arange(10), + 'B': (date_range('2012-01-01', periods=5) + .tolist() * 2)}) + + df.iloc[3, 1] = np.NaN + left = df.set_index(['A', 'B']).unstack() + + vals = np.array([[3, 0, 1, 2, nan, 4], [nan, 5, 6, 7, 8, 9]]) + idx = Index(['a', 'b'], name='A') + cols = MultiIndex(levels=[['C'], date_range('2012-01-01', periods=5)], + labels=[[0, 0, 0, 0, 0, 0], [-1, 0, 1, 2, 3, 4]], + names=[None, 'B']) + + right = DataFrame(vals, columns=cols, index=idx) + assert_frame_equal(left, right) + + # GH4862 + vals = [['Hg', nan, nan, 680585148], + ['U', 0.0, nan, 680585148], + ['Pb', 7.07e-06, nan, 680585148], + ['Sn', 2.3614e-05, 0.0133, 680607017], + ['Ag', 0.0, 0.0133, 680607017], + ['Hg', -0.00015, 0.0133, 680607017]] + df = DataFrame(vals, columns=['agent', 'change', 'dosage', 's_id'], + index=[17263, 17264, 17265, 17266, 17267, 17268]) + + left = df.copy().set_index(['s_id', 'dosage', 'agent']).unstack() + + vals = [[nan, nan, 7.07e-06, nan, 0.0], + [0.0, -0.00015, nan, 2.3614e-05, nan]] + + idx = MultiIndex(levels=[[680585148, 680607017], [0.0133]], + labels=[[0, 1], [-1, 0]], + names=['s_id', 'dosage']) + + cols = 
MultiIndex(levels=[['change'], ['Ag', 'Hg', 'Pb', 'Sn', 'U']], + labels=[[0, 0, 0, 0, 0], [0, 1, 2, 3, 4]], + names=[None, 'agent']) + + right = DataFrame(vals, columns=cols, index=idx) + assert_frame_equal(left, right) + + left = df.ix[17264:].copy().set_index(['s_id', 'dosage', 'agent']) + assert_frame_equal(left.unstack(), right) + + # GH9497 - multiple unstack with nulls + df = DataFrame({'1st': [1, 2, 1, 2, 1, 2], + '2nd': pd.date_range('2014-02-01', periods=6, + freq='D'), + 'jim': 100 + np.arange(6), + 'joe': (np.random.randn(6) * 10).round(2)}) + + df['3rd'] = df['2nd'] - pd.Timestamp('2014-02-02') + df.loc[1, '2nd'] = df.loc[3, '2nd'] = nan + df.loc[1, '3rd'] = df.loc[4, '3rd'] = nan + + left = df.set_index(['1st', '2nd', '3rd']).unstack(['2nd', '3rd']) + self.assertEqual(left.notnull().values.sum(), 2 * len(df)) + + for col in ['jim', 'joe']: + for _, r in df.iterrows(): + key = r['1st'], (col, r['2nd'], r['3rd']) + self.assertEqual(r[col], left.loc[key]) + + def test_stack_datetime_column_multiIndex(self): + # GH 8039 + t = datetime(2014, 1, 1) + df = DataFrame( + [1, 2, 3, 4], columns=MultiIndex.from_tuples([(t, 'A', 'B')])) + result = df.stack() + + eidx = MultiIndex.from_product([(0, 1, 2, 3), ('B',)]) + ecols = MultiIndex.from_tuples([(t, 'A')]) + expected = DataFrame([1, 2, 3, 4], index=eidx, columns=ecols) + assert_frame_equal(result, expected) + + def test_stack_partial_multiIndex(self): + # GH 8844 + def _test_stack_with_multiindex(multiindex): + df = DataFrame(np.arange(3 * len(multiindex)) + .reshape(3, len(multiindex)), + columns=multiindex) + for level in (-1, 0, 1, [0, 1], [1, 0]): + result = df.stack(level=level, dropna=False) + + if isinstance(level, int): + # Stacking a single level should not make any all-NaN rows, + # so df.stack(level=level, dropna=False) should be the same + # as df.stack(level=level, dropna=True). 
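As an aside for reviewers, the stack/unstack reversibility these tests lean on can be sketched outside the test harness. This is an illustrative snippet, not part of the patch; the frame and labels here are made up:

```python
import numpy as np
import pandas as pd

# A frame with no missing values: stacking and unstacking it again
# is a lossless round trip.
df = pd.DataFrame(np.arange(6).reshape(2, 3),
                  index=['r0', 'r1'], columns=['a', 'b', 'c'])

stacked = df.stack()        # Series indexed by (row label, column label)
roundtrip = stacked.unstack()

assert stacked.shape == (6,)
assert roundtrip.equals(df)
```

With missing values in the original frame the round trip is no longer guaranteed, which is exactly why the tests above exercise NaN-laden indexes so heavily.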
+ expected = df.stack(level=level, dropna=True) + if isinstance(expected, Series): + assert_series_equal(result, expected) + else: + assert_frame_equal(result, expected) + + df.columns = MultiIndex.from_tuples(df.columns.get_values(), + names=df.columns.names) + expected = df.stack(level=level, dropna=False) + if isinstance(expected, Series): + assert_series_equal(result, expected) + else: + assert_frame_equal(result, expected) + + full_multiindex = MultiIndex.from_tuples([('B', 'x'), ('B', 'z'), + ('A', 'y'), + ('C', 'x'), ('C', 'u')], + names=['Upper', 'Lower']) + for multiindex_columns in ([0, 1, 2, 3, 4], + [0, 1, 2, 3], [0, 1, 2, 4], + [0, 1, 2], [1, 2, 3], [2, 3, 4], + [0, 1], [0, 2], [0, 3], + [0], [2], [4]): + _test_stack_with_multiindex(full_multiindex[multiindex_columns]) + if len(multiindex_columns) > 1: + multiindex_columns.reverse() + _test_stack_with_multiindex( + full_multiindex[multiindex_columns]) + + df = DataFrame(np.arange(6).reshape(2, 3), + columns=full_multiindex[[0, 1, 3]]) + result = df.stack(dropna=False) + expected = DataFrame([[0, 2], [1, nan], [3, 5], [4, nan]], + index=MultiIndex( + levels=[[0, 1], ['u', 'x', 'y', 'z']], + labels=[[0, 0, 1, 1], + [1, 3, 1, 3]], + names=[None, 'Lower']), + columns=Index(['B', 'C'], name='Upper'), + dtype=df.dtypes[0]) + assert_frame_equal(result, expected) diff --git a/pandas/tests/frame/test_sorting.py b/pandas/tests/frame/test_sorting.py new file mode 100644 index 0000000000000..ff2159f8b6f40 --- /dev/null +++ b/pandas/tests/frame/test_sorting.py @@ -0,0 +1,473 @@ +# -*- coding: utf-8 -*- + +from __future__ import print_function + +import numpy as np + +from pandas.compat import lrange +from pandas import (DataFrame, Series, MultiIndex, Timestamp, + date_range) + +from pandas.util.testing import (assert_series_equal, + assert_frame_equal, + assertRaisesRegexp) + +import pandas.util.testing as tm + +from pandas.tests.frame.common import TestData + + +class TestDataFrameSorting(tm.TestCase, TestData): + 
+ _multiprocess_can_split_ = True + + def test_sort_values(self): + # API for 9816 + + # sort_index + frame = DataFrame(np.arange(16).reshape(4, 4), index=[1, 2, 3, 4], + columns=['A', 'B', 'C', 'D']) + + # 9816 deprecated + with tm.assert_produces_warning(FutureWarning): + frame.sort(columns='A') + with tm.assert_produces_warning(FutureWarning): + frame.sort() + + unordered = frame.ix[[3, 2, 4, 1]] + expected = unordered.sort_index() + + result = unordered.sort_index(axis=0) + assert_frame_equal(result, expected) + + unordered = frame.ix[:, [2, 1, 3, 0]] + expected = unordered.sort_index(axis=1) + + result = unordered.sort_index(axis=1) + assert_frame_equal(result, expected) + assert_frame_equal(result, expected) + + # sortlevel + mi = MultiIndex.from_tuples([[1, 1, 3], [1, 1, 1]], names=list('ABC')) + df = DataFrame([[1, 2], [3, 4]], mi) + + result = df.sort_index(level='A', sort_remaining=False) + expected = df.sortlevel('A', sort_remaining=False) + assert_frame_equal(result, expected) + + df = df.T + result = df.sort_index(level='A', axis=1, sort_remaining=False) + expected = df.sortlevel('A', axis=1, sort_remaining=False) + assert_frame_equal(result, expected) + + # MI sort, but no by + mi = MultiIndex.from_tuples([[1, 1, 3], [1, 1, 1]], names=list('ABC')) + df = DataFrame([[1, 2], [3, 4]], mi) + result = df.sort_index(sort_remaining=False) + expected = df.sort_index() + assert_frame_equal(result, expected) + + def test_sort_index(self): + frame = DataFrame(np.arange(16).reshape(4, 4), index=[1, 2, 3, 4], + columns=['A', 'B', 'C', 'D']) + + # axis=0 + unordered = frame.ix[[3, 2, 4, 1]] + sorted_df = unordered.sort_index(axis=0) + expected = frame + assert_frame_equal(sorted_df, expected) + + sorted_df = unordered.sort_index(ascending=False) + expected = frame[::-1] + assert_frame_equal(sorted_df, expected) + + # axis=1 + unordered = frame.ix[:, ['D', 'B', 'C', 'A']] + sorted_df = unordered.sort_index(axis=1) + expected = frame + assert_frame_equal(sorted_df, 
expected) + + sorted_df = unordered.sort_index(axis=1, ascending=False) + expected = frame.ix[:, ::-1] + assert_frame_equal(sorted_df, expected) + + # by column + sorted_df = frame.sort_values(by='A') + indexer = frame['A'].argsort().values + expected = frame.ix[frame.index[indexer]] + assert_frame_equal(sorted_df, expected) + + sorted_df = frame.sort_values(by='A', ascending=False) + indexer = indexer[::-1] + expected = frame.ix[frame.index[indexer]] + assert_frame_equal(sorted_df, expected) + + sorted_df = frame.sort_values(by='A', ascending=False) + assert_frame_equal(sorted_df, expected) + + # GH4839 + sorted_df = frame.sort_values(by=['A'], ascending=[False]) + assert_frame_equal(sorted_df, expected) + + # check for now + sorted_df = frame.sort_values(by='A') + assert_frame_equal(sorted_df, expected[::-1]) + expected = frame.sort_values(by='A') + assert_frame_equal(sorted_df, expected) + + expected = frame.sort_values(by=['A', 'B'], ascending=False) + sorted_df = frame.sort_values(by=['A', 'B']) + assert_frame_equal(sorted_df, expected[::-1]) + + self.assertRaises(ValueError, lambda: frame.sort_values( + by=['A', 'B'], axis=2, inplace=True)) + + msg = 'When sorting by column, axis must be 0' + with assertRaisesRegexp(ValueError, msg): + frame.sort_values(by='A', axis=1) + + msg = r'Length of ascending \(5\) != length of by \(2\)' + with assertRaisesRegexp(ValueError, msg): + frame.sort_values(by=['A', 'B'], axis=0, ascending=[True] * 5) + + def test_sort_index_categorical_index(self): + + df = (DataFrame({'A': np.arange(6, dtype='int64'), + 'B': Series(list('aabbca')) + .astype('category', categories=list('cab'))}) + .set_index('B')) + + result = df.sort_index() + expected = df.iloc[[4, 0, 1, 5, 2, 3]] + assert_frame_equal(result, expected) + + result = df.sort_index(ascending=False) + expected = df.iloc[[3, 2, 5, 1, 0, 4]] + assert_frame_equal(result, expected) + + def test_sort_nan(self): + # GH3917 + nan = np.nan + df = DataFrame({'A': [1, 2, nan, 1, 6, 8, 
4], + 'B': [9, nan, 5, 2, 5, 4, 5]}) + + # sort one column only + expected = DataFrame( + {'A': [nan, 1, 1, 2, 4, 6, 8], + 'B': [5, 9, 2, nan, 5, 5, 4]}, + index=[2, 0, 3, 1, 6, 4, 5]) + sorted_df = df.sort_values(['A'], na_position='first') + assert_frame_equal(sorted_df, expected) + + expected = DataFrame( + {'A': [nan, 8, 6, 4, 2, 1, 1], + 'B': [5, 4, 5, 5, nan, 9, 2]}, + index=[2, 5, 4, 6, 1, 0, 3]) + sorted_df = df.sort_values(['A'], na_position='first', ascending=False) + assert_frame_equal(sorted_df, expected) + + # na_position='last', order + expected = DataFrame( + {'A': [1, 1, 2, 4, 6, 8, nan], + 'B': [2, 9, nan, 5, 5, 4, 5]}, + index=[3, 0, 1, 6, 4, 5, 2]) + sorted_df = df.sort_values(['A', 'B']) + assert_frame_equal(sorted_df, expected) + + # na_position='first', order + expected = DataFrame( + {'A': [nan, 1, 1, 2, 4, 6, 8], + 'B': [5, 2, 9, nan, 5, 5, 4]}, + index=[2, 3, 0, 1, 6, 4, 5]) + sorted_df = df.sort_values(['A', 'B'], na_position='first') + assert_frame_equal(sorted_df, expected) + + # na_position='first', not order + expected = DataFrame( + {'A': [nan, 1, 1, 2, 4, 6, 8], + 'B': [5, 9, 2, nan, 5, 5, 4]}, + index=[2, 0, 3, 1, 6, 4, 5]) + sorted_df = df.sort_values(['A', 'B'], ascending=[ + 1, 0], na_position='first') + assert_frame_equal(sorted_df, expected) + + # na_position='last', not order + expected = DataFrame( + {'A': [8, 6, 4, 2, 1, 1, nan], + 'B': [4, 5, 5, nan, 2, 9, 5]}, + index=[5, 4, 6, 1, 3, 0, 2]) + sorted_df = df.sort_values(['A', 'B'], ascending=[ + 0, 1], na_position='last') + assert_frame_equal(sorted_df, expected) + + # Test DataFrame with nan label + df = DataFrame({'A': [1, 2, nan, 1, 6, 8, 4], + 'B': [9, nan, 5, 2, 5, 4, 5]}, + index=[1, 2, 3, 4, 5, 6, nan]) + + # NaN label, ascending=True, na_position='last' + sorted_df = df.sort_index( + kind='quicksort', ascending=True, na_position='last') + expected = DataFrame({'A': [1, 2, nan, 1, 6, 8, 4], + 'B': [9, nan, 5, 2, 5, 4, 5]}, + index=[1, 2, 3, 4, 5, 6, nan]) + 
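For readers unfamiliar with the `na_position` keyword exercised in `test_sort_nan`, a minimal sketch of its effect (illustrative only, not taken from the test suite):

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({'A': [1.0, np.nan, 2.0]})

# 'first' moves missing values to the front; 'last' (the default)
# pushes them to the end, after the sorted non-missing values.
first = df.sort_values('A', na_position='first')
last = df.sort_values('A', na_position='last')

assert np.isnan(first['A'].iloc[0])
assert np.isnan(last['A'].iloc[-1])
```

The tests above additionally combine `na_position` with `ascending` flags and with NaN *labels* in the index, which follow the same placement rule.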
assert_frame_equal(sorted_df, expected) + + # NaN label, ascending=True, na_position='first' + sorted_df = df.sort_index(na_position='first') + expected = DataFrame({'A': [4, 1, 2, nan, 1, 6, 8], + 'B': [5, 9, nan, 5, 2, 5, 4]}, + index=[nan, 1, 2, 3, 4, 5, 6]) + assert_frame_equal(sorted_df, expected) + + # NaN label, ascending=False, na_position='last' + sorted_df = df.sort_index(kind='quicksort', ascending=False) + expected = DataFrame({'A': [8, 6, 1, nan, 2, 1, 4], + 'B': [4, 5, 2, 5, nan, 9, 5]}, + index=[6, 5, 4, 3, 2, 1, nan]) + assert_frame_equal(sorted_df, expected) + + # NaN label, ascending=False, na_position='first' + sorted_df = df.sort_index( + kind='quicksort', ascending=False, na_position='first') + expected = DataFrame({'A': [4, 8, 6, 1, nan, 2, 1], + 'B': [5, 4, 5, 2, 5, nan, 9]}, + index=[nan, 6, 5, 4, 3, 2, 1]) + assert_frame_equal(sorted_df, expected) + + def test_stable_descending_sort(self): + # GH #6399 + df = DataFrame([[2, 'first'], [2, 'second'], [1, 'a'], [1, 'b']], + columns=['sort_col', 'order']) + sorted_df = df.sort_values(by='sort_col', kind='mergesort', + ascending=False) + assert_frame_equal(df, sorted_df) + + def test_stable_descending_multicolumn_sort(self): + nan = np.nan + df = DataFrame({'A': [1, 2, nan, 1, 6, 8, 4], + 'B': [9, nan, 5, 2, 5, 4, 5]}) + # test stable mergesort + expected = DataFrame( + {'A': [nan, 8, 6, 4, 2, 1, 1], + 'B': [5, 4, 5, 5, nan, 2, 9]}, + index=[2, 5, 4, 6, 1, 3, 0]) + sorted_df = df.sort_values(['A', 'B'], ascending=[0, 1], + na_position='first', + kind='mergesort') + assert_frame_equal(sorted_df, expected) + + expected = DataFrame( + {'A': [nan, 8, 6, 4, 2, 1, 1], + 'B': [5, 4, 5, 5, nan, 9, 2]}, + index=[2, 5, 4, 6, 1, 0, 3]) + sorted_df = df.sort_values(['A', 'B'], ascending=[0, 0], + na_position='first', + kind='mergesort') + assert_frame_equal(sorted_df, expected) + + def test_sort_index_multicolumn(self): + import random + A = np.arange(5).repeat(20) + B = np.tile(np.arange(5), 20) + 
random.shuffle(A) + random.shuffle(B) + frame = DataFrame({'A': A, 'B': B, + 'C': np.random.randn(100)}) + + # use .sort_values #9816 + with tm.assert_produces_warning(FutureWarning): + frame.sort_index(by=['A', 'B']) + result = frame.sort_values(by=['A', 'B']) + indexer = np.lexsort((frame['B'], frame['A'])) + expected = frame.take(indexer) + assert_frame_equal(result, expected) + + # use .sort_values #9816 + with tm.assert_produces_warning(FutureWarning): + frame.sort_index(by=['A', 'B'], ascending=False) + result = frame.sort_values(by=['A', 'B'], ascending=False) + indexer = np.lexsort((frame['B'].rank(ascending=False), + frame['A'].rank(ascending=False))) + expected = frame.take(indexer) + assert_frame_equal(result, expected) + + # use .sort_values #9816 + with tm.assert_produces_warning(FutureWarning): + frame.sort_index(by=['B', 'A']) + result = frame.sort_values(by=['B', 'A']) + indexer = np.lexsort((frame['A'], frame['B'])) + expected = frame.take(indexer) + assert_frame_equal(result, expected) + + def test_sort_index_inplace(self): + frame = DataFrame(np.random.randn(4, 4), index=[1, 2, 3, 4], + columns=['A', 'B', 'C', 'D']) + + # axis=0 + unordered = frame.ix[[3, 2, 4, 1]] + a_id = id(unordered['A']) + df = unordered.copy() + df.sort_index(inplace=True) + expected = frame + assert_frame_equal(df, expected) + self.assertNotEqual(a_id, id(df['A'])) + + df = unordered.copy() + df.sort_index(ascending=False, inplace=True) + expected = frame[::-1] + assert_frame_equal(df, expected) + + # axis=1 + unordered = frame.ix[:, ['D', 'B', 'C', 'A']] + df = unordered.copy() + df.sort_index(axis=1, inplace=True) + expected = frame + assert_frame_equal(df, expected) + + df = unordered.copy() + df.sort_index(axis=1, ascending=False, inplace=True) + expected = frame.ix[:, ::-1] + assert_frame_equal(df, expected) + + def test_sort_index_different_sortorder(self): + A = np.arange(20).repeat(5) + B = np.tile(np.arange(5), 20) + + indexer = np.random.permutation(100) + A = 
A.take(indexer) + B = B.take(indexer) + + df = DataFrame({'A': A, 'B': B, + 'C': np.random.randn(100)}) + + # use .sort_values #9816 + with tm.assert_produces_warning(FutureWarning): + df.sort_index(by=['A', 'B'], ascending=[1, 0]) + result = df.sort_values(by=['A', 'B'], ascending=[1, 0]) + + ex_indexer = np.lexsort((df.B.max() - df.B, df.A)) + expected = df.take(ex_indexer) + assert_frame_equal(result, expected) + + # test with multiindex, too + idf = df.set_index(['A', 'B']) + + result = idf.sort_index(ascending=[1, 0]) + expected = idf.take(ex_indexer) + assert_frame_equal(result, expected) + + # also, Series! + result = idf['C'].sort_index(ascending=[1, 0]) + assert_series_equal(result, expected['C']) + + def test_sort_inplace(self): + frame = DataFrame(np.random.randn(4, 4), index=[1, 2, 3, 4], + columns=['A', 'B', 'C', 'D']) + + sorted_df = frame.copy() + sorted_df.sort_values(by='A', inplace=True) + expected = frame.sort_values(by='A') + assert_frame_equal(sorted_df, expected) + + sorted_df = frame.copy() + sorted_df.sort_values(by='A', ascending=False, inplace=True) + expected = frame.sort_values(by='A', ascending=False) + assert_frame_equal(sorted_df, expected) + + sorted_df = frame.copy() + sorted_df.sort_values(by=['A', 'B'], ascending=False, inplace=True) + expected = frame.sort_values(by=['A', 'B'], ascending=False) + assert_frame_equal(sorted_df, expected) + + def test_sort_index_duplicates(self): + + # with 9816, these are all translated to .sort_values + + df = DataFrame([lrange(5, 9), lrange(4)], + columns=['a', 'a', 'b', 'b']) + + with assertRaisesRegexp(ValueError, 'duplicate'): + # use .sort_values #9816 + with tm.assert_produces_warning(FutureWarning): + df.sort_index(by='a') + with assertRaisesRegexp(ValueError, 'duplicate'): + df.sort_values(by='a') + + with assertRaisesRegexp(ValueError, 'duplicate'): + # use .sort_values #9816 + with tm.assert_produces_warning(FutureWarning): + df.sort_index(by=['a']) + with assertRaisesRegexp(ValueError, 
'duplicate'): + df.sort_values(by=['a']) + + with assertRaisesRegexp(ValueError, 'duplicate'): + # use .sort_values #9816 + with tm.assert_produces_warning(FutureWarning): + # multi-column 'by' is separate codepath + df.sort_index(by=['a', 'b']) + with assertRaisesRegexp(ValueError, 'duplicate'): + # multi-column 'by' is separate codepath + df.sort_values(by=['a', 'b']) + + # with multi-index + # GH4370 + df = DataFrame(np.random.randn(4, 2), + columns=MultiIndex.from_tuples([('a', 0), ('a', 1)])) + with assertRaisesRegexp(ValueError, 'levels'): + # use .sort_values #9816 + with tm.assert_produces_warning(FutureWarning): + df.sort_index(by='a') + with assertRaisesRegexp(ValueError, 'levels'): + df.sort_values(by='a') + + # convert tuples to a list of tuples + # use .sort_values #9816 + with tm.assert_produces_warning(FutureWarning): + df.sort_index(by=[('a', 1)]) + expected = df.sort_values(by=[('a', 1)]) + + # use .sort_values #9816 + with tm.assert_produces_warning(FutureWarning): + df.sort_index(by=('a', 1)) + result = df.sort_values(by=('a', 1)) + assert_frame_equal(result, expected) + + def test_sortlevel(self): + mi = MultiIndex.from_tuples([[1, 1, 3], [1, 1, 1]], names=list('ABC')) + df = DataFrame([[1, 2], [3, 4]], mi) + res = df.sortlevel('A', sort_remaining=False) + assert_frame_equal(df, res) + + res = df.sortlevel(['A', 'B'], sort_remaining=False) + assert_frame_equal(df, res) + + def test_sort_datetimes(self): + + # GH 3461, argsort / lexsort differences for a datetime column + df = DataFrame(['a', 'a', 'a', 'b', 'c', 'd', 'e', 'f', 'g'], + columns=['A'], + index=date_range('20130101', periods=9)) + dts = [Timestamp(x) + for x in ['2004-02-11', '2004-01-21', '2004-01-26', + '2005-09-20', '2010-10-04', '2009-05-12', + '2008-11-12', '2010-09-28', '2010-09-28']] + df['B'] = dts[::2] + dts[1::2] + df['C'] = 2. + df['A1'] = 3. 
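Two behaviors the surrounding sorting tests rely on can be shown in a few lines. A hedged sketch (names are invented for illustration, not from the patch):

```python
import pandas as pd

df = pd.DataFrame({'A': [2, 1, 2, 1], 'B': list('abcd')})

# A single label and a one-element list are interchangeable for `by`.
assert df.sort_values(by='A').equals(df.sort_values(by=['A']))

# mergesort is stable, so tied keys keep their original relative order.
stable = df.sort_values(by='A', kind='mergesort')
assert list(stable['B']) == ['b', 'd', 'a', 'c']
```

Stability is what `test_stable_descending_sort` and `test_stable_descending_multicolumn_sort` verify for the descending case, where a naive reversed quicksort would scramble ties.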
+ + df1 = df.sort_values(by='A') + df2 = df.sort_values(by=['A']) + assert_frame_equal(df1, df2) + + df1 = df.sort_values(by='B') + df2 = df.sort_values(by=['B']) + assert_frame_equal(df1, df2) + + def test_frame_column_inplace_sort_exception(self): + s = self.frame['A'] + with assertRaisesRegexp(ValueError, "This Series is a view"): + s.sort_values(inplace=True) + + cp = s.copy() + cp.sort_values() # it works! diff --git a/pandas/tests/frame/test_subclass.py b/pandas/tests/frame/test_subclass.py new file mode 100644 index 0000000000000..a7458f5335ec4 --- /dev/null +++ b/pandas/tests/frame/test_subclass.py @@ -0,0 +1,126 @@ +# -*- coding: utf-8 -*- + +from __future__ import print_function + +from pandas import DataFrame, Series, MultiIndex, Panel +import pandas as pd + +from pandas.util.testing import (assert_frame_equal, + SubclassedDataFrame) + +import pandas.util.testing as tm + +from pandas.tests.frame.common import TestData + + +class TestDataFrameSubclassing(tm.TestCase, TestData): + + _multiprocess_can_split_ = True + + def test_frame_subclassing_and_slicing(self): + # Subclass frame and ensure it returns the right class on slicing it + # In reference to PR 9632 + + class CustomSeries(Series): + + @property + def _constructor(self): + return CustomSeries + + def custom_series_function(self): + return 'OK' + + class CustomDataFrame(DataFrame): + """ + Subclasses pandas DF, fills DF with simulation results, adds some + custom plotting functions. + """ + + def __init__(self, *args, **kw): + super(CustomDataFrame, self).__init__(*args, **kw) + + @property + def _constructor(self): + return CustomDataFrame + + _constructor_sliced = CustomSeries + + def custom_frame_function(self): + return 'OK' + + data = {'col1': range(10), + 'col2': range(10)} + cdf = CustomDataFrame(data) + + # Did we get back our own DF class? + self.assertTrue(isinstance(cdf, CustomDataFrame)) + + # Do we get back our own Series class after selecting a column? 
+ cdf_series = cdf.col1 + self.assertTrue(isinstance(cdf_series, CustomSeries)) + self.assertEqual(cdf_series.custom_series_function(), 'OK') + + # Do we get back our own DF class after slicing row-wise? + cdf_rows = cdf[1:5] + self.assertTrue(isinstance(cdf_rows, CustomDataFrame)) + self.assertEqual(cdf_rows.custom_frame_function(), 'OK') + + # Make sure sliced part of multi-index frame is custom class + mcol = pd.MultiIndex.from_tuples([('A', 'A'), ('A', 'B')]) + cdf_multi = CustomDataFrame([[0, 1], [2, 3]], columns=mcol) + self.assertTrue(isinstance(cdf_multi['A'], CustomDataFrame)) + + mcol = pd.MultiIndex.from_tuples([('A', ''), ('B', '')]) + cdf_multi2 = CustomDataFrame([[0, 1], [2, 3]], columns=mcol) + self.assertTrue(isinstance(cdf_multi2['A'], CustomSeries)) + + def test_dataframe_metadata(self): + df = SubclassedDataFrame({'X': [1, 2, 3], 'Y': [1, 2, 3]}, + index=['a', 'b', 'c']) + df.testattr = 'XXX' + + self.assertEqual(df.testattr, 'XXX') + self.assertEqual(df[['X']].testattr, 'XXX') + self.assertEqual(df.loc[['a', 'b'], :].testattr, 'XXX') + self.assertEqual(df.iloc[[0, 1], :].testattr, 'XXX') + + # GH9776 + self.assertEqual(df.iloc[0:1, :].testattr, 'XXX') + + # GH10553 + unpickled = self.round_trip_pickle(df) + assert_frame_equal(df, unpickled) + self.assertEqual(df._metadata, unpickled._metadata) + self.assertEqual(df.testattr, unpickled.testattr) + + def test_to_panel_expanddim(self): + # GH 9762 + + class SubclassedFrame(DataFrame): + + @property + def _constructor_expanddim(self): + return SubclassedPanel + + class SubclassedPanel(Panel): + pass + + index = MultiIndex.from_tuples([(0, 0), (0, 1), (0, 2)]) + df = SubclassedFrame({'X': [1, 2, 3], 'Y': [4, 5, 6]}, index=index) + result = df.to_panel() + self.assertTrue(isinstance(result, SubclassedPanel)) + expected = SubclassedPanel([[[1, 2, 3]], [[4, 5, 6]]], + items=['X', 'Y'], major_axis=[0], + minor_axis=[0, 1, 2], + dtype='int64') + tm.assert_panel_equal(result, expected) + + def 
test_subclass_attr_err_propagation(self): + # GH 11808 + class A(DataFrame): + + @property + def bar(self): + return self.i_dont_exist + with tm.assertRaisesRegexp(AttributeError, '.*i_dont_exist.*'): + A().bar diff --git a/pandas/tests/frame/test_timeseries.py b/pandas/tests/frame/test_timeseries.py new file mode 100644 index 0000000000000..115e942dceb0f --- /dev/null +++ b/pandas/tests/frame/test_timeseries.py @@ -0,0 +1,338 @@ +# -*- coding: utf-8 -*- + +from __future__ import print_function + +from datetime import datetime + +from numpy import nan +from numpy.random import randn +import numpy as np + +from pandas import DataFrame, Series, Index, Timestamp, DatetimeIndex +import pandas as pd +import pandas.core.datetools as datetools + +from pandas.util.testing import (assert_almost_equal, + assert_series_equal, + assert_frame_equal, + assertRaisesRegexp) + +import pandas.util.testing as tm + +from pandas.tests.frame.common import TestData + + +class TestDataFrameTimeSeriesMethods(tm.TestCase, TestData): + + _multiprocess_can_split_ = True + + def test_diff(self): + the_diff = self.tsframe.diff(1) + + assert_series_equal(the_diff['A'], + self.tsframe['A'] - self.tsframe['A'].shift(1)) + + # int dtype + a = 10000000000000000 + b = a + 1 + s = Series([a, b]) + + rs = DataFrame({'s': s}).diff() + self.assertEqual(rs.s[1], 1) + + # mixed numeric + tf = self.tsframe.astype('float32') + the_diff = tf.diff(1) + assert_series_equal(the_diff['A'], + tf['A'] - tf['A'].shift(1)) + + # issue 10907 + df = pd.DataFrame({'y': pd.Series([2]), 'z': pd.Series([3])}) + df.insert(0, 'x', 1) + result = df.diff(axis=1) + expected = pd.DataFrame({'x': np.nan, 'y': pd.Series( + 1), 'z': pd.Series(1)}).astype('float64') + assert_frame_equal(result, expected) + + def test_diff_timedelta(self): + # GH 4533 + df = DataFrame(dict(time=[Timestamp('20130101 9:01'), + Timestamp('20130101 9:02')], + value=[1.0, 2.0])) + + res = df.diff() + exp = DataFrame([[pd.NaT, np.nan], + 
[pd.Timedelta('00:01:00'), 1]], + columns=['time', 'value']) + assert_frame_equal(res, exp) + + def test_diff_mixed_dtype(self): + df = DataFrame(np.random.randn(5, 3)) + df['A'] = np.array([1, 2, 3, 4, 5], dtype=object) + + result = df.diff() + self.assertEqual(result[0].dtype, np.float64) + + def test_diff_neg_n(self): + rs = self.tsframe.diff(-1) + xp = self.tsframe - self.tsframe.shift(-1) + assert_frame_equal(rs, xp) + + def test_diff_float_n(self): + rs = self.tsframe.diff(1.) + xp = self.tsframe.diff(1) + assert_frame_equal(rs, xp) + + def test_diff_axis(self): + # GH 9727 + df = DataFrame([[1., 2.], [3., 4.]]) + assert_frame_equal(df.diff(axis=1), DataFrame( + [[np.nan, 1.], [np.nan, 1.]])) + assert_frame_equal(df.diff(axis=0), DataFrame( + [[np.nan, np.nan], [2., 2.]])) + + def test_pct_change(self): + rs = self.tsframe.pct_change(fill_method=None) + assert_frame_equal(rs, self.tsframe / self.tsframe.shift(1) - 1) + + rs = self.tsframe.pct_change(2) + filled = self.tsframe.fillna(method='pad') + assert_frame_equal(rs, filled / filled.shift(2) - 1) + + rs = self.tsframe.pct_change(fill_method='bfill', limit=1) + filled = self.tsframe.fillna(method='bfill', limit=1) + assert_frame_equal(rs, filled / filled.shift(1) - 1) + + rs = self.tsframe.pct_change(freq='5D') + filled = self.tsframe.fillna(method='pad') + assert_frame_equal(rs, filled / filled.shift(freq='5D') - 1) + + def test_pct_change_shift_over_nas(self): + s = Series([1., 1.5, np.nan, 2.5, 3.]) + + df = DataFrame({'a': s, 'b': s}) + + chg = df.pct_change() + expected = Series([np.nan, 0.5, np.nan, 2.5 / 1.5 - 1, .2]) + edf = DataFrame({'a': expected, 'b': expected}) + assert_frame_equal(chg, edf) + + def test_shift(self): + # naive shift + shiftedFrame = self.tsframe.shift(5) + self.assertTrue(shiftedFrame.index.equals(self.tsframe.index)) + + shiftedSeries = self.tsframe['A'].shift(5) + assert_series_equal(shiftedFrame['A'], shiftedSeries) + + shiftedFrame = self.tsframe.shift(-5) + 
self.assertTrue(shiftedFrame.index.equals(self.tsframe.index)) + + shiftedSeries = self.tsframe['A'].shift(-5) + assert_series_equal(shiftedFrame['A'], shiftedSeries) + + # shift by 0 + unshifted = self.tsframe.shift(0) + assert_frame_equal(unshifted, self.tsframe) + + # shift by DateOffset + shiftedFrame = self.tsframe.shift(5, freq=datetools.BDay()) + self.assertEqual(len(shiftedFrame), len(self.tsframe)) + + shiftedFrame2 = self.tsframe.shift(5, freq='B') + assert_frame_equal(shiftedFrame, shiftedFrame2) + + d = self.tsframe.index[0] + shifted_d = d + datetools.BDay(5) + assert_series_equal(self.tsframe.xs(d), + shiftedFrame.xs(shifted_d), check_names=False) + + # shift int frame + int_shifted = self.intframe.shift(1) # noqa + + # Shifting with PeriodIndex + ps = tm.makePeriodFrame() + shifted = ps.shift(1) + unshifted = shifted.shift(-1) + self.assertTrue(shifted.index.equals(ps.index)) + + tm.assert_dict_equal(unshifted.ix[:, 0].valid(), ps.ix[:, 0], + compare_keys=False) + + shifted2 = ps.shift(1, 'B') + shifted3 = ps.shift(1, datetools.bday) + assert_frame_equal(shifted2, shifted3) + assert_frame_equal(ps, shifted2.shift(-1, 'B')) + + assertRaisesRegexp(ValueError, 'does not match PeriodIndex freq', + ps.shift, freq='D') + + # shift other axis + # GH 6371 + df = DataFrame(np.random.rand(10, 5)) + expected = pd.concat([DataFrame(np.nan, index=df.index, + columns=[0]), + df.iloc[:, 0:-1]], + ignore_index=True, axis=1) + result = df.shift(1, axis=1) + assert_frame_equal(result, expected) + + # shift named axis + df = DataFrame(np.random.rand(10, 5)) + expected = pd.concat([DataFrame(np.nan, index=df.index, + columns=[0]), + df.iloc[:, 0:-1]], + ignore_index=True, axis=1) + result = df.shift(1, axis='columns') + assert_frame_equal(result, expected) + + def test_shift_bool(self): + df = DataFrame({'high': [True, False], + 'low': [False, False]}) + rs = df.shift(1) + xp = DataFrame(np.array([[np.nan, np.nan], + [True, False]], dtype=object), + columns=['high', 
'low']) + assert_frame_equal(rs, xp) + + def test_shift_categorical(self): + # GH 9416 + s1 = pd.Series(['a', 'b', 'c'], dtype='category') + s2 = pd.Series(['A', 'B', 'C'], dtype='category') + df = DataFrame({'one': s1, 'two': s2}) + rs = df.shift(1) + xp = DataFrame({'one': s1.shift(1), 'two': s2.shift(1)}) + assert_frame_equal(rs, xp) + + def test_shift_empty(self): + # Regression test for #8019 + df = DataFrame({'foo': []}) + rs = df.shift(-1) + + assert_frame_equal(df, rs) + + def test_tshift(self): + # PeriodIndex + ps = tm.makePeriodFrame() + shifted = ps.tshift(1) + unshifted = shifted.tshift(-1) + + assert_frame_equal(unshifted, ps) + + shifted2 = ps.tshift(freq='B') + assert_frame_equal(shifted, shifted2) + + shifted3 = ps.tshift(freq=datetools.bday) + assert_frame_equal(shifted, shifted3) + + assertRaisesRegexp(ValueError, 'does not match', ps.tshift, freq='M') + + # DatetimeIndex + shifted = self.tsframe.tshift(1) + unshifted = shifted.tshift(-1) + + assert_frame_equal(self.tsframe, unshifted) + + shifted2 = self.tsframe.tshift(freq=self.tsframe.index.freq) + assert_frame_equal(shifted, shifted2) + + inferred_ts = DataFrame(self.tsframe.values, + Index(np.asarray(self.tsframe.index)), + columns=self.tsframe.columns) + shifted = inferred_ts.tshift(1) + unshifted = shifted.tshift(-1) + assert_frame_equal(shifted, self.tsframe.tshift(1)) + assert_frame_equal(unshifted, inferred_ts) + + no_freq = self.tsframe.ix[[0, 5, 7], :] + self.assertRaises(ValueError, no_freq.tshift) + + def test_truncate(self): + ts = self.tsframe[::3] + + start, end = self.tsframe.index[3], self.tsframe.index[6] + + start_missing = self.tsframe.index[2] + end_missing = self.tsframe.index[7] + + # neither specified + truncated = ts.truncate() + assert_frame_equal(truncated, ts) + + # both specified + expected = ts[1:3] + + truncated = ts.truncate(start, end) + assert_frame_equal(truncated, expected) + + truncated = ts.truncate(start_missing, end_missing) + 
assert_frame_equal(truncated, expected) + + # start specified + expected = ts[1:] + + truncated = ts.truncate(before=start) + assert_frame_equal(truncated, expected) + + truncated = ts.truncate(before=start_missing) + assert_frame_equal(truncated, expected) + + # end specified + expected = ts[:3] + + truncated = ts.truncate(after=end) + assert_frame_equal(truncated, expected) + + truncated = ts.truncate(after=end_missing) + assert_frame_equal(truncated, expected) + + self.assertRaises(ValueError, ts.truncate, + before=ts.index[-1] - 1, + after=ts.index[0] + 1) + + def test_truncate_copy(self): + index = self.tsframe.index + truncated = self.tsframe.truncate(index[5], index[10]) + truncated.values[:] = 5. + self.assertFalse((self.tsframe.values[5:11] == 5).any()) + + def test_asfreq(self): + offset_monthly = self.tsframe.asfreq(datetools.bmonthEnd) + rule_monthly = self.tsframe.asfreq('BM') + + assert_almost_equal(offset_monthly['A'], rule_monthly['A']) + + filled = rule_monthly.asfreq('B', method='pad') # noqa + # TODO: actually check that this worked. + + # don't forget! 
+ filled_dep = rule_monthly.asfreq('B', method='pad') # noqa + + # test does not blow up on length-0 DataFrame + zero_length = self.tsframe.reindex([]) + result = zero_length.asfreq('BM') + self.assertIsNot(result, zero_length) + + def test_asfreq_datetimeindex(self): + df = DataFrame({'A': [1, 2, 3]}, + index=[datetime(2011, 11, 1), datetime(2011, 11, 2), + datetime(2011, 11, 3)]) + df = df.asfreq('B') + tm.assertIsInstance(df.index, DatetimeIndex) + + ts = df['A'].asfreq('B') + tm.assertIsInstance(ts.index, DatetimeIndex) + + def test_first_last_valid(self): + N = len(self.frame.index) + mat = randn(N) + mat[:5] = nan + mat[-5:] = nan + + frame = DataFrame({'foo': mat}, index=self.frame.index) + index = frame.first_valid_index() + + self.assertEqual(index, frame.index[5]) + + index = frame.last_valid_index() + self.assertEqual(index, frame.index[-6]) diff --git a/pandas/tests/frame/test_to_csv.py b/pandas/tests/frame/test_to_csv.py new file mode 100644 index 0000000000000..a5b86b35d330e --- /dev/null +++ b/pandas/tests/frame/test_to_csv.py @@ -0,0 +1,1110 @@ +# -*- coding: utf-8 -*- + +from __future__ import print_function + +import csv + +from numpy import nan +import numpy as np + +from pandas.compat import (lmap, range, lrange, StringIO, u) +from pandas.parser import CParserError +from pandas import (DataFrame, Index, Series, MultiIndex, Timestamp, + date_range, read_csv, compat) +import pandas as pd + +from pandas.util.testing import (assert_almost_equal, + assert_equal, + assert_series_equal, + assert_frame_equal, + ensure_clean, + makeCustomDataframe as mkdf, + assertRaisesRegexp) + +from numpy.testing.decorators import slow +import pandas.util.testing as tm + +from pandas.tests.frame.common import TestData + + +MIXED_FLOAT_DTYPES = ['float16', 'float32', 'float64'] +MIXED_INT_DTYPES = ['uint8', 'uint16', 'uint32', 'uint64', 'int8', 'int16', + 'int32', 'int64'] + + +class TestDataFrameToCSV(tm.TestCase, TestData): + + _multiprocess_can_split_ = True + + def 
test_to_csv_from_csv(self): + + pname = '__tmp_to_csv_from_csv__' + with ensure_clean(pname) as path: + self.frame['A'][:5] = nan + + self.frame.to_csv(path) + self.frame.to_csv(path, columns=['A', 'B']) + self.frame.to_csv(path, header=False) + self.frame.to_csv(path, index=False) + + # test roundtrip + self.tsframe.to_csv(path) + recons = DataFrame.from_csv(path) + + assert_frame_equal(self.tsframe, recons) + + self.tsframe.to_csv(path, index_label='index') + recons = DataFrame.from_csv(path, index_col=None) + assert(len(recons.columns) == len(self.tsframe.columns) + 1) + + # no index + self.tsframe.to_csv(path, index=False) + recons = DataFrame.from_csv(path, index_col=None) + assert_almost_equal(self.tsframe.values, recons.values) + + # corner case + dm = DataFrame({'s1': Series(lrange(3), lrange(3)), + 's2': Series(lrange(2), lrange(2))}) + dm.to_csv(path) + recons = DataFrame.from_csv(path) + assert_frame_equal(dm, recons) + + with ensure_clean(pname) as path: + + # duplicate index + df = DataFrame(np.random.randn(3, 3), index=['a', 'a', 'b'], + columns=['x', 'y', 'z']) + df.to_csv(path) + result = DataFrame.from_csv(path) + assert_frame_equal(result, df) + + midx = MultiIndex.from_tuples( + [('A', 1, 2), ('A', 1, 2), ('B', 1, 2)]) + df = DataFrame(np.random.randn(3, 3), index=midx, + columns=['x', 'y', 'z']) + df.to_csv(path) + result = DataFrame.from_csv(path, index_col=[0, 1, 2], + parse_dates=False) + # TODO from_csv names index ['Unnamed: 1', 'Unnamed: 2'] should it + # ? 
+ assert_frame_equal(result, df, check_names=False) + + # column aliases + col_aliases = Index(['AA', 'X', 'Y', 'Z']) + self.frame2.to_csv(path, header=col_aliases) + rs = DataFrame.from_csv(path) + xp = self.frame2.copy() + xp.columns = col_aliases + + assert_frame_equal(xp, rs) + + self.assertRaises(ValueError, self.frame2.to_csv, path, + header=['AA', 'X']) + + with ensure_clean(pname) as path: + df1 = DataFrame(np.random.randn(3, 1)) + df2 = DataFrame(np.random.randn(3, 1)) + + df1.to_csv(path) + df2.to_csv(path, mode='a', header=False) + xp = pd.concat([df1, df2]) + rs = pd.read_csv(path, index_col=0) + rs.columns = lmap(int, rs.columns) + xp.columns = lmap(int, xp.columns) + assert_frame_equal(xp, rs) + + with ensure_clean() as path: + # GH 10833 (TimedeltaIndex formatting) + dt = pd.Timedelta(seconds=1) + df = pd.DataFrame({'dt_data': [i * dt for i in range(3)]}, + index=pd.Index([i * dt for i in range(3)], + name='dt_index')) + df.to_csv(path) + + result = pd.read_csv(path, index_col='dt_index') + result.index = pd.to_timedelta(result.index) + # TODO: remove renaming when GH 10875 is solved + result.index = result.index.rename('dt_index') + result['dt_data'] = pd.to_timedelta(result['dt_data']) + + assert_frame_equal(df, result, check_index_type=True) + + # tz, 8260 + with ensure_clean(pname) as path: + + self.tzframe.to_csv(path) + result = pd.read_csv(path, index_col=0, parse_dates=['A']) + + converter = lambda c: pd.to_datetime(result[c]).dt.tz_localize( + 'UTC').dt.tz_convert(self.tzframe[c].dt.tz) + result['B'] = converter('B') + result['C'] = converter('C') + assert_frame_equal(result, self.tzframe) + + def test_to_csv_cols_reordering(self): + # GH3454 + import pandas as pd + + chunksize = 5 + N = int(chunksize * 2.5) + + df = mkdf(N, 3) + cs = df.columns + cols = [cs[2], cs[0]] + + with ensure_clean() as path: + df.to_csv(path, columns=cols, chunksize=chunksize) + rs_c = pd.read_csv(path, index_col=0) + + assert_frame_equal(df[cols], rs_c, 
check_names=False) + + def test_to_csv_legacy_raises_on_dupe_cols(self): + df = mkdf(10, 3) + df.columns = ['a', 'a', 'b'] + with ensure_clean() as path: + with tm.assert_produces_warning(FutureWarning, + check_stacklevel=False): + self.assertRaises(NotImplementedError, + df.to_csv, path, engine='python') + + def test_to_csv_new_dupe_cols(self): + import pandas as pd + + def _check_df(df, cols=None): + with ensure_clean() as path: + df.to_csv(path, columns=cols, chunksize=chunksize) + rs_c = pd.read_csv(path, index_col=0) + + # we wrote them in a different order + # so compare them in that order + if cols is not None: + + if df.columns.is_unique: + rs_c.columns = cols + else: + indexer, missing = df.columns.get_indexer_non_unique( + cols) + rs_c.columns = df.columns.take(indexer) + + for c in cols: + obj_df = df[c] + obj_rs = rs_c[c] + if isinstance(obj_df, Series): + assert_series_equal(obj_df, obj_rs) + else: + assert_frame_equal( + obj_df, obj_rs, check_names=False) + + # wrote in the same order + else: + rs_c.columns = df.columns + assert_frame_equal(df, rs_c, check_names=False) + + chunksize = 5 + N = int(chunksize * 2.5) + + # dupe cols + df = mkdf(N, 3) + df.columns = ['a', 'a', 'b'] + _check_df(df, None) + + # dupe cols with selection + cols = ['b', 'a'] + _check_df(df, cols) + + @slow + def test_to_csv_moar(self): + path = '__tmp_to_csv_moar__' + + def _do_test(df, path, r_dtype=None, c_dtype=None, + rnlvl=None, cnlvl=None, dupe_col=False): + + kwargs = dict(parse_dates=False) + if cnlvl: + if rnlvl is not None: + kwargs['index_col'] = lrange(rnlvl) + kwargs['header'] = lrange(cnlvl) + with ensure_clean(path) as path: + df.to_csv(path, encoding='utf8', + chunksize=chunksize, tupleize_cols=False) + recons = DataFrame.from_csv( + path, tupleize_cols=False, **kwargs) + else: + kwargs['header'] = 0 + with ensure_clean(path) as path: + df.to_csv(path, encoding='utf8', chunksize=chunksize) + recons = DataFrame.from_csv(path, **kwargs) + + def _to_uni(x): + if 
not isinstance(x, compat.text_type): + return x.decode('utf8') + return x + if dupe_col: + # read_csv disambiguates the columns by + # labeling them dupe.1,dupe.2, etc. monkey patch columns + recons.columns = df.columns + if rnlvl and not cnlvl: + delta_lvl = [recons.iloc[ + :, i].values for i in range(rnlvl - 1)] + ix = MultiIndex.from_arrays([list(recons.index)] + delta_lvl) + recons.index = ix + recons = recons.iloc[:, rnlvl - 1:] + + type_map = dict(i='i', f='f', s='O', u='O', dt='O', p='O') + if r_dtype: + if r_dtype == 'u': # unicode + r_dtype = 'O' + recons.index = np.array(lmap(_to_uni, recons.index), + dtype=r_dtype) + df.index = np.array(lmap(_to_uni, df.index), dtype=r_dtype) + elif r_dtype == 'dt': # datetime + r_dtype = 'O' + recons.index = np.array(lmap(Timestamp, recons.index), + dtype=r_dtype) + df.index = np.array( + lmap(Timestamp, df.index), dtype=r_dtype) + elif r_dtype == 'p': + r_dtype = 'O' + recons.index = np.array( + list(map(Timestamp, recons.index.to_datetime())), + dtype=r_dtype) + df.index = np.array( + list(map(Timestamp, df.index.to_datetime())), + dtype=r_dtype) + else: + r_dtype = type_map.get(r_dtype) + recons.index = np.array(recons.index, dtype=r_dtype) + df.index = np.array(df.index, dtype=r_dtype) + if c_dtype: + if c_dtype == 'u': + c_dtype = 'O' + recons.columns = np.array(lmap(_to_uni, recons.columns), + dtype=c_dtype) + df.columns = np.array( + lmap(_to_uni, df.columns), dtype=c_dtype) + elif c_dtype == 'dt': + c_dtype = 'O' + recons.columns = np.array(lmap(Timestamp, recons.columns), + dtype=c_dtype) + df.columns = np.array( + lmap(Timestamp, df.columns), dtype=c_dtype) + elif c_dtype == 'p': + c_dtype = 'O' + recons.columns = np.array( + lmap(Timestamp, recons.columns.to_datetime()), + dtype=c_dtype) + df.columns = np.array( + lmap(Timestamp, df.columns.to_datetime()), + dtype=c_dtype) + else: + c_dtype = type_map.get(c_dtype) + recons.columns = np.array(recons.columns, dtype=c_dtype) + df.columns = np.array(df.columns,
dtype=c_dtype) + + assert_frame_equal(df, recons, check_names=False, + check_less_precise=True) + + N = 100 + chunksize = 1000 + + # GH3437 + from pandas import NaT + + def make_dtnat_arr(n, nnat=None): + if nnat is None: + nnat = int(n * 0.1) # 10% + s = list(date_range('2000', freq='5min', periods=n)) + if nnat: + for i in np.random.randint(0, len(s), nnat): + s[i] = NaT + i = np.random.randint(100) + s[-i] = NaT + s[i] = NaT + return s + + # N=35000 + s1 = make_dtnat_arr(chunksize + 5) + s2 = make_dtnat_arr(chunksize + 5, 0) + path = '1.csv' + + # s3=make_dtnjat_arr(chunksize+5,0) + with ensure_clean('.csv') as pth: + df = DataFrame(dict(a=s1, b=s2)) + df.to_csv(pth, chunksize=chunksize) + recons = DataFrame.from_csv(pth)._convert(datetime=True, + coerce=True) + assert_frame_equal(df, recons, check_names=False, + check_less_precise=True) + + for ncols in [4]: + base = int((chunksize // ncols or 1) or 1) + for nrows in [2, 10, N - 1, N, N + 1, N + 2, 2 * N - 2, + 2 * N - 1, 2 * N, 2 * N + 1, 2 * N + 2, + base - 1, base, base + 1]: + _do_test(mkdf(nrows, ncols, r_idx_type='dt', + c_idx_type='s'), path, 'dt', 's') + + for ncols in [4]: + base = int((chunksize // ncols or 1) or 1) + for nrows in [2, 10, N - 1, N, N + 1, N + 2, 2 * N - 2, + 2 * N - 1, 2 * N, 2 * N + 1, 2 * N + 2, + base - 1, base, base + 1]: + _do_test(mkdf(nrows, ncols, r_idx_type='dt', + c_idx_type='s'), path, 'dt', 's') + pass + + for r_idx_type, c_idx_type in [('i', 'i'), ('s', 's'), ('u', 'dt'), + ('p', 'p')]: + for ncols in [1, 2, 3, 4]: + base = int((chunksize // ncols or 1) or 1) + for nrows in [2, 10, N - 1, N, N + 1, N + 2, 2 * N - 2, + 2 * N - 1, 2 * N, 2 * N + 1, 2 * N + 2, + base - 1, base, base + 1]: + _do_test(mkdf(nrows, ncols, r_idx_type=r_idx_type, + c_idx_type=c_idx_type), + path, r_idx_type, c_idx_type) + + for ncols in [1, 2, 3, 4]: + base = int((chunksize // ncols or 1) or 1) + for nrows in [10, N - 2, N - 1, N, N + 1, N + 2, 2 * N - 2, + 2 * N - 1, 2 * N, 2 * N + 1, 2 * N + 2, 
+ base - 1, base, base + 1]: + _do_test(mkdf(nrows, ncols), path) + + for nrows in [10, N - 2, N - 1, N, N + 1, N + 2]: + df = mkdf(nrows, 3) + cols = list(df.columns) + cols[:2] = ["dupe", "dupe"] + cols[-2:] = ["dupe", "dupe"] + ix = list(df.index) + ix[:2] = ["rdupe", "rdupe"] + ix[-2:] = ["rdupe", "rdupe"] + df.index = ix + df.columns = cols + _do_test(df, path, dupe_col=True) + + _do_test(DataFrame(index=lrange(10)), path) + _do_test(mkdf(chunksize // 2 + 1, 2, r_idx_nlevels=2), path, rnlvl=2) + for ncols in [2, 3, 4]: + base = int(chunksize // ncols) + for nrows in [10, N - 2, N - 1, N, N + 1, N + 2, 2 * N - 2, + 2 * N - 1, 2 * N, 2 * N + 1, 2 * N + 2, + base - 1, base, base + 1]: + _do_test(mkdf(nrows, ncols, r_idx_nlevels=2), path, rnlvl=2) + _do_test(mkdf(nrows, ncols, c_idx_nlevels=2), path, cnlvl=2) + _do_test(mkdf(nrows, ncols, r_idx_nlevels=2, c_idx_nlevels=2), + path, rnlvl=2, cnlvl=2) + + def test_to_csv_from_csv_w_some_infs(self): + + # test roundtrip with inf, -inf, nan, as full columns and mix + self.frame['G'] = np.nan + f = lambda x: [np.inf, np.nan][np.random.rand() < .5] + self.frame['H'] = self.frame.index.map(f) + + with ensure_clean() as path: + self.frame.to_csv(path) + recons = DataFrame.from_csv(path) + + # TODO to_csv drops column name + assert_frame_equal(self.frame, recons, check_names=False) + assert_frame_equal(np.isinf(self.frame), + np.isinf(recons), check_names=False) + + def test_to_csv_from_csv_w_all_infs(self): + + # test roundtrip with inf, -inf, nan, as full columns and mix + self.frame['E'] = np.inf + self.frame['F'] = -np.inf + + with ensure_clean() as path: + self.frame.to_csv(path) + recons = DataFrame.from_csv(path) + + # TODO to_csv drops column name + assert_frame_equal(self.frame, recons, check_names=False) + assert_frame_equal(np.isinf(self.frame), + np.isinf(recons), check_names=False) + + def test_to_csv_no_index(self): + # GH 3624, after appending columns, to_csv fails + pname = '__tmp_to_csv_no_index__' + with 
ensure_clean(pname) as path: + df = DataFrame({'c1': [1, 2, 3], 'c2': [4, 5, 6]}) + df.to_csv(path, index=False) + result = read_csv(path) + assert_frame_equal(df, result) + df['c3'] = Series([7, 8, 9], dtype='int64') + df.to_csv(path, index=False) + result = read_csv(path) + assert_frame_equal(df, result) + + def test_to_csv_with_mix_columns(self): + # GH11637, incorrect output when a mix of integer and string column + # names passed as columns parameter in to_csv + + df = DataFrame({0: ['a', 'b', 'c'], + 1: ['aa', 'bb', 'cc']}) + df['test'] = 'txt' + assert_equal(df.to_csv(), df.to_csv(columns=[0, 1, 'test'])) + + def test_to_csv_headers(self): + # GH6186, the presence or absence of `index` incorrectly + # causes to_csv to have different header semantics. + pname = '__tmp_to_csv_headers__' + from_df = DataFrame([[1, 2], [3, 4]], columns=['A', 'B']) + to_df = DataFrame([[1, 2], [3, 4]], columns=['X', 'Y']) + with ensure_clean(pname) as path: + from_df.to_csv(path, header=['X', 'Y']) + recons = DataFrame.from_csv(path) + assert_frame_equal(to_df, recons) + + from_df.to_csv(path, index=False, header=['X', 'Y']) + recons = DataFrame.from_csv(path) + recons.reset_index(inplace=True) + assert_frame_equal(to_df, recons) + + def test_to_csv_multiindex(self): + + pname = '__tmp_to_csv_multiindex__' + frame = self.frame + old_index = frame.index + arrays = np.arange(len(old_index) * 2).reshape(2, -1) + new_index = MultiIndex.from_arrays(arrays, names=['first', 'second']) + frame.index = new_index + + with ensure_clean(pname) as path: + + frame.to_csv(path, header=False) + frame.to_csv(path, columns=['A', 'B']) + + # round trip + frame.to_csv(path) + df = DataFrame.from_csv(path, index_col=[0, 1], parse_dates=False) + + # TODO to_csv drops column name + assert_frame_equal(frame, df, check_names=False) + self.assertEqual(frame.index.names, df.index.names) + + # needed if setUp becomes a classmethod + self.frame.index = old_index + + # try multiindex with dates + tsframe =
self.tsframe + old_index = tsframe.index + new_index = [old_index, np.arange(len(old_index))] + tsframe.index = MultiIndex.from_arrays(new_index) + + tsframe.to_csv(path, index_label=['time', 'foo']) + recons = DataFrame.from_csv(path, index_col=[0, 1]) + # TODO to_csv drops column name + assert_frame_equal(tsframe, recons, check_names=False) + + # do not load index + tsframe.to_csv(path) + recons = DataFrame.from_csv(path, index_col=None) + np.testing.assert_equal( + len(recons.columns), len(tsframe.columns) + 2) + + # no index + tsframe.to_csv(path, index=False) + recons = DataFrame.from_csv(path, index_col=None) + assert_almost_equal(recons.values, self.tsframe.values) + + # needed if setUp becomes a classmethod + self.tsframe.index = old_index + + with ensure_clean(pname) as path: + # GH3571, GH1651, GH3141 + + def _make_frame(names=None): + if names is True: + names = ['first', 'second'] + return DataFrame(np.random.randint(0, 10, size=(3, 3)), + columns=MultiIndex.from_tuples( + [('bah', 'foo'), + ('bah', 'bar'), + ('ban', 'baz')], names=names), + dtype='int64') + + # column & index are multi-index + df = mkdf(5, 3, r_idx_nlevels=2, c_idx_nlevels=4) + df.to_csv(path, tupleize_cols=False) + result = read_csv(path, header=[0, 1, 2, 3], index_col=[ + 0, 1], tupleize_cols=False) + assert_frame_equal(df, result) + + # column is mi + df = mkdf(5, 3, r_idx_nlevels=1, c_idx_nlevels=4) + df.to_csv(path, tupleize_cols=False) + result = read_csv( + path, header=[0, 1, 2, 3], index_col=0, tupleize_cols=False) + assert_frame_equal(df, result) + + # dup column names?
+ df = mkdf(5, 3, r_idx_nlevels=3, c_idx_nlevels=4) + df.to_csv(path, tupleize_cols=False) + result = read_csv(path, header=[0, 1, 2, 3], index_col=[ + 0, 1, 2], tupleize_cols=False) + assert_frame_equal(df, result) + + # writing with no index + df = _make_frame() + df.to_csv(path, tupleize_cols=False, index=False) + result = read_csv(path, header=[0, 1], tupleize_cols=False) + assert_frame_equal(df, result) + + # we lose the names here + df = _make_frame(True) + df.to_csv(path, tupleize_cols=False, index=False) + result = read_csv(path, header=[0, 1], tupleize_cols=False) + self.assertTrue(all([x is None for x in result.columns.names])) + result.columns.names = df.columns.names + assert_frame_equal(df, result) + + # tupleize_cols=True and index=False + df = _make_frame(True) + df.to_csv(path, tupleize_cols=True, index=False) + result = read_csv( + path, header=0, tupleize_cols=True, index_col=None) + result.columns = df.columns + assert_frame_equal(df, result) + + # whatsnew example + df = _make_frame() + df.to_csv(path, tupleize_cols=False) + result = read_csv(path, header=[0, 1], index_col=[ + 0], tupleize_cols=False) + assert_frame_equal(df, result) + + df = _make_frame(True) + df.to_csv(path, tupleize_cols=False) + result = read_csv(path, header=[0, 1], index_col=[ + 0], tupleize_cols=False) + assert_frame_equal(df, result) + + # column & index are multi-index (compatibility) + df = mkdf(5, 3, r_idx_nlevels=2, c_idx_nlevels=4) + df.to_csv(path, tupleize_cols=True) + result = read_csv(path, header=0, index_col=[ + 0, 1], tupleize_cols=True) + result.columns = df.columns + assert_frame_equal(df, result) + + # invalid options + df = _make_frame(True) + df.to_csv(path, tupleize_cols=False) + + # catch invalid headers + with assertRaisesRegexp(CParserError, + 'Passed header=\[0,1,2\] are too many ' + 'rows for this multi_index of columns'): + read_csv(path, tupleize_cols=False, + header=lrange(3), index_col=0) + + with assertRaisesRegexp(CParserError, + 'Passed 
header=\[0,1,2,3,4,5,6\], len of ' + '7, but only 6 lines in file'): + read_csv(path, tupleize_cols=False, + header=lrange(7), index_col=0) + + for i in [4, 5, 6]: + with tm.assertRaises(CParserError): + read_csv(path, tupleize_cols=False, + header=lrange(i), index_col=0) + + # write with cols + with assertRaisesRegexp(TypeError, 'cannot specify cols with a ' + 'MultiIndex'): + df.to_csv(path, tupleize_cols=False, columns=['foo', 'bar']) + + with ensure_clean(pname) as path: + # empty + tsframe[:0].to_csv(path) + recons = DataFrame.from_csv(path) + exp = tsframe[:0] + exp.index = [] + + self.assertTrue(recons.columns.equals(exp.columns)) + self.assertEqual(len(recons), 0) + + def test_to_csv_float32_nanrep(self): + df = DataFrame(np.random.randn(1, 4).astype(np.float32)) + df[1] = np.nan + + with ensure_clean('__tmp_to_csv_float32_nanrep__.csv') as path: + df.to_csv(path, na_rep=999) + + with open(path) as f: + lines = f.readlines() + self.assertEqual(lines[1].split(',')[2], '999') + + def test_to_csv_withcommas(self): + + # Commas inside fields should be correctly escaped when saving as CSV. 
+ df = DataFrame({'A': [1, 2, 3], 'B': ['5,6', '7,8', '9,0']}) + + with ensure_clean('__tmp_to_csv_withcommas__.csv') as path: + df.to_csv(path) + df2 = DataFrame.from_csv(path) + assert_frame_equal(df2, df) + + def test_to_csv_mixed(self): + + def create_cols(name): + return ["%s%03d" % (name, i) for i in range(5)] + + df_float = DataFrame(np.random.randn( + 100, 5), dtype='float64', columns=create_cols('float')) + df_int = DataFrame(np.random.randn(100, 5), + dtype='int64', columns=create_cols('int')) + df_bool = DataFrame(True, index=df_float.index, + columns=create_cols('bool')) + df_object = DataFrame('foo', index=df_float.index, + columns=create_cols('object')) + df_dt = DataFrame(Timestamp('20010101'), + index=df_float.index, columns=create_cols('date')) + + # add in some nans + df_float.ix[30:50, 1:3] = np.nan + + # ## this is a bug in read_csv right now #### + # df_dt.ix[30:50,1:3] = np.nan + + df = pd.concat([df_float, df_int, df_bool, df_object, df_dt], axis=1) + + # dtype + dtypes = dict() + for n, dtype in [('float', np.float64), ('int', np.int64), + ('bool', np.bool), ('object', np.object)]: + for c in create_cols(n): + dtypes[c] = dtype + + with ensure_clean() as filename: + df.to_csv(filename) + rs = read_csv(filename, index_col=0, dtype=dtypes, + parse_dates=create_cols('date')) + assert_frame_equal(rs, df) + + def test_to_csv_dups_cols(self): + + df = DataFrame(np.random.randn(1000, 30), columns=lrange( + 15) + lrange(15), dtype='float64') + + with ensure_clean() as filename: + df.to_csv(filename) # single dtype, fine + result = read_csv(filename, index_col=0) + result.columns = df.columns + assert_frame_equal(result, df) + + df_float = DataFrame(np.random.randn(1000, 3), dtype='float64') + df_int = DataFrame(np.random.randn(1000, 3), dtype='int64') + df_bool = DataFrame(True, index=df_float.index, columns=lrange(3)) + df_object = DataFrame('foo', index=df_float.index, columns=lrange(3)) + df_dt = DataFrame(Timestamp('20010101'), + 
index=df_float.index, columns=lrange(3)) + df = pd.concat([df_float, df_int, df_bool, df_object, + df_dt], axis=1, ignore_index=True) + + cols = [] + for i in range(5): + cols.extend([0, 1, 2]) + df.columns = cols + + from pandas import to_datetime + with ensure_clean() as filename: + df.to_csv(filename) + result = read_csv(filename, index_col=0) + + # date cols + for i in ['0.4', '1.4', '2.4']: + result[i] = to_datetime(result[i]) + + result.columns = df.columns + assert_frame_equal(result, df) + + # GH3457 + from pandas.util.testing import makeCustomDataframe as mkdf + + N = 10 + df = mkdf(N, 3) + df.columns = ['a', 'a', 'b'] + + with ensure_clean() as filename: + df.to_csv(filename) + + # read_csv will rename the dups columns + result = read_csv(filename, index_col=0) + result = result.rename(columns={'a.1': 'a'}) + assert_frame_equal(result, df) + + def test_to_csv_chunking(self): + + aa = DataFrame({'A': lrange(100000)}) + aa['B'] = aa.A + 1.0 + aa['C'] = aa.A + 2.0 + aa['D'] = aa.A + 3.0 + + for chunksize in [10000, 50000, 100000]: + with ensure_clean() as filename: + aa.to_csv(filename, chunksize=chunksize) + rs = read_csv(filename, index_col=0) + assert_frame_equal(rs, aa) + + @slow + def test_to_csv_wide_frame_formatting(self): + # Issue #8621 + df = DataFrame(np.random.randn(1, 100010), columns=None, index=None) + with ensure_clean() as filename: + df.to_csv(filename, header=False, index=False) + rs = read_csv(filename, header=None) + assert_frame_equal(rs, df) + + def test_to_csv_bug(self): + f1 = StringIO('a,1.0\nb,2.0') + df = DataFrame.from_csv(f1, header=None) + newdf = DataFrame({'t': df[df.columns[0]]}) + + with ensure_clean() as path: + newdf.to_csv(path) + + recons = read_csv(path, index_col=0) + # don't check_names as t != 1 + assert_frame_equal(recons, newdf, check_names=False) + + def test_to_csv_unicode(self): + + df = DataFrame({u('c/\u03c3'): [1, 2, 3]}) + with ensure_clean() as path: + + df.to_csv(path, encoding='UTF-8') + df2 = 
read_csv(path, index_col=0, encoding='UTF-8') + assert_frame_equal(df, df2) + + df.to_csv(path, encoding='UTF-8', index=False) + df2 = read_csv(path, index_col=None, encoding='UTF-8') + assert_frame_equal(df, df2) + + def test_to_csv_unicode_index_col(self): + buf = StringIO('') + df = DataFrame( + [[u("\u05d0"), "d2", "d3", "d4"], ["a1", "a2", "a3", "a4"]], + columns=[u("\u05d0"), + u("\u05d1"), u("\u05d2"), u("\u05d3")], + index=[u("\u05d0"), u("\u05d1")]) + + df.to_csv(buf, encoding='UTF-8') + buf.seek(0) + + df2 = read_csv(buf, index_col=0, encoding='UTF-8') + assert_frame_equal(df, df2) + + def test_to_csv_stringio(self): + buf = StringIO() + self.frame.to_csv(buf) + buf.seek(0) + recons = read_csv(buf, index_col=0) + # TODO to_csv drops column name + assert_frame_equal(recons, self.frame, check_names=False) + + def test_to_csv_float_format(self): + + df = DataFrame([[0.123456, 0.234567, 0.567567], + [12.32112, 123123.2, 321321.2]], + index=['A', 'B'], columns=['X', 'Y', 'Z']) + + with ensure_clean() as filename: + + df.to_csv(filename, float_format='%.2f') + + rs = read_csv(filename, index_col=0) + xp = DataFrame([[0.12, 0.23, 0.57], + [12.32, 123123.20, 321321.20]], + index=['A', 'B'], columns=['X', 'Y', 'Z']) + assert_frame_equal(rs, xp) + + def test_to_csv_quoting(self): + df = DataFrame({'A': [1, 2, 3], 'B': ['foo', 'bar', 'baz']}) + + buf = StringIO() + df.to_csv(buf, index=False, quoting=csv.QUOTE_NONNUMERIC) + + result = buf.getvalue() + expected = ('"A","B"\n' + '1,"foo"\n' + '2,"bar"\n' + '3,"baz"\n') + + self.assertEqual(result, expected) + + # quoting windows line terminators, presents with encoding? 
+ # #3503 + text = 'a,b,c\n1,"test \r\n",3\n' + df = pd.read_csv(StringIO(text)) + buf = StringIO() + df.to_csv(buf, encoding='utf-8', index=False) + self.assertEqual(buf.getvalue(), text) + + # testing if quoting parameter is passed through with multi-indexes + # related to issue #7791 + df = pd.DataFrame({'a': [1, 2], 'b': [3, 4], 'c': [5, 6]}) + df = df.set_index(['a', 'b']) + expected = '"a","b","c"\n"1","3","5"\n"2","4","6"\n' + self.assertEqual(df.to_csv(quoting=csv.QUOTE_ALL), expected) + + def test_to_csv_unicodewriter_quoting(self): + df = DataFrame({'A': [1, 2, 3], 'B': ['foo', 'bar', 'baz']}) + + buf = StringIO() + df.to_csv(buf, index=False, quoting=csv.QUOTE_NONNUMERIC, + encoding='utf-8') + + result = buf.getvalue() + expected = ('"A","B"\n' + '1,"foo"\n' + '2,"bar"\n' + '3,"baz"\n') + + self.assertEqual(result, expected) + + def test_to_csv_quote_none(self): + # GH4328 + df = DataFrame({'A': ['hello', '{"hello"}']}) + for encoding in (None, 'utf-8'): + buf = StringIO() + df.to_csv(buf, quoting=csv.QUOTE_NONE, + encoding=encoding, index=False) + result = buf.getvalue() + expected = 'A\nhello\n{"hello"}\n' + self.assertEqual(result, expected) + + def test_to_csv_index_no_leading_comma(self): + df = DataFrame({'A': [1, 2, 3], 'B': [4, 5, 6]}, + index=['one', 'two', 'three']) + + buf = StringIO() + df.to_csv(buf, index_label=False) + expected = ('A,B\n' + 'one,1,4\n' + 'two,2,5\n' + 'three,3,6\n') + self.assertEqual(buf.getvalue(), expected) + + def test_to_csv_line_terminators(self): + df = DataFrame({'A': [1, 2, 3], 'B': [4, 5, 6]}, + index=['one', 'two', 'three']) + + buf = StringIO() + df.to_csv(buf, line_terminator='\r\n') + expected = (',A,B\r\n' + 'one,1,4\r\n' + 'two,2,5\r\n' + 'three,3,6\r\n') + self.assertEqual(buf.getvalue(), expected) + + buf = StringIO() + df.to_csv(buf) # The default line terminator remains \n + expected = (',A,B\n' + 'one,1,4\n' + 'two,2,5\n' + 'three,3,6\n') + self.assertEqual(buf.getvalue(), expected) + + def 
test_to_csv_from_csv_categorical(self): + + # CSV with categoricals should result in the same output as when one + # would add a "normal" Series/DataFrame. + s = Series(pd.Categorical(['a', 'b', 'b', 'a', 'a', 'c', 'c', 'c'])) + s2 = Series(['a', 'b', 'b', 'a', 'a', 'c', 'c', 'c']) + res = StringIO() + s.to_csv(res) + exp = StringIO() + s2.to_csv(exp) + self.assertEqual(res.getvalue(), exp.getvalue()) + + df = DataFrame({"s": s}) + df2 = DataFrame({"s": s2}) + res = StringIO() + df.to_csv(res) + exp = StringIO() + df2.to_csv(exp) + self.assertEqual(res.getvalue(), exp.getvalue()) + + def test_to_csv_path_is_none(self): + # GH 8215 + # Make sure we return string for consistency with + # Series.to_csv() + csv_str = self.frame.to_csv(path=None) + self.assertIsInstance(csv_str, str) + recons = pd.read_csv(StringIO(csv_str), index_col=0) + assert_frame_equal(self.frame, recons) + + def test_to_csv_compression_gzip(self): + # GH7615 + # use the compression kw in to_csv + df = DataFrame([[0.123456, 0.234567, 0.567567], + [12.32112, 123123.2, 321321.2]], + index=['A', 'B'], columns=['X', 'Y', 'Z']) + + with ensure_clean() as filename: + + df.to_csv(filename, compression="gzip") + + # test the round trip - to_csv -> read_csv + rs = read_csv(filename, compression="gzip", index_col=0) + assert_frame_equal(df, rs) + + # explicitly make sure the file is gzipped + import gzip + f = gzip.open(filename, 'rb') + text = f.read().decode('utf8') + f.close() + for col in df.columns: + self.assertIn(col, text) + + def test_to_csv_compression_bz2(self): + # GH7615 + # use the compression kw in to_csv + df = DataFrame([[0.123456, 0.234567, 0.567567], + [12.32112, 123123.2, 321321.2]], + index=['A', 'B'], columns=['X', 'Y', 'Z']) + + with ensure_clean() as filename: + + df.to_csv(filename, compression="bz2") + + # test the round trip - to_csv -> read_csv + rs = read_csv(filename, compression="bz2", index_col=0) + assert_frame_equal(df, rs) + + # explicitly make sure the file is bz2 compressed + import bz2 + 
f = bz2.BZ2File(filename, 'rb') + text = f.read().decode('utf8') + f.close() + for col in df.columns: + self.assertIn(col, text) + + def test_to_csv_compression_value_error(self): + # GH7615 + # use the compression kw in to_csv + df = DataFrame([[0.123456, 0.234567, 0.567567], + [12.32112, 123123.2, 321321.2]], + index=['A', 'B'], columns=['X', 'Y', 'Z']) + + with ensure_clean() as filename: + # zip compression is not supported and should raise ValueError + self.assertRaises(ValueError, df.to_csv, + filename, compression="zip") + + def test_to_csv_date_format(self): + from pandas import to_datetime + pname = '__tmp_to_csv_date_format__' + with ensure_clean(pname) as path: + for engine in [None, 'python']: + w = FutureWarning if engine == 'python' else None + + dt_index = self.tsframe.index + datetime_frame = DataFrame( + {'A': dt_index, 'B': dt_index.shift(1)}, index=dt_index) + + with tm.assert_produces_warning(w, check_stacklevel=False): + datetime_frame.to_csv( + path, date_format='%Y%m%d', engine=engine) + + # Check that the data was put in the specified format + test = read_csv(path, index_col=0) + + datetime_frame_int = datetime_frame.applymap( + lambda x: int(x.strftime('%Y%m%d'))) + datetime_frame_int.index = datetime_frame_int.index.map( + lambda x: int(x.strftime('%Y%m%d'))) + + assert_frame_equal(test, datetime_frame_int) + + with tm.assert_produces_warning(w, check_stacklevel=False): + datetime_frame.to_csv( + path, date_format='%Y-%m-%d', engine=engine) + + # Check that the data was put in the specified format + test = read_csv(path, index_col=0) + datetime_frame_str = datetime_frame.applymap( + lambda x: x.strftime('%Y-%m-%d')) + datetime_frame_str.index = datetime_frame_str.index.map( + lambda x: x.strftime('%Y-%m-%d')) + + assert_frame_equal(test, datetime_frame_str) + + # Check that columns get converted + datetime_frame_columns = datetime_frame.T + + with tm.assert_produces_warning(w, check_stacklevel=False): + datetime_frame_columns.to_csv( + 
path, date_format='%Y%m%d', engine=engine) + + test = read_csv(path, index_col=0) + + datetime_frame_columns = datetime_frame_columns.applymap( + lambda x: int(x.strftime('%Y%m%d'))) + # Columns don't get converted to ints by read_csv + datetime_frame_columns.columns = ( + datetime_frame_columns.columns + .map(lambda x: x.strftime('%Y%m%d'))) + + assert_frame_equal(test, datetime_frame_columns) + + # test NaTs + nat_index = to_datetime( + ['NaT'] * 10 + ['2000-01-01', '1/1/2000', '1-1-2000']) + nat_frame = DataFrame({'A': nat_index}, index=nat_index) + + with tm.assert_produces_warning(w, check_stacklevel=False): + nat_frame.to_csv( + path, date_format='%Y-%m-%d', engine=engine) + + test = read_csv(path, parse_dates=[0, 1], index_col=0) + + assert_frame_equal(test, nat_frame) + + def test_to_csv_with_dst_transitions(self): + + with ensure_clean('csv_date_format_with_dst') as path: + # make sure we are not failing on transitions + times = pd.date_range("2013-10-26 23:00", "2013-10-27 01:00", + tz="Europe/London", + freq="H", + ambiguous='infer') + + for i in [times, times + pd.Timedelta('10s')]: + time_range = np.array(range(len(i)), dtype='int64') + df = DataFrame({'A': time_range}, index=i) + df.to_csv(path, index=True) + + # we have to reconvert the index as we + # don't parse the tz's + result = read_csv(path, index_col=0) + result.index = pd.to_datetime(result.index).tz_localize( + 'UTC').tz_convert('Europe/London') + assert_frame_equal(result, df) + + # GH11619 + idx = pd.date_range('2015-01-01', '2015-12-31', + freq='H', tz='Europe/Paris') + df = DataFrame({'values': 1, 'idx': idx}, + index=idx) + with ensure_clean('csv_date_format_with_dst') as path: + df.to_csv(path, index=True) + result = read_csv(path, index_col=0) + result.index = pd.to_datetime(result.index).tz_localize( + 'UTC').tz_convert('Europe/Paris') + result['idx'] = pd.to_datetime(result['idx']).astype( + 'datetime64[ns, Europe/Paris]') + assert_frame_equal(result, df) + + # assert working + 
df.astype(str) + + with ensure_clean('csv_date_format_with_dst') as path: + df.to_pickle(path) + result = pd.read_pickle(path) + assert_frame_equal(result, df) diff --git a/pandas/tests/test_frame.py b/pandas/tests/test_frame.py deleted file mode 100644 index ba546b6daac77..0000000000000 --- a/pandas/tests/test_frame.py +++ /dev/null @@ -1,17190 +0,0 @@ -# -*- coding: utf-8 -*- - -from __future__ import print_function -# pylint: disable-msg=W0612,E1101 -from copy import deepcopy -from datetime import datetime, timedelta, time, date -import sys -import operator -import re -import csv -import nose -import functools -import itertools -from itertools import product, permutations -from distutils.version import LooseVersion - -from pandas.compat import( - map, zip, range, long, lrange, lmap, lzip, - OrderedDict, u, StringIO, is_platform_windows -) -from pandas import compat - -from numpy import random, nan, inf -from numpy.random import randn -import numpy as np -import numpy.ma as ma -import numpy.ma.mrecords as mrecords - -import pandas.core.nanops as nanops -import pandas.core.common as com -import pandas.core.format as fmt -import pandas.core.datetools as datetools -from pandas import (DataFrame, Index, Series, Panel, notnull, isnull, - MultiIndex, DatetimeIndex, Timestamp, date_range, - read_csv, timedelta_range, Timedelta, option_context, period_range) -from pandas.core.dtypes import DatetimeTZDtype -import pandas as pd -from pandas.parser import CParserError -from pandas.util.misc import is_little_endian - -from pandas.util.testing import (assert_almost_equal, - assert_equal, - assert_numpy_array_equal, - assert_series_equal, - assert_frame_equal, - assertRaisesRegexp, - assertRaises, - makeCustomDataframe as mkdf, - ensure_clean, - SubclassedDataFrame) -from pandas.core.indexing import IndexingError -from pandas.core.common import PandasError - -import pandas.util.testing as tm -import pandas.lib as lib - -from numpy.testing.decorators import slow -from pandas 
import _np_version_under1p9 - -# --------------------------------------------------------------------- -# DataFrame test cases - -JOIN_TYPES = ['inner', 'outer', 'left', 'right'] -MIXED_FLOAT_DTYPES = ['float16','float32','float64'] -MIXED_INT_DTYPES = ['uint8','uint16','uint32','uint64','int8','int16', - 'int32','int64'] - -def _check_mixed_float(df, dtype = None): - - # float16 are most likely to be upcasted to float32 - dtypes = dict(A = 'float32', B = 'float32', C = 'float16', D = 'float64') - if isinstance(dtype, compat.string_types): - dtypes = dict([ (k,dtype) for k, v in dtypes.items() ]) - elif isinstance(dtype, dict): - dtypes.update(dtype) - if dtypes.get('A'): - assert(df.dtypes['A'] == dtypes['A']) - if dtypes.get('B'): - assert(df.dtypes['B'] == dtypes['B']) - if dtypes.get('C'): - assert(df.dtypes['C'] == dtypes['C']) - if dtypes.get('D'): - assert(df.dtypes['D'] == dtypes['D']) - - -def _check_mixed_int(df, dtype = None): - dtypes = dict(A = 'int32', B = 'uint64', C = 'uint8', D = 'int64') - if isinstance(dtype, compat.string_types): - dtypes = dict([ (k,dtype) for k, v in dtypes.items() ]) - elif isinstance(dtype, dict): - dtypes.update(dtype) - if dtypes.get('A'): - assert(df.dtypes['A'] == dtypes['A']) - if dtypes.get('B'): - assert(df.dtypes['B'] == dtypes['B']) - if dtypes.get('C'): - assert(df.dtypes['C'] == dtypes['C']) - if dtypes.get('D'): - assert(df.dtypes['D'] == dtypes['D']) - - -class CheckIndexing(object): - - _multiprocess_can_split_ = True - - def test_getitem(self): - # slicing - sl = self.frame[:20] - self.assertEqual(20, len(sl.index)) - - # column access - - for _, series in compat.iteritems(sl): - self.assertEqual(20, len(series.index)) - self.assertTrue(tm.equalContents(series.index, sl.index)) - - for key, _ in compat.iteritems(self.frame._series): - self.assertIsNotNone(self.frame[key]) - - self.assertNotIn('random', self.frame) - with assertRaisesRegexp(KeyError, 'random'): - self.frame['random'] - - df = self.frame.copy() 
- df['$10'] = randn(len(df)) - ad = randn(len(df)) - df['@awesome_domain'] = ad - self.assertRaises(KeyError, df.__getitem__, 'df["$10"]') - res = df['@awesome_domain'] - assert_numpy_array_equal(ad, res.values) - - def test_getitem_dupe_cols(self): - df = DataFrame([[1, 2, 3], [4, 5, 6]], columns=['a', 'a', 'b']) - try: - df[['baf']] - except KeyError: - pass - else: - self.fail("Dataframe failed to raise KeyError") - - def test_get(self): - b = self.frame.get('B') - assert_series_equal(b, self.frame['B']) - - self.assertIsNone(self.frame.get('foo')) - assert_series_equal(self.frame.get('foo', self.frame['B']), - self.frame['B']) - # None - # GH 5652 - for df in [DataFrame(), DataFrame(columns=list('AB')), DataFrame(columns=list('AB'),index=range(3)) ]: - result = df.get(None) - self.assertIsNone(result) - - def test_getitem_iterator(self): - idx = iter(['A', 'B', 'C']) - result = self.frame.ix[:, idx] - expected = self.frame.ix[:, ['A', 'B', 'C']] - assert_frame_equal(result, expected) - - def test_getitem_list(self): - self.frame.columns.name = 'foo' - - result = self.frame[['B', 'A']] - result2 = self.frame[Index(['B', 'A'])] - - expected = self.frame.ix[:, ['B', 'A']] - expected.columns.name = 'foo' - - assert_frame_equal(result, expected) - assert_frame_equal(result2, expected) - - self.assertEqual(result.columns.name, 'foo') - - with assertRaisesRegexp(KeyError, 'not in index'): - self.frame[['B', 'A', 'food']] - with assertRaisesRegexp(KeyError, 'not in index'): - self.frame[Index(['B', 'A', 'foo'])] - - # tuples - df = DataFrame(randn(8, 3), - columns=Index([('foo', 'bar'), ('baz', 'qux'), - ('peek', 'aboo')], name=['sth', 'sth2'])) - - result = df[[('foo', 'bar'), ('baz', 'qux')]] - expected = df.ix[:, :2] - assert_frame_equal(result, expected) - self.assertEqual(result.columns.names, ['sth', 'sth2']) - - def test_setitem_list(self): - - self.frame['E'] = 'foo' - data = self.frame[['A', 'B']] - self.frame[['B', 'A']] = data - - 
assert_series_equal(self.frame['B'], data['A'], check_names=False) - assert_series_equal(self.frame['A'], data['B'], check_names=False) - - with assertRaisesRegexp(ValueError, 'Columns must be same length as key'): - data[['A']] = self.frame[['A', 'B']] - with assertRaisesRegexp(ValueError, 'Length of values does not match ' - 'length of index'): - data['A'] = range(len(data.index) - 1) - - df = DataFrame(0, lrange(3), ['tt1', 'tt2'], dtype=np.int_) - df.ix[1, ['tt1', 'tt2']] = [1, 2] - - result = df.ix[1, ['tt1', 'tt2']] - expected = Series([1, 2], df.columns, dtype=np.int_, name=1) - assert_series_equal(result, expected) - - df['tt1'] = df['tt2'] = '0' - df.ix[1, ['tt1', 'tt2']] = ['1', '2'] - result = df.ix[1, ['tt1', 'tt2']] - expected = Series(['1', '2'], df.columns, name=1) - assert_series_equal(result, expected) - - def test_setitem_list_not_dataframe(self): - data = np.random.randn(len(self.frame), 2) - self.frame[['A', 'B']] = data - assert_almost_equal(self.frame[['A', 'B']].values, data) - - def test_setitem_list_of_tuples(self): - tuples = lzip(self.frame['A'], self.frame['B']) - self.frame['tuples'] = tuples - - result = self.frame['tuples'] - expected = Series(tuples, index=self.frame.index, name='tuples') - assert_series_equal(result, expected) - - def test_setitem_mulit_index(self): - # GH7655, test that assigning to a sub-frame of a frame - # with multi-index columns aligns both rows and columns - it = ['jim', 'joe', 'jolie'], ['first', 'last'], \ - ['left', 'center', 'right'] - - cols = MultiIndex.from_product(it) - index = pd.date_range('20141006',periods=20) - vals = np.random.randint(1, 1000, (len(index), len(cols))) - df = pd.DataFrame(vals, columns=cols, index=index) - - i, j = df.index.values.copy(), it[-1][:] - - np.random.shuffle(i) - df['jim'] = df['jolie'].loc[i, ::-1] - assert_frame_equal(df['jim'], df['jolie']) - - np.random.shuffle(j) - df[('joe', 'first')] = df[('jolie', 'last')].loc[i, j] - assert_frame_equal(df[('joe', 'first')], 
df[('jolie', 'last')]) - - np.random.shuffle(j) - df[('joe', 'last')] = df[('jolie', 'first')].loc[i, j] - assert_frame_equal(df[('joe', 'last')], df[('jolie', 'first')]) - - def test_inplace_ops_alignment(self): - - # inplace ops / ops alignment - # GH 8511 - - columns = list('abcdefg') - X_orig = DataFrame(np.arange(10*len(columns)).reshape(-1,len(columns)), columns=columns, index=range(10)) - Z = 100*X_orig.iloc[:,1:-1].copy() - block1 = list('bedcf') - subs = list('bcdef') - - # add - X = X_orig.copy() - result1 = (X[block1] + Z).reindex(columns=subs) - - X[block1] += Z - result2 = X.reindex(columns=subs) - - X = X_orig.copy() - result3 = (X[block1] + Z[block1]).reindex(columns=subs) - - X[block1] += Z[block1] - result4 = X.reindex(columns=subs) - - assert_frame_equal(result1, result2) - assert_frame_equal(result1, result3) - assert_frame_equal(result1, result4) - - # sub - X = X_orig.copy() - result1 = (X[block1] - Z).reindex(columns=subs) - - X[block1] -= Z - result2 = X.reindex(columns=subs) - - X = X_orig.copy() - result3 = (X[block1] - Z[block1]).reindex(columns=subs) - - X[block1] -= Z[block1] - result4 = X.reindex(columns=subs) - - assert_frame_equal(result1, result2) - assert_frame_equal(result1, result3) - assert_frame_equal(result1, result4) - - def test_inplace_ops_identity(self): - - # GH 5104 - # make sure that we are actually changing the object - s_orig = Series([1, 2, 3]) - df_orig = DataFrame(np.random.randint(0,5,size=10).reshape(-1,5)) - - # no dtype change - s = s_orig.copy() - s2 = s - s += 1 - assert_series_equal(s,s2) - assert_series_equal(s_orig+1,s) - self.assertIs(s,s2) - self.assertIs(s._data,s2._data) - - df = df_orig.copy() - df2 = df - df += 1 - assert_frame_equal(df,df2) - assert_frame_equal(df_orig+1,df) - self.assertIs(df,df2) - self.assertIs(df._data,df2._data) - - # dtype change - s = s_orig.copy() - s2 = s - s += 1.5 - assert_series_equal(s,s2) - assert_series_equal(s_orig+1.5,s) - - df = df_orig.copy() - df2 = df - df += 1.5 
- assert_frame_equal(df,df2) - assert_frame_equal(df_orig+1.5,df) - self.assertIs(df,df2) - self.assertIs(df._data,df2._data) - - # mixed dtype - arr = np.random.randint(0,10,size=5) - df_orig = DataFrame({'A' : arr.copy(), 'B' : 'foo'}) - df = df_orig.copy() - df2 = df - df['A'] += 1 - expected = DataFrame({'A' : arr.copy()+1, 'B' : 'foo'}) - assert_frame_equal(df,expected) - assert_frame_equal(df2,expected) - self.assertIs(df._data,df2._data) - - df = df_orig.copy() - df2 = df - df['A'] += 1.5 - expected = DataFrame({'A' : arr.copy()+1.5, 'B' : 'foo'}) - assert_frame_equal(df,expected) - assert_frame_equal(df2,expected) - self.assertIs(df._data,df2._data) - - def test_getitem_boolean(self): - # boolean indexing - d = self.tsframe.index[10] - indexer = self.tsframe.index > d - indexer_obj = indexer.astype(object) - - subindex = self.tsframe.index[indexer] - subframe = self.tsframe[indexer] - - self.assert_numpy_array_equal(subindex, subframe.index) - with assertRaisesRegexp(ValueError, 'Item wrong length'): - self.tsframe[indexer[:-1]] - - subframe_obj = self.tsframe[indexer_obj] - assert_frame_equal(subframe_obj, subframe) - - with tm.assertRaisesRegexp(ValueError, 'boolean values only'): - self.tsframe[self.tsframe] - - # test that Series work - indexer_obj = Series(indexer_obj, self.tsframe.index) - - subframe_obj = self.tsframe[indexer_obj] - assert_frame_equal(subframe_obj, subframe) - - # test that Series indexers reindex - with tm.assert_produces_warning(UserWarning): - indexer_obj = indexer_obj.reindex(self.tsframe.index[::-1]) - - subframe_obj = self.tsframe[indexer_obj] - assert_frame_equal(subframe_obj, subframe) - - # test df[df > 0] - for df in [ self.tsframe, self.mixed_frame, self.mixed_float, self.mixed_int ]: - - data = df._get_numeric_data() - bif = df[df > 0] - bifw = DataFrame(dict([ (c,np.where(data[c] > 0, data[c], np.nan)) for c in data.columns ]), - index=data.index, columns=data.columns) - - # add back other columns to compare - for c in 
df.columns: - if c not in bifw: - bifw[c] = df[c] - bifw = bifw.reindex(columns = df.columns) - - assert_frame_equal(bif, bifw, check_dtype=False) - for c in df.columns: - if bif[c].dtype != bifw[c].dtype: - self.assertEqual(bif[c].dtype, df[c].dtype) - - def test_getitem_boolean_casting(self): - - # don't upcast if we don't need to - df = self.tsframe.copy() - df['E'] = 1 - df['E'] = df['E'].astype('int32') - df['E1'] = df['E'].copy() - df['F'] = 1 - df['F'] = df['F'].astype('int64') - df['F1'] = df['F'].copy() - - casted = df[df>0] - result = casted.get_dtype_counts() - expected = Series({'float64': 4, 'int32' : 2, 'int64' : 2}) - assert_series_equal(result, expected) - - # int block splitting - df.ix[1:3,['E1','F1']] = 0 - casted = df[df>0] - result = casted.get_dtype_counts() - expected = Series({'float64': 6, 'int32' : 1, 'int64' : 1}) - assert_series_equal(result, expected) - - # where dtype conversions - # GH 3733 - df = DataFrame(data = np.random.randn(100, 50)) - df = df.where(df > 0) # create nans - bools = df > 0 - mask = isnull(df) - expected = bools.astype(float).mask(mask) - result = bools.mask(mask) - assert_frame_equal(result,expected) - - def test_getitem_boolean_list(self): - df = DataFrame(np.arange(12).reshape(3, 4)) - - def _checkit(lst): - result = df[lst] - expected = df.ix[df.index[lst]] - assert_frame_equal(result, expected) - - _checkit([True, False, True]) - _checkit([True, True, True]) - _checkit([False, False, False]) - - def test_getitem_boolean_iadd(self): - arr = randn(5, 5) - - df = DataFrame(arr.copy(), columns = ['A','B','C','D','E']) - - df[df < 0] += 1 - arr[arr < 0] += 1 - - assert_almost_equal(df.values, arr) - - def test_boolean_index_empty_corner(self): - # #2096 - blah = DataFrame(np.empty([0, 1]), columns=['A'], - index=DatetimeIndex([])) - - # both of these should succeed trivially - k = np.array([], bool) - - blah[k] - blah[k] = 0 - - def test_getitem_ix_mixed_integer(self): - df = DataFrame(np.random.randn(4, 3), - 
index=[1, 10, 'C', 'E'], columns=[1, 2, 3]) - - result = df.ix[:-1] - expected = df.ix[df.index[:-1]] - assert_frame_equal(result, expected) - - result = df.ix[[1, 10]] - expected = df.ix[Index([1, 10], dtype=object)] - assert_frame_equal(result, expected) - - # 11320 - df = pd.DataFrame({ "rna": (1.5,2.2,3.2,4.5), - -1000: [11,21,36,40], - 0: [10,22,43,34], - 1000:[0, 10, 20, 30] },columns=['rna',-1000,0,1000]) - result = df[[1000]] - expected = df.iloc[:,[3]] - assert_frame_equal(result, expected) - result = df[[-1000]] - expected = df.iloc[:,[1]] - assert_frame_equal(result, expected) - - def test_getitem_setitem_ix_negative_integers(self): - result = self.frame.ix[:, -1] - assert_series_equal(result, self.frame['D']) - - result = self.frame.ix[:, [-1]] - assert_frame_equal(result, self.frame[['D']]) - - result = self.frame.ix[:, [-1, -2]] - assert_frame_equal(result, self.frame[['D', 'C']]) - - self.frame.ix[:, [-1]] = 0 - self.assertTrue((self.frame['D'] == 0).all()) - - df = DataFrame(np.random.randn(8, 4)) - self.assertTrue(isnull(df.ix[:, [-1]].values).all()) - - # #1942 - a = DataFrame(randn(20, 2), index=[chr(x + 65) for x in range(20)]) - a.ix[-1] = a.ix[-2] - - assert_series_equal(a.ix[-1], a.ix[-2], check_names=False) - self.assertEqual(a.ix[-1].name, 'T') - self.assertEqual(a.ix[-2].name, 'S') - - def test_getattr(self): - assert_series_equal(self.frame.A, self.frame['A']) - self.assertRaises(AttributeError, getattr, self.frame, - 'NONEXISTENT_NAME') - - def test_setattr_column(self): - df = DataFrame({'foobar': 1}, index=lrange(10)) - - df.foobar = 5 - self.assertTrue((df.foobar == 5).all()) - - def test_setitem(self): - # not sure what else to do here - series = self.frame['A'][::2] - self.frame['col5'] = series - self.assertIn('col5', self.frame) - tm.assert_dict_equal(series, self.frame['col5'], - compare_keys=False) - - series = self.frame['A'] - self.frame['col6'] = series - tm.assert_dict_equal(series, self.frame['col6'], - compare_keys=False) 
- - with tm.assertRaises(KeyError): - self.frame[randn(len(self.frame) + 1)] = 1 - - # set ndarray - arr = randn(len(self.frame)) - self.frame['col9'] = arr - self.assertTrue((self.frame['col9'] == arr).all()) - - self.frame['col7'] = 5 - assert((self.frame['col7'] == 5).all()) - - self.frame['col0'] = 3.14 - assert((self.frame['col0'] == 3.14).all()) - - self.frame['col8'] = 'foo' - assert((self.frame['col8'] == 'foo').all()) - - # this is partially a view (e.g. some blocks are view) - # so raise/warn - smaller = self.frame[:2] - def f(): - smaller['col10'] = ['1', '2'] - self.assertRaises(com.SettingWithCopyError, f) - self.assertEqual(smaller['col10'].dtype, np.object_) - self.assertTrue((smaller['col10'] == ['1', '2']).all()) - - # with a dtype - for dtype in ['int32','int64','float32','float64']: - self.frame[dtype] = np.array(arr,dtype=dtype) - self.assertEqual(self.frame[dtype].dtype.name, dtype) - - # dtype changing GH4204 - df = DataFrame([[0,0]]) - df.iloc[0] = np.nan - expected = DataFrame([[np.nan,np.nan]]) - assert_frame_equal(df,expected) - - df = DataFrame([[0,0]]) - df.loc[0] = np.nan - assert_frame_equal(df,expected) - - def test_setitem_tuple(self): - self.frame['A', 'B'] = self.frame['A'] - assert_series_equal(self.frame['A', 'B'], self.frame['A'], check_names=False) - - def test_setitem_always_copy(self): - s = self.frame['A'].copy() - self.frame['E'] = s - - self.frame['E'][5:10] = nan - self.assertTrue(notnull(s[5:10]).all()) - - def test_setitem_boolean(self): - df = self.frame.copy() - values = self.frame.values - - df[df['A'] > 0] = 4 - values[values[:, 0] > 0] = 4 - assert_almost_equal(df.values, values) - - # test that column reindexing works - series = df['A'] == 4 - series = series.reindex(df.index[::-1]) - df[series] = 1 - values[values[:, 0] == 4] = 1 - assert_almost_equal(df.values, values) - - df[df > 0] = 5 - values[values > 0] = 5 - assert_almost_equal(df.values, values) - - df[df == 5] = 0 - values[values == 5] = 0 - 
-        assert_almost_equal(df.values, values)
-
-        # a df that needs alignment first
-        df[df[:-1] < 0] = 2
-        np.putmask(values[:-1], values[:-1] < 0, 2)
-        assert_almost_equal(df.values, values)
-
-        # indexed with same shape but rows-reversed df
-        df[df[::-1] == 2] = 3
-        values[values == 2] = 3
-        assert_almost_equal(df.values, values)
-
-        with assertRaisesRegexp(TypeError, 'Must pass DataFrame with boolean '
-                                'values only'):
-            df[df * 0] = 2
-
-        # index with DataFrame
-        mask = df > np.abs(df)
-        expected = df.copy()
-        df[df > np.abs(df)] = nan
-        expected.values[mask.values] = nan
-        assert_frame_equal(df, expected)
-
-        # set from DataFrame
-        expected = df.copy()
-        df[df > np.abs(df)] = df * 2
-        np.putmask(expected.values, mask.values, df.values * 2)
-        assert_frame_equal(df, expected)
-
-    def test_setitem_cast(self):
-        self.frame['D'] = self.frame['D'].astype('i8')
-        self.assertEqual(self.frame['D'].dtype, np.int64)
-
-        # #669, should not cast?
-        # this is now set to int64, which means a replacement of the column to
-        # the value dtype (and nothing to do with the existing dtype)
-        self.frame['B'] = 0
-        self.assertEqual(self.frame['B'].dtype, np.int64)
-
-        # cast if pass array of course
-        self.frame['B'] = np.arange(len(self.frame))
-        self.assertTrue(issubclass(self.frame['B'].dtype.type, np.integer))
-
-        self.frame['foo'] = 'bar'
-        self.frame['foo'] = 0
-        self.assertEqual(self.frame['foo'].dtype, np.int64)
-
-        self.frame['foo'] = 'bar'
-        self.frame['foo'] = 2.5
-        self.assertEqual(self.frame['foo'].dtype, np.float64)
-
-        self.frame['something'] = 0
-        self.assertEqual(self.frame['something'].dtype, np.int64)
-        self.frame['something'] = 2
-        self.assertEqual(self.frame['something'].dtype, np.int64)
-        self.frame['something'] = 2.5
-        self.assertEqual(self.frame['something'].dtype, np.float64)
-
-        # GH 7704
-        # dtype conversion on setting
-        df = DataFrame(np.random.rand(30, 3), columns=tuple('ABC'))
-        df['event'] = np.nan
-        df.loc[10,'event'] = 'foo'
-        result = df.get_dtype_counts().sort_values()
-        expected = Series({'float64' : 3, 'object' : 1 }).sort_values()
-        assert_series_equal(result, expected)
-
-        # Test that data type is preserved . #5782
-        df = DataFrame({'one': np.arange(6, dtype=np.int8)})
-        df.loc[1, 'one'] = 6
-        self.assertEqual(df.dtypes.one, np.dtype(np.int8))
-        df.one = np.int8(7)
-        self.assertEqual(df.dtypes.one, np.dtype(np.int8))
-
-    def test_setitem_boolean_column(self):
-        expected = self.frame.copy()
-        mask = self.frame['A'] > 0
-
-        self.frame.ix[mask, 'B'] = 0
-        expected.values[mask.values, 1] = 0
-
-        assert_frame_equal(self.frame, expected)
-
-    def test_setitem_corner(self):
-        # corner case
-        df = DataFrame({'B': [1., 2., 3.],
-                        'C': ['a', 'b', 'c']},
-                       index=np.arange(3))
-        del df['B']
-        df['B'] = [1., 2., 3.]
-        self.assertIn('B', df)
-        self.assertEqual(len(df.columns), 2)
-
-        df['A'] = 'beginning'
-        df['E'] = 'foo'
-        df['D'] = 'bar'
-        df[datetime.now()] = 'date'
-        df[datetime.now()] = 5.
-
-        # what to do when empty frame with index
-        dm = DataFrame(index=self.frame.index)
-        dm['A'] = 'foo'
-        dm['B'] = 'bar'
-        self.assertEqual(len(dm.columns), 2)
-        self.assertEqual(dm.values.dtype, np.object_)
-
-        # upcast
-        dm['C'] = 1
-        self.assertEqual(dm['C'].dtype, np.int64)
-
-        dm['E'] = 1.
-        self.assertEqual(dm['E'].dtype, np.float64)
-
-        # set existing column
-        dm['A'] = 'bar'
-        self.assertEqual('bar', dm['A'][0])
-
-        dm = DataFrame(index=np.arange(3))
-        dm['A'] = 1
-        dm['foo'] = 'bar'
-        del dm['foo']
-        dm['foo'] = 'bar'
-        self.assertEqual(dm['foo'].dtype, np.object_)
-
-        dm['coercable'] = ['1', '2', '3']
-        self.assertEqual(dm['coercable'].dtype, np.object_)
-
-    def test_setitem_corner2(self):
-        data = {"title": ['foobar', 'bar', 'foobar'] + ['foobar'] * 17,
-                "cruft": np.random.random(20)}
-
-        df = DataFrame(data)
-        ix = df[df['title'] == 'bar'].index
-
-        df.ix[ix, ['title']] = 'foobar'
-        df.ix[ix, ['cruft']] = 0
-
-        assert(df.ix[1, 'title'] == 'foobar')
-        assert(df.ix[1, 'cruft'] == 0)
-
-    def test_setitem_ambig(self):
-        # difficulties with mixed-type data
-        from decimal import Decimal
-
-        # created as float type
-        dm = DataFrame(index=lrange(3), columns=lrange(3))
-
-        coercable_series = Series([Decimal(1) for _ in range(3)],
-                                  index=lrange(3))
-        uncoercable_series = Series(['foo', 'bzr', 'baz'], index=lrange(3))
-
-        dm[0] = np.ones(3)
-        self.assertEqual(len(dm.columns), 3)
-        # self.assertIsNone(dm.objects)
-
-        dm[1] = coercable_series
-        self.assertEqual(len(dm.columns), 3)
-        # self.assertIsNone(dm.objects)
-
-        dm[2] = uncoercable_series
-        self.assertEqual(len(dm.columns), 3)
-        # self.assertIsNotNone(dm.objects)
-        self.assertEqual(dm[2].dtype, np.object_)
-
-    def test_setitem_clear_caches(self):
-        # GH #304
-        df = DataFrame({'x': [1.1, 2.1, 3.1, 4.1], 'y': [5.1, 6.1, 7.1, 8.1]},
-                       index=[0, 1, 2, 3])
-        df.insert(2, 'z', np.nan)
-
-        # cache it
-        foo = df['z']
-
-        df.ix[2:, 'z'] = 42
-
-        expected = Series([np.nan, np.nan, 42, 42], index=df.index, name='z')
-        self.assertIsNot(df['z'], foo)
-        assert_series_equal(df['z'], expected)
-
-    def test_setitem_None(self):
-        # GH #766
-        self.frame[None] = self.frame['A']
-        assert_series_equal(self.frame.iloc[:,-1], self.frame['A'], check_names=False)
-        assert_series_equal(self.frame.loc[:,None], self.frame['A'], check_names=False)
-        assert_series_equal(self.frame[None], self.frame['A'], check_names=False)
-        repr(self.frame)
-
-    def test_setitem_empty(self):
-        # GH 9596
-        df = pd.DataFrame({'a': ['1', '2', '3'],
-                           'b': ['11', '22', '33'],
-                           'c': ['111', '222', '333']})
-
-        result = df.copy()
-        result.loc[result.b.isnull(), 'a'] = result.a
-        assert_frame_equal(result, df)
-
-    def test_setitem_empty_frame_with_boolean(self):
-        # Test for issue #10126
-
-        for dtype in ('float', 'int64'):
-            for df in [
-                    pd.DataFrame(dtype=dtype),
-                    pd.DataFrame(dtype=dtype, index=[1]),
-                    pd.DataFrame(dtype=dtype, columns=['A']),
-            ]:
-                df2 = df.copy()
-                df[df > df2] = 47
-                assert_frame_equal(df, df2)
-
-    def test_getitem_empty_frame_with_boolean(self):
-        # Test for issue #11859
-
-        df = pd.DataFrame()
-        df2 = df[df>0]
-        assert_frame_equal(df, df2)
-
-    def test_delitem_corner(self):
-        f = self.frame.copy()
-        del f['D']
-        self.assertEqual(len(f.columns), 3)
-        self.assertRaises(KeyError, f.__delitem__, 'D')
-        del f['B']
-        self.assertEqual(len(f.columns), 2)
-
-    def test_getitem_fancy_2d(self):
-        f = self.frame
-        ix = f.ix
-
-        assert_frame_equal(ix[:, ['B', 'A']], f.reindex(columns=['B', 'A']))
-
-        subidx = self.frame.index[[5, 4, 1]]
-        assert_frame_equal(ix[subidx, ['B', 'A']],
-                           f.reindex(index=subidx, columns=['B', 'A']))
-
-        # slicing rows, etc.
-        assert_frame_equal(ix[5:10], f[5:10])
-        assert_frame_equal(ix[5:10, :], f[5:10])
-        assert_frame_equal(ix[:5, ['A', 'B']],
-                           f.reindex(index=f.index[:5], columns=['A', 'B']))
-
-        # slice rows with labels, inclusive!
-        expected = ix[5:11]
-        result = ix[f.index[5]:f.index[10]]
-        assert_frame_equal(expected, result)
-
-        # slice columns
-        assert_frame_equal(ix[:, :2], f.reindex(columns=['A', 'B']))
-
-        # get view
-        exp = f.copy()
-        ix[5:10].values[:] = 5
-        exp.values[5:10] = 5
-        assert_frame_equal(f, exp)
-
-        self.assertRaises(ValueError, ix.__getitem__, f > 0.5)
-
-    def test_slice_floats(self):
-        index = [52195.504153, 52196.303147, 52198.369883]
-        df = DataFrame(np.random.rand(3, 2), index=index)
-
-        s1 = df.ix[52195.1:52196.5]
-        self.assertEqual(len(s1), 2)
-
-        s1 = df.ix[52195.1:52196.6]
-        self.assertEqual(len(s1), 2)
-
-        s1 = df.ix[52195.1:52198.9]
-        self.assertEqual(len(s1), 3)
-
-    def test_getitem_fancy_slice_integers_step(self):
-        df = DataFrame(np.random.randn(10, 5))
-
-        # this is OK
-        result = df.ix[:8:2]
-        df.ix[:8:2] = np.nan
-        self.assertTrue(isnull(df.ix[:8:2]).values.all())
-
-    def test_getitem_setitem_integer_slice_keyerrors(self):
-        df = DataFrame(np.random.randn(10, 5), index=lrange(0, 20, 2))
-
-        # this is OK
-        cp = df.copy()
-        cp.ix[4:10] = 0
-        self.assertTrue((cp.ix[4:10] == 0).values.all())
-
-        # so is this
-        cp = df.copy()
-        cp.ix[3:11] = 0
-        self.assertTrue((cp.ix[3:11] == 0).values.all())
-
-        result = df.ix[4:10]
-        result2 = df.ix[3:11]
-        expected = df.reindex([4, 6, 8, 10])
-
-        assert_frame_equal(result, expected)
-        assert_frame_equal(result2, expected)
-
-        # non-monotonic, raise KeyError
-        df2 = df.iloc[lrange(5) + lrange(5, 10)[::-1]]
-        self.assertRaises(KeyError, df2.ix.__getitem__, slice(3, 11))
-        self.assertRaises(KeyError, df2.ix.__setitem__, slice(3, 11), 0)
-
-    def test_setitem_fancy_2d(self):
-        f = self.frame
-        ix = f.ix
-
-        # case 1
-        frame = self.frame.copy()
-        expected = frame.copy()
-        frame.ix[:, ['B', 'A']] = 1
-        expected['B'] = 1.
-        expected['A'] = 1.
-        assert_frame_equal(frame, expected)
-
-        # case 2
-        frame = self.frame.copy()
-        frame2 = self.frame.copy()
-
-        expected = frame.copy()
-
-        subidx = self.frame.index[[5, 4, 1]]
-        values = randn(3, 2)
-
-        frame.ix[subidx, ['B', 'A']] = values
-        frame2.ix[[5, 4, 1], ['B', 'A']] = values
-
-        expected['B'].ix[subidx] = values[:, 0]
-        expected['A'].ix[subidx] = values[:, 1]
-
-        assert_frame_equal(frame, expected)
-        assert_frame_equal(frame2, expected)
-
-        # case 3: slicing rows, etc.
-        frame = self.frame.copy()
-
-        expected1 = self.frame.copy()
-        frame.ix[5:10] = 1.
-        expected1.values[5:10] = 1.
-        assert_frame_equal(frame, expected1)
-
-        expected2 = self.frame.copy()
-        arr = randn(5, len(frame.columns))
-        frame.ix[5:10] = arr
-        expected2.values[5:10] = arr
-        assert_frame_equal(frame, expected2)
-
-        # case 4
-        frame = self.frame.copy()
-        frame.ix[5:10, :] = 1.
-        assert_frame_equal(frame, expected1)
-        frame.ix[5:10, :] = arr
-        assert_frame_equal(frame, expected2)
-
-        # case 5
-        frame = self.frame.copy()
-        frame2 = self.frame.copy()
-
-        expected = self.frame.copy()
-        values = randn(5, 2)
-
-        frame.ix[:5, ['A', 'B']] = values
-        expected['A'][:5] = values[:, 0]
-        expected['B'][:5] = values[:, 1]
-        assert_frame_equal(frame, expected)
-
-        frame2.ix[:5, [0, 1]] = values
-        assert_frame_equal(frame2, expected)
-
-        # case 6: slice rows with labels, inclusive!
-        frame = self.frame.copy()
-        expected = self.frame.copy()
-
-        frame.ix[frame.index[5]:frame.index[10]] = 5.
-        expected.values[5:11] = 5
-        assert_frame_equal(frame, expected)
-
-        # case 7: slice columns
-        frame = self.frame.copy()
-        frame2 = self.frame.copy()
-        expected = self.frame.copy()
-
-        # slice indices
-        frame.ix[:, 1:3] = 4.
-        expected.values[:, 1:3] = 4.
-        assert_frame_equal(frame, expected)
-
-        # slice with labels
-        frame.ix[:, 'B':'C'] = 4.
-        assert_frame_equal(frame, expected)
-
-        # new corner case of boolean slicing / setting
-        frame = DataFrame(lzip([2, 3, 9, 6, 7], [np.nan] * 5),
-                          columns=['a', 'b'])
-        lst = [100]
-        lst.extend([np.nan] * 4)
-        expected = DataFrame(lzip([100, 3, 9, 6, 7], lst),
-                             columns=['a', 'b'])
-        frame[frame['a'] == 2] = 100
-        assert_frame_equal(frame, expected)
-
-    def test_fancy_getitem_slice_mixed(self):
-        sliced = self.mixed_frame.ix[:, -3:]
-        self.assertEqual(sliced['D'].dtype, np.float64)
-
-        # get view with single block
-        # setting it triggers setting with copy
-        sliced = self.frame.ix[:, -3:]
-        def f():
-            sliced['C'] = 4.
-        self.assertRaises(com.SettingWithCopyError, f)
-        self.assertTrue((self.frame['C'] == 4).all())
-
-    def test_fancy_setitem_int_labels(self):
-        # integer index defers to label-based indexing
-
-        df = DataFrame(np.random.randn(10, 5), index=np.arange(0, 20, 2))
-
-        tmp = df.copy()
-        exp = df.copy()
-        tmp.ix[[0, 2, 4]] = 5
-        exp.values[:3] = 5
-        assert_frame_equal(tmp, exp)
-
-        tmp = df.copy()
-        exp = df.copy()
-        tmp.ix[6] = 5
-        exp.values[3] = 5
-        assert_frame_equal(tmp, exp)
-
-        tmp = df.copy()
-        exp = df.copy()
-        tmp.ix[:, 2] = 5
-
-        # tmp correctly sets the dtype
-        # so match the exp way
-        exp[2] = 5
-        assert_frame_equal(tmp, exp)
-
-    def test_fancy_getitem_int_labels(self):
-        df = DataFrame(np.random.randn(10, 5), index=np.arange(0, 20, 2))
-
-        result = df.ix[[4, 2, 0], [2, 0]]
-        expected = df.reindex(index=[4, 2, 0], columns=[2, 0])
-        assert_frame_equal(result, expected)
-
-        result = df.ix[[4, 2, 0]]
-        expected = df.reindex(index=[4, 2, 0])
-        assert_frame_equal(result, expected)
-
-        result = df.ix[4]
-        expected = df.xs(4)
-        assert_series_equal(result, expected)
-
-        result = df.ix[:, 3]
-        expected = df[3]
-        assert_series_equal(result, expected)
-
-    def test_fancy_index_int_labels_exceptions(self):
-        df = DataFrame(np.random.randn(10, 5), index=np.arange(0, 20, 2))
-
-        # labels that aren't contained
-        self.assertRaises(KeyError, df.ix.__setitem__,
-                          ([0, 1, 2], [2, 3, 4]), 5)
-
-        # try to set indices not contained in frame
-        self.assertRaises(KeyError,
-                          self.frame.ix.__setitem__,
-                          ['foo', 'bar', 'baz'], 1)
-        self.assertRaises(KeyError,
-                          self.frame.ix.__setitem__,
-                          (slice(None, None), ['E']), 1)
-
-        # partial setting now allows this GH2578
-        #self.assertRaises(KeyError,
-        #                  self.frame.ix.__setitem__,
-        #                  (slice(None, None), 'E'), 1)
-
-    def test_setitem_fancy_mixed_2d(self):
-        self.mixed_frame.ix[:5, ['C', 'B', 'A']] = 5
-        result = self.mixed_frame.ix[:5, ['C', 'B', 'A']]
-        self.assertTrue((result.values == 5).all())
-
-        self.mixed_frame.ix[5] = np.nan
-        self.assertTrue(isnull(self.mixed_frame.ix[5]).all())
-
-        self.mixed_frame.ix[5] = self.mixed_frame.ix[6]
-        assert_series_equal(self.mixed_frame.ix[5], self.mixed_frame.ix[6],
-                            check_names=False)
-
-        # #1432
-        df = DataFrame({1: [1., 2., 3.],
-                        2: [3, 4, 5]})
-        self.assertTrue(df._is_mixed_type)
-
-        df.ix[1] = [5, 10]
-
-        expected = DataFrame({1: [1., 5., 3.],
-                              2: [3, 10, 5]})
-
-        assert_frame_equal(df, expected)
-
-    def test_ix_align(self):
-        b = Series(randn(10), name=0).sort_values()
-        df_orig = DataFrame(randn(10, 4))
-        df = df_orig.copy()
-
-        df.ix[:, 0] = b
-        assert_series_equal(df.ix[:, 0].reindex(b.index), b)
-
-        dft = df_orig.T
-        dft.ix[0, :] = b
-        assert_series_equal(dft.ix[0, :].reindex(b.index), b)
-
-        df = df_orig.copy()
-        df.ix[:5, 0] = b
-        s = df.ix[:5, 0]
-        assert_series_equal(s, b.reindex(s.index))
-
-        dft = df_orig.T
-        dft.ix[0, :5] = b
-        s = dft.ix[0, :5]
-        assert_series_equal(s, b.reindex(s.index))
-
-        df = df_orig.copy()
-        idx = [0, 1, 3, 5]
-        df.ix[idx, 0] = b
-        s = df.ix[idx, 0]
-        assert_series_equal(s, b.reindex(s.index))
-
-        dft = df_orig.T
-        dft.ix[0, idx] = b
-        s = dft.ix[0, idx]
-        assert_series_equal(s, b.reindex(s.index))
-
-    def test_ix_frame_align(self):
-        b = DataFrame(np.random.randn(3, 4))
-        df_orig = DataFrame(randn(10, 4))
-        df = df_orig.copy()
-
-        df.ix[:3] = b
-        out = b.ix[:3]
-        assert_frame_equal(out, b)
-
-        b.sort_index(inplace=True)
-
-        df = df_orig.copy()
-        df.ix[[0, 1, 2]] = b
-        out = df.ix[[0, 1, 2]].reindex(b.index)
-        assert_frame_equal(out, b)
-
-        df = df_orig.copy()
-        df.ix[:3] = b
-        out = df.ix[:3]
-        assert_frame_equal(out, b.reindex(out.index))
-
-    def test_getitem_setitem_non_ix_labels(self):
-        df = tm.makeTimeDataFrame()
-
-        start, end = df.index[[5, 10]]
-
-        result = df.ix[start:end]
-        result2 = df[start:end]
-        expected = df[5:11]
-        assert_frame_equal(result, expected)
-        assert_frame_equal(result2, expected)
-
-        result = df.copy()
-        result.ix[start:end] = 0
-        result2 = df.copy()
-        result2[start:end] = 0
-        expected = df.copy()
-        expected[5:11] = 0
-        assert_frame_equal(result, expected)
-        assert_frame_equal(result2, expected)
-
-    def test_ix_multi_take(self):
-        df = DataFrame(np.random.randn(3, 2))
-        rs = df.ix[df.index == 0, :]
-        xp = df.reindex([0])
-        assert_frame_equal(rs, xp)
-
-        """ #1321
-        df = DataFrame(np.random.randn(3, 2))
-        rs = df.ix[df.index==0, df.columns==1]
-        xp = df.reindex([0], [1])
-        assert_frame_equal(rs, xp)
-        """
-
-    def test_ix_multi_take_nonint_index(self):
-        df = DataFrame(np.random.randn(3, 2), index=['x', 'y', 'z'],
-                       columns=['a', 'b'])
-        rs = df.ix[[0], [0]]
-        xp = df.reindex(['x'], columns=['a'])
-        assert_frame_equal(rs, xp)
-
-    def test_ix_multi_take_multiindex(self):
-        df = DataFrame(np.random.randn(3, 2), index=['x', 'y', 'z'],
-                       columns=[['a', 'b'], ['1', '2']])
-        rs = df.ix[[0], [0]]
-        xp = df.reindex(['x'], columns=[('a', '1')])
-        assert_frame_equal(rs, xp)
-
-    def test_ix_dup(self):
-        idx = Index(['a', 'a', 'b', 'c', 'd', 'd'])
-        df = DataFrame(np.random.randn(len(idx), 3), idx)
-
-        sub = df.ix[:'d']
-        assert_frame_equal(sub, df)
-
-        sub = df.ix['a':'c']
-        assert_frame_equal(sub, df.ix[0:4])
-
-        sub = df.ix['b':'d']
-        assert_frame_equal(sub, df.ix[2:])
-
-    def test_getitem_fancy_1d(self):
-        f = self.frame
-        ix = f.ix
-
-        # return self if no slicing...for now
-        self.assertIs(ix[:, :], f)
-
-        # low dimensional slice
-        xs1 = ix[2, ['C', 'B', 'A']]
-        xs2 = f.xs(f.index[2]).reindex(['C', 'B', 'A'])
-        assert_series_equal(xs1, xs2)
-
-        ts1 = ix[5:10, 2]
-        ts2 = f[f.columns[2]][5:10]
-        assert_series_equal(ts1, ts2)
-
-        # positional xs
-        xs1 = ix[0]
-        xs2 = f.xs(f.index[0])
-        assert_series_equal(xs1, xs2)
-
-        xs1 = ix[f.index[5]]
-        xs2 = f.xs(f.index[5])
-        assert_series_equal(xs1, xs2)
-
-        # single column
-        assert_series_equal(ix[:, 'A'], f['A'])
-
-        # return view
-        exp = f.copy()
-        exp.values[5] = 4
-        ix[5][:] = 4
-        assert_frame_equal(exp, f)
-
-        exp.values[:, 1] = 6
-        ix[:, 1][:] = 6
-        assert_frame_equal(exp, f)
-
-        # slice of mixed-frame
-        xs = self.mixed_frame.ix[5]
-        exp = self.mixed_frame.xs(self.mixed_frame.index[5])
-        assert_series_equal(xs, exp)
-
-    def test_setitem_fancy_1d(self):
-
-        # case 1: set cross-section for indices
-        frame = self.frame.copy()
-        expected = self.frame.copy()
-
-        frame.ix[2, ['C', 'B', 'A']] = [1., 2., 3.]
-        expected['C'][2] = 1.
-        expected['B'][2] = 2.
-        expected['A'][2] = 3.
-        assert_frame_equal(frame, expected)
-
-        frame2 = self.frame.copy()
-        frame2.ix[2, [3, 2, 1]] = [1., 2., 3.]
-        assert_frame_equal(frame, expected)
-
-        # case 2, set a section of a column
-        frame = self.frame.copy()
-        expected = self.frame.copy()
-
-        vals = randn(5)
-        expected.values[5:10, 2] = vals
-        frame.ix[5:10, 2] = vals
-        assert_frame_equal(frame, expected)
-
-        frame2 = self.frame.copy()
-        frame2.ix[5:10, 'B'] = vals
-        assert_frame_equal(frame, expected)
-
-        # case 3: full xs
-        frame = self.frame.copy()
-        expected = self.frame.copy()
-
-        frame.ix[4] = 5.
-        expected.values[4] = 5.
-        assert_frame_equal(frame, expected)
-
-        frame.ix[frame.index[4]] = 6.
-        expected.values[4] = 6.
-        assert_frame_equal(frame, expected)
-
-        # single column
-        frame = self.frame.copy()
-        expected = self.frame.copy()
-
-        frame.ix[:, 'A'] = 7.
-        expected['A'] = 7.
-        assert_frame_equal(frame, expected)
-
-    def test_getitem_fancy_scalar(self):
-        f = self.frame
-        ix = f.ix
-        # individual value
-        for col in f.columns:
-            ts = f[col]
-            for idx in f.index[::5]:
-                assert_almost_equal(ix[idx, col], ts[idx])
-
-    def test_setitem_fancy_scalar(self):
-        f = self.frame
-        expected = self.frame.copy()
-        ix = f.ix
-        # individual value
-        for j, col in enumerate(f.columns):
-            ts = f[col]
-            for idx in f.index[::5]:
-                i = f.index.get_loc(idx)
-                val = randn()
-                expected.values[i, j] = val
-                ix[idx, col] = val
-        assert_frame_equal(f, expected)
-
-    def test_getitem_fancy_boolean(self):
-        f = self.frame
-        ix = f.ix
-
-        expected = f.reindex(columns=['B', 'D'])
-        result = ix[:, [False, True, False, True]]
-        assert_frame_equal(result, expected)
-
-        expected = f.reindex(index=f.index[5:10], columns=['B', 'D'])
-        result = ix[5:10, [False, True, False, True]]
-        assert_frame_equal(result, expected)
-
-        boolvec = f.index > f.index[7]
-        expected = f.reindex(index=f.index[boolvec])
-        result = ix[boolvec]
-        assert_frame_equal(result, expected)
-        result = ix[boolvec, :]
-        assert_frame_equal(result, expected)
-
-        result = ix[boolvec, 2:]
-        expected = f.reindex(index=f.index[boolvec],
-                             columns=['C', 'D'])
-        assert_frame_equal(result, expected)
-
-    def test_setitem_fancy_boolean(self):
-        # from 2d, set with booleans
-        frame = self.frame.copy()
-        expected = self.frame.copy()
-
-        mask = frame['A'] > 0
-        frame.ix[mask] = 0.
-        expected.values[mask.values] = 0.
-        assert_frame_equal(frame, expected)
-
-        frame = self.frame.copy()
-        expected = self.frame.copy()
-        frame.ix[mask, ['A', 'B']] = 0.
-        expected.values[mask.values, :2] = 0.
-        assert_frame_equal(frame, expected)
-
-    def test_getitem_fancy_ints(self):
-        result = self.frame.ix[[1, 4, 7]]
-        expected = self.frame.ix[self.frame.index[[1, 4, 7]]]
-        assert_frame_equal(result, expected)
-
-        result = self.frame.ix[:, [2, 0, 1]]
-        expected = self.frame.ix[:, self.frame.columns[[2, 0, 1]]]
-        assert_frame_equal(result, expected)
-
-    def test_getitem_setitem_fancy_exceptions(self):
-        ix = self.frame.ix
-        with assertRaisesRegexp(IndexingError, 'Too many indexers'):
-            ix[:, :, :]
-
-        with assertRaises(IndexingError):
-            ix[:, :, :] = 1
-
-    def test_getitem_setitem_boolean_misaligned(self):
-        # boolean index misaligned labels
-        mask = self.frame['A'][::-1] > 1
-
-        result = self.frame.ix[mask]
-        expected = self.frame.ix[mask[::-1]]
-        assert_frame_equal(result, expected)
-
-        cp = self.frame.copy()
-        expected = self.frame.copy()
-        cp.ix[mask] = 0
-        expected.ix[mask] = 0
-        assert_frame_equal(cp, expected)
-
-    def test_getitem_setitem_boolean_multi(self):
-        df = DataFrame(np.random.randn(3, 2))
-
-        # get
-        k1 = np.array([True, False, True])
-        k2 = np.array([False, True])
-        result = df.ix[k1, k2]
-        expected = df.ix[[0, 2], [1]]
-        assert_frame_equal(result, expected)
-
-        expected = df.copy()
-        df.ix[np.array([True, False, True]),
-              np.array([False, True])] = 5
-        expected.ix[[0, 2], [1]] = 5
-        assert_frame_equal(df, expected)
-
-    def test_getitem_setitem_float_labels(self):
-        index = Index([1.5, 2, 3, 4, 5])
-        df = DataFrame(np.random.randn(5, 5), index=index)
-
-        result = df.ix[1.5:4]
-        expected = df.reindex([1.5, 2, 3, 4])
-        assert_frame_equal(result, expected)
-        self.assertEqual(len(result), 4)
-
-        result = df.ix[4:5]
-        expected = df.reindex([4, 5])  # reindex with int
-        assert_frame_equal(result, expected, check_index_type=False)
-        self.assertEqual(len(result), 2)
-
-        result = df.ix[4:5]
-        expected = df.reindex([4.0, 5.0])  # reindex with float
-        assert_frame_equal(result, expected)
-        self.assertEqual(len(result), 2)
-
-        # loc_float changes this to work properly
-        result = df.ix[1:2]
-        expected = df.iloc[0:2]
-        assert_frame_equal(result, expected)
-
-        df.ix[1:2] = 0
-        result = df[1:2]
-        self.assertTrue((result==0).all().all())
-
-        # #2727
-        index = Index([1.0, 2.5, 3.5, 4.5, 5.0])
-        df = DataFrame(np.random.randn(5, 5), index=index)
-
-        # positional slicing only via iloc!
-        # stacklevel=False -> needed stacklevel depends on index type
-        with tm.assert_produces_warning(FutureWarning, check_stacklevel=False):
-            result = df.iloc[1.0:5]
-
-        expected = df.reindex([2.5, 3.5, 4.5, 5.0])
-        assert_frame_equal(result, expected)
-        self.assertEqual(len(result), 4)
-
-        result = df.iloc[4:5]
-        expected = df.reindex([5.0])
-        assert_frame_equal(result, expected)
-        self.assertEqual(len(result), 1)
-
-        # GH 4892, float indexers in iloc are deprecated
-        import warnings
-        warnings.filterwarnings(action='error', category=FutureWarning)
-
-        cp = df.copy()
-        def f():
-            cp.iloc[1.0:5] = 0
-        self.assertRaises(FutureWarning, f)
-        def f():
-            result = cp.iloc[1.0:5] == 0
-        self.assertRaises(FutureWarning, f)
-        self.assertTrue(result.values.all())
-        self.assertTrue((cp.iloc[0:1] == df.iloc[0:1]).values.all())
-
-        warnings.filterwarnings(action='default', category=FutureWarning)
-
-        cp = df.copy()
-        cp.iloc[4:5] = 0
-        self.assertTrue((cp.iloc[4:5] == 0).values.all())
-        self.assertTrue((cp.iloc[0:4] == df.iloc[0:4]).values.all())
-
-        # float slicing
-        result = df.ix[1.0:5]
-        expected = df
-        assert_frame_equal(result, expected)
-        self.assertEqual(len(result), 5)
-
-        result = df.ix[1.1:5]
-        expected = df.reindex([2.5, 3.5, 4.5, 5.0])
-        assert_frame_equal(result, expected)
-        self.assertEqual(len(result), 4)
-
-        result = df.ix[4.51:5]
-        expected = df.reindex([5.0])
-        assert_frame_equal(result, expected)
-        self.assertEqual(len(result), 1)
-
-        result = df.ix[1.0:5.0]
-        expected = df.reindex([1.0, 2.5, 3.5, 4.5, 5.0])
-        assert_frame_equal(result, expected)
-        self.assertEqual(len(result), 5)
-
-        cp = df.copy()
-        cp.ix[1.0:5.0] = 0
-        result = cp.ix[1.0:5.0]
-        self.assertTrue((result == 0).values.all())
-
-    def test_setitem_single_column_mixed(self):
-        df = DataFrame(randn(5, 3), index=['a', 'b', 'c', 'd', 'e'],
-                       columns=['foo', 'bar', 'baz'])
-        df['str'] = 'qux'
-        df.ix[::2, 'str'] = nan
-        expected = [nan, 'qux', nan, 'qux', nan]
-        assert_almost_equal(df['str'].values, expected)
-
-    def test_setitem_single_column_mixed_datetime(self):
-        df = DataFrame(randn(5, 3), index=['a', 'b', 'c', 'd', 'e'],
-                       columns=['foo', 'bar', 'baz'])
-
-        df['timestamp'] = Timestamp('20010102')
-
-        # check our dtypes
-        result = df.get_dtype_counts()
-        expected = Series({'float64': 3, 'datetime64[ns]': 1})
-        assert_series_equal(result, expected)
-
-        # set an allowable datetime64 type
-        from pandas import tslib
-        df.ix['b', 'timestamp'] = tslib.iNaT
-        self.assertTrue(com.isnull(df.ix['b', 'timestamp']))
-
-        # allow this syntax
-        df.ix['c', 'timestamp'] = nan
-        self.assertTrue(com.isnull(df.ix['c', 'timestamp']))
-
-        # allow this syntax
-        df.ix['d', :] = nan
-        self.assertTrue(com.isnull(df.ix['c', :]).all() == False)
-
-        # as of GH 3216 this will now work!
-        # try to set with a list like item
-        #self.assertRaises(
-        #    Exception, df.ix.__setitem__, ('d', 'timestamp'), [nan])
-
-    def test_setitem_frame(self):
-        piece = self.frame.ix[:2, ['A', 'B']]
-        self.frame.ix[-2:, ['A', 'B']] = piece.values
-        assert_almost_equal(self.frame.ix[-2:, ['A', 'B']].values,
-                            piece.values)
-
-        # GH 3216
-
-        # already aligned
-        f = self.mixed_frame.copy()
-        piece = DataFrame([[ 1, 2], [3, 4]], index=f.index[0:2],columns=['A', 'B'])
-        key = (slice(None,2), ['A', 'B'])
-        f.ix[key] = piece
-        assert_almost_equal(f.ix[0:2, ['A', 'B']].values,
-                            piece.values)
-
-        # rows unaligned
-        f = self.mixed_frame.copy()
-        piece = DataFrame([[ 1, 2 ], [3, 4], [5, 6], [7, 8]], index=list(f.index[0:2]) + ['foo','bar'],columns=['A', 'B'])
-        key = (slice(None,2), ['A', 'B'])
-        f.ix[key] = piece
-        assert_almost_equal(f.ix[0:2:, ['A', 'B']].values,
-                            piece.values[0:2])
-
-        # key is unaligned with values
-        f = self.mixed_frame.copy()
-        piece = f.ix[:2, ['A']]
-        piece.index = f.index[-2:]
-        key = (slice(-2, None), ['A', 'B'])
-        f.ix[key] = piece
-        piece['B'] = np.nan
-        assert_almost_equal(f.ix[-2:, ['A', 'B']].values,
-                            piece.values)
-
-        # ndarray
-        f = self.mixed_frame.copy()
-        piece = self.mixed_frame.ix[:2, ['A', 'B']]
-        key = (slice(-2, None), ['A', 'B'])
-        f.ix[key] = piece.values
-        assert_almost_equal(f.ix[-2:, ['A', 'B']].values,
-                            piece.values)
-
-
-        # needs upcasting
-        df = DataFrame([[1,2,'foo'],[3,4,'bar']],columns=['A','B','C'])
-        df2 = df.copy()
-        df2.ix[:,['A','B']] = df.ix[:,['A','B']]+0.5
-        expected = df.reindex(columns=['A','B'])
-        expected += 0.5
-        expected['C'] = df['C']
-        assert_frame_equal(df2, expected)
-
-    def test_setitem_frame_align(self):
-        piece = self.frame.ix[:2, ['A', 'B']]
-        piece.index = self.frame.index[-2:]
-        piece.columns = ['A', 'B']
-        self.frame.ix[-2:, ['A', 'B']] = piece
-        assert_almost_equal(self.frame.ix[-2:, ['A', 'B']].values,
-                            piece.values)
-
-    def test_setitem_fancy_exceptions(self):
-        pass
-
-    def test_getitem_boolean_missing(self):
-        pass
-
-    def test_setitem_boolean_missing(self):
-        pass
-
-    def test_getitem_setitem_ix_duplicates(self):
-        # #1201
-        df = DataFrame(np.random.randn(5, 3),
-                       index=['foo', 'foo', 'bar', 'baz', 'bar'])
-
-        result = df.ix['foo']
-        expected = df[:2]
-        assert_frame_equal(result, expected)
-
-        result = df.ix['bar']
-        expected = df.ix[[2, 4]]
-        assert_frame_equal(result, expected)
-
-        result = df.ix['baz']
-        expected = df.ix[3]
-        assert_series_equal(result, expected)
-
-    def test_getitem_ix_boolean_duplicates_multiple(self):
-        # #1201
-        df = DataFrame(np.random.randn(5, 3),
-                       index=['foo', 'foo', 'bar', 'baz', 'bar'])
-
-        result = df.ix[['bar']]
-        exp = df.ix[[2, 4]]
-        assert_frame_equal(result, exp)
-
-        result = df.ix[df[1] > 0]
-        exp = df[df[1] > 0]
-        assert_frame_equal(result, exp)
-
-        result = df.ix[df[0] > 0]
-        exp = df[df[0] > 0]
-        assert_frame_equal(result, exp)
-
-    def test_getitem_setitem_ix_bool_keyerror(self):
-        # #2199
-        df = DataFrame({'a': [1, 2, 3]})
-
-        self.assertRaises(KeyError, df.ix.__getitem__, False)
-        self.assertRaises(KeyError, df.ix.__getitem__, True)
-
-        self.assertRaises(KeyError, df.ix.__setitem__, False, 0)
-        self.assertRaises(KeyError, df.ix.__setitem__, True, 0)
-
-    def test_getitem_list_duplicates(self):
-        # #1943
-        df = DataFrame(np.random.randn(4, 4), columns=list('AABC'))
-        df.columns.name = 'foo'
-
-        result = df[['B', 'C']]
-        self.assertEqual(result.columns.name, 'foo')
-
-        expected = df.ix[:, 2:]
-        assert_frame_equal(result, expected)
-
-    def test_get_value(self):
-        for idx in self.frame.index:
-            for col in self.frame.columns:
-                result = self.frame.get_value(idx, col)
-                expected = self.frame[col][idx]
-                assert_almost_equal(result, expected)
-
-    def test_iteritems(self):
-        df = DataFrame([[1, 2, 3], [4, 5, 6]], columns=['a', 'a', 'b'])
-        for k, v in compat.iteritems(df):
-            self.assertEqual(type(v), Series)
-
-    def test_lookup(self):
-        def alt(df, rows, cols):
-            result = []
-            for r, c in zip(rows, cols):
-                result.append(df.get_value(r, c))
-            return result
-
-        def testit(df):
-            rows = list(df.index) * len(df.columns)
-            cols = list(df.columns) * len(df.index)
-            result = df.lookup(rows, cols)
-            expected = alt(df, rows, cols)
-            assert_almost_equal(result, expected)
-
-        testit(self.mixed_frame)
-        testit(self.frame)
-
-        df = DataFrame({'label': ['a', 'b', 'a', 'c'],
-                        'mask_a': [True, True, False, True],
-                        'mask_b': [True, False, False, False],
-                        'mask_c': [False, True, False, True]})
-        df['mask'] = df.lookup(df.index, 'mask_' + df['label'])
-        exp_mask = alt(df, df.index, 'mask_' + df['label'])
-        assert_almost_equal(df['mask'], exp_mask)
-        self.assertEqual(df['mask'].dtype, np.bool_)
-
-        with tm.assertRaises(KeyError):
-            self.frame.lookup(['xyz'], ['A'])
-
-        with tm.assertRaises(KeyError):
-            self.frame.lookup([self.frame.index[0]], ['xyz'])
-
-        with tm.assertRaisesRegexp(ValueError, 'same size'):
-            self.frame.lookup(['a', 'b', 'c'], ['a'])
-
-    def test_set_value(self):
-        for idx in self.frame.index:
-            for col in self.frame.columns:
-                self.frame.set_value(idx, col, 1)
-                assert_almost_equal(self.frame[col][idx], 1)
-
-    def test_set_value_resize(self):
-
-        res = self.frame.set_value('foobar', 'B', 0)
-        self.assertIs(res, self.frame)
-        self.assertEqual(res.index[-1], 'foobar')
-        self.assertEqual(res.get_value('foobar', 'B'), 0)
-
-        self.frame.loc['foobar','qux'] = 0
-        self.assertEqual(self.frame.get_value('foobar', 'qux'), 0)
-
-        res = self.frame.copy()
-        res3 = res.set_value('foobar', 'baz', 'sam')
-        self.assertEqual(res3['baz'].dtype, np.object_)
-
-        res = self.frame.copy()
-        res3 = res.set_value('foobar', 'baz', True)
-        self.assertEqual(res3['baz'].dtype, np.object_)
-
-        res = self.frame.copy()
-        res3 = res.set_value('foobar', 'baz', 5)
-        self.assertTrue(com.is_float_dtype(res3['baz']))
-        self.assertTrue(isnull(res3['baz'].drop(['foobar'])).all())
-        self.assertRaises(ValueError, res3.set_value, 'foobar', 'baz', 'sam')
-
-    def test_set_value_with_index_dtype_change(self):
-        df_orig = DataFrame(randn(3, 3), index=lrange(3), columns=list('ABC'))
-
-        # this is actually ambiguous as the 2 is interpreted as a positional
-        # so column is not created
-        df = df_orig.copy()
-        df.set_value('C', 2, 1.0)
-        self.assertEqual(list(df.index), list(df_orig.index) + ['C'])
-        #self.assertEqual(list(df.columns), list(df_orig.columns) + [2])
-
-        df = df_orig.copy()
-        df.loc['C', 2] = 1.0
-        self.assertEqual(list(df.index), list(df_orig.index) + ['C'])
-        #self.assertEqual(list(df.columns), list(df_orig.columns) + [2])
-
-        # create both new
-        df = df_orig.copy()
-        df.set_value('C', 'D', 1.0)
-        self.assertEqual(list(df.index), list(df_orig.index) + ['C'])
-        self.assertEqual(list(df.columns), list(df_orig.columns) + ['D'])
-
-        df = df_orig.copy()
-        df.loc['C', 'D'] = 1.0
-        self.assertEqual(list(df.index), list(df_orig.index) + ['C'])
-        self.assertEqual(list(df.columns), list(df_orig.columns) + ['D'])
-
-    def test_get_set_value_no_partial_indexing(self):
-        # partial w/ MultiIndex raise exception
-        index = MultiIndex.from_tuples([(0, 1), (0, 2), (1, 1), (1, 2)])
-        df = DataFrame(index=index, columns=lrange(4))
-        self.assertRaises(KeyError, df.get_value, 0, 1)
-        # self.assertRaises(KeyError, df.set_value, 0, 1, 0)
-
-    def test_single_element_ix_dont_upcast(self):
-        self.frame['E'] = 1
-        self.assertTrue(issubclass(self.frame['E'].dtype.type,
-                                   (int, np.integer)))
-
-        result = self.frame.ix[self.frame.index[5], 'E']
-        self.assertTrue(com.is_integer(result))
-
-    def test_irow(self):
-        df = DataFrame(np.random.randn(10, 4), index=lrange(0, 20, 2))
-
-        # 10711, deprecated
-        with tm.assert_produces_warning(FutureWarning):
-            df.irow(1)
-
-        result = df.iloc[1]
-        exp = df.ix[2]
-        assert_series_equal(result, exp)
-
-        result = df.iloc[2]
-        exp = df.ix[4]
-        assert_series_equal(result, exp)
-
-        # slice
-        result = df.iloc[slice(4, 8)]
-        expected = df.ix[8:14]
-        assert_frame_equal(result, expected)
-
-        # verify slice is view
-        # setting it makes it raise/warn
-        def f():
-            result[2] = 0.
-        self.assertRaises(com.SettingWithCopyError, f)
-        exp_col = df[2].copy()
-        exp_col[4:8] = 0.
-        assert_series_equal(df[2], exp_col)
-
-        # list of integers
-        result = df.iloc[[1, 2, 4, 6]]
-        expected = df.reindex(df.index[[1, 2, 4, 6]])
-        assert_frame_equal(result, expected)
-
-    def test_icol(self):
-
-        df = DataFrame(np.random.randn(4, 10), columns=lrange(0, 20, 2))
-
-        # 10711, deprecated
-        with tm.assert_produces_warning(FutureWarning):
-            df.icol(1)
-
-        result = df.iloc[:, 1]
-        exp = df.ix[:, 2]
-        assert_series_equal(result, exp)
-
-        result = df.iloc[:, 2]
-        exp = df.ix[:, 4]
-        assert_series_equal(result, exp)
-
-        # slice
-        result = df.iloc[:, slice(4, 8)]
-        expected = df.ix[:, 8:14]
-        assert_frame_equal(result, expected)
-
-        # verify slice is view
-        # and that we are setting a copy
-        def f():
-            result[8] = 0.
-        self.assertRaises(com.SettingWithCopyError, f)
-        self.assertTrue((df[8] == 0).all())
-
-        # list of integers
-        result = df.iloc[:, [1, 2, 4, 6]]
-        expected = df.reindex(columns=df.columns[[1, 2, 4, 6]])
-        assert_frame_equal(result, expected)
-
-    def test_irow_icol_duplicates(self):
-        # 10711, deprecated
-
-        df = DataFrame(np.random.rand(3, 3), columns=list('ABC'),
-                       index=list('aab'))
-
-        result = df.iloc[0]
-        result2 = df.ix[0]
-        tm.assertIsInstance(result, Series)
-        assert_almost_equal(result.values, df.values[0])
-        assert_series_equal(result, result2)
-
-        result = df.T.iloc[:, 0]
-        result2 = df.T.ix[:, 0]
-        tm.assertIsInstance(result, Series)
-        assert_almost_equal(result.values, df.values[0])
-        assert_series_equal(result, result2)
-
-        # multiindex
-        df = DataFrame(np.random.randn(3, 3), columns=[['i', 'i', 'j'],
-                                                       ['A', 'A', 'B']],
-                       index=[['i', 'i', 'j'], ['X', 'X', 'Y']])
-        rs = df.iloc[0]
-        xp = df.ix[0]
-        assert_series_equal(rs, xp)
-
-        rs = df.iloc[:, 0]
-        xp = df.T.ix[0]
-        assert_series_equal(rs, xp)
-
-        rs = df.iloc[:, [0]]
-        xp = df.ix[:, [0]]
-        assert_frame_equal(rs, xp)
-
-        # #2259
-        df = DataFrame([[1, 2, 3], [4, 5, 6]], columns=[1, 1, 2])
-        result = df.iloc[:, [0]]
-        expected = df.take([0], axis=1)
-        assert_frame_equal(result, expected)
-
-    def test_icol_sparse_propegate_fill_value(self):
-        from pandas.sparse.api import SparseDataFrame
-        df = SparseDataFrame({'A': [999, 1]}, default_fill_value=999)
-        self.assertTrue(len(df['A'].sp_values) == len(df.iloc[:, 0].sp_values))
-
-    def test_iget_value(self):
-        # 10711 deprecated
-
-        with tm.assert_produces_warning(FutureWarning):
-            self.frame.iget_value(0,0)
-
-        for i, row in enumerate(self.frame.index):
-            for j, col in enumerate(self.frame.columns):
-                result = self.frame.iat[i,j]
-                expected = self.frame.at[row, col]
-                assert_almost_equal(result, expected)
-
-    def test_nested_exception(self):
-        # Ignore the strange way of triggering the problem
-        # (which may get fixed), it's just a way to trigger
-        # the issue or reraising an outer exception without
-        # a named argument
-        df = DataFrame({"a": [1, 2, 3], "b": [4, 5, 6], "c": [7, 8,
-                                                              9]}).set_index(["a", "b"])
-        l = list(df.index)
-        l[0] = ["a", "b"]
-        df.index = l
-
-        try:
-            repr(df)
-        except Exception as e:
-            self.assertNotEqual(type(e), UnboundLocalError)
-
-    def test_reindex_methods(self):
-        df = pd.DataFrame({'x': list(range(5))})
-        target = np.array([-0.1, 0.9, 1.1, 1.5])
-
-        for method, expected_values in [('nearest', [0, 1, 1, 2]),
-                                        ('pad', [np.nan, 0, 1, 1]),
-                                        ('backfill', [0, 1, 2, 2])]:
-            expected = pd.DataFrame({'x': expected_values}, index=target)
-            actual = df.reindex(target, method=method)
-            assert_frame_equal(expected, actual)
-
-            actual = df.reindex_like(df, method=method, tolerance=0)
-            assert_frame_equal(df, actual)
-
-            actual = df.reindex(target, method=method, tolerance=1)
-            assert_frame_equal(expected, actual)
-
-            e2 = expected[::-1]
-            actual = df.reindex(target[::-1], method=method)
-            assert_frame_equal(e2, actual)
-
-            new_order = [3, 0, 2, 1]
-            e2 = expected.iloc[new_order]
-            actual = df.reindex(target[new_order], method=method)
-            assert_frame_equal(e2, actual)
-
-            switched_method = ('pad' if method == 'backfill'
-                               else 'backfill' if method == 'pad'
-                               else method)
-            actual = df[::-1].reindex(target, method=switched_method)
-            assert_frame_equal(expected, actual)
-
-        expected = pd.DataFrame({'x': [0, 1, 1, np.nan]}, index=target)
-        actual = df.reindex(target, method='nearest', tolerance=0.2)
-        assert_frame_equal(expected, actual)
-
-    def test_non_monotonic_reindex_methods(self):
-        dr = pd.date_range('2013-08-01', periods=6, freq='B')
-        data = np.random.randn(6,1)
-        df = pd.DataFrame(data, index=dr, columns=list('A'))
-        df_rev = pd.DataFrame(data, index=dr[[3, 4, 5] + [0, 1, 2]],
-                              columns=list('A'))
-        # index is not monotonic increasing or decreasing
-        self.assertRaises(ValueError, df_rev.reindex, df.index, method='pad')
-        self.assertRaises(ValueError, df_rev.reindex, df.index, method='ffill')
-        self.assertRaises(ValueError, df_rev.reindex, df.index, method='bfill')
-        self.assertRaises(ValueError, df_rev.reindex, df.index, method='nearest')
-
-    def test_reindex_level(self):
-        from itertools import permutations
-        icol = ['jim', 'joe', 'jolie']
-
-        def verify_first_level(df, level, idx, check_index_type=True):
-            f = lambda val: np.nonzero(df[level] == val)[0]
-            i = np.concatenate(list(map(f, idx)))
-            left = df.set_index(icol).reindex(idx, level=level)
-            right = df.iloc[i].set_index(icol)
-            assert_frame_equal(left, right, check_index_type=check_index_type)
-
-        def verify(df, level, idx, indexer, check_index_type=True):
-            left = df.set_index(icol).reindex(idx, level=level)
-            right = df.iloc[indexer].set_index(icol)
-            assert_frame_equal(left, right, check_index_type=check_index_type)
-
-        df = pd.DataFrame({'jim':list('B' * 4 + 'A' * 2 + 'C' * 3),
-                           'joe':list('abcdeabcd')[::-1],
-                           'jolie':[10, 20, 30] * 3,
-                           'joline': np.random.randint(0, 1000, 9)})
-
-        target = [['C', 'B', 'A'], ['F', 'C', 'A', 'D'], ['A'],
-                  ['A', 'B', 'C'], ['C', 'A', 'B'], ['C', 'B'], ['C', 'A'],
-                  ['A', 'B'],
['B', 'A', 'C']] - - for idx in target: - verify_first_level(df, 'jim', idx) - - # reindex by these causes different MultiIndex levels - for idx in [['D', 'F'], ['A', 'C', 'B']]: - verify_first_level(df, 'jim', idx, check_index_type=False) - - verify(df, 'joe', list('abcde'), [3, 2, 1, 0, 5, 4, 8, 7, 6]) - verify(df, 'joe', list('abcd'), [3, 2, 1, 0, 5, 8, 7, 6]) - verify(df, 'joe', list('abc'), [3, 2, 1, 8, 7, 6]) - verify(df, 'joe', list('eca'), [1, 3, 4, 6, 8]) - verify(df, 'joe', list('edc'), [0, 1, 4, 5, 6]) - verify(df, 'joe', list('eadbc'), [3, 0, 2, 1, 4, 5, 8, 7, 6]) - verify(df, 'joe', list('edwq'), [0, 4, 5]) - verify(df, 'joe', list('wq'), [], check_index_type=False) - - df = DataFrame({'jim':['mid'] * 5 + ['btm'] * 8 + ['top'] * 7, - 'joe':['3rd'] * 2 + ['1st'] * 3 + ['2nd'] * 3 + - ['1st'] * 2 + ['3rd'] * 3 + ['1st'] * 2 + - ['3rd'] * 3 + ['2nd'] * 2, - # this needs to be jointly unique with jim and joe or - # reindexing will fail ~1.5% of the time, this works - # out to needing unique groups of same size as joe - 'jolie': np.concatenate([np.random.choice(1000, x, replace=False) - for x in [2, 3, 3, 2, 3, 2, 3, 2]]), - 'joline': np.random.randn(20).round(3) * 10}) - - for idx in permutations(df['jim'].unique()): - for i in range(3): - verify_first_level(df, 'jim', idx[:i+1]) - - i = [2,3,4,0,1,8,9,5,6,7,10,11,12,13,14,18,19,15,16,17] - verify(df, 'joe', ['1st', '2nd', '3rd'], i) - - i = [0,1,2,3,4,10,11,12,5,6,7,8,9,15,16,17,18,19,13,14] - verify(df, 'joe', ['3rd', '2nd', '1st'], i) - - i = [0,1,5,6,7,10,11,12,18,19,15,16,17] - verify(df, 'joe', ['2nd', '3rd'], i) - - i = [0,1,2,3,4,10,11,12,8,9,15,16,17,13,14] - verify(df, 'joe', ['3rd', '1st'], i) - - def test_getitem_ix_float_duplicates(self): - df = pd.DataFrame(np.random.randn(3, 3), - index=[0.1, 0.2, 0.2], columns=list('abc')) - expect = df.iloc[1:] - assert_frame_equal(df.loc[0.2], expect) - assert_frame_equal(df.ix[0.2], expect) - - expect = df.iloc[1:, 0] - assert_series_equal(df.loc[0.2, 
'a'], expect) - - df.index = [1, 0.2, 0.2] - expect = df.iloc[1:] - assert_frame_equal(df.loc[0.2], expect) - assert_frame_equal(df.ix[0.2], expect) - - expect = df.iloc[1:, 0] - assert_series_equal(df.loc[0.2, 'a'], expect) - - df = pd.DataFrame(np.random.randn(4, 3), - index=[1, 0.2, 0.2, 1], columns=list('abc')) - expect = df.iloc[1:-1] - assert_frame_equal(df.loc[0.2], expect) - assert_frame_equal(df.ix[0.2], expect) - - expect = df.iloc[1:-1, 0] - assert_series_equal(df.loc[0.2, 'a'], expect) - - df.index = [0.1, 0.2, 2, 0.2] - expect = df.iloc[[1, -1]] - assert_frame_equal(df.loc[0.2], expect) - assert_frame_equal(df.ix[0.2], expect) - - expect = df.iloc[[1, -1], 0] - assert_series_equal(df.loc[0.2, 'a'], expect) - - def test_setitem_with_sparse_value(self): - # GH8131 - df = pd.DataFrame({'c_1': ['a', 'b', 'c'], 'n_1': [1., 2., 3.]}) - sp_series = pd.Series([0, 0, 1]).to_sparse(fill_value=0) - df['new_column'] = sp_series - assert_series_equal(df['new_column'], sp_series, check_names=False) - - def test_setitem_with_unaligned_sparse_value(self): - df = pd.DataFrame({'c_1': ['a', 'b', 'c'], 'n_1': [1., 2., 3.]}) - sp_series = (pd.Series([0, 0, 1], index=[2, 1, 0]) - .to_sparse(fill_value=0)) - df['new_column'] = sp_series - exp = pd.Series([1, 0, 0], name='new_column') - assert_series_equal(df['new_column'], exp) - - -_seriesd = tm.getSeriesData() -_tsd = tm.getTimeSeriesData() - -_frame = DataFrame(_seriesd) -_frame2 = DataFrame(_seriesd, columns=['D', 'C', 'B', 'A']) -_intframe = DataFrame(dict((k, v.astype(int)) - for k, v in compat.iteritems(_seriesd))) - -_tsframe = DataFrame(_tsd) - -_mixed_frame = _frame.copy() -_mixed_frame['foo'] = 'bar' - - -class SafeForSparse(object): - - _multiprocess_can_split_ = True - - def test_copy_index_name_checking(self): - # don't want to be able to modify the index stored elsewhere after - # making a copy - for attr in ('index', 'columns'): - ind = getattr(self.frame, attr) - ind.name = None - cp = self.frame.copy() - 
getattr(cp, attr).name = 'foo' - self.assertIsNone(getattr(self.frame, attr).name) - - def test_getitem_pop_assign_name(self): - s = self.frame['A'] - self.assertEqual(s.name, 'A') - - s = self.frame.pop('A') - self.assertEqual(s.name, 'A') - - s = self.frame.ix[:, 'B'] - self.assertEqual(s.name, 'B') - - s2 = s.ix[:] - self.assertEqual(s2.name, 'B') - - def test_get_value(self): - for idx in self.frame.index: - for col in self.frame.columns: - result = self.frame.get_value(idx, col) - expected = self.frame[col][idx] - assert_almost_equal(result, expected) - - def test_join_index(self): - # left / right - - f = self.frame.reindex(columns=['A', 'B'])[:10] - f2 = self.frame.reindex(columns=['C', 'D']) - - joined = f.join(f2) - self.assertTrue(f.index.equals(joined.index)) - self.assertEqual(len(joined.columns), 4) - - joined = f.join(f2, how='left') - self.assertTrue(joined.index.equals(f.index)) - self.assertEqual(len(joined.columns), 4) - - joined = f.join(f2, how='right') - self.assertTrue(joined.index.equals(f2.index)) - self.assertEqual(len(joined.columns), 4) - - # inner - - f = self.frame.reindex(columns=['A', 'B'])[:10] - f2 = self.frame.reindex(columns=['C', 'D']) - - joined = f.join(f2, how='inner') - self.assertTrue(joined.index.equals(f.index.intersection(f2.index))) - self.assertEqual(len(joined.columns), 4) - - # outer - - f = self.frame.reindex(columns=['A', 'B'])[:10] - f2 = self.frame.reindex(columns=['C', 'D']) - - joined = f.join(f2, how='outer') - self.assertTrue(tm.equalContents(self.frame.index, joined.index)) - self.assertEqual(len(joined.columns), 4) - - assertRaisesRegexp(ValueError, 'join method', f.join, f2, how='foo') - - # corner case - overlapping columns - for how in ('outer', 'left', 'inner'): - with assertRaisesRegexp(ValueError, 'columns overlap but no suffix'): - self.frame.join(self.frame, how=how) - - def test_join_index_more(self): - af = self.frame.ix[:, ['A', 'B']] - bf = self.frame.ix[::2, ['C', 'D']] - - expected = af.copy() 
- expected['C'] = self.frame['C'][::2] - expected['D'] = self.frame['D'][::2] - - result = af.join(bf) - assert_frame_equal(result, expected) - - result = af.join(bf, how='right') - assert_frame_equal(result, expected[::2]) - - result = bf.join(af, how='right') - assert_frame_equal(result, expected.ix[:, result.columns]) - - def test_join_index_series(self): - df = self.frame.copy() - s = df.pop(self.frame.columns[-1]) - joined = df.join(s) - - assert_frame_equal(joined, self.frame, check_names=False) # TODO should this check_names ? - - s.name = None - assertRaisesRegexp(ValueError, 'must have a name', df.join, s) - - def test_join_overlap(self): - df1 = self.frame.ix[:, ['A', 'B', 'C']] - df2 = self.frame.ix[:, ['B', 'C', 'D']] - - joined = df1.join(df2, lsuffix='_df1', rsuffix='_df2') - df1_suf = df1.ix[:, ['B', 'C']].add_suffix('_df1') - df2_suf = df2.ix[:, ['B', 'C']].add_suffix('_df2') - no_overlap = self.frame.ix[:, ['A', 'D']] - expected = df1_suf.join(df2_suf).join(no_overlap) - - # column order not necessarily sorted - assert_frame_equal(joined, expected.ix[:, joined.columns]) - - def test_add_prefix_suffix(self): - with_prefix = self.frame.add_prefix('foo#') - expected = ['foo#%s' % c for c in self.frame.columns] - self.assert_numpy_array_equal(with_prefix.columns, expected) - - with_suffix = self.frame.add_suffix('#foo') - expected = ['%s#foo' % c for c in self.frame.columns] - self.assert_numpy_array_equal(with_suffix.columns, expected) - - -class TestDataFrame(tm.TestCase, CheckIndexing, - SafeForSparse): - klass = DataFrame - - _multiprocess_can_split_ = True - - def setUp(self): - - self.frame = _frame.copy() - self.frame2 = _frame2.copy() - - # force these all to int64 to avoid platform testing issues - self.intframe = DataFrame(dict([ (c,s) for c,s in compat.iteritems(_intframe) ]), dtype = np.int64) - self.tsframe = _tsframe.copy() - self.mixed_frame = _mixed_frame.copy() - self.mixed_float = DataFrame({ 'A': _frame['A'].copy().astype('float32'), 
- 'B': _frame['B'].copy().astype('float32'), - 'C': _frame['C'].copy().astype('float16'), - 'D': _frame['D'].copy().astype('float64') }) - self.mixed_float2 = DataFrame({ 'A': _frame2['A'].copy().astype('float32'), - 'B': _frame2['B'].copy().astype('float32'), - 'C': _frame2['C'].copy().astype('float16'), - 'D': _frame2['D'].copy().astype('float64') }) - self.mixed_int = DataFrame({ 'A': _intframe['A'].copy().astype('int32'), - 'B': np.ones(len(_intframe['B']),dtype='uint64'), - 'C': _intframe['C'].copy().astype('uint8'), - 'D': _intframe['D'].copy().astype('int64') }) - self.all_mixed = DataFrame({'a': 1., 'b': 2, 'c': 'foo', 'float32' : np.array([1.]*10,dtype='float32'), - 'int32' : np.array([1]*10,dtype='int32'), - }, index=np.arange(10)) - self.tzframe = DataFrame({'A' : date_range('20130101',periods=3), - 'B' : date_range('20130101',periods=3,tz='US/Eastern'), - 'C' : date_range('20130101',periods=3,tz='CET')}) - self.tzframe.iloc[1,1] = pd.NaT - self.tzframe.iloc[1,2] = pd.NaT - - self.ts1 = tm.makeTimeSeries() - self.ts2 = tm.makeTimeSeries()[5:] - self.ts3 = tm.makeTimeSeries()[-5:] - self.ts4 = tm.makeTimeSeries()[1:-1] - - self.ts_dict = { - 'col1': self.ts1, - 'col2': self.ts2, - 'col3': self.ts3, - 'col4': self.ts4, - } - self.empty = DataFrame({}) - - arr = np.array([[1., 2., 3.], - [4., 5., 6.], - [7., 8., 9.]]) - - self.simple = DataFrame(arr, columns=['one', 'two', 'three'], - index=['a', 'b', 'c']) - - def test_get_axis(self): - f = self.frame - self.assertEqual(f._get_axis_number(0), 0) - self.assertEqual(f._get_axis_number(1), 1) - self.assertEqual(f._get_axis_number('index'), 0) - self.assertEqual(f._get_axis_number('rows'), 0) - self.assertEqual(f._get_axis_number('columns'), 1) - - self.assertEqual(f._get_axis_name(0), 'index') - self.assertEqual(f._get_axis_name(1), 'columns') - self.assertEqual(f._get_axis_name('index'), 'index') - self.assertEqual(f._get_axis_name('rows'), 'index') - self.assertEqual(f._get_axis_name('columns'), 'columns') 
- - self.assertIs(f._get_axis(0), f.index) - self.assertIs(f._get_axis(1), f.columns) - - assertRaisesRegexp(ValueError, 'No axis named', f._get_axis_number, 2) - assertRaisesRegexp(ValueError, 'No axis.*foo', f._get_axis_name, 'foo') - assertRaisesRegexp(ValueError, 'No axis.*None', f._get_axis_name, None) - assertRaisesRegexp(ValueError, 'No axis named', f._get_axis_number, None) - - def test_set_index(self): - idx = Index(np.arange(len(self.mixed_frame))) - - # cache it - _ = self.mixed_frame['foo'] - self.mixed_frame.index = idx - self.assertIs(self.mixed_frame['foo'].index, idx) - with assertRaisesRegexp(ValueError, 'Length mismatch'): - self.mixed_frame.index = idx[::2] - - def test_set_index_cast(self): - - # issue casting an index then set_index - df = DataFrame({'A' : [1.1,2.2,3.3], 'B' : [5.0,6.1,7.2]}, - index = [2010,2011,2012]) - expected = df.ix[2010] - new_index = df.index.astype(np.int32) - df.index = new_index - result = df.ix[2010] - assert_series_equal(result,expected) - - def test_set_index2(self): - df = DataFrame({'A': ['foo', 'foo', 'foo', 'bar', 'bar'], - 'B': ['one', 'two', 'three', 'one', 'two'], - 'C': ['a', 'b', 'c', 'd', 'e'], - 'D': np.random.randn(5), - 'E': np.random.randn(5)}) - - # new object, single-column - result = df.set_index('C') - result_nodrop = df.set_index('C', drop=False) - - index = Index(df['C'], name='C') - - expected = df.ix[:, ['A', 'B', 'D', 'E']] - expected.index = index - - expected_nodrop = df.copy() - expected_nodrop.index = index - - assert_frame_equal(result, expected) - assert_frame_equal(result_nodrop, expected_nodrop) - self.assertEqual(result.index.name, index.name) - - # inplace, single - df2 = df.copy() - - df2.set_index('C', inplace=True) - - assert_frame_equal(df2, expected) - - df3 = df.copy() - df3.set_index('C', drop=False, inplace=True) - - assert_frame_equal(df3, expected_nodrop) - - # create new object, multi-column - result = df.set_index(['A', 'B']) - result_nodrop = df.set_index(['A', 'B'], 
drop=False) - - index = MultiIndex.from_arrays([df['A'], df['B']], names=['A', 'B']) - - expected = df.ix[:, ['C', 'D', 'E']] - expected.index = index - - expected_nodrop = df.copy() - expected_nodrop.index = index - - assert_frame_equal(result, expected) - assert_frame_equal(result_nodrop, expected_nodrop) - self.assertEqual(result.index.names, index.names) - - # inplace - df2 = df.copy() - df2.set_index(['A', 'B'], inplace=True) - assert_frame_equal(df2, expected) - - df3 = df.copy() - df3.set_index(['A', 'B'], drop=False, inplace=True) - assert_frame_equal(df3, expected_nodrop) - - # corner case - with assertRaisesRegexp(ValueError, 'Index has duplicate keys'): - df.set_index('A', verify_integrity=True) - - # append - result = df.set_index(['A', 'B'], append=True) - xp = df.reset_index().set_index(['index', 'A', 'B']) - xp.index.names = [None, 'A', 'B'] - assert_frame_equal(result, xp) - - # append to existing multiindex - rdf = df.set_index(['A'], append=True) - rdf = rdf.set_index(['B', 'C'], append=True) - expected = df.set_index(['A', 'B', 'C'], append=True) - assert_frame_equal(rdf, expected) - - # Series - result = df.set_index(df.C) - self.assertEqual(result.index.name, 'C') - - def test_set_index_nonuniq(self): - df = DataFrame({'A': ['foo', 'foo', 'foo', 'bar', 'bar'], - 'B': ['one', 'two', 'three', 'one', 'two'], - 'C': ['a', 'b', 'c', 'd', 'e'], - 'D': np.random.randn(5), - 'E': np.random.randn(5)}) - with assertRaisesRegexp(ValueError, 'Index has duplicate keys'): - df.set_index('A', verify_integrity=True, inplace=True) - self.assertIn('A', df) - - def test_set_index_bug(self): - # GH1590 - df = DataFrame({'val': [0, 1, 2], 'key': ['a', 'b', 'c']}) - df2 = df.select(lambda indx: indx >= 1) - rs = df2.set_index('key') - xp = DataFrame({'val': [1, 2]}, - Index(['b', 'c'], name='key')) - assert_frame_equal(rs, xp) - - def test_set_index_pass_arrays(self): - df = DataFrame({'A': ['foo', 'bar', 'foo', 'bar', - 'foo', 'bar', 'foo', 'foo'], - 'B': ['one', 
'one', 'two', 'three', - 'two', 'two', 'one', 'three'], - 'C': np.random.randn(8), - 'D': np.random.randn(8)}) - - # multiple columns - result = df.set_index(['A', df['B'].values], drop=False) - expected = df.set_index(['A', 'B'], drop=False) - assert_frame_equal(result, expected, check_names=False) # TODO should set_index check_names ? - - def test_construction_with_categorical_index(self): - - ci = tm.makeCategoricalIndex(10) - - # with Categorical - df = DataFrame({'A' : np.random.randn(10), - 'B' : ci.values }) - idf = df.set_index('B') - str(idf) - tm.assert_index_equal(idf.index, ci, check_names=False) - self.assertEqual(idf.index.name, 'B') - - # from a CategoricalIndex - df = DataFrame({'A' : np.random.randn(10), - 'B' : ci }) - idf = df.set_index('B') - str(idf) - tm.assert_index_equal(idf.index, ci, check_names=False) - self.assertEqual(idf.index.name, 'B') - - idf = df.set_index('B').reset_index().set_index('B') - str(idf) - tm.assert_index_equal(idf.index, ci, check_names=False) - self.assertEqual(idf.index.name, 'B') - - new_df = idf.reset_index() - new_df.index = df.B - tm.assert_index_equal(new_df.index, ci, check_names=False) - self.assertEqual(idf.index.name, 'B') - - def test_set_index_cast_datetimeindex(self): - df = DataFrame({'A': [datetime(2000, 1, 1) + timedelta(i) - for i in range(1000)], - 'B': np.random.randn(1000)}) - - idf = df.set_index('A') - tm.assertIsInstance(idf.index, DatetimeIndex) - - # don't cast a DatetimeIndex WITH a tz, leave as object - # GH 6032 - i = pd.DatetimeIndex(pd.tseries.tools.to_datetime(['2013-1-1 13:00','2013-1-2 14:00'], errors="raise")).tz_localize('US/Pacific') - df = DataFrame(np.random.randn(2,1),columns=['A']) - - expected = Series(np.array([pd.Timestamp('2013-01-01 13:00:00-0800', tz='US/Pacific'), - pd.Timestamp('2013-01-02 14:00:00-0800', tz='US/Pacific')], dtype="object")) - - # convert index to series - result = Series(i) - assert_series_equal(result, expected) - - # assignt to frame - df['B'] = i - 
result = df['B'] - assert_series_equal(result, expected, check_names=False) - self.assertEqual(result.name, 'B') - - # keep the timezone - result = i.to_series(keep_tz=True) - assert_series_equal(result.reset_index(drop=True), expected) - - # convert to utc - df['C'] = i.to_series().reset_index(drop=True) - result = df['C'] - comp = DatetimeIndex(expected.values).copy() - comp.tz = None - self.assert_numpy_array_equal(result.values, comp.values) - - # list of datetimes with a tz - df['D'] = i.to_pydatetime() - result = df['D'] - assert_series_equal(result, expected, check_names=False) - self.assertEqual(result.name, 'D') - - # GH 6785 - # set the index manually - import pytz - df = DataFrame([{'ts':datetime(2014, 4, 1, tzinfo=pytz.utc), 'foo':1}]) - expected = df.set_index('ts') - df.index = df['ts'] - df.pop('ts') - assert_frame_equal(df, expected) - - # GH 3950 - # reset_index with single level - for tz in ['UTC', 'Asia/Tokyo', 'US/Eastern']: - idx = pd.date_range('1/1/2011', periods=5, freq='D', tz=tz, name='idx') - df = pd.DataFrame({'a': range(5), 'b': ['A', 'B', 'C', 'D', 'E']}, index=idx) - - expected = pd.DataFrame({'idx': [datetime(2011, 1, 1), datetime(2011, 1, 2), - datetime(2011, 1, 3), datetime(2011, 1, 4), - datetime(2011, 1, 5)], - 'a': range(5), 'b': ['A', 'B', 'C', 'D', 'E']}, - columns=['idx', 'a', 'b']) - expected['idx'] = expected['idx'].apply(lambda d: pd.Timestamp(d, tz=tz)) - assert_frame_equal(df.reset_index(), expected) - - def test_set_index_multiindexcolumns(self): - columns = MultiIndex.from_tuples([('foo', 1), ('foo', 2), ('bar', 1)]) - df = DataFrame(np.random.randn(3, 3), columns=columns) - rs = df.set_index(df.columns[0]) - xp = df.ix[:, 1:] - xp.index = df.ix[:, 0].values - xp.index.names = [df.columns[0]] - assert_frame_equal(rs, xp) - - def test_set_index_empty_column(self): - # #1971 - df = DataFrame([ - dict(a=1, p=0), - dict(a=2, m=10), - dict(a=3, m=11, p=20), - dict(a=4, m=12, p=21) - ], columns=('a', 'm', 'p', 'x')) - - # it 
works! - result = df.set_index(['a', 'x']) - repr(result) - - def test_set_columns(self): - cols = Index(np.arange(len(self.mixed_frame.columns))) - self.mixed_frame.columns = cols - with assertRaisesRegexp(ValueError, 'Length mismatch'): - self.mixed_frame.columns = cols[::2] - - def test_keys(self): - getkeys = self.frame.keys - self.assertIs(getkeys(), self.frame.columns) - - def test_column_contains_typeerror(self): - try: - self.frame.columns in self.frame - except TypeError: - pass - - def test_constructor(self): - df = DataFrame() - self.assertEqual(len(df.index), 0) - - df = DataFrame(data={}) - self.assertEqual(len(df.index), 0) - - def test_constructor_mixed(self): - index, data = tm.getMixedTypeDict() - - indexed_frame = DataFrame(data, index=index) - unindexed_frame = DataFrame(data) - - self.assertEqual(self.mixed_frame['foo'].dtype, np.object_) - - def test_constructor_cast_failure(self): - foo = DataFrame({'a': ['a', 'b', 'c']}, dtype=np.float64) - self.assertEqual(foo['a'].dtype, object) - - # GH 3010, constructing with odd arrays - df = DataFrame(np.ones((4,2))) - - # this is ok - df['foo'] = np.ones((4,2)).tolist() - - # this is not ok - self.assertRaises(ValueError, df.__setitem__, tuple(['test']), np.ones((4,2))) - - # this is ok - df['foo2'] = np.ones((4,2)).tolist() - - def test_constructor_dtype_copy(self): - orig_df = DataFrame({ - 'col1': [1.], - 'col2': [2.], - 'col3': [3.]}) - - new_df = pd.DataFrame(orig_df, dtype=float, copy=True) - - new_df['col1'] = 200. - self.assertEqual(orig_df['col1'][0], 1.) 
- - def test_constructor_dtype_nocast_view(self): - df = DataFrame([[1, 2]]) - should_be_view = DataFrame(df, dtype=df[0].dtype) - should_be_view[0][0] = 99 - self.assertEqual(df.values[0, 0], 99) - - should_be_view = DataFrame(df.values, dtype=df[0].dtype) - should_be_view[0][0] = 97 - self.assertEqual(df.values[0, 0], 97) - - def test_constructor_dtype_list_data(self): - df = DataFrame([[1, '2'], - [None, 'a']], dtype=object) - self.assertIsNone(df.ix[1, 0]) - self.assertEqual(df.ix[0, 1], '2') - - def test_constructor_list_frames(self): - - # GH 3243 - result = DataFrame([DataFrame([])]) - self.assertEqual(result.shape, (1,0)) - - result = DataFrame([DataFrame(dict(A = lrange(5)))]) - tm.assertIsInstance(result.iloc[0,0], DataFrame) - - def test_constructor_mixed_dtypes(self): - - def _make_mixed_dtypes_df(typ, ad = None): - - if typ == 'int': - dtypes = MIXED_INT_DTYPES - arrays = [ np.array(np.random.rand(10), dtype = d) for d in dtypes ] - elif typ == 'float': - dtypes = MIXED_FLOAT_DTYPES - arrays = [ np.array(np.random.randint(10, size=10), dtype = d) for d in dtypes ] - - zipper = lzip(dtypes,arrays) - for d,a in zipper: - assert(a.dtype == d) - if ad is None: - ad = dict() - ad.update(dict([ (d,a) for d,a in zipper ])) - return DataFrame(ad) - - def _check_mixed_dtypes(df, dtypes = None): - if dtypes is None: - dtypes = MIXED_FLOAT_DTYPES + MIXED_INT_DTYPES - for d in dtypes: - if d in df: - assert(df.dtypes[d] == d) - - # mixed floating and integer coexinst in the same frame - df = _make_mixed_dtypes_df('float') - _check_mixed_dtypes(df) - - # add lots of types - df = _make_mixed_dtypes_df('float', dict(A = 1, B = 'foo', C = 'bar')) - _check_mixed_dtypes(df) - - # GH 622 - df = _make_mixed_dtypes_df('int') - _check_mixed_dtypes(df) - - def test_constructor_complex_dtypes(self): - # GH10952 - a = np.random.rand(10).astype(np.complex64) - b = np.random.rand(10).astype(np.complex128) - - df = DataFrame({'a': a, 'b': b}) - self.assertEqual(a.dtype, 
df.a.dtype) - self.assertEqual(b.dtype, df.b.dtype) - - def test_constructor_rec(self): - rec = self.frame.to_records(index=False) - - # Assigning causes segfault in NumPy < 1.5.1 - # rec.dtype.names = list(rec.dtype.names)[::-1] - - index = self.frame.index - - df = DataFrame(rec) - self.assert_numpy_array_equal(df.columns, rec.dtype.names) - - df2 = DataFrame(rec, index=index) - self.assert_numpy_array_equal(df2.columns, rec.dtype.names) - self.assertTrue(df2.index.equals(index)) - - rng = np.arange(len(rec))[::-1] - df3 = DataFrame(rec, index=rng, columns=['C', 'B']) - expected = DataFrame(rec, index=rng).reindex(columns=['C', 'B']) - assert_frame_equal(df3, expected) - - def test_constructor_bool(self): - df = DataFrame({0: np.ones(10, dtype=bool), - 1: np.zeros(10, dtype=bool)}) - self.assertEqual(df.values.dtype, np.bool_) - - def test_constructor_overflow_int64(self): - values = np.array([2 ** 64 - i for i in range(1, 10)], - dtype=np.uint64) - - result = DataFrame({'a': values}) - self.assertEqual(result['a'].dtype, object) - - # #2355 - data_scores = [(6311132704823138710, 273), (2685045978526272070, 23), - (8921811264899370420, 45), (long(17019687244989530680), 270), - (long(9930107427299601010), 273)] - dtype = [('uid', 'u8'), ('score', 'u8')] - data = np.zeros((len(data_scores),), dtype=dtype) - data[:] = data_scores - df_crawls = DataFrame(data) - self.assertEqual(df_crawls['uid'].dtype, object) - - def test_constructor_ordereddict(self): - import random - nitems = 100 - nums = lrange(nitems) - random.shuffle(nums) - expected = ['A%d' % i for i in nums] - df = DataFrame(OrderedDict(zip(expected, [[0]] * nitems))) - self.assertEqual(expected, list(df.columns)) - - def test_constructor_dict(self): - frame = DataFrame({'col1': self.ts1, - 'col2': self.ts2}) - - tm.assert_dict_equal(self.ts1, frame['col1'], compare_keys=False) - tm.assert_dict_equal(self.ts2, frame['col2'], compare_keys=False) - - frame = DataFrame({'col1': self.ts1, - 'col2': self.ts2}, - 
columns=['col2', 'col3', 'col4']) - - self.assertEqual(len(frame), len(self.ts2)) - self.assertNotIn('col1', frame) - self.assertTrue(isnull(frame['col3']).all()) - - # Corner cases - self.assertEqual(len(DataFrame({})), 0) - - # mix dict and array, wrong size - no spec for which error should raise - # first - with tm.assertRaises(ValueError): - DataFrame({'A': {'a': 'a', 'b': 'b'}, 'B': ['a', 'b', 'c']}) - - # Length-one dict micro-optimization - frame = DataFrame({'A': {'1': 1, '2': 2}}) - self.assert_numpy_array_equal(frame.index, ['1', '2']) - - # empty dict plus index - idx = Index([0, 1, 2]) - frame = DataFrame({}, index=idx) - self.assertIs(frame.index, idx) - - # empty with index and columns - idx = Index([0, 1, 2]) - frame = DataFrame({}, index=idx, columns=idx) - self.assertIs(frame.index, idx) - self.assertIs(frame.columns, idx) - self.assertEqual(len(frame._series), 3) - - # with dict of empty list and Series - frame = DataFrame({'A': [], 'B': []}, columns=['A', 'B']) - self.assertTrue(frame.index.equals(Index([]))) - - # GH10856 - # dict with scalar values should raise error, even if columns passed - with tm.assertRaises(ValueError): - DataFrame({'a': 0.7}) - - with tm.assertRaises(ValueError): - DataFrame({'a': 0.7}, columns=['a']) - - with tm.assertRaises(ValueError): - DataFrame({'a': 0.7}, columns=['b']) - - def test_constructor_multi_index(self): - # GH 4078 - # construction error with mi and all-nan frame - tuples = [(2, 3), (3, 3), (3, 3)] - mi = MultiIndex.from_tuples(tuples) - df = DataFrame(index=mi,columns=mi) - self.assertTrue(pd.isnull(df).values.ravel().all()) - - tuples = [(3, 3), (2, 3), (3, 3)] - mi = MultiIndex.from_tuples(tuples) - df = DataFrame(index=mi,columns=mi) - self.assertTrue(pd.isnull(df).values.ravel().all()) - - def test_constructor_error_msgs(self): - msg = "Mixing dicts with non-Series may lead to ambiguous ordering." 
- # mix dict and array, wrong size - with assertRaisesRegexp(ValueError, msg): - DataFrame({'A': {'a': 'a', 'b': 'b'}, - 'B': ['a', 'b', 'c']}) - - # wrong size ndarray, GH 3105 - msg = "Shape of passed values is \(3, 4\), indices imply \(3, 3\)" - with assertRaisesRegexp(ValueError, msg): - DataFrame(np.arange(12).reshape((4, 3)), - columns=['foo', 'bar', 'baz'], - index=date_range('2000-01-01', periods=3)) - - - # higher dim raise exception - with assertRaisesRegexp(ValueError, 'Must pass 2-d input'): - DataFrame(np.zeros((3, 3, 3)), columns=['A', 'B', 'C'], index=[1]) - - # wrong size axis labels - with assertRaisesRegexp(ValueError, "Shape of passed values is \(3, 2\), indices imply \(3, 1\)"): - DataFrame(np.random.rand(2,3), columns=['A', 'B', 'C'], index=[1]) - - with assertRaisesRegexp(ValueError, "Shape of passed values is \(3, 2\), indices imply \(2, 2\)"): - DataFrame(np.random.rand(2,3), columns=['A', 'B'], index=[1, 2]) - - with assertRaisesRegexp(ValueError, 'If using all scalar values, you must pass an index'): - DataFrame({'a': False, 'b': True}) - - def test_constructor_with_embedded_frames(self): - - # embedded data frames - df1 = DataFrame({'a':[1, 2, 3], 'b':[3, 4, 5]}) - df2 = DataFrame([df1, df1+10]) - - df2.dtypes - str(df2) - - result = df2.loc[0,0] - assert_frame_equal(result,df1) - - result = df2.loc[1,0] - assert_frame_equal(result,df1+10) - - def test_insert_error_msmgs(self): - - # GH 7432 - df = DataFrame({'foo':['a', 'b', 'c'], 'bar':[1,2,3], 'baz':['d','e','f']}).set_index('foo') - s = DataFrame({'foo':['a', 'b', 'c', 'a'], 'fiz':['g','h','i','j']}).set_index('foo') - msg = 'cannot reindex from a duplicate axis' - with assertRaisesRegexp(ValueError, msg): - df['newcol'] = s - - # GH 4107, more descriptive error message - df = DataFrame(np.random.randint(0,2,(4,4)), - columns=['a', 'b', 'c', 'd']) - - msg = 'incompatible index of inserted column with frame index' - with assertRaisesRegexp(TypeError, msg): - df['gr'] = df.groupby(['b', 
'c']).count() - - def test_frame_subclassing_and_slicing(self): - # Subclass frame and ensure it returns the right class on slicing it - # In reference to PR 9632 - - class CustomSeries(Series): - @property - def _constructor(self): - return CustomSeries - - def custom_series_function(self): - return 'OK' - - class CustomDataFrame(DataFrame): - "Subclasses pandas DF, fills DF with simulation results, adds some custom plotting functions." - - def __init__(self, *args, **kw): - super(CustomDataFrame, self).__init__(*args, **kw) - - @property - def _constructor(self): - return CustomDataFrame - - _constructor_sliced = CustomSeries - - def custom_frame_function(self): - return 'OK' - - data = {'col1': range(10), - 'col2': range(10)} - cdf = CustomDataFrame(data) - - # Did we get back our own DF class? - self.assertTrue(isinstance(cdf, CustomDataFrame)) - - # Do we get back our own Series class after selecting a column? - cdf_series = cdf.col1 - self.assertTrue(isinstance(cdf_series, CustomSeries)) - self.assertEqual(cdf_series.custom_series_function(), 'OK') - - # Do we get back our own DF class after slicing row-wise? 
- cdf_rows = cdf[1:5] - self.assertTrue(isinstance(cdf_rows, CustomDataFrame)) - self.assertEqual(cdf_rows.custom_frame_function(), 'OK') - - # Make sure sliced part of multi-index frame is custom class - mcol = pd.MultiIndex.from_tuples([('A', 'A'), ('A', 'B')]) - cdf_multi = CustomDataFrame([[0, 1], [2, 3]], columns=mcol) - self.assertTrue(isinstance(cdf_multi['A'], CustomDataFrame)) - - mcol = pd.MultiIndex.from_tuples([('A', ''), ('B', '')]) - cdf_multi2 = CustomDataFrame([[0, 1], [2, 3]], columns=mcol) - self.assertTrue(isinstance(cdf_multi2['A'], CustomSeries)) - - def test_constructor_subclass_dict(self): - # Test for passing dict subclass to constructor - data = {'col1': tm.TestSubDict((x, 10.0 * x) for x in range(10)), - 'col2': tm.TestSubDict((x, 20.0 * x) for x in range(10))} - df = DataFrame(data) - refdf = DataFrame(dict((col, dict(compat.iteritems(val))) - for col, val in compat.iteritems(data))) - assert_frame_equal(refdf, df) - - data = tm.TestSubDict(compat.iteritems(data)) - df = DataFrame(data) - assert_frame_equal(refdf, df) - - # try with defaultdict - from collections import defaultdict - data = {} - self.frame['B'][:10] = np.nan - for k, v in compat.iteritems(self.frame): - dct = defaultdict(dict) - dct.update(v.to_dict()) - data[k] = dct - frame = DataFrame(data) - assert_frame_equal(self.frame.sort_index(), frame) - - def test_constructor_dict_block(self): - expected = [[4., 3., 2., 1.]] - df = DataFrame({'d': [4.], 'c': [3.], 'b': [2.], 'a': [1.]}, - columns=['d', 'c', 'b', 'a']) - assert_almost_equal(df.values, expected) - - def test_constructor_dict_cast(self): - # cast float tests - test_data = { - 'A': {'1': 1, '2': 2}, - 'B': {'1': '1', '2': '2', '3': '3'}, - } - frame = DataFrame(test_data, dtype=float) - self.assertEqual(len(frame), 3) - self.assertEqual(frame['B'].dtype, np.float64) - self.assertEqual(frame['A'].dtype, np.float64) - - frame = DataFrame(test_data) - self.assertEqual(len(frame), 3) - 
self.assertEqual(frame['B'].dtype, np.object_) - self.assertEqual(frame['A'].dtype, np.float64) - - # can't cast to float - test_data = { - 'A': dict(zip(range(20), tm.makeStringIndex(20))), - 'B': dict(zip(range(15), randn(15))) - } - frame = DataFrame(test_data, dtype=float) - self.assertEqual(len(frame), 20) - self.assertEqual(frame['A'].dtype, np.object_) - self.assertEqual(frame['B'].dtype, np.float64) - - def test_constructor_dict_dont_upcast(self): - d = {'Col1': {'Row1': 'A String', 'Row2': np.nan}} - df = DataFrame(d) - tm.assertIsInstance(df['Col1']['Row2'], float) - - dm = DataFrame([[1, 2], ['a', 'b']], index=[1, 2], columns=[1, 2]) - tm.assertIsInstance(dm[1][1], int) - - def test_constructor_dict_of_tuples(self): - # GH #1491 - data = {'a': (1, 2, 3), 'b': (4, 5, 6)} - - result = DataFrame(data) - expected = DataFrame(dict((k, list(v)) for k, v in compat.iteritems(data))) - assert_frame_equal(result, expected, check_dtype=False) - - def test_constructor_dict_multiindex(self): - check = lambda result, expected: assert_frame_equal( - result, expected, check_dtype=True, check_index_type=True, - check_column_type=True, check_names=True) - d = {('a', 'a'): {('i', 'i'): 0, ('i', 'j'): 1, ('j', 'i'): 2}, - ('b', 'a'): {('i', 'i'): 6, ('i', 'j'): 5, ('j', 'i'): 4}, - ('b', 'c'): {('i', 'i'): 7, ('i', 'j'): 8, ('j', 'i'): 9}} - _d = sorted(d.items()) - df = DataFrame(d) - expected = DataFrame( - [x[1] for x in _d], - index=MultiIndex.from_tuples([x[0] for x in _d])).T - expected.index = MultiIndex.from_tuples(expected.index) - check(df, expected) - - d['z'] = {'y': 123., ('i', 'i'): 111, ('i', 'j'): 111, ('j', 'i'): 111} - _d.insert(0, ('z', d['z'])) - expected = DataFrame( - [x[1] for x in _d], - index=Index([x[0] for x in _d], tupleize_cols=False)).T - expected.index = Index(expected.index, tupleize_cols=False) - df = DataFrame(d) - df = df.reindex(columns=expected.columns, index=expected.index) - check(df, expected) - - def 
test_constructor_dict_datetime64_index(self): - # GH 10160 - dates_as_str = ['1984-02-19', '1988-11-06', '1989-12-03', '1990-03-15'] - - def create_data(constructor): - return dict((i, {constructor(s): 2*i}) for i, s in enumerate(dates_as_str)) - - data_datetime64 = create_data(np.datetime64) - data_datetime = create_data(lambda x: datetime.strptime(x, '%Y-%m-%d')) - data_Timestamp = create_data(Timestamp) - - expected = DataFrame([{0: 0, 1: None, 2: None, 3: None}, - {0: None, 1: 2, 2: None, 3: None}, - {0: None, 1: None, 2: 4, 3: None}, - {0: None, 1: None, 2: None, 3: 6}], - index=[Timestamp(dt) for dt in dates_as_str]) - - result_datetime64 = DataFrame(data_datetime64) - result_datetime = DataFrame(data_datetime) - result_Timestamp = DataFrame(data_Timestamp) - assert_frame_equal(result_datetime64, expected) - assert_frame_equal(result_datetime, expected) - assert_frame_equal(result_Timestamp, expected) - - def test_constructor_dict_timedelta64_index(self): - # GH 10160 - td_as_int = [1, 2, 3, 4] - - def create_data(constructor): - return dict((i, {constructor(s): 2*i}) for i, s in enumerate(td_as_int)) - - data_timedelta64 = create_data(lambda x: np.timedelta64(x, 'D')) - data_timedelta = create_data(lambda x: timedelta(days=x)) - data_Timedelta = create_data(lambda x: Timedelta(x, 'D')) - - expected = DataFrame([{0: 0, 1: None, 2: None, 3: None}, - {0: None, 1: 2, 2: None, 3: None}, - {0: None, 1: None, 2: 4, 3: None}, - {0: None, 1: None, 2: None, 3: 6}], - index=[Timedelta(td, 'D') for td in td_as_int]) - - result_timedelta64 = DataFrame(data_timedelta64) - result_timedelta = DataFrame(data_timedelta) - result_Timedelta = DataFrame(data_Timedelta) - assert_frame_equal(result_timedelta64, expected) - assert_frame_equal(result_timedelta, expected) - assert_frame_equal(result_Timedelta, expected) - - def test_nested_dict_frame_constructor(self): - rng = period_range('1/1/2000', periods=5) - df = DataFrame(randn(10, 5), columns=rng) - - data = {} - for col in 
df.columns: - for row in df.index: - data.setdefault(col, {})[row] = df.get_value(row, col) - - result = DataFrame(data, columns=rng) - assert_frame_equal(result, df) - - data = {} - for col in df.columns: - for row in df.index: - data.setdefault(row, {})[col] = df.get_value(row, col) - - result = DataFrame(data, index=rng).T - assert_frame_equal(result, df) - - def _check_basic_constructor(self, empty): - "mat: 2-D matrix with shape (2, 3) as input. empty - constructor that makes sized objects" - mat = empty((2, 3), dtype=float) - # 2-D input - frame = DataFrame(mat, columns=['A', 'B', 'C'], index=[1, 2]) - - self.assertEqual(len(frame.index), 2) - self.assertEqual(len(frame.columns), 3) - - # 1-D input - frame = DataFrame(empty((3,)), columns=['A'], index=[1, 2, 3]) - self.assertEqual(len(frame.index), 3) - self.assertEqual(len(frame.columns), 1) - - - # cast type - frame = DataFrame(mat, columns=['A', 'B', 'C'], - index=[1, 2], dtype=np.int64) - self.assertEqual(frame.values.dtype, np.int64) - - # wrong size axis labels - msg = r'Shape of passed values is \(3, 2\), indices imply \(3, 1\)' - with assertRaisesRegexp(ValueError, msg): - DataFrame(mat, columns=['A', 'B', 'C'], index=[1]) - msg = r'Shape of passed values is \(3, 2\), indices imply \(2, 2\)' - with assertRaisesRegexp(ValueError, msg): - DataFrame(mat, columns=['A', 'B'], index=[1, 2]) - - # higher dims raise an exception - with assertRaisesRegexp(ValueError, 'Must pass 2-d input'): - DataFrame(empty((3, 3, 3)), columns=['A', 'B', 'C'], - index=[1]) - - # automatic labeling - frame = DataFrame(mat) - self.assert_numpy_array_equal(frame.index, lrange(2)) - self.assert_numpy_array_equal(frame.columns, lrange(3)) - - frame = DataFrame(mat, index=[1, 2]) - self.assert_numpy_array_equal(frame.columns, lrange(3)) - - frame = DataFrame(mat, columns=['A', 'B', 'C']) - self.assert_numpy_array_equal(frame.index, lrange(2)) - - # 0-length axis - frame = DataFrame(empty((0, 3))) - self.assertEqual(len(frame.index), 0) - - frame = 
DataFrame(empty((3, 0))) - self.assertEqual(len(frame.columns), 0) - - def test_constructor_ndarray(self): - self._check_basic_constructor(np.ones) - - frame = DataFrame(['foo', 'bar'], index=[0, 1], columns=['A']) - self.assertEqual(len(frame), 2) - - def test_constructor_maskedarray(self): - self._check_basic_constructor(ma.masked_all) - - # Check non-masked values - mat = ma.masked_all((2, 3), dtype=float) - mat[0, 0] = 1.0 - mat[1, 2] = 2.0 - frame = DataFrame(mat, columns=['A', 'B', 'C'], index=[1, 2]) - self.assertEqual(1.0, frame['A'][1]) - self.assertEqual(2.0, frame['C'][2]) - - # fully masked -> all values are NaN, so frame == frame is all False - mat = ma.masked_all((2, 3), dtype=float) - frame = DataFrame(mat, columns=['A', 'B', 'C'], index=[1, 2]) - self.assertTrue(np.all(~np.asarray(frame == frame))) - - def test_constructor_maskedarray_nonfloat(self): - # masked int promoted to float - mat = ma.masked_all((2, 3), dtype=int) - # 2-D input - frame = DataFrame(mat, columns=['A', 'B', 'C'], index=[1, 2]) - - self.assertEqual(len(frame.index), 2) - self.assertEqual(len(frame.columns), 3) - self.assertTrue(np.all(~np.asarray(frame == frame))) - - # cast type - frame = DataFrame(mat, columns=['A', 'B', 'C'], - index=[1, 2], dtype=np.float64) - self.assertEqual(frame.values.dtype, np.float64) - - # Check non-masked values - mat2 = ma.copy(mat) - mat2[0, 0] = 1 - mat2[1, 2] = 2 - frame = DataFrame(mat2, columns=['A', 'B', 'C'], index=[1, 2]) - self.assertEqual(1, frame['A'][1]) - self.assertEqual(2, frame['C'][2]) - - # masked np.datetime64 stays (use lib.NaT as null) - mat = ma.masked_all((2, 3), dtype='M8[ns]') - # 2-D input - frame = DataFrame(mat, columns=['A', 'B', 'C'], index=[1, 2]) - - self.assertEqual(len(frame.index), 2) - self.assertEqual(len(frame.columns), 3) - self.assertTrue(isnull(frame).values.all()) - - # cast type - frame = DataFrame(mat, columns=['A', 'B', 'C'], - index=[1, 2], dtype=np.int64) - self.assertEqual(frame.values.dtype, 
np.int64) - - # Check non-masked values - mat2 = ma.copy(mat) - mat2[0, 0] = 1 - mat2[1, 2] = 2 - frame = DataFrame(mat2, columns=['A', 'B', 'C'], index=[1, 2]) - self.assertEqual(1, frame['A'].view('i8')[1]) - self.assertEqual(2, frame['C'].view('i8')[2]) - - # masked bool promoted to object - mat = ma.masked_all((2, 3), dtype=bool) - # 2-D input - frame = DataFrame(mat, columns=['A', 'B', 'C'], index=[1, 2]) - - self.assertEqual(len(frame.index), 2) - self.assertEqual(len(frame.columns), 3) - self.assertTrue(np.all(~np.asarray(frame == frame))) - - # cast type - frame = DataFrame(mat, columns=['A', 'B', 'C'], - index=[1, 2], dtype=object) - self.assertEqual(frame.values.dtype, object) - - # Check non-masked values - mat2 = ma.copy(mat) - mat2[0, 0] = True - mat2[1, 2] = False - frame = DataFrame(mat2, columns=['A', 'B', 'C'], index=[1, 2]) - self.assertEqual(True, frame['A'][1]) - self.assertEqual(False, frame['C'][2]) - - def test_constructor_mrecarray(self): - # Ensure mrecarray produces frame identical to dict of masked arrays - # from GH3479 - - assert_fr_equal = functools.partial(assert_frame_equal, - check_index_type=True, - check_column_type=True, - check_frame_type=True) - arrays = [ - ('float', np.array([1.5, 2.0])), - ('int', np.array([1, 2])), - ('str', np.array(['abc', 'def'])), - ] - for name, arr in arrays[:]: - arrays.append(('masked1_' + name, - np.ma.masked_array(arr, mask=[False, True]))) - arrays.append(('masked_all', np.ma.masked_all((2,)))) - arrays.append(('masked_none', - np.ma.masked_array([1.0, 2.5], mask=False))) - - # call assert_frame_equal for all selections of 3 arrays - for comb in itertools.combinations(arrays, 3): - names, data = zip(*comb) - mrecs = mrecords.fromarrays(data, names=names) - - # fill the comb - comb = dict([ (k, v.filled()) if hasattr(v,'filled') else (k, v) for k, v in comb ]) - - expected = DataFrame(comb,columns=names) - result = DataFrame(mrecs) - assert_fr_equal(result,expected) - - # specify columns - 
expected = DataFrame(comb,columns=names[::-1]) - result = DataFrame(mrecs, columns=names[::-1]) - assert_fr_equal(result,expected) - - # specify index - expected = DataFrame(comb,columns=names,index=[1,2]) - result = DataFrame(mrecs, index=[1,2]) - assert_fr_equal(result,expected) - - def test_constructor_corner(self): - df = DataFrame(index=[]) - self.assertEqual(df.values.shape, (0, 0)) - - # empty but with specified dtype - df = DataFrame(index=lrange(10), columns=['a', 'b'], dtype=object) - self.assertEqual(df.values.dtype, np.object_) - - # does not error but ends up float - df = DataFrame(index=lrange(10), columns=['a', 'b'], dtype=int) - self.assertEqual(df.values.dtype, np.object_) - - # #1783 empty dtype object - df = DataFrame({}, columns=['foo', 'bar']) - self.assertEqual(df.values.dtype, np.object_) - - df = DataFrame({'b': 1}, index=lrange(10), columns=list('abc'), - dtype=int) - self.assertEqual(df.values.dtype, np.object_) - - - def test_constructor_scalar_inference(self): - data = {'int': 1, 'bool': True, - 'float': 3., 'complex': 4j, 'object': 'foo'} - df = DataFrame(data, index=np.arange(10)) - - self.assertEqual(df['int'].dtype, np.int64) - self.assertEqual(df['bool'].dtype, np.bool_) - self.assertEqual(df['float'].dtype, np.float64) - self.assertEqual(df['complex'].dtype, np.complex128) - self.assertEqual(df['object'].dtype, np.object_) - - def test_constructor_arrays_and_scalars(self): - df = DataFrame({'a': randn(10), 'b': True}) - exp = DataFrame({'a': df['a'].values, 'b': [True] * 10}) - - assert_frame_equal(df, exp) - with tm.assertRaisesRegexp(ValueError, 'must pass an index'): - DataFrame({'a': False, 'b': True}) - - def test_constructor_DataFrame(self): - df = DataFrame(self.frame) - assert_frame_equal(df, self.frame) - - df_casted = DataFrame(self.frame, dtype=np.int64) - self.assertEqual(df_casted.values.dtype, np.int64) - - def test_constructor_more(self): - # used to be in test_matrix.py - arr = randn(10) - dm = DataFrame(arr, 
columns=['A'], index=np.arange(10)) - self.assertEqual(dm.values.ndim, 2) - - arr = randn(0) - dm = DataFrame(arr) - self.assertEqual(dm.values.ndim, 2) - self.assertEqual(dm.values.ndim, 2) - - # no data specified - dm = DataFrame(columns=['A', 'B'], index=np.arange(10)) - self.assertEqual(dm.values.shape, (10, 2)) - - dm = DataFrame(columns=['A', 'B']) - self.assertEqual(dm.values.shape, (0, 2)) - - dm = DataFrame(index=np.arange(10)) - self.assertEqual(dm.values.shape, (10, 0)) - - # corner, silly - # TODO: Fix this Exception to be better... - with assertRaisesRegexp(PandasError, 'constructor not properly called'): - DataFrame((1, 2, 3)) - - # can't cast - mat = np.array(['foo', 'bar'], dtype=object).reshape(2, 1) - with assertRaisesRegexp(ValueError, 'cast'): - DataFrame(mat, index=[0, 1], columns=[0], dtype=float) - - dm = DataFrame(DataFrame(self.frame._series)) - assert_frame_equal(dm, self.frame) - - # int cast - dm = DataFrame({'A': np.ones(10, dtype=int), - 'B': np.ones(10, dtype=np.float64)}, - index=np.arange(10)) - - self.assertEqual(len(dm.columns), 2) - self.assertEqual(dm.values.dtype, np.float64) - - def test_constructor_empty_list(self): - df = DataFrame([], index=[]) - expected = DataFrame(index=[]) - assert_frame_equal(df, expected) - - # GH 9939 - df = DataFrame([], columns=['A', 'B']) - expected = DataFrame({}, columns=['A', 'B']) - assert_frame_equal(df, expected) - - # Empty generator: list(empty_gen()) == [] - def empty_gen(): - return - yield - - df = DataFrame(empty_gen(), columns=['A', 'B']) - assert_frame_equal(df, expected) - - def test_constructor_list_of_lists(self): - # GH #484 - l = [[1, 'a'], [2, 'b']] - df = DataFrame(data=l, columns=["num", "str"]) - self.assertTrue(com.is_integer_dtype(df['num'])) - self.assertEqual(df['str'].dtype, np.object_) - - # GH 4851 - # list of 0-dim ndarrays - expected = DataFrame({ 0: range(10) }) - data = [np.array(x) for x in range(10)] - result = DataFrame(data) - assert_frame_equal(result, 
expected) - - def test_constructor_sequence_like(self): - # GH 3783 - # collections.Sequence like - import collections - - class DummyContainer(collections.Sequence): - def __init__(self, lst): - self._lst = lst - def __getitem__(self, n): - return self._lst.__getitem__(n) - def __len__(self): - return self._lst.__len__() - - l = [DummyContainer([1, 'a']), DummyContainer([2, 'b'])] - columns = ["num", "str"] - result = DataFrame(l, columns=columns) - expected = DataFrame([[1,'a'],[2,'b']],columns=columns) - assert_frame_equal(result, expected, check_dtype=False) - - # GH 4297 - # support Array - import array - result = DataFrame.from_items([('A', array.array('i', range(10)))]) - expected = DataFrame({ 'A' : list(range(10)) }) - assert_frame_equal(result, expected, check_dtype=False) - - expected = DataFrame([ list(range(10)), list(range(10)) ]) - result = DataFrame([ array.array('i', range(10)), array.array('i',range(10)) ]) - assert_frame_equal(result, expected, check_dtype=False) - - def test_constructor_iterator(self): - - expected = DataFrame([ list(range(10)), list(range(10)) ]) - result = DataFrame([ range(10), range(10) ]) - assert_frame_equal(result, expected) - - def test_constructor_generator(self): - # related GH 2305 - - gen1 = (i for i in range(10)) - gen2 = (i for i in range(10)) - - expected = DataFrame([ list(range(10)), list(range(10)) ]) - result = DataFrame([ gen1, gen2 ]) - assert_frame_equal(result, expected) - - gen = ([ i, 'a'] for i in range(10)) - result = DataFrame(gen) - expected = DataFrame({ 0 : range(10), 1 : 'a' }) - assert_frame_equal(result, expected, check_dtype=False) - - def test_constructor_list_of_dicts(self): - data = [OrderedDict([['a', 1.5], ['b', 3], ['c', 4], ['d', 6]]), - OrderedDict([['a', 1.5], ['b', 3], ['d', 6]]), - OrderedDict([['a', 1.5], ['d', 6]]), - OrderedDict(), - OrderedDict([['a', 1.5], ['b', 3], ['c', 4]]), - OrderedDict([['b', 3], ['c', 4], ['d', 6]])] - - result = DataFrame(data) - expected = 
DataFrame.from_dict(dict(zip(range(len(data)), data)), - orient='index') - assert_frame_equal(result, expected.reindex(result.index)) - - result = DataFrame([{}]) - expected = DataFrame(index=[0]) - assert_frame_equal(result, expected) - - def test_constructor_list_of_series(self): - data = [OrderedDict([['a', 1.5], ['b', 3.0], ['c', 4.0]]), - OrderedDict([['a', 1.5], ['b', 3.0], ['c', 6.0]])] - sdict = OrderedDict(zip(['x', 'y'], data)) - idx = Index(['a', 'b', 'c']) - - # all named - data2 = [Series([1.5, 3, 4], idx, dtype='O', name='x'), - Series([1.5, 3, 6], idx, name='y')] - result = DataFrame(data2) - expected = DataFrame.from_dict(sdict, orient='index') - assert_frame_equal(result, expected) - - # some unnamed - data2 = [Series([1.5, 3, 4], idx, dtype='O', name='x'), - Series([1.5, 3, 6], idx)] - result = DataFrame(data2) - - sdict = OrderedDict(zip(['x', 'Unnamed 0'], data)) - expected = DataFrame.from_dict(sdict, orient='index') - assert_frame_equal(result.sort_index(), expected) - - # none named - data = [OrderedDict([['a', 1.5], ['b', 3], ['c', 4], ['d', 6]]), - OrderedDict([['a', 1.5], ['b', 3], ['d', 6]]), - OrderedDict([['a', 1.5], ['d', 6]]), - OrderedDict(), - OrderedDict([['a', 1.5], ['b', 3], ['c', 4]]), - OrderedDict([['b', 3], ['c', 4], ['d', 6]])] - data = [Series(d) for d in data] - - result = DataFrame(data) - sdict = OrderedDict(zip(range(len(data)), data)) - expected = DataFrame.from_dict(sdict, orient='index') - assert_frame_equal(result, expected.reindex(result.index)) - - result2 = DataFrame(data, index=np.arange(6)) - assert_frame_equal(result, result2) - - result = DataFrame([Series({})]) - expected = DataFrame(index=[0]) - assert_frame_equal(result, expected) - - data = [OrderedDict([['a', 1.5], ['b', 3.0], ['c', 4.0]]), - OrderedDict([['a', 1.5], ['b', 3.0], ['c', 6.0]])] - sdict = OrderedDict(zip(range(len(data)), data)) - - idx = Index(['a', 'b', 'c']) - data2 = [Series([1.5, 3, 4], idx, dtype='O'), - Series([1.5, 3, 6], idx)] - 
result = DataFrame(data2) - expected = DataFrame.from_dict(sdict, orient='index') - assert_frame_equal(result, expected) - - def test_constructor_list_of_derived_dicts(self): - class CustomDict(dict): - pass - d = {'a': 1.5, 'b': 3} - - data_custom = [CustomDict(d)] - data = [d] - - result_custom = DataFrame(data_custom) - result = DataFrame(data) - assert_frame_equal(result, result_custom) - - def test_constructor_ragged(self): - data = {'A': randn(10), - 'B': randn(8)} - with assertRaisesRegexp(ValueError, 'arrays must all be same length'): - DataFrame(data) - - def test_constructor_scalar(self): - idx = Index(lrange(3)) - df = DataFrame({"a": 0}, index=idx) - expected = DataFrame({"a": [0, 0, 0]}, index=idx) - assert_frame_equal(df, expected, check_dtype=False) - - def test_constructor_Series_copy_bug(self): - df = DataFrame(self.frame['A'], index=self.frame.index, columns=['A']) - df.copy() - - def test_constructor_mixed_dict_and_Series(self): - data = {} - data['A'] = {'foo': 1, 'bar': 2, 'baz': 3} - data['B'] = Series([4, 3, 2, 1], index=['bar', 'qux', 'baz', 'foo']) - - result = DataFrame(data) - self.assertTrue(result.index.is_monotonic) - - # ordering ambiguous, raise exception - with assertRaisesRegexp(ValueError, 'ambiguous ordering'): - DataFrame({'A': ['a', 'b'], 'B': {'a': 'a', 'b': 'b'}}) - - # this is OK though - result = DataFrame({'A': ['a', 'b'], - 'B': Series(['a', 'b'], index=['a', 'b'])}) - expected = DataFrame({'A': ['a', 'b'], 'B': ['a', 'b']}, - index=['a', 'b']) - assert_frame_equal(result, expected) - - def test_constructor_tuples(self): - result = DataFrame({'A': [(1, 2), (3, 4)]}) - expected = DataFrame({'A': Series([(1, 2), (3, 4)])}) - assert_frame_equal(result, expected) - - def test_constructor_namedtuples(self): - # GH11181 - from collections import namedtuple - named_tuple = namedtuple("Pandas", list('ab')) - tuples = [named_tuple(1, 3), named_tuple(2, 4)] - expected = DataFrame({'a': [1, 2], 'b': [3, 4]}) - result = 
DataFrame(tuples) - assert_frame_equal(result, expected) - - # with columns - expected = DataFrame({'y': [1, 2], 'z': [3, 4]}) - result = DataFrame(tuples, columns=['y', 'z']) - assert_frame_equal(result, expected) - - def test_constructor_orient(self): - data_dict = self.mixed_frame.T._series - recons = DataFrame.from_dict(data_dict, orient='index') - expected = self.mixed_frame.sort_index() - assert_frame_equal(recons, expected) - - # dict of sequence - a = {'hi': [32, 3, 3], - 'there': [3, 5, 3]} - rs = DataFrame.from_dict(a, orient='index') - xp = DataFrame.from_dict(a).T.reindex(list(a.keys())) - assert_frame_equal(rs, xp) - - def test_constructor_Series_named(self): - a = Series([1, 2, 3], index=['a', 'b', 'c'], name='x') - df = DataFrame(a) - self.assertEqual(df.columns[0], 'x') - self.assertTrue(df.index.equals(a.index)) - - # ndarray like - arr = np.random.randn(10) - s = Series(arr,name='x') - df = DataFrame(s) - expected = DataFrame(dict(x = s)) - assert_frame_equal(df,expected) - - s = Series(arr,index=range(3,13)) - df = DataFrame(s) - expected = DataFrame({ 0 : s }) - assert_frame_equal(df,expected) - - self.assertRaises(ValueError, DataFrame, s, columns=[1,2]) - - # #2234 - a = Series([], name='x') - df = DataFrame(a) - self.assertEqual(df.columns[0], 'x') - - # series with name and w/o - s1 = Series(arr,name='x') - df = DataFrame([s1, arr]).T - expected = DataFrame({ 'x' : s1, 'Unnamed 0' : arr },columns=['x','Unnamed 0']) - assert_frame_equal(df,expected) - - # this is a bit non-intuitive here; the series collapse down to arrays - df = DataFrame([arr, s1]).T - expected = DataFrame({ 1 : s1, 0 : arr },columns=[0,1]) - assert_frame_equal(df,expected) - - def test_constructor_Series_differently_indexed(self): - # name - s1 = Series([1, 2, 3], index=['a', 'b', 'c'], name='x') - - # no name - s2 = Series([1, 2, 3], index=['a', 'b', 'c']) - - other_index = Index(['a', 'b']) - - df1 = DataFrame(s1, index=other_index) - exp1 = 
DataFrame(s1.reindex(other_index)) - self.assertEqual(df1.columns[0], 'x') - assert_frame_equal(df1, exp1) - - df2 = DataFrame(s2, index=other_index) - exp2 = DataFrame(s2.reindex(other_index)) - self.assertEqual(df2.columns[0], 0) - self.assertTrue(df2.index.equals(other_index)) - assert_frame_equal(df2, exp2) - - def test_constructor_manager_resize(self): - index = list(self.frame.index[:5]) - columns = list(self.frame.columns[:3]) - - result = DataFrame(self.frame._data, index=index, - columns=columns) - self.assert_numpy_array_equal(result.index, index) - self.assert_numpy_array_equal(result.columns, columns) - - def test_constructor_from_items(self): - items = [(c, self.frame[c]) for c in self.frame.columns] - recons = DataFrame.from_items(items) - assert_frame_equal(recons, self.frame) - - # pass some columns - recons = DataFrame.from_items(items, columns=['C', 'B', 'A']) - assert_frame_equal(recons, self.frame.ix[:, ['C', 'B', 'A']]) - - # orient='index' - - row_items = [(idx, self.mixed_frame.xs(idx)) - for idx in self.mixed_frame.index] - - recons = DataFrame.from_items(row_items, - columns=self.mixed_frame.columns, - orient='index') - assert_frame_equal(recons, self.mixed_frame) - self.assertEqual(recons['A'].dtype, np.float64) - - with tm.assertRaisesRegexp(TypeError, - "Must pass columns with orient='index'"): - DataFrame.from_items(row_items, orient='index') - - # orient='index', but thar be tuples - arr = lib.list_to_object_array( - [('bar', 'baz')] * len(self.mixed_frame)) - self.mixed_frame['foo'] = arr - row_items = [(idx, list(self.mixed_frame.xs(idx))) - for idx in self.mixed_frame.index] - recons = DataFrame.from_items(row_items, - columns=self.mixed_frame.columns, - orient='index') - assert_frame_equal(recons, self.mixed_frame) - tm.assertIsInstance(recons['foo'][0], tuple) - - rs = DataFrame.from_items([('A', [1, 2, 3]), ('B', [4, 5, 6])], - orient='index', columns=['one', 'two', 'three']) - xp = DataFrame([[1, 2, 3], [4, 5, 6]], index=['A', 
'B'], - columns=['one', 'two', 'three']) - assert_frame_equal(rs, xp) - - def test_constructor_mix_series_nonseries(self): - df = DataFrame({'A': self.frame['A'], - 'B': list(self.frame['B'])}, columns=['A', 'B']) - assert_frame_equal(df, self.frame.ix[:, ['A', 'B']]) - - with tm.assertRaisesRegexp(ValueError, 'does not match index length'): - DataFrame({'A': self.frame['A'], 'B': list(self.frame['B'])[:-2]}) - - def test_constructor_miscast_na_int_dtype(self): - df = DataFrame([[np.nan, 1], [1, 0]], dtype=np.int64) - expected = DataFrame([[np.nan, 1], [1, 0]]) - assert_frame_equal(df, expected) - - def test_constructor_iterator_failure(self): - with assertRaisesRegexp(TypeError, 'iterator'): - df = DataFrame(iter([1, 2, 3])) - - def test_constructor_column_duplicates(self): - # it works! #2079 - df = DataFrame([[8, 5]], columns=['a', 'a']) - edf = DataFrame([[8, 5]]) - edf.columns = ['a', 'a'] - - assert_frame_equal(df, edf) - - idf = DataFrame.from_items( - [('a', [8]), ('a', [5])], columns=['a', 'a']) - assert_frame_equal(idf, edf) - - self.assertRaises(ValueError, DataFrame.from_items, - [('a', [8]), ('a', [5]), ('b', [6])], - columns=['b', 'a', 'a']) - - def test_constructor_empty_with_string_dtype(self): - # GH 9428 - expected = DataFrame(index=[0, 1], columns=[0, 1], dtype=object) - - df = DataFrame(index=[0, 1], columns=[0, 1], dtype=str) - assert_frame_equal(df, expected) - df = DataFrame(index=[0, 1], columns=[0, 1], dtype=np.str_) - assert_frame_equal(df, expected) - df = DataFrame(index=[0, 1], columns=[0, 1], dtype=np.unicode_) - assert_frame_equal(df, expected) - df = DataFrame(index=[0, 1], columns=[0, 1], dtype='U5') - assert_frame_equal(df, expected) - - - def test_column_dups_operations(self): - - def check(result, expected=None): - if expected is not None: - assert_frame_equal(result,expected) - result.dtypes - str(result) - - # assignment - # GH 3687 - arr = np.random.randn(3, 2) - idx = lrange(2) - df = DataFrame(arr, columns=['A', 'A']) - 
df.columns = idx - expected = DataFrame(arr,columns=idx) - check(df,expected) - - idx = date_range('20130101',periods=4,freq='Q-NOV') - df = DataFrame([[1,1,1,5],[1,1,2,5],[2,1,3,5]],columns=['a','a','a','a']) - df.columns = idx - expected = DataFrame([[1,1,1,5],[1,1,2,5],[2,1,3,5]],columns=idx) - check(df,expected) - - # insert - df = DataFrame([[1,1,1,5],[1,1,2,5],[2,1,3,5]],columns=['foo','bar','foo','hello']) - df['string'] = 'bah' - expected = DataFrame([[1,1,1,5,'bah'],[1,1,2,5,'bah'],[2,1,3,5,'bah']],columns=['foo','bar','foo','hello','string']) - check(df,expected) - with assertRaisesRegexp(ValueError, 'Length of value'): - df.insert(0, 'AnotherColumn', range(len(df.index) - 1)) - - # insert same dtype - df['foo2'] = 3 - expected = DataFrame([[1,1,1,5,'bah',3],[1,1,2,5,'bah',3],[2,1,3,5,'bah',3]],columns=['foo','bar','foo','hello','string','foo2']) - check(df,expected) - - # set (non-dup) - df['foo2'] = 4 - expected = DataFrame([[1,1,1,5,'bah',4],[1,1,2,5,'bah',4],[2,1,3,5,'bah',4]],columns=['foo','bar','foo','hello','string','foo2']) - check(df,expected) - df['foo2'] = 3 - - # delete (non dup) - del df['bar'] - expected = DataFrame([[1,1,5,'bah',3],[1,2,5,'bah',3],[2,3,5,'bah',3]],columns=['foo','foo','hello','string','foo2']) - check(df,expected) - - # try to delete again (its not consolidated) - del df['hello'] - expected = DataFrame([[1,1,'bah',3],[1,2,'bah',3],[2,3,'bah',3]],columns=['foo','foo','string','foo2']) - check(df,expected) - - # consolidate - df = df.consolidate() - expected = DataFrame([[1,1,'bah',3],[1,2,'bah',3],[2,3,'bah',3]],columns=['foo','foo','string','foo2']) - check(df,expected) - - # insert - df.insert(2,'new_col',5.) - expected = DataFrame([[1,1,5.,'bah',3],[1,2,5.,'bah',3],[2,3,5.,'bah',3]],columns=['foo','foo','new_col','string','foo2']) - check(df,expected) - - # insert a dup - assertRaisesRegexp(ValueError, 'cannot insert', df.insert, 2, 'new_col', 4.) 
- df.insert(2,'new_col',4.,allow_duplicates=True) - expected = DataFrame([[1,1,4.,5.,'bah',3],[1,2,4.,5.,'bah',3],[2,3,4.,5.,'bah',3]],columns=['foo','foo','new_col','new_col','string','foo2']) - check(df,expected) - - # delete (dup) - del df['foo'] - expected = DataFrame([[4.,5.,'bah',3],[4.,5.,'bah',3],[4.,5.,'bah',3]],columns=['new_col','new_col','string','foo2']) - assert_frame_equal(df,expected) - - # dup across dtypes - df = DataFrame([[1,1,1.,5],[1,1,2.,5],[2,1,3.,5]],columns=['foo','bar','foo','hello']) - check(df) - - df['foo2'] = 7. - expected = DataFrame([[1,1,1.,5,7.],[1,1,2.,5,7.],[2,1,3.,5,7.]],columns=['foo','bar','foo','hello','foo2']) - check(df,expected) - - result = df['foo'] - expected = DataFrame([[1,1.],[1,2.],[2,3.]],columns=['foo','foo']) - check(result,expected) - - # multiple replacements - df['foo'] = 'string' - expected = DataFrame([['string',1,'string',5,7.],['string',1,'string',5,7.],['string',1,'string',5,7.]],columns=['foo','bar','foo','hello','foo2']) - check(df,expected) - - del df['foo'] - expected = DataFrame([[1,5,7.],[1,5,7.],[1,5,7.]],columns=['bar','hello','foo2']) - check(df,expected) - - # values - df = DataFrame([[1,2.5],[3,4.5]], index=[1,2], columns=['x','x']) - result = df.values - expected = np.array([[1,2.5],[3,4.5]]) - self.assertTrue((result == expected).all().all()) - - # rename, GH 4403 - df4 = DataFrame({'TClose': [22.02], - 'RT': [0.0454], - 'TExg': [0.0422]}, - index=MultiIndex.from_tuples([(600809, 20130331)], names=['STK_ID', 'RPT_Date'])) - - df5 = DataFrame({'STK_ID': [600809] * 3, - 'RPT_Date': [20120930,20121231,20130331], - 'STK_Name': [u('饡驦'), u('饡驦'), u('饡驦')], - 'TClose': [38.05, 41.66, 30.01]}, - index=MultiIndex.from_tuples([(600809, 20120930), (600809, 20121231),(600809,20130331)], names=['STK_ID', 'RPT_Date'])) - - k = pd.merge(df4,df5,how='inner',left_index=True,right_index=True) - result = k.rename(columns={'TClose_x':'TClose', 'TClose_y':'QT_Close'}) - str(result) - result.dtypes - - expected 
= DataFrame([[0.0454, 22.02, 0.0422, 20130331, 600809, u('饡驦'), 30.01 ]], - columns=['RT','TClose','TExg','RPT_Date','STK_ID','STK_Name','QT_Close']).set_index(['STK_ID','RPT_Date'],drop=False) - assert_frame_equal(result,expected) - - # reindex is invalid! - df = DataFrame([[1,5,7.],[1,5,7.],[1,5,7.]],columns=['bar','a','a']) - self.assertRaises(ValueError, df.reindex, columns=['bar']) - self.assertRaises(ValueError, df.reindex, columns=['bar','foo']) - - # drop - df = DataFrame([[1,5,7.],[1,5,7.],[1,5,7.]],columns=['bar','a','a']) - result = df.drop(['a'],axis=1) - expected = DataFrame([[1],[1],[1]],columns=['bar']) - check(result,expected) - result = df.drop('a',axis=1) - check(result,expected) - - # describe - df = DataFrame([[1,1,1],[2,2,2],[3,3,3]],columns=['bar','a','a'],dtype='float64') - result = df.describe() - s = df.iloc[:,0].describe() - expected = pd.concat([ s, s, s],keys=df.columns,axis=1) - check(result,expected) - - # check column dups with index equal and not equal to df's index - df = DataFrame(np.random.randn(5, 3), index=['a', 'b', 'c', 'd', 'e'], - columns=['A', 'B', 'A']) - for index in [df.index, pd.Index(list('edcba'))]: - this_df = df.copy() - expected_ser = pd.Series(index.values, index=this_df.index) - expected_df = DataFrame.from_items([('A', expected_ser), - ('B', this_df['B']), - ('A', expected_ser)]) - this_df['A'] = index - check(this_df, expected_df) - - # operations - for op in ['__add__','__mul__','__sub__','__truediv__']: - df = DataFrame(dict(A = np.arange(10), B = np.random.rand(10))) - expected = getattr(df,op)(df) - expected.columns = ['A','A'] - df.columns = ['A','A'] - result = getattr(df,op)(df) - check(result,expected) - - # multiple assignments that change dtypes - # the location indexer is a slice - # GH 6120 - df = DataFrame(np.random.randn(5,2), columns=['that', 'that']) - expected = DataFrame(1.0, index=range(5), columns=['that', 'that']) - - df['that'] = 1.0 - check(df, expected) - - df = 
DataFrame(np.random.rand(5,2), columns=['that', 'that']) - expected = DataFrame(1, index=range(5), columns=['that', 'that']) - - df['that'] = 1 - check(df, expected) - - def test_column_dups2(self): - - # drop buggy GH 6240 - df = DataFrame({'A' : np.random.randn(5), - 'B' : np.random.randn(5), - 'C' : np.random.randn(5), - 'D' : ['a','b','c','d','e'] }) - - expected = df.take([0,1,1], axis=1) - df2 = df.take([2,0,1,2,1], axis=1) - result = df2.drop('C',axis=1) - assert_frame_equal(result, expected) - - # dropna - df = DataFrame({'A' : np.random.randn(5), - 'B' : np.random.randn(5), - 'C' : np.random.randn(5), - 'D' : ['a','b','c','d','e'] }) - df.iloc[2,[0,1,2]] = np.nan - df.iloc[0,0] = np.nan - df.iloc[1,1] = np.nan - df.iloc[:,3] = np.nan - expected = df.dropna(subset=['A','B','C'],how='all') - expected.columns = ['A','A','B','C'] - - df.columns = ['A','A','B','C'] - - result = df.dropna(subset=['A','C'],how='all') - assert_frame_equal(result, expected) - - def test_column_dups_indexing(self): - def check(result, expected=None): - if expected is not None: - assert_frame_equal(result,expected) - result.dtypes - str(result) - - # boolean indexing - # GH 4879 - dups = ['A', 'A', 'C', 'D'] - df = DataFrame(np.arange(12).reshape(3,4), columns=['A', 'B', 'C', 'D'],dtype='float64') - expected = df[df.C > 6] - expected.columns = dups - df = DataFrame(np.arange(12).reshape(3,4), columns=dups,dtype='float64') - result = df[df.C > 6] - check(result,expected) - - # where - df = DataFrame(np.arange(12).reshape(3,4), columns=['A', 'B', 'C', 'D'],dtype='float64') - expected = df[df > 6] - expected.columns = dups - df = DataFrame(np.arange(12).reshape(3,4), columns=dups,dtype='float64') - result = df[df > 6] - check(result,expected) - - # boolean with the duplicate raises - df = DataFrame(np.arange(12).reshape(3,4), columns=dups,dtype='float64') - self.assertRaises(ValueError, lambda : df[df.A > 6]) - - # dup aligning operations should work - # GH 5185 - df1 = DataFrame([1, 
2, 3, 4, 5], index=[1, 2, 1, 2, 3]) - df2 = DataFrame([1, 2, 3], index=[1, 2, 3]) - expected = DataFrame([0,2,0,2,2],index=[1,1,2,2,3]) - result = df1.sub(df2) - assert_frame_equal(result,expected) - - # equality - df1 = DataFrame([[1,2],[2,np.nan],[3,4],[4,4]],columns=['A','B']) - df2 = DataFrame([[0,1],[2,4],[2,np.nan],[4,5]],columns=['A','A']) - - # not-comparing like-labelled - self.assertRaises(ValueError, lambda : df1 == df2) - - df1r = df1.reindex_like(df2) - result = df1r == df2 - expected = DataFrame([[False,True],[True,False],[False,False],[True,False]],columns=['A','A']) - assert_frame_equal(result,expected) - - # mixed column selection - # GH 5639 - dfbool = DataFrame({'one' : Series([True, True, False], index=['a', 'b', 'c']), - 'two' : Series([False, False, True, False], index=['a', 'b', 'c', 'd']), - 'three': Series([False, True, True, True], index=['a', 'b', 'c', 'd'])}) - expected = pd.concat([dfbool['one'],dfbool['three'],dfbool['one']],axis=1) - result = dfbool[['one', 'three', 'one']] - check(result,expected) - - # multi-axis dups - # GH 6121 - df = DataFrame(np.arange(25.).reshape(5,5), - index=['a', 'b', 'c', 'd', 'e'], - columns=['A', 'B', 'C', 'D', 'E']) - z = df[['A', 'C', 'A']].copy() - expected = z.ix[['a', 'c', 'a']] - - df = DataFrame(np.arange(25.).reshape(5,5), - index=['a', 'b', 'c', 'd', 'e'], - columns=['A', 'B', 'C', 'D', 'E']) - z = df[['A', 'C', 'A']] - result = z.ix[['a', 'c', 'a']] - check(result,expected) - - - def test_column_dups_indexing2(self): - - # GH 8363 - # datetime ops with a non-unique index - df = DataFrame({'A' : np.arange(5,dtype='int64'), - 'B' : np.arange(1,6,dtype='int64')}, - index=[2,2,3,3,4]) - result = df.B-df.A - expected = Series(1,index=[2,2,3,3,4]) - assert_series_equal(result,expected) - - df = DataFrame({'A' : date_range('20130101',periods=5), 'B' : date_range('20130101 09:00:00', periods=5)},index=[2,2,3,3,4]) - result = df.B-df.A - expected = Series(Timedelta('9 hours'),index=[2,2,3,3,4]) - 
assert_series_equal(result,expected) - - def test_insert_benchmark(self): - # from the vb_suite/frame_methods/frame_insert_columns - N = 10 - K = 5 - df = DataFrame(index=lrange(N)) - new_col = np.random.randn(N) - for i in range(K): - df[i] = new_col - expected = DataFrame(np.repeat(new_col,K).reshape(N,K),index=lrange(N)) - assert_frame_equal(df,expected) - - def test_constructor_single_value(self): - - # expecting single value upcasting here - df = DataFrame(0., index=[1, 2, 3], columns=['a', 'b', 'c']) - assert_frame_equal(df, DataFrame(np.zeros(df.shape).astype('float64'), df.index, - df.columns)) - - df = DataFrame(0, index=[1, 2, 3], columns=['a', 'b', 'c']) - assert_frame_equal(df, DataFrame(np.zeros(df.shape).astype('int64'), df.index, - df.columns)) - - - df = DataFrame('a', index=[1, 2], columns=['a', 'c']) - assert_frame_equal(df, DataFrame(np.array([['a', 'a'], - ['a', 'a']], - dtype=object), - index=[1, 2], - columns=['a', 'c'])) - - self.assertRaises(com.PandasError, DataFrame, 'a', [1, 2]) - self.assertRaises(com.PandasError, DataFrame, 'a', columns=['a', 'c']) - with tm.assertRaisesRegexp(TypeError, 'incompatible data and dtype'): - DataFrame('a', [1, 2], ['a', 'c'], float) - - def test_constructor_with_datetimes(self): - intname = np.dtype(np.int_).name - floatname = np.dtype(np.float_).name - datetime64name = np.dtype('M8[ns]').name - objectname = np.dtype(np.object_).name - - # single item - df = DataFrame({'A' : 1, 'B' : 'foo', 'C' : 'bar', 'D' : Timestamp("20010101"), 'E' : datetime(2001,1,2,0,0) }, - index=np.arange(10)) - result = df.get_dtype_counts() - expected = Series({'int64': 1, datetime64name: 2, objectname : 2}) - result.sort_index() - expected.sort_index() - assert_series_equal(result, expected) - - # check with ndarray construction ndim==0 (e.g. 
we are passing a ndim 0 ndarray with a dtype specified) - df = DataFrame({'a': 1., 'b': 2, 'c': 'foo', floatname : np.array(1.,dtype=floatname), - intname : np.array(1,dtype=intname)}, index=np.arange(10)) - result = df.get_dtype_counts() - expected = { objectname : 1 } - if intname == 'int64': - expected['int64'] = 2 - else: - expected['int64'] = 1 - expected[intname] = 1 - if floatname == 'float64': - expected['float64'] = 2 - else: - expected['float64'] = 1 - expected[floatname] = 1 - - result.sort_index() - expected = Series(expected) - expected.sort_index() - assert_series_equal(result, expected) - - # check with ndarray construction ndim>0 - df = DataFrame({'a': 1., 'b': 2, 'c': 'foo', floatname : np.array([1.]*10,dtype=floatname), - intname : np.array([1]*10,dtype=intname)}, index=np.arange(10)) - result = df.get_dtype_counts() - result.sort_index() - assert_series_equal(result, expected) - - # GH 2809 - ind = date_range(start="2000-01-01", freq="D", periods=10) - datetimes = [ts.to_pydatetime() for ts in ind] - datetime_s = Series(datetimes) - self.assertEqual(datetime_s.dtype, 'M8[ns]') - df = DataFrame({'datetime_s':datetime_s}) - result = df.get_dtype_counts() - expected = Series({ datetime64name : 1 }) - result.sort_index() - expected.sort_index() - assert_series_equal(result, expected) - - # GH 2810 - ind = date_range(start="2000-01-01", freq="D", periods=10) - datetimes = [ts.to_pydatetime() for ts in ind] - dates = [ts.date() for ts in ind] - df = DataFrame({'datetimes': datetimes, 'dates':dates}) - result = df.get_dtype_counts() - expected = Series({ datetime64name : 1, objectname : 1 }) - result.sort_index() - expected.sort_index() - assert_series_equal(result, expected) - - # GH 7594 - # don't coerce tz-aware - import pytz - tz = pytz.timezone('US/Eastern') - dt = tz.localize(datetime(2012, 1, 1)) - - df = DataFrame({'End Date': dt}, index=[0]) - self.assertEqual(df.iat[0,0],dt) - assert_series_equal(df.dtypes,Series({'End Date' : 'datetime64[ns, 
US/Eastern]' })) - - df = DataFrame([{'End Date': dt}]) - self.assertEqual(df.iat[0,0],dt) - assert_series_equal(df.dtypes,Series({'End Date' : 'datetime64[ns, US/Eastern]' })) - - # tz-aware (UTC and other tz's) - # GH 8411 - dr = date_range('20130101',periods=3) - df = DataFrame({ 'value' : dr}) - self.assertTrue(df.iat[0,0].tz is None) - dr = date_range('20130101',periods=3,tz='UTC') - df = DataFrame({ 'value' : dr}) - self.assertTrue(str(df.iat[0,0].tz) == 'UTC') - dr = date_range('20130101',periods=3,tz='US/Eastern') - df = DataFrame({ 'value' : dr}) - self.assertTrue(str(df.iat[0,0].tz) == 'US/Eastern') - - # GH 7822 - # preserver an index with a tz on dict construction - i = date_range('1/1/2011', periods=5, freq='10s', tz = 'US/Eastern') - - expected = DataFrame( {'a' : i.to_series(keep_tz=True).reset_index(drop=True) }) - df = DataFrame() - df['a'] = i - assert_frame_equal(df, expected) - - df = DataFrame( {'a' : i } ) - assert_frame_equal(df, expected) - - # multiples - i_no_tz = date_range('1/1/2011', periods=5, freq='10s') - df = DataFrame( {'a' : i, 'b' : i_no_tz } ) - expected = DataFrame( {'a' : i.to_series(keep_tz=True).reset_index(drop=True), 'b': i_no_tz }) - assert_frame_equal(df, expected) - - def test_constructor_with_datetime_tz(self): - - # 8260 - # support datetime64 with tz - - idx = Index(date_range('20130101',periods=3,tz='US/Eastern'), - name='foo') - dr = date_range('20130110',periods=3) - - # construction - df = DataFrame({'A' : idx, 'B' : dr}) - self.assertTrue(df['A'].dtype,'M8[ns, US/Eastern') - self.assertTrue(df['A'].name == 'A') - assert_series_equal(df['A'],Series(idx,name='A')) - assert_series_equal(df['B'],Series(dr,name='B')) - - # construction from dict - df2 = DataFrame(dict(A=Timestamp('20130102', tz='US/Eastern'), B=Timestamp('20130603', tz='CET')), index=range(5)) - assert_series_equal(df2.dtypes, Series(['datetime64[ns, US/Eastern]', 'datetime64[ns, CET]'], index=['A','B'])) - - # dtypes - tzframe = DataFrame({'A' : 
date_range('20130101',periods=3), - 'B' : date_range('20130101',periods=3,tz='US/Eastern'), - 'C' : date_range('20130101',periods=3,tz='CET')}) - tzframe.iloc[1,1] = pd.NaT - tzframe.iloc[1,2] = pd.NaT - result = tzframe.dtypes.sort_index() - expected = Series([ np.dtype('datetime64[ns]'), - DatetimeTZDtype('datetime64[ns, US/Eastern]'), - DatetimeTZDtype('datetime64[ns, CET]') ], - ['A','B','C']) - - # concat - df3 = pd.concat([df2.A.to_frame(),df2.B.to_frame()],axis=1) - assert_frame_equal(df2, df3) - - # select_dtypes - result = df3.select_dtypes(include=['datetime64[ns]']) - expected = df3.reindex(columns=[]) - assert_frame_equal(result, expected) - - # this will select based on issubclass, and these are the same class - result = df3.select_dtypes(include=['datetime64[ns, CET]']) - expected = df3 - assert_frame_equal(result, expected) - - # from index - idx2 = date_range('20130101',periods=3,tz='US/Eastern',name='foo') - df2 = DataFrame(idx2) - assert_series_equal(df2['foo'],Series(idx2,name='foo')) - df2 = DataFrame(Series(idx2)) - assert_series_equal(df2['foo'],Series(idx2,name='foo')) - - idx2 = date_range('20130101',periods=3,tz='US/Eastern') - df2 = DataFrame(idx2) - assert_series_equal(df2[0],Series(idx2,name=0)) - df2 = DataFrame(Series(idx2)) - assert_series_equal(df2[0],Series(idx2,name=0)) - - # interleave with object - result = self.tzframe.assign(D = 'foo').values - expected = np.array([[Timestamp('2013-01-01 00:00:00'), - Timestamp('2013-01-02 00:00:00'), - Timestamp('2013-01-03 00:00:00')], - [Timestamp('2013-01-01 00:00:00-0500', tz='US/Eastern'), - pd.NaT, - Timestamp('2013-01-03 00:00:00-0500', tz='US/Eastern')], - [Timestamp('2013-01-01 00:00:00+0100', tz='CET'), - pd.NaT, - Timestamp('2013-01-03 00:00:00+0100', tz='CET')], - ['foo','foo','foo']], dtype=object).T - self.assert_numpy_array_equal(result, expected) - - # interleave with only datetime64[ns] - result = self.tzframe.values - expected = np.array([[Timestamp('2013-01-01 00:00:00'), - 
Timestamp('2013-01-02 00:00:00'), - Timestamp('2013-01-03 00:00:00')], - [Timestamp('2013-01-01 00:00:00-0500', tz='US/Eastern'), - pd.NaT, - Timestamp('2013-01-03 00:00:00-0500', tz='US/Eastern')], - [Timestamp('2013-01-01 00:00:00+0100', tz='CET'), - pd.NaT, - Timestamp('2013-01-03 00:00:00+0100', tz='CET')]], dtype=object).T - self.assert_numpy_array_equal(result, expected) - - # astype - expected = np.array([[Timestamp('2013-01-01 00:00:00'), - Timestamp('2013-01-02 00:00:00'), - Timestamp('2013-01-03 00:00:00')], - [Timestamp('2013-01-01 00:00:00-0500', tz='US/Eastern'), - pd.NaT, - Timestamp('2013-01-03 00:00:00-0500', tz='US/Eastern')], - [Timestamp('2013-01-01 00:00:00+0100', tz='CET'), - pd.NaT, - Timestamp('2013-01-03 00:00:00+0100', tz='CET')]], dtype=object).T - result = self.tzframe.astype(object) - assert_frame_equal(result, DataFrame(expected, index=self.tzframe.index, columns=self.tzframe.columns)) - - result = self.tzframe.astype('datetime64[ns]') - expected = DataFrame({'A' : date_range('20130101',periods=3), - 'B' : date_range('20130101',periods=3,tz='US/Eastern').tz_convert('UTC').tz_localize(None), - 'C' : date_range('20130101',periods=3,tz='CET').tz_convert('UTC').tz_localize(None)}) - expected.iloc[1,1] = pd.NaT - expected.iloc[1,2] = pd.NaT - assert_frame_equal(result, expected) - - # str formatting - result = self.tzframe.astype(str) - expected = np.array([['2013-01-01', '2013-01-01 00:00:00-05:00', - '2013-01-01 00:00:00+01:00'], - ['2013-01-02', 'NaT', 'NaT'], - ['2013-01-03', '2013-01-03 00:00:00-05:00', - '2013-01-03 00:00:00+01:00']], dtype=object) - self.assert_numpy_array_equal(result, expected) - - result = str(self.tzframe) - self.assertTrue('0 2013-01-01 2013-01-01 00:00:00-05:00 2013-01-01 00:00:00+01:00' in result) - self.assertTrue('1 2013-01-02 NaT NaT' in result) - self.assertTrue('2 2013-01-03 2013-01-03 00:00:00-05:00 2013-01-03 00:00:00+01:00' in result) - - # setitem - df['C'] = idx - 
assert_series_equal(df['C'],Series(idx,name='C')) - - df['D'] = 'foo' - df['D'] = idx - assert_series_equal(df['D'],Series(idx,name='D')) - del df['D'] - - # assert that A & C are not sharing the same base (e.g. they - # are copies) - b1 = df._data.blocks[1] - b2 = df._data.blocks[2] - self.assertTrue(b1.values.equals(b2.values)) - self.assertFalse(id(b1.values.values.base) == id(b2.values.values.base)) - - # with nan - df2 = df.copy() - df2.iloc[1,1] = pd.NaT - df2.iloc[1,2] = pd.NaT - result = df2['B'] - assert_series_equal(notnull(result), Series([True,False,True],name='B')) - assert_series_equal(df2.dtypes, df.dtypes) - - # set/reset - df = DataFrame({'A' : [0,1,2] }, index=idx) - result = df.reset_index() - self.assertTrue(result['foo'].dtype,'M8[ns, US/Eastern') - - result = result.set_index('foo') - tm.assert_index_equal(df.index,idx) - - def test_constructor_for_list_with_dtypes(self): - intname = np.dtype(np.int_).name - floatname = np.dtype(np.float_).name - datetime64name = np.dtype('M8[ns]').name - objectname = np.dtype(np.object_).name - - # test list of lists/ndarrays - df = DataFrame([np.arange(5) for x in range(5)]) - result = df.get_dtype_counts() - expected = Series({'int64' : 5}) - - df = DataFrame([np.array(np.arange(5),dtype='int32') for x in range(5)]) - result = df.get_dtype_counts() - expected = Series({'int32' : 5}) - - # overflow issue? 
(we always expecte int64 upcasting here) - df = DataFrame({'a' : [2**31,2**31+1]}) - result = df.get_dtype_counts() - expected = Series({'int64' : 1 }) - assert_series_equal(result, expected) - - # GH #2751 (construction with no index specified), make sure we cast to platform values - df = DataFrame([1, 2]) - result = df.get_dtype_counts() - expected = Series({'int64': 1 }) - assert_series_equal(result, expected) - - df = DataFrame([1.,2.]) - result = df.get_dtype_counts() - expected = Series({'float64' : 1 }) - assert_series_equal(result, expected) - - df = DataFrame({'a' : [1, 2]}) - result = df.get_dtype_counts() - expected = Series({'int64' : 1}) - assert_series_equal(result, expected) - - df = DataFrame({'a' : [1., 2.]}) - result = df.get_dtype_counts() - expected = Series({'float64' : 1}) - assert_series_equal(result, expected) - - df = DataFrame({'a' : 1 }, index=lrange(3)) - result = df.get_dtype_counts() - expected = Series({'int64': 1}) - assert_series_equal(result, expected) - - df = DataFrame({'a' : 1. 
}, index=lrange(3)) - result = df.get_dtype_counts() - expected = Series({'float64': 1 }) - assert_series_equal(result, expected) - - # with object list - df = DataFrame({'a':[1,2,4,7], 'b':[1.2, 2.3, 5.1, 6.3], - 'c':list('abcd'), 'd':[datetime(2000,1,1) for i in range(4)], - 'e' : [1.,2,4.,7]}) - result = df.get_dtype_counts() - expected = Series({'int64': 1, 'float64' : 2, datetime64name: 1, objectname : 1}) - result.sort_index() - expected.sort_index() - assert_series_equal(result, expected) - - def test_not_hashable(self): - df = pd.DataFrame([1]) - self.assertRaises(TypeError, hash, df) - self.assertRaises(TypeError, hash, self.empty) - - def test_timedeltas(self): - - df = DataFrame(dict(A = Series(date_range('2012-1-1', periods=3, freq='D')), - B = Series([ timedelta(days=i) for i in range(3) ]))) - result = df.get_dtype_counts().sort_values() - expected = Series({'datetime64[ns]': 1, 'timedelta64[ns]' : 1 }).sort_values() - assert_series_equal(result, expected) - - df['C'] = df['A'] + df['B'] - expected = Series({'datetime64[ns]': 2, 'timedelta64[ns]' : 1 }).sort_values() - result = df.get_dtype_counts().sort_values() - assert_series_equal(result, expected) - - # mixed int types - df['D'] = 1 - expected = Series({'datetime64[ns]': 2, 'timedelta64[ns]' : 1, 'int64' : 1 }).sort_values() - result = df.get_dtype_counts().sort_values() - assert_series_equal(result, expected) - - def test_operators_timedelta64(self): - - from datetime import timedelta - df = DataFrame(dict(A = date_range('2012-1-1', periods=3, freq='D'), - B = date_range('2012-1-2', periods=3, freq='D'), - C = Timestamp('20120101')-timedelta(minutes=5,seconds=5))) - - diffs = DataFrame(dict(A = df['A']-df['C'], - B = df['A']-df['B'])) - - - # min - result = diffs.min() - self.assertEqual(result[0], diffs.ix[0,'A']) - self.assertEqual(result[1], diffs.ix[0,'B']) - - result = diffs.min(axis=1) - self.assertTrue((result == diffs.ix[0,'B']).all() == True) - - # max - result = diffs.max() - 
self.assertEqual(result[0], diffs.ix[2,'A']) - self.assertEqual(result[1], diffs.ix[2,'B']) - - result = diffs.max(axis=1) - self.assertTrue((result == diffs['A']).all() == True) - - # abs - result = diffs.abs() - result2 = abs(diffs) - expected = DataFrame(dict(A = df['A']-df['C'], - B = df['B']-df['A'])) - assert_frame_equal(result,expected) - assert_frame_equal(result2, expected) - - # mixed frame - mixed = diffs.copy() - mixed['C'] = 'foo' - mixed['D'] = 1 - mixed['E'] = 1. - mixed['F'] = Timestamp('20130101') - - # results in an object array - from pandas.tseries.timedeltas import _coerce_scalar_to_timedelta_type - result = mixed.min() - expected = Series([_coerce_scalar_to_timedelta_type(timedelta(seconds=5*60+5)), - _coerce_scalar_to_timedelta_type(timedelta(days=-1)), - 'foo', - 1, - 1.0, - Timestamp('20130101')], - index=mixed.columns) - assert_series_equal(result,expected) - - # excludes numeric - result = mixed.min(axis=1) - expected = Series([1, 1, 1.],index=[0, 1, 2]) - assert_series_equal(result,expected) - - # works when only those columns are selected - result = mixed[['A','B']].min(1) - expected = Series([ timedelta(days=-1) ] * 3) - assert_series_equal(result,expected) - - result = mixed[['A','B']].min() - expected = Series([ timedelta(seconds=5*60+5), timedelta(days=-1) ],index=['A','B']) - assert_series_equal(result,expected) - - # GH 3106 - df = DataFrame({'time' : date_range('20130102',periods=5), - 'time2' : date_range('20130105',periods=5) }) - df['off1'] = df['time2']-df['time'] - self.assertEqual(df['off1'].dtype, 'timedelta64[ns]') - - df['off2'] = df['time']-df['time2'] - df._consolidate_inplace() - self.assertTrue(df['off1'].dtype == 'timedelta64[ns]') - self.assertTrue(df['off2'].dtype == 'timedelta64[ns]') - - def test_datetimelike_setitem_with_inference(self): - # GH 7592 - # assignment of timedeltas with NaT - - one_hour = timedelta(hours=1) - df = DataFrame(index=date_range('20130101',periods=4)) - df['A'] = 
np.array([1*one_hour]*4, dtype='m8[ns]') - df.loc[:,'B'] = np.array([2*one_hour]*4, dtype='m8[ns]') - df.loc[:3,'C'] = np.array([3*one_hour]*3, dtype='m8[ns]') - df.ix[:,'D'] = np.array([4*one_hour]*4, dtype='m8[ns]') - df.ix[:3,'E'] = np.array([5*one_hour]*3, dtype='m8[ns]') - df['F'] = np.timedelta64('NaT') - df.ix[:-1,'F'] = np.array([6*one_hour]*3, dtype='m8[ns]') - df.ix[-3:,'G'] = date_range('20130101',periods=3) - df['H'] = np.datetime64('NaT') - result = df.dtypes - expected = Series([np.dtype('timedelta64[ns]')]*6+[np.dtype('datetime64[ns]')]*2,index=list('ABCDEFGH')) - assert_series_equal(result,expected) - - def test_setitem_datetime_coercion(self): - # GH 1048 - df = pd.DataFrame({'c': [pd.Timestamp('2010-10-01')]*3}) - df.loc[0:1, 'c'] = np.datetime64('2008-08-08') - self.assertEqual(pd.Timestamp('2008-08-08'), df.loc[0, 'c']) - self.assertEqual(pd.Timestamp('2008-08-08'), df.loc[1, 'c']) - df.loc[2, 'c'] = date(2005, 5, 5) - self.assertEqual(pd.Timestamp('2005-05-05'), df.loc[2, 'c']) - - - def test_new_empty_index(self): - df1 = DataFrame(randn(0, 3)) - df2 = DataFrame(randn(0, 3)) - df1.index.name = 'foo' - self.assertIsNone(df2.index.name) - - def test_astype(self): - casted = self.frame.astype(int) - expected = DataFrame(self.frame.values.astype(int), - index=self.frame.index, - columns=self.frame.columns) - assert_frame_equal(casted, expected) - - casted = self.frame.astype(np.int32) - expected = DataFrame(self.frame.values.astype(np.int32), - index=self.frame.index, - columns=self.frame.columns) - assert_frame_equal(casted, expected) - - self.frame['foo'] = '5' - casted = self.frame.astype(int) - expected = DataFrame(self.frame.values.astype(int), - index=self.frame.index, - columns=self.frame.columns) - assert_frame_equal(casted, expected) - - # mixed casting - def _check_cast(df, v): - self.assertEqual(list(set([ s.dtype.name for _, s in compat.iteritems(df) ]))[0], v) - - mn = self.all_mixed._get_numeric_data().copy() - mn['little_float'] = 
np.array(12345.,dtype='float16') - mn['big_float'] = np.array(123456789101112.,dtype='float64') - - casted = mn.astype('float64') - _check_cast(casted, 'float64') - - casted = mn.astype('int64') - _check_cast(casted, 'int64') - - casted = self.mixed_float.reindex(columns = ['A','B']).astype('float32') - _check_cast(casted, 'float32') - - casted = mn.reindex(columns = ['little_float']).astype('float16') - _check_cast(casted, 'float16') - - casted = self.mixed_float.reindex(columns = ['A','B']).astype('float16') - _check_cast(casted, 'float16') - - casted = mn.astype('float32') - _check_cast(casted, 'float32') - - casted = mn.astype('int32') - _check_cast(casted, 'int32') - - # to object - casted = mn.astype('O') - _check_cast(casted, 'object') - - def test_astype_with_exclude_string(self): - df = self.frame.copy() - expected = self.frame.astype(int) - df['string'] = 'foo' - casted = df.astype(int, raise_on_error = False) - - expected['string'] = 'foo' - assert_frame_equal(casted, expected) - - df = self.frame.copy() - expected = self.frame.astype(np.int32) - df['string'] = 'foo' - casted = df.astype(np.int32, raise_on_error = False) - - expected['string'] = 'foo' - assert_frame_equal(casted, expected) - - def test_astype_with_view(self): - - tf = self.mixed_float.reindex(columns = ['A','B','C']) - - casted = tf.astype(np.int64) - - casted = tf.astype(np.float32) - - # this is the only real reason to do it this way - tf = np.round(self.frame).astype(np.int32) - casted = tf.astype(np.float32, copy = False) - - tf = self.frame.astype(np.float64) - casted = tf.astype(np.int64, copy = False) - - def test_astype_cast_nan_int(self): - df = DataFrame(data={"Values": [1.0, 2.0, 3.0, np.nan]}) - self.assertRaises(ValueError, df.astype, np.int64) - - def test_astype_str(self): - # GH9757 - a = Series(date_range('2010-01-04', periods=5)) - b = Series(date_range('3/6/2012 00:00', periods=5, tz='US/Eastern')) - c = Series([Timedelta(x, unit='d') for x in range(5)]) - d = 
Series(range(5)) - e = Series([0.0, 0.2, 0.4, 0.6, 0.8]) - - df = DataFrame({'a' : a, 'b' : b, 'c' : c, 'd' : d, 'e' : e}) - - # datetimelike - # Test str and unicode on python 2.x and just str on python 3.x - for tt in set([str, compat.text_type]): - result = df.astype(tt) - - expected = DataFrame({ - 'a' : list(map(tt, map(lambda x: Timestamp(x)._date_repr, a._values))), - 'b' : list(map(tt, map(Timestamp, b._values))), - 'c' : list(map(tt, map(lambda x: Timedelta(x)._repr_base(format='all'), c._values))), - 'd' : list(map(tt, d._values)), - 'e' : list(map(tt, e._values)), - }) - - assert_frame_equal(result, expected) - - # float/nan - # 11302 - # consistency in astype(str) - for tt in set([str, compat.text_type]): - result = DataFrame([np.NaN]).astype(tt) - expected = DataFrame(['nan']) - assert_frame_equal(result, expected) - - result = DataFrame([1.12345678901234567890]).astype(tt) - expected = DataFrame(['1.12345678901']) - assert_frame_equal(result, expected) - - def test_array_interface(self): - result = np.sqrt(self.frame) - tm.assertIsInstance(result, type(self.frame)) - self.assertIs(result.index, self.frame.index) - self.assertIs(result.columns, self.frame.columns) - - assert_frame_equal(result, self.frame.apply(np.sqrt)) - - def test_pickle(self): - unpickled = self.round_trip_pickle(self.mixed_frame) - assert_frame_equal(self.mixed_frame, unpickled) - - # buglet - self.mixed_frame._data.ndim - - # empty - unpickled = self.round_trip_pickle(self.empty) - repr(unpickled) - - # tz frame - unpickled = self.round_trip_pickle(self.tzframe) - assert_frame_equal(self.tzframe, unpickled) - - def test_to_dict(self): - test_data = { - 'A': {'1': 1, '2': 2}, - 'B': {'1': '1', '2': '2', '3': '3'}, - } - recons_data = DataFrame(test_data).to_dict() - - for k, v in compat.iteritems(test_data): - for k2, v2 in compat.iteritems(v): - self.assertEqual(v2, recons_data[k][k2]) - - recons_data = DataFrame(test_data).to_dict("l") - - for k, v in 
compat.iteritems(test_data): - for k2, v2 in compat.iteritems(v): - self.assertEqual(v2, recons_data[k][int(k2) - 1]) - - recons_data = DataFrame(test_data).to_dict("s") - - for k, v in compat.iteritems(test_data): - for k2, v2 in compat.iteritems(v): - self.assertEqual(v2, recons_data[k][k2]) - - recons_data = DataFrame(test_data).to_dict("sp") - - expected_split = {'columns': ['A', 'B'], 'index': ['1', '2', '3'], - 'data': [[1.0, '1'], [2.0, '2'], [nan, '3']]} - - tm.assert_almost_equal(recons_data, expected_split) - - recons_data = DataFrame(test_data).to_dict("r") - - expected_records = [{'A': 1.0, 'B': '1'}, - {'A': 2.0, 'B': '2'}, - {'A': nan, 'B': '3'}] - - tm.assert_almost_equal(recons_data, expected_records) - - # GH10844 - recons_data = DataFrame(test_data).to_dict("i") - - for k, v in compat.iteritems(test_data): - for k2, v2 in compat.iteritems(v): - self.assertEqual(v2, recons_data[k2][k]) - - def test_latex_repr(self): - result=r"""\begin{tabular}{llll} -\toprule -{} & 0 & 1 & 2 \\ -\midrule -0 & $\alpha$ & b & c \\ -1 & 1 & 2 & 3 \\ -\bottomrule -\end{tabular} -""" - with option_context("display.latex.escape",False): - df=DataFrame([[r'$\alpha$','b','c'],[1,2,3]]) - self.assertEqual(result,df._repr_latex_()) - - - def test_to_dict_timestamp(self): - - # GH11247 - # split/records producing np.datetime64 rather than Timestamps - # on datetime64[ns] dtypes only - - tsmp = Timestamp('20130101') - test_data = DataFrame({'A': [tsmp, tsmp], 'B': [tsmp, tsmp]}) - test_data_mixed = DataFrame({'A': [tsmp, tsmp], 'B': [1, 2]}) - - expected_records = [{'A': tsmp, 'B': tsmp}, - {'A': tsmp, 'B': tsmp}] - expected_records_mixed = [{'A': tsmp, 'B': 1}, - {'A': tsmp, 'B': 2}] - - tm.assert_almost_equal(test_data.to_dict( - orient='records'), expected_records) - tm.assert_almost_equal(test_data_mixed.to_dict( - orient='records'), expected_records_mixed) - - expected_series = { - 'A': Series([tsmp, tsmp]), - 'B': Series([tsmp, tsmp]), - } - expected_series_mixed = { - 
'A': Series([tsmp, tsmp]), - 'B': Series([1, 2]), - } - - tm.assert_almost_equal(test_data.to_dict( - orient='series'), expected_series) - tm.assert_almost_equal(test_data_mixed.to_dict( - orient='series'), expected_series_mixed) - - expected_split = { - 'index': [0, 1], - 'data': [[tsmp, tsmp], - [tsmp, tsmp]], - 'columns': ['A', 'B'] - } - expected_split_mixed = { - 'index': [0, 1], - 'data': [[tsmp, 1], - [tsmp, 2]], - 'columns': ['A', 'B'] - } - - tm.assert_almost_equal(test_data.to_dict( - orient='split'), expected_split) - tm.assert_almost_equal(test_data_mixed.to_dict( - orient='split'), expected_split_mixed) - - def test_to_dict_invalid_orient(self): - df = DataFrame({'A':[0, 1]}) - self.assertRaises(ValueError, df.to_dict, orient='xinvalid') - - def test_to_records_dt64(self): - df = DataFrame([["one", "two", "three"], - ["four", "five", "six"]], - index=date_range("2012-01-01", "2012-01-02")) - self.assertEqual(df.to_records()['index'][0], df.index[0]) - - rs = df.to_records(convert_datetime64=False) - self.assertEqual(rs['index'][0], df.index.values[0]) - - def test_to_records_with_multindex(self): - # GH3189 - index = [['bar', 'bar', 'baz', 'baz', 'foo', 'foo', 'qux', 'qux'], - ['one', 'two', 'one', 'two', 'one', 'two', 'one', 'two']] - data = np.zeros((8, 4)) - df = DataFrame(data, index=index) - r = df.to_records(index=True)['level_0'] - self.assertTrue('bar' in r) - self.assertTrue('one' not in r) - - def test_to_records_with_Mapping_type(self): - import email - from email.parser import Parser - import collections - - collections.Mapping.register(email.message.Message) - - headers = Parser().parsestr('From: <user@example.com>\n' - 'To: <someone_else@example.com>\n' - 'Subject: Test message\n' - '\n' - 'Body would go here\n') - - frame = DataFrame.from_records([headers]) - all( x in frame for x in ['Type','Subject','From']) - - def test_from_records_to_records(self): - # from numpy documentation - arr = np.zeros((2,), dtype=('i4,f4,a10')) - arr[:] = 
[(1, 2., 'Hello'), (2, 3., "World")] - - frame = DataFrame.from_records(arr) - - index = np.arange(len(arr))[::-1] - indexed_frame = DataFrame.from_records(arr, index=index) - self.assert_numpy_array_equal(indexed_frame.index, index) - - # without names, it should go to last ditch - arr2 = np.zeros((2, 3)) - assert_frame_equal(DataFrame.from_records(arr2), DataFrame(arr2)) - - # wrong length - msg = r'Shape of passed values is \(3, 2\), indices imply \(3, 1\)' - with assertRaisesRegexp(ValueError, msg): - DataFrame.from_records(arr, index=index[:-1]) - - indexed_frame = DataFrame.from_records(arr, index='f1') - - # what to do? - records = indexed_frame.to_records() - self.assertEqual(len(records.dtype.names), 3) - - records = indexed_frame.to_records(index=False) - self.assertEqual(len(records.dtype.names), 2) - self.assertNotIn('index', records.dtype.names) - - def test_from_records_nones(self): - tuples = [(1, 2, None, 3), - (1, 2, None, 3), - (None, 2, 5, 3)] - - df = DataFrame.from_records(tuples, columns=['a', 'b', 'c', 'd']) - self.assertTrue(np.isnan(df['c'][0])) - - def test_from_records_iterator(self): - arr = np.array([(1.0, 1.0, 2, 2), (3.0, 3.0, 4, 4), (5., 5., 6, 6), (7., 7., 8, 8)], - dtype=[('x', np.float64), ('u', np.float32), ('y', np.int64), ('z', np.int32) ]) - df = DataFrame.from_records(iter(arr), nrows=2) - xp = DataFrame({'x': np.array([1.0, 3.0], dtype=np.float64), - 'u': np.array([1.0, 3.0], dtype=np.float32), - 'y': np.array([2, 4], dtype=np.int64), - 'z': np.array([2, 4], dtype=np.int32)}) - assert_frame_equal(df.reindex_like(xp), xp) - - # no dtypes specified here, so just compare with the default - arr = [(1.0, 2), (3.0, 4), (5., 6), (7., 8)] - df = DataFrame.from_records(iter(arr), columns=['x', 'y'], - nrows=2) - assert_frame_equal(df, xp.reindex(columns=['x','y']), check_dtype=False) - - def test_from_records_tuples_generator(self): - def tuple_generator(length): - for i in range(length): - letters = 'ABCDEFGHIJKLMNOPQRSTUVWXYZ' - 
yield (i, letters[i % len(letters)], i/length) - - columns_names = ['Integer', 'String', 'Float'] - columns = [[i[j] for i in tuple_generator(10)] for j in range(len(columns_names))] - data = {'Integer': columns[0], 'String': columns[1], 'Float': columns[2]} - expected = DataFrame(data, columns=columns_names) - - generator = tuple_generator(10) - result = DataFrame.from_records(generator, columns=columns_names) - assert_frame_equal(result, expected) - - def test_from_records_lists_generator(self): - def list_generator(length): - for i in range(length): - letters = 'ABCDEFGHIJKLMNOPQRSTUVWXYZ' - yield [i, letters[i % len(letters)], i/length] - - columns_names = ['Integer', 'String', 'Float'] - columns = [[i[j] for i in list_generator(10)] for j in range(len(columns_names))] - data = {'Integer': columns[0], 'String': columns[1], 'Float': columns[2]} - expected = DataFrame(data, columns=columns_names) - - generator = list_generator(10) - result = DataFrame.from_records(generator, columns=columns_names) - assert_frame_equal(result, expected) - - def test_from_records_columns_not_modified(self): - tuples = [(1, 2, 3), - (1, 2, 3), - (2, 5, 3)] - - columns = ['a', 'b', 'c'] - original_columns = list(columns) - df = DataFrame.from_records(tuples, columns=columns, index='a') - self.assertEqual(columns, original_columns) - - def test_from_records_decimal(self): - from decimal import Decimal - - tuples = [(Decimal('1.5'),), (Decimal('2.5'),), (None,)] - - df = DataFrame.from_records(tuples, columns=['a']) - self.assertEqual(df['a'].dtype, object) - - df = DataFrame.from_records(tuples, columns=['a'], coerce_float=True) - self.assertEqual(df['a'].dtype, np.float64) - self.assertTrue(np.isnan(df['a'].values[-1])) - - def test_from_records_duplicates(self): - result = DataFrame.from_records([(1, 2, 3), (4, 5, 6)], - columns=['a', 'b', 'a']) - - expected = DataFrame([(1, 2, 3), (4, 5, 6)], - columns=['a', 'b', 'a']) - - assert_frame_equal(result, expected) - - def 
test_from_records_set_index_name(self): - def create_dict(order_id): - return {'order_id': order_id, 'quantity': np.random.randint(1, 10), - 'price': np.random.randint(1, 10)} - documents = [create_dict(i) for i in range(10)] - # demo missing data - documents.append({'order_id': 10, 'quantity': 5}) - - result = DataFrame.from_records(documents, index='order_id') - self.assertEqual(result.index.name, 'order_id') - - # MultiIndex - result = DataFrame.from_records(documents, - index=['order_id', 'quantity']) - self.assertEqual(result.index.names, ('order_id', 'quantity')) - - def test_from_records_misc_brokenness(self): - # #2179 - - data = {1: ['foo'], 2: ['bar']} - - result = DataFrame.from_records(data, columns=['a', 'b']) - exp = DataFrame(data, columns=['a', 'b']) - assert_frame_equal(result, exp) - - # overlap in index/index_names - - data = {'a': [1, 2, 3], 'b': [4, 5, 6]} - - result = DataFrame.from_records(data, index=['a', 'b', 'c']) - exp = DataFrame(data, index=['a', 'b', 'c']) - assert_frame_equal(result, exp) - - - # GH 2623 - rows = [] - rows.append([datetime(2010, 1, 1), 1]) - rows.append([datetime(2010, 1, 2), 'hi']) # test col upconverts to obj - df2_obj = DataFrame.from_records(rows, columns=['date', 'test']) - results = df2_obj.get_dtype_counts() - expected = Series({ 'datetime64[ns]' : 1, 'object' : 1 }) - - rows = [] - rows.append([datetime(2010, 1, 1), 1]) - rows.append([datetime(2010, 1, 2), 1]) - df2_obj = DataFrame.from_records(rows, columns=['date', 'test']) - results = df2_obj.get_dtype_counts() - expected = Series({ 'datetime64[ns]' : 1, 'int64' : 1 }) - - def test_from_records_empty(self): - # 3562 - result = DataFrame.from_records([], columns=['a','b','c']) - expected = DataFrame(columns=['a','b','c']) - assert_frame_equal(result, expected) - - result = DataFrame.from_records([], columns=['a','b','b']) - expected = DataFrame(columns=['a','b','b']) - assert_frame_equal(result, expected) - - def 
test_from_records_empty_with_nonempty_fields_gh3682(self): - a = np.array([(1, 2)], dtype=[('id', np.int64), ('value', np.int64)]) - df = DataFrame.from_records(a, index='id') - assert_numpy_array_equal(df.index, Index([1], name='id')) - self.assertEqual(df.index.name, 'id') - assert_numpy_array_equal(df.columns, Index(['value'])) - - b = np.array([], dtype=[('id', np.int64), ('value', np.int64)]) - df = DataFrame.from_records(b, index='id') - assert_numpy_array_equal(df.index, Index([], name='id')) - self.assertEqual(df.index.name, 'id') - - def test_from_records_with_datetimes(self): - - # this may fail on certain platforms because of a numpy issue - # related GH6140 - if not is_little_endian(): - raise nose.SkipTest("known failure of test on non-little endian") - - # construction with a null in a recarray - # GH 6140 - expected = DataFrame({ 'EXPIRY' : [datetime(2005, 3, 1, 0, 0), None ]}) - - arrdata = [np.array([datetime(2005, 3, 1, 0, 0), None])] - dtypes = [('EXPIRY', '<M8[ns]')] - - try: - recarray = np.core.records.fromarrays(arrdata, dtype=dtypes) - except (ValueError): - raise nose.SkipTest("known failure of numpy rec array creation") - - result = DataFrame.from_records(recarray) - assert_frame_equal(result,expected) - - # coercion should work too - arrdata = [np.array([datetime(2005, 3, 1, 0, 0), None])] - dtypes = [('EXPIRY', '<M8[m]')] - recarray = np.core.records.fromarrays(arrdata, dtype=dtypes) - result = DataFrame.from_records(recarray) - assert_frame_equal(result,expected) - - def test_to_records_floats(self): - df = DataFrame(np.random.rand(10, 10)) - df.to_records() - - def test_to_recods_index_name(self): - df = DataFrame(np.random.randn(3, 3)) - df.index.name = 'X' - rs = df.to_records() - self.assertIn('X', rs.dtype.fields) - - df = DataFrame(np.random.randn(3, 3)) - rs = df.to_records() - self.assertIn('index', rs.dtype.fields) - - df.index = MultiIndex.from_tuples([('a', 'x'), ('a', 'y'), ('b', 'z')]) - df.index.names = ['A', None] - rs = 
df.to_records() - self.assertIn('level_0', rs.dtype.fields) - - def test_join_str_datetime(self): - str_dates = ['20120209', '20120222'] - dt_dates = [datetime(2012, 2, 9), datetime(2012, 2, 22)] - - A = DataFrame(str_dates, index=lrange(2), columns=['aa']) - C = DataFrame([[1, 2], [3, 4]], index=str_dates, columns=dt_dates) - - tst = A.join(C, on='aa') - - self.assertEqual(len(tst.columns), 3) - - def test_join_multiindex_leftright(self): - # GH 10741 - df1 = pd.DataFrame([['a', 'x', 0.471780], ['a','y', 0.774908], - ['a', 'z', 0.563634], ['b', 'x', -0.353756], - ['b', 'y', 0.368062], ['b', 'z', -1.721840], - ['c', 'x', 1], ['c', 'y', 2], ['c', 'z', 3]], - columns=['first', 'second', 'value1']).set_index(['first', 'second']) - df2 = pd.DataFrame([['a', 10], ['b', 20]], columns=['first', 'value2']).set_index(['first']) - - exp = pd.DataFrame([[0.471780, 10], [0.774908, 10], [0.563634, 10], - [-0.353756, 20], [0.368062, 20], [-1.721840, 20], - [1.000000, np.nan], [2.000000, np.nan], [3.000000, np.nan]], - index=df1.index, columns=['value1', 'value2']) - - # these must be the same results (but columns are flipped) - assert_frame_equal(df1.join(df2, how='left'), exp) - assert_frame_equal(df2.join(df1, how='right'), - exp[['value2', 'value1']]) - - exp_idx = pd.MultiIndex.from_product([['a', 'b'], ['x', 'y', 'z']], - names=['first', 'second']) - exp = pd.DataFrame([[0.471780, 10], [0.774908, 10], [0.563634, 10], - [-0.353756, 20], [0.368062, 20], [-1.721840, 20]], - index=exp_idx, columns=['value1', 'value2']) - - assert_frame_equal(df1.join(df2, how='right'), exp) - assert_frame_equal(df2.join(df1, how='left'), - exp[['value2', 'value1']]) - - def test_from_records_sequencelike(self): - df = DataFrame({'A': np.array(np.random.randn(6), dtype=np.float64), - 'A1': np.array(np.random.randn(6), dtype=np.float64), - 'B': np.array(np.arange(6), dtype=np.int64), - 'C': ['foo'] * 6, - 'D': np.array([True, False] * 3, dtype=bool), - 'E': np.array(np.random.randn(6), 
dtype=np.float32), - 'E1': np.array(np.random.randn(6), dtype=np.float32), - 'F': np.array(np.arange(6), dtype=np.int32)}) - - # this is actually tricky to create the recordlike arrays and - # have the dtypes be intact - blocks = df.blocks - tuples = [] - columns = [] - dtypes = [] - for dtype, b in compat.iteritems(blocks): - columns.extend(b.columns) - dtypes.extend([ (c,np.dtype(dtype).descr[0][1]) for c in b.columns ]) - for i in range(len(df.index)): - tup = [] - for _, b in compat.iteritems(blocks): - tup.extend(b.iloc[i].values) - tuples.append(tuple(tup)) - - recarray = np.array(tuples, dtype=dtypes).view(np.recarray) - recarray2 = df.to_records() - lists = [list(x) for x in tuples] - - # tuples (lose the dtype info) - result = DataFrame.from_records(tuples, columns=columns).reindex(columns=df.columns) - - # created recarray and with to_records recarray (have dtype info) - result2 = DataFrame.from_records(recarray, columns=columns).reindex(columns=df.columns) - result3 = DataFrame.from_records(recarray2, columns=columns).reindex(columns=df.columns) - - # list of tupels (no dtype info) - result4 = DataFrame.from_records(lists, columns=columns).reindex(columns=df.columns) - - assert_frame_equal(result, df, check_dtype=False) - assert_frame_equal(result2, df) - assert_frame_equal(result3, df) - assert_frame_equal(result4, df, check_dtype=False) - - # tuples is in the order of the columns - result = DataFrame.from_records(tuples) - self.assert_numpy_array_equal(result.columns, lrange(8)) - - # test exclude parameter & we are casting the results here (as we don't have dtype info to recover) - columns_to_test = [ columns.index('C'), columns.index('E1') ] - - exclude = list(set(range(8))-set(columns_to_test)) - result = DataFrame.from_records(tuples, exclude=exclude) - result.columns = [ columns[i] for i in sorted(columns_to_test) ] - assert_series_equal(result['C'], df['C']) - assert_series_equal(result['E1'], df['E1'].astype('float64')) - - # empty case - result 
= DataFrame.from_records([], columns=['foo', 'bar', 'baz']) - self.assertEqual(len(result), 0) - self.assert_numpy_array_equal(result.columns, ['foo', 'bar', 'baz']) - - result = DataFrame.from_records([]) - self.assertEqual(len(result), 0) - self.assertEqual(len(result.columns), 0) - - def test_from_records_dictlike(self): - - # test the dict methods - df = DataFrame({'A' : np.array(np.random.randn(6), dtype = np.float64), - 'A1': np.array(np.random.randn(6), dtype = np.float64), - 'B' : np.array(np.arange(6), dtype = np.int64), - 'C' : ['foo'] * 6, - 'D' : np.array([True, False] * 3, dtype=bool), - 'E' : np.array(np.random.randn(6), dtype = np.float32), - 'E1': np.array(np.random.randn(6), dtype = np.float32), - 'F' : np.array(np.arange(6), dtype = np.int32) }) - - # columns is in a different order here than the actual items iterated from the dict - columns = [] - for dtype, b in compat.iteritems(df.blocks): - columns.extend(b.columns) - - asdict = dict((x, y) for x, y in compat.iteritems(df)) - asdict2 = dict((x, y.values) for x, y in compat.iteritems(df)) - - # dict of series & dict of ndarrays (have dtype info) - results = [] - results.append(DataFrame.from_records(asdict).reindex(columns=df.columns)) - results.append(DataFrame.from_records(asdict, columns=columns).reindex(columns=df.columns)) - results.append(DataFrame.from_records(asdict2, columns=columns).reindex(columns=df.columns)) - - for r in results: - assert_frame_equal(r, df) - - def test_from_records_with_index_data(self): - df = DataFrame(np.random.randn(10, 3), columns=['A', 'B', 'C']) - - data = np.random.randn(10) - df1 = DataFrame.from_records(df, index=data) - assert(df1.index.equals(Index(data))) - - def test_from_records_bad_index_column(self): - df = DataFrame(np.random.randn(10, 3), columns=['A', 'B', 'C']) - - # should pass - df1 = DataFrame.from_records(df, index=['C']) - assert(df1.index.equals(Index(df.C))) - - df1 = DataFrame.from_records(df, index='C') - 
assert(df1.index.equals(Index(df.C))) - - # should fail - self.assertRaises(ValueError, DataFrame.from_records, df, index=[2]) - self.assertRaises(KeyError, DataFrame.from_records, df, index=2) - - def test_from_records_non_tuple(self): - class Record(object): - - def __init__(self, *args): - self.args = args - - def __getitem__(self, i): - return self.args[i] - - def __iter__(self): - return iter(self.args) - - recs = [Record(1, 2, 3), Record(4, 5, 6), Record(7, 8, 9)] - tups = lmap(tuple, recs) - - result = DataFrame.from_records(recs) - expected = DataFrame.from_records(tups) - assert_frame_equal(result, expected) - - def test_from_records_len0_with_columns(self): - # #2633 - result = DataFrame.from_records([], index='foo', - columns=['foo', 'bar']) - - self.assertTrue(np.array_equal(result.columns, ['bar'])) - self.assertEqual(len(result), 0) - self.assertEqual(result.index.name, 'foo') - - def test_get_agg_axis(self): - cols = self.frame._get_agg_axis(0) - self.assertIs(cols, self.frame.columns) - - idx = self.frame._get_agg_axis(1) - self.assertIs(idx, self.frame.index) - - self.assertRaises(ValueError, self.frame._get_agg_axis, 2) - - def test_nonzero(self): - self.assertTrue(self.empty.empty) - - self.assertFalse(self.frame.empty) - self.assertFalse(self.mixed_frame.empty) - - # corner case - df = DataFrame({'A': [1., 2., 3.], - 'B': ['a', 'b', 'c']}, - index=np.arange(3)) - del df['A'] - self.assertFalse(df.empty) - - def test_repr_empty(self): - buf = StringIO() - - # empty - foo = repr(self.empty) - - # empty with index - frame = DataFrame(index=np.arange(1000)) - foo = repr(frame) - - def test_repr_mixed(self): - buf = StringIO() - - # mixed - foo = repr(self.mixed_frame) - self.mixed_frame.info(verbose=False, buf=buf) - - @slow - def test_repr_mixed_big(self): - # big mixed - biggie = DataFrame({'A': randn(200), - 'B': tm.makeStringIndex(200)}, - index=lrange(200)) - biggie.loc[:20,'A'] = nan - biggie.loc[:20,'B'] = nan - - foo = repr(biggie) - - def 
test_repr(self): - buf = StringIO() - - # small one - foo = repr(self.frame) - self.frame.info(verbose=False, buf=buf) - - # even smaller - self.frame.reindex(columns=['A']).info(verbose=False, buf=buf) - self.frame.reindex(columns=['A', 'B']).info(verbose=False, buf=buf) - - # exhausting cases in DataFrame.info - - # columns but no index - no_index = DataFrame(columns=[0, 1, 3]) - foo = repr(no_index) - - # no columns or index - self.empty.info(buf=buf) - - df = DataFrame(["a\n\r\tb"], columns=["a\n\r\td"], index=["a\n\r\tf"]) - self.assertFalse("\t" in repr(df)) - self.assertFalse("\r" in repr(df)) - self.assertFalse("a\n" in repr(df)) - - def test_repr_dimensions(self): - df = DataFrame([[1, 2,], [3, 4]]) - with option_context('display.show_dimensions', True): - self.assertTrue("2 rows x 2 columns" in repr(df)) - - with option_context('display.show_dimensions', False): - self.assertFalse("2 rows x 2 columns" in repr(df)) - - with option_context('display.show_dimensions', 'truncate'): - self.assertFalse("2 rows x 2 columns" in repr(df)) - - @slow - def test_repr_big(self): - buf = StringIO() - - # big one - biggie = DataFrame(np.zeros((200, 4)), columns=lrange(4), - index=lrange(200)) - foo = repr(biggie) - - def test_repr_unsortable(self): - # columns are not sortable - import warnings - warn_filters = warnings.filters - warnings.filterwarnings('ignore', - category=FutureWarning, - module=".*format") - - unsortable = DataFrame({'foo': [1] * 50, - datetime.today(): [1] * 50, - 'bar': ['bar'] * 50, - datetime.today( - ) + timedelta(1): ['bar'] * 50}, - index=np.arange(50)) - foo = repr(unsortable) - - fmt.set_option('display.precision', 3, 'display.column_space', 10) - repr(self.frame) - - fmt.set_option('display.max_rows', 10, 'display.max_columns', 2) - repr(self.frame) - - fmt.set_option('display.max_rows', 1000, 'display.max_columns', 1000) - repr(self.frame) - - self.reset_display_options() - - warnings.filters = warn_filters - - def test_repr_unicode(self): 
- uval = u('\u03c3\u03c3\u03c3\u03c3') - bval = uval.encode('utf-8') - df = DataFrame({'A': [uval, uval]}) - - result = repr(df) - ex_top = ' A' - self.assertEqual(result.split('\n')[0].rstrip(), ex_top) - - df = DataFrame({'A': [uval, uval]}) - result = repr(df) - self.assertEqual(result.split('\n')[0].rstrip(), ex_top) - - def test_unicode_string_with_unicode(self): - df = DataFrame({'A': [u("\u05d0")]}) - - if compat.PY3: - str(df) - else: - compat.text_type(df) - - def test_bytestring_with_unicode(self): - df = DataFrame({'A': [u("\u05d0")]}) - if compat.PY3: - bytes(df) - else: - str(df) - - def test_very_wide_info_repr(self): - df = DataFrame(np.random.randn(10, 20), - columns=tm.rands_array(10, 20)) - repr(df) - - def test_repr_column_name_unicode_truncation_bug(self): - # #1906 - df = DataFrame({'Id': [7117434], - 'StringCol': ('Is it possible to modify drop plot code' - ' so that the output graph is displayed ' - 'in iphone simulator, Is it possible to ' - 'modify drop plot code so that the ' - 'output graph is \xe2\x80\xa8displayed ' - 'in iphone simulator.Now we are adding ' - 'the CSV file externally. 
I want to Call' - ' the File through the code..')}) - - result = repr(df) - self.assertIn('StringCol', result) - - def test_head_tail(self): - assert_frame_equal(self.frame.head(), self.frame[:5]) - assert_frame_equal(self.frame.tail(), self.frame[-5:]) - - assert_frame_equal(self.frame.head(0), self.frame[0:0]) - assert_frame_equal(self.frame.tail(0), self.frame[0:0]) - - assert_frame_equal(self.frame.head(-1), self.frame[:-1]) - assert_frame_equal(self.frame.tail(-1), self.frame[1:]) - assert_frame_equal(self.frame.head(1), self.frame[:1]) - assert_frame_equal(self.frame.tail(1), self.frame[-1:]) - # with a float index - df = self.frame.copy() - df.index = np.arange(len(self.frame)) + 0.1 - assert_frame_equal(df.head(), df.iloc[:5]) - assert_frame_equal(df.tail(), df.iloc[-5:]) - assert_frame_equal(df.head(0), df[0:0]) - assert_frame_equal(df.tail(0), df[0:0]) - assert_frame_equal(df.head(-1), df.iloc[:-1]) - assert_frame_equal(df.tail(-1), df.iloc[1:]) - #test empty dataframe - empty_df = DataFrame() - assert_frame_equal(empty_df.tail(), empty_df) - assert_frame_equal(empty_df.head(), empty_df) - - def test_insert(self): - df = DataFrame(np.random.randn(5, 3), index=np.arange(5), - columns=['c', 'b', 'a']) - - df.insert(0, 'foo', df['a']) - self.assert_numpy_array_equal(df.columns, ['foo', 'c', 'b', 'a']) - assert_almost_equal(df['a'], df['foo']) - - df.insert(2, 'bar', df['c']) - self.assert_numpy_array_equal(df.columns, ['foo', 'c', 'bar', 'b', 'a']) - assert_almost_equal(df['c'], df['bar']) - - # diff dtype - - # new item - df['x'] = df['a'].astype('float32') - result = Series(dict(float64 = 5, float32 = 1)) - self.assertTrue((df.get_dtype_counts() == result).all()) - - # replacing current (in different block) - df['a'] = df['a'].astype('float32') - result = Series(dict(float64 = 4, float32 = 2)) - self.assertTrue((df.get_dtype_counts() == result).all()) - - df['y'] = df['a'].astype('int32') - result = Series(dict(float64 = 4, float32 = 2, int32 = 1)) - 
self.assertTrue((df.get_dtype_counts() == result).all()) - - with assertRaisesRegexp(ValueError, 'already exists'): - df.insert(1, 'a', df['b']) - self.assertRaises(ValueError, df.insert, 1, 'c', df['b']) - - df.columns.name = 'some_name' - # preserve columns name field - df.insert(0, 'baz', df['c']) - self.assertEqual(df.columns.name, 'some_name') - - def test_delitem(self): - del self.frame['A'] - self.assertNotIn('A', self.frame) - - def test_pop(self): - self.frame.columns.name = 'baz' - - A = self.frame.pop('A') - self.assertNotIn('A', self.frame) - - self.frame['foo'] = 'bar' - foo = self.frame.pop('foo') - self.assertNotIn('foo', self.frame) - # TODO self.assertEqual(self.frame.columns.name, 'baz') - - # 10912 - # inplace ops cause caching issue - a = DataFrame([[1,2,3],[4,5,6]], columns=['A','B','C'], index=['X','Y']) - b = a.pop('B') - b += 1 - - # original frame - expected = DataFrame([[1,3],[4,6]], columns=['A','C'], index=['X','Y']) - assert_frame_equal(a, expected) - - # result - expected = Series([2,5],index=['X','Y'],name='B')+1 - assert_series_equal(b, expected) - - def test_pop_non_unique_cols(self): - df = DataFrame({0: [0, 1], 1: [0, 1], 2: [4, 5]}) - df.columns = ["a", "b", "a"] - - res = df.pop("a") - self.assertEqual(type(res), DataFrame) - self.assertEqual(len(res), 2) - self.assertEqual(len(df.columns), 1) - self.assertTrue("b" in df.columns) - self.assertFalse("a" in df.columns) - self.assertEqual(len(df.index), 2) - - def test_iter(self): - self.assertTrue(tm.equalContents(list(self.frame), self.frame.columns)) - - def test_iterrows(self): - for i, (k, v) in enumerate(self.frame.iterrows()): - exp = self.frame.xs(self.frame.index[i]) - assert_series_equal(v, exp) - - for i, (k, v) in enumerate(self.mixed_frame.iterrows()): - exp = self.mixed_frame.xs(self.mixed_frame.index[i]) - assert_series_equal(v, exp) - - def test_itertuples(self): - for i, tup in enumerate(self.frame.itertuples()): - s = Series(tup[1:]) - s.name = tup[0] - expected = 
self.frame.ix[i, :].reset_index(drop=True) - assert_series_equal(s, expected) - - df = DataFrame({'floats': np.random.randn(5), - 'ints': lrange(5)}, columns=['floats', 'ints']) - - for tup in df.itertuples(index=False): - tm.assertIsInstance(tup[1], np.integer) - - df = DataFrame(data={"a": [1, 2, 3], "b": [4, 5, 6]}) - dfaa = df[['a', 'a']] - self.assertEqual(list(dfaa.itertuples()), [(0, 1, 1), (1, 2, 2), (2, 3, 3)]) - - self.assertEqual(repr(list(df.itertuples(name=None))), '[(0, 1, 4), (1, 2, 5), (2, 3, 6)]') - - tup = next(df.itertuples(name='TestName')) - - # no support for field renaming in Python 2.6, regular tuples are returned - if sys.version >= LooseVersion('2.7'): - self.assertEqual(tup._fields, ('Index', 'a', 'b')) - self.assertEqual((tup.Index, tup.a, tup.b), tup) - self.assertEqual(type(tup).__name__, 'TestName') - - df.columns = ['def', 'return'] - tup2 = next(df.itertuples(name='TestName')) - self.assertEqual(tup2, (0, 1, 4)) - - if sys.version >= LooseVersion('2.7'): - self.assertEqual(tup2._fields, ('Index', '_1', '_2')) - - df3 = DataFrame(dict(('f'+str(i), [i]) for i in range(1024))) - # will raise SyntaxError if trying to create namedtuple - tup3 = next(df3.itertuples()) - self.assertFalse(hasattr(tup3, '_fields')) - self.assertIsInstance(tup3, tuple) - - def test_len(self): - self.assertEqual(len(self.frame), len(self.frame.index)) - - def test_operators(self): - garbage = random.random(4) - colSeries = Series(garbage, index=np.array(self.frame.columns)) - - idSum = self.frame + self.frame - seriesSum = self.frame + colSeries - - for col, series in compat.iteritems(idSum): - for idx, val in compat.iteritems(series): - origVal = self.frame[col][idx] * 2 - if not np.isnan(val): - self.assertEqual(val, origVal) - else: - self.assertTrue(np.isnan(origVal)) - - for col, series in compat.iteritems(seriesSum): - for idx, val in compat.iteritems(series): - origVal = self.frame[col][idx] + colSeries[col] - if not np.isnan(val): - 
self.assertEqual(val, origVal) - else: - self.assertTrue(np.isnan(origVal)) - - added = self.frame2 + self.frame2 - expected = self.frame2 * 2 - assert_frame_equal(added, expected) - - df = DataFrame({'a': ['a', None, 'b']}) - assert_frame_equal(df + df, DataFrame({'a': ['aa', np.nan, 'bb']})) - - # Test for issue #10181 - for dtype in ('float', 'int64'): - frames = [ - DataFrame(dtype=dtype), - DataFrame(columns=['A'], dtype=dtype), - DataFrame(index=[0], dtype=dtype), - ] - for df in frames: - self.assertTrue((df + df).equals(df)) - assert_frame_equal(df + df, df) - - def test_ops_np_scalar(self): - vals, xs = np.random.rand(5, 3), [nan, 7, -23, 2.718, -3.14, np.inf] - f = lambda x: DataFrame(x, index=list('ABCDE'), - columns=['jim', 'joe', 'jolie']) - - df = f(vals) - - for x in xs: - assert_frame_equal(df / np.array(x), f(vals / x)) - assert_frame_equal(np.array(x) * df, f(vals * x)) - assert_frame_equal(df + np.array(x), f(vals + x)) - assert_frame_equal(np.array(x) - df, f(x - vals)) - - def test_operators_boolean(self): - - # GH 5808 - # empty frames, non-mixed dtype - - result = DataFrame(index=[1]) & DataFrame(index=[1]) - assert_frame_equal(result,DataFrame(index=[1])) - - result = DataFrame(index=[1]) | DataFrame(index=[1]) - assert_frame_equal(result,DataFrame(index=[1])) - - result = DataFrame(index=[1]) & DataFrame(index=[1,2]) - assert_frame_equal(result,DataFrame(index=[1,2])) - - result = DataFrame(index=[1],columns=['A']) & DataFrame(index=[1],columns=['A']) - assert_frame_equal(result,DataFrame(index=[1],columns=['A'])) - - result = DataFrame(True,index=[1],columns=['A']) & DataFrame(True,index=[1],columns=['A']) - assert_frame_equal(result,DataFrame(True,index=[1],columns=['A'])) - - result = DataFrame(True,index=[1],columns=['A']) | DataFrame(True,index=[1],columns=['A']) - assert_frame_equal(result,DataFrame(True,index=[1],columns=['A'])) - - # boolean ops - result = DataFrame(1,index=[1],columns=['A']) | 
DataFrame(True,index=[1],columns=['A']) - assert_frame_equal(result,DataFrame(1,index=[1],columns=['A'])) - - def f(): - DataFrame(1.0,index=[1],columns=['A']) | DataFrame(True,index=[1],columns=['A']) - self.assertRaises(TypeError, f) - - def f(): - DataFrame('foo',index=[1],columns=['A']) | DataFrame(True,index=[1],columns=['A']) - self.assertRaises(TypeError, f) - - def test_operators_none_as_na(self): - df = DataFrame({"col1": [2, 5.0, 123, None], - "col2": [1, 2, 3, 4]}, dtype=object) - - ops = [operator.add, operator.sub, operator.mul, operator.truediv] - - # since filling converts dtypes from object, changed expected to be object - for op in ops: - filled = df.fillna(np.nan) - result = op(df, 3) - expected = op(filled, 3).astype(object) - expected[com.isnull(expected)] = None - assert_frame_equal(result, expected) - - result = op(df, df) - expected = op(filled, filled).astype(object) - expected[com.isnull(expected)] = None - assert_frame_equal(result, expected) - - result = op(df, df.fillna(7)) - assert_frame_equal(result, expected) - - result = op(df.fillna(7), df) - assert_frame_equal(result, expected, check_dtype=False) - - def test_comparison_invalid(self): - - def check(df,df2): - - for (x, y) in [(df,df2),(df2,df)]: - self.assertRaises(TypeError, lambda : x == y) - self.assertRaises(TypeError, lambda : x != y) - self.assertRaises(TypeError, lambda : x >= y) - self.assertRaises(TypeError, lambda : x > y) - self.assertRaises(TypeError, lambda : x < y) - self.assertRaises(TypeError, lambda : x <= y) - - # GH4968 - # invalid date/int comparisons - df = DataFrame(np.random.randint(10, size=(10, 1)), columns=['a']) - df['dates'] = date_range('20010101', periods=len(df)) - - df2 = df.copy() - df2['dates'] = df['a'] - check(df,df2) - - df = DataFrame(np.random.randint(10, size=(10, 2)), columns=['a', 'b']) - df2 = DataFrame({'a': date_range('20010101', periods=len(df)), 'b': date_range('20100101', periods=len(df))}) - check(df,df2) - - def 
test_timestamp_compare(self): - # make sure we can compare Timestamps on the right AND left hand side - # GH4982 - df = DataFrame({'dates1': date_range('20010101', periods=10), - 'dates2': date_range('20010102', periods=10), - 'intcol': np.random.randint(1000000000, size=10), - 'floatcol': np.random.randn(10), - 'stringcol': list(tm.rands(10))}) - df.loc[np.random.rand(len(df)) > 0.5, 'dates2'] = pd.NaT - ops = {'gt': 'lt', 'lt': 'gt', 'ge': 'le', 'le': 'ge', 'eq': 'eq', - 'ne': 'ne'} - for left, right in ops.items(): - left_f = getattr(operator, left) - right_f = getattr(operator, right) - - # no nats - expected = left_f(df, Timestamp('20010109')) - result = right_f(Timestamp('20010109'), df) - assert_frame_equal(result, expected) - - # nats - expected = left_f(df, Timestamp('nat')) - result = right_f(Timestamp('nat'), df) - assert_frame_equal(result, expected) - - def test_modulo(self): - - # GH3590, modulo as ints - p = DataFrame({ 'first' : [3,4,5,8], 'second' : [0,0,0,3] }) - - ### this is technically wrong as the integer portion is coerced to float ### - expected = DataFrame({ 'first' : Series([0,0,0,0],dtype='float64'), 'second' : Series([np.nan,np.nan,np.nan,0]) }) - result = p % p - assert_frame_equal(result,expected) - - # numpy has a slightly different (wrong) treatement - result2 = DataFrame(p.values % p.values,index=p.index,columns=p.columns,dtype='float64') - result2.iloc[0:3,1] = np.nan - assert_frame_equal(result2,expected) - - result = p % 0 - expected = DataFrame(np.nan,index=p.index,columns=p.columns) - assert_frame_equal(result,expected) - - # numpy has a slightly different (wrong) treatement - result2 = DataFrame(p.values.astype('float64') % 0,index=p.index,columns=p.columns) - assert_frame_equal(result2,expected) - - # not commutative with series - p = DataFrame(np.random.randn(10, 5)) - s = p[0] - res = s % p - res2 = p % s - self.assertFalse(np.array_equal(res.fillna(0), res2.fillna(0))) - - def test_div(self): - - # integer div, but deal 
with the 0's (GH 9144) - p = DataFrame({ 'first' : [3,4,5,8], 'second' : [0,0,0,3] }) - result = p / p - - expected = DataFrame({'first': Series([1.0, 1.0, 1.0, 1.0]), - 'second': Series([nan, nan, nan, 1])}) - assert_frame_equal(result,expected) - - result2 = DataFrame(p.values.astype('float') / p.values, index=p.index, - columns=p.columns) - assert_frame_equal(result2,expected) - - result = p / 0 - expected = DataFrame(inf, index=p.index, columns=p.columns) - expected.iloc[0:3, 1] = nan - assert_frame_equal(result,expected) - - # numpy has a slightly different (wrong) treatement - result2 = DataFrame(p.values.astype('float64') / 0, index=p.index, - columns=p.columns) - assert_frame_equal(result2,expected) - - p = DataFrame(np.random.randn(10, 5)) - s = p[0] - res = s / p - res2 = p / s - self.assertFalse(np.array_equal(res.fillna(0), res2.fillna(0))) - - def test_logical_operators(self): - - def _check_bin_op(op): - result = op(df1, df2) - expected = DataFrame(op(df1.values, df2.values), index=df1.index, - columns=df1.columns) - self.assertEqual(result.values.dtype, np.bool_) - assert_frame_equal(result, expected) - - def _check_unary_op(op): - result = op(df1) - expected = DataFrame(op(df1.values), index=df1.index, - columns=df1.columns) - self.assertEqual(result.values.dtype, np.bool_) - assert_frame_equal(result, expected) - - df1 = {'a': {'a': True, 'b': False, 'c': False, 'd': True, 'e': True}, - 'b': {'a': False, 'b': True, 'c': False, - 'd': False, 'e': False}, - 'c': {'a': False, 'b': False, 'c': True, - 'd': False, 'e': False}, - 'd': {'a': True, 'b': False, 'c': False, 'd': True, 'e': True}, - 'e': {'a': True, 'b': False, 'c': False, 'd': True, 'e': True}} - - df2 = {'a': {'a': True, 'b': False, 'c': True, 'd': False, 'e': False}, - 'b': {'a': False, 'b': True, 'c': False, - 'd': False, 'e': False}, - 'c': {'a': True, 'b': False, 'c': True, 'd': False, 'e': False}, - 'd': {'a': False, 'b': False, 'c': False, - 'd': True, 'e': False}, - 'e': {'a': False, 
-                      'b': False, 'c': False,
-                      'd': False, 'e': True}}
-
-        df1 = DataFrame(df1)
-        df2 = DataFrame(df2)
-
-        _check_bin_op(operator.and_)
-        _check_bin_op(operator.or_)
-        _check_bin_op(operator.xor)
-
-        # operator.neg is deprecated in numpy >= 1.9
-        _check_unary_op(operator.inv)
-
-    def test_logical_typeerror(self):
-        if not compat.PY3:
-            self.assertRaises(TypeError, self.frame.__eq__, 'foo')
-            self.assertRaises(TypeError, self.frame.__lt__, 'foo')
-            self.assertRaises(TypeError, self.frame.__gt__, 'foo')
-            self.assertRaises(TypeError, self.frame.__ne__, 'foo')
-        else:
-            raise nose.SkipTest('test_logical_typeerror not tested on PY3')
-
-    def test_constructor_lists_to_object_dtype(self):
-        # from #1074
-        d = DataFrame({'a': [np.nan, False]})
-        self.assertEqual(d['a'].dtype, np.object_)
-        self.assertFalse(d['a'][1])
-
-    def test_constructor_with_nas(self):
-        # GH 5016
-        # na's in indicies
-
-        def check(df):
-            for i in range(len(df.columns)):
-                df.iloc[:,i]
-
-            # allow single nans to succeed
-            indexer = np.arange(len(df.columns))[isnull(df.columns)]
-
-            if len(indexer) == 1:
-                assert_series_equal(df.iloc[:,indexer[0]],df.loc[:,np.nan])
-
-            # multiple nans should fail
-            else:
-
-                def f():
-                    df.loc[:,np.nan]
-                self.assertRaises(TypeError, f)
-
-        df = DataFrame([[1,2,3],[4,5,6]], index=[1,np.nan])
-        check(df)
-
-        df = DataFrame([[1,2,3],[4,5,6]], columns=[1.1,2.2,np.nan])
-        check(df)
-
-        df = DataFrame([[0,1,2,3],[4,5,6,7]], columns=[np.nan,1.1,2.2,np.nan])
-        check(df)
-
-        df = DataFrame([[0.0,1,2,3.0],[4,5,6,7]], columns=[np.nan,1.1,2.2,np.nan])
-        check(df)
-
-    def test_logical_with_nas(self):
-        d = DataFrame({'a': [np.nan, False], 'b': [True, True]})
-
-        # GH4947
-        # bool comparisons should return bool
-        result = d['a'] | d['b']
-        expected = Series([False, True])
-        assert_series_equal(result, expected)
-
-        # GH4604, automatic casting here
-        result = d['a'].fillna(False) | d['b']
-        expected = Series([True, True])
-        assert_series_equal(result, expected)
-
-        result = d['a'].fillna(False,downcast=False) | d['b']
-        expected = Series([True, True])
-        assert_series_equal(result, expected)
-
-    def test_neg(self):
-        # what to do?
-        assert_frame_equal(-self.frame, -1 * self.frame)
-
-    def test_invert(self):
-        assert_frame_equal(-(self.frame < 0), ~(self.frame < 0))
-
-    def test_first_last_valid(self):
-        N = len(self.frame.index)
-        mat = randn(N)
-        mat[:5] = nan
-        mat[-5:] = nan
-
-        frame = DataFrame({'foo': mat}, index=self.frame.index)
-        index = frame.first_valid_index()
-
-        self.assertEqual(index, frame.index[5])
-
-        index = frame.last_valid_index()
-        self.assertEqual(index, frame.index[-6])
-
-    def test_arith_flex_frame(self):
-        ops = ['add', 'sub', 'mul', 'div', 'truediv', 'pow', 'floordiv', 'mod']
-        if not compat.PY3:
-            aliases = {}
-        else:
-            aliases = {'div': 'truediv'}
-
-        for op in ops:
-            try:
-                alias = aliases.get(op, op)
-                f = getattr(operator, alias)
-                result = getattr(self.frame, op)(2 * self.frame)
-                exp = f(self.frame, 2 * self.frame)
-                assert_frame_equal(result, exp)
-
-                # vs mix float
-                result = getattr(self.mixed_float, op)(2 * self.mixed_float)
-                exp = f(self.mixed_float, 2 * self.mixed_float)
-                assert_frame_equal(result, exp)
-                _check_mixed_float(result, dtype = dict(C = None))
-
-                # vs mix int
-                if op in ['add','sub','mul']:
-                    result = getattr(self.mixed_int, op)(2 + self.mixed_int)
-                    exp = f(self.mixed_int, 2 + self.mixed_int)
-
-                    # overflow in the uint
-                    dtype = None
-                    if op in ['sub']:
-                        dtype = dict(B = 'object', C = None)
-                    elif op in ['add','mul']:
-                        dtype = dict(C = None)
-                    assert_frame_equal(result, exp)
-                    _check_mixed_int(result, dtype = dtype)
-
-                # rops
-                r_f = lambda x, y: f(y, x)
-                result = getattr(self.frame, 'r' + op)(2 * self.frame)
-                exp = r_f(self.frame, 2 * self.frame)
-                assert_frame_equal(result, exp)
-
-                # vs mix float
-                result = getattr(self.mixed_float, op)(2 * self.mixed_float)
-                exp = f(self.mixed_float, 2 * self.mixed_float)
-                assert_frame_equal(result, exp)
-                _check_mixed_float(result, dtype = dict(C = None))
-
-                result = getattr(self.intframe, op)(2 * self.intframe)
-                exp = f(self.intframe, 2 * self.intframe)
-                assert_frame_equal(result, exp)
-
-                # vs mix int
-                if op in ['add','sub','mul']:
-                    result = getattr(self.mixed_int, op)(2 + self.mixed_int)
-                    exp = f(self.mixed_int, 2 + self.mixed_int)
-
-                    # overflow in the uint
-                    dtype = None
-                    if op in ['sub']:
-                        dtype = dict(B = 'object', C = None)
-                    elif op in ['add','mul']:
-                        dtype = dict(C = None)
-                    assert_frame_equal(result, exp)
-                    _check_mixed_int(result, dtype = dtype)
-            except:
-                com.pprint_thing("Failing operation %r" % op)
-                raise
-
-            # ndim >= 3
-            ndim_5 = np.ones(self.frame.shape + (3, 4, 5))
-            with assertRaisesRegexp(ValueError, 'shape'):
-                f(self.frame, ndim_5)
-
-            with assertRaisesRegexp(ValueError, 'shape'):
-                getattr(self.frame, op)(ndim_5)
-
-        # res_add = self.frame.add(self.frame)
-        # res_sub = self.frame.sub(self.frame)
-        # res_mul = self.frame.mul(self.frame)
-        # res_div = self.frame.div(2 * self.frame)
-
-        # assert_frame_equal(res_add, self.frame + self.frame)
-        # assert_frame_equal(res_sub, self.frame - self.frame)
-        # assert_frame_equal(res_mul, self.frame * self.frame)
-        # assert_frame_equal(res_div, self.frame / (2 * self.frame))
-
-        const_add = self.frame.add(1)
-        assert_frame_equal(const_add, self.frame + 1)
-
-        # corner cases
-        result = self.frame.add(self.frame[:0])
-        assert_frame_equal(result, self.frame * np.nan)
-
-        result = self.frame[:0].add(self.frame)
-        assert_frame_equal(result, self.frame * np.nan)
-        with assertRaisesRegexp(NotImplementedError, 'fill_value'):
-            self.frame.add(self.frame.iloc[0], fill_value=3)
-        with assertRaisesRegexp(NotImplementedError, 'fill_value'):
-            self.frame.add(self.frame.iloc[0], axis='index', fill_value=3)
-
-    def test_binary_ops_align(self):
-
-        # test aligning binary ops
-
-        # GH 6681
-        index=MultiIndex.from_product([list('abc'),
-                                       ['one','two','three'],
-                                       [1,2,3]],
-                                      names=['first','second','third'])
-
-        df = DataFrame(np.arange(27*3).reshape(27,3),
-                       index=index,
-                       columns=['value1','value2','value3']).sortlevel()
-
-        idx = pd.IndexSlice
-        for op in ['add','sub','mul','div','truediv']:
-            opa = getattr(operator,op,None)
-            if opa is None:
-                continue
-
-            x = Series([ 1.0, 10.0, 100.0], [1,2,3])
-            result = getattr(df,op)(x,level='third',axis=0)
-
-            expected = pd.concat([ opa(df.loc[idx[:,:,i],:],v) for i, v in x.iteritems() ]).sortlevel()
-            assert_frame_equal(result, expected)
-
-            x = Series([ 1.0, 10.0], ['two','three'])
-            result = getattr(df,op)(x,level='second',axis=0)
-
-            expected = pd.concat([ opa(df.loc[idx[:,i],:],v) for i, v in x.iteritems() ]).reindex_like(df).sortlevel()
-            assert_frame_equal(result, expected)
-
-        ## GH9463 (alignment level of dataframe with series)
-
-        midx = MultiIndex.from_product([['A', 'B'],['a', 'b']])
-        df = DataFrame(np.ones((2,4), dtype='int64'), columns=midx)
-        s = pd.Series({'a':1, 'b':2})
-
-        df2 = df.copy()
-        df2.columns.names = ['lvl0', 'lvl1']
-        s2 = s.copy()
-        s2.index.name = 'lvl1'
-
-        # different cases of integer/string level names:
-        res1 = df.mul(s, axis=1, level=1)
-        res2 = df.mul(s2, axis=1, level=1)
-        res3 = df2.mul(s, axis=1, level=1)
-        res4 = df2.mul(s2, axis=1, level=1)
-        res5 = df2.mul(s, axis=1, level='lvl1')
-        res6 = df2.mul(s2, axis=1, level='lvl1')
-
-        exp = DataFrame(np.array([[1, 2, 1, 2], [1, 2, 1, 2]], dtype='int64'),
-                        columns=midx)
-
-        for res in [res1, res2]:
-            assert_frame_equal(res, exp)
-
-        exp.columns.names = ['lvl0', 'lvl1']
-        for res in [res3, res4, res5, res6]:
-            assert_frame_equal(res, exp)
-
-    def test_arith_mixed(self):
-
-        left = DataFrame({'A': ['a', 'b', 'c'],
-                          'B': [1, 2, 3]})
-
-        result = left + left
-        expected = DataFrame({'A': ['aa', 'bb', 'cc'],
-                              'B': [2, 4, 6]})
-        assert_frame_equal(result, expected)
-
-    def test_arith_getitem_commute(self):
-        df = DataFrame({'A': [1.1, 3.3], 'B': [2.5, -3.9]})
-
-        self._test_op(df, operator.add)
-        self._test_op(df, operator.sub)
-        self._test_op(df, operator.mul)
-        self._test_op(df, operator.truediv)
-        self._test_op(df, operator.floordiv)
-        self._test_op(df, operator.pow)
-
-        self._test_op(df, lambda x, y: y + x)
-        self._test_op(df, lambda x, y: y - x)
-        self._test_op(df, lambda x, y: y * x)
-        self._test_op(df, lambda x, y: y / x)
-        self._test_op(df, lambda x, y: y ** x)
-
-        self._test_op(df, lambda x, y: x + y)
-        self._test_op(df, lambda x, y: x - y)
-        self._test_op(df, lambda x, y: x * y)
-        self._test_op(df, lambda x, y: x / y)
-        self._test_op(df, lambda x, y: x ** y)
-
-    @staticmethod
-    def _test_op(df, op):
-        result = op(df, 1)
-
-        if not df.columns.is_unique:
-            raise ValueError("Only unique columns supported by this test")
-
-        for col in result.columns:
-            assert_series_equal(result[col], op(df[col], 1))
-
-    def test_bool_flex_frame(self):
-        data = np.random.randn(5, 3)
-        other_data = np.random.randn(5, 3)
-        df = DataFrame(data)
-        other = DataFrame(other_data)
-        ndim_5 = np.ones(df.shape + (1, 3))
-
-        # Unaligned
-        def _check_unaligned_frame(meth, op, df, other):
-            part_o = other.ix[3:, 1:].copy()
-            rs = meth(part_o)
-            xp = op(df, part_o.reindex(index=df.index, columns=df.columns))
-            assert_frame_equal(rs, xp)
-
-        # DataFrame
-        self.assertTrue(df.eq(df).values.all())
-        self.assertFalse(df.ne(df).values.any())
-        for op in ['eq', 'ne', 'gt', 'lt', 'ge', 'le']:
-            f = getattr(df, op)
-            o = getattr(operator, op)
-            # No NAs
-            assert_frame_equal(f(other), o(df, other))
-            _check_unaligned_frame(f, o, df, other)
-            # ndarray
-            assert_frame_equal(f(other.values), o(df, other.values))
-            # scalar
-            assert_frame_equal(f(0), o(df, 0))
-            # NAs
-            assert_frame_equal(f(np.nan), o(df, np.nan))
-            with assertRaisesRegexp(ValueError, 'shape'):
-                f(ndim_5)
-
-        # Series
-        def _test_seq(df, idx_ser, col_ser):
-            idx_eq = df.eq(idx_ser, axis=0)
-            col_eq = df.eq(col_ser)
-            idx_ne = df.ne(idx_ser, axis=0)
-            col_ne = df.ne(col_ser)
-            assert_frame_equal(col_eq, df == Series(col_ser))
-            assert_frame_equal(col_eq, -col_ne)
-            assert_frame_equal(idx_eq, -idx_ne)
-            assert_frame_equal(idx_eq, df.T.eq(idx_ser).T)
-            assert_frame_equal(col_eq, df.eq(list(col_ser)))
-            assert_frame_equal(idx_eq, df.eq(Series(idx_ser), axis=0))
-            assert_frame_equal(idx_eq, df.eq(list(idx_ser), axis=0))
-
-            idx_gt = df.gt(idx_ser, axis=0)
-            col_gt = df.gt(col_ser)
-            idx_le = df.le(idx_ser, axis=0)
-            col_le = df.le(col_ser)
-
-            assert_frame_equal(col_gt, df > Series(col_ser))
-            assert_frame_equal(col_gt, -col_le)
-            assert_frame_equal(idx_gt, -idx_le)
-            assert_frame_equal(idx_gt, df.T.gt(idx_ser).T)
-
-            idx_ge = df.ge(idx_ser, axis=0)
-            col_ge = df.ge(col_ser)
-            idx_lt = df.lt(idx_ser, axis=0)
-            col_lt = df.lt(col_ser)
-            assert_frame_equal(col_ge, df >= Series(col_ser))
-            assert_frame_equal(col_ge, -col_lt)
-            assert_frame_equal(idx_ge, -idx_lt)
-            assert_frame_equal(idx_ge, df.T.ge(idx_ser).T)
-
-        idx_ser = Series(np.random.randn(5))
-        col_ser = Series(np.random.randn(3))
-        _test_seq(df, idx_ser, col_ser)
-
-        # list/tuple
-        _test_seq(df, idx_ser.values, col_ser.values)
-
-        # NA
-        df.ix[0, 0] = np.nan
-        rs = df.eq(df)
-        self.assertFalse(rs.ix[0, 0])
-        rs = df.ne(df)
-        self.assertTrue(rs.ix[0, 0])
-        rs = df.gt(df)
-        self.assertFalse(rs.ix[0, 0])
-        rs = df.lt(df)
-        self.assertFalse(rs.ix[0, 0])
-        rs = df.ge(df)
-        self.assertFalse(rs.ix[0, 0])
-        rs = df.le(df)
-        self.assertFalse(rs.ix[0, 0])
-
-        # complex
-        arr = np.array([np.nan, 1, 6, np.nan])
-        arr2 = np.array([2j, np.nan, 7, None])
-        df = DataFrame({'a': arr})
-        df2 = DataFrame({'a': arr2})
-        rs = df.gt(df2)
-        self.assertFalse(rs.values.any())
-        rs = df.ne(df2)
-        self.assertTrue(rs.values.all())
-
-        arr3 = np.array([2j, np.nan, None])
-        df3 = DataFrame({'a': arr3})
-        rs = df3.gt(2j)
-        self.assertFalse(rs.values.any())
-
-        # corner, dtype=object
-        df1 = DataFrame({'col': ['foo', np.nan, 'bar']})
-        df2 = DataFrame({'col': ['foo', datetime.now(), 'bar']})
-        result = df1.ne(df2)
-        exp = DataFrame({'col': [False, True, False]})
-        assert_frame_equal(result, exp)
-
-    def test_arith_flex_series(self):
-        df = self.simple
-
-        row = df.xs('a')
-        col = df['two']
-        # after arithmetic refactor, add truediv here
-        ops = ['add', 'sub', 'mul', 'mod']
-        for op in ops:
-            f = getattr(df, op)
-            op = getattr(operator, op)
-            assert_frame_equal(f(row), op(df, row))
-            assert_frame_equal(f(col, axis=0), op(df.T, col).T)
-
-        # special case for some reason
-        assert_frame_equal(df.add(row, axis=None), df + row)
-
-        # cases which will be refactored after big arithmetic refactor
-        assert_frame_equal(df.div(row), df / row)
-        assert_frame_equal(df.div(col, axis=0), (df.T / col).T)
-
-        # broadcasting issue in GH7325
-        df = DataFrame(np.arange(3*2).reshape((3,2)),dtype='int64')
-        expected = DataFrame([[nan, inf], [1.0, 1.5], [1.0, 1.25]])
-        result = df.div(df[0],axis='index')
-        assert_frame_equal(result,expected)
-
-        df = DataFrame(np.arange(3*2).reshape((3,2)),dtype='float64')
-        expected = DataFrame([[np.nan,np.inf],[1.0,1.5],[1.0,1.25]])
-        result = df.div(df[0],axis='index')
-        assert_frame_equal(result,expected)
-
-    def test_arith_non_pandas_object(self):
-        df = self.simple
-
-        val1 = df.xs('a').values
-        added = DataFrame(df.values + val1, index=df.index, columns=df.columns)
-        assert_frame_equal(df + val1, added)
-
-        added = DataFrame((df.values.T + val1).T,
-                          index=df.index, columns=df.columns)
-        assert_frame_equal(df.add(val1, axis=0), added)
-
-        val2 = list(df['two'])
-
-        added = DataFrame(df.values + val2, index=df.index, columns=df.columns)
-        assert_frame_equal(df + val2, added)
-
-        added = DataFrame((df.values.T + val2).T, index=df.index,
-                          columns=df.columns)
-        assert_frame_equal(df.add(val2, axis='index'), added)
-
-        val3 = np.random.rand(*df.shape)
-        added = DataFrame(df.values + val3, index=df.index, columns=df.columns)
-        assert_frame_equal(df.add(val3), added)
-
-    def test_combineFrame(self):
-        frame_copy = self.frame.reindex(self.frame.index[::2])
-
-        del frame_copy['D']
-        frame_copy['C'][:5] = nan
-
-        added = self.frame + frame_copy
-        tm.assert_dict_equal(added['A'].valid(),
-                             self.frame['A'] * 2,
-                             compare_keys=False)
-
-        self.assertTrue(np.isnan(added['C'].reindex(frame_copy.index)[:5]).all())
-
-        # assert(False)
-
-        self.assertTrue(np.isnan(added['D']).all())
-
-        self_added = self.frame + self.frame
-        self.assertTrue(self_added.index.equals(self.frame.index))
-
-        added_rev = frame_copy + self.frame
-        self.assertTrue(np.isnan(added['D']).all())
-
-        # corner cases
-
-        # empty
-        plus_empty = self.frame + self.empty
-        self.assertTrue(np.isnan(plus_empty.values).all())
-
-        empty_plus = self.empty + self.frame
-        self.assertTrue(np.isnan(empty_plus.values).all())
-
-        empty_empty = self.empty + self.empty
-        self.assertTrue(empty_empty.empty)
-
-        # out of order
-        reverse = self.frame.reindex(columns=self.frame.columns[::-1])
-
-        assert_frame_equal(reverse + self.frame, self.frame * 2)
-
-        # mix vs float64, upcast
-        added = self.frame + self.mixed_float
-        _check_mixed_float(added, dtype = 'float64')
-        added = self.mixed_float + self.frame
-        _check_mixed_float(added, dtype = 'float64')
-
-        # mix vs mix
-        added = self.mixed_float + self.mixed_float2
-        _check_mixed_float(added, dtype = dict(C = None))
-        added = self.mixed_float2 + self.mixed_float
-        _check_mixed_float(added, dtype = dict(C = None))
-
-        # with int
-        added = self.frame + self.mixed_int
-        _check_mixed_float(added, dtype = 'float64')
-
-    def test_combineSeries(self):
-
-        # Series
-        series = self.frame.xs(self.frame.index[0])
-
-        added = self.frame + series
-
-        for key, s in compat.iteritems(added):
-            assert_series_equal(s, self.frame[key] + series[key])
-
-        larger_series = series.to_dict()
-        larger_series['E'] = 1
-        larger_series = Series(larger_series)
-        larger_added = self.frame + larger_series
-
-        for key, s in compat.iteritems(self.frame):
-            assert_series_equal(larger_added[key], s + series[key])
-        self.assertIn('E', larger_added)
-        self.assertTrue(np.isnan(larger_added['E']).all())
-
-        # vs mix (upcast) as needed
-        added = self.mixed_float + series
-        _check_mixed_float(added, dtype = 'float64')
-        added = self.mixed_float + series.astype('float32')
-        _check_mixed_float(added, dtype = dict(C = None))
-        added = self.mixed_float + series.astype('float16')
-        _check_mixed_float(added, dtype = dict(C = None))
-
-        #### these raise with numexpr.....as we are adding an int64 to an uint64....weird
-        # vs int
-        #added = self.mixed_int + (100*series).astype('int64')
-        #_check_mixed_int(added, dtype = dict(A = 'int64', B = 'float64', C = 'int64', D = 'int64'))
-        #added = self.mixed_int + (100*series).astype('int32')
-        #_check_mixed_int(added, dtype = dict(A = 'int32', B = 'float64', C = 'int32', D = 'int64'))
-
-        # TimeSeries
-        ts = self.tsframe['A']
-
-        # 10890
-        # we no longer allow auto timeseries broadcasting
-        # and require explict broadcasting
-        added = self.tsframe.add(ts, axis='index')
-
-        for key, col in compat.iteritems(self.tsframe):
-            result = col + ts
-            assert_series_equal(added[key], result, check_names=False)
-            self.assertEqual(added[key].name, key)
-            if col.name == ts.name:
-                self.assertEqual(result.name, 'A')
-            else:
-                self.assertTrue(result.name is None)
-
-        smaller_frame = self.tsframe[:-5]
-        smaller_added = smaller_frame.add(ts, axis='index')
-
-        self.assertTrue(smaller_added.index.equals(self.tsframe.index))
-
-        smaller_ts = ts[:-5]
-        smaller_added2 = self.tsframe.add(smaller_ts, axis='index')
-        assert_frame_equal(smaller_added, smaller_added2)
-
-        # length 0, result is all-nan
-        result = self.tsframe.add(ts[:0], axis='index')
-        expected = DataFrame(np.nan,index=self.tsframe.index,columns=self.tsframe.columns)
-        assert_frame_equal(result, expected)
-
-        # Frame is all-nan
-        result = self.tsframe[:0].add(ts, axis='index')
-        expected = DataFrame(np.nan,index=self.tsframe.index,columns=self.tsframe.columns)
-        assert_frame_equal(result, expected)
-
-        # empty but with non-empty index
-        frame = self.tsframe[:1].reindex(columns=[])
-        result = frame.mul(ts,axis='index')
-        self.assertEqual(len(result), len(ts))
-
-    def test_combineFunc(self):
-        result = self.frame * 2
-        self.assert_numpy_array_equal(result.values, self.frame.values * 2)
-
-        # vs mix
-        result = self.mixed_float * 2
-        for c, s in compat.iteritems(result):
-            self.assert_numpy_array_equal(s.values, self.mixed_float[c].values * 2)
-        _check_mixed_float(result, dtype = dict(C = None))
-
-        result = self.empty * 2
-        self.assertIs(result.index, self.empty.index)
-        self.assertEqual(len(result.columns), 0)
-
-    def test_comparisons(self):
-        df1 = tm.makeTimeDataFrame()
-        df2 = tm.makeTimeDataFrame()
-
-        row = self.simple.xs('a')
-        ndim_5 = np.ones(df1.shape + (1, 1, 1))
-
-        def test_comp(func):
-            result = func(df1, df2)
-            self.assert_numpy_array_equal(result.values,
-                                          func(df1.values, df2.values))
-            with assertRaisesRegexp(ValueError, 'Wrong number of dimensions'):
-                func(df1, ndim_5)
-
-            result2 = func(self.simple, row)
-            self.assert_numpy_array_equal(result2.values,
-                                          func(self.simple.values, row.values))
-
-            result3 = func(self.frame, 0)
-            self.assert_numpy_array_equal(result3.values,
-                                          func(self.frame.values, 0))
-
-            with assertRaisesRegexp(ValueError, 'Can only compare '
-                                    'identically-labeled DataFrame'):
-                func(self.simple, self.simple[:2])
-
-        test_comp(operator.eq)
-        test_comp(operator.ne)
-        test_comp(operator.lt)
-        test_comp(operator.gt)
-        test_comp(operator.ge)
-        test_comp(operator.le)
-
-    def test_string_comparison(self):
-        df = DataFrame([{"a": 1, "b": "foo"}, {"a": 2, "b": "bar"}])
-        mask_a = df.a > 1
-        assert_frame_equal(df[mask_a], df.ix[1:1, :])
-        assert_frame_equal(df[-mask_a], df.ix[0:0, :])
-
-        mask_b = df.b == "foo"
-        assert_frame_equal(df[mask_b], df.ix[0:0, :])
-        assert_frame_equal(df[-mask_b], df.ix[1:1, :])
-
-    def test_float_none_comparison(self):
-        df = DataFrame(np.random.randn(8, 3), index=lrange(8),
-                       columns=['A', 'B', 'C'])
-
-        self.assertRaises(TypeError, df.__eq__, None)
-
-    def test_boolean_comparison(self):
-
-        # GH 4576
-        # boolean comparisons with a tuple/list give unexpected results
-        df = DataFrame(np.arange(6).reshape((3,2)))
-        b = np.array([2, 2])
-        b_r = np.atleast_2d([2,2])
-        b_c = b_r.T
-        l = (2,2,2)
-        tup = tuple(l)
-
-        # gt
-        expected = DataFrame([[False,False],[False,True],[True,True]])
-        result = df>b
-        assert_frame_equal(result,expected)
-
-        result = df.values>b
-        assert_numpy_array_equal(result,expected.values)
-
-        result = df>l
-        assert_frame_equal(result,expected)
-
-        result = df>tup
-        assert_frame_equal(result,expected)
-
-        result = df>b_r
-        assert_frame_equal(result,expected)
-
-        result = df.values>b_r
-        assert_numpy_array_equal(result,expected.values)
-
-        self.assertRaises(ValueError, df.__gt__, b_c)
-        self.assertRaises(ValueError, df.values.__gt__, b_c)
-
-        # ==
-        expected = DataFrame([[False,False],[True,False],[False,False]])
-        result = df == b
-        assert_frame_equal(result,expected)
-
-        result = df==l
-        assert_frame_equal(result,expected)
-
-        result = df==tup
-        assert_frame_equal(result,expected)
-
-        result = df == b_r
-        assert_frame_equal(result,expected)
-
-        result = df.values == b_r
-        assert_numpy_array_equal(result,expected.values)
-
-        self.assertRaises(ValueError, lambda : df == b_c)
-        self.assertFalse((df.values == b_c))
-
-        # with alignment
-        df = DataFrame(np.arange(6).reshape((3,2)),columns=list('AB'),index=list('abc'))
-        expected.index=df.index
-        expected.columns=df.columns
-
-        result = df==l
-        assert_frame_equal(result,expected)
-
-        result = df==tup
-        assert_frame_equal(result,expected)
-
-        # not shape compatible
-        self.assertRaises(ValueError, lambda : df == (2,2))
-        self.assertRaises(ValueError, lambda : df == [2,2])
-
-    def test_equals_different_blocks(self):
-        # GH 9330
-        df0 = pd.DataFrame({"A": ["x","y"], "B": [1,2],
-                            "C": ["w","z"]})
-        df1 = df0.reset_index()[["A","B","C"]]
-        # this assert verifies that the above operations have
-        # induced a block rearrangement
-        self.assertTrue(df0._data.blocks[0].dtype !=
-                        df1._data.blocks[0].dtype)
-        # do the real tests
-        assert_frame_equal(df0, df1)
-        self.assertTrue(df0.equals(df1))
-        self.assertTrue(df1.equals(df0))
-
-    def test_copy_blocks(self):
-        # API/ENH 9607
-        df = DataFrame(self.frame, copy=True)
-        column = df.columns[0]
-
-        # use the default copy=True, change a column
-        blocks = df.as_blocks()
-        for dtype, _df in blocks.items():
-            if column in _df:
-                _df.ix[:, column] = _df[column] + 1
-
-        # make sure we did not change the original DataFrame
-        self.assertFalse(_df[column].equals(df[column]))
-
-    def test_no_copy_blocks(self):
-        # API/ENH 9607
-        df = DataFrame(self.frame, copy=True)
-        column = df.columns[0]
-
-        # use the copy=False, change a column
-        blocks = df.as_blocks(copy=False)
-        for dtype, _df in blocks.items():
-            if column in _df:
-                _df.ix[:, column] = _df[column] + 1
-
-        # make sure we did change the original DataFrame
-        self.assertTrue(_df[column].equals(df[column]))
-
-    def test_to_csv_from_csv(self):
-
-        pname = '__tmp_to_csv_from_csv__'
-        with ensure_clean(pname) as path:
-
-            self.frame['A'][:5] = nan
-
-            self.frame.to_csv(path)
-            self.frame.to_csv(path, columns=['A', 'B'])
-            self.frame.to_csv(path, header=False)
-            self.frame.to_csv(path, index=False)
-
-            # test roundtrip
-            self.tsframe.to_csv(path)
-            recons = DataFrame.from_csv(path)
-
-            assert_frame_equal(self.tsframe, recons)
-
-            self.tsframe.to_csv(path, index_label='index')
-            recons = DataFrame.from_csv(path, index_col=None)
-            assert(len(recons.columns) == len(self.tsframe.columns) + 1)
-
-            # no index
-            self.tsframe.to_csv(path, index=False)
-            recons = DataFrame.from_csv(path, index_col=None)
-            assert_almost_equal(self.tsframe.values, recons.values)
-
-            # corner case
-            dm = DataFrame({'s1': Series(lrange(3), lrange(3)),
-                            's2': Series(lrange(2), lrange(2))})
-            dm.to_csv(path)
-            recons = DataFrame.from_csv(path)
-            assert_frame_equal(dm, recons)
-
-        with ensure_clean(pname) as path:
-
-            # duplicate index
-            df = DataFrame(np.random.randn(3, 3), index=['a', 'a', 'b'],
-                           columns=['x', 'y', 'z'])
-            df.to_csv(path)
-            result = DataFrame.from_csv(path)
-            assert_frame_equal(result, df)
-
-            midx = MultiIndex.from_tuples([('A', 1, 2), ('A', 1, 2), ('B', 1, 2)])
-            df = DataFrame(np.random.randn(3, 3), index=midx,
-                           columns=['x', 'y', 'z'])
-            df.to_csv(path)
-            result = DataFrame.from_csv(path, index_col=[0, 1, 2],
-                                        parse_dates=False)
-            assert_frame_equal(result, df, check_names=False)  # TODO from_csv names index ['Unnamed: 1', 'Unnamed: 2'] should it ?
-
-            # column aliases
-            col_aliases = Index(['AA', 'X', 'Y', 'Z'])
-            self.frame2.to_csv(path, header=col_aliases)
-            rs = DataFrame.from_csv(path)
-            xp = self.frame2.copy()
-            xp.columns = col_aliases
-
-            assert_frame_equal(xp, rs)
-
-            self.assertRaises(ValueError, self.frame2.to_csv, path,
-                              header=['AA', 'X'])
-
-        with ensure_clean(pname) as path:
-            df1 = DataFrame(np.random.randn(3, 1))
-            df2 = DataFrame(np.random.randn(3, 1))
-
-            df1.to_csv(path)
-            df2.to_csv(path,mode='a',header=False)
-            xp = pd.concat([df1,df2])
-            rs = pd.read_csv(path,index_col=0)
-            rs.columns = lmap(int,rs.columns)
-            xp.columns = lmap(int,xp.columns)
-            assert_frame_equal(xp,rs)
-
-        with ensure_clean() as path:
-            # GH 10833 (TimedeltaIndex formatting)
-            dt = pd.Timedelta(seconds=1)
-            df = pd.DataFrame({'dt_data': [i*dt for i in range(3)]},
-                              index=pd.Index([i*dt for i in range(3)],
-                                             name='dt_index'))
-            df.to_csv(path)
-
-            result = pd.read_csv(path, index_col='dt_index')
-            result.index = pd.to_timedelta(result.index)
-            # TODO: remove renaming when GH 10875 is solved
-            result.index = result.index.rename('dt_index')
-            result['dt_data'] = pd.to_timedelta(result['dt_data'])
-
-            assert_frame_equal(df, result, check_index_type=True)
-
-        # tz, 8260
-        with ensure_clean(pname) as path:
-
-            self.tzframe.to_csv(path)
-            result = pd.read_csv(path, index_col=0, parse_dates=['A'])
-
-            converter = lambda c: pd.to_datetime(result[c]).dt.tz_localize('UTC').dt.tz_convert(self.tzframe[c].dt.tz)
-            result['B'] = converter('B')
-            result['C'] = converter('C')
-            assert_frame_equal(result, self.tzframe)
-
-    def test_to_csv_cols_reordering(self):
-        # GH3454
-        import pandas as pd
-
-        chunksize=5
-        N = int(chunksize*2.5)
-
-        df= mkdf(N, 3)
-        cs = df.columns
-        cols = [cs[2],cs[0]]
-
-        with ensure_clean() as path:
-            df.to_csv(path,columns = cols,chunksize=chunksize)
-            rs_c = pd.read_csv(path,index_col=0)
-
-        assert_frame_equal(df[cols],rs_c,check_names=False)
-
-    def test_to_csv_legacy_raises_on_dupe_cols(self):
-        df= mkdf(10, 3)
-        df.columns = ['a','a','b']
-        with ensure_clean() as path:
-            with tm.assert_produces_warning(FutureWarning, check_stacklevel=False):
-                self.assertRaises(NotImplementedError,df.to_csv,path,engine='python')
-
-    def test_to_csv_new_dupe_cols(self):
-        import pandas as pd
-        def _check_df(df,cols=None):
-            with ensure_clean() as path:
-                df.to_csv(path,columns = cols,chunksize=chunksize)
-                rs_c = pd.read_csv(path,index_col=0)
-
-                # we wrote them in a different order
-                # so compare them in that order
-                if cols is not None:
-
-                    if df.columns.is_unique:
-                        rs_c.columns = cols
-                    else:
-                        indexer, missing = df.columns.get_indexer_non_unique(cols)
-                        rs_c.columns = df.columns.take(indexer)
-
-                    for c in cols:
-                        obj_df = df[c]
-                        obj_rs = rs_c[c]
-                        if isinstance(obj_df,Series):
-                            assert_series_equal(obj_df,obj_rs)
-                        else:
-                            assert_frame_equal(obj_df,obj_rs,check_names=False)
-
-                # wrote in the same order
-                else:
-                    rs_c.columns = df.columns
-                    assert_frame_equal(df,rs_c,check_names=False)
-
-        chunksize=5
-        N = int(chunksize*2.5)
-
-        # dupe cols
-        df= mkdf(N, 3)
-        df.columns = ['a','a','b']
-        _check_df(df,None)
-
-        # dupe cols with selection
-        cols = ['b','a']
-        _check_df(df,cols)
-
-    @slow
-    def test_to_csv_moar(self):
-        path = '__tmp_to_csv_moar__'
-
-        def _do_test(df,path,r_dtype=None,c_dtype=None,rnlvl=None,cnlvl=None,
-                     dupe_col=False):
-
-            kwargs = dict(parse_dates=False)
-            if cnlvl:
-                if rnlvl is not None:
-                    kwargs['index_col'] = lrange(rnlvl)
-                kwargs['header'] = lrange(cnlvl)
-                with ensure_clean(path) as path:
-                    df.to_csv(path,encoding='utf8',chunksize=chunksize,tupleize_cols=False)
-                    recons = DataFrame.from_csv(path,tupleize_cols=False,**kwargs)
-            else:
-                kwargs['header'] = 0
-                with ensure_clean(path) as path:
-                    df.to_csv(path,encoding='utf8',chunksize=chunksize)
-                    recons = DataFrame.from_csv(path,**kwargs)
-
-            def _to_uni(x):
-                if not isinstance(x, compat.text_type):
-                    return x.decode('utf8')
-                return x
-            if dupe_col:
-                # read_Csv disambiguates the columns by
-                # labeling them dupe.1,dupe.2, etc'. monkey patch columns
-                recons.columns = df.columns
-            if rnlvl and not cnlvl:
-                delta_lvl = [recons.iloc[:, i].values for i in range(rnlvl-1)]
-                ix=MultiIndex.from_arrays([list(recons.index)]+delta_lvl)
-                recons.index = ix
-                recons = recons.iloc[:,rnlvl-1:]
-
-            type_map = dict(i='i',f='f',s='O',u='O',dt='O',p='O')
-            if r_dtype:
-                if r_dtype == 'u':  # unicode
-                    r_dtype='O'
-                    recons.index = np.array(lmap(_to_uni,recons.index),
-                                            dtype=r_dtype)
-                    df.index = np.array(lmap(_to_uni,df.index),dtype=r_dtype)
-                elif r_dtype == 'dt':  # unicode
-                    r_dtype='O'
-                    recons.index = np.array(lmap(Timestamp,recons.index),
-                                            dtype=r_dtype)
-                    df.index = np.array(lmap(Timestamp,df.index),dtype=r_dtype)
-                elif r_dtype == 'p':
-                    r_dtype='O'
-                    recons.index = np.array(list(map(Timestamp,
-                                                     recons.index.to_datetime())),
-                                            dtype=r_dtype)
-                    df.index = np.array(list(map(Timestamp,
-                                                 df.index.to_datetime())),
-                                        dtype=r_dtype)
-                else:
-                    r_dtype= type_map.get(r_dtype)
-                    recons.index = np.array(recons.index,dtype=r_dtype )
-                    df.index = np.array(df.index,dtype=r_dtype )
-            if c_dtype:
-                if c_dtype == 'u':
-                    c_dtype='O'
-                    recons.columns = np.array(lmap(_to_uni,recons.columns),
-                                              dtype=c_dtype)
-                    df.columns = np.array(lmap(_to_uni,df.columns),dtype=c_dtype )
-                elif c_dtype == 'dt':
-                    c_dtype='O'
-                    recons.columns = np.array(lmap(Timestamp,recons.columns),
-                                              dtype=c_dtype )
-                    df.columns = np.array(lmap(Timestamp,df.columns),dtype=c_dtype)
-                elif c_dtype == 'p':
-                    c_dtype='O'
-                    recons.columns = np.array(lmap(Timestamp,recons.columns.to_datetime()),
-                                              dtype=c_dtype)
-                    df.columns = np.array(lmap(Timestamp,df.columns.to_datetime()),dtype=c_dtype )
-                else:
-                    c_dtype= type_map.get(c_dtype)
-                    recons.columns = np.array(recons.columns,dtype=c_dtype )
-                    df.columns = np.array(df.columns,dtype=c_dtype )
-
-            assert_frame_equal(df,recons,check_names=False,check_less_precise=True)
-
-        N = 100
-        chunksize=1000
-
-        # GH3437
-        from pandas import NaT
-        def make_dtnat_arr(n,nnat=None):
-            if nnat is None:
-                nnat= int(n*0.1)  # 10%
-            s=list(date_range('2000',freq='5min',periods=n))
-            if nnat:
-                for i in np.random.randint(0,len(s),nnat):
-                    s[i] = NaT
-                i = np.random.randint(100)
-                s[-i] = NaT
-                s[i] = NaT
-            return s
-
-        # N=35000
-        s1=make_dtnat_arr(chunksize+5)
-        s2=make_dtnat_arr(chunksize+5,0)
-        path = '1.csv'
-
-        # s3=make_dtnjat_arr(chunksize+5,0)
-        with ensure_clean('.csv') as pth:
-            df=DataFrame(dict(a=s1,b=s2))
-            df.to_csv(pth,chunksize=chunksize)
-            recons = DataFrame.from_csv(pth)._convert(datetime=True,
-                                                      coerce=True)
-            assert_frame_equal(df, recons,check_names=False,check_less_precise=True)
-
-        for ncols in [4]:
-            base = int((chunksize// ncols or 1) or 1)
-            for nrows in [2,10,N-1,N,N+1,N+2,2*N-2,2*N-1,2*N,2*N+1,2*N+2,
-                          base-1,base,base+1]:
-                _do_test(mkdf(nrows, ncols,r_idx_type='dt',
-                              c_idx_type='s'),path, 'dt','s')
-
-        for ncols in [4]:
-            base = int((chunksize// ncols or 1) or 1)
-            for nrows in [2,10,N-1,N,N+1,N+2,2*N-2,2*N-1,2*N,2*N+1,2*N+2,
-                          base-1,base,base+1]:
-                _do_test(mkdf(nrows, ncols,r_idx_type='dt',
-                              c_idx_type='s'),path, 'dt','s')
-                pass
-
-        for r_idx_type,c_idx_type in [('i','i'),('s','s'),('u','dt'),('p','p')]:
-            for ncols in [1,2,3,4]:
-                base = int((chunksize// ncols or 1) or 1)
-                for nrows in [2,10,N-1,N,N+1,N+2,2*N-2,2*N-1,2*N,2*N+1,2*N+2,
-                              base-1,base,base+1]:
-                    _do_test(mkdf(nrows, ncols,r_idx_type=r_idx_type,
-                                  c_idx_type=c_idx_type),path,r_idx_type,c_idx_type)
-
-        for ncols in [1,2,3,4]:
-            base = int((chunksize// ncols or 1) or 1)
-            for nrows in [10,N-2,N-1,N,N+1,N+2,2*N-2,2*N-1,2*N,2*N+1,2*N+2,
-                          base-1,base,base+1]:
-                _do_test(mkdf(nrows, ncols),path)
-
-        for nrows in [10,N-2,N-1,N,N+1,N+2]:
-            df = mkdf(nrows, 3)
-            cols = list(df.columns)
-            cols[:2] = ["dupe","dupe"]
-            cols[-2:] = ["dupe","dupe"]
-            ix = list(df.index)
-            ix[:2] = ["rdupe","rdupe"]
-            ix[-2:] = ["rdupe","rdupe"]
-            df.index=ix
-            df.columns=cols
-            _do_test(df,path,dupe_col=True)
-
-        _do_test(DataFrame(index=lrange(10)),path)
-        _do_test(mkdf(chunksize//2+1, 2,r_idx_nlevels=2),path,rnlvl=2)
-        for ncols in [2,3,4]:
-            base = int(chunksize//ncols)
-            for nrows in [10,N-2,N-1,N,N+1,N+2,2*N-2,2*N-1,2*N,2*N+1,2*N+2,
-                          base-1,base,base+1]:
-                _do_test(mkdf(nrows, ncols,r_idx_nlevels=2),path,rnlvl=2)
-                _do_test(mkdf(nrows, ncols,c_idx_nlevels=2),path,cnlvl=2)
-                _do_test(mkdf(nrows, ncols,r_idx_nlevels=2,c_idx_nlevels=2),
-                         path,rnlvl=2,cnlvl=2)
-
-    def test_to_csv_from_csv_w_some_infs(self):
-
-        # test roundtrip with inf, -inf, nan, as full columns and mix
-        self.frame['G'] = np.nan
-        f = lambda x: [np.inf, np.nan][np.random.rand() < .5]
-        self.frame['H'] = self.frame.index.map(f)
-
-        with ensure_clean() as path:
-            self.frame.to_csv(path)
-            recons = DataFrame.from_csv(path)
-
-            assert_frame_equal(self.frame, recons, check_names=False)  # TODO to_csv drops column name
-            assert_frame_equal(np.isinf(self.frame), np.isinf(recons), check_names=False)
-
-    def test_to_csv_from_csv_w_all_infs(self):
-
-        # test roundtrip with inf, -inf, nan, as full columns and mix
-        self.frame['E'] = np.inf
-        self.frame['F'] = -np.inf
-
-        with ensure_clean() as path:
-            self.frame.to_csv(path)
-            recons = DataFrame.from_csv(path)
-
-            assert_frame_equal(self.frame, recons, check_names=False)  # TODO to_csv drops column name
-            assert_frame_equal(np.isinf(self.frame), np.isinf(recons), check_names=False)
-
-    def test_to_csv_no_index(self):
-        # GH 3624, after appending columns, to_csv fails
-        pname = '__tmp_to_csv_no_index__'
-        with ensure_clean(pname) as path:
-            df = DataFrame({'c1':[1,2,3], 'c2':[4,5,6]})
-            df.to_csv(path, index=False)
-            result = read_csv(path)
-            assert_frame_equal(df,result)
-            df['c3'] = Series([7,8,9],dtype='int64')
-            df.to_csv(path, index=False)
-            result = read_csv(path)
-            assert_frame_equal(df,result)
-
-    def test_to_csv_with_mix_columns(self):
-        # GH11637, incorrect output when a mix of integer and string column
-        # names passed as columns parameter in to_csv
-
-        df = DataFrame({0: ['a', 'b', 'c'],
-                        1: ['aa', 'bb', 'cc']})
-        df['test'] = 'txt'
-        assert_equal(df.to_csv(), df.to_csv(columns=[0, 1, 'test']))
-
-    def test_to_csv_headers(self):
-        # GH6186, the presence or absence of `index` incorrectly
-        # causes to_csv to have different header semantics.
-        pname = '__tmp_to_csv_headers__'
-        from_df = DataFrame([[1, 2], [3, 4]], columns=['A', 'B'])
-        to_df = DataFrame([[1, 2], [3, 4]], columns=['X', 'Y'])
-        with ensure_clean(pname) as path:
-            from_df.to_csv(path, header=['X', 'Y'])
-            recons = DataFrame.from_csv(path)
-            assert_frame_equal(to_df, recons)
-
-            from_df.to_csv(path, index=False, header=['X', 'Y'])
-            recons = DataFrame.from_csv(path)
-            recons.reset_index(inplace=True)
-            assert_frame_equal(to_df, recons)
-
-    def test_to_csv_multiindex(self):
-
-        pname = '__tmp_to_csv_multiindex__'
-        frame = self.frame
-        old_index = frame.index
-        arrays = np.arange(len(old_index) * 2).reshape(2, -1)
-        new_index = MultiIndex.from_arrays(arrays, names=['first', 'second'])
-        frame.index = new_index
-
-        with ensure_clean(pname) as path:
-
-            frame.to_csv(path, header=False)
-            frame.to_csv(path, columns=['A', 'B'])
-
-            # round trip
-            frame.to_csv(path)
-            df = DataFrame.from_csv(path, index_col=[0, 1], parse_dates=False)
-
-            assert_frame_equal(frame, df, check_names=False)  # TODO to_csv drops column name
-            self.assertEqual(frame.index.names, df.index.names)
-            self.frame.index = old_index  # needed if setUP becomes a classmethod
-
-            # try multiindex with dates
-            tsframe = self.tsframe
-            old_index = tsframe.index
-            new_index = [old_index, np.arange(len(old_index))]
-            tsframe.index = MultiIndex.from_arrays(new_index)
-
-            tsframe.to_csv(path, index_label=['time', 'foo'])
-            recons = DataFrame.from_csv(path, index_col=[0, 1])
-            assert_frame_equal(tsframe, recons, check_names=False)  # TODO to_csv drops column name
-
-            # do not load index
-            tsframe.to_csv(path)
-            recons = DataFrame.from_csv(path, index_col=None)
-            np.testing.assert_equal(len(recons.columns), len(tsframe.columns) + 2)
-
-            # no index
-            tsframe.to_csv(path, index=False)
-            recons = DataFrame.from_csv(path, index_col=None)
-            assert_almost_equal(recons.values, self.tsframe.values)
-            self.tsframe.index = old_index  # needed if setUP becomes classmethod
-
-        with ensure_clean(pname) as path:
-            # GH3571, GH1651, GH3141
-
-            def _make_frame(names=None):
-                if names is True:
-                    names = ['first','second']
-                return DataFrame(np.random.randint(0,10,size=(3,3)),
-                                 columns=MultiIndex.from_tuples([('bah', 'foo'),
-                                                                 ('bah', 'bar'),
-                                                                 ('ban', 'baz')],
-                                                                names=names),
-                                 dtype='int64')
-
-            # column & index are multi-index
-            df = mkdf(5,3,r_idx_nlevels=2,c_idx_nlevels=4)
-            df.to_csv(path,tupleize_cols=False)
-            result = read_csv(path,header=[0,1,2,3],index_col=[0,1],tupleize_cols=False)
-            assert_frame_equal(df,result)
-
-            # column is mi
-            df = mkdf(5,3,r_idx_nlevels=1,c_idx_nlevels=4)
-            df.to_csv(path,tupleize_cols=False)
-            result = read_csv(path,header=[0,1,2,3],index_col=0,tupleize_cols=False)
-            assert_frame_equal(df,result)
-
-            # dup column names?
-            df = mkdf(5,3,r_idx_nlevels=3,c_idx_nlevels=4)
-            df.to_csv(path,tupleize_cols=False)
-            result = read_csv(path,header=[0,1,2,3],index_col=[0,1,2],tupleize_cols=False)
-            assert_frame_equal(df,result)
-
-            # writing with no index
-            df = _make_frame()
-            df.to_csv(path,tupleize_cols=False,index=False)
-            result = read_csv(path,header=[0,1],tupleize_cols=False)
-            assert_frame_equal(df,result)
-
-            # we lose the names here
-            df = _make_frame(True)
-            df.to_csv(path,tupleize_cols=False,index=False)
-            result = read_csv(path,header=[0,1],tupleize_cols=False)
-            self.assertTrue(all([ x is None for x in result.columns.names ]))
-            result.columns.names = df.columns.names
-            assert_frame_equal(df,result)
-
-            # tupleize_cols=True and index=False
-            df = _make_frame(True)
-            df.to_csv(path,tupleize_cols=True,index=False)
-            result = read_csv(path,header=0,tupleize_cols=True,index_col=None)
-            result.columns = df.columns
-            assert_frame_equal(df,result)
-
-            # whatsnew example
-            df = _make_frame()
-            df.to_csv(path,tupleize_cols=False)
-            result = read_csv(path,header=[0,1],index_col=[0],tupleize_cols=False)
-            assert_frame_equal(df,result)
-
-            df = _make_frame(True)
-            df.to_csv(path,tupleize_cols=False)
-            result = read_csv(path,header=[0,1],index_col=[0],tupleize_cols=False)
-            assert_frame_equal(df,result)
-
-            # column & index are multi-index (compatibility)
-            df = mkdf(5,3,r_idx_nlevels=2,c_idx_nlevels=4)
-            df.to_csv(path,tupleize_cols=True)
-            result = read_csv(path,header=0,index_col=[0,1],tupleize_cols=True)
-            result.columns = df.columns
-            assert_frame_equal(df,result)
-
-            # invalid options
-            df = _make_frame(True)
-            df.to_csv(path,tupleize_cols=False)
-
-            # catch invalid headers
-            with assertRaisesRegexp(CParserError,
-                                    'Passed header=\[0,1,2\] are too many rows for this multi_index of columns'):
-                read_csv(path,tupleize_cols=False,header=lrange(3),index_col=0)
-
-            with assertRaisesRegexp(CParserError,
-                                    'Passed header=\[0,1,2,3,4,5,6\], len of 7, but only 6 lines in file'):
-                read_csv(path,tupleize_cols=False,header=lrange(7),index_col=0)
-
-            for i in [4,5,6]:
-                with tm.assertRaises(CParserError):
-                    read_csv(path, tupleize_cols=False, header=lrange(i), index_col=0)
-
-            # write with cols
-            with assertRaisesRegexp(TypeError, 'cannot specify cols with a MultiIndex'):
-                df.to_csv(path, tupleize_cols=False, columns=['foo', 'bar'])
-
-        with ensure_clean(pname) as path:
-            # empty
-            tsframe[:0].to_csv(path)
-            recons = DataFrame.from_csv(path)
-            exp = tsframe[:0]
-            exp.index = []
-
-            self.assertTrue(recons.columns.equals(exp.columns))
-            self.assertEqual(len(recons), 0)
-
-    def test_to_csv_float32_nanrep(self):
-        df = DataFrame(np.random.randn(1, 4).astype(np.float32))
-        df[1] = np.nan
-
-        with ensure_clean('__tmp_to_csv_float32_nanrep__.csv') as path:
-            df.to_csv(path, na_rep=999)
-
-            with open(path) as f:
-                lines = f.readlines()
-                self.assertEqual(lines[1].split(',')[2], '999')
-
-    def test_to_csv_withcommas(self):
-
-        # Commas inside fields should be correctly escaped when saving as CSV.
- df = DataFrame({'A': [1, 2, 3], 'B': ['5,6', '7,8', '9,0']}) - - with ensure_clean('__tmp_to_csv_withcommas__.csv') as path: - df.to_csv(path) - df2 = DataFrame.from_csv(path) - assert_frame_equal(df2, df) - - def test_to_csv_mixed(self): - - def create_cols(name): - return [ "%s%03d" % (name,i) for i in range(5) ] - - df_float = DataFrame(np.random.randn(100, 5),dtype='float64',columns=create_cols('float')) - df_int = DataFrame(np.random.randn(100, 5),dtype='int64',columns=create_cols('int')) - df_bool = DataFrame(True,index=df_float.index,columns=create_cols('bool')) - df_object = DataFrame('foo',index=df_float.index,columns=create_cols('object')) - df_dt = DataFrame(Timestamp('20010101'),index=df_float.index,columns=create_cols('date')) - - # add in some nans - df_float.ix[30:50,1:3] = np.nan - - #### this is a bug in read_csv right now #### - #df_dt.ix[30:50,1:3] = np.nan - - df = pd.concat([ df_float, df_int, df_bool, df_object, df_dt ], axis=1) - - # dtype - dtypes = dict() - for n,dtype in [('float',np.float64),('int',np.int64),('bool',np.bool),('object',np.object)]: - for c in create_cols(n): - dtypes[c] = dtype - - with ensure_clean() as filename: - df.to_csv(filename) - rs = read_csv(filename, index_col=0, dtype=dtypes, parse_dates=create_cols('date')) - assert_frame_equal(rs, df) - - def test_to_csv_dups_cols(self): - - df = DataFrame(np.random.randn(1000, 30),columns=lrange(15)+lrange(15),dtype='float64') - - with ensure_clean() as filename: - df.to_csv(filename) # single dtype, fine - result = read_csv(filename,index_col=0) - result.columns = df.columns - assert_frame_equal(result,df) - - df_float = DataFrame(np.random.randn(1000, 3),dtype='float64') - df_int = DataFrame(np.random.randn(1000, 3),dtype='int64') - df_bool = DataFrame(True,index=df_float.index,columns=lrange(3)) - df_object = DataFrame('foo',index=df_float.index,columns=lrange(3)) - df_dt = DataFrame(Timestamp('20010101'),index=df_float.index,columns=lrange(3)) - df = pd.concat([ 
df_float, df_int, df_bool, df_object, df_dt ], axis=1, ignore_index=True) - - cols = [] - for i in range(5): - cols.extend([0,1,2]) - df.columns = cols - - from pandas import to_datetime - with ensure_clean() as filename: - df.to_csv(filename) - result = read_csv(filename,index_col=0) - - # date cols - for i in ['0.4','1.4','2.4']: - result[i] = to_datetime(result[i]) - - result.columns = df.columns - assert_frame_equal(result,df) - - # GH3457 - from pandas.util.testing import makeCustomDataframe as mkdf - - N=10 - df= mkdf(N, 3) - df.columns = ['a','a','b'] - - with ensure_clean() as filename: - df.to_csv(filename) - - # read_csv will rename the dups columns - result = read_csv(filename,index_col=0) - result = result.rename(columns={ 'a.1' : 'a' }) - assert_frame_equal(result,df) - - def test_to_csv_chunking(self): - - aa=DataFrame({'A':lrange(100000)}) - aa['B'] = aa.A + 1.0 - aa['C'] = aa.A + 2.0 - aa['D'] = aa.A + 3.0 - - for chunksize in [10000,50000,100000]: - with ensure_clean() as filename: - aa.to_csv(filename,chunksize=chunksize) - rs = read_csv(filename,index_col=0) - assert_frame_equal(rs, aa) - - @slow - def test_to_csv_wide_frame_formatting(self): - # Issue #8621 - df = DataFrame(np.random.randn(1, 100010), columns=None, index=None) - with ensure_clean() as filename: - df.to_csv(filename, header=False, index=False) - rs = read_csv(filename, header=None) - assert_frame_equal(rs, df) - - def test_to_csv_bug(self): - f1 = StringIO('a,1.0\nb,2.0') - df = DataFrame.from_csv(f1, header=None) - newdf = DataFrame({'t': df[df.columns[0]]}) - - with ensure_clean() as path: - newdf.to_csv(path) - - recons = read_csv(path, index_col=0) - assert_frame_equal(recons, newdf, check_names=False) # don't check_names as t != 1 - - def test_to_csv_unicode(self): - - df = DataFrame({u('c/\u03c3'): [1, 2, 3]}) - with ensure_clean() as path: - - df.to_csv(path, encoding='UTF-8') - df2 = read_csv(path, index_col=0, encoding='UTF-8') - assert_frame_equal(df, df2) - - 
df.to_csv(path, encoding='UTF-8', index=False) - df2 = read_csv(path, index_col=None, encoding='UTF-8') - assert_frame_equal(df, df2) - - def test_to_csv_unicode_index_col(self): - buf = StringIO('') - df = DataFrame( - [[u("\u05d0"), "d2", "d3", "d4"], ["a1", "a2", "a3", "a4"]], - columns=[u("\u05d0"), - u("\u05d1"), u("\u05d2"), u("\u05d3")], - index=[u("\u05d0"), u("\u05d1")]) - - df.to_csv(buf, encoding='UTF-8') - buf.seek(0) - - df2 = read_csv(buf, index_col=0, encoding='UTF-8') - assert_frame_equal(df, df2) - - def test_to_csv_stringio(self): - buf = StringIO() - self.frame.to_csv(buf) - buf.seek(0) - recons = read_csv(buf, index_col=0) - assert_frame_equal(recons, self.frame, check_names=False) # TODO to_csv drops column name - - def test_to_csv_float_format(self): - - df = DataFrame([[0.123456, 0.234567, 0.567567], - [12.32112, 123123.2, 321321.2]], - index=['A', 'B'], columns=['X', 'Y', 'Z']) - - with ensure_clean() as filename: - - df.to_csv(filename, float_format='%.2f') - - rs = read_csv(filename, index_col=0) - xp = DataFrame([[0.12, 0.23, 0.57], - [12.32, 123123.20, 321321.20]], - index=['A', 'B'], columns=['X', 'Y', 'Z']) - assert_frame_equal(rs, xp) - - def test_to_csv_quoting(self): - df = DataFrame({'A': [1, 2, 3], 'B': ['foo', 'bar', 'baz']}) - - buf = StringIO() - df.to_csv(buf, index=False, quoting=csv.QUOTE_NONNUMERIC) - - result = buf.getvalue() - expected = ('"A","B"\n' - '1,"foo"\n' - '2,"bar"\n' - '3,"baz"\n') - - self.assertEqual(result, expected) - - # quoting windows line terminators, presents with encoding? 
- # #3503 - text = 'a,b,c\n1,"test \r\n",3\n' - df = pd.read_csv(StringIO(text)) - buf = StringIO() - df.to_csv(buf, encoding='utf-8', index=False) - self.assertEqual(buf.getvalue(), text) - - # testing if quoting parameter is passed through with multi-indexes - # related to issue #7791 - df = pd.DataFrame({'a': [1, 2], 'b': [3, 4], 'c': [5, 6]}) - df = df.set_index(['a', 'b']) - expected = '"a","b","c"\n"1","3","5"\n"2","4","6"\n' - self.assertEqual(df.to_csv(quoting=csv.QUOTE_ALL), expected) - - def test_to_csv_unicodewriter_quoting(self): - df = DataFrame({'A': [1, 2, 3], 'B': ['foo', 'bar', 'baz']}) - - buf = StringIO() - df.to_csv(buf, index=False, quoting=csv.QUOTE_NONNUMERIC, - encoding='utf-8') - - result = buf.getvalue() - expected = ('"A","B"\n' - '1,"foo"\n' - '2,"bar"\n' - '3,"baz"\n') - - self.assertEqual(result, expected) - - def test_to_csv_quote_none(self): - # GH4328 - df = DataFrame({'A': ['hello', '{"hello"}']}) - for encoding in (None, 'utf-8'): - buf = StringIO() - df.to_csv(buf, quoting=csv.QUOTE_NONE, - encoding=encoding, index=False) - result = buf.getvalue() - expected = 'A\nhello\n{"hello"}\n' - self.assertEqual(result, expected) - - def test_to_csv_index_no_leading_comma(self): - df = DataFrame({'A': [1, 2, 3], 'B': [4, 5, 6]}, - index=['one', 'two', 'three']) - - buf = StringIO() - df.to_csv(buf, index_label=False) - expected = ('A,B\n' - 'one,1,4\n' - 'two,2,5\n' - 'three,3,6\n') - self.assertEqual(buf.getvalue(), expected) - - def test_to_csv_line_terminators(self): - df = DataFrame({'A': [1, 2, 3], 'B': [4, 5, 6]}, - index=['one', 'two', 'three']) - - buf = StringIO() - df.to_csv(buf, line_terminator='\r\n') - expected = (',A,B\r\n' - 'one,1,4\r\n' - 'two,2,5\r\n' - 'three,3,6\r\n') - self.assertEqual(buf.getvalue(), expected) - - buf = StringIO() - df.to_csv(buf) # The default line terminator remains \n - expected = (',A,B\n' - 'one,1,4\n' - 'two,2,5\n' - 'three,3,6\n') - self.assertEqual(buf.getvalue(), expected) - - def 
test_to_csv_from_csv_categorical(self): - - # CSV with categoricals should result in the same output as when one would add a "normal" - # Series/DataFrame. - s = Series(pd.Categorical(['a', 'b', 'b', 'a', 'a', 'c', 'c', 'c'])) - s2 = Series(['a', 'b', 'b', 'a', 'a', 'c', 'c', 'c']) - res = StringIO() - s.to_csv(res) - exp = StringIO() - s2.to_csv(exp) - self.assertEqual(res.getvalue(), exp.getvalue()) - - df = DataFrame({"s":s}) - df2 = DataFrame({"s":s2}) - res = StringIO() - df.to_csv(res) - exp = StringIO() - df2.to_csv(exp) - self.assertEqual(res.getvalue(), exp.getvalue()) - - def test_to_csv_path_is_none(self): - # GH 8215 - # Make sure we return string for consistency with - # Series.to_csv() - csv_str = self.frame.to_csv(path=None) - self.assertIsInstance(csv_str, str) - recons = pd.read_csv(StringIO(csv_str), index_col=0) - assert_frame_equal(self.frame, recons) - - def test_to_csv_compression_gzip(self): - ## GH7615 - ## use the compression kw in to_csv - df = DataFrame([[0.123456, 0.234567, 0.567567], - [12.32112, 123123.2, 321321.2]], - index=['A', 'B'], columns=['X', 'Y', 'Z']) - - with ensure_clean() as filename: - - df.to_csv(filename, compression="gzip") - - # test the round trip - to_csv -> read_csv - rs = read_csv(filename, compression="gzip", index_col=0) - assert_frame_equal(df, rs) - - # explicitly make sure file is gziped - import gzip - f = gzip.open(filename, 'rb') - text = f.read().decode('utf8') - f.close() - for col in df.columns: - self.assertIn(col, text) - - def test_to_csv_compression_bz2(self): - ## GH7615 - ## use the compression kw in to_csv - df = DataFrame([[0.123456, 0.234567, 0.567567], - [12.32112, 123123.2, 321321.2]], - index=['A', 'B'], columns=['X', 'Y', 'Z']) - - with ensure_clean() as filename: - - df.to_csv(filename, compression="bz2") - - # test the round trip - to_csv -> read_csv - rs = read_csv(filename, compression="bz2", index_col=0) - assert_frame_equal(df, rs) - - # explicitly make sure file is bz2ed - import bz2 
- f = bz2.BZ2File(filename, 'rb') - text = f.read().decode('utf8') - f.close() - for col in df.columns: - self.assertIn(col, text) - - def test_to_csv_compression_value_error(self): - ## GH7615 - ## use the compression kw in to_csv - df = DataFrame([[0.123456, 0.234567, 0.567567], - [12.32112, 123123.2, 321321.2]], - index=['A', 'B'], columns=['X', 'Y', 'Z']) - - with ensure_clean() as filename: - # zip compression is not supported and should raise ValueError - self.assertRaises(ValueError, df.to_csv, filename, compression="zip") - - def test_info(self): - io = StringIO() - self.frame.info(buf=io) - self.tsframe.info(buf=io) - - frame = DataFrame(np.random.randn(5, 3)) - - import sys - sys.stdout = StringIO() - frame.info() - frame.info(verbose=False) - sys.stdout = sys.__stdout__ - - def test_info_wide(self): - from pandas import set_option, reset_option - io = StringIO() - df = DataFrame(np.random.randn(5, 101)) - df.info(buf=io) - - io = StringIO() - df.info(buf=io, max_cols=101) - rs = io.getvalue() - self.assertTrue(len(rs.splitlines()) > 100) - xp = rs - - set_option('display.max_info_columns', 101) - io = StringIO() - df.info(buf=io) - self.assertEqual(rs, xp) - reset_option('display.max_info_columns') - - def test_info_duplicate_columns(self): - io = StringIO() - - # it works! 
- frame = DataFrame(np.random.randn(1500, 4), - columns=['a', 'a', 'b', 'b']) - frame.info(buf=io) - - def test_info_duplicate_columns_shows_correct_dtypes(self): - # GH11761 - io = StringIO() - - frame = DataFrame([[1, 2.0]], - columns=['a', 'a']) - frame.info(buf=io) - io.seek(0) - lines = io.readlines() - self.assertEqual('a 1 non-null int64\n', lines[3]) - self.assertEqual('a 1 non-null float64\n', lines[4]) - - def test_info_shows_column_dtypes(self): - dtypes = ['int64', 'float64', 'datetime64[ns]', 'timedelta64[ns]', - 'complex128', 'object', 'bool'] - data = {} - n = 10 - for i, dtype in enumerate(dtypes): - data[i] = np.random.randint(2, size=n).astype(dtype) - df = DataFrame(data) - buf = StringIO() - df.info(buf=buf) - res = buf.getvalue() - for i, dtype in enumerate(dtypes): - name = '%d %d non-null %s' % (i, n, dtype) - assert name in res - - def test_info_max_cols(self): - df = DataFrame(np.random.randn(10, 5)) - for len_, verbose in [(5, None), (5, False), (10, True)]: - # For verbose always ^ setting ^ summarize ^ full output - with option_context('max_info_columns', 4): - buf = StringIO() - df.info(buf=buf, verbose=verbose) - res = buf.getvalue() - self.assertEqual(len(res.strip().split('\n')), len_) - - for len_, verbose in [(10, None), (5, False), (10, True)]: - - # max_cols no exceeded - with option_context('max_info_columns', 5): - buf = StringIO() - df.info(buf=buf, verbose=verbose) - res = buf.getvalue() - self.assertEqual(len(res.strip().split('\n')), len_) - - for len_, max_cols in [(10, 5), (5, 4)]: - # setting truncates - with option_context('max_info_columns', 4): - buf = StringIO() - df.info(buf=buf, max_cols=max_cols) - res = buf.getvalue() - self.assertEqual(len(res.strip().split('\n')), len_) - - # setting wouldn't truncate - with option_context('max_info_columns', 5): - buf = StringIO() - df.info(buf=buf, max_cols=max_cols) - res = buf.getvalue() - self.assertEqual(len(res.strip().split('\n')), len_) - - def 
test_info_memory_usage(self): - # Ensure memory usage is displayed, when asserted, on the last line - dtypes = ['int64', 'float64', 'datetime64[ns]', 'timedelta64[ns]', - 'complex128', 'object', 'bool'] - data = {} - n = 10 - for i, dtype in enumerate(dtypes): - data[i] = np.random.randint(2, size=n).astype(dtype) - df = DataFrame(data) - buf = StringIO() - # display memory usage case - df.info(buf=buf, memory_usage=True) - res = buf.getvalue().splitlines() - self.assertTrue("memory usage: " in res[-1]) - # do not display memory usage cas - df.info(buf=buf, memory_usage=False) - res = buf.getvalue().splitlines() - self.assertTrue("memory usage: " not in res[-1]) - - df.info(buf=buf, memory_usage=True) - res = buf.getvalue().splitlines() - # memory usage is a lower bound, so print it as XYZ+ MB - self.assertTrue(re.match(r"memory usage: [^+]+\+", res[-1])) - - df.iloc[:, :5].info(buf=buf, memory_usage=True) - res = buf.getvalue().splitlines() - # excluded column with object dtype, so estimate is accurate - self.assertFalse(re.match(r"memory usage: [^+]+\+", res[-1])) - - df_with_object_index = pd.DataFrame({'a': [1]}, index=['foo']) - df_with_object_index.info(buf=buf, memory_usage=True) - res = buf.getvalue().splitlines() - self.assertTrue(re.match(r"memory usage: [^+]+\+", res[-1])) - - df_with_object_index.info(buf=buf, memory_usage='deep') - res = buf.getvalue().splitlines() - self.assertTrue(re.match(r"memory usage: [^+]+$", res[-1])) - - self.assertTrue(df_with_object_index.memory_usage(index=True, deep=True).sum() \ - > df_with_object_index.memory_usage(index=True).sum()) - - df_object = pd.DataFrame({'a': ['a']}) - self.assertTrue(df_object.memory_usage(deep=True).sum() \ - > df_object.memory_usage().sum()) - - # Test a DataFrame with duplicate columns - dtypes = ['int64', 'int64', 'int64', 'float64'] - data = {} - n = 100 - for i, dtype in enumerate(dtypes): - data[i] = np.random.randint(2, size=n).astype(dtype) - df = DataFrame(data) - df.columns = dtypes 
- # Ensure df size is as expected - df_size = df.memory_usage().sum() - exp_size = (len(dtypes) + 1) * n * 8 # (cols + index) * rows * bytes - self.assertEqual(df_size, exp_size) - # Ensure number of cols in memory_usage is the same as df - size_df = np.size(df.columns.values) + 1 # index=True; default - self.assertEqual(size_df, np.size(df.memory_usage())) - - # assert deep works only on object - self.assertEqual(df.memory_usage().sum(), - df.memory_usage(deep=True).sum()) - - # test for validity - DataFrame(1, index=['a'], columns=['A'] - ).memory_usage(index=True) - DataFrame(1, index=['a'], columns=['A'] - ).index.nbytes - df = DataFrame( - data=1, - index=pd.MultiIndex.from_product( - [['a'], range(1000)]), - columns=['A'] - ) - df.index.nbytes - df.memory_usage(index=True) - df.index.values.nbytes - - # sys.getsizeof will call the .memory_usage with - # deep=True, and add on some GC overhead - diff = df.memory_usage(deep=True).sum() - sys.getsizeof(df) - self.assertTrue(abs(diff) < 100) - - def test_dtypes(self): - self.mixed_frame['bool'] = self.mixed_frame['A'] > 0 - result = self.mixed_frame.dtypes - expected = Series(dict((k, v.dtype) - for k, v in compat.iteritems(self.mixed_frame)), - index=result.index) - assert_series_equal(result, expected) - - # compat, GH 8722 - with option_context('use_inf_as_null',True): - df = DataFrame([[1]]) - result = df.dtypes - assert_series_equal(result,Series({0:np.dtype('int64')})) - - def test_convert_objects(self): - - oops = self.mixed_frame.T.T - converted = oops._convert(datetime=True) - assert_frame_equal(converted, self.mixed_frame) - self.assertEqual(converted['A'].dtype, np.float64) - - # force numeric conversion - self.mixed_frame['H'] = '1.' - self.mixed_frame['I'] = '1' - - # add in some items that will be nan - l = len(self.mixed_frame) - self.mixed_frame['J'] = '1.' 
- self.mixed_frame['K'] = '1' - self.mixed_frame.ix[0:5,['J','K']] = 'garbled' - converted = self.mixed_frame._convert(datetime=True, numeric=True) - self.assertEqual(converted['H'].dtype, 'float64') - self.assertEqual(converted['I'].dtype, 'int64') - self.assertEqual(converted['J'].dtype, 'float64') - self.assertEqual(converted['K'].dtype, 'float64') - self.assertEqual(len(converted['J'].dropna()), l-5) - self.assertEqual(len(converted['K'].dropna()), l-5) - - # via astype - converted = self.mixed_frame.copy() - converted['H'] = converted['H'].astype('float64') - converted['I'] = converted['I'].astype('int64') - self.assertEqual(converted['H'].dtype, 'float64') - self.assertEqual(converted['I'].dtype, 'int64') - - # via astype, but errors - converted = self.mixed_frame.copy() - with assertRaisesRegexp(ValueError, 'invalid literal'): - converted['H'].astype('int32') - - # mixed in a single column - df = DataFrame(dict(s = Series([1, 'na', 3 ,4]))) - result = df._convert(datetime=True, numeric=True) - expected = DataFrame(dict(s = Series([1, np.nan, 3 ,4]))) - assert_frame_equal(result, expected) - - def test_convert_objects_no_conversion(self): - mixed1 = DataFrame( - {'a': [1, 2, 3], 'b': [4.0, 5, 6], 'c': ['x', 'y', 'z']}) - mixed2 = mixed1._convert(datetime=True) - assert_frame_equal(mixed1, mixed2) - - def test_append_series_dict(self): - df = DataFrame(np.random.randn(5, 4), - columns=['foo', 'bar', 'baz', 'qux']) - - series = df.ix[4] - with assertRaisesRegexp(ValueError, 'Indexes have overlapping values'): - df.append(series, verify_integrity=True) - series.name = None - with assertRaisesRegexp(TypeError, 'Can only append a Series if ' - 'ignore_index=True'): - df.append(series, verify_integrity=True) - - result = df.append(series[::-1], ignore_index=True) - expected = df.append(DataFrame({0: series[::-1]}, index=df.columns).T, - ignore_index=True) - assert_frame_equal(result, expected) - - # dict - result = df.append(series.to_dict(), ignore_index=True) - 
assert_frame_equal(result, expected) - - result = df.append(series[::-1][:3], ignore_index=True) - expected = df.append(DataFrame({0: series[::-1][:3]}).T, - ignore_index=True) - assert_frame_equal(result, expected.ix[:, result.columns]) - - # can append when name set - row = df.ix[4] - row.name = 5 - result = df.append(row) - expected = df.append(df[-1:], ignore_index=True) - assert_frame_equal(result, expected) - - def test_append_list_of_series_dicts(self): - df = DataFrame(np.random.randn(5, 4), - columns=['foo', 'bar', 'baz', 'qux']) - - dicts = [x.to_dict() for idx, x in df.iterrows()] - - result = df.append(dicts, ignore_index=True) - expected = df.append(df, ignore_index=True) - assert_frame_equal(result, expected) - - # different columns - dicts = [{'foo': 1, 'bar': 2, 'baz': 3, 'peekaboo': 4}, - {'foo': 5, 'bar': 6, 'baz': 7, 'peekaboo': 8}] - result = df.append(dicts, ignore_index=True) - expected = df.append(DataFrame(dicts), ignore_index=True) - assert_frame_equal(result, expected) - - def test_append_empty_dataframe(self): - - # Empty df append empty df - df1 = DataFrame([]) - df2 = DataFrame([]) - result = df1.append(df2) - expected = df1.copy() - assert_frame_equal(result, expected) - - # Non-empty df append empty df - df1 = DataFrame(np.random.randn(5, 2)) - df2 = DataFrame() - result = df1.append(df2) - expected = df1.copy() - assert_frame_equal(result, expected) - - # Empty df with columns append empty df - df1 = DataFrame(columns=['bar', 'foo']) - df2 = DataFrame() - result = df1.append(df2) - expected = df1.copy() - assert_frame_equal(result, expected) - - # Non-Empty df with columns append empty df - df1 = DataFrame(np.random.randn(5, 2), columns=['bar', 'foo']) - df2 = DataFrame() - result = df1.append(df2) - expected = df1.copy() - assert_frame_equal(result, expected) - - def test_append_dtypes(self): - - # GH 5754 - # row appends of different dtypes (so need to do by-item) - # can sometimes infer the correct type - - df1 = DataFrame({ 'bar' 
: Timestamp('20130101') }, index=lrange(5)) - df2 = DataFrame() - result = df1.append(df2) - expected = df1.copy() - assert_frame_equal(result, expected) - - df1 = DataFrame({ 'bar' : Timestamp('20130101') }, index=lrange(1)) - df2 = DataFrame({ 'bar' : 'foo' }, index=lrange(1,2)) - result = df1.append(df2) - expected = DataFrame({ 'bar' : [ Timestamp('20130101'), 'foo' ]}) - assert_frame_equal(result, expected) - - df1 = DataFrame({ 'bar' : Timestamp('20130101') }, index=lrange(1)) - df2 = DataFrame({ 'bar' : np.nan }, index=lrange(1,2)) - result = df1.append(df2) - expected = DataFrame({ 'bar' : Series([ Timestamp('20130101'), np.nan ],dtype='M8[ns]') }) - assert_frame_equal(result, expected) - - df1 = DataFrame({ 'bar' : Timestamp('20130101') }, index=lrange(1)) - df2 = DataFrame({ 'bar' : np.nan }, index=lrange(1,2), dtype=object) - result = df1.append(df2) - expected = DataFrame({ 'bar' : Series([ Timestamp('20130101'), np.nan ],dtype='M8[ns]') }) - assert_frame_equal(result, expected) - - df1 = DataFrame({ 'bar' : np.nan }, index=lrange(1)) - df2 = DataFrame({ 'bar' : Timestamp('20130101') }, index=lrange(1,2)) - result = df1.append(df2) - expected = DataFrame({ 'bar' : Series([ np.nan, Timestamp('20130101')] ,dtype='M8[ns]') }) - assert_frame_equal(result, expected) - - df1 = DataFrame({ 'bar' : Timestamp('20130101') }, index=lrange(1)) - df2 = DataFrame({ 'bar' : 1 }, index=lrange(1,2), dtype=object) - result = df1.append(df2) - expected = DataFrame({ 'bar' : Series([ Timestamp('20130101'), 1 ]) }) - assert_frame_equal(result, expected) - - def test_asfreq(self): - offset_monthly = self.tsframe.asfreq(datetools.bmonthEnd) - rule_monthly = self.tsframe.asfreq('BM') - - assert_almost_equal(offset_monthly['A'], rule_monthly['A']) - - filled = rule_monthly.asfreq('B', method='pad') - # TODO: actually check that this worked. - - # don't forget! 
- filled_dep = rule_monthly.asfreq('B', method='pad') - - # test does not blow up on length-0 DataFrame - zero_length = self.tsframe.reindex([]) - result = zero_length.asfreq('BM') - self.assertIsNot(result, zero_length) - - def test_asfreq_datetimeindex(self): - df = DataFrame({'A': [1, 2, 3]}, - index=[datetime(2011, 11, 1), datetime(2011, 11, 2), - datetime(2011, 11, 3)]) - df = df.asfreq('B') - tm.assertIsInstance(df.index, DatetimeIndex) - - ts = df['A'].asfreq('B') - tm.assertIsInstance(ts.index, DatetimeIndex) - - def test_at_time_between_time_datetimeindex(self): - index = date_range("2012-01-01", "2012-01-05", freq='30min') - df = DataFrame(randn(len(index), 5), index=index) - akey = time(12, 0, 0) - bkey = slice(time(13, 0, 0), time(14, 0, 0)) - ainds = [24, 72, 120, 168] - binds = [26, 27, 28, 74, 75, 76, 122, 123, 124, 170, 171, 172] - - result = df.at_time(akey) - expected = df.ix[akey] - expected2 = df.ix[ainds] - assert_frame_equal(result, expected) - assert_frame_equal(result, expected2) - self.assertEqual(len(result), 4) - - result = df.between_time(bkey.start, bkey.stop) - expected = df.ix[bkey] - expected2 = df.ix[binds] - assert_frame_equal(result, expected) - assert_frame_equal(result, expected2) - self.assertEqual(len(result), 12) - - result = df.copy() - result.ix[akey] = 0 - result = result.ix[akey] - expected = df.ix[akey].copy() - expected.ix[:] = 0 - assert_frame_equal(result, expected) - - result = df.copy() - result.ix[akey] = 0 - result.ix[akey] = df.ix[ainds] - assert_frame_equal(result, df) - - result = df.copy() - result.ix[bkey] = 0 - result = result.ix[bkey] - expected = df.ix[bkey].copy() - expected.ix[:] = 0 - assert_frame_equal(result, expected) - - result = df.copy() - result.ix[bkey] = 0 - result.ix[bkey] = df.ix[binds] - assert_frame_equal(result, df) - - def test_as_matrix(self): - frame = self.frame - mat = frame.as_matrix() - - frameCols = frame.columns - for i, row in enumerate(mat): - for j, value in enumerate(row): - 
col = frameCols[j] - if np.isnan(value): - self.assertTrue(np.isnan(frame[col][i])) - else: - self.assertEqual(value, frame[col][i]) - - # mixed type - mat = self.mixed_frame.as_matrix(['foo', 'A']) - self.assertEqual(mat[0, 0], 'bar') - - df = DataFrame({'real': [1, 2, 3], 'complex': [1j, 2j, 3j]}) - mat = df.as_matrix() - self.assertEqual(mat[0, 0], 1j) - - # single block corner case - mat = self.frame.as_matrix(['A', 'B']) - expected = self.frame.reindex(columns=['A', 'B']).values - assert_almost_equal(mat, expected) - - def test_as_matrix_duplicates(self): - df = DataFrame([[1, 2, 'a', 'b'], - [1, 2, 'a', 'b']], - columns=['one', 'one', 'two', 'two']) - - result = df.values - expected = np.array([[1, 2, 'a', 'b'], [1, 2, 'a', 'b']], - dtype=object) - - self.assertTrue(np.array_equal(result, expected)) - - def test_ftypes(self): - frame = self.mixed_float - expected = Series(dict(A = 'float32:dense', - B = 'float32:dense', - C = 'float16:dense', - D = 'float64:dense')).sort_values() - result = frame.ftypes.sort_values() - assert_series_equal(result,expected) - - def test_values(self): - self.frame.values[:, 0] = 5. 
-        self.assertTrue((self.frame.values[:, 0] == 5).all())
-
-    def test_deepcopy(self):
-        cp = deepcopy(self.frame)
-        series = cp['A']
-        series[:] = 10
-        for idx, value in compat.iteritems(series):
-            self.assertNotEqual(self.frame['A'][idx], value)
-
-    def test_copy(self):
-        cop = self.frame.copy()
-        cop['E'] = cop['A']
-        self.assertNotIn('E', self.frame)
-
-        # copy objects
-        copy = self.mixed_frame.copy()
-        self.assertIsNot(copy._data, self.mixed_frame._data)
-
-    def _check_method(self, method='pearson', check_minp=False):
-        if not check_minp:
-            correls = self.frame.corr(method=method)
-            exp = self.frame['A'].corr(self.frame['C'], method=method)
-            assert_almost_equal(correls['A']['C'], exp)
-        else:
-            result = self.frame.corr(min_periods=len(self.frame) - 8)
-            expected = self.frame.corr()
-            expected.ix['A', 'B'] = expected.ix['B', 'A'] = nan
-
-    def test_corr_pearson(self):
-        tm._skip_if_no_scipy()
-        self.frame['A'][:5] = nan
-        self.frame['B'][5:10] = nan
-
-        self._check_method('pearson')
-
-    def test_corr_kendall(self):
-        tm._skip_if_no_scipy()
-        self.frame['A'][:5] = nan
-        self.frame['B'][5:10] = nan
-
-        self._check_method('kendall')
-
-    def test_corr_spearman(self):
-        tm._skip_if_no_scipy()
-        self.frame['A'][:5] = nan
-        self.frame['B'][5:10] = nan
-
-        self._check_method('spearman')
-
-    def test_corr_non_numeric(self):
-        tm._skip_if_no_scipy()
-        self.frame['A'][:5] = nan
-        self.frame['B'][5:10] = nan
-
-        # exclude non-numeric types
-        result = self.mixed_frame.corr()
-        expected = self.mixed_frame.ix[:, ['A', 'B', 'C', 'D']].corr()
-        assert_frame_equal(result, expected)
-
-    def test_corr_nooverlap(self):
-        tm._skip_if_no_scipy()
-
-        # nothing in common
-        for meth in ['pearson', 'kendall', 'spearman']:
-            df = DataFrame({'A': [1, 1.5, 1, np.nan, np.nan, np.nan],
-                            'B': [np.nan, np.nan, np.nan, 1, 1.5, 1],
-                            'C': [np.nan, np.nan, np.nan, np.nan, np.nan, np.nan]})
-            rs = df.corr(meth)
-            self.assertTrue(isnull(rs.ix['A', 'B']))
-            self.assertTrue(isnull(rs.ix['B', 'A']))
-            self.assertEqual(rs.ix['A', 'A'], 1)
-            self.assertEqual(rs.ix['B', 'B'], 1)
-            self.assertTrue(isnull(rs.ix['C', 'C']))
-
-    def test_corr_constant(self):
-        tm._skip_if_no_scipy()
-
-        # constant --> all NA
-
-        for meth in ['pearson', 'spearman']:
-            df = DataFrame({'A': [1, 1, 1, np.nan, np.nan, np.nan],
-                            'B': [np.nan, np.nan, np.nan, 1, 1, 1]})
-            rs = df.corr(meth)
-            self.assertTrue(isnull(rs.values).all())
-
-    def test_corr_int(self):
-        # dtypes other than float64 #1761
-        df3 = DataFrame({"a": [1, 2, 3, 4], "b": [1, 2, 3, 4]})
-
-        # it works!
-        df3.cov()
-        df3.corr()
-
-    def test_corr_int_and_boolean(self):
-        tm._skip_if_no_scipy()
-
-        # when dtypes of pandas series are different
-        # then ndarray will have dtype=object,
-        # so it need to be properly handled
-        df = DataFrame({"a": [True, False], "b": [1, 0]})
-
-        expected = DataFrame(np.ones((2, 2)), index=['a', 'b'], columns=['a', 'b'])
-        for meth in ['pearson', 'kendall', 'spearman']:
-            assert_frame_equal(df.corr(meth), expected)
-
-    def test_cov(self):
-        # min_periods no NAs (corner case)
-        expected = self.frame.cov()
-        result = self.frame.cov(min_periods=len(self.frame))
-
-        assert_frame_equal(expected, result)
-
-        result = self.frame.cov(min_periods=len(self.frame) + 1)
-        self.assertTrue(isnull(result.values).all())
-
-        # with NAs
-        frame = self.frame.copy()
-        frame['A'][:5] = nan
-        frame['B'][5:10] = nan
-        result = self.frame.cov(min_periods=len(self.frame) - 8)
-        expected = self.frame.cov()
-        expected.ix['A', 'B'] = np.nan
-        expected.ix['B', 'A'] = np.nan
-
-        # regular
-        self.frame['A'][:5] = nan
-        self.frame['B'][:10] = nan
-        cov = self.frame.cov()
-
-        assert_almost_equal(cov['A']['C'],
-                            self.frame['A'].cov(self.frame['C']))
-
-        # exclude non-numeric types
-        result = self.mixed_frame.cov()
-        expected = self.mixed_frame.ix[:, ['A', 'B', 'C', 'D']].cov()
-        assert_frame_equal(result, expected)
-
-        # Single column frame
-        df = DataFrame(np.linspace(0.0,1.0,10))
-        result = df.cov()
-        expected = DataFrame(np.cov(df.values.T).reshape((1,1)),
-                             index=df.columns,columns=df.columns)
-        assert_frame_equal(result, expected)
-        df.ix[0] = np.nan
-        result = df.cov()
-        expected = DataFrame(np.cov(df.values[1:].T).reshape((1,1)),
-                             index=df.columns,columns=df.columns)
-        assert_frame_equal(result, expected)
-
-    def test_corrwith(self):
-        a = self.tsframe
-        noise = Series(randn(len(a)), index=a.index)
-
-        b = self.tsframe + noise
-
-        # make sure order does not matter
-        b = b.reindex(columns=b.columns[::-1], index=b.index[::-1][10:])
-        del b['B']
-
-        colcorr = a.corrwith(b, axis=0)
-        assert_almost_equal(colcorr['A'], a['A'].corr(b['A']))
-
-        rowcorr = a.corrwith(b, axis=1)
-        assert_series_equal(rowcorr, a.T.corrwith(b.T, axis=0))
-
-        dropped = a.corrwith(b, axis=0, drop=True)
-        assert_almost_equal(dropped['A'], a['A'].corr(b['A']))
-        self.assertNotIn('B', dropped)
-
-        dropped = a.corrwith(b, axis=1, drop=True)
-        self.assertNotIn(a.index[-1], dropped.index)
-
-        # non time-series data
-        index = ['a', 'b', 'c', 'd', 'e']
-        columns = ['one', 'two', 'three', 'four']
-        df1 = DataFrame(randn(5, 4), index=index, columns=columns)
-        df2 = DataFrame(randn(4, 4), index=index[:4], columns=columns)
-        correls = df1.corrwith(df2, axis=1)
-        for row in index[:4]:
-            assert_almost_equal(correls[row], df1.ix[row].corr(df2.ix[row]))
-
-    def test_corrwith_with_objects(self):
-        df1 = tm.makeTimeDataFrame()
-        df2 = tm.makeTimeDataFrame()
-        cols = ['A', 'B', 'C', 'D']
-
-        df1['obj'] = 'foo'
-        df2['obj'] = 'bar'
-
-        result = df1.corrwith(df2)
-        expected = df1.ix[:, cols].corrwith(df2.ix[:, cols])
-        assert_series_equal(result, expected)
-
-        result = df1.corrwith(df2, axis=1)
-        expected = df1.ix[:, cols].corrwith(df2.ix[:, cols], axis=1)
-        assert_series_equal(result, expected)
-
-    def test_corrwith_series(self):
-        result = self.tsframe.corrwith(self.tsframe['A'])
-        expected = self.tsframe.apply(self.tsframe['A'].corr)
-
-        assert_series_equal(result, expected)
-
-    def test_corrwith_matches_corrcoef(self):
-        df1 = DataFrame(np.arange(10000), columns=['a'])
-        df2 = DataFrame(np.arange(10000)**2, columns=['a'])
-        c1 = df1.corrwith(df2)['a']
-        c2 = np.corrcoef(df1['a'],df2['a'])[0][1]
-
-        assert_almost_equal(c1, c2)
-        self.assertTrue(c1 < 1)
-
-    def test_drop_names(self):
-        df = DataFrame([[1, 2, 3],[3, 4, 5],[5, 6, 7]], index=['a', 'b', 'c'],
-                       columns=['d', 'e', 'f'])
-        df.index.name, df.columns.name = 'first', 'second'
-        df_dropped_b = df.drop('b')
-        df_dropped_e = df.drop('e', axis=1)
-        df_inplace_b, df_inplace_e = df.copy(), df.copy()
-        df_inplace_b.drop('b', inplace=True)
-        df_inplace_e.drop('e', axis=1, inplace=True)
-        for obj in (df_dropped_b, df_dropped_e, df_inplace_b, df_inplace_e):
-            self.assertEqual(obj.index.name, 'first')
-            self.assertEqual(obj.columns.name, 'second')
-        self.assertEqual(list(df.columns), ['d', 'e', 'f'])
-
-        self.assertRaises(ValueError, df.drop, ['g'])
-        self.assertRaises(ValueError, df.drop, ['g'], 1)
-
-        # errors = 'ignore'
-        dropped = df.drop(['g'], errors='ignore')
-        expected = Index(['a', 'b', 'c'], name='first')
-        self.assert_index_equal(dropped.index, expected)
-
-        dropped = df.drop(['b', 'g'], errors='ignore')
-        expected = Index(['a', 'c'], name='first')
-        self.assert_index_equal(dropped.index, expected)
-
-        dropped = df.drop(['g'], axis=1, errors='ignore')
-        expected = Index(['d', 'e', 'f'], name='second')
-        self.assert_index_equal(dropped.columns, expected)
-
-        dropped = df.drop(['d', 'g'], axis=1, errors='ignore')
-        expected = Index(['e', 'f'], name='second')
-        self.assert_index_equal(dropped.columns, expected)
-
-    def test_dropEmptyRows(self):
-        N = len(self.frame.index)
-        mat = randn(N)
-        mat[:5] = nan
-
-        frame = DataFrame({'foo': mat}, index=self.frame.index)
-        original = Series(mat, index=self.frame.index, name='foo')
-        expected = original.dropna()
-        inplace_frame1, inplace_frame2 = frame.copy(), frame.copy()
-
-        smaller_frame = frame.dropna(how='all')
-        # check that original was preserved
-        assert_series_equal(frame['foo'], original)
-        inplace_frame1.dropna(how='all', inplace=True)
-        assert_series_equal(smaller_frame['foo'], expected)
-        assert_series_equal(inplace_frame1['foo'], expected)
-
-        smaller_frame = frame.dropna(how='all', subset=['foo'])
-        inplace_frame2.dropna(how='all', subset=['foo'], inplace=True)
-        assert_series_equal(smaller_frame['foo'], expected)
-        assert_series_equal(inplace_frame2['foo'], expected)
-
-    def test_dropIncompleteRows(self):
-        N = len(self.frame.index)
-        mat = randn(N)
-        mat[:5] = nan
-
-        frame = DataFrame({'foo': mat}, index=self.frame.index)
-        frame['bar'] = 5
-        original = Series(mat, index=self.frame.index, name='foo')
-        inp_frame1, inp_frame2 = frame.copy(), frame.copy()
-
-        smaller_frame = frame.dropna()
-        assert_series_equal(frame['foo'], original)
-        inp_frame1.dropna(inplace=True)
-        self.assert_numpy_array_equal(smaller_frame['foo'], mat[5:])
-        self.assert_numpy_array_equal(inp_frame1['foo'], mat[5:])
-
-        samesize_frame = frame.dropna(subset=['bar'])
-        assert_series_equal(frame['foo'], original)
-        self.assertTrue((frame['bar'] == 5).all())
-        inp_frame2.dropna(subset=['bar'], inplace=True)
-        self.assertTrue(samesize_frame.index.equals(self.frame.index))
-        self.assertTrue(inp_frame2.index.equals(self.frame.index))
-
-    def test_dropna(self):
-        df = DataFrame(np.random.randn(6, 4))
-        df[2][:2] = nan
-
-        dropped = df.dropna(axis=1)
-        expected = df.ix[:, [0, 1, 3]]
-        inp = df.copy()
-        inp.dropna(axis=1, inplace=True)
-        assert_frame_equal(dropped, expected)
-        assert_frame_equal(inp, expected)
-
-        dropped = df.dropna(axis=0)
-        expected = df.ix[lrange(2, 6)]
-        inp = df.copy()
-        inp.dropna(axis=0, inplace=True)
-        assert_frame_equal(dropped, expected)
-        assert_frame_equal(inp, expected)
-
-        # threshold
-        dropped = df.dropna(axis=1, thresh=5)
-        expected = df.ix[:, [0, 1, 3]]
-        inp = df.copy()
-        inp.dropna(axis=1, thresh=5, inplace=True)
-        assert_frame_equal(dropped, expected)
-        assert_frame_equal(inp, expected)
-
-        dropped = df.dropna(axis=0, thresh=4)
-        expected = df.ix[lrange(2, 6)]
-        inp = df.copy()
-        inp.dropna(axis=0, thresh=4, inplace=True)
-        assert_frame_equal(dropped, expected)
-        assert_frame_equal(inp, expected)
-
-        dropped = df.dropna(axis=1, thresh=4)
-        assert_frame_equal(dropped, df)
-
-        dropped = df.dropna(axis=1, thresh=3)
-        assert_frame_equal(dropped, df)
-
-        # subset
-        dropped = df.dropna(axis=0, subset=[0, 1, 3])
-        inp = df.copy()
-        inp.dropna(axis=0, subset=[0, 1, 3], inplace=True)
-        assert_frame_equal(dropped, df)
-        assert_frame_equal(inp, df)
-
-        # all
-        dropped = df.dropna(axis=1, how='all')
-        assert_frame_equal(dropped, df)
-
-        df[2] = nan
-        dropped = df.dropna(axis=1, how='all')
-        expected = df.ix[:, [0, 1, 3]]
-        assert_frame_equal(dropped, expected)
-
-        # bad input
-        self.assertRaises(ValueError, df.dropna, axis=3)
-
-
-    def test_drop_and_dropna_caching(self):
-        # tst that cacher updates
-        original = Series([1, 2, np.nan], name='A')
-        expected = Series([1, 2], dtype=original.dtype, name='A')
-        df = pd.DataFrame({'A': original.values.copy()})
-        df2 = df.copy()
-        df['A'].dropna()
-        assert_series_equal(df['A'], original)
-        df['A'].dropna(inplace=True)
-        assert_series_equal(df['A'], expected)
-        df2['A'].drop([1])
-        assert_series_equal(df2['A'], original)
-        df2['A'].drop([1], inplace=True)
-        assert_series_equal(df2['A'], original.drop([1]))
-
-    def test_dropna_corner(self):
-        # bad input
-        self.assertRaises(ValueError, self.frame.dropna, how='foo')
-        self.assertRaises(TypeError, self.frame.dropna, how=None)
-        # non-existent column - 8303
-        self.assertRaises(KeyError, self.frame.dropna, subset=['A','X'])
-
-    def test_dropna_multiple_axes(self):
-        df = DataFrame([[1, np.nan, 2, 3],
-                        [4, np.nan, 5, 6],
-                        [np.nan, np.nan, np.nan, np.nan],
-                        [7, np.nan, 8, 9]])
-        cp = df.copy()
-        result = df.dropna(how='all', axis=[0, 1])
-        result2 = df.dropna(how='all', axis=(0, 1))
-        expected = df.dropna(how='all').dropna(how='all', axis=1)
-
-        assert_frame_equal(result, expected)
-        assert_frame_equal(result2, expected)
-        assert_frame_equal(df, cp)
-
-        inp = df.copy()
-        inp.dropna(how='all', axis=(0, 1), inplace=True)
-        assert_frame_equal(inp, expected)
-
-    def test_drop_duplicates(self):
-        df = DataFrame({'AAA': ['foo', 'bar', 'foo', 'bar',
-                                'foo', 'bar', 'bar', 'foo'],
-                        'B': ['one', 'one', 'two', 'two',
-                              'two', 'two', 'one', 'two'],
-                        'C': [1, 1, 2, 2, 2, 2, 1, 2],
-                        'D': lrange(8)})
-
-        # single column
-        result = df.drop_duplicates('AAA')
-        expected = df[:2]
-        assert_frame_equal(result, expected)
-
-        result = df.drop_duplicates('AAA', keep='last')
-        expected = df.ix[[6, 7]]
-        assert_frame_equal(result, expected)
-
-        result = df.drop_duplicates('AAA', keep=False)
-        expected = df.ix[[]]
-        assert_frame_equal(result, expected)
-        self.assertEqual(len(result), 0)
-
-        # deprecate take_last
-        with tm.assert_produces_warning(FutureWarning):
-            result = df.drop_duplicates('AAA', take_last=True)
-        expected = df.ix[[6, 7]]
-        assert_frame_equal(result, expected)
-
-        # multi column
-        expected = df.ix[[0, 1, 2, 3]]
-        result = df.drop_duplicates(np.array(['AAA', 'B']))
-        assert_frame_equal(result, expected)
-        result = df.drop_duplicates(['AAA', 'B'])
-        assert_frame_equal(result, expected)
-
-        result = df.drop_duplicates(('AAA', 'B'), keep='last')
-        expected = df.ix[[0, 5, 6, 7]]
-        assert_frame_equal(result, expected)
-
-        result = df.drop_duplicates(('AAA', 'B'), keep=False)
-        expected = df.ix[[0]]
-        assert_frame_equal(result, expected)
-
-        # deprecate take_last
-        with tm.assert_produces_warning(FutureWarning):
-            result = df.drop_duplicates(('AAA', 'B'), take_last=True)
-        expected = df.ix[[0, 5, 6, 7]]
-        assert_frame_equal(result, expected)
-
-        # consider everything
-        df2 = df.ix[:, ['AAA', 'B', 'C']]
-
-        result = df2.drop_duplicates()
-        # in this case only
-        expected = df2.drop_duplicates(['AAA', 'B'])
-        assert_frame_equal(result, expected)
-
-        result = df2.drop_duplicates(keep='last')
-        expected = df2.drop_duplicates(['AAA', 'B'], keep='last')
-        assert_frame_equal(result, expected)
-
-        result = df2.drop_duplicates(keep=False)
-        expected = df2.drop_duplicates(['AAA', 'B'], keep=False)
-        assert_frame_equal(result, expected)
-
-        # deprecate take_last
-        with tm.assert_produces_warning(FutureWarning):
-            result = df2.drop_duplicates(take_last=True)
-        with tm.assert_produces_warning(FutureWarning):
-            expected = df2.drop_duplicates(['AAA', 'B'], take_last=True)
-        assert_frame_equal(result, expected)
-
-        # integers
-        result = df.drop_duplicates('C')
-        expected = df.iloc[[0,2]]
-        assert_frame_equal(result, expected)
-        result = df.drop_duplicates('C',keep='last')
-        expected = df.iloc[[-2,-1]]
-        assert_frame_equal(result, expected)
-
-        df['E'] = df['C'].astype('int8')
-        result = df.drop_duplicates('E')
-        expected = df.iloc[[0,2]]
-        assert_frame_equal(result, expected)
-        result = df.drop_duplicates('E',keep='last')
-        expected = df.iloc[[-2,-1]]
-        assert_frame_equal(result, expected)
-
-        # GH 11376
-        df = pd.DataFrame({'x': [7, 6, 3, 3, 4, 8, 0],
-                           'y': [0, 6, 5, 5, 9, 1, 2]})
-        expected = df.loc[df.index != 3]
-        assert_frame_equal(df.drop_duplicates(), expected)
-
-        df = pd.DataFrame([[1 , 0], [0, 2]])
-        assert_frame_equal(df.drop_duplicates(), df)
-
-        df = pd.DataFrame([[-2, 0], [0, -4]])
-        assert_frame_equal(df.drop_duplicates(), df)
-
-        x = np.iinfo(np.int64).max / 3 * 2
-        df = pd.DataFrame([[-x, x], [0, x + 4]])
-        assert_frame_equal(df.drop_duplicates(), df)
-
-        df = pd.DataFrame([[-x, x], [x, x + 4]])
-        assert_frame_equal(df.drop_duplicates(), df)
-
-        # GH 11864
-        df = pd.DataFrame([i] * 9 for i in range(16))
-        df = df.append([[1] + [0] * 8], ignore_index=True)
-
-        for keep in ['first', 'last', False]:
-            assert_equal(df.duplicated(keep=keep).sum(), 0)
-
-    def test_drop_duplicates_for_take_all(self):
-        df = DataFrame({'AAA': ['foo', 'bar', 'baz', 'bar',
-                                'foo', 'bar', 'qux', 'foo'],
-                        'B': ['one', 'one', 'two', 'two',
-                              'two', 'two', 'one', 'two'],
-                        'C': [1, 1, 2, 2, 2, 2, 1, 2],
-                        'D': lrange(8)})
-
-        # single column
-        result = df.drop_duplicates('AAA')
-        expected = df.iloc[[0, 1, 2, 6]]
-        assert_frame_equal(result, expected)
-
-        result = df.drop_duplicates('AAA', keep='last')
-        expected = df.iloc[[2, 5, 6, 7]]
-        assert_frame_equal(result, expected)
-
-        result = df.drop_duplicates('AAA', keep=False)
-        expected = df.iloc[[2, 6]]
-        assert_frame_equal(result, expected)
-
-        # multiple columns
-        result = df.drop_duplicates(['AAA', 'B'])
-        expected = df.iloc[[0, 1, 2, 3, 4, 6]]
-        assert_frame_equal(result, expected)
-
-        result = df.drop_duplicates(['AAA', 'B'], keep='last')
-        expected = df.iloc[[0, 1, 2, 5, 6, 7]]
-        assert_frame_equal(result, expected)
-
-        result = df.drop_duplicates(['AAA', 'B'], keep=False)
-        expected = df.iloc[[0, 1, 2, 6]]
-        assert_frame_equal(result, expected)
-
-    def test_drop_duplicates_deprecated_warning(self):
-        df = DataFrame({'AAA': ['foo', 'bar', 'foo', 'bar',
-                                'foo', 'bar', 'bar', 'foo'],
-                        'B': ['one', 'one', 'two', 'two',
-                              'two', 'two', 'one', 'two'],
-                        'C': [1, 1, 2, 2, 2, 2, 1, 2],
-                        'D': lrange(8)})
-        expected = df[:2]
-
-        # Raises warning
-        with tm.assert_produces_warning(False):
-            result = df.drop_duplicates(subset='AAA')
-        assert_frame_equal(result, expected)
-
-        with tm.assert_produces_warning(FutureWarning):
-            result = df.drop_duplicates(cols='AAA')
-        assert_frame_equal(result, expected)
-
-        # Does not allow both subset and cols
-        self.assertRaises(TypeError, df.drop_duplicates,
-                          kwargs={'cols': 'AAA', 'subset': 'B'})
-
-        # Does not allow unknown kwargs
-        self.assertRaises(TypeError, df.drop_duplicates,
-                          kwargs={'subset': 'AAA', 'bad_arg': True})
-
-        # deprecate take_last
-        # Raises warning
-        with tm.assert_produces_warning(FutureWarning):
-            result = df.drop_duplicates(take_last=False, subset='AAA')
-        assert_frame_equal(result, expected)
-
-        self.assertRaises(ValueError, df.drop_duplicates, keep='invalid_name')
-
-    def test_drop_duplicates_tuple(self):
-        df = DataFrame({('AA', 'AB'): ['foo', 'bar', 'foo', 'bar',
-                                       'foo', 'bar', 'bar', 'foo'],
-                        'B': ['one', 'one', 'two', 'two',
-                              'two', 'two', 'one', 'two'],
-                        'C': [1, 1, 2, 2, 2, 2, 1, 2],
-                        'D': lrange(8)})
-
-        # single column
-        result = df.drop_duplicates(('AA', 'AB'))
-        expected = df[:2]
-        assert_frame_equal(result, expected)
-
-        result = df.drop_duplicates(('AA', 'AB'), keep='last')
-        expected = df.ix[[6, 7]]
-        assert_frame_equal(result, expected)
-
-        result = df.drop_duplicates(('AA', 'AB'), keep=False)
-        expected = df.ix[[]]  # empty df
-        self.assertEqual(len(result), 0)
-        assert_frame_equal(result, expected)
-
-        # deprecate take_last
-        with tm.assert_produces_warning(FutureWarning):
-            result = df.drop_duplicates(('AA', 'AB'), take_last=True)
-        expected = df.ix[[6, 7]]
-        assert_frame_equal(result, expected)
-
-        # multi column
-        expected = df.ix[[0, 1, 2, 3]]
-        result = df.drop_duplicates((('AA', 'AB'), 'B'))
-        assert_frame_equal(result, expected)
-
-    def test_drop_duplicates_NA(self):
-        # none
-        df = DataFrame({'A': [None, None, 'foo', 'bar',
-                              'foo', 'bar', 'bar', 'foo'],
-                        'B': ['one', 'one', 'two', 'two',
-                              'two', 'two', 'one', 'two'],
-                        'C': [1.0, np.nan, np.nan, np.nan, 1., 1., 1, 1.],
-                        'D': lrange(8)})
-
-        # single column
-        result = df.drop_duplicates('A')
-        expected = df.ix[[0, 2, 3]]
-        assert_frame_equal(result, expected)
-
-        result = df.drop_duplicates('A', keep='last')
-        expected = df.ix[[1, 6, 7]]
-        assert_frame_equal(result, expected)
-
-        result = df.drop_duplicates('A', keep=False)
-        expected = df.ix[[]]  # empty df
-        assert_frame_equal(result, expected)
-        self.assertEqual(len(result), 0)
-
-        # deprecate take_last
-        with tm.assert_produces_warning(FutureWarning):
-            result = df.drop_duplicates('A', take_last=True)
-        expected = df.ix[[1, 6, 7]]
-        assert_frame_equal(result, expected)
-
-        # multi column
-        result = df.drop_duplicates(['A', 'B'])
-        expected = df.ix[[0, 2, 3, 6]]
-        assert_frame_equal(result, expected)
-
-        result = df.drop_duplicates(['A', 'B'], keep='last')
-        expected = df.ix[[1, 5, 6, 7]]
-        assert_frame_equal(result, expected)
-
-        result = df.drop_duplicates(['A', 'B'], keep=False)
-        expected = df.ix[[6]]
-        assert_frame_equal(result, expected)
-
-        # deprecate take_last
-        with tm.assert_produces_warning(FutureWarning):
-            result = df.drop_duplicates(['A', 'B'], take_last=True)
-        expected = df.ix[[1, 5, 6, 7]]
-        assert_frame_equal(result, expected)
-
-        # nan
-        df = DataFrame({'A': ['foo', 'bar', 'foo', 'bar',
-                              'foo', 'bar', 'bar', 'foo'],
-                        'B': ['one', 'one', 'two', 'two',
-                              'two', 'two', 'one', 'two'],
-                        'C': [1.0, np.nan, np.nan, np.nan, 1., 1., 1, 1.],
-                        'D': lrange(8)})
-
-        # single column
-        result = df.drop_duplicates('C')
-        expected = df[:2]
-        assert_frame_equal(result, expected)
-
-        result = df.drop_duplicates('C', keep='last')
-        expected = df.ix[[3, 7]]
-        assert_frame_equal(result, expected)
-
-        result = df.drop_duplicates('C', keep=False)
-        expected = df.ix[[]]  # empty df
-        assert_frame_equal(result, expected)
-        self.assertEqual(len(result), 0)
-
-        # deprecate take_last
-        with tm.assert_produces_warning(FutureWarning):
-            result = df.drop_duplicates('C', take_last=True)
-        expected = df.ix[[3, 7]]
-        assert_frame_equal(result, expected)
-
-        # multi column
-        result = df.drop_duplicates(['C', 'B'])
-        expected = df.ix[[0, 1, 2, 4]]
-        assert_frame_equal(result, expected)
-
-        result = df.drop_duplicates(['C', 'B'], keep='last')
-        expected = df.ix[[1, 3, 6, 7]]
-        assert_frame_equal(result, expected)
-
-        result = df.drop_duplicates(['C', 'B'], keep=False)
-        expected = df.ix[[1]]
-        assert_frame_equal(result, expected)
-
-        # deprecate take_last
-        with tm.assert_produces_warning(FutureWarning):
-            result = df.drop_duplicates(['C', 'B'], take_last=True)
-        expected = df.ix[[1, 3, 6, 7]]
-        assert_frame_equal(result, expected)
-
-    def test_drop_duplicates_NA_for_take_all(self):
-        # none
-        df = DataFrame({'A': [None, None, 'foo', 'bar',
-                              'foo', 'baz', 'bar', 'qux'],
-                        'C': [1.0, np.nan, np.nan, np.nan, 1., 2., 3, 1.]})
-
-        # single column
-        result = df.drop_duplicates('A')
-        expected = df.iloc[[0, 2, 3, 5, 7]]
-        assert_frame_equal(result, expected)
-
-        result = df.drop_duplicates('A', keep='last')
-        expected = df.iloc[[1, 4, 5, 6, 7]]
-        assert_frame_equal(result, expected)
-
-        result = df.drop_duplicates('A', keep=False)
-        expected = df.iloc[[5, 7]]
-        assert_frame_equal(result, expected)
-
-        # nan
-
-        # single column
-        result = df.drop_duplicates('C')
-        expected = df.iloc[[0, 1, 5, 6]]
-        assert_frame_equal(result, expected)
-
-        result = df.drop_duplicates('C', keep='last')
-        expected = df.iloc[[3, 5, 6, 7]]
-        assert_frame_equal(result, expected)
-
-        result = df.drop_duplicates('C', keep=False)
-        expected = df.iloc[[5, 6]]
-        assert_frame_equal(result, expected)
-
-    def test_drop_duplicates_inplace(self):
-        orig = DataFrame({'A': ['foo', 'bar', 'foo', 'bar',
-                                'foo', 'bar', 'bar', 'foo'],
-                          'B': ['one', 'one', 'two', 'two',
-                                'two', 'two', 'one', 'two'],
-                          'C': [1, 1, 2, 2, 2, 2, 1, 2],
-                          'D': lrange(8)})
-
-        # single column
-        df = orig.copy()
-        df.drop_duplicates('A', inplace=True)
-        expected = orig[:2]
-        result = df
-        assert_frame_equal(result, expected)
-
-        df = orig.copy()
-        df.drop_duplicates('A', keep='last', inplace=True)
-        expected = orig.ix[[6, 7]]
-        result = df
-        assert_frame_equal(result, expected)
-
-        df = orig.copy()
-        df.drop_duplicates('A', keep=False, inplace=True)
-        expected = orig.ix[[]]
-        result = df
-        assert_frame_equal(result, expected)
-        self.assertEqual(len(df), 0)
-
-        # deprecate take_last
-        df = orig.copy()
-        with tm.assert_produces_warning(FutureWarning):
-            df.drop_duplicates('A', take_last=True, inplace=True)
-        expected = orig.ix[[6, 7]]
-        result = df
-        assert_frame_equal(result, expected)
-
-        # multi column
-        df = orig.copy()
-        df.drop_duplicates(['A', 'B'], inplace=True)
-        expected = orig.ix[[0, 1, 2, 3]]
-        result = df
-        assert_frame_equal(result, expected)
-
-        df = orig.copy()
-        df.drop_duplicates(['A', 'B'], keep='last', inplace=True)
-        expected = orig.ix[[0, 5, 6, 7]]
-        result = df
-        assert_frame_equal(result, expected)
-
-        df = orig.copy()
-        df.drop_duplicates(['A', 'B'], keep=False, inplace=True)
-        expected = orig.ix[[0]]
-        result = df
-        assert_frame_equal(result, expected)
-
-        # deprecate take_last
-        df = orig.copy()
-        with tm.assert_produces_warning(FutureWarning):
-            df.drop_duplicates(['A', 'B'], take_last=True, inplace=True)
-        expected = orig.ix[[0, 5, 6, 7]]
-        result = df
-        assert_frame_equal(result, expected)
-
-        # consider everything
-        orig2 = orig.ix[:, ['A', 'B', 'C']].copy()
-
-        df2 = orig2.copy()
-        df2.drop_duplicates(inplace=True)
-        # in this case only
-        expected = orig2.drop_duplicates(['A', 'B'])
-        result = df2
-        assert_frame_equal(result, expected)
-
-        df2 = orig2.copy()
-        df2.drop_duplicates(keep='last', inplace=True)
-        expected = orig2.drop_duplicates(['A', 'B'], keep='last')
-        result = df2
-        assert_frame_equal(result, expected)
-
-        df2 = orig2.copy()
-        df2.drop_duplicates(keep=False, inplace=True)
-        expected = orig2.drop_duplicates(['A', 'B'], keep=False)
-        result = df2
-        assert_frame_equal(result, expected)
-
-        # deprecate take_last
-        df2 = orig2.copy()
-        with tm.assert_produces_warning(FutureWarning):
-            df2.drop_duplicates(take_last=True, inplace=True)
-        with tm.assert_produces_warning(FutureWarning):
-            expected = orig2.drop_duplicates(['A', 'B'], take_last=True)
-        result = df2
-        assert_frame_equal(result, expected)
-
-    def test_duplicated_deprecated_warning(self):
-        df = DataFrame({'AAA': ['foo', 'bar', 'foo', 'bar',
-                                'foo', 'bar', 'bar', 'foo'],
-                        'B': ['one', 'one', 'two', 'two',
-                              'two', 'two', 'one', 'two'],
-                        'C': [1, 1, 2, 2, 2, 2, 1, 2],
-                        'D': lrange(8)})
-
-        # Raises warning
-        with tm.assert_produces_warning(False):
-            result = df.duplicated(subset='AAA')
-
-        with tm.assert_produces_warning(FutureWarning):
-            result = df.duplicated(cols='AAA')
-
-        # Does not allow both subset and cols
-        self.assertRaises(TypeError, df.duplicated,
-                          kwargs={'cols': 'AAA', 'subset': 'B'})
-
-        # Does not allow unknown kwargs
-        self.assertRaises(TypeError, df.duplicated,
-                          kwargs={'subset': 'AAA', 'bad_arg': True})
-
-    def test_drop_col_still_multiindex(self):
-        arrays = [['a', 'b', 'c', 'top'],
-                  ['', '', '', 'OD'],
-                  ['', '', '', 'wx']]
-
-        tuples = sorted(zip(*arrays))
-        index = MultiIndex.from_tuples(tuples)
-
-        df = DataFrame(randn(3, 4), columns=index)
-        del df[('a', '', '')]
-        assert(isinstance(df.columns, MultiIndex))
-
-    def test_drop(self):
-        simple = DataFrame({"A": [1, 2, 3, 4], "B": [0, 1, 2, 3]})
-        assert_frame_equal(simple.drop("A", axis=1), simple[['B']])
-        assert_frame_equal(simple.drop(["A", "B"], axis='columns'),
-                           simple[[]])
-        assert_frame_equal(simple.drop([0, 1, 3], axis=0), simple.ix[[2], :])
-        assert_frame_equal(simple.drop([0, 3], axis='index'), simple.ix[[1, 2], :])
-
-        self.assertRaises(ValueError, simple.drop, 5)
-        self.assertRaises(ValueError, simple.drop, 'C', 1)
-        self.assertRaises(ValueError, simple.drop, [1, 5])
-        self.assertRaises(ValueError, simple.drop, ['A', 'C'], 1)
-
-        # errors = 'ignore'
-        assert_frame_equal(simple.drop(5, errors='ignore'), simple)
-        assert_frame_equal(simple.drop([0, 5], errors='ignore'),
-                           simple.ix[[1, 2, 3], :])
-        assert_frame_equal(simple.drop('C', axis=1, errors='ignore'), simple)
-        assert_frame_equal(simple.drop(['A', 'C'], axis=1, errors='ignore'),
-                           simple[['B']])
-
-        #non-unique - wheee!
-        nu_df = DataFrame(lzip(range(3), range(-3, 1), list('abc')),
-                          columns=['a', 'a', 'b'])
-        assert_frame_equal(nu_df.drop('a', axis=1), nu_df[['b']])
-        assert_frame_equal(nu_df.drop('b', axis='columns'), nu_df['a'])
-
-        nu_df = nu_df.set_index(pd.Index(['X', 'Y', 'X']))
-        nu_df.columns = list('abc')
-        assert_frame_equal(nu_df.drop('X', axis='rows'), nu_df.ix[["Y"], :])
-        assert_frame_equal(nu_df.drop(['X', 'Y'], axis=0), nu_df.ix[[], :])
-
-        # inplace cache issue
-        # GH 5628
-        df = pd.DataFrame(np.random.randn(10,3), columns=list('abc'))
-        expected = df[~(df.b>0)]
-        df.drop(labels=df[df.b>0].index, inplace=True)
-        assert_frame_equal(df,expected)
-
-    def test_fillna(self):
-        self.tsframe.ix[:5,'A'] = nan
-        self.tsframe.ix[-5:,'A'] = nan
-
-        zero_filled = self.tsframe.fillna(0)
-        self.assertTrue((zero_filled.ix[:5,'A'] == 0).all())
-
-        padded = self.tsframe.fillna(method='pad')
-        self.assertTrue(np.isnan(padded.ix[:5,'A']).all())
-        self.assertTrue((padded.ix[-5:,'A'] == padded.ix[-5,'A']).all())
-
-        # mixed type
-        self.mixed_frame.ix[5:20,'foo'] = nan
-        self.mixed_frame.ix[-10:,'A'] = nan
-        result = self.mixed_frame.fillna(value=0)
-        result = self.mixed_frame.fillna(method='pad')
-
-        self.assertRaises(ValueError, self.tsframe.fillna)
-        self.assertRaises(ValueError, self.tsframe.fillna, 5, method='ffill')
-
-        # mixed numeric (but no float16)
-        mf = self.mixed_float.reindex(columns=['A','B','D'])
-        mf.ix[-10:,'A'] = nan
-        result = mf.fillna(value=0)
-        _check_mixed_float(result, dtype = dict(C = None))
-
-        result = mf.fillna(method='pad')
-        _check_mixed_float(result, dtype = dict(C = None))
-
-        # empty frame (GH #2778)
-        df = DataFrame(columns=['x'])
-        for m in ['pad','backfill']:
-            df.x.fillna(method=m,inplace=1)
-            df.x.fillna(method=m)
-
-        # with different dtype (GH3386)
-        df = DataFrame([['a','a',np.nan,'a'],['b','b',np.nan,'b'],['c','c',np.nan,'c']])
-
-        result = df.fillna({ 2: 'foo' })
-        expected = DataFrame([['a','a','foo','a'],['b','b','foo','b'],['c','c','foo','c']])
-        assert_frame_equal(result, expected)
-
-        df.fillna({ 2: 'foo' }, inplace=True)
-        assert_frame_equal(df, expected)
-
-        # limit and value
-        df = DataFrame(np.random.randn(10,3))
-        df.iloc[2:7,0] = np.nan
-        df.iloc[3:5,2] = np.nan
-
-        expected = df.copy()
-        expected.iloc[2,0] = 999
-        expected.iloc[3,2] = 999
-        result = df.fillna(999,limit=1)
-        assert_frame_equal(result, expected)
-
-        # with datelike
-        # GH 6344
-        df = DataFrame({
-            'Date':[pd.NaT, Timestamp("2014-1-1")],
-            'Date2':[ Timestamp("2013-1-1"), pd.NaT]
-            })
-
-        expected = df.copy()
-        expected['Date'] = expected['Date'].fillna(df.ix[0,'Date2'])
-        result = df.fillna(value={'Date':df['Date2']})
-        assert_frame_equal(result, expected)
-
-    def test_fillna_dtype_conversion(self):
-        # make sure that fillna on an empty frame works
-        df = DataFrame(index=["A","B","C"], columns = [1,2,3,4,5])
-        result = df.get_dtype_counts().sort_values()
-        expected = Series({ 'object' : 5 })
-        assert_series_equal(result, expected)
-
-        result = df.fillna(1)
-        expected = DataFrame(1, index=["A","B","C"], columns = [1,2,3,4,5])
-        result = result.get_dtype_counts().sort_values()
-        expected = Series({ 'int64' : 5 })
-        assert_series_equal(result, expected)
-
-        # empty block
-        df = DataFrame(index=lrange(3),columns=['A','B'],dtype='float64')
-        result = df.fillna('nan')
-        expected = DataFrame('nan',index=lrange(3),columns=['A','B'])
-        assert_frame_equal(result, expected)
-
-        # equiv of replace
-        df = DataFrame(dict(A = [1,np.nan], B = [1.,2.]))
-        for v in ['',1,np.nan,1.0]:
-            expected = df.replace(np.nan,v)
-            result = df.fillna(v)
-            assert_frame_equal(result, expected)
-
-    def test_fillna_datetime_columns(self):
-        # GH 7095
-        df = pd.DataFrame({'A': [-1, -2, np.nan],
-                           'B': date_range('20130101', periods=3),
-                           'C': ['foo', 'bar', None],
-                           'D': ['foo2', 'bar2', None]},
-                          index=date_range('20130110', periods=3))
-        result = df.fillna('?')
-        expected = pd.DataFrame({'A': [-1, -2, '?'],
-                                 'B': date_range('20130101', periods=3),
-                                 'C': ['foo', 'bar', '?'],
-                                 'D': ['foo2', 'bar2', '?']},
-                                index=date_range('20130110', periods=3))
-        self.assert_frame_equal(result, expected)
-
-        df = pd.DataFrame({'A': [-1, -2, np.nan],
-                           'B': [pd.Timestamp('2013-01-01'), pd.Timestamp('2013-01-02'), pd.NaT],
-                           'C': ['foo', 'bar', None],
-                           'D': ['foo2', 'bar2', None]},
-                          index=date_range('20130110', periods=3))
-        result = df.fillna('?')
-        expected = pd.DataFrame({'A': [-1, -2, '?'],
-                                 'B': [pd.Timestamp('2013-01-01'), pd.Timestamp('2013-01-02'), '?'],
-                                 'C': ['foo', 'bar', '?'],
-                                 'D': ['foo2', 'bar2', '?']},
-                                index=date_range('20130110', periods=3))
-        self.assert_frame_equal(result, expected)
-
-    def test_ffill(self):
-        self.tsframe['A'][:5] = nan
-        self.tsframe['A'][-5:] = nan
-
-        assert_frame_equal(self.tsframe.ffill(),
-                           self.tsframe.fillna(method='ffill'))
-
-    def test_bfill(self):
-        self.tsframe['A'][:5] = nan
-        self.tsframe['A'][-5:] = nan
-
-        assert_frame_equal(self.tsframe.bfill(),
-                           self.tsframe.fillna(method='bfill'))
-
-    def test_fillna_skip_certain_blocks(self):
-        # don't try to fill boolean, int blocks
-
-        df = DataFrame(np.random.randn(10, 4).astype(int))
-
-        # it works!
-        df.fillna(np.nan)
-
-    def test_fillna_inplace(self):
-        df = DataFrame(np.random.randn(10, 4))
-        df[1][:4] = np.nan
-        df[3][-4:] = np.nan
-
-        expected = df.fillna(value=0)
-        self.assertIsNot(expected, df)
-
-        df.fillna(value=0, inplace=True)
-        assert_frame_equal(df, expected)
-
-        df[1][:4] = np.nan
-        df[3][-4:] = np.nan
-        expected = df.fillna(method='ffill')
-        self.assertIsNot(expected, df)
-
-        df.fillna(method='ffill', inplace=True)
-        assert_frame_equal(df, expected)
-
-    def test_fillna_dict_series(self):
-        df = DataFrame({'a': [nan, 1, 2, nan, nan],
-                        'b': [1, 2, 3, nan, nan],
-                        'c': [nan, 1, 2, 3, 4]})
-
-        result = df.fillna({'a': 0, 'b': 5})
-
-        expected = df.copy()
-        expected['a'] = expected['a'].fillna(0)
-        expected['b'] = expected['b'].fillna(5)
-        assert_frame_equal(result, expected)
-
-        # it works
-        result = df.fillna({'a': 0, 'b': 5, 'd': 7})
-
-        # Series treated same as dict
-        result = df.fillna(df.max())
-        expected = df.fillna(df.max().to_dict())
-        assert_frame_equal(result, expected)
-
-        # disable this for now
-        with assertRaisesRegexp(NotImplementedError, 'column by column'):
-            df.fillna(df.max(1), axis=1)
-
-    def test_fillna_dataframe(self):
-        # GH 8377
-        df = DataFrame({'a': [nan, 1, 2, nan, nan],
-                        'b': [1, 2, 3, nan, nan],
-                        'c': [nan, 1, 2, 3, 4]},
-                       index = list('VWXYZ'))
-
-        # df2 may have different index and columns
-        df2 = DataFrame({'a': [nan, 10, 20, 30, 40],
-                         'b': [50, 60, 70, 80, 90],
-                         'foo': ['bar']*5},
-                        index = list('VWXuZ'))
-
-        result = df.fillna(df2)
-
-        # only those columns and indices which are shared get filled
-        expected = DataFrame({'a': [nan, 1, 2, nan, 40],
-                              'b': [1, 2, 3, nan, 90],
-                              'c': [nan, 1, 2, 3, 4]},
-                             index = list('VWXYZ'))
-
-        assert_frame_equal(result, expected)
-
-    def test_fillna_columns(self):
-        df = DataFrame(np.random.randn(10, 10))
-        df.values[:, ::2] = np.nan
-
-        result = df.fillna(method='ffill', axis=1)
-        expected = df.T.fillna(method='pad').T
-        assert_frame_equal(result, expected)
-
-        df.insert(6, 'foo', 5)
-        result = df.fillna(method='ffill', axis=1)
-        expected = df.astype(float).fillna(method='ffill', axis=1)
-        assert_frame_equal(result, expected)
-
-
-    def test_fillna_invalid_method(self):
-        with assertRaisesRegexp(ValueError, 'ffil'):
-            self.frame.fillna(method='ffil')
-
-    def test_fillna_invalid_value(self):
-        # list
-        self.assertRaises(TypeError, self.frame.fillna, [1, 2])
-        # tuple
-        self.assertRaises(TypeError, self.frame.fillna, (1, 2))
-        # frame with series
-        self.assertRaises(ValueError, self.frame.iloc[:,0].fillna, self.frame)
-
-    def test_replace_inplace(self):
-        self.tsframe['A'][:5] = nan
-        self.tsframe['A'][-5:] = nan
-
-        tsframe = self.tsframe.copy()
-        tsframe.replace(nan, 0, inplace=True)
-        assert_frame_equal(tsframe, self.tsframe.fillna(0))
-
-        self.assertRaises(TypeError, self.tsframe.replace, nan, inplace=True)
-        self.assertRaises(TypeError, self.tsframe.replace, nan)
-
-        # mixed type
-        self.mixed_frame.ix[5:20,'foo'] = nan
-        self.mixed_frame.ix[-10:,'A'] = nan
-
-        result = self.mixed_frame.replace(np.nan, 0)
-        expected = self.mixed_frame.fillna(value=0)
-        assert_frame_equal(result, expected)
-
-        tsframe = self.tsframe.copy()
-        tsframe.replace([nan], [0], inplace=True)
-        assert_frame_equal(tsframe, self.tsframe.fillna(0))
-
-    def test_regex_replace_scalar(self):
-        obj = {'a': list('ab..'), 'b': list('efgh')}
-        dfobj = DataFrame(obj)
-        mix = {'a': lrange(4), 'b': list('ab..')}
-        dfmix = DataFrame(mix)
-
-        ### simplest cases
-        ## regex -> value
-        # obj frame
-        res = dfobj.replace(r'\s*\.\s*', nan, regex=True)
-        assert_frame_equal(dfobj, res.fillna('.'))
-
-        # mixed
-        res = dfmix.replace(r'\s*\.\s*', nan, regex=True)
-        assert_frame_equal(dfmix, res.fillna('.'))
-
-        ## regex -> regex
-        # obj frame
-        res = dfobj.replace(r'\s*(\.)\s*', r'\1\1\1', regex=True)
-        objc = obj.copy()
-        objc['a'] = ['a', 'b', '...', '...']
-        expec = DataFrame(objc)
-        assert_frame_equal(res, expec)
-
-        # with mixed
-        res = dfmix.replace(r'\s*(\.)\s*', r'\1\1\1', regex=True)
-        mixc = mix.copy()
-        mixc['b'] = ['a', 'b', '...', '...']
-        expec = DataFrame(mixc)
-        assert_frame_equal(res, expec)
-
-        # everything with compiled regexs as well
-        res = dfobj.replace(re.compile(r'\s*\.\s*'), nan, regex=True)
-        assert_frame_equal(dfobj, res.fillna('.'))
-
-        # mixed
-        res = dfmix.replace(re.compile(r'\s*\.\s*'), nan, regex=True)
-        assert_frame_equal(dfmix, res.fillna('.'))
-
-        ## regex -> regex
-        # obj frame
-        res = dfobj.replace(re.compile(r'\s*(\.)\s*'), r'\1\1\1')
-        objc = obj.copy()
-        objc['a'] = ['a', 'b', '...', '...']
-        expec = DataFrame(objc)
-        assert_frame_equal(res, expec)
-
-        # with mixed
-        res = dfmix.replace(re.compile(r'\s*(\.)\s*'), r'\1\1\1')
-        mixc = mix.copy()
-        mixc['b'] = ['a', 'b', '...', '...']
-        expec = DataFrame(mixc)
-        assert_frame_equal(res, expec)
-
-        res = dfmix.replace(regex=re.compile(r'\s*(\.)\s*'), value=r'\1\1\1')
-        mixc = mix.copy()
-        mixc['b'] = ['a', 'b', '...', '...']
-        expec = DataFrame(mixc)
-        assert_frame_equal(res, expec)
-
-        res = dfmix.replace(regex=r'\s*(\.)\s*', value=r'\1\1\1')
-        mixc = mix.copy()
-        mixc['b'] = ['a', 'b', '...', '...']
-        expec = DataFrame(mixc)
-        assert_frame_equal(res, expec)
-
-    def test_regex_replace_scalar_inplace(self):
-        obj = {'a': list('ab..'), 'b': list('efgh')}
-        dfobj = DataFrame(obj)
-        mix = {'a': lrange(4), 'b': list('ab..')}
-        dfmix = DataFrame(mix)
-
-        ### simplest cases
-        ## regex -> value
-        # obj frame
-        res = dfobj.copy()
-        res.replace(r'\s*\.\s*', nan, regex=True, inplace=True)
-        assert_frame_equal(dfobj, res.fillna('.'))
-
-        # mixed
-        res = dfmix.copy()
-        res.replace(r'\s*\.\s*', nan, regex=True, inplace=True)
-        assert_frame_equal(dfmix, res.fillna('.'))
-
-        ## regex -> regex
-        # obj frame
-        res = dfobj.copy()
-        res.replace(r'\s*(\.)\s*', r'\1\1\1', regex=True, inplace=True)
-        objc = obj.copy()
-        objc['a'] = ['a', 'b', '...', '...']
-        expec = DataFrame(objc)
-        assert_frame_equal(res, expec)
-
-        # with mixed
-        res = dfmix.copy()
-        res.replace(r'\s*(\.)\s*', r'\1\1\1', regex=True, inplace=True)
-        mixc = mix.copy()
-        mixc['b'] = ['a', 'b', '...', '...']
-        expec = DataFrame(mixc)
-        assert_frame_equal(res, expec)
-
-        # everything with compiled regexs as well
-        res = dfobj.copy()
-        res.replace(re.compile(r'\s*\.\s*'), nan, regex=True, inplace=True)
-        assert_frame_equal(dfobj, res.fillna('.'))
-
-        # mixed
-        res = dfmix.copy()
-        res.replace(re.compile(r'\s*\.\s*'), nan, regex=True, inplace=True)
-        assert_frame_equal(dfmix, res.fillna('.'))
-
-        ## regex -> regex
-        # obj frame
-        res = dfobj.copy()
-        res.replace(re.compile(r'\s*(\.)\s*'), r'\1\1\1', regex=True,
-                    inplace=True)
-        objc = obj.copy()
-        objc['a'] = ['a', 'b', '...', '...']
-        expec = DataFrame(objc)
-        assert_frame_equal(res, expec)
-
-        # with mixed
-        res = dfmix.copy()
-        res.replace(re.compile(r'\s*(\.)\s*'), r'\1\1\1', regex=True,
-                    inplace=True)
-        mixc = mix.copy()
-        mixc['b'] = ['a', 'b', '...', '...']
-        expec = DataFrame(mixc)
-        assert_frame_equal(res, expec)
-
-        res = dfobj.copy()
-        res.replace(regex=r'\s*\.\s*', value=nan, inplace=True)
-        assert_frame_equal(dfobj, res.fillna('.'))
-
-        # mixed
-        res = dfmix.copy()
-        res.replace(regex=r'\s*\.\s*', value=nan, inplace=True)
-        assert_frame_equal(dfmix, res.fillna('.'))
-
-        ## regex -> regex
-        # obj frame
-        res = dfobj.copy()
-        res.replace(regex=r'\s*(\.)\s*', value=r'\1\1\1', inplace=True)
-        objc = obj.copy()
-        objc['a'] = ['a', 'b', '...', '...']
-        expec = DataFrame(objc)
-        assert_frame_equal(res, expec)
-
-        # with mixed
-        res = dfmix.copy()
-        res.replace(regex=r'\s*(\.)\s*', value=r'\1\1\1', inplace=True)
-        mixc = mix.copy()
-        mixc['b'] = ['a', 'b', '...', '...']
-        expec = DataFrame(mixc)
-        assert_frame_equal(res, expec)
-
-        # everything with compiled regexs as well
-        res = dfobj.copy()
-        res.replace(regex=re.compile(r'\s*\.\s*'), value=nan, inplace=True)
-        assert_frame_equal(dfobj, res.fillna('.'))
-
-        # mixed
-        res = dfmix.copy()
-        res.replace(regex=re.compile(r'\s*\.\s*'), value=nan,
inplace=True) - assert_frame_equal(dfmix, res.fillna('.')) - - ## regex -> regex - # obj frame - res = dfobj.copy() - res.replace(regex=re.compile(r'\s*(\.)\s*'), value=r'\1\1\1', - inplace=True) - objc = obj.copy() - objc['a'] = ['a', 'b', '...', '...'] - expec = DataFrame(objc) - assert_frame_equal(res, expec) - - # with mixed - res = dfmix.copy() - res.replace(regex=re.compile(r'\s*(\.)\s*'), value=r'\1\1\1', - inplace=True) - mixc = mix.copy() - mixc['b'] = ['a', 'b', '...', '...'] - expec = DataFrame(mixc) - assert_frame_equal(res, expec) - - def test_regex_replace_list_obj(self): - obj = {'a': list('ab..'), 'b': list('efgh'), 'c': list('helo')} - dfobj = DataFrame(obj) - - ## lists of regexes and values - # list of [re1, re2, ..., reN] -> [v1, v2, ..., vN] - to_replace_res = [r'\s*\.\s*', r'e|f|g'] - values = [nan, 'crap'] - res = dfobj.replace(to_replace_res, values, regex=True) - expec = DataFrame({'a': ['a', 'b', nan, nan], 'b': ['crap'] * 3 + - ['h'], 'c': ['h', 'crap', 'l', 'o']}) - assert_frame_equal(res, expec) - - # list of [re1, re2, ..., reN] -> [re1, re2, .., reN] - to_replace_res = [r'\s*(\.)\s*', r'(e|f|g)'] - values = [r'\1\1', r'\1_crap'] - res = dfobj.replace(to_replace_res, values, regex=True) - expec = DataFrame({'a': ['a', 'b', '..', '..'], 'b': ['e_crap', - 'f_crap', - 'g_crap', 'h'], - 'c': ['h', 'e_crap', 'l', 'o']}) - - assert_frame_equal(res, expec) - - # list of [re1, re2, ..., reN] -> [(re1 or v1), (re2 or v2), ..., (reN - # or vN)] - to_replace_res = [r'\s*(\.)\s*', r'e'] - values = [r'\1\1', r'crap'] - res = dfobj.replace(to_replace_res, values, regex=True) - expec = DataFrame({'a': ['a', 'b', '..', '..'], 'b': ['crap', 'f', 'g', - 'h'], - 'c': ['h', 'crap', 'l', 'o']}) - assert_frame_equal(res, expec) - - to_replace_res = [r'\s*(\.)\s*', r'e'] - values = [r'\1\1', r'crap'] - res = dfobj.replace(value=values, regex=to_replace_res) - expec = DataFrame({'a': ['a', 'b', '..', '..'], 'b': ['crap', 'f', 'g', - 'h'], - 'c': ['h', 'crap', 
'l', 'o']}) - assert_frame_equal(res, expec) - - def test_regex_replace_list_obj_inplace(self): - ### same as above with inplace=True - ## lists of regexes and values - obj = {'a': list('ab..'), 'b': list('efgh'), 'c': list('helo')} - dfobj = DataFrame(obj) - - ## lists of regexes and values - # list of [re1, re2, ..., reN] -> [v1, v2, ..., vN] - to_replace_res = [r'\s*\.\s*', r'e|f|g'] - values = [nan, 'crap'] - res = dfobj.copy() - res.replace(to_replace_res, values, inplace=True, regex=True) - expec = DataFrame({'a': ['a', 'b', nan, nan], 'b': ['crap'] * 3 + - ['h'], 'c': ['h', 'crap', 'l', 'o']}) - assert_frame_equal(res, expec) - - # list of [re1, re2, ..., reN] -> [re1, re2, .., reN] - to_replace_res = [r'\s*(\.)\s*', r'(e|f|g)'] - values = [r'\1\1', r'\1_crap'] - res = dfobj.copy() - res.replace(to_replace_res, values, inplace=True, regex=True) - expec = DataFrame({'a': ['a', 'b', '..', '..'], 'b': ['e_crap', - 'f_crap', - 'g_crap', 'h'], - 'c': ['h', 'e_crap', 'l', 'o']}) - - assert_frame_equal(res, expec) - - # list of [re1, re2, ..., reN] -> [(re1 or v1), (re2 or v2), ..., (reN - # or vN)] - to_replace_res = [r'\s*(\.)\s*', r'e'] - values = [r'\1\1', r'crap'] - res = dfobj.copy() - res.replace(to_replace_res, values, inplace=True, regex=True) - expec = DataFrame({'a': ['a', 'b', '..', '..'], 'b': ['crap', 'f', 'g', - 'h'], - 'c': ['h', 'crap', 'l', 'o']}) - assert_frame_equal(res, expec) - - to_replace_res = [r'\s*(\.)\s*', r'e'] - values = [r'\1\1', r'crap'] - res = dfobj.copy() - res.replace(value=values, regex=to_replace_res, inplace=True) - expec = DataFrame({'a': ['a', 'b', '..', '..'], 'b': ['crap', 'f', 'g', - 'h'], - 'c': ['h', 'crap', 'l', 'o']}) - assert_frame_equal(res, expec) - - def test_regex_replace_list_mixed(self): - ## mixed frame to make sure this doesn't break things - mix = {'a': lrange(4), 'b': list('ab..')} - dfmix = DataFrame(mix) - - ## lists of regexes and values - # list of [re1, re2, ..., reN] -> [v1, v2, ..., vN] - 
to_replace_res = [r'\s*\.\s*', r'a'] - values = [nan, 'crap'] - mix2 = {'a': lrange(4), 'b': list('ab..'), 'c': list('halo')} - dfmix2 = DataFrame(mix2) - res = dfmix2.replace(to_replace_res, values, regex=True) - expec = DataFrame({'a': mix2['a'], 'b': ['crap', 'b', nan, nan], - 'c': ['h', 'crap', 'l', 'o']}) - assert_frame_equal(res, expec) - - # list of [re1, re2, ..., reN] -> [re1, re2, .., reN] - to_replace_res = [r'\s*(\.)\s*', r'(a|b)'] - values = [r'\1\1', r'\1_crap'] - res = dfmix.replace(to_replace_res, values, regex=True) - expec = DataFrame({'a': mix['a'], 'b': ['a_crap', 'b_crap', '..', - '..']}) - - assert_frame_equal(res, expec) - - # list of [re1, re2, ..., reN] -> [(re1 or v1), (re2 or v2), ..., (reN - # or vN)] - to_replace_res = [r'\s*(\.)\s*', r'a', r'(b)'] - values = [r'\1\1', r'crap', r'\1_crap'] - res = dfmix.replace(to_replace_res, values, regex=True) - expec = DataFrame({'a': mix['a'], 'b': ['crap', 'b_crap', '..', '..']}) - assert_frame_equal(res, expec) - - to_replace_res = [r'\s*(\.)\s*', r'a', r'(b)'] - values = [r'\1\1', r'crap', r'\1_crap'] - res = dfmix.replace(regex=to_replace_res, value=values) - expec = DataFrame({'a': mix['a'], 'b': ['crap', 'b_crap', '..', '..']}) - assert_frame_equal(res, expec) - - def test_regex_replace_list_mixed_inplace(self): - mix = {'a': lrange(4), 'b': list('ab..')} - dfmix = DataFrame(mix) - # the same inplace - ## lists of regexes and values - # list of [re1, re2, ..., reN] -> [v1, v2, ..., vN] - to_replace_res = [r'\s*\.\s*', r'a'] - values = [nan, 'crap'] - res = dfmix.copy() - res.replace(to_replace_res, values, inplace=True, regex=True) - expec = DataFrame({'a': mix['a'], 'b': ['crap', 'b', nan, nan]}) - assert_frame_equal(res, expec) - - # list of [re1, re2, ..., reN] -> [re1, re2, .., reN] - to_replace_res = [r'\s*(\.)\s*', r'(a|b)'] - values = [r'\1\1', r'\1_crap'] - res = dfmix.copy() - res.replace(to_replace_res, values, inplace=True, regex=True) - expec = DataFrame({'a': mix['a'], 'b': 
['a_crap', 'b_crap', '..', - '..']}) - - assert_frame_equal(res, expec) - - # list of [re1, re2, ..., reN] -> [(re1 or v1), (re2 or v2), ..., (reN - # or vN)] - to_replace_res = [r'\s*(\.)\s*', r'a', r'(b)'] - values = [r'\1\1', r'crap', r'\1_crap'] - res = dfmix.copy() - res.replace(to_replace_res, values, inplace=True, regex=True) - expec = DataFrame({'a': mix['a'], 'b': ['crap', 'b_crap', '..', '..']}) - assert_frame_equal(res, expec) - - to_replace_res = [r'\s*(\.)\s*', r'a', r'(b)'] - values = [r'\1\1', r'crap', r'\1_crap'] - res = dfmix.copy() - res.replace(regex=to_replace_res, value=values, inplace=True) - expec = DataFrame({'a': mix['a'], 'b': ['crap', 'b_crap', '..', '..']}) - assert_frame_equal(res, expec) - - def test_regex_replace_dict_mixed(self): - mix = {'a': lrange(4), 'b': list('ab..'), 'c': ['a', 'b', nan, 'd']} - dfmix = DataFrame(mix) - - ## dicts - # single dict {re1: v1}, search the whole frame - # need test for this... - - # list of dicts {re1: v1, re2: v2, ..., re3: v3}, search the whole - # frame - res = dfmix.replace({'b': r'\s*\.\s*'}, {'b': nan}, regex=True) - res2 = dfmix.copy() - res2.replace({'b': r'\s*\.\s*'}, {'b': nan}, inplace=True, regex=True) - expec = DataFrame({'a': mix['a'], 'b': ['a', 'b', nan, nan], 'c': - mix['c']}) - assert_frame_equal(res, expec) - assert_frame_equal(res2, expec) - - # list of dicts {re1: re11, re2: re12, ..., reN: re1N}, search the - # whole frame - res = dfmix.replace({'b': r'\s*(\.)\s*'}, {'b': r'\1ty'}, regex=True) - res2 = dfmix.copy() - res2.replace({'b': r'\s*(\.)\s*'}, {'b': r'\1ty'}, inplace=True, - regex=True) - expec = DataFrame({'a': mix['a'], 'b': ['a', 'b', '.ty', '.ty'], 'c': - mix['c']}) - assert_frame_equal(res, expec) - assert_frame_equal(res2, expec) - - res = dfmix.replace(regex={'b': r'\s*(\.)\s*'}, value={'b': r'\1ty'}) - res2 = dfmix.copy() - res2.replace(regex={'b': r'\s*(\.)\s*'}, value={'b': r'\1ty'}, - inplace=True) - expec = DataFrame({'a': mix['a'], 'b': ['a', 'b', '.ty', 
'.ty'], 'c': - mix['c']}) - assert_frame_equal(res, expec) - assert_frame_equal(res2, expec) - - # scalar -> dict - # to_replace regex, {value: value} - expec = DataFrame({'a': mix['a'], 'b': [nan, 'b', '.', '.'], 'c': - mix['c']}) - res = dfmix.replace('a', {'b': nan}, regex=True) - res2 = dfmix.copy() - res2.replace('a', {'b': nan}, regex=True, inplace=True) - assert_frame_equal(res, expec) - assert_frame_equal(res2, expec) - - res = dfmix.replace('a', {'b': nan}, regex=True) - res2 = dfmix.copy() - res2.replace(regex='a', value={'b': nan}, inplace=True) - expec = DataFrame({'a': mix['a'], 'b': [nan, 'b', '.', '.'], 'c': - mix['c']}) - assert_frame_equal(res, expec) - assert_frame_equal(res2, expec) - - def test_regex_replace_dict_nested(self): - # nested dicts will not work until this is implemented for Series - mix = {'a': lrange(4), 'b': list('ab..'), 'c': ['a', 'b', nan, 'd']} - dfmix = DataFrame(mix) - res = dfmix.replace({'b': {r'\s*\.\s*': nan}}, regex=True) - res2 = dfmix.copy() - res4 = dfmix.copy() - res2.replace({'b': {r'\s*\.\s*': nan}}, inplace=True, regex=True) - res3 = dfmix.replace(regex={'b': {r'\s*\.\s*': nan}}) - res4.replace(regex={'b': {r'\s*\.\s*': nan}}, inplace=True) - expec = DataFrame({'a': mix['a'], 'b': ['a', 'b', nan, nan], 'c': - mix['c']}) - assert_frame_equal(res, expec) - assert_frame_equal(res2, expec) - assert_frame_equal(res3, expec) - assert_frame_equal(res4, expec) - - def test_regex_replace_dict_nested_gh4115(self): - df = pd.DataFrame({'Type':['Q','T','Q','Q','T'], 'tmp':2}) - expected = DataFrame({'Type': [0,1,0,0,1], 'tmp': 2}) - result = df.replace({'Type': {'Q':0,'T':1}}) - assert_frame_equal(result, expected) - - def test_regex_replace_list_to_scalar(self): - mix = {'a': lrange(4), 'b': list('ab..'), 'c': ['a', 'b', nan, 'd']} - df = DataFrame(mix) - expec = DataFrame({'a': mix['a'], 'b': np.array([nan] * 4), - 'c': [nan, nan, nan, 'd']}) - - res = df.replace([r'\s*\.\s*', 'a|b'], nan, regex=True) - res2 = df.copy() - 
res3 = df.copy() - res2.replace([r'\s*\.\s*', 'a|b'], nan, regex=True, inplace=True) - res3.replace(regex=[r'\s*\.\s*', 'a|b'], value=nan, inplace=True) - assert_frame_equal(res, expec) - assert_frame_equal(res2, expec) - assert_frame_equal(res3, expec) - - def test_regex_replace_str_to_numeric(self): - # what happens when you try to replace a numeric value with a regex? - mix = {'a': lrange(4), 'b': list('ab..'), 'c': ['a', 'b', nan, 'd']} - df = DataFrame(mix) - res = df.replace(r'\s*\.\s*', 0, regex=True) - res2 = df.copy() - res2.replace(r'\s*\.\s*', 0, inplace=True, regex=True) - res3 = df.copy() - res3.replace(regex=r'\s*\.\s*', value=0, inplace=True) - expec = DataFrame({'a': mix['a'], 'b': ['a', 'b', 0, 0], 'c': - mix['c']}) - assert_frame_equal(res, expec) - assert_frame_equal(res2, expec) - assert_frame_equal(res3, expec) - - def test_regex_replace_regex_list_to_numeric(self): - mix = {'a': lrange(4), 'b': list('ab..'), 'c': ['a', 'b', nan, 'd']} - df = DataFrame(mix) - res = df.replace([r'\s*\.\s*', 'b'], 0, regex=True) - res2 = df.copy() - res2.replace([r'\s*\.\s*', 'b'], 0, regex=True, inplace=True) - res3 = df.copy() - res3.replace(regex=[r'\s*\.\s*', 'b'], value=0, inplace=True) - expec = DataFrame({'a': mix['a'], 'b': ['a', 0, 0, 0], 'c': ['a', 0, - nan, - 'd']}) - assert_frame_equal(res, expec) - assert_frame_equal(res2, expec) - assert_frame_equal(res3, expec) - - def test_regex_replace_series_of_regexes(self): - mix = {'a': lrange(4), 'b': list('ab..'), 'c': ['a', 'b', nan, 'd']} - df = DataFrame(mix) - s1 = Series({'b': r'\s*\.\s*'}) - s2 = Series({'b': nan}) - res = df.replace(s1, s2, regex=True) - res2 = df.copy() - res2.replace(s1, s2, inplace=True, regex=True) - res3 = df.copy() - res3.replace(regex=s1, value=s2, inplace=True) - expec = DataFrame({'a': mix['a'], 'b': ['a', 'b', nan, nan], 'c': - mix['c']}) - assert_frame_equal(res, expec) - assert_frame_equal(res2, expec) - assert_frame_equal(res3, expec) - - def 
test_regex_replace_numeric_to_object_conversion(self): - mix = {'a': lrange(4), 'b': list('ab..'), 'c': ['a', 'b', nan, 'd']} - df = DataFrame(mix) - expec = DataFrame({'a': ['a', 1, 2, 3], 'b': mix['b'], 'c': mix['c']}) - res = df.replace(0, 'a') - assert_frame_equal(res, expec) - self.assertEqual(res.a.dtype, np.object_) - - def test_replace_regex_metachar(self): - metachars = '[]', '()', '\d', '\w', '\s' - - for metachar in metachars: - df = DataFrame({'a': [metachar, 'else']}) - result = df.replace({'a': {metachar: 'paren'}}) - expected = DataFrame({'a': ['paren', 'else']}) - assert_frame_equal(result, expected) - - def test_replace(self): - self.tsframe['A'][:5] = nan - self.tsframe['A'][-5:] = nan - - zero_filled = self.tsframe.replace(nan, -1e8) - assert_frame_equal(zero_filled, self.tsframe.fillna(-1e8)) - assert_frame_equal(zero_filled.replace(-1e8, nan), self.tsframe) - - self.tsframe['A'][:5] = nan - self.tsframe['A'][-5:] = nan - self.tsframe['B'][:5] = -1e8 - - # empty - df = DataFrame(index=['a', 'b']) - assert_frame_equal(df, df.replace(5, 7)) - - # GH 11698 - # test for mixed data types. 
- df = pd.DataFrame([('-', pd.to_datetime('20150101')), ('a', pd.to_datetime('20150102'))]) - df1 = df.replace('-', np.nan) - expected_df = pd.DataFrame([(np.nan, pd.to_datetime('20150101')), ('a', pd.to_datetime('20150102'))]) - assert_frame_equal(df1, expected_df) - - def test_replace_list(self): - obj = {'a': list('ab..'), 'b': list('efgh'), 'c': list('helo')} - dfobj = DataFrame(obj) - - ## lists of regexes and values - # list of [v1, v2, ..., vN] -> [v1, v2, ..., vN] - to_replace_res = [r'.', r'e'] - values = [nan, 'crap'] - res = dfobj.replace(to_replace_res, values) - expec = DataFrame({'a': ['a', 'b', nan, nan], - 'b': ['crap', 'f', 'g', 'h'], 'c': ['h', 'crap', - 'l', 'o']}) - assert_frame_equal(res, expec) - - # list of [v1, v2, ..., vN] -> [v1, v2, .., vN] - to_replace_res = [r'.', r'f'] - values = [r'..', r'crap'] - res = dfobj.replace(to_replace_res, values) - expec = DataFrame({'a': ['a', 'b', '..', '..'], 'b': ['e', 'crap', 'g', - 'h'], - 'c': ['h', 'e', 'l', 'o']}) - - assert_frame_equal(res, expec) - - def test_replace_series_dict(self): - # from GH 3064 - df = DataFrame({'zero': {'a': 0.0, 'b': 1}, 'one': {'a': 2.0, 'b': 0}}) - result = df.replace(0, {'zero': 0.5, 'one': 1.0}) - expected = DataFrame({'zero': {'a': 0.5, 'b': 1}, 'one': {'a': 2.0, 'b': 1.0}}) - assert_frame_equal(result, expected) - - result = df.replace(0, df.mean()) - assert_frame_equal(result, expected) - - # series to series/dict - df = DataFrame({'zero': {'a': 0.0, 'b': 1}, 'one': {'a': 2.0, 'b': 0}}) - s = Series({'zero': 0.0, 'one': 2.0}) - result = df.replace(s, {'zero': 0.5, 'one': 1.0}) - expected = DataFrame({'zero': {'a': 0.5, 'b': 1}, 'one': {'a': 1.0, 'b': 0.0}}) - assert_frame_equal(result, expected) - - result = df.replace(s, df.mean()) - assert_frame_equal(result, expected) - - def test_replace_convert(self): - # gh 3907 - df = DataFrame([['foo', 'bar', 'bah'], ['bar', 'foo', 'bah']]) - m = {'foo': 1, 'bar': 2, 'bah': 3} - rep = df.replace(m) - expec = Series([ 
np.int64] * 3) - res = rep.dtypes - assert_series_equal(expec, res) - - def test_replace_mixed(self): - self.mixed_frame.ix[5:20,'foo'] = nan - self.mixed_frame.ix[-10:,'A'] = nan - - result = self.mixed_frame.replace(np.nan, -18) - expected = self.mixed_frame.fillna(value=-18) - assert_frame_equal(result, expected) - assert_frame_equal(result.replace(-18, nan), self.mixed_frame) - - result = self.mixed_frame.replace(np.nan, -1e8) - expected = self.mixed_frame.fillna(value=-1e8) - assert_frame_equal(result, expected) - assert_frame_equal(result.replace(-1e8, nan), self.mixed_frame) - - # int block upcasting - df = DataFrame({ 'A' : Series([1.0,2.0],dtype='float64'), 'B' : Series([0,1],dtype='int64') }) - expected = DataFrame({ 'A' : Series([1.0,2.0],dtype='float64'), 'B' : Series([0.5,1],dtype='float64') }) - result = df.replace(0, 0.5) - assert_frame_equal(result,expected) - - df.replace(0, 0.5, inplace=True) - assert_frame_equal(df,expected) - - # int block splitting - df = DataFrame({ 'A' : Series([1.0,2.0],dtype='float64'), 'B' : Series([0,1],dtype='int64'), 'C' : Series([1,2],dtype='int64') }) - expected = DataFrame({ 'A' : Series([1.0,2.0],dtype='float64'), 'B' : Series([0.5,1],dtype='float64'), 'C' : Series([1,2],dtype='int64') }) - result = df.replace(0, 0.5) - assert_frame_equal(result,expected) - - # to object block upcasting - df = DataFrame({ 'A' : Series([1.0,2.0],dtype='float64'), 'B' : Series([0,1],dtype='int64') }) - expected = DataFrame({ 'A' : Series([1,'foo'],dtype='object'), 'B' : Series([0,1],dtype='int64') }) - result = df.replace(2, 'foo') - assert_frame_equal(result,expected) - - expected = DataFrame({ 'A' : Series(['foo','bar'],dtype='object'), 'B' : Series([0,'foo'],dtype='object') }) - result = df.replace([1,2], ['foo','bar']) - assert_frame_equal(result,expected) - - # test case from - df = DataFrame({'A' : Series([3,0],dtype='int64'), 'B' : Series([0,3],dtype='int64') }) - result = df.replace(3, df.mean().to_dict()) - expected = 
df.copy().astype('float64') - m = df.mean() - expected.iloc[0,0] = m[0] - expected.iloc[1,1] = m[1] - assert_frame_equal(result,expected) - - def test_replace_simple_nested_dict(self): - df = DataFrame({'col': range(1, 5)}) - expected = DataFrame({'col': ['a', 2, 3, 'b']}) - - result = df.replace({'col': {1: 'a', 4: 'b'}}) - assert_frame_equal(expected, result) - - # in this case, should be the same as the not nested version - result = df.replace({1: 'a', 4: 'b'}) - assert_frame_equal(expected, result) - - def test_replace_simple_nested_dict_with_nonexistent_value(self): - df = DataFrame({'col': range(1, 5)}) - expected = DataFrame({'col': ['a', 2, 3, 'b']}) - - result = df.replace({-1: '-', 1: 'a', 4: 'b'}) - assert_frame_equal(expected, result) - - result = df.replace({'col': {-1: '-', 1: 'a', 4: 'b'}}) - assert_frame_equal(expected, result) - - def test_interpolate(self): - pass - - def test_replace_value_is_none(self): - self.assertRaises(TypeError, self.tsframe.replace, nan) - orig_value = self.tsframe.iloc[0, 0] - orig2 = self.tsframe.iloc[1, 0] - - self.tsframe.iloc[0, 0] = nan - self.tsframe.iloc[1, 0] = 1 - - result = self.tsframe.replace(to_replace={nan: 0}) - expected = self.tsframe.T.replace(to_replace={nan: 0}).T - assert_frame_equal(result, expected) - - result = self.tsframe.replace(to_replace={nan: 0, 1: -1e8}) - tsframe = self.tsframe.copy() - tsframe.iloc[0, 0] = 0 - tsframe.iloc[1, 0] = -1e8 - expected = tsframe - assert_frame_equal(expected, result) - self.tsframe.iloc[0, 0] = orig_value - self.tsframe.iloc[1, 0] = orig2 - - def test_replace_for_new_dtypes(self): - - # dtypes - tsframe = self.tsframe.copy().astype(np.float32) - tsframe['A'][:5] = nan - tsframe['A'][-5:] = nan - - zero_filled = tsframe.replace(nan, -1e8) - assert_frame_equal(zero_filled, tsframe.fillna(-1e8)) - assert_frame_equal(zero_filled.replace(-1e8, nan), tsframe) - - tsframe['A'][:5] = nan - tsframe['A'][-5:] = nan - tsframe['B'][:5] = -1e8 - - b = tsframe['B'] - b[b == 
-1e8] = nan - tsframe['B'] = b - result = tsframe.fillna(method='bfill') - assert_frame_equal(result, tsframe.fillna(method='bfill')) - - def test_replace_dtypes(self): - # int - df = DataFrame({'ints': [1, 2, 3]}) - result = df.replace(1, 0) - expected = DataFrame({'ints': [0, 2, 3]}) - assert_frame_equal(result, expected) - - df = DataFrame({'ints': [1, 2, 3]}, dtype=np.int32) - result = df.replace(1, 0) - expected = DataFrame({'ints': [0, 2, 3]}, dtype=np.int32) - assert_frame_equal(result, expected) - - df = DataFrame({'ints': [1, 2, 3]}, dtype=np.int16) - result = df.replace(1, 0) - expected = DataFrame({'ints': [0, 2, 3]}, dtype=np.int16) - assert_frame_equal(result, expected) - - # bools - df = DataFrame({'bools': [True, False, True]}) - result = df.replace(False, True) - self.assertTrue(result.values.all()) - - # complex blocks - df = DataFrame({'complex': [1j, 2j, 3j]}) - result = df.replace(1j, 0j) - expected = DataFrame({'complex': [0j, 2j, 3j]}) - assert_frame_equal(result, expected) - - # datetime blocks - prev = datetime.today() - now = datetime.today() - df = DataFrame({'datetime64': Index([prev, now, prev])}) - result = df.replace(prev, now) - expected = DataFrame({'datetime64': Index([now] * 3)}) - assert_frame_equal(result, expected) - - def test_replace_input_formats(self): - # both dicts - to_rep = {'A': np.nan, 'B': 0, 'C': ''} - values = {'A': 0, 'B': -1, 'C': 'missing'} - df = DataFrame({'A': [np.nan, 0, np.inf], 'B': [0, 2, 5], - 'C': ['', 'asdf', 'fd']}) - filled = df.replace(to_rep, values) - expected = {} - for k, v in compat.iteritems(df): - expected[k] = v.replace(to_rep[k], values[k]) - assert_frame_equal(filled, DataFrame(expected)) - - result = df.replace([0, 2, 5], [5, 2, 0]) - expected = DataFrame({'A': [np.nan, 5, np.inf], 'B': [5, 2, 0], - 'C': ['', 'asdf', 'fd']}) - assert_frame_equal(result, expected) - - # dict to scalar - filled = df.replace(to_rep, 0) - expected = {} - for k, v in compat.iteritems(df): - expected[k] = 
v.replace(to_rep[k], 0) - assert_frame_equal(filled, DataFrame(expected)) - - self.assertRaises(TypeError, df.replace, to_rep, [np.nan, 0, '']) - - # scalar to dict - values = {'A': 0, 'B': -1, 'C': 'missing'} - df = DataFrame({'A': [np.nan, 0, np.nan], 'B': [0, 2, 5], - 'C': ['', 'asdf', 'fd']}) - filled = df.replace(np.nan, values) - expected = {} - for k, v in compat.iteritems(df): - expected[k] = v.replace(np.nan, values[k]) - assert_frame_equal(filled, DataFrame(expected)) - - # list to list - to_rep = [np.nan, 0, ''] - values = [-2, -1, 'missing'] - result = df.replace(to_rep, values) - expected = df.copy() - for i in range(len(to_rep)): - expected.replace(to_rep[i], values[i], inplace=True) - assert_frame_equal(result, expected) - - self.assertRaises(ValueError, df.replace, to_rep, values[1:]) - - # list to scalar - to_rep = [np.nan, 0, ''] - result = df.replace(to_rep, -1) - expected = df.copy() - for i in range(len(to_rep)): - expected.replace(to_rep[i], -1, inplace=True) - assert_frame_equal(result, expected) - - def test_replace_limit(self): - pass - - def test_replace_dict_no_regex(self): - answer = Series({0: 'Strongly Agree', 1: 'Agree', 2: 'Neutral', 3: - 'Disagree', 4: 'Strongly Disagree'}) - weights = {'Agree': 4, 'Disagree': 2, 'Neutral': 3, 'Strongly Agree': - 5, 'Strongly Disagree': 1} - expected = Series({0: 5, 1: 4, 2: 3, 3: 2, 4: 1}) - result = answer.replace(weights) - assert_series_equal(result, expected) - - def test_replace_series_no_regex(self): - answer = Series({0: 'Strongly Agree', 1: 'Agree', 2: 'Neutral', 3: - 'Disagree', 4: 'Strongly Disagree'}) - weights = Series({'Agree': 4, 'Disagree': 2, 'Neutral': 3, - 'Strongly Agree': 5, 'Strongly Disagree': 1}) - expected = Series({0: 5, 1: 4, 2: 3, 3: 2, 4: 1}) - result = answer.replace(weights) - assert_series_equal(result, expected) - - def test_replace_dict_tuple_list_ordering_remains_the_same(self): - df = DataFrame(dict(A=[nan, 1])) - res1 = df.replace(to_replace={nan: 0, 1: -1e8}) - 
res2 = df.replace(to_replace=(1, nan), value=[-1e8, 0]) - res3 = df.replace(to_replace=[1, nan], value=[-1e8, 0]) - - expected = DataFrame({'A': [0, -1e8]}) - assert_frame_equal(res1, res2) - assert_frame_equal(res2, res3) - assert_frame_equal(res3, expected) - - def test_replace_doesnt_replace_without_regex(self): - from pandas.compat import StringIO - raw = """fol T_opp T_Dir T_Enh - 0 1 0 0 vo - 1 2 vr 0 0 - 2 2 0 0 0 - 3 3 0 bt 0""" - df = read_csv(StringIO(raw), sep=r'\s+') - res = df.replace({'\D': 1}) - assert_frame_equal(df, res) - - def test_replace_bool_with_string(self): - df = DataFrame({'a': [True, False], 'b': list('ab')}) - result = df.replace(True, 'a') - expected = DataFrame({'a': ['a', False], 'b': df.b}) - assert_frame_equal(result, expected) - - def test_replace_pure_bool_with_string_no_op(self): - df = DataFrame(np.random.rand(2, 2) > 0.5) - result = df.replace('asdf', 'fdsa') - assert_frame_equal(df, result) - - def test_replace_bool_with_bool(self): - df = DataFrame(np.random.rand(2, 2) > 0.5) - result = df.replace(False, True) - expected = DataFrame(np.ones((2, 2), dtype=bool)) - assert_frame_equal(result, expected) - - def test_replace_with_dict_with_bool_keys(self): - df = DataFrame({0: [True, False], 1: [False, True]}) - with tm.assertRaisesRegexp(TypeError, 'Cannot compare types .+'): - df.replace({'asdf': 'asdb', True: 'yes'}) - - def test_replace_truthy(self): - df = DataFrame({'a': [True, True]}) - r = df.replace([np.inf, -np.inf], np.nan) - e = df - assert_frame_equal(r, e) - - def test_replace_int_to_int_chain(self): - df = DataFrame({'a': lrange(1, 5)}) - with tm.assertRaisesRegexp(ValueError, "Replacement not allowed .+"): - df.replace({'a': dict(zip(range(1, 5), range(2, 6)))}) - - def test_replace_str_to_str_chain(self): - a = np.arange(1, 5) - astr = a.astype(str) - bstr = np.arange(2, 6).astype(str) - df = DataFrame({'a': astr}) - with tm.assertRaisesRegexp(ValueError, "Replacement not allowed .+"): - df.replace({'a': 
-                             dict(zip(astr, bstr))})
-
-    def test_replace_swapping_bug(self):
-        df = pd.DataFrame({'a': [True, False, True]})
-        res = df.replace({'a': {True: 'Y', False: 'N'}})
-        expect = pd.DataFrame({'a': ['Y', 'N', 'Y']})
-        assert_frame_equal(res, expect)
-
-        df = pd.DataFrame({'a': [0, 1, 0]})
-        res = df.replace({'a': {0: 'Y', 1: 'N'}})
-        expect = pd.DataFrame({'a': ['Y', 'N', 'Y']})
-        assert_frame_equal(res, expect)
-
-    def test_replace_period(self):
-        d = {'fname':
-             {'out_augmented_AUG_2011.json': pd.Period(year=2011, month=8, freq='M'),
-              'out_augmented_JAN_2011.json': pd.Period(year=2011, month=1, freq='M'),
-              'out_augmented_MAY_2012.json': pd.Period(year=2012, month=5, freq='M'),
-              'out_augmented_SUBSIDY_WEEK.json': pd.Period(year=2011, month=4, freq='M'),
-              'out_augmented_AUG_2012.json': pd.Period(year=2012, month=8, freq='M'),
-              'out_augmented_MAY_2011.json': pd.Period(year=2011, month=5, freq='M'),
-              'out_augmented_SEP_2013.json': pd.Period(year=2013, month=9, freq='M')}}
-
-        df = pd.DataFrame(['out_augmented_AUG_2012.json',
-                           'out_augmented_SEP_2013.json',
-                           'out_augmented_SUBSIDY_WEEK.json',
-                           'out_augmented_MAY_2012.json',
-                           'out_augmented_MAY_2011.json',
-                           'out_augmented_AUG_2011.json',
-                           'out_augmented_JAN_2011.json'], columns=['fname'])
-        tm.assert_equal(set(df.fname.values), set(d['fname'].keys()))
-        expected = DataFrame({'fname': [d['fname'][k]
-                                        for k in df.fname.values]})
-        result = df.replace(d)
-        assert_frame_equal(result, expected)
-
-    def test_replace_datetime(self):
-        d = {'fname':
-             {'out_augmented_AUG_2011.json': pd.Timestamp('2011-08'),
-              'out_augmented_JAN_2011.json': pd.Timestamp('2011-01'),
-              'out_augmented_MAY_2012.json': pd.Timestamp('2012-05'),
-              'out_augmented_SUBSIDY_WEEK.json': pd.Timestamp('2011-04'),
-              'out_augmented_AUG_2012.json': pd.Timestamp('2012-08'),
-              'out_augmented_MAY_2011.json': pd.Timestamp('2011-05'),
-              'out_augmented_SEP_2013.json': pd.Timestamp('2013-09')}}
-
-        df = pd.DataFrame(['out_augmented_AUG_2012.json',
-                           'out_augmented_SEP_2013.json',
-                           'out_augmented_SUBSIDY_WEEK.json',
-                           'out_augmented_MAY_2012.json',
-                           'out_augmented_MAY_2011.json',
-                           'out_augmented_AUG_2011.json',
-                           'out_augmented_JAN_2011.json'], columns=['fname'])
-        tm.assert_equal(set(df.fname.values), set(d['fname'].keys()))
-        expected = DataFrame({'fname': [d['fname'][k]
-                                        for k in df.fname.values]})
-        result = df.replace(d)
-        assert_frame_equal(result, expected)
-
-    def test_replace_datetimetz(self):
-
-        # GH 11326
-        # behaving poorly when presented with a datetime64[ns, tz]
-        df = DataFrame({'A' : date_range('20130101',periods=3,tz='US/Eastern'),
-                        'B' : [0, np.nan, 2]})
-        result = df.replace(np.nan,1)
-        expected = DataFrame({'A' : date_range('20130101',periods=3,tz='US/Eastern'),
-                              'B' : Series([0, 1, 2],dtype='float64')})
-        assert_frame_equal(result, expected)
-
-        result = df.fillna(1)
-        assert_frame_equal(result, expected)
-
-        result = df.replace(0,np.nan)
-        expected = DataFrame({'A' : date_range('20130101',periods=3,tz='US/Eastern'),
-                              'B' : [np.nan, np.nan, 2]})
-        assert_frame_equal(result, expected)
-
-        result = df.replace(Timestamp('20130102',tz='US/Eastern'),Timestamp('20130104',tz='US/Eastern'))
-        expected = DataFrame({'A' : [Timestamp('20130101',tz='US/Eastern'),
-                                     Timestamp('20130104',tz='US/Eastern'),
-                                     Timestamp('20130103',tz='US/Eastern')],
-                              'B' : [0, np.nan, 2]})
-        assert_frame_equal(result, expected)
-
-        result = df.copy()
-        result.iloc[1,0] = np.nan
-        result = result.replace({'A' : pd.NaT }, Timestamp('20130104',tz='US/Eastern'))
-        assert_frame_equal(result, expected)
-
-        # coerce to object
-        result = df.copy()
-        result.iloc[1,0] = np.nan
-        result = result.replace({'A' : pd.NaT }, Timestamp('20130104',tz='US/Pacific'))
-        expected = DataFrame({'A' : [Timestamp('20130101',tz='US/Eastern'),
-                                     Timestamp('20130104',tz='US/Pacific'),
-                                     Timestamp('20130103',tz='US/Eastern')],
-                              'B' : [0, np.nan, 2]})
-        assert_frame_equal(result, expected)
-
-        result = df.copy()
-        result.iloc[1,0] = np.nan
-        result = result.replace({'A' : np.nan }, Timestamp('20130104'))
-        expected = DataFrame({'A' : [Timestamp('20130101',tz='US/Eastern'),
-                                     Timestamp('20130104'),
-                                     Timestamp('20130103',tz='US/Eastern')],
-                              'B' : [0, np.nan, 2]})
-        assert_frame_equal(result, expected)
-
-    def test_combine_multiple_frames_dtypes(self):
-
-        # GH 2759
-        A = DataFrame(data=np.ones((10, 2)), columns=['foo', 'bar'], dtype=np.float64)
-        B = DataFrame(data=np.ones((10, 2)), dtype=np.float32)
-        results = pd.concat((A, B), axis=1).get_dtype_counts()
-        expected = Series(dict( float64 = 2, float32 = 2 ))
-        assert_series_equal(results,expected)
-
-    def test_ops(self):
-
-        # tst ops and reversed ops in evaluation
-        # GH7198
-
-        # smaller hits python, larger hits numexpr
-        for n in [ 4, 4000 ]:
-
-            df = DataFrame(1,index=range(n),columns=list('abcd'))
-            df.iloc[0] = 2
-            m = df.mean()
-
-            for op_str, op, rop in [('+','__add__','__radd__'),
-                                    ('-','__sub__','__rsub__'),
-                                    ('*','__mul__','__rmul__'),
-                                    ('/','__truediv__','__rtruediv__')]:
-
-                base = DataFrame(np.tile(m.values,n).reshape(n,-1),columns=list('abcd'))
-                expected = eval("base{op}df".format(op=op_str))
-
-                # ops as strings
-                result = eval("m{op}df".format(op=op_str))
-                assert_frame_equal(result,expected)
-
-                # these are commutative
-                if op in ['+','*']:
-                    result = getattr(df,op)(m)
-                    assert_frame_equal(result,expected)
-
-                # these are not
-                elif op in ['-','/']:
-                    result = getattr(df,rop)(m)
-                    assert_frame_equal(result,expected)
-
-        # GH7192
-        df = DataFrame(dict(A=np.random.randn(25000)))
-        df.iloc[0:5] = np.nan
-        expected = (1-np.isnan(df.iloc[0:25]))
-        result = (1-np.isnan(df)).iloc[0:25]
-        assert_frame_equal(result,expected)
-
-    def test_truncate(self):
-        offset = datetools.bday
-
-        ts = self.tsframe[::3]
-
-        start, end = self.tsframe.index[3], self.tsframe.index[6]
-
-        start_missing = self.tsframe.index[2]
-        end_missing = self.tsframe.index[7]
-
-        # neither specified
-        truncated = ts.truncate()
-        assert_frame_equal(truncated, ts)
-
-        # both specified
-        expected = ts[1:3]
-
-        truncated = ts.truncate(start, end)
-        assert_frame_equal(truncated, expected)
-
-        truncated = ts.truncate(start_missing, end_missing)
-        assert_frame_equal(truncated, expected)
-
-        # start specified
-        expected = ts[1:]
-
-        truncated = ts.truncate(before=start)
-        assert_frame_equal(truncated, expected)
-
-        truncated = ts.truncate(before=start_missing)
-        assert_frame_equal(truncated, expected)
-
-        # end specified
-        expected = ts[:3]
-
-        truncated = ts.truncate(after=end)
-        assert_frame_equal(truncated, expected)
-
-        truncated = ts.truncate(after=end_missing)
-        assert_frame_equal(truncated, expected)
-
-        self.assertRaises(ValueError, ts.truncate,
-                          before=ts.index[-1] - 1,
-                          after=ts.index[0] +1)
-
-    def test_truncate_copy(self):
-        index = self.tsframe.index
-        truncated = self.tsframe.truncate(index[5], index[10])
-        truncated.values[:] = 5.
-        self.assertFalse((self.tsframe.values[5:11] == 5).any())
-
-    def test_xs(self):
-        idx = self.frame.index[5]
-        xs = self.frame.xs(idx)
-        for item, value in compat.iteritems(xs):
-            if np.isnan(value):
-                self.assertTrue(np.isnan(self.frame[item][idx]))
-            else:
-                self.assertEqual(value, self.frame[item][idx])
-
-        # mixed-type xs
-        test_data = {
-            'A': {'1': 1, '2': 2},
-            'B': {'1': '1', '2': '2', '3': '3'},
-        }
-        frame = DataFrame(test_data)
-        xs = frame.xs('1')
-        self.assertEqual(xs.dtype, np.object_)
-        self.assertEqual(xs['A'], 1)
-        self.assertEqual(xs['B'], '1')
-
-        with tm.assertRaises(KeyError):
-            self.tsframe.xs(self.tsframe.index[0] - datetools.bday)
-
-        # xs get column
-        series = self.frame.xs('A', axis=1)
-        expected = self.frame['A']
-        assert_series_equal(series, expected)
-
-        # view is returned if possible
-        series = self.frame.xs('A', axis=1)
-        series[:] = 5
-        self.assertTrue((expected == 5).all())
-
-    def test_xs_corner(self):
-        # pathological mixed-type reordering case
-        df = DataFrame(index=[0])
-        df['A'] = 1.
-        df['B'] = 'foo'
-        df['C'] = 2.
-        df['D'] = 'bar'
-        df['E'] = 3.
-
-        xs = df.xs(0)
-        assert_almost_equal(xs, [1., 'foo', 2., 'bar', 3.])
-
-        # no columns but Index(dtype=object)
-        df = DataFrame(index=['a', 'b', 'c'])
-        result = df.xs('a')
-        expected = Series([], name='a', index=pd.Index([], dtype=object))
-        assert_series_equal(result, expected)
-
-    def test_xs_duplicates(self):
-        df = DataFrame(randn(5, 2), index=['b', 'b', 'c', 'b', 'a'])
-
-        cross = df.xs('c')
-        exp = df.iloc[2]
-        assert_series_equal(cross, exp)
-
-    def test_xs_keep_level(self):
-        df = DataFrame({'day': {0: 'sat', 1: 'sun'},
-                        'flavour': {0: 'strawberry', 1: 'strawberry'},
-                        'sales': {0: 10, 1: 12},
-                        'year': {0: 2008, 1: 2008}}).set_index(['year','flavour','day'])
-        result = df.xs('sat', level='day', drop_level=False)
-        expected = df[:1]
-        assert_frame_equal(result, expected)
-
-        result = df.xs([2008, 'sat'], level=['year', 'day'], drop_level=False)
-        assert_frame_equal(result, expected)
-
-    def test_pivot(self):
-        data = {
-            'index': ['A', 'B', 'C', 'C', 'B', 'A'],
-            'columns': ['One', 'One', 'One', 'Two', 'Two', 'Two'],
-            'values': [1., 2., 3., 3., 2., 1.]
-        }
-
-        frame = DataFrame(data)
-        pivoted = frame.pivot(
-            index='index', columns='columns', values='values')
-
-        expected = DataFrame({
-            'One': {'A': 1., 'B': 2., 'C': 3.},
-            'Two': {'A': 1., 'B': 2., 'C': 3.}
-        })
-        expected.index.name, expected.columns.name = 'index', 'columns'
-
-        assert_frame_equal(pivoted, expected)
-
-        # name tracking
-        self.assertEqual(pivoted.index.name, 'index')
-        self.assertEqual(pivoted.columns.name, 'columns')
-
-        # don't specify values
-        pivoted = frame.pivot(index='index', columns='columns')
-        self.assertEqual(pivoted.index.name, 'index')
-        self.assertEqual(pivoted.columns.names, (None, 'columns'))
-
-        # pivot multiple columns
-        wp = tm.makePanel()
-        lp = wp.to_frame()
-        df = lp.reset_index()
-        assert_frame_equal(df.pivot('major', 'minor'), lp.unstack())
-
-    def test_pivot_duplicates(self):
-        data = DataFrame({'a': ['bar', 'bar', 'foo', 'foo', 'foo'],
-                          'b': ['one', 'two', 'one', 'one', 'two'],
-                          'c': [1., 2., 3., 3., 4.]})
-        with assertRaisesRegexp(ValueError, 'duplicate entries'):
-            data.pivot('a', 'b', 'c')
-
-    def test_pivot_empty(self):
-        df = DataFrame({}, columns=['a', 'b', 'c'])
-        result = df.pivot('a', 'b', 'c')
-        expected = DataFrame({})
-        assert_frame_equal(result, expected, check_names=False)
-
-    def test_pivot_integer_bug(self):
-        df = DataFrame(data=[("A", "1", "A1"), ("B", "2", "B2")])
-
-        result = df.pivot(index=1, columns=0, values=2)
-        repr(result)
-        self.assert_numpy_array_equal(result.columns, ['A', 'B'])
-
-    def test_pivot_index_none(self):
-        # gh-3962
-        data = {
-            'index': ['A', 'B', 'C', 'C', 'B', 'A'],
-            'columns': ['One', 'One', 'One', 'Two', 'Two', 'Two'],
-            'values': [1., 2., 3., 3., 2., 1.]
-        }
-
-        frame = DataFrame(data).set_index('index')
-        result = frame.pivot(columns='columns', values='values')
-        expected = DataFrame({
-            'One': {'A': 1., 'B': 2., 'C': 3.},
-            'Two': {'A': 1., 'B': 2., 'C': 3.}
-        })
-
-        expected.index.name, expected.columns.name = 'index', 'columns'
-        assert_frame_equal(result, expected)
-
-        # omit values
-        result = frame.pivot(columns='columns')
-
-        expected.columns = pd.MultiIndex.from_tuples([('values', 'One'),
-                                                      ('values', 'Two')],
-                                                     names=[None, 'columns'])
-        expected.index.name = 'index'
-        assert_frame_equal(result, expected, check_names=False)
-        self.assertEqual(result.index.name, 'index',)
-        self.assertEqual(result.columns.names, (None, 'columns'))
-        expected.columns = expected.columns.droplevel(0)
-
-        data = {
-            'index': range(7),
-            'columns': ['One', 'One', 'One', 'Two', 'Two', 'Two'],
-            'values': [1., 2., 3., 3., 2., 1.]
-        }
-
-        result = frame.pivot(columns='columns', values='values')
-
-        expected.columns.name = 'columns'
-        assert_frame_equal(result, expected)
-
-    def test_reindex(self):
-        newFrame = self.frame.reindex(self.ts1.index)
-
-        for col in newFrame.columns:
-            for idx, val in compat.iteritems(newFrame[col]):
-                if idx in self.frame.index:
-                    if np.isnan(val):
-                        self.assertTrue(np.isnan(self.frame[col][idx]))
-                    else:
-                        self.assertEqual(val, self.frame[col][idx])
-                else:
-                    self.assertTrue(np.isnan(val))
-
-        for col, series in compat.iteritems(newFrame):
-            self.assertTrue(tm.equalContents(series.index, newFrame.index))
-        emptyFrame = self.frame.reindex(Index([]))
-        self.assertEqual(len(emptyFrame.index), 0)
-
-        # Cython code should be unit-tested directly
-        nonContigFrame = self.frame.reindex(self.ts1.index[::2])
-
-        for col in nonContigFrame.columns:
-            for idx, val in compat.iteritems(nonContigFrame[col]):
-                if idx in self.frame.index:
-                    if np.isnan(val):
-                        self.assertTrue(np.isnan(self.frame[col][idx]))
-                    else:
-                        self.assertEqual(val, self.frame[col][idx])
-                else:
-                    self.assertTrue(np.isnan(val))
-
-        for col, series in compat.iteritems(nonContigFrame):
-            self.assertTrue(tm.equalContents(series.index,
-                                             nonContigFrame.index))
-
-        # corner cases
-
-        # Same index, copies values but not index if copy=False
-        newFrame = self.frame.reindex(self.frame.index, copy=False)
-        self.assertIs(newFrame.index, self.frame.index)
-
-        # length zero
-        newFrame = self.frame.reindex([])
-        self.assertTrue(newFrame.empty)
-        self.assertEqual(len(newFrame.columns), len(self.frame.columns))
-
-        # length zero with columns reindexed with non-empty index
-        newFrame = self.frame.reindex([])
-        newFrame = newFrame.reindex(self.frame.index)
-        self.assertEqual(len(newFrame.index), len(self.frame.index))
-        self.assertEqual(len(newFrame.columns), len(self.frame.columns))
-
-        # pass non-Index
-        newFrame = self.frame.reindex(list(self.ts1.index))
-        self.assertTrue(newFrame.index.equals(self.ts1.index))
-
-        # copy with no axes
-        result = self.frame.reindex()
-        assert_frame_equal(result,self.frame)
-        self.assertFalse(result is self.frame)
-
-    def test_reindex_nan(self):
-        df = pd.DataFrame([[1, 2], [3, 5], [7, 11], [9, 23]],
-                          index=[2, np.nan, 1, 5],
-                          columns=['joe', 'jim'])
-
-        i, j = [np.nan, 5, 5, np.nan, 1, 2, np.nan], [1, 3, 3, 1, 2, 0, 1]
-        assert_frame_equal(df.reindex(i), df.iloc[j])
-
-        df.index = df.index.astype('object')
-        assert_frame_equal(df.reindex(i), df.iloc[j], check_index_type=False)
-
-        # GH10388
-        df = pd.DataFrame({'other': ['a', 'b', np.nan, 'c'],
-                           'date': ['2015-03-22', np.nan,
-                                    '2012-01-08', np.nan],
-                           'amount': [2, 3, 4, 5]})
-
-        df['date'] = pd.to_datetime(df.date)
-        df['delta'] = (pd.to_datetime('2015-06-18') - df['date']).shift(1)
-
-        left = df.set_index(['delta', 'other', 'date']).reset_index()
-        right = df.reindex(columns=['delta', 'other', 'date', 'amount'])
-        assert_frame_equal(left, right)
-
-    def test_reindex_name_remains(self):
-        s = Series(random.rand(10))
-        df = DataFrame(s, index=np.arange(len(s)))
-        i = Series(np.arange(10), name='iname')
-
-        df = df.reindex(i)
-        self.assertEqual(df.index.name, 'iname')
-
-        df = df.reindex(Index(np.arange(10), name='tmpname'))
-        self.assertEqual(df.index.name, 'tmpname')
-
-        s = Series(random.rand(10))
-        df = DataFrame(s.T, index=np.arange(len(s)))
-        i = Series(np.arange(10), name='iname')
-        df = df.reindex(columns=i)
-        self.assertEqual(df.columns.name, 'iname')
-
-    def test_reindex_int(self):
-        smaller = self.intframe.reindex(self.intframe.index[::2])
-
-        self.assertEqual(smaller['A'].dtype, np.int64)
-
-        bigger = smaller.reindex(self.intframe.index)
-        self.assertEqual(bigger['A'].dtype, np.float64)
-
-        smaller = self.intframe.reindex(columns=['A', 'B'])
-        self.assertEqual(smaller['A'].dtype, np.int64)
-
-    def test_reindex_like(self):
-        other = self.frame.reindex(index=self.frame.index[:10],
-                                   columns=['C', 'B'])
-
-        assert_frame_equal(other, self.frame.reindex_like(other))
-
-    def test_reindex_columns(self):
-        newFrame = self.frame.reindex(columns=['A', 'B', 'E'])
-
-        assert_series_equal(newFrame['B'], self.frame['B'])
-        self.assertTrue(np.isnan(newFrame['E']).all())
-        self.assertNotIn('C', newFrame)
-
-        # length zero
-        newFrame = self.frame.reindex(columns=[])
-        self.assertTrue(newFrame.empty)
-
-    def test_reindex_axes(self):
-
-        # GH 3317, reindexing by both axes loses freq of the index
-        from datetime import datetime
-        df = DataFrame(np.ones((3, 3)), index=[datetime(2012, 1, 1), datetime(2012, 1, 2), datetime(2012, 1, 3)], columns=['a', 'b', 'c'])
-        time_freq = date_range('2012-01-01', '2012-01-03', freq='d')
-        some_cols = ['a', 'b']
-
-        index_freq = df.reindex(index=time_freq).index.freq
-        both_freq = df.reindex(index=time_freq, columns=some_cols).index.freq
-        seq_freq = df.reindex(index=time_freq).reindex(columns=some_cols).index.freq
-        self.assertEqual(index_freq, both_freq)
-        self.assertEqual(index_freq, seq_freq)
-
-    def test_reindex_fill_value(self):
-        df = DataFrame(np.random.randn(10, 4))
-
-        # axis=0
-        result = df.reindex(lrange(15))
-        self.assertTrue(np.isnan(result.values[-5:]).all())
-
-        result = df.reindex(lrange(15), fill_value=0)
-        expected = df.reindex(lrange(15)).fillna(0)
-        assert_frame_equal(result, expected)
-
-        # axis=1
-        result = df.reindex(columns=lrange(5), fill_value=0.)
-        expected = df.copy()
-        expected[4] = 0.
-        assert_frame_equal(result, expected)
-
-        result = df.reindex(columns=lrange(5), fill_value=0)
-        expected = df.copy()
-        expected[4] = 0
-        assert_frame_equal(result, expected)
-
-        result = df.reindex(columns=lrange(5), fill_value='foo')
-        expected = df.copy()
-        expected[4] = 'foo'
-        assert_frame_equal(result, expected)
-
-        # reindex_axis
-        result = df.reindex_axis(lrange(15), fill_value=0., axis=0)
-        expected = df.reindex(lrange(15)).fillna(0)
-        assert_frame_equal(result, expected)
-
-        result = df.reindex_axis(lrange(5), fill_value=0., axis=1)
-        expected = df.reindex(columns=lrange(5)).fillna(0)
-        assert_frame_equal(result, expected)
-
-        # other dtypes
-        df['foo'] = 'foo'
-        result = df.reindex(lrange(15), fill_value=0)
-        expected = df.reindex(lrange(15)).fillna(0)
-        assert_frame_equal(result, expected)
-
-    def test_reindex_dups(self):
-
-        # GH4746, reindex on duplicate index error messages
-        arr = np.random.randn(10)
-        df = DataFrame(arr,index=[1,2,3,4,5,1,2,3,4,5])
-
-        # set index is ok
-        result = df.copy()
-        result.index = list(range(len(df)))
-        expected = DataFrame(arr,index=list(range(len(df))))
-        assert_frame_equal(result,expected)
-
-        # reindex fails
-        self.assertRaises(ValueError, df.reindex, index=list(range(len(df))))
-
-    def test_align(self):
-        af, bf = self.frame.align(self.frame)
-        self.assertIsNot(af._data, self.frame._data)
-
-        af, bf = self.frame.align(self.frame, copy=False)
-        self.assertIs(af._data, self.frame._data)
-
-        # axis = 0
-        other = self.frame.ix[:-5, :3]
-        af, bf = self.frame.align(other, axis=0, fill_value=-1)
-        self.assertTrue(bf.columns.equals(other.columns))
-        # test fill value
-        join_idx = self.frame.index.join(other.index)
-        diff_a = self.frame.index.difference(join_idx)
-        diff_b = other.index.difference(join_idx)
-        diff_a_vals = af.reindex(diff_a).values
-        diff_b_vals = bf.reindex(diff_b).values
-        self.assertTrue((diff_a_vals == -1).all())
-
-        af, bf = self.frame.align(other, join='right', axis=0)
-        self.assertTrue(bf.columns.equals(other.columns))
-        self.assertTrue(bf.index.equals(other.index))
-        self.assertTrue(af.index.equals(other.index))
-
-        # axis = 1
-        other = self.frame.ix[:-5, :3].copy()
-        af, bf = self.frame.align(other, axis=1)
-        self.assertTrue(bf.columns.equals(self.frame.columns))
-        self.assertTrue(bf.index.equals(other.index))
-
-        # test fill value
-        join_idx = self.frame.index.join(other.index)
-        diff_a = self.frame.index.difference(join_idx)
-        diff_b = other.index.difference(join_idx)
-        diff_a_vals = af.reindex(diff_a).values
-        diff_b_vals = bf.reindex(diff_b).values
-        self.assertTrue((diff_a_vals == -1).all())
-
-        af, bf = self.frame.align(other, join='inner', axis=1)
-        self.assertTrue(bf.columns.equals(other.columns))
-
-        af, bf = self.frame.align(other, join='inner', axis=1, method='pad')
-        self.assertTrue(bf.columns.equals(other.columns))
-
-        # test other non-float types
-        af, bf = self.intframe.align(other, join='inner', axis=1, method='pad')
-        self.assertTrue(bf.columns.equals(other.columns))
-
-        af, bf = self.mixed_frame.align(self.mixed_frame,
-                                        join='inner', axis=1, method='pad')
-        self.assertTrue(bf.columns.equals(self.mixed_frame.columns))
-
-        af, bf = self.frame.align(other.ix[:, 0], join='inner', axis=1,
-                                  method=None, fill_value=None)
-        self.assertTrue(bf.index.equals(Index([])))
-
-        af, bf = self.frame.align(other.ix[:, 0], join='inner', axis=1,
-                                  method=None, fill_value=0)
-        self.assertTrue(bf.index.equals(Index([])))
-
-        # mixed floats/ints
-        af, bf = self.mixed_float.align(other.ix[:, 0], join='inner', axis=1,
-                                        method=None, fill_value=0)
-        self.assertTrue(bf.index.equals(Index([])))
-
-        af, bf = self.mixed_int.align(other.ix[:, 0], join='inner', axis=1,
-                                      method=None, fill_value=0)
-        self.assertTrue(bf.index.equals(Index([])))
-
-        # try to align dataframe to series along bad axis
-        self.assertRaises(ValueError, self.frame.align, af.ix[0, :3],
-                          join='inner', axis=2)
-
-        # align dataframe to series with broadcast or not
-        idx = self.frame.index
-        s = Series(range(len(idx)), index=idx)
-
-        left, right = self.frame.align(s, axis=0)
-        tm.assert_index_equal(left.index, self.frame.index)
-        tm.assert_index_equal(right.index, self.frame.index)
-        self.assertTrue(isinstance(right, Series))
-
-        left, right = self.frame.align(s, broadcast_axis=1)
-        tm.assert_index_equal(left.index, self.frame.index)
-        expected = {}
-        for c in self.frame.columns:
-            expected[c] = s
-        expected = DataFrame(expected, index=self.frame.index,
-                             columns=self.frame.columns)
-        assert_frame_equal(right, expected)
-
-        # GH 9558
-        df = DataFrame({'a':[1,2,3], 'b':[4,5,6]})
-        result = df[df['a'] == 2]
-        expected = DataFrame([[2, 5]], index=[1], columns=['a', 'b'])
-        assert_frame_equal(result, expected)
-
-        result = df.where(df['a'] == 2, 0)
-        expected = DataFrame({'a':[0, 2, 0], 'b':[0, 5, 0]})
-        assert_frame_equal(result, expected)
-
-    def _check_align(self, a, b, axis, fill_axis, how, method, limit=None):
-        aa, ab = a.align(b, axis=axis, join=how, method=method, limit=limit,
-                         fill_axis=fill_axis)
-
-        join_index, join_columns = None, None
-
-        ea, eb = a, b
-        if axis is None or axis == 0:
-            join_index = a.index.join(b.index, how=how)
-            ea = ea.reindex(index=join_index)
-            eb = eb.reindex(index=join_index)
-
-        if axis is None or axis == 1:
-            join_columns = a.columns.join(b.columns, how=how)
-            ea = ea.reindex(columns=join_columns)
-            eb = eb.reindex(columns=join_columns)
-
-        ea = ea.fillna(axis=fill_axis, method=method, limit=limit)
-        eb = eb.fillna(axis=fill_axis, method=method, limit=limit)
-
-        assert_frame_equal(aa, ea)
-        assert_frame_equal(ab, eb)
-
-    def test_align_fill_method_inner(self):
-        for meth in ['pad', 'bfill']:
-            for ax in [0, 1, None]:
-                for fax in [0, 1]:
-                    self._check_align_fill('inner', meth, ax, fax)
-
-    def test_align_fill_method_outer(self):
-        for meth in ['pad', 'bfill']:
-            for ax in [0, 1, None]:
-                for fax in [0, 1]:
-                    self._check_align_fill('outer', meth, ax, fax)
-
-    def test_align_fill_method_left(self):
-        for meth in ['pad', 'bfill']:
-            for ax in [0, 1, None]:
-                for fax in [0, 1]:
-                    self._check_align_fill('left', meth, ax, fax)
-
-    def test_align_fill_method_right(self):
-        for meth in ['pad', 'bfill']:
-            for ax in [0, 1, None]:
-                for fax in [0, 1]:
-                    self._check_align_fill('right', meth, ax, fax)
-
-    def _check_align_fill(self, kind, meth, ax, fax):
-        left = self.frame.ix[0:4, :10]
-        right = self.frame.ix[2:, 6:]
-        empty = self.frame.ix[:0, :0]
-
-        self._check_align(left, right, axis=ax, fill_axis=fax,
-                          how=kind, method=meth)
-        self._check_align(left, right, axis=ax, fill_axis=fax,
-                          how=kind, method=meth, limit=1)
-
-        # empty left
-        self._check_align(empty, right, axis=ax, fill_axis=fax,
-                          how=kind, method=meth)
-        self._check_align(empty, right, axis=ax, fill_axis=fax,
-                          how=kind, method=meth, limit=1)
-
-        # empty right
-        self._check_align(left, empty, axis=ax, fill_axis=fax,
-                          how=kind, method=meth)
-        self._check_align(left, empty, axis=ax, fill_axis=fax,
-                          how=kind, method=meth, limit=1)
-
-        # both empty
-        self._check_align(empty, empty, axis=ax, fill_axis=fax,
-                          how=kind, method=meth)
-        self._check_align(empty, empty, axis=ax, fill_axis=fax,
-                          how=kind, method=meth, limit=1)
-
-    def test_align_int_fill_bug(self):
-        # GH #910
-        X = np.arange(10*10, dtype='float64').reshape(10, 10)
-        Y = np.ones((10, 1), dtype=int)
-
-        df1 = DataFrame(X)
-        df1['0.X'] = Y.squeeze()
-
-        df2 = df1.astype(float)
-
-        result = df1 - df1.mean()
-        expected = df2 - df2.mean()
-        assert_frame_equal(result, expected)
-
-    def test_align_multiindex(self):
-        # GH 10665
-        # same test cases as test_align_multiindex in test_series.py
-
-        midx = pd.MultiIndex.from_product([range(2), range(3), range(2)],
-                                          names=('a', 'b', 'c'))
-        idx = pd.Index(range(2), name='b')
-        df1 = pd.DataFrame(np.arange(12,dtype='int64'), index=midx)
-        df2 = pd.DataFrame(np.arange(2,dtype='int64'), index=idx)
-
-        # these must be the same results (but flipped)
-        res1l, res1r = df1.align(df2, join='left')
-        res2l, res2r = df2.align(df1, join='right')
-
-        expl = df1
-        assert_frame_equal(expl, res1l)
-        assert_frame_equal(expl, res2r)
-        expr = pd.DataFrame([0, 0, 1, 1, np.nan, np.nan] * 2, index=midx)
-        assert_frame_equal(expr, res1r)
-        assert_frame_equal(expr, res2l)
-
-        res1l, res1r = df1.align(df2, join='right')
-        res2l, res2r = df2.align(df1, join='left')
-
-        exp_idx = pd.MultiIndex.from_product([range(2), range(2), range(2)],
-                                             names=('a', 'b', 'c'))
-        expl = pd.DataFrame([0, 1, 2, 3, 6, 7, 8, 9], index=exp_idx)
-        assert_frame_equal(expl, res1l)
-        assert_frame_equal(expl, res2r)
-        expr = pd.DataFrame([0, 0, 1, 1] * 2, index=exp_idx)
-        assert_frame_equal(expr, res1r)
-        assert_frame_equal(expr, res2l)
-
-    def test_where(self):
-        default_frame = DataFrame(np.random.randn(5, 3),
-                                  columns=['A', 'B', 'C'])
-
-        def _safe_add(df):
-            # only add to the numeric items
-            def is_ok(s):
-                return issubclass(s.dtype.type, (np.integer,np.floating)) and s.dtype != 'uint8'
-            return DataFrame(dict([ (c,s+1) if is_ok(s) else (c,s) for c, s in compat.iteritems(df) ]))
-
-        def _check_get(df, cond, check_dtypes = True):
-            other1 = _safe_add(df)
-            rs = df.where(cond, other1)
-            rs2 = df.where(cond.values, other1)
-            for k, v in rs.iteritems():
-                exp = Series(np.where(cond[k], df[k], other1[k]),index=v.index)
-                assert_series_equal(v, exp, check_names=False)
-            assert_frame_equal(rs, rs2)
-
-            # dtypes
-            if check_dtypes:
-                self.assertTrue((rs.dtypes == df.dtypes).all() == True)
-
-        # check getting
-        for df in [ default_frame, self.mixed_frame, self.mixed_float, self.mixed_int ]:
-            cond = df > 0
-            _check_get(df, cond)
-
-
-        # upcasting case (GH # 2794)
-        df = DataFrame(dict([ (c,Series([1]*3,dtype=c)) for c in ['int64','int32','float32','float64'] ]))
-        df.ix[1,:] = 0
-        result = df.where(df>=0).get_dtype_counts()
-
-        #### when we don't preserve boolean casts ####
-        #expected = Series({ 'float32' : 1, 'float64' : 3 })
-
-        expected = Series({ 'float32' : 1, 'float64' : 1, 'int32' : 1, 'int64' : 1 })
-        assert_series_equal(result, expected)
-
-        # aligning
-        def _check_align(df, cond, other, check_dtypes = True):
-            rs = df.where(cond, other)
-            for i, k in enumerate(rs.columns):
-                result = rs[k]
-                d = df[k].values
-                c = cond[k].reindex(df[k].index).fillna(False).values
-
-                if np.isscalar(other):
-                    o = other
-                else:
-                    if isinstance(other,np.ndarray):
-                        o = Series(other[:,i],index=result.index).values
-                    else:
-                        o = other[k].values
-
-                new_values = d if c.all() else np.where(c, d, o)
-                expected = Series(new_values, index=result.index, name=k)
-
-                # since we can't always have the correct numpy dtype
-                # as numpy doesn't know how to downcast, don't check
-                assert_series_equal(result, expected, check_dtype=False)
-
-            # dtypes
-            # can't check dtype when other is an ndarray
-
-            if check_dtypes and not isinstance(other,np.ndarray):
-                self.assertTrue((rs.dtypes == df.dtypes).all() == True)
-
-        for df in [ self.mixed_frame, self.mixed_float, self.mixed_int ]:
-
-            # other is a frame
-            cond = (df > 0)[1:]
-            _check_align(df, cond, _safe_add(df))
-
-            # check other is ndarray
-            cond = df > 0
-            _check_align(df, cond, (_safe_add(df).values))
-
-            # integers are upcast, so don't check the dtypes
-            cond = df > 0
-            check_dtypes = all([ not issubclass(s.type,np.integer) for s in df.dtypes ])
-            _check_align(df, cond, np.nan, check_dtypes = check_dtypes)
-
-        # invalid conditions
-        df = default_frame
-        err1 = (df + 1).values[0:2, :]
-        self.assertRaises(ValueError, df.where, cond, err1)
-
-        err2 = cond.ix[:2, :].values
-        other1 = _safe_add(df)
-        self.assertRaises(ValueError, df.where, err2, other1)
-
-        self.assertRaises(ValueError, df.mask, True)
-        self.assertRaises(ValueError, df.mask, 0)
-
-        # where inplace
-        def _check_set(df, cond, check_dtypes = True):
-            dfi = df.copy()
-            econd = cond.reindex_like(df).fillna(True)
-            expected = dfi.mask(~econd)
-
-            dfi.where(cond, np.nan, inplace=True)
-            assert_frame_equal(dfi, expected)
-
-            # dtypes (and confirm upcasts)x
-            if check_dtypes:
-                for k, v in compat.iteritems(df.dtypes):
-                    if issubclass(v.type,np.integer) and not cond[k].all():
-                        v = np.dtype('float64')
-                    self.assertEqual(dfi[k].dtype, v)
-
-        for df in [ default_frame, self.mixed_frame, self.mixed_float, self.mixed_int ]:
-
-            cond = df > 0
-            _check_set(df, cond)
-
-            cond = df >= 0
-            _check_set(df, cond)
-
-            # aligining
-            cond = (df >= 0)[1:]
-            _check_set(df, cond)
-
-        # GH 10218
-        # test DataFrame.where with Series slicing
-        df = DataFrame({'a': range(3), 'b': range(4, 7)})
-        result = df.where(df['a'] == 1)
-        expected = df[df['a'] == 1].reindex(df.index)
-        assert_frame_equal(result, expected)
-
-    def test_where_bug(self):
-
-        # GH 2793
-
-        df = DataFrame({'a': [1.0, 2.0, 3.0, 4.0], 'b': [4.0, 3.0, 2.0, 1.0]}, dtype = 'float64')
-        expected = DataFrame({'a': [np.nan, np.nan, 3.0, 4.0], 'b': [4.0, 3.0, np.nan, np.nan]}, dtype = 'float64')
-        result = df.where(df > 2, np.nan)
-        assert_frame_equal(result, expected)
-
-        result = df.copy()
-        result.where(result > 2, np.nan, inplace=True)
-        assert_frame_equal(result, expected)
-
-        # mixed
-        for dtype in ['int16','int8','int32','int64']:
-            df = DataFrame({'a': np.array([1, 2, 3, 4],dtype=dtype), 'b': np.array([4.0, 3.0, 2.0, 1.0], dtype = 'float64') })
-            expected = DataFrame({'a': [np.nan, np.nan, 3.0, 4.0], 'b': [4.0, 3.0, np.nan, np.nan]}, dtype = 'float64')
-            result = df.where(df > 2, np.nan)
-            assert_frame_equal(result, expected)
-
-            result = df.copy()
-            result.where(result > 2, np.nan, inplace=True)
-            assert_frame_equal(result, expected)
-
-        # transpositional issue
-        # GH7506
-        a = DataFrame({ 0 : [1,2], 1 : [3,4], 2 : [5,6]})
-        b = DataFrame({ 0 : [np.nan,8], 1:[9,np.nan], 2:[np.nan,np.nan]})
-        do_not_replace = b.isnull() | (a > b)
-
-        expected = a.copy()
-        expected[~do_not_replace] = b
-
-        result = a.where(do_not_replace,b)
-        assert_frame_equal(result,expected)
-
-        a = DataFrame({ 0 : [4,6], 1 : [1,0]})
-        b = DataFrame({ 0 : [np.nan,3],1:[3,np.nan]})
-        do_not_replace = b.isnull() | (a > b)
-
-        expected = a.copy()
-        expected[~do_not_replace] = b
-
-        result = a.where(do_not_replace,b)
-        assert_frame_equal(result,expected)
-
-    def test_where_datetime(self):
-
-        # GH 3311
-        df = DataFrame(dict(A = date_range('20130102',periods=5),
-                            B = date_range('20130104',periods=5),
-                            C = np.random.randn(5)))
-
-        stamp = datetime(2013,1,3)
-        result = df[df>stamp]
-        expected = df.copy()
-        expected.loc[[0,1],'A'] = np.nan
-        assert_frame_equal(result,expected)
-
-    def test_where_none(self):
-        # GH 4667
-        # setting with None changes dtype
-        df = DataFrame({'series': Series(range(10))}).astype(float)
-        df[df > 7] = None
-        expected = DataFrame({'series': Series([0,1,2,3,4,5,6,7,np.nan,np.nan]) })
-        assert_frame_equal(df, expected)
-
-        # GH 7656
-        df = DataFrame([{'A': 1, 'B': np.nan, 'C': 'Test'}, {'A': np.nan, 'B': 'Test', 'C': np.nan}])
-        expected = df.where(~isnull(df), None)
-        with tm.assertRaisesRegexp(TypeError, 'boolean setting on mixed-type'):
-            df.where(~isnull(df), None, inplace=True)
-
-    def test_where_align(self):
-
-        def create():
-            df = DataFrame(np.random.randn(10,3))
-            df.iloc[3:5,0] = np.nan
-            df.iloc[4:6,1] = np.nan
-            df.iloc[5:8,2] = np.nan
-            return df
-
-        # series
-        df = create()
-        expected = df.fillna(df.mean())
-        result = df.where(pd.notnull(df),df.mean(),axis='columns')
-        assert_frame_equal(result, expected)
-
-        df.where(pd.notnull(df),df.mean(),inplace=True,axis='columns')
-        assert_frame_equal(df, expected)
-
-        df = create().fillna(0)
-        expected = df.apply(lambda x, y: x.where(x>0,y), y=df[0])
-        result = df.where(df>0,df[0],axis='index')
-        assert_frame_equal(result, expected)
-        result = df.where(df>0,df[0],axis='rows')
-        assert_frame_equal(result, expected)
-
-        # frame
-        df = create()
-        expected = df.fillna(1)
-        result = df.where(pd.notnull(df),DataFrame(1,index=df.index,columns=df.columns))
-        assert_frame_equal(result, expected)
-
-    def test_where_complex(self):
-        # GH 6345
-        expected = DataFrame([[1+1j, 2], [np.nan, 4+1j]], columns=['a', 'b'])
-        df = DataFrame([[1+1j, 2], [5+1j, 4+1j]], columns=['a', 'b'])
-        df[df.abs() >= 5] = np.nan
-        assert_frame_equal(df,expected)
-
-    def test_where_axis(self):
-        # GH 9736
-        df = DataFrame(np.random.randn(2, 2))
-        mask = DataFrame([[False, False], [False, False]])
-        s = Series([0, 1])
-
-        expected = DataFrame([[0, 0], [1, 1]], dtype='float64')
-        result = df.where(mask, s, axis='index')
-        assert_frame_equal(result, expected)
-
-        result = df.copy()
-        result.where(mask, s, axis='index', inplace=True)
-        assert_frame_equal(result, expected)
-
-        expected = DataFrame([[0, 1], [0, 1]], dtype='float64')
-        result = df.where(mask, s, axis='columns')
-        assert_frame_equal(result, expected)
-
-        result = df.copy()
-        result.where(mask, s, axis='columns', inplace=True)
-        assert_frame_equal(result, expected)
-
-        # Upcast needed
-        df = DataFrame([[1, 2], [3, 4]], dtype='int64')
-        mask = DataFrame([[False, False], [False, False]])
-        s = Series([0, np.nan])
-
-        expected = DataFrame([[0, 0], [np.nan, np.nan]], dtype='float64')
-        result = df.where(mask, s, axis='index')
-        assert_frame_equal(result, expected)
-
-        result = df.copy()
-        result.where(mask, s, axis='index', inplace=True)
-        assert_frame_equal(result, expected)
-
-        expected = DataFrame([[0, np.nan], [0, np.nan]], dtype='float64')
-        result = df.where(mask, s, axis='columns')
-        assert_frame_equal(result, expected)
-
-        expected = DataFrame({0 : np.array([0, 0], dtype='int64'),
-                              1 : np.array([np.nan, np.nan], dtype='float64')})
-        result = df.copy()
-        result.where(mask, s, axis='columns', inplace=True)
-        assert_frame_equal(result, expected)
-
-        # Multiple dtypes (=> multiple Blocks)
-        df = pd.concat([DataFrame(np.random.randn(10, 2)),
-                        DataFrame(np.random.randint(0, 10, size=(10, 2)))],
-                       ignore_index=True, axis=1)
-        mask = DataFrame(False, columns=df.columns, index=df.index)
-        s1 = Series(1, index=df.columns)
-        s2 = Series(2, index=df.index)
-
-        result = df.where(mask, s1, axis='columns')
-        expected = DataFrame(1.0, columns=df.columns, index=df.index)
-        expected[2] = expected[2].astype(int)
-        expected[3] = expected[3].astype(int)
-        assert_frame_equal(result, expected)
-
-        result = df.copy()
-        result.where(mask, s1, axis='columns', inplace=True)
-        assert_frame_equal(result, expected)
-
-        result = df.where(mask, s2, axis='index')
-        expected = DataFrame(2.0, columns=df.columns, index=df.index)
-        expected[2] = expected[2].astype(int)
-        expected[3] = expected[3].astype(int)
-        assert_frame_equal(result, expected)
-
-        result = df.copy()
-        result.where(mask, s2, axis='index', inplace=True)
-        assert_frame_equal(result, expected)
-
-        # DataFrame vs DataFrame
-        d1 = df.copy().drop(1, axis=0)
-        expected = df.copy()
-        expected.loc[1, :] = np.nan
-
-        result = df.where(mask, d1)
-        assert_frame_equal(result, expected)
-        result = df.where(mask, d1, axis='index')
-        assert_frame_equal(result, expected)
-        result = df.copy()
-        result.where(mask, d1, inplace=True)
-        assert_frame_equal(result, expected)
-        result = df.copy()
-        result.where(mask, d1, inplace=True, axis='index')
-        assert_frame_equal(result, expected)
-
-        d2 = df.copy().drop(1, axis=1)
-        expected = df.copy()
-        expected.loc[:, 1] = np.nan
-
-        result = df.where(mask, d2)
-        assert_frame_equal(result, expected)
-        result = df.where(mask, d2, axis='columns')
-        assert_frame_equal(result, expected)
-        result = df.copy()
-        result.where(mask, d2, inplace=True)
-        assert_frame_equal(result, expected)
-        result = df.copy()
-        result.where(mask, d2, inplace=True, axis='columns')
-        assert_frame_equal(result, expected)
-
-    def test_mask(self):
-        df = DataFrame(np.random.randn(5, 3))
-        cond = df > 0
-
-        rs = df.where(cond, np.nan)
-        assert_frame_equal(rs, df.mask(df <= 0))
-        assert_frame_equal(rs, df.mask(~cond))
-
-        other = DataFrame(np.random.randn(5, 3))
-        rs = df.where(cond, other)
-        assert_frame_equal(rs, df.mask(df <= 0, other))
-        assert_frame_equal(rs, df.mask(~cond, other))
-
-    def test_mask_inplace(self):
-        # GH8801
-        df = DataFrame(np.random.randn(5, 3))
-        cond = df > 0
-
-        rdf = df.copy()
-
-        rdf.where(cond, inplace=True)
-        assert_frame_equal(rdf, df.where(cond))
-        assert_frame_equal(rdf, df.mask(~cond))
-
-        rdf = df.copy()
-        rdf.where(cond, -df, inplace=True)
-        assert_frame_equal(rdf, df.where(cond, -df))
-        assert_frame_equal(rdf, df.mask(~cond, -df))
-
-    def test_mask_edge_case_1xN_frame(self):
-        # GH4071
-        df = DataFrame([[1, 2]])
-        res = df.mask(DataFrame([[True, False]]))
-        expec = DataFrame([[nan, 2]])
-        assert_frame_equal(res, expec)
-
-    #----------------------------------------------------------------------
-    # Transposing
-
-    def test_transpose(self):
-        frame = self.frame
-        dft = frame.T
-        for idx, series in compat.iteritems(dft):
-            for col, value in compat.iteritems(series):
-                if np.isnan(value):
-                    self.assertTrue(np.isnan(frame[col][idx]))
-                else:
-                    self.assertEqual(value, frame[col][idx])
-
-        # mixed type
-        index, data = tm.getMixedTypeDict()
-        mixed = DataFrame(data, index=index)
-
-        mixed_T = mixed.T
-        for col, s in compat.iteritems(mixed_T):
-            self.assertEqual(s.dtype, np.object_)
-
-    def test_transpose_get_view(self):
-        dft = self.frame.T
-        dft.values[:, 5:10] = 5
-
-        self.assertTrue((self.frame.values[5:10] == 5).all())
-
-    #----------------------------------------------------------------------
-    # Renaming
-
-    def test_rename(self):
-        mapping = {
-            'A': 'a',
-            'B': 'b',
-            'C': 'c',
-            'D': 'd'
-        }
-
-        renamed = self.frame.rename(columns=mapping)
-        renamed2 = self.frame.rename(columns=str.lower)
-
-        assert_frame_equal(renamed, renamed2)
-        assert_frame_equal(renamed2.rename(columns=str.upper),
-                           self.frame, check_names=False)
-
-        # index
-        data = {
-            'A': {'foo': 0, 'bar': 1}
-        }
-
-        # gets sorted alphabetical
- df = DataFrame(data) - renamed = df.rename(index={'foo': 'bar', 'bar': 'foo'}) - self.assert_numpy_array_equal(renamed.index, ['foo', 'bar']) - - renamed = df.rename(index=str.upper) - self.assert_numpy_array_equal(renamed.index, ['BAR', 'FOO']) - - # have to pass something - self.assertRaises(TypeError, self.frame.rename) - - # partial columns - renamed = self.frame.rename(columns={'C': 'foo', 'D': 'bar'}) - self.assert_numpy_array_equal(renamed.columns, ['A', 'B', 'foo', 'bar']) - - # other axis - renamed = self.frame.T.rename(index={'C': 'foo', 'D': 'bar'}) - self.assert_numpy_array_equal(renamed.index, ['A', 'B', 'foo', 'bar']) - - # index with name - index = Index(['foo', 'bar'], name='name') - renamer = DataFrame(data, index=index) - renamed = renamer.rename(index={'foo': 'bar', 'bar': 'foo'}) - self.assert_numpy_array_equal(renamed.index, ['bar', 'foo']) - self.assertEqual(renamed.index.name, renamer.index.name) - - # MultiIndex - tuples_index = [('foo1', 'bar1'), ('foo2', 'bar2')] - tuples_columns = [('fizz1', 'buzz1'), ('fizz2', 'buzz2')] - index = MultiIndex.from_tuples(tuples_index, names=['foo', 'bar']) - columns = MultiIndex.from_tuples(tuples_columns, names=['fizz', 'buzz']) - renamer = DataFrame([(0,0),(1,1)], index=index, columns=columns) - renamed = renamer.rename(index={'foo1': 'foo3', 'bar2': 'bar3'}, - columns={'fizz1': 'fizz3', 'buzz2': 'buzz3'}) - new_index = MultiIndex.from_tuples([('foo3', 'bar1'), ('foo2', 'bar3')]) - new_columns = MultiIndex.from_tuples([('fizz3', 'buzz1'), ('fizz2', 'buzz3')]) - self.assert_numpy_array_equal(renamed.index, new_index) - self.assert_numpy_array_equal(renamed.columns, new_columns) - self.assertEqual(renamed.index.names, renamer.index.names) - self.assertEqual(renamed.columns.names, renamer.columns.names) - - def test_rename_nocopy(self): - renamed = self.frame.rename(columns={'C': 'foo'}, copy=False) - renamed['foo'] = 1. 
- self.assertTrue((self.frame['C'] == 1.).all()) - - def test_rename_inplace(self): - self.frame.rename(columns={'C': 'foo'}) - self.assertIn('C', self.frame) - self.assertNotIn('foo', self.frame) - - c_id = id(self.frame['C']) - frame = self.frame.copy() - frame.rename(columns={'C': 'foo'}, inplace=True) - - self.assertNotIn('C', frame) - self.assertIn('foo', frame) - self.assertNotEqual(id(frame['foo']), c_id) - - def test_rename_bug(self): - # GH 5344 - # rename set ref_locs, and set_index was not resetting - df = DataFrame({ 0 : ['foo','bar'], 1 : ['bah','bas'], 2 : [1,2]}) - df = df.rename(columns={0 : 'a'}) - df = df.rename(columns={1 : 'b'}) - df = df.set_index(['a','b']) - df.columns = ['2001-01-01'] - expected = DataFrame([[1],[2]],index=MultiIndex.from_tuples([('foo','bah'),('bar','bas')], - names=['a','b']), - columns=['2001-01-01']) - assert_frame_equal(df,expected) - - #---------------------------------------------------------------------- - # Time series related - def test_diff(self): - the_diff = self.tsframe.diff(1) - - assert_series_equal(the_diff['A'], - self.tsframe['A'] - self.tsframe['A'].shift(1)) - - # int dtype - a = 10000000000000000 - b = a + 1 - s = Series([a, b]) - - rs = DataFrame({'s': s}).diff() - self.assertEqual(rs.s[1], 1) - - # mixed numeric - tf = self.tsframe.astype('float32') - the_diff = tf.diff(1) - assert_series_equal(the_diff['A'], - tf['A'] - tf['A'].shift(1)) - - # issue 10907 - df = pd.DataFrame({'y': pd.Series([2]), 'z': pd.Series([3])}) - df.insert(0, 'x', 1) - result = df.diff(axis=1) - expected = pd.DataFrame({'x':np.nan, 'y':pd.Series(1), 'z':pd.Series(1)}).astype('float64') - assert_frame_equal(result, expected) - - - def test_diff_timedelta(self): - # GH 4533 - df = DataFrame(dict(time=[Timestamp('20130101 9:01'), - Timestamp('20130101 9:02')], - value=[1.0,2.0])) - - res = df.diff() - exp = DataFrame([[pd.NaT, np.nan], - [Timedelta('00:01:00'), 1]], - columns=['time', 'value']) - assert_frame_equal(res, exp) - - 
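As context for the `diff` tests removed above: the core identity they assert is that `df.diff(n)` equals the frame minus its own `shift(n)`. A minimal standalone sketch (illustrative only, not part of this patch):

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({"A": [1.0, 3.0, 6.0, 10.0]})

# diff(1) subtracts, row-wise, the value one step back
result = df.diff(1)
expected = df - df.shift(1)

assert result.equals(expected)
assert np.isnan(result["A"].iloc[0])  # first row has no predecessor
assert result["A"].iloc[1:].tolist() == [2.0, 3.0, 4.0]
```

The same identity holds for the timedelta case (GH 4533): differencing a datetime column yields `Timedelta` values with `NaT` in the first row.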
def test_diff_mixed_dtype(self): - df = DataFrame(np.random.randn(5, 3)) - df['A'] = np.array([1, 2, 3, 4, 5], dtype=object) - - result = df.diff() - self.assertEqual(result[0].dtype, np.float64) - - def test_diff_neg_n(self): - rs = self.tsframe.diff(-1) - xp = self.tsframe - self.tsframe.shift(-1) - assert_frame_equal(rs, xp) - - def test_diff_float_n(self): - rs = self.tsframe.diff(1.) - xp = self.tsframe.diff(1) - assert_frame_equal(rs, xp) - - def test_diff_axis(self): - # GH 9727 - df = DataFrame([[1., 2.], [3., 4.]]) - assert_frame_equal(df.diff(axis=1), DataFrame([[np.nan, 1.], [np.nan, 1.]])) - assert_frame_equal(df.diff(axis=0), DataFrame([[np.nan, np.nan], [2., 2.]])) - - def test_pct_change(self): - rs = self.tsframe.pct_change(fill_method=None) - assert_frame_equal(rs, self.tsframe / self.tsframe.shift(1) - 1) - - rs = self.tsframe.pct_change(2) - filled = self.tsframe.fillna(method='pad') - assert_frame_equal(rs, filled / filled.shift(2) - 1) - - rs = self.tsframe.pct_change(fill_method='bfill', limit=1) - filled = self.tsframe.fillna(method='bfill', limit=1) - assert_frame_equal(rs, filled / filled.shift(1) - 1) - - rs = self.tsframe.pct_change(freq='5D') - filled = self.tsframe.fillna(method='pad') - assert_frame_equal(rs, filled / filled.shift(freq='5D') - 1) - - def test_pct_change_shift_over_nas(self): - s = Series([1., 1.5, np.nan, 2.5, 3.]) - - df = DataFrame({'a': s, 'b': s}) - - chg = df.pct_change() - expected = Series([np.nan, 0.5, np.nan, 2.5 / 1.5 - 1, .2]) - edf = DataFrame({'a': expected, 'b': expected}) - assert_frame_equal(chg, edf) - - def test_shift(self): - # naive shift - shiftedFrame = self.tsframe.shift(5) - self.assertTrue(shiftedFrame.index.equals(self.tsframe.index)) - - shiftedSeries = self.tsframe['A'].shift(5) - assert_series_equal(shiftedFrame['A'], shiftedSeries) - - shiftedFrame = self.tsframe.shift(-5) - self.assertTrue(shiftedFrame.index.equals(self.tsframe.index)) - - shiftedSeries = self.tsframe['A'].shift(-5) - 
assert_series_equal(shiftedFrame['A'], shiftedSeries) - - # shift by 0 - unshifted = self.tsframe.shift(0) - assert_frame_equal(unshifted, self.tsframe) - - # shift by DateOffset - shiftedFrame = self.tsframe.shift(5, freq=datetools.BDay()) - self.assertEqual(len(shiftedFrame), len(self.tsframe)) - - shiftedFrame2 = self.tsframe.shift(5, freq='B') - assert_frame_equal(shiftedFrame, shiftedFrame2) - - d = self.tsframe.index[0] - shifted_d = d + datetools.BDay(5) - assert_series_equal(self.tsframe.xs(d), - shiftedFrame.xs(shifted_d), check_names=False) - - # shift int frame - int_shifted = self.intframe.shift(1) - - # Shifting with PeriodIndex - ps = tm.makePeriodFrame() - shifted = ps.shift(1) - unshifted = shifted.shift(-1) - self.assertTrue(shifted.index.equals(ps.index)) - - tm.assert_dict_equal(unshifted.ix[:, 0].valid(), ps.ix[:, 0], - compare_keys=False) - - shifted2 = ps.shift(1, 'B') - shifted3 = ps.shift(1, datetools.bday) - assert_frame_equal(shifted2, shifted3) - assert_frame_equal(ps, shifted2.shift(-1, 'B')) - - assertRaisesRegexp(ValueError, 'does not match PeriodIndex freq', - ps.shift, freq='D') - - - # shift other axis - # GH 6371 - df = DataFrame(np.random.rand(10,5)) - expected = pd.concat([DataFrame(np.nan,index=df.index,columns=[0]),df.iloc[:,0:-1]],ignore_index=True,axis=1) - result = df.shift(1,axis=1) - assert_frame_equal(result,expected) - - # shift named axis - df = DataFrame(np.random.rand(10,5)) - expected = pd.concat([DataFrame(np.nan,index=df.index,columns=[0]),df.iloc[:,0:-1]],ignore_index=True,axis=1) - result = df.shift(1,axis='columns') - assert_frame_equal(result,expected) - - def test_shift_bool(self): - df = DataFrame({'high': [True, False], - 'low': [False, False]}) - rs = df.shift(1) - xp = DataFrame(np.array([[np.nan, np.nan], - [True, False]], dtype=object), - columns=['high', 'low']) - assert_frame_equal(rs, xp) - - def test_shift_categorical(self): - # GH 9416 - s1 = pd.Series(['a', 'b', 'c'], dtype='category') - s2 = pd.Series(['A', 'B', 'C'], dtype='category') - df = DataFrame({'one': s1, 'two': s2}) - rs = df.shift(1) - xp = DataFrame({'one': s1.shift(1), 'two': s2.shift(1)}) - assert_frame_equal(rs, xp) - - def test_shift_empty(self): - # Regression test for #8019 - df = DataFrame({'foo': []}) - rs = df.shift(-1) - - assert_frame_equal(df, rs) - - def test_tshift(self): - # PeriodIndex - ps = tm.makePeriodFrame() - shifted = ps.tshift(1) - unshifted = shifted.tshift(-1) - - assert_frame_equal(unshifted, ps) - - shifted2 = ps.tshift(freq='B') - assert_frame_equal(shifted, shifted2) - - shifted3 = ps.tshift(freq=datetools.bday) - assert_frame_equal(shifted, shifted3) - - assertRaisesRegexp(ValueError, 'does not match', ps.tshift, freq='M') - - # DatetimeIndex - shifted = self.tsframe.tshift(1) - unshifted = shifted.tshift(-1) - - assert_frame_equal(self.tsframe, unshifted) - - shifted2 = self.tsframe.tshift(freq=self.tsframe.index.freq) - assert_frame_equal(shifted, shifted2) - - inferred_ts = DataFrame(self.tsframe.values, - Index(np.asarray(self.tsframe.index)), - columns=self.tsframe.columns) - shifted = inferred_ts.tshift(1) - unshifted = shifted.tshift(-1) - assert_frame_equal(shifted, self.tsframe.tshift(1)) - assert_frame_equal(unshifted, inferred_ts) - - no_freq = self.tsframe.ix[[0, 5, 7], :] - self.assertRaises(ValueError, no_freq.tshift) - - def test_apply(self): - # ufunc - applied = self.frame.apply(np.sqrt) - assert_series_equal(np.sqrt(self.frame['A']), applied['A']) - - # aggregator - applied = self.frame.apply(np.mean) - self.assertEqual(applied['A'], np.mean(self.frame['A'])) - - d = self.frame.index[0] - applied = self.frame.apply(np.mean, axis=1) - self.assertEqual(applied[d], np.mean(self.frame.xs(d))) - self.assertIs(applied.index, self.frame.index) # want this - - # invalid axis - df = DataFrame( - [[1, 2, 3], [4, 5, 6], [7, 8, 9]], index=['a', 'a', 'c']) - self.assertRaises(ValueError, df.apply, lambda x: x, 2) - - # GH9573 - df = DataFrame({'c0':['A','A','B','B'], 'c1':['C','C','D','D']}) - df = df.apply(lambda ts: ts.astype('category')) - self.assertEqual(df.shape, (4, 2)) - self.assertTrue(isinstance(df['c0'].dtype, com.CategoricalDtype)) - self.assertTrue(isinstance(df['c1'].dtype, com.CategoricalDtype)) - - def test_apply_mixed_datetimelike(self): - # mixed datetimelike - # GH 7778 - df = DataFrame({ 'A' : date_range('20130101',periods=3), 'B' : pd.to_timedelta(np.arange(3),unit='s') }) - result = df.apply(lambda x: x, axis=1) - assert_frame_equal(result, df) - - def test_apply_empty(self): - # empty - applied = self.empty.apply(np.sqrt) - self.assertTrue(applied.empty) - - applied = self.empty.apply(np.mean) - self.assertTrue(applied.empty) - - no_rows = self.frame[:0] - result = no_rows.apply(lambda x: x.mean()) - expected = Series(np.nan, index=self.frame.columns) - assert_series_equal(result, expected) - - no_cols = self.frame.ix[:, []] - result = no_cols.apply(lambda x: x.mean(), axis=1) - expected = Series(np.nan, index=self.frame.index) - assert_series_equal(result, expected) - - # 2476 - xp = DataFrame(index=['a']) - rs = xp.apply(lambda x: x['a'], axis=1) - assert_frame_equal(xp, rs) - - # reduce with an empty DataFrame - x = [] - result = self.empty.apply(x.append, axis=1, reduce=False) - assert_frame_equal(result, self.empty) - result = self.empty.apply(x.append, axis=1, reduce=True) - assert_series_equal(result, Series([], index=pd.Index([], dtype=object))) - - empty_with_cols = DataFrame(columns=['a', 'b', 'c']) - result = empty_with_cols.apply(x.append, axis=1, reduce=False) - assert_frame_equal(result, empty_with_cols) - result = empty_with_cols.apply(x.append, axis=1, reduce=True) - assert_series_equal(result, Series([], index=pd.Index([], dtype=object))) - - # Ensure that x.append hasn't been called - self.assertEqual(x, []) - - def test_apply_standard_nonunique(self): - df = DataFrame( - [[1, 2, 3], [4, 5, 6], [7, 8, 9]], index=['a', 'a', 'c']) - rs = df.apply(lambda s: s[0], axis=1) - xp = Series([1, 4, 7], ['a', 'a', 'c']) - assert_series_equal(rs, xp) - - rs = df.T.apply(lambda s: s[0], axis=0) - assert_series_equal(rs, xp) - - def test_apply_broadcast(self): - broadcasted = self.frame.apply(np.mean, broadcast=True) - agged = self.frame.apply(np.mean) - - for col, ts in compat.iteritems(broadcasted): - self.assertTrue((ts == agged[col]).all()) - - broadcasted = self.frame.apply(np.mean, axis=1, broadcast=True) - agged = self.frame.apply(np.mean, axis=1) - for idx in broadcasted.index: - self.assertTrue((broadcasted.xs(idx) == agged[idx]).all()) - - def test_apply_raw(self): - result0 = self.frame.apply(np.mean, raw=True) - result1 = self.frame.apply(np.mean, axis=1, raw=True) - - expected0 = self.frame.apply(lambda x: x.values.mean()) - expected1 = self.frame.apply(lambda x: x.values.mean(), axis=1) - - assert_series_equal(result0, expected0) - assert_series_equal(result1, expected1) - - # no reduction - result = self.frame.apply(lambda x: x * 2, raw=True) - expected = self.frame * 2 - assert_frame_equal(result, expected) - - def test_apply_axis1(self): - d = self.frame.index[0] - tapplied = self.frame.apply(np.mean, axis=1) - self.assertEqual(tapplied[d], np.mean(self.frame.xs(d))) - - def test_apply_ignore_failures(self): - result = self.mixed_frame._apply_standard(np.mean, 0, - ignore_failures=True) - expected = self.mixed_frame._get_numeric_data().apply(np.mean) - assert_series_equal(result, expected) - - def test_apply_mixed_dtype_corner(self): - df = DataFrame({'A': ['foo'], - 'B': [1.]}) - result = df[:0].apply(np.mean, axis=1) - # the result here is actually kind of ambiguous, should it be a Series - # or a DataFrame? 
- expected = Series(np.nan, index=pd.Index([], dtype='int64')) - assert_series_equal(result, expected) - - df = DataFrame({'A': ['foo'], - 'B': [1.]}) - result = df.apply(lambda x: x['A'], axis=1) - expected = Series(['foo'],index=[0]) - assert_series_equal(result, expected) - - result = df.apply(lambda x: x['B'], axis=1) - expected = Series([1.],index=[0]) - assert_series_equal(result, expected) - - def test_apply_empty_infer_type(self): - no_cols = DataFrame(index=['a', 'b', 'c']) - no_index = DataFrame(columns=['a', 'b', 'c']) - - def _check(df, f): - test_res = f(np.array([], dtype='f8')) - is_reduction = not isinstance(test_res, np.ndarray) - - def _checkit(axis=0, raw=False): - res = df.apply(f, axis=axis, raw=raw) - if is_reduction: - agg_axis = df._get_agg_axis(axis) - tm.assertIsInstance(res, Series) - self.assertIs(res.index, agg_axis) - else: - tm.assertIsInstance(res, DataFrame) - - _checkit() - _checkit(axis=1) - _checkit(raw=True) - _checkit(axis=0, raw=True) - - _check(no_cols, lambda x: x) - _check(no_cols, lambda x: x.mean()) - _check(no_index, lambda x: x) - _check(no_index, lambda x: x.mean()) - - result = no_cols.apply(lambda x: x.mean(), broadcast=True) - tm.assertIsInstance(result, DataFrame) - - def test_apply_with_args_kwds(self): - def add_some(x, howmuch=0): - return x + howmuch - - def agg_and_add(x, howmuch=0): - return x.mean() + howmuch - - def subtract_and_divide(x, sub, divide=1): - return (x - sub) / divide - - result = self.frame.apply(add_some, howmuch=2) - exp = self.frame.apply(lambda x: x + 2) - assert_frame_equal(result, exp) - - result = self.frame.apply(agg_and_add, howmuch=2) - exp = self.frame.apply(lambda x: x.mean() + 2) - assert_series_equal(result, exp) - - res = self.frame.apply(subtract_and_divide, args=(2,), divide=2) - exp = self.frame.apply(lambda x: (x - 2.) / 2.) 
- assert_frame_equal(res, exp) - - def test_apply_yield_list(self): - result = self.frame.apply(list) - assert_frame_equal(result, self.frame) - - def test_apply_reduce_Series(self): - self.frame.ix[::2, 'A'] = np.nan - expected = self.frame.mean(1) - result = self.frame.apply(np.mean, axis=1) - assert_series_equal(result, expected) - - def test_apply_differently_indexed(self): - df = DataFrame(np.random.randn(20, 10)) - - result0 = df.apply(Series.describe, axis=0) - expected0 = DataFrame(dict((i, v.describe()) - for i, v in compat.iteritems(df)), - columns=df.columns) - assert_frame_equal(result0, expected0) - - result1 = df.apply(Series.describe, axis=1) - expected1 = DataFrame(dict((i, v.describe()) - for i, v in compat.iteritems(df.T)), - columns=df.index).T - assert_frame_equal(result1, expected1) - - def test_apply_modify_traceback(self): - data = DataFrame({'A': ['foo', 'foo', 'foo', 'foo', - 'bar', 'bar', 'bar', 'bar', - 'foo', 'foo', 'foo'], - 'B': ['one', 'one', 'one', 'two', - 'one', 'one', 'one', 'two', - 'two', 'two', 'one'], - 'C': ['dull', 'dull', 'shiny', 'dull', - 'dull', 'shiny', 'shiny', 'dull', - 'shiny', 'shiny', 'shiny'], - 'D': np.random.randn(11), - 'E': np.random.randn(11), - 'F': np.random.randn(11)}) - - data.loc[4,'C'] = np.nan - - def transform(row): - if row['C'].startswith('shin') and row['A'] == 'foo': - row['D'] = 7 - return row - - def transform2(row): - if (notnull(row['C']) and row['C'].startswith('shin') - and row['A'] == 'foo'): - row['D'] = 7 - return row - - try: - transformed = data.apply(transform, axis=1) - except AttributeError as e: - self.assertEqual(len(e.args), 2) - self.assertEqual(e.args[1], 'occurred at index 4') - self.assertEqual(e.args[0], "'float' object has no attribute 'startswith'") - - def test_apply_bug(self): - - # GH 6125 - import datetime - positions = pd.DataFrame([[1, 'ABC0', 50], [1, 'YUM0', 20], - [1, 'DEF0', 20], [2, 'ABC1', 50], - [2, 'YUM1', 20], [2, 'DEF1', 20]], - columns=['a', 'market', 'position']) - def f(r): - return r['market'] - expected = positions.apply(f, axis=1) - - positions = DataFrame([[datetime.datetime(2013, 1, 1), 'ABC0', 50], - [datetime.datetime(2013, 1, 2), 'YUM0', 20], - [datetime.datetime(2013, 1, 3), 'DEF0', 20], - [datetime.datetime(2013, 1, 4), 'ABC1', 50], - [datetime.datetime(2013, 1, 5), 'YUM1', 20], - [datetime.datetime(2013, 1, 6), 'DEF1', 20]], - columns=['a', 'market', 'position']) - result = positions.apply(f, axis=1) - assert_series_equal(result,expected) - - def test_swapaxes(self): - df = DataFrame(np.random.randn(10, 5)) - assert_frame_equal(df.T, df.swapaxes(0, 1)) - assert_frame_equal(df.T, df.swapaxes(1, 0)) - assert_frame_equal(df, df.swapaxes(0, 0)) - self.assertRaises(ValueError, df.swapaxes, 2, 5) - - def test_apply_convert_objects(self): - data = DataFrame({'A': ['foo', 'foo', 'foo', 'foo', - 'bar', 'bar', 'bar', 'bar', - 'foo', 'foo', 'foo'], - 'B': ['one', 'one', 'one', 'two', - 'one', 'one', 'one', 'two', - 'two', 'two', 'one'], - 'C': ['dull', 'dull', 'shiny', 'dull', - 'dull', 'shiny', 'shiny', 'dull', - 'shiny', 'shiny', 'shiny'], - 'D': np.random.randn(11), - 'E': np.random.randn(11), - 'F': np.random.randn(11)}) - - result = data.apply(lambda x: x, axis=1) - assert_frame_equal(result._convert(datetime=True), data) - - def test_apply_attach_name(self): - result = self.frame.apply(lambda x: x.name) - expected = Series(self.frame.columns, index=self.frame.columns) - assert_series_equal(result, expected) - - result = self.frame.apply(lambda x: x.name, axis=1) - expected = Series(self.frame.index, index=self.frame.index) - assert_series_equal(result, expected) - - # non-reductions - result = self.frame.apply(lambda x: np.repeat(x.name, len(x))) - expected = DataFrame(np.tile(self.frame.columns, - (len(self.frame.index), 1)), - index=self.frame.index, - columns=self.frame.columns) - assert_frame_equal(result, expected) - - result = self.frame.apply(lambda x: np.repeat(x.name, len(x)), - axis=1) - 
expected = DataFrame(np.tile(self.frame.index, - (len(self.frame.columns), 1)).T, - index=self.frame.index, - columns=self.frame.columns) - assert_frame_equal(result, expected) - - def test_apply_multi_index(self): - s = DataFrame([[1,2], [3,4], [5,6]]) - s.index = MultiIndex.from_arrays([['a','a','b'], ['c','d','d']]) - s.columns = ['col1','col2'] - res = s.apply(lambda x: Series({'min': min(x), 'max': max(x)}), 1) - tm.assertIsInstance(res.index, MultiIndex) - - def test_apply_dict(self): - - # GH 8735 - A = DataFrame([['foo', 'bar'], ['spam', 'eggs']]) - A_dicts = pd.Series([dict([(0, 'foo'), (1, 'spam')]), - dict([(0, 'bar'), (1, 'eggs')])]) - B = DataFrame([[0, 1], [2, 3]]) - B_dicts = pd.Series([dict([(0, 0), (1, 2)]), dict([(0, 1), (1, 3)])]) - fn = lambda x: x.to_dict() - - for df, dicts in [(A, A_dicts), (B, B_dicts)]: - reduce_true = df.apply(fn, reduce=True) - reduce_false = df.apply(fn, reduce=False) - reduce_none = df.apply(fn, reduce=None) - - assert_series_equal(reduce_true, dicts) - assert_frame_equal(reduce_false, df) - assert_series_equal(reduce_none, dicts) - - def test_applymap(self): - applied = self.frame.applymap(lambda x: x * 2) - assert_frame_equal(applied, self.frame * 2) - result = self.frame.applymap(type) - - # GH #465, function returning tuples - result = self.frame.applymap(lambda x: (x, x)) - tm.assertIsInstance(result['A'][0], tuple) - - # GH 2909, object conversion to float in constructor? 
- df = DataFrame(data=[1,'a']) - result = df.applymap(lambda x: x) - self.assertEqual(result.dtypes[0], object) - - df = DataFrame(data=[1.,'a']) - result = df.applymap(lambda x: x) - self.assertEqual(result.dtypes[0], object) - - # GH2786 - df = DataFrame(np.random.random((3,4))) - df2 = df.copy() - cols = ['a','a','a','a'] - df.columns = cols - - expected = df2.applymap(str) - expected.columns = cols - result = df.applymap(str) - assert_frame_equal(result,expected) - - # datetime/timedelta - df['datetime'] = Timestamp('20130101') - df['timedelta'] = Timedelta('1 min') - result = df.applymap(str) - for f in ['datetime','timedelta']: - self.assertEqual(result.loc[0,f],str(df.loc[0,f])) - - def test_filter(self): - # items - filtered = self.frame.filter(['A', 'B', 'E']) - self.assertEqual(len(filtered.columns), 2) - self.assertNotIn('E', filtered) - - filtered = self.frame.filter(['A', 'B', 'E'], axis='columns') - self.assertEqual(len(filtered.columns), 2) - self.assertNotIn('E', filtered) - - # other axis - idx = self.frame.index[0:4] - filtered = self.frame.filter(idx, axis='index') - expected = self.frame.reindex(index=idx) - assert_frame_equal(filtered, expected) - - # like - fcopy = self.frame.copy() - fcopy['AA'] = 1 - - filtered = fcopy.filter(like='A') - self.assertEqual(len(filtered.columns), 2) - self.assertIn('AA', filtered) - - # like with ints in column names - df = DataFrame(0., index=[0, 1, 2], columns=[0, 1, '_A', '_B']) - filtered = df.filter(like='_') - self.assertEqual(len(filtered.columns), 2) - - # regex with ints in column names - # from PR #10384 - df = DataFrame(0., index=[0, 1, 2], columns=['A1', 1, 'B', 2, 'C']) - expected = DataFrame(0., index=[0, 1, 2], columns=pd.Index([1, 2], dtype=object)) - filtered = df.filter(regex='^[0-9]+$') - assert_frame_equal(filtered, expected) - - expected = DataFrame(0., index=[0, 1, 2], columns=[0, '0', 1, '1']) - filtered = expected.filter(regex='^[0-9]+$') # shouldn't remove anything - 
assert_frame_equal(filtered, expected) - - # pass in None - with assertRaisesRegexp(TypeError, 'Must pass'): - self.frame.filter(items=None) - - # objects - filtered = self.mixed_frame.filter(like='foo') - self.assertIn('foo', filtered) - - # unicode columns, won't ascii-encode - df = self.frame.rename(columns={'B': u('\u2202')}) - filtered = df.filter(like='C') - self.assertTrue('C' in filtered) - - def test_filter_regex_search(self): - fcopy = self.frame.copy() - fcopy['AA'] = 1 - - # regex - filtered = fcopy.filter(regex='[A]+') - self.assertEqual(len(filtered.columns), 2) - self.assertIn('AA', filtered) - - # doesn't have to be at beginning - df = DataFrame({'aBBa': [1, 2], - 'BBaBB': [1, 2], - 'aCCa': [1, 2], - 'aCCaBB': [1, 2]}) - - result = df.filter(regex='BB') - exp = df[[x for x in df.columns if 'BB' in x]] - assert_frame_equal(result, exp) - - def test_filter_corner(self): - empty = DataFrame() - - result = empty.filter([]) - assert_frame_equal(result, empty) - - result = empty.filter(like='foo') - assert_frame_equal(result, empty) - - def test_select(self): - f = lambda x: x.weekday() == 2 - result = self.tsframe.select(f, axis=0) - expected = self.tsframe.reindex( - index=self.tsframe.index[[f(x) for x in self.tsframe.index]]) - assert_frame_equal(result, expected) - - result = self.frame.select(lambda x: x in ('B', 'D'), axis=1) - expected = self.frame.reindex(columns=['B', 'D']) - - assert_frame_equal(result, expected, check_names=False) # TODO should reindex check_names? 
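The label-filtering behavior exercised by the removed `test_filter*` cases above reduces to three matching modes on column labels (`items`, `like`, `regex`). A minimal standalone sketch (illustrative only, not part of this patch):

```python
import pandas as pd

# mirrors the aBBa/BBaBB-style frame used in test_filter_regex_search
df = pd.DataFrame(0.0, index=[0, 1],
                  columns=["aBBa", "BBaBB", "aCCa", "aCCaBB"])

# regex= uses re.search, so the pattern may match anywhere in the label
assert list(df.filter(regex="BB").columns) == ["aBBa", "BBaBB", "aCCaBB"]
# like= is a plain substring test
assert list(df.filter(like="CC").columns) == ["aCCa", "aCCaBB"]
# items= keeps only the labels actually present, in the order given
assert list(df.filter(items=["aCCa", "missing"]).columns) == ["aCCa"]
```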
- - def test_reorder_levels(self): - index = MultiIndex(levels=[['bar'], ['one', 'two', 'three'], [0, 1]], - labels=[[0, 0, 0, 0, 0, 0], - [0, 1, 2, 0, 1, 2], - [0, 1, 0, 1, 0, 1]], - names=['L0', 'L1', 'L2']) - df = DataFrame({'A': np.arange(6), 'B': np.arange(6)}, index=index) - - # no change, position - result = df.reorder_levels([0, 1, 2]) - assert_frame_equal(df, result) - - # no change, labels - result = df.reorder_levels(['L0', 'L1', 'L2']) - assert_frame_equal(df, result) - - # rotate, position - result = df.reorder_levels([1, 2, 0]) - e_idx = MultiIndex(levels=[['one', 'two', 'three'], [0, 1], ['bar']], - labels=[[0, 1, 2, 0, 1, 2], - [0, 1, 0, 1, 0, 1], - [0, 0, 0, 0, 0, 0]], - names=['L1', 'L2', 'L0']) - expected = DataFrame({'A': np.arange(6), 'B': np.arange(6)}, - index=e_idx) - assert_frame_equal(result, expected) - - result = df.reorder_levels([0, 0, 0]) - e_idx = MultiIndex(levels=[['bar'], ['bar'], ['bar']], - labels=[[0, 0, 0, 0, 0, 0], - [0, 0, 0, 0, 0, 0], - [0, 0, 0, 0, 0, 0]], - names=['L0', 'L0', 'L0']) - expected = DataFrame({'A': np.arange(6), 'B': np.arange(6)}, - index=e_idx) - assert_frame_equal(result, expected) - - result = df.reorder_levels(['L0', 'L0', 'L0']) - assert_frame_equal(result, expected) - - def test_sort_values(self): - - # API for 9816 - - # sort_index - frame = DataFrame(np.arange(16).reshape(4, 4), index=[1, 2, 3, 4], - columns=['A', 'B', 'C', 'D']) - - # 9816 deprecated - with tm.assert_produces_warning(FutureWarning): - frame.sort(columns='A') - with tm.assert_produces_warning(FutureWarning): - frame.sort() - - unordered = frame.ix[[3, 2, 4, 1]] - expected = unordered.sort_index() - - result = unordered.sort_index(axis=0) - assert_frame_equal(result, expected) - - unordered = frame.ix[:, [2, 1, 3, 0]] - expected = unordered.sort_index(axis=1) - - result = unordered.sort_index(axis=1) - assert_frame_equal(result, expected) - assert_frame_equal(result, expected) - - # sortlevel - mi = MultiIndex.from_tuples([[1, 1, 3], [1, 1, 1]], names=list('ABC')) - df = DataFrame([[1, 2], [3, 4]], mi) - - result = df.sort_index(level='A', sort_remaining=False) - expected = df.sortlevel('A', sort_remaining=False) - assert_frame_equal(result, expected) - - df = df.T - result = df.sort_index(level='A', axis=1, sort_remaining=False) - expected = df.sortlevel('A', axis=1, sort_remaining=False) - assert_frame_equal(result, expected) - - # MI sort, but no by - mi = MultiIndex.from_tuples([[1, 1, 3], [1, 1, 1]], names=list('ABC')) - df = DataFrame([[1, 2], [3, 4]], mi) - result = df.sort_index(sort_remaining=False) - expected = df.sort_index() - assert_frame_equal(result, expected) - - def test_sort_index(self): - frame = DataFrame(np.arange(16).reshape(4, 4), index=[1, 2, 3, 4], - columns=['A', 'B', 'C', 'D']) - - # axis=0 - unordered = frame.ix[[3, 2, 4, 1]] - sorted_df = unordered.sort_index(axis=0) - expected = frame - assert_frame_equal(sorted_df, expected) - - sorted_df = unordered.sort_index(ascending=False) - expected = frame[::-1] - assert_frame_equal(sorted_df, expected) - - # axis=1 - unordered = frame.ix[:, ['D', 'B', 'C', 'A']] - sorted_df = unordered.sort_index(axis=1) - expected = frame - assert_frame_equal(sorted_df, expected) - - sorted_df = unordered.sort_index(axis=1, ascending=False) - expected = frame.ix[:, ::-1] - assert_frame_equal(sorted_df, expected) - - # by column - sorted_df = frame.sort_values(by='A') - indexer = frame['A'].argsort().values - expected = frame.ix[frame.index[indexer]] - assert_frame_equal(sorted_df, expected) - - sorted_df = frame.sort_values(by='A', ascending=False) - indexer = indexer[::-1] - expected = frame.ix[frame.index[indexer]] - assert_frame_equal(sorted_df, expected) - - sorted_df = frame.sort_values(by='A', ascending=False) - assert_frame_equal(sorted_df, expected) - - # GH4839 - sorted_df = frame.sort_values(by=['A'], ascending=[False]) - assert_frame_equal(sorted_df, expected) - - # check for now - sorted_df = frame.sort_values(by='A') - 
assert_frame_equal(sorted_df, expected[::-1]) - expected = frame.sort_values(by='A') - assert_frame_equal(sorted_df, expected) - - expected = frame.sort_values(by=['A', 'B'], ascending=False) - sorted_df = frame.sort_values(by=['A', 'B']) - assert_frame_equal(sorted_df, expected[::-1]) - - self.assertRaises(ValueError, lambda : frame.sort_values(by=['A','B'], axis=2, inplace=True)) - - msg = 'When sorting by column, axis must be 0' - with assertRaisesRegexp(ValueError, msg): - frame.sort_values(by='A', axis=1) - - msg = r'Length of ascending \(5\) != length of by \(2\)' - with assertRaisesRegexp(ValueError, msg): - frame.sort_values(by=['A', 'B'], axis=0, ascending=[True] * 5) - - def test_sort_index_categorical_index(self): - - df = DataFrame({'A' : np.arange(6,dtype='int64'), - 'B' : Series(list('aabbca')).astype('category',categories=list('cab')) }).set_index('B') - - result = df.sort_index() - expected = df.iloc[[4,0,1,5,2,3]] - assert_frame_equal(result, expected) - - result = df.sort_index(ascending=False) - expected = df.iloc[[3,2,5,1,0,4]] - assert_frame_equal(result, expected) - - def test_sort_nan(self): - # GH3917 - nan = np.nan - df = DataFrame({'A': [1, 2, nan, 1, 6, 8, 4], - 'B': [9, nan, 5, 2, 5, 4, 5]}) - - # sort one column only - expected = DataFrame( - {'A': [nan, 1, 1, 2, 4, 6, 8], - 'B': [5, 9, 2, nan, 5, 5, 4]}, - index=[2, 0, 3, 1, 6, 4, 5]) - sorted_df = df.sort_values(['A'], na_position='first') - assert_frame_equal(sorted_df, expected) - - expected = DataFrame( - {'A': [nan, 8, 6, 4, 2, 1, 1], - 'B': [5, 4, 5, 5, nan, 9, 2]}, - index=[2, 5, 4, 6, 1, 0, 3]) - sorted_df = df.sort_values(['A'], na_position='first', ascending=False) - assert_frame_equal(sorted_df, expected) - - # na_position='last', order - expected = DataFrame( - {'A': [1, 1, 2, 4, 6, 8, nan], - 'B': [2, 9, nan, 5, 5, 4, 5]}, - index=[3, 0, 1, 6, 4, 5, 2]) - sorted_df = df.sort_values(['A','B']) - assert_frame_equal(sorted_df, expected) - - # na_position='first', order - 
expected = DataFrame( - {'A': [nan, 1, 1, 2, 4, 6, 8], - 'B': [5, 2, 9, nan, 5, 5, 4]}, - index=[2, 3, 0, 1, 6, 4, 5]) - sorted_df = df.sort_values(['A','B'], na_position='first') - assert_frame_equal(sorted_df, expected) - - # na_position='first', not order - expected = DataFrame( - {'A': [nan, 1, 1, 2, 4, 6, 8], - 'B': [5, 9, 2, nan, 5, 5, 4]}, - index=[2, 0, 3, 1, 6, 4, 5]) - sorted_df = df.sort_values(['A','B'], ascending=[1,0], na_position='first') - assert_frame_equal(sorted_df, expected) - - # na_position='last', not order - expected = DataFrame( - {'A': [8, 6, 4, 2, 1, 1, nan], - 'B': [4, 5, 5, nan, 2, 9, 5]}, - index=[5, 4, 6, 1, 3, 0, 2]) - sorted_df = df.sort_values(['A','B'], ascending=[0,1], na_position='last') - assert_frame_equal(sorted_df, expected) - - # Test DataFrame with nan label - df = DataFrame({'A': [1, 2, nan, 1, 6, 8, 4], - 'B': [9, nan, 5, 2, 5, 4, 5]}, - index = [1, 2, 3, 4, 5, 6, nan]) - - # NaN label, ascending=True, na_position='last' - sorted_df = df.sort_index(kind='quicksort', ascending=True, na_position='last') - expected = DataFrame({'A': [1, 2, nan, 1, 6, 8, 4], - 'B': [9, nan, 5, 2, 5, 4, 5]}, - index = [1, 2, 3, 4, 5, 6, nan]) - assert_frame_equal(sorted_df, expected) - - # NaN label, ascending=True, na_position='first' - sorted_df = df.sort_index(na_position='first') - expected = DataFrame({'A': [4, 1, 2, nan, 1, 6, 8], - 'B': [5, 9, nan, 5, 2, 5, 4]}, - index = [nan, 1, 2, 3, 4, 5, 6]) - assert_frame_equal(sorted_df, expected) - - # NaN label, ascending=False, na_position='last' - sorted_df = df.sort_index(kind='quicksort', ascending=False) - expected = DataFrame({'A': [8, 6, 1, nan, 2, 1, 4], - 'B': [4, 5, 2, 5, nan, 9, 5]}, - index = [6, 5, 4, 3, 2, 1, nan]) - assert_frame_equal(sorted_df, expected) - - # NaN label, ascending=False, na_position='first' - sorted_df = df.sort_index(kind='quicksort', ascending=False, na_position='first') - expected = DataFrame({'A': [4, 8, 6, 1, nan, 2, 1], - 'B': [5, 4, 5, 2, 5, nan, 9]}, - 
index = [nan, 6, 5, 4, 3, 2, 1]) - assert_frame_equal(sorted_df, expected) - - def test_stable_descending_sort(self): - # GH #6399 - df = DataFrame([[2, 'first'], [2, 'second'], [1, 'a'], [1, 'b']], - columns=['sort_col', 'order']) - sorted_df = df.sort_values(by='sort_col', kind='mergesort', - ascending=False) - assert_frame_equal(df, sorted_df) - - def test_stable_descending_multicolumn_sort(self): - nan = np.nan - df = DataFrame({'A': [1, 2, nan, 1, 6, 8, 4], - 'B': [9, nan, 5, 2, 5, 4, 5]}) - # test stable mergesort - expected = DataFrame( - {'A': [nan, 8, 6, 4, 2, 1, 1], - 'B': [5, 4, 5, 5, nan, 2, 9]}, - index=[2, 5, 4, 6, 1, 3, 0]) - sorted_df = df.sort_values(['A','B'], ascending=[0,1], na_position='first', - kind='mergesort') - assert_frame_equal(sorted_df, expected) - - expected = DataFrame( - {'A': [nan, 8, 6, 4, 2, 1, 1], - 'B': [5, 4, 5, 5, nan, 9, 2]}, - index=[2, 5, 4, 6, 1, 0, 3]) - sorted_df = df.sort_values(['A','B'], ascending=[0,0], na_position='first', - kind='mergesort') - assert_frame_equal(sorted_df, expected) - - def test_sort_index_multicolumn(self): - import random - A = np.arange(5).repeat(20) - B = np.tile(np.arange(5), 20) - random.shuffle(A) - random.shuffle(B) - frame = DataFrame({'A': A, 'B': B, - 'C': np.random.randn(100)}) - - # use .sort_values #9816 - with tm.assert_produces_warning(FutureWarning): - frame.sort_index(by=['A', 'B']) - result = frame.sort_values(by=['A', 'B']) - indexer = np.lexsort((frame['B'], frame['A'])) - expected = frame.take(indexer) - assert_frame_equal(result, expected) - - # use .sort_values #9816 - with tm.assert_produces_warning(FutureWarning): - frame.sort_index(by=['A', 'B'], ascending=False) - result = frame.sort_values(by=['A', 'B'], ascending=False) - indexer = np.lexsort((frame['B'].rank(ascending=False), - frame['A'].rank(ascending=False))) - expected = frame.take(indexer) - assert_frame_equal(result, expected) - - # use .sort_values #9816 - with tm.assert_produces_warning(FutureWarning): - 
frame.sort_index(by=['B', 'A']) - result = frame.sort_values(by=['B', 'A']) - indexer = np.lexsort((frame['A'], frame['B'])) - expected = frame.take(indexer) - assert_frame_equal(result, expected) - - def test_sort_index_inplace(self): - frame = DataFrame(np.random.randn(4, 4), index=[1, 2, 3, 4], - columns=['A', 'B', 'C', 'D']) - - # axis=0 - unordered = frame.ix[[3, 2, 4, 1]] - a_id = id(unordered['A']) - df = unordered.copy() - df.sort_index(inplace=True) - expected = frame - assert_frame_equal(df, expected) - self.assertNotEqual(a_id, id(df['A'])) - - df = unordered.copy() - df.sort_index(ascending=False, inplace=True) - expected = frame[::-1] - assert_frame_equal(df, expected) - - # axis=1 - unordered = frame.ix[:, ['D', 'B', 'C', 'A']] - df = unordered.copy() - df.sort_index(axis=1, inplace=True) - expected = frame - assert_frame_equal(df, expected) - - df = unordered.copy() - df.sort_index(axis=1, ascending=False, inplace=True) - expected = frame.ix[:, ::-1] - assert_frame_equal(df, expected) - - def test_sort_index_different_sortorder(self): - A = np.arange(20).repeat(5) - B = np.tile(np.arange(5), 20) - - indexer = np.random.permutation(100) - A = A.take(indexer) - B = B.take(indexer) - - df = DataFrame({'A': A, 'B': B, - 'C': np.random.randn(100)}) - - # use .sort_values #9816 - with tm.assert_produces_warning(FutureWarning): - df.sort_index(by=['A', 'B'], ascending=[1, 0]) - result = df.sort_values(by=['A', 'B'], ascending=[1, 0]) - - ex_indexer = np.lexsort((df.B.max() - df.B, df.A)) - expected = df.take(ex_indexer) - assert_frame_equal(result, expected) - - # test with multiindex, too - idf = df.set_index(['A', 'B']) - - result = idf.sort_index(ascending=[1, 0]) - expected = idf.take(ex_indexer) - assert_frame_equal(result, expected) - - # also, Series! 
- result = idf['C'].sort_index(ascending=[1, 0]) - assert_series_equal(result, expected['C']) - - def test_sort_inplace(self): - frame = DataFrame(np.random.randn(4, 4), index=[1, 2, 3, 4], - columns=['A', 'B', 'C', 'D']) - - sorted_df = frame.copy() - sorted_df.sort_values(by='A', inplace=True) - expected = frame.sort_values(by='A') - assert_frame_equal(sorted_df, expected) - - sorted_df = frame.copy() - sorted_df.sort_values(by='A', ascending=False, inplace=True) - expected = frame.sort_values(by='A', ascending=False) - assert_frame_equal(sorted_df, expected) - - sorted_df = frame.copy() - sorted_df.sort_values(by=['A', 'B'], ascending=False, inplace=True) - expected = frame.sort_values(by=['A', 'B'], ascending=False) - assert_frame_equal(sorted_df, expected) - - def test_sort_index_duplicates(self): - - ### with 9816, these are all translated to .sort_values - - df = DataFrame([lrange(5,9), lrange(4)], - columns=['a', 'a', 'b', 'b']) - - with assertRaisesRegexp(ValueError, 'duplicate'): - # use .sort_values #9816 - with tm.assert_produces_warning(FutureWarning): - df.sort_index(by='a') - with assertRaisesRegexp(ValueError, 'duplicate'): - df.sort_values(by='a') - - with assertRaisesRegexp(ValueError, 'duplicate'): - # use .sort_values #9816 - with tm.assert_produces_warning(FutureWarning): - df.sort_index(by=['a']) - with assertRaisesRegexp(ValueError, 'duplicate'): - df.sort_values(by=['a']) - - with assertRaisesRegexp(ValueError, 'duplicate'): - # use .sort_values #9816 - with tm.assert_produces_warning(FutureWarning): - # multi-column 'by' is separate codepath - df.sort_index(by=['a', 'b']) - with assertRaisesRegexp(ValueError, 'duplicate'): - # multi-column 'by' is separate codepath - df.sort_values(by=['a', 'b']) - - # with multi-index - # GH4370 - df = DataFrame(np.random.randn(4,2),columns=MultiIndex.from_tuples([('a',0),('a',1)])) - with assertRaisesRegexp(ValueError, 'levels'): - # use .sort_values #9816 - with tm.assert_produces_warning(FutureWarning): 
- df.sort_index(by='a') - with assertRaisesRegexp(ValueError, 'levels'): - df.sort_values(by='a') - - # convert tuples to a list of tuples - # use .sort_values #9816 - with tm.assert_produces_warning(FutureWarning): - df.sort_index(by=[('a',1)]) - expected = df.sort_values(by=[('a',1)]) - - # use .sort_values #9816 - with tm.assert_produces_warning(FutureWarning): - df.sort_index(by=('a',1)) - result = df.sort_values(by=('a',1)) - assert_frame_equal(result, expected) - - def test_sortlevel(self): - mi = MultiIndex.from_tuples([[1, 1, 3], [1, 1, 1]], names=list('ABC')) - df = DataFrame([[1, 2], [3, 4]], mi) - res = df.sortlevel('A', sort_remaining=False) - assert_frame_equal(df, res) - - res = df.sortlevel(['A', 'B'], sort_remaining=False) - assert_frame_equal(df, res) - - def test_sort_datetimes(self): - - # GH 3461, argsort / lexsort differences for a datetime column - df = DataFrame(['a','a','a','b','c','d','e','f','g'], - columns=['A'], - index=date_range('20130101',periods=9)) - dts = [Timestamp(x) - for x in ['2004-02-11','2004-01-21','2004-01-26', - '2005-09-20','2010-10-04','2009-05-12', - '2008-11-12','2010-09-28','2010-09-28']] - df['B'] = dts[::2] + dts[1::2] - df['C'] = 2. - df['A1'] = 3. - - df1 = df.sort_values(by='A') - df2 = df.sort_values(by=['A']) - assert_frame_equal(df1,df2) - - df1 = df.sort_values(by='B') - df2 = df.sort_values(by=['B']) - assert_frame_equal(df1,df2) - - def test_frame_column_inplace_sort_exception(self): - s = self.frame['A'] - with assertRaisesRegexp(ValueError, "This Series is a view"): - s.sort_values(inplace=True) - - cp = s.copy() - cp.sort_values() # it works! 
- - def test_combine_first(self): - # disjoint - head, tail = self.frame[:5], self.frame[5:] - - combined = head.combine_first(tail) - reordered_frame = self.frame.reindex(combined.index) - assert_frame_equal(combined, reordered_frame) - self.assertTrue(tm.equalContents(combined.columns, self.frame.columns)) - assert_series_equal(combined['A'], reordered_frame['A']) - - # same index - fcopy = self.frame.copy() - fcopy['A'] = 1 - del fcopy['C'] - - fcopy2 = self.frame.copy() - fcopy2['B'] = 0 - del fcopy2['D'] - - combined = fcopy.combine_first(fcopy2) - - self.assertTrue((combined['A'] == 1).all()) - assert_series_equal(combined['B'], fcopy['B']) - assert_series_equal(combined['C'], fcopy2['C']) - assert_series_equal(combined['D'], fcopy['D']) - - # overlap - head, tail = reordered_frame[:10].copy(), reordered_frame - head['A'] = 1 - - combined = head.combine_first(tail) - self.assertTrue((combined['A'][:10] == 1).all()) - - # reverse overlap - tail['A'][:10] = 0 - combined = tail.combine_first(head) - self.assertTrue((combined['A'][:10] == 0).all()) - - # no overlap - f = self.frame[:10] - g = self.frame[10:] - combined = f.combine_first(g) - assert_series_equal(combined['A'].reindex(f.index), f['A']) - assert_series_equal(combined['A'].reindex(g.index), g['A']) - - # corner cases - comb = self.frame.combine_first(self.empty) - assert_frame_equal(comb, self.frame) - - comb = self.empty.combine_first(self.frame) - assert_frame_equal(comb, self.frame) - - comb = self.frame.combine_first(DataFrame(index=["faz", "boo"])) - self.assertTrue("faz" in comb.index) - - # #2525 - df = DataFrame({'a': [1]}, index=[datetime(2012, 1, 1)]) - df2 = DataFrame({}, columns=['b']) - result = df.combine_first(df2) - self.assertTrue('b' in result) - - def test_combine_first_mixed_bug(self): - idx = Index(['a', 'b', 'c', 'e']) - ser1 = Series([5.0, -9.0, 4.0, 100.], index=idx) - ser2 = Series(['a', 'b', 'c', 'e'], index=idx) - ser3 = Series([12, 4, 5, 97], index=idx) - - frame1 = 
DataFrame({"col0": ser1, - "col2": ser2, - "col3": ser3}) - - idx = Index(['a', 'b', 'c', 'f']) - ser1 = Series([5.0, -9.0, 4.0, 100.], index=idx) - ser2 = Series(['a', 'b', 'c', 'f'], index=idx) - ser3 = Series([12, 4, 5, 97], index=idx) - - frame2 = DataFrame({"col1": ser1, - "col2": ser2, - "col5": ser3}) - - combined = frame1.combine_first(frame2) - self.assertEqual(len(combined.columns), 5) - - # gh 3016 (same as in update) - df = DataFrame([[1.,2.,False, True],[4.,5.,True,False]], - columns=['A','B','bool1','bool2']) - - other = DataFrame([[45,45]],index=[0],columns=['A','B']) - result = df.combine_first(other) - assert_frame_equal(result, df) - - df.ix[0,'A'] = np.nan - result = df.combine_first(other) - df.ix[0,'A'] = 45 - assert_frame_equal(result, df) - - # doc example - df1 = DataFrame({'A' : [1., np.nan, 3., 5., np.nan], - 'B' : [np.nan, 2., 3., np.nan, 6.]}) - - df2 = DataFrame({'A' : [5., 2., 4., np.nan, 3., 7.], - 'B' : [np.nan, np.nan, 3., 4., 6., 8.]}) - - result = df1.combine_first(df2) - expected = DataFrame({ 'A' : [1,2,3,5,3,7.], 'B' : [np.nan,2,3,4,6,8] }) - assert_frame_equal(result,expected) - - # GH3552, return object dtype with bools - df1 = DataFrame([[np.nan, 3.,True], [-4.6, np.nan, True], [np.nan, 7., False]]) - df2 = DataFrame([[-42.6, np.nan, True], [-5., 1.6, False]], index=[1, 2]) - - result = df1.combine_first(df2)[2] - expected = Series([True, True, False], name=2) - assert_series_equal(result, expected) - - # GH 3593, converting datetime64[ns] incorrecly - df0 = DataFrame({"a":[datetime(2000, 1, 1), datetime(2000, 1, 2), datetime(2000, 1, 3)]}) - df1 = DataFrame({"a":[None, None, None]}) - df2 = df1.combine_first(df0) - assert_frame_equal(df2, df0) - - df2 = df0.combine_first(df1) - assert_frame_equal(df2, df0) - - df0 = DataFrame({"a":[datetime(2000, 1, 1), datetime(2000, 1, 2), datetime(2000, 1, 3)]}) - df1 = DataFrame({"a":[datetime(2000, 1, 2), None, None]}) - df2 = df1.combine_first(df0) - result = df0.copy() - 
result.iloc[0,:] = df1.iloc[0,:] - assert_frame_equal(df2, result) - - df2 = df0.combine_first(df1) - assert_frame_equal(df2, df0) - - def test_update(self): - df = DataFrame([[1.5, nan, 3.], - [1.5, nan, 3.], - [1.5, nan, 3], - [1.5, nan, 3]]) - - other = DataFrame([[3.6, 2., np.nan], - [np.nan, np.nan, 7]], index=[1, 3]) - - df.update(other) - - expected = DataFrame([[1.5, nan, 3], - [3.6, 2, 3], - [1.5, nan, 3], - [1.5, nan, 7.]]) - assert_frame_equal(df, expected) - - def test_update_dtypes(self): - - # gh 3016 - df = DataFrame([[1.,2.,False, True],[4.,5.,True,False]], - columns=['A','B','bool1','bool2']) - - other = DataFrame([[45,45]],index=[0],columns=['A','B']) - df.update(other) - - expected = DataFrame([[45.,45.,False, True],[4.,5.,True,False]], - columns=['A','B','bool1','bool2']) - assert_frame_equal(df, expected) - - def test_update_nooverwrite(self): - df = DataFrame([[1.5, nan, 3.], - [1.5, nan, 3.], - [1.5, nan, 3], - [1.5, nan, 3]]) - - other = DataFrame([[3.6, 2., np.nan], - [np.nan, np.nan, 7]], index=[1, 3]) - - df.update(other, overwrite=False) - - expected = DataFrame([[1.5, nan, 3], - [1.5, 2, 3], - [1.5, nan, 3], - [1.5, nan, 3.]]) - assert_frame_equal(df, expected) - - def test_update_filtered(self): - df = DataFrame([[1.5, nan, 3.], - [1.5, nan, 3.], - [1.5, nan, 3], - [1.5, nan, 3]]) - - other = DataFrame([[3.6, 2., np.nan], - [np.nan, np.nan, 7]], index=[1, 3]) - - df.update(other, filter_func=lambda x: x > 2) - - expected = DataFrame([[1.5, nan, 3], - [1.5, nan, 3], - [1.5, nan, 3], - [1.5, nan, 7.]]) - assert_frame_equal(df, expected) - - def test_update_raise(self): - df = DataFrame([[1.5, 1, 3.], - [1.5, nan, 3.], - [1.5, nan, 3], - [1.5, nan, 3]]) - - other = DataFrame([[2., nan], - [nan, 7]], index=[1, 3], columns=[1, 2]) - with assertRaisesRegexp(ValueError, "Data overlaps"): - df.update(other, raise_conflict=True) - - def test_update_from_non_df(self): - d = {'a': Series([1, 2, 3, 4]), 'b': Series([5, 6, 7, 8])} - df = 
DataFrame(d) - - d['a'] = Series([5, 6, 7, 8]) - df.update(d) - - expected = DataFrame(d) - - assert_frame_equal(df, expected) - - d = {'a': [1, 2, 3, 4], 'b': [5, 6, 7, 8]} - df = DataFrame(d) - - d['a'] = [5, 6, 7, 8] - df.update(d) - - expected = DataFrame(d) - - assert_frame_equal(df, expected) - - def test_combineAdd(self): - - with tm.assert_produces_warning(FutureWarning): - # trivial - comb = self.frame.combineAdd(self.frame) - assert_frame_equal(comb, self.frame * 2) - - # more rigorous - a = DataFrame([[1., nan, nan, 2., nan]], - columns=np.arange(5)) - b = DataFrame([[2., 3., nan, 2., 6., nan]], - columns=np.arange(6)) - expected = DataFrame([[3., 3., nan, 4., 6., nan]], - columns=np.arange(6)) - - result = a.combineAdd(b) - assert_frame_equal(result, expected) - result2 = a.T.combineAdd(b.T) - assert_frame_equal(result2, expected.T) - - expected2 = a.combine(b, operator.add, fill_value=0.) - assert_frame_equal(expected, expected2) - - # corner cases - comb = self.frame.combineAdd(self.empty) - assert_frame_equal(comb, self.frame) - - comb = self.empty.combineAdd(self.frame) - assert_frame_equal(comb, self.frame) - - # integer corner case - df1 = DataFrame({'x': [5]}) - df2 = DataFrame({'x': [1]}) - df3 = DataFrame({'x': [6]}) - comb = df1.combineAdd(df2) - assert_frame_equal(comb, df3) - - # mixed type GH2191 - df1 = DataFrame({'A': [1, 2], 'B': [3, 4]}) - df2 = DataFrame({'A': [1, 2], 'C': [5, 6]}) - rs = df1.combineAdd(df2) - xp = DataFrame({'A': [2, 4], 'B': [3, 4.], 'C': [5, 6.]}) - assert_frame_equal(xp, rs) - - # TODO: test integer fill corner? 
- - def test_combineMult(self): - - with tm.assert_produces_warning(FutureWarning): - # trivial - comb = self.frame.combineMult(self.frame) - - assert_frame_equal(comb, self.frame ** 2) - - # corner cases - comb = self.frame.combineMult(self.empty) - assert_frame_equal(comb, self.frame) - - comb = self.empty.combineMult(self.frame) - assert_frame_equal(comb, self.frame) - - def test_combine_generic(self): - df1 = self.frame - df2 = self.frame.ix[:-5, ['A', 'B', 'C']] - - combined = df1.combine(df2, np.add) - combined2 = df2.combine(df1, np.add) - self.assertTrue(combined['D'].isnull().all()) - self.assertTrue(combined2['D'].isnull().all()) - - chunk = combined.ix[:-5, ['A', 'B', 'C']] - chunk2 = combined2.ix[:-5, ['A', 'B', 'C']] - - exp = self.frame.ix[:-5, ['A', 'B', 'C']].reindex_like(chunk) * 2 - assert_frame_equal(chunk, exp) - assert_frame_equal(chunk2, exp) - - def test_clip(self): - median = self.frame.median().median() - - capped = self.frame.clip_upper(median) - self.assertFalse((capped.values > median).any()) - - floored = self.frame.clip_lower(median) - self.assertFalse((floored.values < median).any()) - - double = self.frame.clip(upper=median, lower=median) - self.assertFalse((double.values != median).any()) - - def test_dataframe_clip(self): - - # GH #2747 - df = DataFrame(np.random.randn(1000,2)) - - for lb, ub in [(-1,1),(1,-1)]: - clipped_df = df.clip(lb, ub) - - lb, ub = min(lb,ub), max(ub,lb) - lb_mask = df.values <= lb - ub_mask = df.values >= ub - mask = ~lb_mask & ~ub_mask - self.assertTrue((clipped_df.values[lb_mask] == lb).all() == True) - self.assertTrue((clipped_df.values[ub_mask] == ub).all() == True) - self.assertTrue((clipped_df.values[mask] == df.values[mask]).all() == True) - - def test_clip_against_series(self): - # GH #6966 - - df = DataFrame(np.random.randn(1000, 2)) - lb = Series(np.random.randn(1000)) - ub = lb + 1 - - clipped_df = df.clip(lb, ub, axis=0) - - for i in range(2): - lb_mask = df.iloc[:, i] <= lb - ub_mask = 
df.iloc[:, i] >= ub - mask = ~lb_mask & ~ub_mask - - result = clipped_df.loc[lb_mask, i] - assert_series_equal(result, lb[lb_mask], check_names=False) - self.assertEqual(result.name, i) - - result = clipped_df.loc[ub_mask, i] - assert_series_equal(result, ub[ub_mask], check_names=False) - self.assertEqual(result.name, i) - - assert_series_equal(clipped_df.loc[mask, i], df.loc[mask, i]) - - def test_clip_against_frame(self): - df = DataFrame(np.random.randn(1000, 2)) - lb = DataFrame(np.random.randn(1000, 2)) - ub = lb + 1 - - clipped_df = df.clip(lb, ub) - - lb_mask = df <= lb - ub_mask = df >= ub - mask = ~lb_mask & ~ub_mask - - assert_frame_equal(clipped_df[lb_mask], lb[lb_mask]) - assert_frame_equal(clipped_df[ub_mask], ub[ub_mask]) - assert_frame_equal(clipped_df[mask], df[mask]) - - def test_get_X_columns(self): - # numeric and object columns - - df = DataFrame({'a': [1, 2, 3], - 'b' : [True, False, True], - 'c': ['foo', 'bar', 'baz'], - 'd': [None, None, None], - 'e': [3.14, 0.577, 2.773]}) - - self.assert_numpy_array_equal(df._get_numeric_data().columns, - ['a', 'b', 'e']) - - def test_is_mixed_type(self): - self.assertFalse(self.frame._is_mixed_type) - self.assertTrue(self.mixed_frame._is_mixed_type) - - def test_get_numeric_data(self): - intname = np.dtype(np.int_).name - floatname = np.dtype(np.float_).name - datetime64name = np.dtype('M8[ns]').name - objectname = np.dtype(np.object_).name - - df = DataFrame({'a': 1., 'b': 2, 'c': 'foo', 'f' : Timestamp('20010102')}, - index=np.arange(10)) - result = df.get_dtype_counts() - expected = Series({'int64': 1, 'float64' : 1, datetime64name: 1, objectname : 1}) - result.sort_index() - expected.sort_index() - assert_series_equal(result, expected) - - df = DataFrame({'a': 1., 'b': 2, 'c': 'foo', - 'd' : np.array([1.]*10,dtype='float32'), - 'e' : np.array([1]*10,dtype='int32'), - 'f' : np.array([1]*10,dtype='int16'), - 'g' : Timestamp('20010102')}, - index=np.arange(10)) - - result = df._get_numeric_data() - 
expected = df.ix[:, ['a', 'b','d','e','f']] - assert_frame_equal(result, expected) - - only_obj = df.ix[:, ['c','g']] - result = only_obj._get_numeric_data() - expected = df.ix[:, []] - assert_frame_equal(result, expected) - - df = DataFrame.from_dict({'a':[1,2], 'b':['foo','bar'],'c':[np.pi,np.e]}) - result = df._get_numeric_data() - expected = DataFrame.from_dict({'a':[1,2], 'c':[np.pi,np.e]}) - assert_frame_equal(result, expected) - - df = result.copy() - result = df._get_numeric_data() - expected = df - assert_frame_equal(result, expected) - - def test_bool_describe_in_mixed_frame(self): - df = DataFrame({ - 'string_data': ['a', 'b', 'c', 'd', 'e'], - 'bool_data': [True, True, False, False, False], - 'int_data': [10, 20, 30, 40, 50], - }) - - # Boolean data and integer data is included in .describe() output, string data isn't - self.assert_numpy_array_equal(df.describe().columns, ['bool_data', 'int_data']) - - bool_describe = df.describe()['bool_data'] - - # Both the min and the max values should stay booleans - self.assertEqual(bool_describe['min'].dtype, np.bool_) - self.assertEqual(bool_describe['max'].dtype, np.bool_) - - self.assertFalse(bool_describe['min']) - self.assertTrue(bool_describe['max']) - - # For numeric operations, like mean or median, the values True/False are cast to - # the integer values 1 and 0 - assert_almost_equal(bool_describe['mean'], 0.4) - assert_almost_equal(bool_describe['50%'], 0) - - def test_reduce_mixed_frame(self): - # GH 6806 - df = DataFrame({ - 'bool_data': [True, True, False, False, False], - 'int_data': [10, 20, 30, 40, 50], - 'string_data': ['a', 'b', 'c', 'd', 'e'], - }) - df.reindex(columns=['bool_data', 'int_data', 'string_data']) - test = df.sum(axis=0) - assert_almost_equal(test.values, [2, 150, 'abcde']) - assert_series_equal(test, df.T.sum(axis=1)) - - def test_count(self): - f = lambda s: notnull(s).sum() - self._check_stat_op('count', f, - has_skipna=False, - has_numeric_only=True, - check_dtype=False, - 
check_dates=True) - - # corner case - frame = DataFrame() - ct1 = frame.count(1) - tm.assertIsInstance(ct1, Series) - - ct2 = frame.count(0) - tm.assertIsInstance(ct2, Series) - - # GH #423 - df = DataFrame(index=lrange(10)) - result = df.count(1) - expected = Series(0, index=df.index) - assert_series_equal(result, expected) - - df = DataFrame(columns=lrange(10)) - result = df.count(0) - expected = Series(0, index=df.columns) - assert_series_equal(result, expected) - - df = DataFrame() - result = df.count() - expected = Series(0, index=[]) - assert_series_equal(result, expected) - - def test_sum(self): - self._check_stat_op('sum', np.sum, has_numeric_only=True) - - # mixed types (with upcasting happening) - self._check_stat_op('sum', np.sum, frame=self.mixed_float.astype('float32'), - has_numeric_only=True, check_dtype=False, check_less_precise=True) - - def test_stat_operators_attempt_obj_array(self): - data = { - 'a': [-0.00049987540199591344, -0.0016467257772919831, - 0.00067695870775883013], - 'b': [-0, -0, 0.0], - 'c': [0.00031111847529610595, 0.0014902627951905339, - -0.00094099200035979691] - } - df1 = DataFrame(data, index=['foo', 'bar', 'baz'], - dtype='O') - methods = ['sum', 'mean', 'prod', 'var', 'std', 'skew', 'min', 'max'] - - # GH #676 - df2 = DataFrame({0: [np.nan, 2], 1: [np.nan, 3], - 2: [np.nan, 4]}, dtype=object) - - for df in [df1, df2]: - for meth in methods: - self.assertEqual(df.values.dtype, np.object_) - result = getattr(df, meth)(1) - expected = getattr(df.astype('f8'), meth)(1) - - if not tm._incompat_bottleneck_version(meth): - assert_series_equal(result, expected) - - def test_mean(self): - self._check_stat_op('mean', np.mean, check_dates=True) - - def test_product(self): - self._check_stat_op('product', np.prod) - - def test_median(self): - def wrapper(x): - if isnull(x).any(): - return np.nan - return np.median(x) - - self._check_stat_op('median', wrapper, check_dates=True) - - def test_min(self): - self._check_stat_op('min', np.min, 
check_dates=True) - self._check_stat_op('min', np.min, frame=self.intframe) - - def test_cummin(self): - self.tsframe.ix[5:10, 0] = nan - self.tsframe.ix[10:15, 1] = nan - self.tsframe.ix[15:, 2] = nan - - # axis = 0 - cummin = self.tsframe.cummin() - expected = self.tsframe.apply(Series.cummin) - assert_frame_equal(cummin, expected) - - # axis = 1 - cummin = self.tsframe.cummin(axis=1) - expected = self.tsframe.apply(Series.cummin, axis=1) - assert_frame_equal(cummin, expected) - - # works - df = DataFrame({'A': np.arange(20)}, index=np.arange(20)) - result = df.cummin() - - # fix issue - cummin_xs = self.tsframe.cummin(axis=1) - self.assertEqual(np.shape(cummin_xs), np.shape(self.tsframe)) - - def test_cummax(self): - self.tsframe.ix[5:10, 0] = nan - self.tsframe.ix[10:15, 1] = nan - self.tsframe.ix[15:, 2] = nan - - # axis = 0 - cummax = self.tsframe.cummax() - expected = self.tsframe.apply(Series.cummax) - assert_frame_equal(cummax, expected) - - # axis = 1 - cummax = self.tsframe.cummax(axis=1) - expected = self.tsframe.apply(Series.cummax, axis=1) - assert_frame_equal(cummax, expected) - - # works - df = DataFrame({'A': np.arange(20)}, index=np.arange(20)) - result = df.cummax() - - # fix issue - cummax_xs = self.tsframe.cummax(axis=1) - self.assertEqual(np.shape(cummax_xs), np.shape(self.tsframe)) - - def test_max(self): - self._check_stat_op('max', np.max, check_dates=True) - self._check_stat_op('max', np.max, frame=self.intframe) - - def test_mad(self): - f = lambda x: np.abs(x - x.mean()).mean() - self._check_stat_op('mad', f) - - def test_var_std(self): - alt = lambda x: np.var(x, ddof=1) - self._check_stat_op('var', alt) - - alt = lambda x: np.std(x, ddof=1) - self._check_stat_op('std', alt) - - result = self.tsframe.std(ddof=4) - expected = self.tsframe.apply(lambda x: x.std(ddof=4)) - assert_almost_equal(result, expected) - - result = self.tsframe.var(ddof=4) - expected = self.tsframe.apply(lambda x: x.var(ddof=4)) - assert_almost_equal(result, 
expected) - - arr = np.repeat(np.random.random((1, 1000)), 1000, 0) - result = nanops.nanvar(arr, axis=0) - self.assertFalse((result < 0).any()) - if nanops._USE_BOTTLENECK: - nanops._USE_BOTTLENECK = False - result = nanops.nanvar(arr, axis=0) - self.assertFalse((result < 0).any()) - nanops._USE_BOTTLENECK = True - - def test_numeric_only_flag(self): - # GH #9201 - methods = ['sem', 'var', 'std'] - df1 = DataFrame(np.random.randn(5, 3), columns=['foo', 'bar', 'baz']) - # set one entry to a number in str format - df1.ix[0, 'foo'] = '100' - - df2 = DataFrame(np.random.randn(5, 3), columns=['foo', 'bar', 'baz']) - # set one entry to a non-number str - df2.ix[0, 'foo'] = 'a' - - for meth in methods: - result = getattr(df1, meth)(axis=1, numeric_only=True) - expected = getattr(df1[['bar', 'baz']], meth)(axis=1) - assert_series_equal(expected, result) - - result = getattr(df2, meth)(axis=1, numeric_only=True) - expected = getattr(df2[['bar', 'baz']], meth)(axis=1) - assert_series_equal(expected, result) - - # df1 has all numbers, df2 has a letter inside - self.assertRaises(TypeError, lambda : getattr(df1, meth)(axis=1, numeric_only=False)) - self.assertRaises(TypeError, lambda : getattr(df2, meth)(axis=1, numeric_only=False)) - - def test_sem(self): - alt = lambda x: np.std(x, ddof=1)/np.sqrt(len(x)) - self._check_stat_op('sem', alt) - - result = self.tsframe.sem(ddof=4) - expected = self.tsframe.apply(lambda x: x.std(ddof=4)/np.sqrt(len(x))) - assert_almost_equal(result, expected) - - arr = np.repeat(np.random.random((1, 1000)), 1000, 0) - result = nanops.nansem(arr, axis=0) - self.assertFalse((result < 0).any()) - if nanops._USE_BOTTLENECK: - nanops._USE_BOTTLENECK = False - result = nanops.nansem(arr, axis=0) - self.assertFalse((result < 0).any()) - nanops._USE_BOTTLENECK = True - - def test_skew(self): - tm._skip_if_no_scipy() - from scipy.stats import skew - - def alt(x): - if len(x) < 3: - return np.nan - return skew(x, bias=False) - - self._check_stat_op('skew', 
alt) - - def test_kurt(self): - tm._skip_if_no_scipy() - - from scipy.stats import kurtosis - - def alt(x): - if len(x) < 4: - return np.nan - return kurtosis(x, bias=False) - - self._check_stat_op('kurt', alt) - - index = MultiIndex(levels=[['bar'], ['one', 'two', 'three'], [0, 1]], - labels=[[0, 0, 0, 0, 0, 0], - [0, 1, 2, 0, 1, 2], - [0, 1, 0, 1, 0, 1]]) - df = DataFrame(np.random.randn(6, 3), index=index) - - kurt = df.kurt() - kurt2 = df.kurt(level=0).xs('bar') - assert_series_equal(kurt, kurt2, check_names=False) - self.assertTrue(kurt.name is None) - self.assertEqual(kurt2.name, 'bar') - - def _check_stat_op(self, name, alternative, frame=None, has_skipna=True, - has_numeric_only=False, check_dtype=True, check_dates=False, - check_less_precise=False): - if frame is None: - frame = self.frame - # set some NAs - frame.ix[5:10] = np.nan - frame.ix[15:20, -2:] = np.nan - - f = getattr(frame, name) - - if check_dates: - df = DataFrame({'b': date_range('1/1/2001', periods=2)}) - _f = getattr(df, name) - result = _f() - self.assertIsInstance(result, Series) - - df['a'] = lrange(len(df)) - result = getattr(df, name)() - self.assertIsInstance(result, Series) - self.assertTrue(len(result)) - - if has_skipna: - def skipna_wrapper(x): - nona = x.dropna() - if len(nona) == 0: - return np.nan - return alternative(nona) - - def wrapper(x): - return alternative(x.values) - - result0 = f(axis=0, skipna=False) - result1 = f(axis=1, skipna=False) - assert_series_equal(result0, frame.apply(wrapper), - check_dtype=check_dtype, - check_less_precise=check_less_precise) - assert_series_equal(result1, frame.apply(wrapper, axis=1), - check_dtype=False, - check_less_precise=check_less_precise) # HACK: win32 - else: - skipna_wrapper = alternative - wrapper = alternative - - result0 = f(axis=0) - result1 = f(axis=1) - assert_series_equal(result0, frame.apply(skipna_wrapper), - check_dtype=check_dtype, - check_less_precise=check_less_precise) - if not 
tm._incompat_bottleneck_version(name): - assert_series_equal(result1, frame.apply(skipna_wrapper, axis=1), - check_dtype=False, - check_less_precise=check_less_precise) - - # check dtypes - if check_dtype: - lcd_dtype = frame.values.dtype - self.assertEqual(lcd_dtype, result0.dtype) - self.assertEqual(lcd_dtype, result1.dtype) - - # result = f(axis=1) - # comp = frame.apply(alternative, axis=1).reindex(result.index) - # assert_series_equal(result, comp) - - # bad axis - assertRaisesRegexp(ValueError, 'No axis named 2', f, axis=2) - # make sure works on mixed-type frame - getattr(self.mixed_frame, name)(axis=0) - getattr(self.mixed_frame, name)(axis=1) - - if has_numeric_only: - getattr(self.mixed_frame, name)(axis=0, numeric_only=True) - getattr(self.mixed_frame, name)(axis=1, numeric_only=True) - getattr(self.frame, name)(axis=0, numeric_only=False) - getattr(self.frame, name)(axis=1, numeric_only=False) - - # all NA case - if has_skipna: - all_na = self.frame * np.NaN - r0 = getattr(all_na, name)(axis=0) - r1 = getattr(all_na, name)(axis=1) - if not tm._incompat_bottleneck_version(name): - self.assertTrue(np.isnan(r0).all()) - self.assertTrue(np.isnan(r1).all()) - - def test_mode(self): - df = pd.DataFrame({"A": [12, 12, 11, 12, 19, 11], - "B": [10, 10, 10, np.nan, 3, 4], - "C": [8, 8, 8, 9, 9, 9], - "D": np.arange(6,dtype='int64'), - "E": [8, 8, 1, 1, 3, 3]}) - assert_frame_equal(df[["A"]].mode(), - pd.DataFrame({"A": [12]})) - expected = pd.Series([], dtype='int64', name='D').to_frame() - assert_frame_equal(df[["D"]].mode(), expected) - expected = pd.Series([1, 3, 8], dtype='int64', name='E').to_frame() - assert_frame_equal(df[["E"]].mode(), expected) - assert_frame_equal(df[["A", "B"]].mode(), - pd.DataFrame({"A": [12], "B": [10.]})) - assert_frame_equal(df.mode(), - pd.DataFrame({"A": [12, np.nan, np.nan], - "B": [10, np.nan, np.nan], - "C": [8, 9, np.nan], - "D": [np.nan, np.nan, np.nan], - "E": [1, 3, 8]})) - - # outputs in sorted order - df["C"] = 
list(reversed(df["C"])) - com.pprint_thing(df["C"]) - com.pprint_thing(df["C"].mode()) - a, b = (df[["A", "B", "C"]].mode(), - pd.DataFrame({"A": [12, np.nan], - "B": [10, np.nan], - "C": [8, 9]})) - com.pprint_thing(a) - com.pprint_thing(b) - assert_frame_equal(a, b) - # should work with heterogeneous types - df = pd.DataFrame({"A": np.arange(6,dtype='int64'), - "B": pd.date_range('2011', periods=6), - "C": list('abcdef')}) - exp = pd.DataFrame({"A": pd.Series([], dtype=df["A"].dtype), - "B": pd.Series([], dtype=df["B"].dtype), - "C": pd.Series([], dtype=df["C"].dtype)}) - assert_frame_equal(df.mode(), exp) - - # and also when not empty - df.loc[1, "A"] = 0 - df.loc[4, "B"] = df.loc[3, "B"] - df.loc[5, "C"] = 'e' - exp = pd.DataFrame({"A": pd.Series([0], dtype=df["A"].dtype), - "B": pd.Series([df.loc[3, "B"]], dtype=df["B"].dtype), - "C": pd.Series(['e'], dtype=df["C"].dtype)}) - - assert_frame_equal(df.mode(), exp) - - def test_sum_corner(self): - axis0 = self.empty.sum(0) - axis1 = self.empty.sum(1) - tm.assertIsInstance(axis0, Series) - tm.assertIsInstance(axis1, Series) - self.assertEqual(len(axis0), 0) - self.assertEqual(len(axis1), 0) - - def test_sum_object(self): - values = self.frame.values.astype(int) - frame = DataFrame(values, index=self.frame.index, - columns=self.frame.columns) - deltas = frame * timedelta(1) - deltas.sum() - - def test_sum_bool(self): - # ensure this works, bug report - bools = np.isnan(self.frame) - bools.sum(1) - bools.sum(0) - - def test_mean_corner(self): - # unit test when have object data - the_mean = self.mixed_frame.mean(axis=0) - the_sum = self.mixed_frame.sum(axis=0, numeric_only=True) - self.assertTrue(the_sum.index.equals(the_mean.index)) - self.assertTrue(len(the_mean.index) < len(self.mixed_frame.columns)) - - # xs sum mixed type, just want to know it works... 
- the_mean = self.mixed_frame.mean(axis=1) - the_sum = self.mixed_frame.sum(axis=1, numeric_only=True) - self.assertTrue(the_sum.index.equals(the_mean.index)) - - # take mean of boolean column - self.frame['bool'] = self.frame['A'] > 0 - means = self.frame.mean(0) - self.assertEqual(means['bool'], self.frame['bool'].values.mean()) - - def test_stats_mixed_type(self): - # don't blow up - self.mixed_frame.std(1) - self.mixed_frame.var(1) - self.mixed_frame.mean(1) - self.mixed_frame.skew(1) - - def test_median_corner(self): - def wrapper(x): - if isnull(x).any(): - return np.nan - return np.median(x) - - self._check_stat_op('median', wrapper, frame=self.intframe, - check_dtype=False, check_dates=True) - - def test_round(self): - - # GH 2665 - - # Test that rounding an empty DataFrame does nothing - df = DataFrame() - assert_frame_equal(df, df.round()) - - # Here's the test frame we'll be working with - df = DataFrame( - {'col1': [1.123, 2.123, 3.123], 'col2': [1.234, 2.234, 3.234]}) - - # Default round to integer (i.e. 
decimals=0) - expected_rounded = DataFrame( - {'col1': [1., 2., 3.], 'col2': [1., 2., 3.]}) - assert_frame_equal(df.round(), expected_rounded) - - # Round with an integer - decimals = 2 - expected_rounded = DataFrame( - {'col1': [1.12, 2.12, 3.12], 'col2': [1.23, 2.23, 3.23]}) - assert_frame_equal(df.round(decimals), expected_rounded) - - # This should also work with np.round (since np.round dispatches to - # df.round) - assert_frame_equal(np.round(df, decimals), expected_rounded) - - # Round with a list - round_list = [1, 2] - with self.assertRaises(TypeError): - df.round(round_list) - - # Round with a dictionary - expected_rounded = DataFrame( - {'col1': [1.1, 2.1, 3.1], 'col2': [1.23, 2.23, 3.23]}) - round_dict = {'col1': 1, 'col2': 2} - assert_frame_equal(df.round(round_dict), expected_rounded) - - # Incomplete dict - expected_partially_rounded = DataFrame( - {'col1': [1.123, 2.123, 3.123], 'col2': [1.2, 2.2, 3.2]}) - partial_round_dict = {'col2': 1} - assert_frame_equal( - df.round(partial_round_dict), expected_partially_rounded) - - # Dict with unknown elements - wrong_round_dict = {'col3': 2, 'col2': 1} - assert_frame_equal( - df.round(wrong_round_dict), expected_partially_rounded) - - # float input to `decimals` - non_int_round_dict = {'col1': 1, 'col2': 0.5} - with self.assertRaises(TypeError): - df.round(non_int_round_dict) - - # String input - non_int_round_dict = {'col1': 1, 'col2': 'foo'} - with self.assertRaises(TypeError): - df.round(non_int_round_dict) - - non_int_round_Series = Series(non_int_round_dict) - with self.assertRaises(TypeError): - df.round(non_int_round_Series) - - # List input - non_int_round_dict = {'col1': 1, 'col2': [1, 2]} - with self.assertRaises(TypeError): - df.round(non_int_round_dict) - - non_int_round_Series = Series(non_int_round_dict) - with self.assertRaises(TypeError): - df.round(non_int_round_Series) - - # Non integer Series inputs - non_int_round_Series = Series(non_int_round_dict) - with self.assertRaises(TypeError): - 
df.round(non_int_round_Series) - - non_int_round_Series = Series(non_int_round_dict) - with self.assertRaises(TypeError): - df.round(non_int_round_Series) - - # Negative numbers - negative_round_dict = {'col1': -1, 'col2': -2} - big_df = df * 100 - expected_neg_rounded = DataFrame( - {'col1': [110., 210, 310], 'col2': [100., 200, 300]}) - assert_frame_equal( - big_df.round(negative_round_dict), expected_neg_rounded) - - # nan in Series round - nan_round_Series = Series({'col1': nan, 'col2':1}) - expected_nan_round = DataFrame( - {'col1': [1.123, 2.123, 3.123], 'col2': [1.2, 2.2, 3.2]}) - if sys.version < LooseVersion('2.7'): - # Rounding with decimal is a ValueError in Python < 2.7 - with self.assertRaises(ValueError): - df.round(nan_round_Series) - else: - with self.assertRaises(TypeError): - df.round(nan_round_Series) - - # Make sure this doesn't break existing Series.round - assert_series_equal(df['col1'].round(1), expected_rounded['col1']) - - # named columns - # GH 11986 - decimals = 2 - expected_rounded = DataFrame( - {'col1': [1.12, 2.12, 3.12], 'col2': [1.23, 2.23, 3.23]}) - df.columns.name = "cols" - expected_rounded.columns.name = "cols" - assert_frame_equal(df.round(decimals), expected_rounded) - - # interaction of named columns & series - assert_series_equal(df['col1'].round(decimals), - expected_rounded['col1']) - assert_series_equal(df.round(decimals)['col1'], - expected_rounded['col1']) - - def test_round_mixed_type(self): - # GH11885 - df = DataFrame({'col1': [1.1, 2.2, 3.3, 4.4], - 'col2': ['1', 'a', 'c', 'f'], - 'col3': date_range('20111111', periods=4)}) - round_0 = DataFrame({'col1': [1., 2., 3., 4.], - 'col2': ['1', 'a', 'c', 'f'], - 'col3': date_range('20111111', periods=4)}) - assert_frame_equal(df.round(), round_0) - assert_frame_equal(df.round(1), df) - assert_frame_equal(df.round({'col1': 1}), df) - assert_frame_equal(df.round({'col1': 0}), round_0) - assert_frame_equal(df.round({'col1': 0, 'col2': 1}), round_0) - 
assert_frame_equal(df.round({'col3': 1}), df) - - def test_round_issue(self): - # GH11611 - - df = pd.DataFrame(np.random.random([3, 3]), columns=['A', 'B', 'C'], - index=['first', 'second', 'third']) - - dfs = pd.concat((df, df), axis=1) - rounded = dfs.round() - self.assertTrue(rounded.index.equals(dfs.index)) - - decimals = pd.Series([1, 0, 2], index=['A', 'B', 'A']) - self.assertRaises(ValueError, df.round, decimals) - - def test_built_in_round(self): - if not compat.PY3: - raise nose.SkipTest("build in round cannot be overriden " - "prior to Python 3") - - # GH11763 - # Here's the test frame we'll be working with - df = DataFrame( - {'col1': [1.123, 2.123, 3.123], 'col2': [1.234, 2.234, 3.234]}) - - # Default round to integer (i.e. decimals=0) - expected_rounded = DataFrame( - {'col1': [1., 2., 3.], 'col2': [1., 2., 3.]}) - assert_frame_equal(round(df), expected_rounded) - - def test_quantile(self): - from numpy import percentile - - q = self.tsframe.quantile(0.1, axis=0) - self.assertEqual(q['A'], percentile(self.tsframe['A'], 10)) - q = self.tsframe.quantile(0.9, axis=1) - q = self.intframe.quantile(0.1) - self.assertEqual(q['A'], percentile(self.intframe['A'], 10)) - - # test degenerate case - q = DataFrame({'x': [], 'y': []}).quantile(0.1, axis=0) - assert(np.isnan(q['x']) and np.isnan(q['y'])) - - # non-numeric exclusion - df = DataFrame({'col1':['A','A','B','B'], 'col2':[1,2,3,4]}) - rs = df.quantile(0.5) - xp = df.median() - assert_series_equal(rs, xp) - - # axis - df = DataFrame({"A": [1, 2, 3], "B": [2, 3, 4]}, index=[1, 2, 3]) - result = df.quantile(.5, axis=1) - expected = Series([1.5, 2.5, 3.5], index=[1, 2, 3]) - assert_series_equal(result, expected) - - result = df.quantile([.5, .75], axis=1) - expected = DataFrame({1: [1.5, 1.75], 2: [2.5, 2.75], - 3: [3.5, 3.75]}, index=[0.5, 0.75]) - assert_frame_equal(result, expected, check_index_type=True) - - # We may want to break API in the future to change this - # so that we exclude non-numeric along 
the same axis - # See GH #7312 - df = DataFrame([[1, 2, 3], - ['a', 'b', 4]]) - result = df.quantile(.5, axis=1) - expected = Series([3., 4.], index=[0, 1]) - assert_series_equal(result, expected) - - def test_quantile_axis_parameter(self): - # GH 9543/9544 - - df = DataFrame({"A": [1, 2, 3], "B": [2, 3, 4]}, index=[1, 2, 3]) - - result = df.quantile(.5, axis=0) - - expected = Series([2., 3.], index=["A", "B"]) - assert_series_equal(result, expected) - - expected = df.quantile(.5, axis="index") - assert_series_equal(result, expected) - - result = df.quantile(.5, axis=1) - - expected = Series([1.5, 2.5, 3.5], index=[1, 2, 3]) - assert_series_equal(result, expected) - - result = df.quantile(.5, axis="columns") - assert_series_equal(result, expected) - - self.assertRaises(ValueError, df.quantile, 0.1, axis=-1) - self.assertRaises(ValueError, df.quantile, 0.1, axis="column") - - def test_quantile_interpolation(self): - # GH #10174 - if _np_version_under1p9: - raise nose.SkipTest("Numpy version under 1.9") - - from numpy import percentile - - # interpolation = linear (default case) - q = self.tsframe.quantile(0.1, axis=0, interpolation='linear') - self.assertEqual(q['A'], percentile(self.tsframe['A'], 10)) - q = self.intframe.quantile(0.1) - self.assertEqual(q['A'], percentile(self.intframe['A'], 10)) - - # test with and without interpolation keyword - q1 = self.intframe.quantile(0.1) - self.assertEqual(q1['A'], np.percentile(self.intframe['A'], 10)) - assert_series_equal(q, q1) - - # interpolation method other than default linear - df = DataFrame({"A": [1, 2, 3], "B": [2, 3, 4]}, index=[1, 2, 3]) - result = df.quantile(.5, axis=1, interpolation='nearest') - expected = Series([1., 2., 3.], index=[1, 2, 3]) - assert_series_equal(result, expected) - - # axis - result = df.quantile([.5, .75], axis=1, interpolation='lower') - expected = DataFrame({1: [1., 1.], 2: [2., 2.], - 3: [3., 3.]}, index=[0.5, 0.75]) - assert_frame_equal(result, expected) - - # test degenerate case - 
df = DataFrame({'x': [], 'y': []}) - q = df.quantile(0.1, axis=0, interpolation='higher') - assert(np.isnan(q['x']) and np.isnan(q['y'])) - - # multi - df = DataFrame([[1, 1, 1], [2, 2, 2], [3, 3, 3]], - columns=['a', 'b', 'c']) - result = df.quantile([.25, .5], interpolation='midpoint') - expected = DataFrame([[1.5, 1.5, 1.5], [2.5, 2.5, 2.5]], - index=[.25, .5], columns=['a', 'b', 'c']) - assert_frame_equal(result, expected) - - def test_quantile_interpolation_np_lt_1p9(self): - # GH #10174 - if not _np_version_under1p9: - raise nose.SkipTest("Numpy version is greater than 1.9") - - from numpy import percentile - - # interpolation = linear (default case) - q = self.tsframe.quantile(0.1, axis=0, interpolation='linear') - self.assertEqual(q['A'], percentile(self.tsframe['A'], 10)) - q = self.intframe.quantile(0.1) - self.assertEqual(q['A'], percentile(self.intframe['A'], 10)) - - # test with and without interpolation keyword - q1 = self.intframe.quantile(0.1) - self.assertEqual(q1['A'], np.percentile(self.intframe['A'], 10)) - assert_series_equal(q, q1) - - # interpolation method other than default linear - expErrMsg = "Interpolation methods other than linear" - df = DataFrame({"A": [1, 2, 3], "B": [2, 3, 4]}, index=[1, 2, 3]) - with assertRaisesRegexp(ValueError, expErrMsg): - df.quantile(.5, axis=1, interpolation='nearest') - - with assertRaisesRegexp(ValueError, expErrMsg): - df.quantile([.5, .75], axis=1, interpolation='lower') - - # test degenerate case - df = DataFrame({'x': [], 'y': []}) - with assertRaisesRegexp(ValueError, expErrMsg): - q = df.quantile(0.1, axis=0, interpolation='higher') - - # multi - df = DataFrame([[1, 1, 1], [2, 2, 2], [3, 3, 3]], - columns=['a', 'b', 'c']) - with assertRaisesRegexp(ValueError, expErrMsg): - df.quantile([.25, .5], interpolation='midpoint') - - def test_quantile_multi(self): - df = DataFrame([[1, 1, 1], [2, 2, 2], [3, 3, 3]], - columns=['a', 'b', 'c']) - result = df.quantile([.25, .5]) - expected = DataFrame([[1.5, 1.5, 
1.5], [2., 2., 2.]], - index=[.25, .5], columns=['a', 'b', 'c']) - assert_frame_equal(result, expected) - - # axis = 1 - result = df.quantile([.25, .5], axis=1) - expected = DataFrame([[1.5, 1.5, 1.5], [2., 2., 2.]], - index=[.25, .5], columns=[0, 1, 2]) - - # empty - result = DataFrame({'x': [], 'y': []}).quantile([0.1, .9], axis=0) - expected = DataFrame({'x': [np.nan, np.nan], 'y': [np.nan, np.nan]}, - index=[.1, .9]) - assert_frame_equal(result, expected) - - def test_quantile_datetime(self): - df = DataFrame({'a': pd.to_datetime(['2010', '2011']), 'b': [0, 5]}) - - # exclude datetime - result = df.quantile(.5) - expected = Series([2.5], index=['b']) - - # datetime - result = df.quantile(.5, numeric_only=False) - expected = Series([Timestamp('2010-07-02 12:00:00'), 2.5], - index=['a', 'b']) - assert_series_equal(result, expected) - - # datetime w/ multi - result = df.quantile([.5], numeric_only=False) - expected = DataFrame([[Timestamp('2010-07-02 12:00:00'), 2.5]], - index=[.5], columns=['a', 'b']) - assert_frame_equal(result, expected) - - # axis = 1 - df['c'] = pd.to_datetime(['2011', '2012']) - result = df[['a', 'c']].quantile(.5, axis=1, numeric_only=False) - expected = Series([Timestamp('2010-07-02 12:00:00'), - Timestamp('2011-07-02 12:00:00')], - index=[0, 1]) - assert_series_equal(result, expected) - - result = df[['a', 'c']].quantile([.5], axis=1, numeric_only=False) - expected = DataFrame([[Timestamp('2010-07-02 12:00:00'), - Timestamp('2011-07-02 12:00:00')]], - index=[0.5], columns=[0, 1]) - assert_frame_equal(result, expected) - - def test_quantile_invalid(self): - msg = 'percentiles should all be in the interval \\[0, 1\\]' - for invalid in [-1, 2, [0.5, -1], [0.5, 2]]: - with tm.assertRaisesRegexp(ValueError, msg): - self.tsframe.quantile(invalid) - - def test_cumsum(self): - self.tsframe.ix[5:10, 0] = nan - self.tsframe.ix[10:15, 1] = nan - self.tsframe.ix[15:, 2] = nan - - # axis = 0 - cumsum = self.tsframe.cumsum() - expected = 
self.tsframe.apply(Series.cumsum) - assert_frame_equal(cumsum, expected) - - # axis = 1 - cumsum = self.tsframe.cumsum(axis=1) - expected = self.tsframe.apply(Series.cumsum, axis=1) - assert_frame_equal(cumsum, expected) - - # works - df = DataFrame({'A': np.arange(20)}, index=np.arange(20)) - result = df.cumsum() - - # fix issue - cumsum_xs = self.tsframe.cumsum(axis=1) - self.assertEqual(np.shape(cumsum_xs), np.shape(self.tsframe)) - - def test_cumprod(self): - self.tsframe.ix[5:10, 0] = nan - self.tsframe.ix[10:15, 1] = nan - self.tsframe.ix[15:, 2] = nan - - # axis = 0 - cumprod = self.tsframe.cumprod() - expected = self.tsframe.apply(Series.cumprod) - assert_frame_equal(cumprod, expected) - - # axis = 1 - cumprod = self.tsframe.cumprod(axis=1) - expected = self.tsframe.apply(Series.cumprod, axis=1) - assert_frame_equal(cumprod, expected) - - # fix issue - cumprod_xs = self.tsframe.cumprod(axis=1) - self.assertEqual(np.shape(cumprod_xs), np.shape(self.tsframe)) - - # ints - df = self.tsframe.fillna(0).astype(int) - df.cumprod(0) - df.cumprod(1) - - # ints32 - df = self.tsframe.fillna(0).astype(np.int32) - df.cumprod(0) - df.cumprod(1) - - def test_rank(self): - tm._skip_if_no_scipy() - from scipy.stats import rankdata - - self.frame['A'][::2] = np.nan - self.frame['B'][::3] = np.nan - self.frame['C'][::4] = np.nan - self.frame['D'][::5] = np.nan - - ranks0 = self.frame.rank() - ranks1 = self.frame.rank(1) - mask = np.isnan(self.frame.values) - - fvals = self.frame.fillna(np.inf).values - - exp0 = np.apply_along_axis(rankdata, 0, fvals) - exp0[mask] = np.nan - - exp1 = np.apply_along_axis(rankdata, 1, fvals) - exp1[mask] = np.nan - - assert_almost_equal(ranks0.values, exp0) - assert_almost_equal(ranks1.values, exp1) - - # integers - df = DataFrame(np.random.randint(0, 5, size=40).reshape((10, 4))) - - result = df.rank() - exp = df.astype(float).rank() - assert_frame_equal(result, exp) - - result = df.rank(1) - exp = df.astype(float).rank(1) - 
assert_frame_equal(result, exp) - - def test_rank2(self): - from datetime import datetime - df = DataFrame([[1, 3, 2], [1, 2, 3]]) - expected = DataFrame([[1.0, 3.0, 2.0], [1, 2, 3]]) / 3.0 - result = df.rank(1, pct=True) - assert_frame_equal(result, expected) - - df = DataFrame([[1, 3, 2], [1, 2, 3]]) - expected = df.rank(0) / 2.0 - result = df.rank(0, pct=True) - assert_frame_equal(result, expected) - - - - df = DataFrame([['b', 'c', 'a'], ['a', 'c', 'b']]) - expected = DataFrame([[2.0, 3.0, 1.0], [1, 3, 2]]) - result = df.rank(1, numeric_only=False) - assert_frame_equal(result, expected) - - - expected = DataFrame([[2.0, 1.5, 1.0], [1, 1.5, 2]]) - result = df.rank(0, numeric_only=False) - assert_frame_equal(result, expected) - - df = DataFrame([['b', np.nan, 'a'], ['a', 'c', 'b']]) - expected = DataFrame([[2.0, nan, 1.0], [1.0, 3.0, 2.0]]) - result = df.rank(1, numeric_only=False) - assert_frame_equal(result, expected) - - expected = DataFrame([[2.0, nan, 1.0], [1.0, 1.0, 2.0]]) - result = df.rank(0, numeric_only=False) - assert_frame_equal(result, expected) - - # f7u12, this does not work without extensive workaround - data = [[datetime(2001, 1, 5), nan, datetime(2001, 1, 2)], - [datetime(2000, 1, 2), datetime(2000, 1, 3), - datetime(2000, 1, 1)]] - df = DataFrame(data) - - # check the rank - expected = DataFrame([[2., nan, 1.], - [2., 3., 1.]]) - result = df.rank(1, numeric_only=False) - assert_frame_equal(result, expected) - - # mixed-type frames - self.mixed_frame['datetime'] = datetime.now() - self.mixed_frame['timedelta'] = timedelta(days=1,seconds=1) - - result = self.mixed_frame.rank(1) - expected = self.mixed_frame.rank(1, numeric_only=True) - assert_frame_equal(result, expected) - - df = DataFrame({"a":[1e-20, -5, 1e-20+1e-40, 10, 1e60, 1e80, 1e-30]}) - exp = DataFrame({"a":[ 3.5, 1. , 3.5, 5. , 6. , 7. , 2. 
]}) - assert_frame_equal(df.rank(), exp) - - def test_rank_na_option(self): - tm._skip_if_no_scipy() - from scipy.stats import rankdata - - self.frame['A'][::2] = np.nan - self.frame['B'][::3] = np.nan - self.frame['C'][::4] = np.nan - self.frame['D'][::5] = np.nan - - # bottom - ranks0 = self.frame.rank(na_option='bottom') - ranks1 = self.frame.rank(1, na_option='bottom') - - fvals = self.frame.fillna(np.inf).values - - exp0 = np.apply_along_axis(rankdata, 0, fvals) - exp1 = np.apply_along_axis(rankdata, 1, fvals) - - assert_almost_equal(ranks0.values, exp0) - assert_almost_equal(ranks1.values, exp1) - - # top - ranks0 = self.frame.rank(na_option='top') - ranks1 = self.frame.rank(1, na_option='top') - - fval0 = self.frame.fillna((self.frame.min() - 1).to_dict()).values - fval1 = self.frame.T - fval1 = fval1.fillna((fval1.min() - 1).to_dict()).T - fval1 = fval1.fillna(np.inf).values - - exp0 = np.apply_along_axis(rankdata, 0, fval0) - exp1 = np.apply_along_axis(rankdata, 1, fval1) - - assert_almost_equal(ranks0.values, exp0) - assert_almost_equal(ranks1.values, exp1) - - # descending - - # bottom - ranks0 = self.frame.rank(na_option='top', ascending=False) - ranks1 = self.frame.rank(1, na_option='top', ascending=False) - - fvals = self.frame.fillna(np.inf).values - - exp0 = np.apply_along_axis(rankdata, 0, -fvals) - exp1 = np.apply_along_axis(rankdata, 1, -fvals) - - assert_almost_equal(ranks0.values, exp0) - assert_almost_equal(ranks1.values, exp1) - - # descending - - # top - ranks0 = self.frame.rank(na_option='bottom', ascending=False) - ranks1 = self.frame.rank(1, na_option='bottom', ascending=False) - - fval0 = self.frame.fillna((self.frame.min() - 1).to_dict()).values - fval1 = self.frame.T - fval1 = fval1.fillna((fval1.min() - 1).to_dict()).T - fval1 = fval1.fillna(np.inf).values - - exp0 = np.apply_along_axis(rankdata, 0, -fval0) - exp1 = np.apply_along_axis(rankdata, 1, -fval1) - - assert_almost_equal(ranks0.values, exp0) - 
assert_almost_equal(ranks1.values, exp1) - - def test_axis_aliases(self): - - f = self.frame - - # reg name - expected = f.sum(axis=0) - result = f.sum(axis='index') - assert_series_equal(result, expected) - - expected = f.sum(axis=1) - result = f.sum(axis='columns') - assert_series_equal(result, expected) - - def test_combine_first_mixed(self): - a = Series(['a', 'b'], index=lrange(2)) - b = Series(lrange(2), index=lrange(2)) - f = DataFrame({'A': a, 'B': b}) - - a = Series(['a', 'b'], index=lrange(5, 7)) - b = Series(lrange(2), index=lrange(5, 7)) - g = DataFrame({'A': a, 'B': b}) - - combined = f.combine_first(g) - - def test_more_asMatrix(self): - values = self.mixed_frame.as_matrix() - self.assertEqual(values.shape[1], len(self.mixed_frame.columns)) - - def test_reindex_boolean(self): - frame = DataFrame(np.ones((10, 2), dtype=bool), - index=np.arange(0, 20, 2), - columns=[0, 2]) - - reindexed = frame.reindex(np.arange(10)) - self.assertEqual(reindexed.values.dtype, np.object_) - self.assertTrue(isnull(reindexed[0][1])) - - reindexed = frame.reindex(columns=lrange(3)) - self.assertEqual(reindexed.values.dtype, np.object_) - self.assertTrue(isnull(reindexed[1]).all()) - - def test_reindex_objects(self): - reindexed = self.mixed_frame.reindex(columns=['foo', 'A', 'B']) - self.assertIn('foo', reindexed) - - reindexed = self.mixed_frame.reindex(columns=['A', 'B']) - self.assertNotIn('foo', reindexed) - - def test_reindex_corner(self): - index = Index(['a', 'b', 'c']) - dm = self.empty.reindex(index=[1, 2, 3]) - reindexed = dm.reindex(columns=index) - self.assertTrue(reindexed.columns.equals(index)) - - # ints are weird - - smaller = self.intframe.reindex(columns=['A', 'B', 'E']) - self.assertEqual(smaller['E'].dtype, np.float64) - - def test_reindex_axis(self): - cols = ['A', 'B', 'E'] - reindexed1 = self.intframe.reindex_axis(cols, axis=1) - reindexed2 = self.intframe.reindex(columns=cols) - assert_frame_equal(reindexed1, reindexed2) - - rows = 
self.intframe.index[0:5] - reindexed1 = self.intframe.reindex_axis(rows, axis=0) - reindexed2 = self.intframe.reindex(index=rows) - assert_frame_equal(reindexed1, reindexed2) - - self.assertRaises(ValueError, self.intframe.reindex_axis, rows, axis=2) - - # no-op case - cols = self.frame.columns.copy() - newFrame = self.frame.reindex_axis(cols, axis=1) - assert_frame_equal(newFrame, self.frame) - - def test_reindex_with_nans(self): - df = DataFrame([[1, 2], [3, 4], [np.nan, np.nan], [7, 8], [9, 10]], - columns=['a', 'b'], - index=[100.0, 101.0, np.nan, 102.0, 103.0]) - - result = df.reindex(index=[101.0, 102.0, 103.0]) - expected = df.iloc[[1, 3, 4]] - assert_frame_equal(result, expected) - - result = df.reindex(index=[103.0]) - expected = df.iloc[[4]] - assert_frame_equal(result, expected) - - result = df.reindex(index=[101.0]) - expected = df.iloc[[1]] - assert_frame_equal(result, expected) - - def test_reindex_multi(self): - df = DataFrame(np.random.randn(3, 3)) - - result = df.reindex(lrange(4), lrange(4)) - expected = df.reindex(lrange(4)).reindex(columns=lrange(4)) - - assert_frame_equal(result, expected) - - df = DataFrame(np.random.randint(0, 10, (3, 3))) - - result = df.reindex(lrange(4), lrange(4)) - expected = df.reindex(lrange(4)).reindex(columns=lrange(4)) - - assert_frame_equal(result, expected) - - df = DataFrame(np.random.randint(0, 10, (3, 3))) - - result = df.reindex(lrange(2), lrange(2)) - expected = df.reindex(lrange(2)).reindex(columns=lrange(2)) - - assert_frame_equal(result, expected) - - df = DataFrame(np.random.randn(5, 3) + 1j, columns=['a', 'b', 'c']) - - result = df.reindex(index=[0, 1], columns=['a', 'b']) - expected = df.reindex([0, 1]).reindex(columns=['a', 'b']) - - assert_frame_equal(result, expected) - - def test_rename_objects(self): - renamed = self.mixed_frame.rename(columns=str.upper) - self.assertIn('FOO', renamed) - self.assertNotIn('foo', renamed) - - def test_fill_corner(self): - self.mixed_frame.ix[5:20,'foo'] = nan - 
self.mixed_frame.ix[-10:,'A'] = nan - - filled = self.mixed_frame.fillna(value=0) - self.assertTrue((filled.ix[5:20,'foo'] == 0).all()) - del self.mixed_frame['foo'] - - empty_float = self.frame.reindex(columns=[]) - result = empty_float.fillna(value=0) - - def test_count_objects(self): - dm = DataFrame(self.mixed_frame._series) - df = DataFrame(self.mixed_frame._series) - - assert_series_equal(dm.count(), df.count()) - assert_series_equal(dm.count(1), df.count(1)) - - def test_cumsum_corner(self): - dm = DataFrame(np.arange(20).reshape(4, 5), - index=lrange(4), columns=lrange(5)) - result = dm.cumsum() - - #---------------------------------------------------------------------- - # Stacking / unstacking - - def test_stack_unstack(self): - stacked = self.frame.stack() - stacked_df = DataFrame({'foo': stacked, 'bar': stacked}) - - unstacked = stacked.unstack() - unstacked_df = stacked_df.unstack() - - assert_frame_equal(unstacked, self.frame) - assert_frame_equal(unstacked_df['bar'], self.frame) - - unstacked_cols = stacked.unstack(0) - unstacked_cols_df = stacked_df.unstack(0) - assert_frame_equal(unstacked_cols.T, self.frame) - assert_frame_equal(unstacked_cols_df['bar'].T, self.frame) - - def test_stack_ints(self): - df = DataFrame( - np.random.randn(30, 27), - columns=MultiIndex.from_tuples( - list(itertools.product(range(3), repeat=3)) - ) - ) - assert_frame_equal( - df.stack(level=[1, 2]), - df.stack(level=1).stack(level=1) - ) - assert_frame_equal( - df.stack(level=[-2, -1]), - df.stack(level=1).stack(level=1) - ) - - df_named = df.copy() - df_named.columns.set_names(range(3), inplace=True) - assert_frame_equal( - df_named.stack(level=[1, 2]), - df_named.stack(level=1).stack(level=1) - ) - - def test_stack_mixed_levels(self): - columns = MultiIndex.from_tuples( - [('A', 'cat', 'long'), ('B', 'cat', 'long'), - ('A', 'dog', 'short'), ('B', 'dog', 'short')], - names=['exp', 'animal', 'hair_length'] - ) - df = DataFrame(randn(4, 4), columns=columns) - - 
animal_hair_stacked = df.stack(level=['animal', 'hair_length']) - exp_hair_stacked = df.stack(level=['exp', 'hair_length']) - - # GH #8584: Need to check that stacking works when a number - # is passed that is both a level name and in the range of - # the level numbers - df2 = df.copy() - df2.columns.names = ['exp', 'animal', 1] - assert_frame_equal(df2.stack(level=['animal', 1]), - animal_hair_stacked, check_names=False) - assert_frame_equal(df2.stack(level=['exp', 1]), - exp_hair_stacked, check_names=False) - - # When mixed types are passed and the ints are not level - # names, raise - self.assertRaises(ValueError, df2.stack, level=['animal', 0]) - - # GH #8584: Having 0 in the level names could raise a - # strange error about lexsort depth - df3 = df.copy() - df3.columns.names = ['exp', 'animal', 0] - assert_frame_equal(df3.stack(level=['animal', 0]), - animal_hair_stacked, check_names=False) - - def test_stack_int_level_names(self): - columns = MultiIndex.from_tuples( - [('A', 'cat', 'long'), ('B', 'cat', 'long'), - ('A', 'dog', 'short'), ('B', 'dog', 'short')], - names=['exp', 'animal', 'hair_length'] - ) - df = DataFrame(randn(4, 4), columns=columns) - - exp_animal_stacked = df.stack(level=['exp', 'animal']) - animal_hair_stacked = df.stack(level=['animal', 'hair_length']) - exp_hair_stacked = df.stack(level=['exp', 'hair_length']) - - df2 = df.copy() - df2.columns.names = [0, 1, 2] - assert_frame_equal(df2.stack(level=[1, 2]), animal_hair_stacked, - check_names=False ) - assert_frame_equal(df2.stack(level=[0, 1]), exp_animal_stacked, - check_names=False) - assert_frame_equal(df2.stack(level=[0, 2]), exp_hair_stacked, - check_names=False) - - # Out-of-order int column names - df3 = df.copy() - df3.columns.names = [2, 0, 1] - assert_frame_equal(df3.stack(level=[0, 1]), animal_hair_stacked, - check_names=False) - assert_frame_equal(df3.stack(level=[2, 0]), exp_animal_stacked, - check_names=False) - assert_frame_equal(df3.stack(level=[2, 1]), exp_hair_stacked, - 
check_names=False) - - - def test_unstack_bool(self): - df = DataFrame([False, False], - index=MultiIndex.from_arrays([['a', 'b'], ['c', 'l']]), - columns=['col']) - rs = df.unstack() - xp = DataFrame(np.array([[False, np.nan], [np.nan, False]], - dtype=object), - index=['a', 'b'], - columns=MultiIndex.from_arrays([['col', 'col'], - ['c', 'l']])) - assert_frame_equal(rs, xp) - - def test_unstack_level_binding(self): - # GH9856 - mi = pd.MultiIndex( - levels=[[u('foo'), u('bar')], [u('one'), u('two')], - [u('a'), u('b')]], - labels=[[0, 0, 1, 1], [0, 1, 0, 1], [1, 0, 1, 0]], - names=[u('first'), u('second'), u('third')]) - s = pd.Series(0, index=mi) - result = s.unstack([1, 2]).stack(0) - - expected_mi = pd.MultiIndex( - levels=[['foo', 'bar'], ['one', 'two']], - labels=[[0, 0, 1, 1], [0, 1, 0, 1]], - names=['first', 'second']) - - expected = pd.DataFrame(np.array([[np.nan, 0], - [0, np.nan], - [np.nan, 0], - [0, np.nan]], - dtype=np.float64), - index=expected_mi, - columns=pd.Index(['a', 'b'], name='third')) - - assert_frame_equal(result, expected) - - def test_unstack_to_series(self): - # check reversibility - data = self.frame.unstack() - - self.assertTrue(isinstance(data, Series)) - undo = data.unstack().T - assert_frame_equal(undo, self.frame) - - # check NA handling - data = DataFrame({'x': [1, 2, np.NaN], 'y': [3.0, 4, np.NaN]}) - data.index = Index(['a', 'b', 'c']) - result = data.unstack() - - midx = MultiIndex(levels=[['x', 'y'], ['a', 'b', 'c']], - labels=[[0, 0, 0, 1, 1, 1], [0, 1, 2, 0, 1, 2]]) - expected = Series([1, 2, np.NaN, 3, 4, np.NaN], index=midx) - - assert_series_equal(result, expected) - - # check composability of unstack - old_data = data.copy() - for _ in range(4): - data = data.unstack() - assert_frame_equal(old_data, data) - - def test_unstack_dtypes(self): - - # GH 2929 - rows = [[1, 1, 3, 4], - [1, 2, 3, 4], - [2, 1, 3, 4], - [2, 2, 3, 4]] - - df = DataFrame(rows, columns=list('ABCD')) - result = df.get_dtype_counts() - expected = 
Series({'int64' : 4})
-        assert_series_equal(result, expected)
-
-        # single dtype
-        df2 = df.set_index(['A','B'])
-        df3 = df2.unstack('B')
-        result = df3.get_dtype_counts()
-        expected = Series({'int64' : 4})
-        assert_series_equal(result, expected)
-
-        # mixed
-        df2 = df.set_index(['A','B'])
-        df2['C'] = 3.
-        df3 = df2.unstack('B')
-        result = df3.get_dtype_counts()
-        expected = Series({'int64' : 2, 'float64' : 2})
-        assert_series_equal(result, expected)
-
-        df2['D'] = 'foo'
-        df3 = df2.unstack('B')
-        result = df3.get_dtype_counts()
-        expected = Series({'float64' : 2, 'object' : 2})
-        assert_series_equal(result, expected)
-
-        # GH7405
-        for c, d in (np.zeros(5), np.zeros(5)), \
-                    (np.arange(5, dtype='f8'), np.arange(5, 10, dtype='f8')):
-
-            df = DataFrame({'A': ['a']*5, 'C':c, 'D':d,
-                            'B':pd.date_range('2012-01-01', periods=5)})
-
-            right = df.iloc[:3].copy(deep=True)
-
-            df = df.set_index(['A', 'B'])
-            df['D'] = df['D'].astype('int64')
-
-            left = df.iloc[:3].unstack(0)
-            right = right.set_index(['A', 'B']).unstack(0)
-            right[('D', 'a')] = right[('D', 'a')].astype('int64')
-
-            self.assertEqual(left.shape, (3, 2))
-            assert_frame_equal(left, right)
-
-    def test_unstack_non_unique_index_names(self):
-        idx = MultiIndex.from_tuples([('a', 'b'), ('c', 'd')],
-                                     names=['c1', 'c1'])
-        df = DataFrame([1, 2], index=idx)
-        with tm.assertRaises(ValueError):
-            df.unstack('c1')
-
-        with tm.assertRaises(ValueError):
-            df.T.stack('c1')
-
-    def test_unstack_nan_index(self):  # GH7466
-        cast = lambda val: '{0:1}'.format('' if val != val else val)
-        nan = np.nan
-
-        def verify(df):
-            mk_list = lambda a: list(a) if isinstance(a, tuple) else [a]
-            rows, cols = df.notnull().values.nonzero()
-            for i, j in zip(rows, cols):
-                left = sorted(df.iloc[i, j].split('.'))
-                right = mk_list(df.index[i]) + mk_list(df.columns[j])
-                right = sorted(list(map(cast, right)))
-                self.assertEqual(left, right)
-
-        df = DataFrame({'jim':['a', 'b', nan, 'd'],
-                        'joe':['w', 'x', 'y', 'z'],
-                        'jolie':['a.w', 'b.x', ' .y', 'd.z']})
-
-        left = df.set_index(['jim', 'joe']).unstack()['jolie']
-        right = df.set_index(['joe', 'jim']).unstack()['jolie'].T
-        assert_frame_equal(left, right)
-
-        for idx in permutations(df.columns[:2]):
-            mi = df.set_index(list(idx))
-            for lev in range(2):
-                udf = mi.unstack(level=lev)
-                self.assertEqual(udf.notnull().values.sum(), len(df))
-                verify(udf['jolie'])
-
-        df = DataFrame({'1st':['d'] * 3 + [nan] * 5 + ['a'] * 2 +
-                              ['c'] * 3 + ['e'] * 2 + ['b'] * 5,
-                        '2nd':['y'] * 2 + ['w'] * 3 + [nan] * 3 +
-                              ['z'] * 4 + [nan] * 3 + ['x'] * 3 + [nan] * 2,
-                        '3rd':[67,39,53,72,57,80,31,18,11,30,59,
-                               50,62,59,76,52,14,53,60,51]})
-
-        df['4th'], df['5th'] = \
-            df.apply(lambda r: '.'.join(map(cast, r)), axis=1), \
-            df.apply(lambda r: '.'.join(map(cast, r.iloc[::-1])), axis=1)
-
-        for idx in permutations(['1st', '2nd', '3rd']):
-            mi = df.set_index(list(idx))
-            for lev in range(3):
-                udf = mi.unstack(level=lev)
-                self.assertEqual(udf.notnull().values.sum(), 2 * len(df))
-                for col in ['4th', '5th']:
-                    verify(udf[col])
-
-        # GH7403
-        df = pd.DataFrame({'A': list('aaaabbbb'),'B':range(8), 'C':range(8)})
-        df.iloc[3, 1] = np.NaN
-        left = df.set_index(['A', 'B']).unstack(0)
-
-        vals = [[3, 0, 1, 2, nan, nan, nan, nan],
-                [nan, nan, nan, nan, 4, 5, 6, 7]]
-        vals = list(map(list, zip(*vals)))
-        idx = Index([nan, 0, 1, 2, 4, 5, 6, 7], name='B')
-        cols = MultiIndex(levels=[['C'], ['a', 'b']],
-                          labels=[[0, 0], [0, 1]],
-                          names=[None, 'A'])
-
-        right = DataFrame(vals, columns=cols, index=idx)
-        assert_frame_equal(left, right)
-
-        df = DataFrame({'A': list('aaaabbbb'), 'B':list(range(4))*2,
-                        'C':range(8)})
-        df.iloc[2,1] = np.NaN
-        left = df.set_index(['A', 'B']).unstack(0)
-
-        vals = [[2, nan], [0, 4], [1, 5], [nan, 6], [3, 7]]
-        cols = MultiIndex(levels=[['C'], ['a', 'b']],
-                          labels=[[0, 0], [0, 1]],
-                          names=[None, 'A'])
-        idx = Index([nan, 0, 1, 2, 3], name='B')
-        right = DataFrame(vals, columns=cols, index=idx)
-        assert_frame_equal(left, right)
-
-        df = pd.DataFrame({'A': list('aaaabbbb'),'B':list(range(4))*2,
-                           'C':range(8)})
-        df.iloc[3,1] = np.NaN
-        left = df.set_index(['A', 'B']).unstack(0)
-
-        vals = [[3, nan], [0, 4], [1, 5], [2, 6], [nan, 7]]
-        cols = MultiIndex(levels=[['C'], ['a', 'b']],
-                          labels=[[0, 0], [0, 1]],
-                          names=[None, 'A'])
-        idx = Index([nan, 0, 1, 2, 3], name='B')
-        right = DataFrame(vals, columns=cols, index=idx)
-        assert_frame_equal(left, right)
-
-        # GH7401
-        df = pd.DataFrame({'A': list('aaaaabbbbb'), 'C':np.arange(10),
-                           'B':date_range('2012-01-01', periods=5).tolist()*2 })
-
-        df.iloc[3,1] = np.NaN
-        left = df.set_index(['A', 'B']).unstack()
-
-        vals = np.array([[3, 0, 1, 2, nan, 4], [nan, 5, 6, 7, 8, 9]])
-        idx = Index(['a', 'b'], name='A')
-        cols = MultiIndex(levels=[['C'], date_range('2012-01-01', periods=5)],
-                          labels=[[0, 0, 0, 0, 0, 0], [-1, 0, 1, 2, 3, 4]],
-                          names=[None, 'B'])
-
-        right = DataFrame(vals, columns=cols, index=idx)
-        assert_frame_equal(left, right)
-
-        # GH4862
-        vals = [['Hg', nan, nan, 680585148],
-                ['U', 0.0, nan, 680585148],
-                ['Pb', 7.07e-06, nan, 680585148],
-                ['Sn', 2.3614e-05, 0.0133, 680607017],
-                ['Ag', 0.0, 0.0133, 680607017],
-                ['Hg', -0.00015, 0.0133, 680607017]]
-        df = DataFrame(vals, columns=['agent', 'change', 'dosage', 's_id'],
-                       index=[17263, 17264, 17265, 17266, 17267, 17268])
-
-        left = df.copy().set_index(['s_id','dosage','agent']).unstack()
-
-        vals = [[nan, nan, 7.07e-06, nan, 0.0],
-                [0.0, -0.00015, nan, 2.3614e-05, nan]]
-
-        idx = MultiIndex(levels=[[680585148, 680607017], [0.0133]],
-                         labels=[[0, 1], [-1, 0]],
-                         names=['s_id', 'dosage'])
-
-        cols = MultiIndex(levels=[['change'], ['Ag', 'Hg', 'Pb', 'Sn', 'U']],
-                          labels=[[0, 0, 0, 0, 0], [0, 1, 2, 3, 4]],
-                          names=[None, 'agent'])
-
-        right = DataFrame(vals, columns=cols, index=idx)
-        assert_frame_equal(left, right)
-
-        left = df.ix[17264:].copy().set_index(['s_id','dosage','agent'])
-        assert_frame_equal(left.unstack(), right)
-
-        # GH9497 - multiple unstack with nulls
-        df = DataFrame({'1st':[1, 2, 1, 2, 1, 2],
-                        '2nd':pd.date_range('2014-02-01', periods=6, freq='D'),
-                        'jim':100 + np.arange(6),
-                        'joe':(np.random.randn(6) * 10).round(2)})
-
-        df['3rd'] = df['2nd'] - pd.Timestamp('2014-02-02')
-        df.loc[1, '2nd'] = df.loc[3, '2nd'] = nan
-        df.loc[1, '3rd'] = df.loc[4, '3rd'] = nan
-
-        left = df.set_index(['1st', '2nd', '3rd']).unstack(['2nd', '3rd'])
-        self.assertEqual(left.notnull().values.sum(), 2 * len(df))
-
-        for col in ['jim', 'joe']:
-            for _, r in df.iterrows():
-                key = r['1st'], (col, r['2nd'], r['3rd'])
-                self.assertEqual(r[col], left.loc[key])
-
-    def test_stack_datetime_column_multiIndex(self):
-        # GH 8039
-        t = datetime(2014, 1, 1)
-        df = DataFrame([1, 2, 3, 4], columns=MultiIndex.from_tuples([(t, 'A', 'B')]))
-        result = df.stack()
-
-        eidx = MultiIndex.from_product([(0, 1, 2, 3), ('B',)])
-        ecols = MultiIndex.from_tuples([(t, 'A')])
-        expected = DataFrame([1, 2, 3, 4], index=eidx, columns=ecols)
-        assert_frame_equal(result, expected)
-
-    def test_stack_partial_multiIndex(self):
-        # GH 8844
-        def _test_stack_with_multiindex(multiindex):
-            df = DataFrame(np.arange(3 * len(multiindex)).reshape(3, len(multiindex)),
-                           columns=multiindex)
-            for level in (-1, 0, 1, [0, 1], [1, 0]):
-                result = df.stack(level=level, dropna=False)
-
-                if isinstance(level, int):
-                    # Stacking a single level should not make any all-NaN rows,
-                    # so df.stack(level=level, dropna=False) should be the same
-                    # as df.stack(level=level, dropna=True).
-                    expected = df.stack(level=level, dropna=True)
-                    if isinstance(expected, Series):
-                        assert_series_equal(result, expected)
-                    else:
-                        assert_frame_equal(result, expected)
-
-                df.columns = MultiIndex.from_tuples(df.columns.get_values(),
-                                                    names=df.columns.names)
-                expected = df.stack(level=level, dropna=False)
-                if isinstance(expected, Series):
-                    assert_series_equal(result, expected)
-                else:
-                    assert_frame_equal(result, expected)
-
-        full_multiindex = MultiIndex.from_tuples([('B', 'x'), ('B', 'z'),
-                                                  ('A', 'y'),
-                                                  ('C', 'x'), ('C', 'u')],
-                                                 names=['Upper', 'Lower'])
-        for multiindex_columns in ([0, 1, 2, 3, 4],
-                                   [0, 1, 2, 3], [0, 1, 2, 4],
-                                   [0, 1, 2], [1, 2, 3], [2, 3, 4],
-                                   [0, 1], [0, 2], [0, 3],
-                                   [0], [2], [4]):
-            _test_stack_with_multiindex(full_multiindex[multiindex_columns])
-            if len(multiindex_columns) > 1:
-                multiindex_columns.reverse()
-                _test_stack_with_multiindex(full_multiindex[multiindex_columns])
-
-        df = DataFrame(np.arange(6).reshape(2, 3), columns=full_multiindex[[0, 1, 3]])
-        result = df.stack(dropna=False)
-        expected = DataFrame([[0, 2], [1, nan], [3, 5], [4, nan]],
-                             index=MultiIndex(levels=[[0, 1], ['u', 'x', 'y', 'z']],
-                                              labels=[[0, 0, 1, 1], [1, 3, 1, 3]],
-                                              names=[None, 'Lower']),
-                             columns=Index(['B', 'C'], name='Upper'),
-                             dtype=df.dtypes[0])
-        assert_frame_equal(result, expected)
-
-    def test_repr_with_mi_nat(self):
-        df = DataFrame({'X': [1, 2]},
-                       index=[[pd.NaT, pd.Timestamp('20130101')], ['a', 'b']])
-        res = repr(df)
-        exp = '              X\nNaT        a  1\n2013-01-01 b  2'
-        nose.tools.assert_equal(res, exp)
-
-    def test_reset_index(self):
-        stacked = self.frame.stack()[::2]
-        stacked = DataFrame({'foo': stacked, 'bar': stacked})
-
-        names = ['first', 'second']
-        stacked.index.names = names
-        deleveled = stacked.reset_index()
-        for i, (lev, lab) in enumerate(zip(stacked.index.levels,
-                                           stacked.index.labels)):
-            values = lev.take(lab)
-            name = names[i]
-            assert_almost_equal(values, deleveled[name])
-
-        stacked.index.names = [None, None]
-        deleveled2 = stacked.reset_index()
-        self.assert_numpy_array_equal(deleveled['first'],
-                                      deleveled2['level_0'])
-        self.assert_numpy_array_equal(deleveled['second'],
-                                      deleveled2['level_1'])
-
-        # default name assigned
-        rdf = self.frame.reset_index()
-        self.assert_numpy_array_equal(rdf['index'], self.frame.index.values)
-
-        # default name assigned, corner case
-        df = self.frame.copy()
-        df['index'] = 'foo'
-        rdf = df.reset_index()
-        self.assert_numpy_array_equal(rdf['level_0'], self.frame.index.values)
-
-        # but this is ok
-        self.frame.index.name = 'index'
-        deleveled = self.frame.reset_index()
-        self.assert_numpy_array_equal(deleveled['index'],
-                                      self.frame.index.values)
-        self.assert_numpy_array_equal(deleveled.index,
-                                      np.arange(len(deleveled)))
-
-        # preserve column names
-        self.frame.columns.name = 'columns'
-        resetted = self.frame.reset_index()
-        self.assertEqual(resetted.columns.name, 'columns')
-
-        # only remove certain columns
-        frame = self.frame.reset_index().set_index(['index', 'A', 'B'])
-        rs = frame.reset_index(['A', 'B'])
-
-        assert_frame_equal(rs, self.frame, check_names=False)  # TODO should reset_index check_names ?
-
-        rs = frame.reset_index(['index', 'A', 'B'])
-        assert_frame_equal(rs, self.frame.reset_index(), check_names=False)
-
-        rs = frame.reset_index(['index', 'A', 'B'])
-        assert_frame_equal(rs, self.frame.reset_index(), check_names=False)
-
-        rs = frame.reset_index('A')
-        xp = self.frame.reset_index().set_index(['index', 'B'])
-        assert_frame_equal(rs, xp, check_names=False)
-
-        # test resetting in place
-        df = self.frame.copy()
-        resetted = self.frame.reset_index()
-        df.reset_index(inplace=True)
-        assert_frame_equal(df, resetted, check_names=False)
-
-        frame = self.frame.reset_index().set_index(['index', 'A', 'B'])
-        rs = frame.reset_index('A', drop=True)
-        xp = self.frame.copy()
-        del xp['A']
-        xp = xp.set_index(['B'], append=True)
-        assert_frame_equal(rs, xp, check_names=False)
-
-    def test_reset_index_right_dtype(self):
-        time = np.arange(0.0, 10, np.sqrt(2) / 2)
-        s1 = Series((9.81 * time ** 2) / 2,
-                    index=Index(time, name='time'),
-                    name='speed')
-        df = DataFrame(s1)
-
-        resetted = s1.reset_index()
-        self.assertEqual(resetted['time'].dtype, np.float64)
-
-        resetted = df.reset_index()
-        self.assertEqual(resetted['time'].dtype, np.float64)
-
-    def test_reset_index_multiindex_col(self):
-        vals = np.random.randn(3, 3).astype(object)
-        idx = ['x', 'y', 'z']
-        full = np.hstack(([[x] for x in idx], vals))
-        df = DataFrame(vals, Index(idx, name='a'),
-                       columns=[['b', 'b', 'c'], ['mean', 'median', 'mean']])
-        rs = df.reset_index()
-        xp = DataFrame(full, columns=[['a', 'b', 'b', 'c'],
-                                      ['', 'mean', 'median', 'mean']])
-        assert_frame_equal(rs, xp)
-
-        rs = df.reset_index(col_fill=None)
-        xp = DataFrame(full, columns=[['a', 'b', 'b', 'c'],
-                                      ['a', 'mean', 'median', 'mean']])
-        assert_frame_equal(rs, xp)
-
-        rs = df.reset_index(col_level=1, col_fill='blah')
-        xp = DataFrame(full, columns=[['blah', 'b', 'b', 'c'],
-                                      ['a', 'mean', 'median', 'mean']])
-        assert_frame_equal(rs, xp)
-
-        df = DataFrame(vals,
-                       MultiIndex.from_arrays([[0, 1, 2], ['x', 'y', 'z']],
-                                              names=['d', 'a']),
-                       columns=[['b', 'b', 'c'], ['mean', 'median', 'mean']])
-        rs = df.reset_index('a', )
-        xp = DataFrame(full, Index([0, 1, 2], name='d'),
-                       columns=[['a', 'b', 'b', 'c'],
-                                ['', 'mean', 'median', 'mean']])
-        assert_frame_equal(rs, xp)
-
-        rs = df.reset_index('a', col_fill=None)
-        xp = DataFrame(full, Index(lrange(3), name='d'),
-                       columns=[['a', 'b', 'b', 'c'],
-                                ['a', 'mean', 'median', 'mean']])
-        assert_frame_equal(rs, xp)
-
-        rs = df.reset_index('a', col_fill='blah', col_level=1)
-        xp = DataFrame(full, Index(lrange(3), name='d'),
-                       columns=[['blah', 'b', 'b', 'c'],
-                                ['a', 'mean', 'median', 'mean']])
-        assert_frame_equal(rs, xp)
-
-    def test_reset_index_with_datetimeindex_cols(self):
-        # GH5818
-        #
-        df = pd.DataFrame([[1, 2], [3, 4]],
-                          columns=pd.date_range('1/1/2013', '1/2/2013'),
-                          index=['A', 'B'])
-
-        result = df.reset_index()
-        expected = pd.DataFrame([['A', 1, 2], ['B', 3, 4]],
-                                columns=['index', datetime(2013, 1, 1),
-                                         datetime(2013, 1, 2)])
-        assert_frame_equal(result, expected)
-
-    #----------------------------------------------------------------------
-    # Tests to cope with refactored internals
-    def test_as_matrix_numeric_cols(self):
-        self.frame['foo'] = 'bar'
-
-        values = self.frame.as_matrix(['A', 'B', 'C', 'D'])
-        self.assertEqual(values.dtype, np.float64)
-
-    def test_as_matrix_lcd(self):
-
-        # mixed lcd
-        values = self.mixed_float.as_matrix(['A', 'B', 'C', 'D'])
-        self.assertEqual(values.dtype, np.float64)
-
-        values = self.mixed_float.as_matrix(['A', 'B', 'C' ])
-        self.assertEqual(values.dtype, np.float32)
-
-        values = self.mixed_float.as_matrix(['C'])
-        self.assertEqual(values.dtype, np.float16)
-
-        values = self.mixed_int.as_matrix(['A','B','C','D'])
-        self.assertEqual(values.dtype, np.int64)
-
-        values = self.mixed_int.as_matrix(['A','D'])
-        self.assertEqual(values.dtype, np.int64)
-
-        # guess all ints are cast to uints....
-        values = self.mixed_int.as_matrix(['A','B','C'])
-        self.assertEqual(values.dtype, np.int64)
-
-        values = self.mixed_int.as_matrix(['A','C'])
-        self.assertEqual(values.dtype, np.int32)
-
-        values = self.mixed_int.as_matrix(['C','D'])
-        self.assertEqual(values.dtype, np.int64)
-
-        values = self.mixed_int.as_matrix(['A'])
-        self.assertEqual(values.dtype, np.int32)
-
-        values = self.mixed_int.as_matrix(['C'])
-        self.assertEqual(values.dtype, np.uint8)
-
-    def test_constructor_with_convert(self):
-        # this is actually mostly a test of lib.maybe_convert_objects
-        # #2845
-        df = DataFrame({'A' : [2**63-1] })
-        result = df['A']
-        expected = Series(np.asarray([2**63-1], np.int64), name='A')
-        assert_series_equal(result, expected)
-
-        df = DataFrame({'A' : [2**63] })
-        result = df['A']
-        expected = Series(np.asarray([2**63], np.object_), name='A')
-        assert_series_equal(result, expected)
-
-        df = DataFrame({'A' : [datetime(2005, 1, 1), True] })
-        result = df['A']
-        expected = Series(np.asarray([datetime(2005, 1, 1), True], np.object_),
-                          name='A')
-        assert_series_equal(result, expected)
-
-        df = DataFrame({'A' : [None, 1] })
-        result = df['A']
-        expected = Series(np.asarray([np.nan, 1], np.float_), name='A')
-        assert_series_equal(result, expected)
-
-        df = DataFrame({'A' : [1.0, 2] })
-        result = df['A']
-        expected = Series(np.asarray([1.0, 2], np.float_), name='A')
-        assert_series_equal(result, expected)
-
-        df = DataFrame({'A' : [1.0+2.0j, 3] })
-        result = df['A']
-        expected = Series(np.asarray([1.0+2.0j, 3], np.complex_), name='A')
-        assert_series_equal(result, expected)
-
-        df = DataFrame({'A' : [1.0+2.0j, 3.0] })
-        result = df['A']
-        expected = Series(np.asarray([1.0+2.0j, 3.0], np.complex_), name='A')
-        assert_series_equal(result, expected)
-
-        df = DataFrame({'A' : [1.0+2.0j, True] })
-        result = df['A']
-        expected = Series(np.asarray([1.0+2.0j, True], np.object_), name='A')
-        assert_series_equal(result, expected)
-
-        df = DataFrame({'A' : [1.0, None] })
-        result = df['A']
-        expected = Series(np.asarray([1.0, np.nan], np.float_), name='A')
-        assert_series_equal(result, expected)
-
-        df = DataFrame({'A' : [1.0+2.0j, None] })
-        result = df['A']
-        expected = Series(np.asarray([1.0+2.0j, np.nan], np.complex_), name='A')
-        assert_series_equal(result, expected)
-
-        df = DataFrame({'A' : [2.0, 1, True, None] })
-        result = df['A']
-        expected = Series(np.asarray([2.0, 1, True, None], np.object_), name='A')
-        assert_series_equal(result, expected)
-
-        df = DataFrame({'A' : [2.0, 1, datetime(2006, 1, 1), None] })
-        result = df['A']
-        expected = Series(np.asarray([2.0, 1, datetime(2006, 1, 1),
-                                      None], np.object_), name='A')
-        assert_series_equal(result, expected)
-
-    def test_construction_with_mixed(self):
-        # test construction edge cases with mixed types
-
-        # f7u12, this does not work without extensive workaround
-        data = [[datetime(2001, 1, 5), nan, datetime(2001, 1, 2)],
-                [datetime(2000, 1, 2), datetime(2000, 1, 3),
-                 datetime(2000, 1, 1)]]
-        df = DataFrame(data)
-
-        # check dtypes
-        result = df.get_dtype_counts().sort_values()
-        expected = Series({ 'datetime64[ns]' : 3 })
-
-        # mixed-type frames
-        self.mixed_frame['datetime'] = datetime.now()
-        self.mixed_frame['timedelta'] = timedelta(days=1,seconds=1)
-        self.assertEqual(self.mixed_frame['datetime'].dtype, 'M8[ns]')
-        self.assertEqual(self.mixed_frame['timedelta'].dtype, 'm8[ns]')
-        result = self.mixed_frame.get_dtype_counts().sort_values()
-        expected = Series({ 'float64' : 4,
-                            'object' : 1,
-                            'datetime64[ns]' : 1,
-                            'timedelta64[ns]' : 1}).sort_values()
-        assert_series_equal(result,expected)
-
-    def test_construction_with_conversions(self):
-
-        # convert from a numpy array of non-ns timedelta64
-        arr = np.array([1,2,3],dtype='timedelta64[s]')
-        s = Series(arr)
-        expected = Series(timedelta_range('00:00:01',periods=3,freq='s'))
-        assert_series_equal(s,expected)
-
-        df = DataFrame(index=range(3))
-        df['A'] = arr
-        expected = DataFrame({'A' : timedelta_range('00:00:01',periods=3,freq='s')},
-                             index=range(3))
-        assert_frame_equal(df,expected)
-
-        # convert from a numpy array of non-ns datetime64
-        #### note that creating a numpy datetime64 is in LOCAL time!!!!
-        #### seems to work for M8[D], but not for M8[s]
-
-        s = Series(np.array(['2013-01-01','2013-01-02','2013-01-03'],dtype='datetime64[D]'))
-        assert_series_equal(s,Series(date_range('20130101',periods=3,freq='D')))
-        #s = Series(np.array(['2013-01-01 00:00:01','2013-01-01 00:00:02','2013-01-01 00:00:03'],dtype='datetime64[s]'))
-        #assert_series_equal(s,date_range('20130101 00:00:01',period=3,freq='s'))
-
-        expected = DataFrame({
-            'dt1' : Timestamp('20130101'),
-            'dt2' : date_range('20130101',periods=3),
-            #'dt3' : date_range('20130101 00:00:01',periods=3,freq='s'),
-            },index=range(3))
-
-
-        df = DataFrame(index=range(3))
-        df['dt1'] = np.datetime64('2013-01-01')
-        df['dt2'] = np.array(['2013-01-01','2013-01-02','2013-01-03'],dtype='datetime64[D]')
-        #df['dt3'] = np.array(['2013-01-01 00:00:01','2013-01-01 00:00:02','2013-01-01 00:00:03'],dtype='datetime64[s]')
-        assert_frame_equal(df, expected)
-
-    def test_constructor_frame_copy(self):
-        cop = DataFrame(self.frame, copy=True)
-        cop['A'] = 5
-        self.assertTrue((cop['A'] == 5).all())
-        self.assertFalse((self.frame['A'] == 5).all())
-
-    def test_constructor_ndarray_copy(self):
-        df = DataFrame(self.frame.values)
-
-        self.frame.values[5] = 5
-        self.assertTrue((df.values[5] == 5).all())
-
-        df = DataFrame(self.frame.values, copy=True)
-        self.frame.values[6] = 6
-        self.assertFalse((df.values[6] == 6).all())
-
-    def test_constructor_series_copy(self):
-        series = self.frame._series
-
-        df = DataFrame({'A': series['A']})
-        df['A'][:] = 5
-
-        self.assertFalse((series['A'] == 5).all())
-
-    def test_constructor_compound_dtypes(self):
-        # GH 5191
-        # compound dtypes should raise not-implementederror
-
-        def f(dtype):
-            return DataFrame(data = list(itertools.repeat((datetime(2001, 1, 1), "aa", 20), 9)),
-                             columns=["A", "B", "C"], dtype=dtype)
-
-        self.assertRaises(NotImplementedError, f, [("A","datetime64[h]"), ("B","str"), ("C","int32")])
-
-        # these work (though results may be unexpected)
-        f('int64')
-        f('float64')
-
-        # 10822
-        # invalid error message on dt inference
-        if not is_platform_windows():
-            f('M8[ns]')
-
-    def test_assign_columns(self):
-        self.frame['hi'] = 'there'
-
-        frame = self.frame.copy()
-        frame.columns = ['foo', 'bar', 'baz', 'quux', 'foo2']
-        assert_series_equal(self.frame['C'], frame['baz'], check_names=False)
-        assert_series_equal(self.frame['hi'], frame['foo2'], check_names=False)
-
-    def test_columns_with_dups(self):
-
-        # GH 3468 related
-
-        # basic
-        df = DataFrame([[1,2]], columns=['a','a'])
-        df.columns = ['a','a.1']
-        str(df)
-        expected = DataFrame([[1,2]], columns=['a','a.1'])
-        assert_frame_equal(df, expected)
-
-        df = DataFrame([[1,2,3]], columns=['b','a','a'])
-        df.columns = ['b','a','a.1']
-        str(df)
-        expected = DataFrame([[1,2,3]], columns=['b','a','a.1'])
-        assert_frame_equal(df, expected)
-
-        # with a dup index
-        df = DataFrame([[1,2]], columns=['a','a'])
-        df.columns = ['b','b']
-        str(df)
-        expected = DataFrame([[1,2]], columns=['b','b'])
-        assert_frame_equal(df, expected)
-
-        # multi-dtype
-        df = DataFrame([[1,2,1.,2.,3.,'foo','bar']], columns=['a','a','b','b','d','c','c'])
-        df.columns = list('ABCDEFG')
-        str(df)
-        expected = DataFrame([[1,2,1.,2.,3.,'foo','bar']], columns=list('ABCDEFG'))
-        assert_frame_equal(df, expected)
-
-        # this is an error because we cannot disambiguate the dup columns
-        self.assertRaises(Exception, lambda x: DataFrame([[1,2,'foo','bar']], columns=['a','a','a','a']))
-
-        # dups across blocks
-        df_float = DataFrame(np.random.randn(10, 3),dtype='float64')
-        df_int = DataFrame(np.random.randn(10, 3),dtype='int64')
-        df_bool = DataFrame(True,index=df_float.index,columns=df_float.columns)
-        df_object = DataFrame('foo',index=df_float.index,columns=df_float.columns)
-        df_dt = DataFrame(Timestamp('20010101'),index=df_float.index,columns=df_float.columns)
-        df = pd.concat([ df_float, df_int, df_bool, df_object, df_dt ], axis=1)
-
-        self.assertEqual(len(df._data._blknos), len(df.columns))
-        self.assertEqual(len(df._data._blklocs), len(df.columns))
-
-        # testing iget
-        for i in range(len(df.columns)):
-            df.iloc[:,i]
-
-        # dup columns across dtype GH 2079/2194
-        vals = [[1, -1, 2.], [2, -2, 3.]]
-        rs = DataFrame(vals, columns=['A', 'A', 'B'])
-        xp = DataFrame(vals)
-        xp.columns = ['A', 'A', 'B']
-        assert_frame_equal(rs, xp)
-
-    def test_insert_column_bug_4032(self):
-
-        # GH4032, inserting a column and renaming causing errors
-        df = DataFrame({'b': [1.1, 2.2]})
-        df = df.rename(columns={})
-        df.insert(0, 'a', [1, 2])
-
-        result = df.rename(columns={})
-        str(result)
-        expected = DataFrame([[1,1.1],[2, 2.2]],columns=['a','b'])
-        assert_frame_equal(result,expected)
-        df.insert(0, 'c', [1.3, 2.3])
-
-        result = df.rename(columns={})
-        str(result)
-
-        expected = DataFrame([[1.3,1,1.1],[2.3,2, 2.2]],columns=['c','a','b'])
-        assert_frame_equal(result,expected)
-
-    def test_cast_internals(self):
-        casted = DataFrame(self.frame._data, dtype=int)
-        expected = DataFrame(self.frame._series, dtype=int)
-        assert_frame_equal(casted, expected)
-
-        casted = DataFrame(self.frame._data, dtype=np.int32)
-        expected = DataFrame(self.frame._series, dtype=np.int32)
-        assert_frame_equal(casted, expected)
-
-    def test_consolidate(self):
-        self.frame['E'] = 7.
-        consolidated = self.frame.consolidate()
-        self.assertEqual(len(consolidated._data.blocks), 1)
-
-        # Ensure copy, do I want this?
-        recons = consolidated.consolidate()
-        self.assertIsNot(recons, consolidated)
-        assert_frame_equal(recons, consolidated)
-
-        self.frame['F'] = 8.
-        self.assertEqual(len(self.frame._data.blocks), 3)
-        self.frame.consolidate(inplace=True)
-        self.assertEqual(len(self.frame._data.blocks), 1)
-
-    def test_consolidate_inplace(self):
-        frame = self.frame.copy()
-
-        # triggers in-place consolidation
-        for letter in range(ord('A'), ord('Z')):
-            self.frame[chr(letter)] = chr(letter)
-
-    def test_as_matrix_consolidate(self):
-        self.frame['E'] = 7.
-        self.assertFalse(self.frame._data.is_consolidated())
-        _ = self.frame.as_matrix()
-        self.assertTrue(self.frame._data.is_consolidated())
-
-    def test_modify_values(self):
-        self.frame.values[5] = 5
-        self.assertTrue((self.frame.values[5] == 5).all())
-
-        # unconsolidated
-        self.frame['E'] = 7.
-        self.frame.values[6] = 6
-        self.assertTrue((self.frame.values[6] == 6).all())
-
-    def test_boolean_set_uncons(self):
-        self.frame['E'] = 7.
-
-        expected = self.frame.values.copy()
-        expected[expected > 1] = 2
-
-        self.frame[self.frame > 1] = 2
-        assert_almost_equal(expected, self.frame.values)
-
-    def test_xs_view(self):
-        """
-        in 0.14 this will return a view if possible
-        a copy otherwise, but this is numpy dependent
-        """
-
-        dm = DataFrame(np.arange(20.).reshape(4, 5),
-                       index=lrange(4), columns=lrange(5))
-
-        dm.xs(2)[:] = 10
-        self.assertTrue((dm.xs(2) == 10).all())
-
-    def test_boolean_indexing(self):
-        idx = lrange(3)
-        cols = ['A','B','C']
-        df1 = DataFrame(index=idx, columns=cols,
-                        data=np.array([[0.0, 0.5, 1.0],
-                                       [1.5, 2.0, 2.5],
-                                       [3.0, 3.5, 4.0]],
-                                      dtype=float))
-        df2 = DataFrame(index=idx, columns=cols,
-                        data=np.ones((len(idx), len(cols))))
-
-        expected = DataFrame(index=idx, columns=cols,
-                             data=np.array([[0.0, 0.5, 1.0],
-                                            [1.5, 2.0, -1],
-                                            [-1, -1, -1]], dtype=float))
-
-        df1[df1 > 2.0 * df2] = -1
-        assert_frame_equal(df1, expected)
-        with assertRaisesRegexp(ValueError, 'Item wrong length'):
-            df1[df1.index[:-1] > 2] = -1
-
-    def test_boolean_indexing_mixed(self):
-        df = DataFrame(
-            {long(0): {35: np.nan, 40: np.nan, 43: np.nan, 49: np.nan, 50: np.nan},
-             long(1): {35: np.nan,
-                       40: 0.32632316859446198,
-                       43: np.nan,
-                       49: 0.32632316859446198,
-                       50: 0.39114724480578139},
-             long(2): {35: np.nan, 40: np.nan, 43: 0.29012581014105987, 49: np.nan, 50: np.nan},
-             long(3): {35: np.nan, 40: np.nan, 43: np.nan, 49: np.nan, 50: np.nan},
-             long(4): {35: 0.34215328467153283, 40: np.nan, 43: np.nan, 49: np.nan, 50: np.nan},
-             'y': {35: 0, 40: 0, 43: 0, 49: 0, 50: 1}})
-
-        # mixed int/float ok
-        df2 = df.copy()
-        df2[df2>0.3] = 1
-        expected = df.copy()
-        expected.loc[40,1] = 1
-        expected.loc[49,1] = 1
-        expected.loc[50,1] = 1
-        expected.loc[35,4] = 1
-        assert_frame_equal(df2,expected)
-
-        df['foo'] = 'test'
-        with tm.assertRaisesRegexp(TypeError, 'boolean setting on mixed-type'):
-            df[df > 0.3] = 1
-
-    def test_sum_bools(self):
-        df = DataFrame(index=lrange(1), columns=lrange(10))
-        bools = isnull(df)
-        self.assertEqual(bools.sum(axis=1)[0], 10)
-
-    def test_fillna_col_reordering(self):
-        idx = lrange(20)
-        cols = ["COL." + str(i) for i in range(5, 0, -1)]
-        data = np.random.rand(20, 5)
-        df = DataFrame(index=lrange(20), columns=cols, data=data)
-        filled = df.fillna(method='ffill')
-        self.assertEqual(df.columns.tolist(), filled.columns.tolist())
-
-    def test_take(self):
-
-        # homogeneous
-        #----------------------------------------
-        order = [3, 1, 2, 0]
-        for df in [self.frame]:
-
-            result = df.take(order, axis=0)
-            expected = df.reindex(df.index.take(order))
-            assert_frame_equal(result, expected)
-
-            # axis = 1
-            result = df.take(order, axis=1)
-            expected = df.ix[:, ['D', 'B', 'C', 'A']]
-            assert_frame_equal(result, expected, check_names=False)
-
-        # neg indicies
-        order = [2,1,-1]
-        for df in [self.frame]:
-
-            result = df.take(order, axis=0)
-            expected = df.reindex(df.index.take(order))
-            assert_frame_equal(result, expected)
-
-            # axis = 1
-            result = df.take(order, axis=1)
-            expected = df.ix[:, ['C', 'B', 'D']]
-            assert_frame_equal(result, expected, check_names=False)
-
-        # illegal indices
-        self.assertRaises(IndexError, df.take, [3,1,2,30], axis=0)
-        self.assertRaises(IndexError, df.take, [3,1,2,-31], axis=0)
-        self.assertRaises(IndexError, df.take, [3,1,2,5], axis=1)
-        self.assertRaises(IndexError, df.take, [3,1,2,-5], axis=1)
-
-        # mixed-dtype
-        #----------------------------------------
-        order = [4, 1, 2, 0, 3]
-        for df in [self.mixed_frame]:
-
-            result = df.take(order, axis=0)
-            expected = df.reindex(df.index.take(order))
-            assert_frame_equal(result, expected)
-
-            # axis = 1
-            result = df.take(order, axis=1)
-            expected = df.ix[:, ['foo', 'B', 'C', 'A', 'D']]
-            assert_frame_equal(result, expected)
-
-        # neg indicies
-        order = [4,1,-2]
-        for df in [self.mixed_frame]:
-
-            result = df.take(order, axis=0)
-            expected = df.reindex(df.index.take(order))
-            assert_frame_equal(result, expected)
-
-            # axis = 1
-            result = df.take(order, axis=1)
-            expected = df.ix[:, ['foo', 'B', 'D']]
-            assert_frame_equal(result, expected)
-
-        # by dtype
-        order = [1, 2, 0, 3]
-        for df in [self.mixed_float,self.mixed_int]:
-
-            result = df.take(order, axis=0)
-            expected = df.reindex(df.index.take(order))
-            assert_frame_equal(result, expected)
-
-            # axis = 1
-            result = df.take(order, axis=1)
-            expected = df.ix[:, ['B', 'C', 'A', 'D']]
-            assert_frame_equal(result, expected)
-
-    def test_iterkv_deprecation(self):
-        with tm.assert_produces_warning(FutureWarning):
-            self.mixed_float.iterkv()
-
-    def test_iterkv_names(self):
-        for k, v in compat.iteritems(self.mixed_frame):
-            self.assertEqual(v.name, k)
-
-    def test_series_put_names(self):
-        series = self.mixed_frame._series
-        for k, v in compat.iteritems(series):
-            self.assertEqual(v.name, k)
-
-    def test_dot(self):
-        a = DataFrame(np.random.randn(3, 4), index=['a', 'b', 'c'],
-                      columns=['p', 'q', 'r', 's'])
-        b = DataFrame(np.random.randn(4, 2), index=['p', 'q', 'r', 's'],
-                      columns=['one', 'two'])
-
-        result = a.dot(b)
-        expected = DataFrame(np.dot(a.values, b.values),
-                             index=['a', 'b', 'c'],
-                             columns=['one', 'two'])
-        # Check alignment
-        b1 = b.reindex(index=reversed(b.index))
-        result = a.dot(b)
-        assert_frame_equal(result, expected)
-
-        # Check series argument
-        result = a.dot(b['one'])
-        assert_series_equal(result, expected['one'], check_names=False)
-        self.assertTrue(result.name is None)
-
-        result = a.dot(b1['one'])
-        assert_series_equal(result, expected['one'], check_names=False)
-        self.assertTrue(result.name is None)
-
-        # can pass correct-length arrays
-        row = a.ix[0].values
-
-        result = a.dot(row)
-        exp = a.dot(a.ix[0])
-        assert_series_equal(result, exp)
-
-        with assertRaisesRegexp(ValueError, 'Dot product shape mismatch'):
-            a.dot(row[:-1])
-
-        a = np.random.rand(1, 5)
-        b = np.random.rand(5, 1)
-        A = DataFrame(a)
-        B = DataFrame(b)
-
-        # it works
-        result = A.dot(b)
-
-        # unaligned
-        df = DataFrame(randn(3, 4), index=[1, 2, 3], columns=lrange(4))
-        df2 = DataFrame(randn(5, 3), index=lrange(5), columns=[1, 2, 3])
-
-        assertRaisesRegexp(ValueError, 'aligned', df.dot, df2)
-
-    def test_idxmin(self):
-        frame = self.frame
-        frame.ix[5:10] = np.nan
-        frame.ix[15:20, -2:] = np.nan
-        for skipna in [True, False]:
-            for axis in [0, 1]:
-                for df in [frame, self.intframe]:
-                    result = df.idxmin(axis=axis, skipna=skipna)
-                    expected = df.apply(
-                        Series.idxmin, axis=axis, skipna=skipna)
-                    assert_series_equal(result, expected)
-
-        self.assertRaises(ValueError, frame.idxmin, axis=2)
-
-    def test_idxmax(self):
-        frame = self.frame
-        frame.ix[5:10] = np.nan
-        frame.ix[15:20, -2:] = np.nan
-        for skipna in [True, False]:
-            for axis in [0, 1]:
-                for df in [frame, self.intframe]:
-                    result = df.idxmax(axis=axis, skipna=skipna)
-                    expected = df.apply(
-                        Series.idxmax, axis=axis, skipna=skipna)
-                    assert_series_equal(result, expected)
-
-        self.assertRaises(ValueError, frame.idxmax, axis=2)
-
-    def test_stale_cached_series_bug_473(self):
-
-        # this is chained, but ok
-        with option_context('chained_assignment',None):
-            Y = DataFrame(np.random.random((4, 4)), index=('a', 'b', 'c', 'd'),
-                          columns=('e', 'f', 'g', 'h'))
-            repr(Y)
-            Y['e'] = Y['e'].astype('object')
-            Y['g']['c'] = np.NaN
-            repr(Y)
-            result = Y.sum()
-            exp = Y['g'].sum()
-            self.assertTrue(isnull(Y['g']['c']))
-
-    def test_index_namedtuple(self):
-        from collections import namedtuple
-        IndexType = namedtuple("IndexType", ["a", "b"])
-        idx1 = IndexType("foo", "bar")
-        idx2 = IndexType("baz", "bof")
-        index = Index([idx1, idx2],
-                      name="composite_index", tupleize_cols=False)
-        df = DataFrame([(1, 2), (3, 4)], index=index, columns=["A", "B"])
-        result = df.ix[IndexType("foo", "bar")]["A"]
-        self.assertEqual(result, 1)
-
-    def test_empty_nonzero(self):
-        df = DataFrame([1, 2, 3])
-        self.assertFalse(df.empty)
-        df = DataFrame(index=['a', 'b'], columns=['c', 'd']).dropna()
-        self.assertTrue(df.empty)
-        self.assertTrue(df.T.empty)
-
-    def test_any_all(self):
-
-        self._check_bool_op('any', np.any, has_skipna=True, has_bool_only=True)
-        self._check_bool_op('all', np.all, has_skipna=True, has_bool_only=True)
-
-        df = DataFrame(randn(10, 4)) > 0
-        df.any(1)
-        df.all(1)
-        df.any(1, bool_only=True)
-        df.all(1, bool_only=True)
-
-        # skip pathological failure cases
-        # class CantNonzero(object):
-
-        #     def __nonzero__(self):
-        #         raise ValueError
-
-        # df[4] = CantNonzero()
-
-        # it works!
-        # df.any(1)
-        # df.all(1)
-        # df.any(1, bool_only=True)
-        # df.all(1, bool_only=True)
-
-        # df[4][4] = np.nan
-        # df.any(1)
-        # df.all(1)
-        # df.any(1, bool_only=True)
-        # df.all(1, bool_only=True)
-
-    def test_consolidate_datetime64(self):
-        # numpy vstack bug
-
-        data = """\
-starting,ending,measure
-2012-06-21 00:00,2012-06-23 07:00,77
-2012-06-23 07:00,2012-06-23 16:30,65
-2012-06-23 16:30,2012-06-25 08:00,77
-2012-06-25 08:00,2012-06-26 12:00,0
-2012-06-26 12:00,2012-06-27 08:00,77
-"""
-        df = read_csv(StringIO(data), parse_dates=[0, 1])
-
-        ser_starting = df.starting
-        ser_starting.index = ser_starting.values
-        ser_starting = ser_starting.tz_localize('US/Eastern')
-        ser_starting = ser_starting.tz_convert('UTC')
-
-        ser_ending = df.ending
-        ser_ending.index = ser_ending.values
-        ser_ending = ser_ending.tz_localize('US/Eastern')
-        ser_ending = ser_ending.tz_convert('UTC')
-
-        df.starting = ser_starting.index
-        df.ending = ser_ending.index
-
-        tm.assert_index_equal(pd.DatetimeIndex(df.starting), ser_starting.index)
-        tm.assert_index_equal(pd.DatetimeIndex(df.ending), ser_ending.index)
-
-    def _check_bool_op(self, name, alternative, frame=None, has_skipna=True,
-                       has_bool_only=False):
-        if frame is None:
-            frame = self.frame > 0
-            # set some NAs
-            frame = DataFrame(frame.values.astype(object), frame.index,
-                              frame.columns)
-            frame.ix[5:10] = np.nan
-            frame.ix[15:20, -2:] = np.nan
-
-        f = getattr(frame, name)
-
-        if has_skipna:
-            def skipna_wrapper(x):
-                nona = x.dropna().values
-                return alternative(nona)
-
-            def wrapper(x):
-                return alternative(x.values)
-
-            result0 = f(axis=0, skipna=False)
-            result1 = f(axis=1, skipna=False)
-            assert_series_equal(result0, frame.apply(wrapper))
-            assert_series_equal(result1, frame.apply(wrapper, axis=1),
-                                check_dtype=False)  # HACK: win32
-        else:
-            skipna_wrapper = alternative
-            wrapper = alternative
-
-        result0 = f(axis=0)
-        result1 = f(axis=1)
-        assert_series_equal(result0, frame.apply(skipna_wrapper))
-        assert_series_equal(result1, frame.apply(skipna_wrapper, axis=1),
-                            check_dtype=False)
-
-        # result = f(axis=1)
-        # comp = frame.apply(alternative, axis=1).reindex(result.index)
-        # assert_series_equal(result, comp)
-
-        # bad axis
-        self.assertRaises(ValueError, f, axis=2)
-
-        # make sure works on mixed-type frame
-        mixed = self.mixed_frame
-        mixed['_bool_'] = np.random.randn(len(mixed)) > 0
-        getattr(mixed, name)(axis=0)
-        getattr(mixed, name)(axis=1)
-
-        class NonzeroFail:
-
-            def __nonzero__(self):
-                raise ValueError
-
-        mixed['_nonzero_fail_'] = NonzeroFail()
-
-        if has_bool_only:
-            getattr(mixed, name)(axis=0, bool_only=True)
-            getattr(mixed, name)(axis=1, bool_only=True)
-            getattr(frame, name)(axis=0, bool_only=False)
-            getattr(frame, name)(axis=1, bool_only=False)
-
-        # all NA case
-        if has_skipna:
-            all_na = frame * np.NaN
-            r0 = getattr(all_na, name)(axis=0)
-            r1 = getattr(all_na, name)(axis=1)
-            if name == 'any':
-                self.assertFalse(r0.any())
-                self.assertFalse(r1.any())
-            else:
-                self.assertTrue(r0.all())
-                self.assertTrue(r1.all())
-
-    def test_strange_column_corruption_issue(self):
-
-        df = DataFrame(index=[0, 1])
-        df[0] = nan
-        wasCol = {}
-        # uncommenting these makes the results match
-        # for col in xrange(100, 200):
-        #     wasCol[col] = 1
-        #     df[col] = nan
-
-        for i, dt in enumerate(df.index):
-            for col in range(100, 200):
-                if not col in wasCol:
-                    wasCol[col] = 1
-                    df[col] = nan
-                df[col][dt] = i
-
-        myid = 100
-
-        first = len(df.ix[isnull(df[myid]), [myid]])
-        second = len(df.ix[isnull(df[myid]), [myid]])
-        self.assertTrue(first == second == 0)
-
-    def test_inplace_return_self(self):
-        # re #1893
-
-        data = DataFrame({'a': ['foo', 'bar', 'baz', 'qux'],
-                          'b': [0, 0, 1, 1],
-                          'c': [1, 2, 3, 4]})
-
-        def _check_f(base, f):
-            result = f(base)
-            self.assertTrue(result is None)
-
-        # -----DataFrame-----
-
-        # set_index
-        f = lambda x: x.set_index('a', inplace=True)
-        _check_f(data.copy(), f)
-
-        # reset_index
-        f = lambda x: x.reset_index(inplace=True)
-
_check_f(data.set_index('a'), f) - - # drop_duplicates - f = lambda x: x.drop_duplicates(inplace=True) - _check_f(data.copy(), f) - - # sort - f = lambda x: x.sort_values('b', inplace=True) - _check_f(data.copy(), f) - - # sort_index - f = lambda x: x.sort_index(inplace=True) - _check_f(data.copy(), f) - - # sortlevel - f = lambda x: x.sortlevel(0, inplace=True) - _check_f(data.set_index(['a', 'b']), f) - - # fillna - f = lambda x: x.fillna(0, inplace=True) - _check_f(data.copy(), f) - - # replace - f = lambda x: x.replace(1, 0, inplace=True) - _check_f(data.copy(), f) - - # rename - f = lambda x: x.rename({1: 'foo'}, inplace=True) - _check_f(data.copy(), f) - - # -----Series----- - d = data.copy()['c'] - - # reset_index - f = lambda x: x.reset_index(inplace=True, drop=True) - _check_f(data.set_index('a')['c'], f) - - # fillna - f = lambda x: x.fillna(0, inplace=True) - _check_f(d.copy(), f) - - # replace - f = lambda x: x.replace(1, 0, inplace=True) - _check_f(d.copy(), f) - - # rename - f = lambda x: x.rename({1: 'foo'}, inplace=True) - _check_f(d.copy(), f) - - def test_isin(self): - # GH #4211 - df = DataFrame({'vals': [1, 2, 3, 4], 'ids': ['a', 'b', 'f', 'n'], - 'ids2': ['a', 'n', 'c', 'n']}, - index=['foo', 'bar', 'baz', 'qux']) - other = ['a', 'b', 'c'] - - result = df.isin(other) - expected = DataFrame([df.loc[s].isin(other) for s in df.index]) - assert_frame_equal(result, expected) - - def test_isin_empty(self): - df = DataFrame({'A': ['a', 'b', 'c'], 'B': ['a', 'e', 'f']}) - result = df.isin([]) - expected = pd.DataFrame(False, df.index, df.columns) - assert_frame_equal(result, expected) - - def test_isin_dict(self): - df = DataFrame({'A': ['a', 'b', 'c'], 'B': ['a', 'e', 'f']}) - d = {'A': ['a']} - - expected = DataFrame(False, df.index, df.columns) - expected.loc[0, 'A'] = True - - result = df.isin(d) - assert_frame_equal(result, expected) - - # non unique columns - df = DataFrame({'A': ['a', 'b', 'c'], 'B': ['a', 'e', 'f']}) - df.columns = ['A', 'A'] - 
expected = DataFrame(False, df.index, df.columns) - expected.loc[0, 'A'] = True - result = df.isin(d) - assert_frame_equal(result, expected) - - def test_isin_with_string_scalar(self): - #GH4763 - df = DataFrame({'vals': [1, 2, 3, 4], 'ids': ['a', 'b', 'f', 'n'], - 'ids2': ['a', 'n', 'c', 'n']}, - index=['foo', 'bar', 'baz', 'qux']) - with tm.assertRaises(TypeError): - df.isin('a') - - with tm.assertRaises(TypeError): - df.isin('aaa') - - def test_isin_df(self): - df1 = DataFrame({'A': [1, 2, 3, 4], 'B': [2, np.nan, 4, 4]}) - df2 = DataFrame({'A': [0, 2, 12, 4], 'B': [2, np.nan, 4, 5]}) - expected = DataFrame(False, df1.index, df1.columns) - result = df1.isin(df2) - expected['A'].loc[[1, 3]] = True - expected['B'].loc[[0, 2]] = True - assert_frame_equal(result, expected) - - # partial overlapping columns - df2.columns = ['A', 'C'] - result = df1.isin(df2) - expected['B'] = False - assert_frame_equal(result, expected) - - def test_isin_df_dupe_values(self): - df1 = DataFrame({'A': [1, 2, 3, 4], 'B': [2, np.nan, 4, 4]}) - # just cols duped - df2 = DataFrame([[0, 2], [12, 4], [2, np.nan], [4, 5]], - columns=['B', 'B']) - with tm.assertRaises(ValueError): - df1.isin(df2) - - # just index duped - df2 = DataFrame([[0, 2], [12, 4], [2, np.nan], [4, 5]], - columns=['A', 'B'], index=[0, 0, 1, 1]) - with tm.assertRaises(ValueError): - df1.isin(df2) - - # cols and index: - df2.columns = ['B', 'B'] - with tm.assertRaises(ValueError): - df1.isin(df2) - - def test_isin_dupe_self(self): - other = DataFrame({'A': [1, 0, 1, 0], 'B': [1, 1, 0, 0]}) - df = DataFrame([[1, 1], [1, 0], [0, 0]], columns=['A','A']) - result = df.isin(other) - expected = DataFrame(False, index=df.index, columns=df.columns) - expected.loc[0] = True - expected.iloc[1, 1] = True - assert_frame_equal(result, expected) - - def test_isin_against_series(self): - df = pd.DataFrame({'A': [1, 2, 3, 4], 'B': [2, np.nan, 4, 4]}, - index=['a', 'b', 'c', 'd']) - s = pd.Series([1, 3, 11, 4], index=['a', 'b', 'c', 'd']) - 
expected = DataFrame(False, index=df.index, columns=df.columns) - expected['A'].loc['a'] = True - expected.loc['d'] = True - result = df.isin(s) - assert_frame_equal(result, expected) - - def test_isin_multiIndex(self): - idx = MultiIndex.from_tuples([(0, 'a', 'foo'), (0, 'a', 'bar'), - (0, 'b', 'bar'), (0, 'b', 'baz'), - (2, 'a', 'foo'), (2, 'a', 'bar'), - (2, 'c', 'bar'), (2, 'c', 'baz'), - (1, 'b', 'foo'), (1, 'b', 'bar'), - (1, 'c', 'bar'), (1, 'c', 'baz')]) - df1 = DataFrame({'A': np.ones(12), - 'B': np.zeros(12)}, index=idx) - df2 = DataFrame({'A': [1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1], - 'B': [1, 1, 0, 1, 1, 0, 0, 1, 1, 1, 1, 1]}) - # against regular index - expected = DataFrame(False, index=df1.index, columns=df1.columns) - result = df1.isin(df2) - assert_frame_equal(result, expected) - - df2.index = idx - expected = df2.values.astype(np.bool) - expected[:, 1] = ~expected[:, 1] - expected = DataFrame(expected, columns=['A', 'B'], index=idx) - - result = df1.isin(df2) - assert_frame_equal(result, expected) - - def test_to_csv_date_format(self): - from pandas import to_datetime - pname = '__tmp_to_csv_date_format__' - with ensure_clean(pname) as path: - for engine in [None, 'python']: - w = FutureWarning if engine == 'python' else None - - dt_index = self.tsframe.index - datetime_frame = DataFrame({'A': dt_index, 'B': dt_index.shift(1)}, index=dt_index) - - with tm.assert_produces_warning(w, check_stacklevel=False): - datetime_frame.to_csv(path, date_format='%Y%m%d', engine=engine) - - # Check that the data was put in the specified format - test = read_csv(path, index_col=0) - - datetime_frame_int = datetime_frame.applymap(lambda x: int(x.strftime('%Y%m%d'))) - datetime_frame_int.index = datetime_frame_int.index.map(lambda x: int(x.strftime('%Y%m%d'))) - - assert_frame_equal(test, datetime_frame_int) - - with tm.assert_produces_warning(w, check_stacklevel=False): - datetime_frame.to_csv(path, date_format='%Y-%m-%d', engine=engine) - - # Check that the data was 
put in the specified format - test = read_csv(path, index_col=0) - datetime_frame_str = datetime_frame.applymap(lambda x: x.strftime('%Y-%m-%d')) - datetime_frame_str.index = datetime_frame_str.index.map(lambda x: x.strftime('%Y-%m-%d')) - - assert_frame_equal(test, datetime_frame_str) - - # Check that columns get converted - datetime_frame_columns = datetime_frame.T - - with tm.assert_produces_warning(w, check_stacklevel=False): - datetime_frame_columns.to_csv(path, date_format='%Y%m%d', engine=engine) - - test = read_csv(path, index_col=0) - - datetime_frame_columns = datetime_frame_columns.applymap(lambda x: int(x.strftime('%Y%m%d'))) - # Columns don't get converted to ints by read_csv - datetime_frame_columns.columns = datetime_frame_columns.columns.map(lambda x: x.strftime('%Y%m%d')) - - assert_frame_equal(test, datetime_frame_columns) - - # test NaTs - nat_index = to_datetime(['NaT'] * 10 + ['2000-01-01', '1/1/2000', '1-1-2000']) - nat_frame = DataFrame({'A': nat_index}, index=nat_index) - - with tm.assert_produces_warning(w, check_stacklevel=False): - nat_frame.to_csv(path, date_format='%Y-%m-%d', engine=engine) - - test = read_csv(path, parse_dates=[0, 1], index_col=0) - - assert_frame_equal(test, nat_frame) - - def test_to_csv_with_dst_transitions(self): - - with ensure_clean('csv_date_format_with_dst') as path: - # make sure we are not failing on transitions - times = pd.date_range("2013-10-26 23:00", "2013-10-27 01:00", - tz="Europe/London", - freq="H", - ambiguous='infer') - - for i in [times, times+pd.Timedelta('10s')]: - time_range = np.array(range(len(i)), dtype='int64') - df = DataFrame({'A' : time_range}, index=i) - df.to_csv(path,index=True) - - # we have to reconvert the index as we - # don't parse the tz's - result = read_csv(path,index_col=0) - result.index = pd.to_datetime(result.index).tz_localize('UTC').tz_convert('Europe/London') - assert_frame_equal(result,df) - - # GH11619 - idx = pd.date_range('2015-01-01', '2015-12-31', freq = 'H', 
tz='Europe/Paris') - df = DataFrame({'values' : 1, 'idx' : idx}, - index=idx) - with ensure_clean('csv_date_format_with_dst') as path: - df.to_csv(path,index=True) - result = read_csv(path,index_col=0) - result.index = pd.to_datetime(result.index).tz_localize('UTC').tz_convert('Europe/Paris') - result['idx'] = pd.to_datetime(result['idx']).astype('datetime64[ns, Europe/Paris]') - assert_frame_equal(result,df) - - # assert working - df.astype(str) - - with ensure_clean('csv_date_format_with_dst') as path: - df.to_pickle(path) - result = pd.read_pickle(path) - assert_frame_equal(result,df) - - - def test_concat_empty_dataframe_dtypes(self): - df = DataFrame(columns=list("abc")) - df['a'] = df['a'].astype(np.bool_) - df['b'] = df['b'].astype(np.int32) - df['c'] = df['c'].astype(np.float64) - - result = pd.concat([df, df]) - self.assertEqual(result['a'].dtype, np.bool_) - self.assertEqual(result['b'].dtype, np.int32) - self.assertEqual(result['c'].dtype, np.float64) - - result = pd.concat([df, df.astype(np.float64)]) - self.assertEqual(result['a'].dtype, np.object_) - self.assertEqual(result['b'].dtype, np.float64) - self.assertEqual(result['c'].dtype, np.float64) - - def test_empty_frame_dtypes_ftypes(self): - empty_df = pd.DataFrame() - assert_series_equal(empty_df.dtypes, pd.Series(dtype=np.object)) - assert_series_equal(empty_df.ftypes, pd.Series(dtype=np.object)) - - nocols_df = pd.DataFrame(index=[1,2,3]) - assert_series_equal(nocols_df.dtypes, pd.Series(dtype=np.object)) - assert_series_equal(nocols_df.ftypes, pd.Series(dtype=np.object)) - - norows_df = pd.DataFrame(columns=list("abc")) - assert_series_equal(norows_df.dtypes, pd.Series(np.object, index=list("abc"))) - assert_series_equal(norows_df.ftypes, pd.Series('object:dense', index=list("abc"))) - - norows_int_df = pd.DataFrame(columns=list("abc")).astype(np.int32) - assert_series_equal(norows_int_df.dtypes, pd.Series(np.dtype('int32'), index=list("abc"))) - assert_series_equal(norows_int_df.ftypes, 
pd.Series('int32:dense', index=list("abc"))) - - odict = OrderedDict - df = pd.DataFrame(odict([('a', 1), ('b', True), ('c', 1.0)]), index=[1, 2, 3]) - assert_series_equal(df.dtypes, pd.Series(odict([('a', np.int64), - ('b', np.bool), - ('c', np.float64)]))) - assert_series_equal(df.ftypes, pd.Series(odict([('a', 'int64:dense'), - ('b', 'bool:dense'), - ('c', 'float64:dense')]))) - - # same but for empty slice of df - assert_series_equal(df[:0].dtypes, pd.Series(odict([('a', np.int64), - ('b', np.bool), - ('c', np.float64)]))) - assert_series_equal(df[:0].ftypes, pd.Series(odict([('a', 'int64:dense'), - ('b', 'bool:dense'), - ('c', 'float64:dense')]))) - - def test_dtypes_are_correct_after_column_slice(self): - # GH6525 - df = pd.DataFrame(index=range(5), columns=list("abc"), dtype=np.float_) - odict = OrderedDict - assert_series_equal(df.dtypes, - pd.Series(odict([('a', np.float_), ('b', np.float_), - ('c', np.float_),]))) - assert_series_equal(df.iloc[:,2:].dtypes, - pd.Series(odict([('c', np.float_)]))) - assert_series_equal(df.dtypes, - pd.Series(odict([('a', np.float_), ('b', np.float_), - ('c', np.float_),]))) - - def test_set_index_names(self): - df = pd.util.testing.makeDataFrame() - df.index.name = 'name' - - self.assertEqual(df.set_index(df.index).index.names, ['name']) - - mi = MultiIndex.from_arrays(df[['A', 'B']].T.values, names=['A', 'B']) - mi2 = MultiIndex.from_arrays(df[['A', 'B', 'A', 'B']].T.values, - names=['A', 'B', 'A', 'B']) - - df = df.set_index(['A', 'B']) - - self.assertEqual(df.set_index(df.index).index.names, ['A', 'B']) - - # Check that set_index isn't converting a MultiIndex into an Index - self.assertTrue(isinstance(df.set_index(df.index).index, MultiIndex)) - - # Check actual equality - tm.assert_index_equal(df.set_index(df.index).index, mi) - - # Check that [MultiIndex, MultiIndex] yields a MultiIndex rather - # than a pair of tuples - self.assertTrue(isinstance(df.set_index([df.index, df.index]).index, MultiIndex)) - - # Check 
equality - tm.assert_index_equal(df.set_index([df.index, df.index]).index, mi2) - - def test_select_dtypes_include(self): - df = DataFrame({'a': list('abc'), - 'b': list(range(1, 4)), - 'c': np.arange(3, 6).astype('u1'), - 'd': np.arange(4.0, 7.0, dtype='float64'), - 'e': [True, False, True], - 'f': pd.Categorical(list('abc'))}) - ri = df.select_dtypes(include=[np.number]) - ei = df[['b', 'c', 'd']] - assert_frame_equal(ri, ei) - - ri = df.select_dtypes(include=[np.number, 'category']) - ei = df[['b', 'c', 'd', 'f']] - assert_frame_equal(ri, ei) - - def test_select_dtypes_exclude(self): - df = DataFrame({'a': list('abc'), - 'b': list(range(1, 4)), - 'c': np.arange(3, 6).astype('u1'), - 'd': np.arange(4.0, 7.0, dtype='float64'), - 'e': [True, False, True]}) - re = df.select_dtypes(exclude=[np.number]) - ee = df[['a', 'e']] - assert_frame_equal(re, ee) - - def test_select_dtypes_exclude_include(self): - df = DataFrame({'a': list('abc'), - 'b': list(range(1, 4)), - 'c': np.arange(3, 6).astype('u1'), - 'd': np.arange(4.0, 7.0, dtype='float64'), - 'e': [True, False, True], - 'f': pd.date_range('now', periods=3).values}) - exclude = np.datetime64, - include = np.bool_, 'integer' - r = df.select_dtypes(include=include, exclude=exclude) - e = df[['b', 'c', 'e']] - assert_frame_equal(r, e) - - exclude = 'datetime', - include = 'bool', 'int64', 'int32' - r = df.select_dtypes(include=include, exclude=exclude) - e = df[['b', 'e']] - assert_frame_equal(r, e) - - def test_select_dtypes_not_an_attr_but_still_valid_dtype(self): - df = DataFrame({'a': list('abc'), - 'b': list(range(1, 4)), - 'c': np.arange(3, 6).astype('u1'), - 'd': np.arange(4.0, 7.0, dtype='float64'), - 'e': [True, False, True], - 'f': pd.date_range('now', periods=3).values}) - df['g'] = df.f.diff() - assert not hasattr(np, 'u8') - r = df.select_dtypes(include=['i8', 'O'], exclude=['timedelta']) - e = df[['a', 'b']] - assert_frame_equal(r, e) - - r = df.select_dtypes(include=['i8', 'O', 'timedelta64[ns]']) - e = 
df[['a', 'b', 'g']] - assert_frame_equal(r, e) - - def test_select_dtypes_empty(self): - df = DataFrame({'a': list('abc'), 'b': list(range(1, 4))}) - with tm.assertRaisesRegexp(ValueError, 'at least one of include or ' - 'exclude must be nonempty'): - df.select_dtypes() - - def test_select_dtypes_raises_on_string(self): - df = DataFrame({'a': list('abc'), 'b': list(range(1, 4))}) - with tm.assertRaisesRegexp(TypeError, 'include and exclude .+ non-'): - df.select_dtypes(include='object') - with tm.assertRaisesRegexp(TypeError, 'include and exclude .+ non-'): - df.select_dtypes(exclude='object') - with tm.assertRaisesRegexp(TypeError, 'include and exclude .+ non-'): - df.select_dtypes(include=int, exclude='object') - - def test_select_dtypes_bad_datetime64(self): - df = DataFrame({'a': list('abc'), - 'b': list(range(1, 4)), - 'c': np.arange(3, 6).astype('u1'), - 'd': np.arange(4.0, 7.0, dtype='float64'), - 'e': [True, False, True], - 'f': pd.date_range('now', periods=3).values}) - with tm.assertRaisesRegexp(ValueError, '.+ is too specific'): - df.select_dtypes(include=['datetime64[D]']) - - with tm.assertRaisesRegexp(ValueError, '.+ is too specific'): - df.select_dtypes(exclude=['datetime64[as]']) - - def test_select_dtypes_str_raises(self): - df = DataFrame({'a': list('abc'), - 'g': list(u('abc')), - 'b': list(range(1, 4)), - 'c': np.arange(3, 6).astype('u1'), - 'd': np.arange(4.0, 7.0, dtype='float64'), - 'e': [True, False, True], - 'f': pd.date_range('now', periods=3).values}) - string_dtypes = set((str, 'str', np.string_, 'S1', - 'unicode', np.unicode_, 'U1')) - try: - string_dtypes.add(unicode) - except NameError: - pass - for dt in string_dtypes: - with tm.assertRaisesRegexp(TypeError, - 'string dtypes are not allowed'): - df.select_dtypes(include=[dt]) - with tm.assertRaisesRegexp(TypeError, - 'string dtypes are not allowed'): - df.select_dtypes(exclude=[dt]) - - def test_select_dtypes_bad_arg_raises(self): - df = DataFrame({'a': list('abc'), - 'g': 
list(u('abc')), - 'b': list(range(1, 4)), - 'c': np.arange(3, 6).astype('u1'), - 'd': np.arange(4.0, 7.0, dtype='float64'), - 'e': [True, False, True], - 'f': pd.date_range('now', periods=3).values}) - with tm.assertRaisesRegexp(TypeError, 'data type.*not understood'): - df.select_dtypes(['blargy, blarg, blarg']) - - def test_select_dtypes_typecodes(self): - # GH 11990 - df = mkdf(30, 3, data_gen_f=lambda x, y: np.random.random()) - expected = df - FLOAT_TYPES = list(np.typecodes['AllFloat']) - assert_frame_equal(df.select_dtypes(FLOAT_TYPES), expected) - - def test_assign(self): - df = DataFrame({'A': [1, 2, 3], 'B': [4, 5, 6]}) - original = df.copy() - result = df.assign(C=df.B / df.A) - expected = df.copy() - expected['C'] = [4, 2.5, 2] - assert_frame_equal(result, expected) - - # lambda syntax - result = df.assign(C=lambda x: x.B / x.A) - assert_frame_equal(result, expected) - - # original is unmodified - assert_frame_equal(df, original) - - # Non-Series array-like - result = df.assign(C=[4, 2.5, 2]) - assert_frame_equal(result, expected) - # original is unmodified - assert_frame_equal(df, original) - - result = df.assign(B=df.B / df.A) - expected = expected.drop('B', axis=1).rename(columns={'C': 'B'}) - assert_frame_equal(result, expected) - - # overwrite - result = df.assign(A=df.A + df.B) - expected = df.copy() - expected['A'] = [5, 7, 9] - assert_frame_equal(result, expected) - - # lambda - result = df.assign(A=lambda x: x.A + x.B) - assert_frame_equal(result, expected) - - def test_assign_multiple(self): - df = DataFrame([[1, 4], [2, 5], [3, 6]], columns=['A', 'B']) - result = df.assign(C=[7, 8, 9], D=df.A, E=lambda x: x.B) - expected = DataFrame([[1, 4, 7, 1, 4], [2, 5, 8, 2, 5], - [3, 6, 9, 3, 6]], columns=list('ABCDE')) - assert_frame_equal(result, expected) - - def test_assign_alphabetical(self): - # GH 9818 - df = DataFrame([[1, 2], [3, 4]], columns=['A', 'B']) - result = df.assign(D=df.A + df.B, C=df.A - df.B) - expected = DataFrame([[1, 2, -1, 3], 
[3, 4, -1, 7]], - columns=list('ABCD')) - assert_frame_equal(result, expected) - result = df.assign(C=df.A - df.B, D=df.A + df.B) - assert_frame_equal(result, expected) - - def test_assign_bad(self): - df = DataFrame({'A': [1, 2, 3], 'B': [4, 5, 6]}) - # non-keyword argument - with tm.assertRaises(TypeError): - df.assign(lambda x: x.A) - with tm.assertRaises(AttributeError): - df.assign(C=df.A, D=df.A + df.C) - with tm.assertRaises(KeyError): - df.assign(C=lambda df: df.A, D=lambda df: df['A'] + df['C']) - with tm.assertRaises(KeyError): - df.assign(C=df.A, D=lambda x: x['A'] + x['C']) - - def test_dataframe_metadata(self): - - df = SubclassedDataFrame({'X': [1, 2, 3], 'Y': [1, 2, 3]}, - index=['a', 'b', 'c']) - df.testattr = 'XXX' - - self.assertEqual(df.testattr, 'XXX') - self.assertEqual(df[['X']].testattr, 'XXX') - self.assertEqual(df.loc[['a', 'b'], :].testattr, 'XXX') - self.assertEqual(df.iloc[[0, 1], :].testattr, 'XXX') - - # GH9776 - self.assertEqual(df.iloc[0:1, :].testattr, 'XXX') - - # GH10553 - unpickled = self.round_trip_pickle(df) - assert_frame_equal(df, unpickled) - self.assertEqual(df._metadata, unpickled._metadata) - self.assertEqual(df.testattr, unpickled.testattr) - - def test_nlargest(self): - # GH10393 - from string import ascii_lowercase - df = pd.DataFrame({'a': np.random.permutation(10), - 'b': list(ascii_lowercase[:10])}) - result = df.nlargest(5, 'a') - expected = df.sort_values('a', ascending=False).head(5) - assert_frame_equal(result, expected) - - def test_nlargest_multiple_columns(self): - from string import ascii_lowercase - df = pd.DataFrame({'a': np.random.permutation(10), - 'b': list(ascii_lowercase[:10]), - 'c': np.random.permutation(10).astype('float64')}) - result = df.nlargest(5, ['a', 'b']) - expected = df.sort_values(['a', 'b'], ascending=False).head(5) - assert_frame_equal(result, expected) - - def test_nsmallest(self): - from string import ascii_lowercase - df = pd.DataFrame({'a': np.random.permutation(10), - 'b': 
list(ascii_lowercase[:10])}) - result = df.nsmallest(5, 'a') - expected = df.sort_values('a').head(5) - assert_frame_equal(result, expected) - - def test_nsmallest_multiple_columns(self): - from string import ascii_lowercase - df = pd.DataFrame({'a': np.random.permutation(10), - 'b': list(ascii_lowercase[:10]), - 'c': np.random.permutation(10).astype('float64')}) - result = df.nsmallest(5, ['a', 'c']) - expected = df.sort_values(['a', 'c']).head(5) - assert_frame_equal(result, expected) - - def test_to_panel_expanddim(self): - # GH 9762 - - class SubclassedFrame(DataFrame): - @property - def _constructor_expanddim(self): - return SubclassedPanel - - class SubclassedPanel(Panel): - pass - - index = MultiIndex.from_tuples([(0, 0), (0, 1), (0, 2)]) - df = SubclassedFrame({'X':[1, 2, 3], 'Y': [4, 5, 6]}, index=index) - result = df.to_panel() - self.assertTrue(isinstance(result, SubclassedPanel)) - expected = SubclassedPanel([[[1, 2, 3]], [[4, 5, 6]]], - items=['X', 'Y'], major_axis=[0], - minor_axis=[0, 1, 2], - dtype='int64') - tm.assert_panel_equal(result, expected) - - def test_subclass_attr_err_propagation(self): - # GH 11808 - class A(DataFrame): - - @property - def bar(self): - return self.i_dont_exist - with tm.assertRaisesRegexp(AttributeError, '.*i_dont_exist.*'): - A().bar - - -def skip_if_no_ne(engine='numexpr'): - if engine == 'numexpr': - try: - import numexpr as ne - except ImportError: - raise nose.SkipTest("cannot query engine numexpr when numexpr not " - "installed") - - -def skip_if_no_pandas_parser(parser): - if parser != 'pandas': - raise nose.SkipTest("cannot evaluate with parser {0!r}".format(parser)) - - -class TestDataFrameQueryWithMultiIndex(object): - def check_query_with_named_multiindex(self, parser, engine): - tm.skip_if_no_ne(engine) - a = tm.choice(['red', 'green'], size=10) - b = tm.choice(['eggs', 'ham'], size=10) - index = MultiIndex.from_arrays([a, b], names=['color', 'food']) - df = DataFrame(randn(10, 2), index=index) - ind = 
Series(df.index.get_level_values('color').values, index=index, - name='color') - - # equality - res1 = df.query('color == "red"', parser=parser, engine=engine) - res2 = df.query('"red" == color', parser=parser, engine=engine) - exp = df[ind == 'red'] - assert_frame_equal(res1, exp) - assert_frame_equal(res2, exp) - - # inequality - res1 = df.query('color != "red"', parser=parser, engine=engine) - res2 = df.query('"red" != color', parser=parser, engine=engine) - exp = df[ind != 'red'] - assert_frame_equal(res1, exp) - assert_frame_equal(res2, exp) - - # list equality (really just set membership) - res1 = df.query('color == ["red"]', parser=parser, engine=engine) - res2 = df.query('["red"] == color', parser=parser, engine=engine) - exp = df[ind.isin(['red'])] - assert_frame_equal(res1, exp) - assert_frame_equal(res2, exp) - - res1 = df.query('color != ["red"]', parser=parser, engine=engine) - res2 = df.query('["red"] != color', parser=parser, engine=engine) - exp = df[~ind.isin(['red'])] - assert_frame_equal(res1, exp) - assert_frame_equal(res2, exp) - - # in/not in ops - res1 = df.query('["red"] in color', parser=parser, engine=engine) - res2 = df.query('"red" in color', parser=parser, engine=engine) - exp = df[ind.isin(['red'])] - assert_frame_equal(res1, exp) - assert_frame_equal(res2, exp) - - res1 = df.query('["red"] not in color', parser=parser, engine=engine) - res2 = df.query('"red" not in color', parser=parser, engine=engine) - exp = df[~ind.isin(['red'])] - assert_frame_equal(res1, exp) - assert_frame_equal(res2, exp) - - def test_query_with_named_multiindex(self): - for parser, engine in product(['pandas'], ENGINES): - yield self.check_query_with_named_multiindex, parser, engine - - def check_query_with_unnamed_multiindex(self, parser, engine): - tm.skip_if_no_ne(engine) - a = tm.choice(['red', 'green'], size=10) - b = tm.choice(['eggs', 'ham'], size=10) - index = MultiIndex.from_arrays([a, b]) - df = DataFrame(randn(10, 2), index=index) - ind = 
Series(df.index.get_level_values(0).values, index=index) - - res1 = df.query('ilevel_0 == "red"', parser=parser, engine=engine) - res2 = df.query('"red" == ilevel_0', parser=parser, engine=engine) - exp = df[ind == 'red'] - assert_frame_equal(res1, exp) - assert_frame_equal(res2, exp) - - # inequality - res1 = df.query('ilevel_0 != "red"', parser=parser, engine=engine) - res2 = df.query('"red" != ilevel_0', parser=parser, engine=engine) - exp = df[ind != 'red'] - assert_frame_equal(res1, exp) - assert_frame_equal(res2, exp) - - # list equality (really just set membership) - res1 = df.query('ilevel_0 == ["red"]', parser=parser, engine=engine) - res2 = df.query('["red"] == ilevel_0', parser=parser, engine=engine) - exp = df[ind.isin(['red'])] - assert_frame_equal(res1, exp) - assert_frame_equal(res2, exp) - - res1 = df.query('ilevel_0 != ["red"]', parser=parser, engine=engine) - res2 = df.query('["red"] != ilevel_0', parser=parser, engine=engine) - exp = df[~ind.isin(['red'])] - assert_frame_equal(res1, exp) - assert_frame_equal(res2, exp) - - # in/not in ops - res1 = df.query('["red"] in ilevel_0', parser=parser, engine=engine) - res2 = df.query('"red" in ilevel_0', parser=parser, engine=engine) - exp = df[ind.isin(['red'])] - assert_frame_equal(res1, exp) - assert_frame_equal(res2, exp) - - res1 = df.query('["red"] not in ilevel_0', parser=parser, engine=engine) - res2 = df.query('"red" not in ilevel_0', parser=parser, engine=engine) - exp = df[~ind.isin(['red'])] - assert_frame_equal(res1, exp) - assert_frame_equal(res2, exp) - - #### LEVEL 1 #### - ind = Series(df.index.get_level_values(1).values, index=index) - res1 = df.query('ilevel_1 == "eggs"', parser=parser, engine=engine) - res2 = df.query('"eggs" == ilevel_1', parser=parser, engine=engine) - exp = df[ind == 'eggs'] - assert_frame_equal(res1, exp) - assert_frame_equal(res2, exp) - - # inequality - res1 = df.query('ilevel_1 != "eggs"', parser=parser, engine=engine) - res2 = df.query('"eggs" != ilevel_1', 
parser=parser, engine=engine) - exp = df[ind != 'eggs'] - assert_frame_equal(res1, exp) - assert_frame_equal(res2, exp) - - # list equality (really just set membership) - res1 = df.query('ilevel_1 == ["eggs"]', parser=parser, engine=engine) - res2 = df.query('["eggs"] == ilevel_1', parser=parser, engine=engine) - exp = df[ind.isin(['eggs'])] - assert_frame_equal(res1, exp) - assert_frame_equal(res2, exp) - - res1 = df.query('ilevel_1 != ["eggs"]', parser=parser, engine=engine) - res2 = df.query('["eggs"] != ilevel_1', parser=parser, engine=engine) - exp = df[~ind.isin(['eggs'])] - assert_frame_equal(res1, exp) - assert_frame_equal(res2, exp) - - # in/not in ops - res1 = df.query('["eggs"] in ilevel_1', parser=parser, engine=engine) - res2 = df.query('"eggs" in ilevel_1', parser=parser, engine=engine) - exp = df[ind.isin(['eggs'])] - assert_frame_equal(res1, exp) - assert_frame_equal(res2, exp) - - res1 = df.query('["eggs"] not in ilevel_1', parser=parser, engine=engine) - res2 = df.query('"eggs" not in ilevel_1', parser=parser, engine=engine) - exp = df[~ind.isin(['eggs'])] - assert_frame_equal(res1, exp) - assert_frame_equal(res2, exp) - - def test_query_with_unnamed_multiindex(self): - for parser, engine in product(['pandas'], ENGINES): - yield self.check_query_with_unnamed_multiindex, parser, engine - - def check_query_with_partially_named_multiindex(self, parser, engine): - tm.skip_if_no_ne(engine) - a = tm.choice(['red', 'green'], size=10) - b = np.arange(10) - index = MultiIndex.from_arrays([a, b]) - index.names = [None, 'rating'] - df = DataFrame(randn(10, 2), index=index) - res = df.query('rating == 1', parser=parser, engine=engine) - ind = Series(df.index.get_level_values('rating').values, index=index, - name='rating') - exp = df[ind == 1] - assert_frame_equal(res, exp) - - res = df.query('rating != 1', parser=parser, engine=engine) - ind = Series(df.index.get_level_values('rating').values, index=index, - name='rating') - exp = df[ind != 1] - 
assert_frame_equal(res, exp) - - res = df.query('ilevel_0 == "red"', parser=parser, engine=engine) - ind = Series(df.index.get_level_values(0).values, index=index) - exp = df[ind == "red"] - assert_frame_equal(res, exp) - - res = df.query('ilevel_0 != "red"', parser=parser, engine=engine) - ind = Series(df.index.get_level_values(0).values, index=index) - exp = df[ind != "red"] - assert_frame_equal(res, exp) - - def test_query_with_partially_named_multiindex(self): - for parser, engine in product(['pandas'], ENGINES): - yield self.check_query_with_partially_named_multiindex, parser, engine - - def test_query_multiindex_get_index_resolvers(self): - for parser, engine in product(['pandas'], ENGINES): - yield self.check_query_multiindex_get_index_resolvers, parser, engine - - def check_query_multiindex_get_index_resolvers(self, parser, engine): - df = mkdf(10, 3, r_idx_nlevels=2, r_idx_names=['spam', 'eggs']) - resolvers = df._get_index_resolvers() - - def to_series(mi, level): - level_values = mi.get_level_values(level) - s = level_values.to_series() - s.index = mi - return s - - col_series = df.columns.to_series() - expected = {'index': df.index, - 'columns': col_series, - 'spam': to_series(df.index, 'spam'), - 'eggs': to_series(df.index, 'eggs'), - 'C0': col_series} - for k, v in resolvers.items(): - if isinstance(v, Index): - assert v.is_(expected[k]) - elif isinstance(v, Series): - assert_series_equal(v, expected[k]) - else: - raise AssertionError("object must be a Series or Index") - - def test_raise_on_panel_with_multiindex(self): - for parser, engine in product(PARSERS, ENGINES): - yield self.check_raise_on_panel_with_multiindex, parser, engine - - def check_raise_on_panel_with_multiindex(self, parser, engine): - tm.skip_if_no_ne() - p = tm.makePanel(7) - p.items = tm.makeCustomIndex(len(p.items), nlevels=2) - with tm.assertRaises(NotImplementedError): - pd.eval('p + 1', parser=parser, engine=engine) - - def test_raise_on_panel4d_with_multiindex(self): - for 
parser, engine in product(PARSERS, ENGINES): - yield self.check_raise_on_panel4d_with_multiindex, parser, engine - - def check_raise_on_panel4d_with_multiindex(self, parser, engine): - tm.skip_if_no_ne() - p4d = tm.makePanel4D(7) - p4d.items = tm.makeCustomIndex(len(p4d.items), nlevels=2) - with tm.assertRaises(NotImplementedError): - pd.eval('p4d + 1', parser=parser, engine=engine) - - -class TestDataFrameQueryNumExprPandas(tm.TestCase): - - @classmethod - def setUpClass(cls): - super(TestDataFrameQueryNumExprPandas, cls).setUpClass() - cls.engine = 'numexpr' - cls.parser = 'pandas' - tm.skip_if_no_ne(cls.engine) - - @classmethod - def tearDownClass(cls): - super(TestDataFrameQueryNumExprPandas, cls).tearDownClass() - del cls.engine, cls.parser - - def test_date_query_with_attribute_access(self): - engine, parser = self.engine, self.parser - skip_if_no_pandas_parser(parser) - df = DataFrame(randn(5, 3)) - df['dates1'] = date_range('1/1/2012', periods=5) - df['dates2'] = date_range('1/1/2013', periods=5) - df['dates3'] = date_range('1/1/2014', periods=5) - res = df.query('@df.dates1 < 20130101 < @df.dates3', engine=engine, - parser=parser) - expec = df[(df.dates1 < '20130101') & ('20130101' < df.dates3)] - assert_frame_equal(res, expec) - - def test_date_query_no_attribute_access(self): - engine, parser = self.engine, self.parser - df = DataFrame(randn(5, 3)) - df['dates1'] = date_range('1/1/2012', periods=5) - df['dates2'] = date_range('1/1/2013', periods=5) - df['dates3'] = date_range('1/1/2014', periods=5) - res = df.query('dates1 < 20130101 < dates3', engine=engine, - parser=parser) - expec = df[(df.dates1 < '20130101') & ('20130101' < df.dates3)] - assert_frame_equal(res, expec) - - def test_date_query_with_NaT(self): - engine, parser = self.engine, self.parser - n = 10 - df = DataFrame(randn(n, 3)) - df['dates1'] = date_range('1/1/2012', periods=n) - df['dates2'] = date_range('1/1/2013', periods=n) - df['dates3'] = date_range('1/1/2014', periods=n) - 
df.loc[np.random.rand(n) > 0.5, 'dates1'] = pd.NaT - df.loc[np.random.rand(n) > 0.5, 'dates3'] = pd.NaT - res = df.query('dates1 < 20130101 < dates3', engine=engine, - parser=parser) - expec = df[(df.dates1 < '20130101') & ('20130101' < df.dates3)] - assert_frame_equal(res, expec) - - def test_date_index_query(self): - engine, parser = self.engine, self.parser - n = 10 - df = DataFrame(randn(n, 3)) - df['dates1'] = date_range('1/1/2012', periods=n) - df['dates3'] = date_range('1/1/2014', periods=n) - df.set_index('dates1', inplace=True, drop=True) - res = df.query('index < 20130101 < dates3', engine=engine, - parser=parser) - expec = df[(df.index < '20130101') & ('20130101' < df.dates3)] - assert_frame_equal(res, expec) - - def test_date_index_query_with_NaT(self): - engine, parser = self.engine, self.parser - n = 10 - df = DataFrame(randn(n, 3)) - df['dates1'] = date_range('1/1/2012', periods=n) - df['dates3'] = date_range('1/1/2014', periods=n) - df.iloc[0, 0] = pd.NaT - df.set_index('dates1', inplace=True, drop=True) - res = df.query('index < 20130101 < dates3', engine=engine, - parser=parser) - expec = df[(df.index < '20130101') & ('20130101' < df.dates3)] - assert_frame_equal(res, expec) - - def test_date_index_query_with_NaT_duplicates(self): - engine, parser = self.engine, self.parser - n = 10 - d = {} - d['dates1'] = date_range('1/1/2012', periods=n) - d['dates3'] = date_range('1/1/2014', periods=n) - df = DataFrame(d) - df.loc[np.random.rand(n) > 0.5, 'dates1'] = pd.NaT - df.set_index('dates1', inplace=True, drop=True) - res = df.query('index < 20130101 < dates3', engine=engine, parser=parser) - expec = df[(df.index.to_series() < '20130101') & ('20130101' < df.dates3)] - assert_frame_equal(res, expec) - - def test_date_query_with_non_date(self): - engine, parser = self.engine, self.parser - - n = 10 - df = DataFrame({'dates': date_range('1/1/2012', periods=n), - 'nondate': np.arange(n)}) - - ops = '==', '!=', '<', '>', '<=', '>=' - - for op in ops: - with 
tm.assertRaises(TypeError): - df.query('dates %s nondate' % op, parser=parser, engine=engine) - - def test_query_syntax_error(self): - engine, parser = self.engine, self.parser - df = DataFrame({"i": lrange(10), "+": lrange(3, 13), - "r": lrange(4, 14)}) - with tm.assertRaises(SyntaxError): - df.query('i - +', engine=engine, parser=parser) - - def test_query_scope(self): - from pandas.computation.ops import UndefinedVariableError - engine, parser = self.engine, self.parser - skip_if_no_pandas_parser(parser) - - df = DataFrame(np.random.randn(20, 2), columns=list('ab')) - - a, b = 1, 2 # noqa - res = df.query('a > b', engine=engine, parser=parser) - expected = df[df.a > df.b] - assert_frame_equal(res, expected) - - res = df.query('@a > b', engine=engine, parser=parser) - expected = df[a > df.b] - assert_frame_equal(res, expected) - - # no local variable c - with tm.assertRaises(UndefinedVariableError): - df.query('@a > b > @c', engine=engine, parser=parser) - - # no column named 'c' - with tm.assertRaises(UndefinedVariableError): - df.query('@a > b > c', engine=engine, parser=parser) - - def test_query_doesnt_pickup_local(self): - from pandas.computation.ops import UndefinedVariableError - - engine, parser = self.engine, self.parser - n = m = 10 - df = DataFrame(np.random.randint(m, size=(n, 3)), columns=list('abc')) - - # we don't pick up the local 'sin' - with tm.assertRaises(UndefinedVariableError): - df.query('sin > 5', engine=engine, parser=parser) - - def test_query_builtin(self): - from pandas.computation.engines import NumExprClobberingError - engine, parser = self.engine, self.parser - - n = m = 10 - df = DataFrame(np.random.randint(m, size=(n, 3)), columns=list('abc')) - - df.index.name = 'sin' - with tm.assertRaisesRegexp(NumExprClobberingError, - 'Variables in expression.+'): - df.query('sin > 5', engine=engine, parser=parser) - - def test_query(self): - engine, parser = self.engine, self.parser - df = DataFrame(np.random.randn(10, 3), columns=['a', 'b', 
'c']) - - assert_frame_equal(df.query('a < b', engine=engine, parser=parser), - df[df.a < df.b]) - assert_frame_equal(df.query('a + b > b * c', engine=engine, - parser=parser), - df[df.a + df.b > df.b * df.c]) - - def test_query_index_with_name(self): - engine, parser = self.engine, self.parser - df = DataFrame(np.random.randint(10, size=(10, 3)), - index=Index(range(10), name='blob'), - columns=['a', 'b', 'c']) - res = df.query('(blob < 5) & (a < b)', engine=engine, parser=parser) - expec = df[(df.index < 5) & (df.a < df.b)] - assert_frame_equal(res, expec) - - res = df.query('blob < b', engine=engine, parser=parser) - expec = df[df.index < df.b] - - assert_frame_equal(res, expec) - - def test_query_index_without_name(self): - engine, parser = self.engine, self.parser - df = DataFrame(np.random.randint(10, size=(10, 3)), - index=range(10), columns=['a', 'b', 'c']) - - # "index" should refer to the index - res = df.query('index < b', engine=engine, parser=parser) - expec = df[df.index < df.b] - assert_frame_equal(res, expec) - - # test against a scalar - res = df.query('index < 5', engine=engine, parser=parser) - expec = df[df.index < 5] - assert_frame_equal(res, expec) - - def test_nested_scope(self): - engine = self.engine - parser = self.parser - - skip_if_no_pandas_parser(parser) - - df = DataFrame(np.random.randn(5, 3)) - df2 = DataFrame(np.random.randn(5, 3)) - expected = df[(df > 0) & (df2 > 0)] - - result = df.query('(@df > 0) & (@df2 > 0)', engine=engine, parser=parser) - assert_frame_equal(result, expected) - - result = pd.eval('df[df > 0 and df2 > 0]', engine=engine, - parser=parser) - assert_frame_equal(result, expected) - - result = pd.eval('df[df > 0 and df2 > 0 and df[df > 0] > 0]', - engine=engine, parser=parser) - expected = df[(df > 0) & (df2 > 0) & (df[df > 0] > 0)] - assert_frame_equal(result, expected) - - result = pd.eval('df[(df>0) & (df2>0)]', engine=engine, parser=parser) - expected = df.query('(@df>0) & (@df2>0)', engine=engine, 
parser=parser) - assert_frame_equal(result, expected) - - def test_nested_raises_on_local_self_reference(self): - from pandas.computation.ops import UndefinedVariableError - - df = DataFrame(np.random.randn(5, 3)) - - # can't reference ourself b/c we're a local so @ is necessary - with tm.assertRaises(UndefinedVariableError): - df.query('df > 0', engine=self.engine, parser=self.parser) - - def test_local_syntax(self): - skip_if_no_pandas_parser(self.parser) - - engine, parser = self.engine, self.parser - df = DataFrame(randn(100, 10), columns=list('abcdefghij')) - b = 1 - expect = df[df.a < b] - result = df.query('a < @b', engine=engine, parser=parser) - assert_frame_equal(result, expect) - - expect = df[df.a < df.b] - result = df.query('a < b', engine=engine, parser=parser) - assert_frame_equal(result, expect) - - def test_chained_cmp_and_in(self): - skip_if_no_pandas_parser(self.parser) - engine, parser = self.engine, self.parser - cols = list('abc') - df = DataFrame(randn(100, len(cols)), columns=cols) - res = df.query('a < b < c and a not in b not in c', engine=engine, - parser=parser) - ind = (df.a < df.b) & (df.b < df.c) & ~df.b.isin(df.a) & ~df.c.isin(df.b) - expec = df[ind] - assert_frame_equal(res, expec) - - def test_local_variable_with_in(self): - engine, parser = self.engine, self.parser - skip_if_no_pandas_parser(parser) - a = Series(np.random.randint(3, size=15), name='a') - b = Series(np.random.randint(10, size=15), name='b') - df = DataFrame({'a': a, 'b': b}) - - expected = df.loc[(df.b - 1).isin(a)] - result = df.query('b - 1 in a', engine=engine, parser=parser) - assert_frame_equal(expected, result) - - b = Series(np.random.randint(10, size=15), name='b') - expected = df.loc[(b - 1).isin(a)] - result = df.query('@b - 1 in a', engine=engine, parser=parser) - assert_frame_equal(expected, result) - - def test_at_inside_string(self): - engine, parser = self.engine, self.parser - skip_if_no_pandas_parser(parser) - c = 1 - df = DataFrame({'a': ['a', 
'a', 'b', 'b', '@c', '@c']}) - result = df.query('a == "@c"', engine=engine, parser=parser) - expected = df[df.a == "@c"] - assert_frame_equal(result, expected) - - def test_query_undefined_local(self): - from pandas.computation.ops import UndefinedVariableError - engine, parser = self.engine, self.parser - skip_if_no_pandas_parser(parser) - df = DataFrame(np.random.rand(10, 2), columns=list('ab')) - with tm.assertRaisesRegexp(UndefinedVariableError, - "local variable 'c' is not defined"): - df.query('a == @c', engine=engine, parser=parser) - - def test_index_resolvers_come_after_columns_with_the_same_name(self): - n = 1 - a = np.r_[20:101:20] - - df = DataFrame({'index': a, 'b': np.random.randn(a.size)}) - df.index.name = 'index' - result = df.query('index > 5', engine=self.engine, parser=self.parser) - expected = df[df['index'] > 5] - assert_frame_equal(result, expected) - - df = DataFrame({'index': a, - 'b': np.random.randn(a.size)}) - result = df.query('ilevel_0 > 5', engine=self.engine, - parser=self.parser) - expected = df.loc[df.index[df.index > 5]] - assert_frame_equal(result, expected) - - df = DataFrame({'a': a, 'b': np.random.randn(a.size)}) - df.index.name = 'a' - result = df.query('a > 5', engine=self.engine, parser=self.parser) - expected = df[df.a > 5] - assert_frame_equal(result, expected) - - result = df.query('index > 5', engine=self.engine, parser=self.parser) - expected = df.loc[df.index[df.index > 5]] - assert_frame_equal(result, expected) - - def test_inf(self): - n = 10 - df = DataFrame({'a': np.random.rand(n), 'b': np.random.rand(n)}) - df.loc[::2, 0] = np.inf - ops = '==', '!=' - d = dict(zip(ops, (operator.eq, operator.ne))) - for op, f in d.items(): - q = 'a %s inf' % op - expected = df[f(df.a, np.inf)] - result = df.query(q, engine=self.engine, parser=self.parser) - assert_frame_equal(result, expected) - - -class TestDataFrameQueryNumExprPython(TestDataFrameQueryNumExprPandas): - - @classmethod - def setUpClass(cls): - 
super(TestDataFrameQueryNumExprPython, cls).setUpClass() - cls.engine = 'numexpr' - cls.parser = 'python' - tm.skip_if_no_ne(cls.engine) - cls.frame = _frame.copy() - - def test_date_query_no_attribute_access(self): - engine, parser = self.engine, self.parser - df = DataFrame(randn(5, 3)) - df['dates1'] = date_range('1/1/2012', periods=5) - df['dates2'] = date_range('1/1/2013', periods=5) - df['dates3'] = date_range('1/1/2014', periods=5) - res = df.query('(dates1 < 20130101) & (20130101 < dates3)', - engine=engine, parser=parser) - expec = df[(df.dates1 < '20130101') & ('20130101' < df.dates3)] - assert_frame_equal(res, expec) - - def test_date_query_with_NaT(self): - engine, parser = self.engine, self.parser - n = 10 - df = DataFrame(randn(n, 3)) - df['dates1'] = date_range('1/1/2012', periods=n) - df['dates2'] = date_range('1/1/2013', periods=n) - df['dates3'] = date_range('1/1/2014', periods=n) - df.loc[np.random.rand(n) > 0.5, 'dates1'] = pd.NaT - df.loc[np.random.rand(n) > 0.5, 'dates3'] = pd.NaT - res = df.query('(dates1 < 20130101) & (20130101 < dates3)', - engine=engine, parser=parser) - expec = df[(df.dates1 < '20130101') & ('20130101' < df.dates3)] - assert_frame_equal(res, expec) - - def test_date_index_query(self): - engine, parser = self.engine, self.parser - n = 10 - df = DataFrame(randn(n, 3)) - df['dates1'] = date_range('1/1/2012', periods=n) - df['dates3'] = date_range('1/1/2014', periods=n) - df.set_index('dates1', inplace=True, drop=True) - res = df.query('(index < 20130101) & (20130101 < dates3)', - engine=engine, parser=parser) - expec = df[(df.index < '20130101') & ('20130101' < df.dates3)] - assert_frame_equal(res, expec) - - def test_date_index_query_with_NaT(self): - engine, parser = self.engine, self.parser - n = 10 - df = DataFrame(randn(n, 3)) - df['dates1'] = date_range('1/1/2012', periods=n) - df['dates3'] = date_range('1/1/2014', periods=n) - df.iloc[0, 0] = pd.NaT - df.set_index('dates1', inplace=True, drop=True) - res = 
df.query('(index < 20130101) & (20130101 < dates3)', - engine=engine, parser=parser) - expec = df[(df.index < '20130101') & ('20130101' < df.dates3)] - assert_frame_equal(res, expec) - - def test_date_index_query_with_NaT_duplicates(self): - engine, parser = self.engine, self.parser - n = 10 - df = DataFrame(randn(n, 3)) - df['dates1'] = date_range('1/1/2012', periods=n) - df['dates3'] = date_range('1/1/2014', periods=n) - df.loc[np.random.rand(n) > 0.5, 'dates1'] = pd.NaT - df.set_index('dates1', inplace=True, drop=True) - with tm.assertRaises(NotImplementedError): - df.query('index < 20130101 < dates3', engine=engine, parser=parser) - - def test_nested_scope(self): - from pandas.computation.ops import UndefinedVariableError - engine = self.engine - parser = self.parser - # smoke test - x = 1 - result = pd.eval('x + 1', engine=engine, parser=parser) - self.assertEqual(result, 2) - - df = DataFrame(np.random.randn(5, 3)) - df2 = DataFrame(np.random.randn(5, 3)) - - # don't have the pandas parser - with tm.assertRaises(SyntaxError): - df.query('(@df>0) & (@df2>0)', engine=engine, parser=parser) - - with tm.assertRaises(UndefinedVariableError): - df.query('(df>0) & (df2>0)', engine=engine, parser=parser) - - expected = df[(df > 0) & (df2 > 0)] - result = pd.eval('df[(df > 0) & (df2 > 0)]', engine=engine, - parser=parser) - assert_frame_equal(expected, result) - - expected = df[(df > 0) & (df2 > 0) & (df[df > 0] > 0)] - result = pd.eval('df[(df > 0) & (df2 > 0) & (df[df > 0] > 0)]', - engine=engine, parser=parser) - assert_frame_equal(expected, result) - - -class TestDataFrameQueryPythonPandas(TestDataFrameQueryNumExprPandas): - - @classmethod - def setUpClass(cls): - super(TestDataFrameQueryPythonPandas, cls).setUpClass() - cls.engine = 'python' - cls.parser = 'pandas' - cls.frame = _frame.copy() - - def test_query_builtin(self): - engine, parser = self.engine, self.parser - - n = m = 10 - df = DataFrame(np.random.randint(m, size=(n, 3)), columns=list('abc')) - - 
df.index.name = 'sin' - expected = df[df.index > 5] - result = df.query('sin > 5', engine=engine, parser=parser) - assert_frame_equal(expected, result) - - -class TestDataFrameQueryPythonPython(TestDataFrameQueryNumExprPython): - - @classmethod - def setUpClass(cls): - super(TestDataFrameQueryPythonPython, cls).setUpClass() - cls.engine = cls.parser = 'python' - cls.frame = _frame.copy() - - def test_query_builtin(self): - engine, parser = self.engine, self.parser - - n = m = 10 - df = DataFrame(np.random.randint(m, size=(n, 3)), columns=list('abc')) - - df.index.name = 'sin' - expected = df[df.index > 5] - result = df.query('sin > 5', engine=engine, parser=parser) - assert_frame_equal(expected, result) - - -PARSERS = 'python', 'pandas' -ENGINES = 'python', 'numexpr' - - -class TestDataFrameQueryStrings(object): - def check_str_query_method(self, parser, engine): - tm.skip_if_no_ne(engine) - df = DataFrame(randn(10, 1), columns=['b']) - df['strings'] = Series(list('aabbccddee')) - expect = df[df.strings == 'a'] - - if parser != 'pandas': - col = 'strings' - lst = '"a"' - - lhs = [col] * 2 + [lst] * 2 - rhs = lhs[::-1] - - eq, ne = '==', '!=' - ops = 2 * ([eq] + [ne]) - - for lhs, op, rhs in zip(lhs, ops, rhs): - ex = '{lhs} {op} {rhs}'.format(lhs=lhs, op=op, rhs=rhs) - assertRaises(NotImplementedError, df.query, ex, engine=engine, - parser=parser, local_dict={'strings': df.strings}) - else: - res = df.query('"a" == strings', engine=engine, parser=parser) - assert_frame_equal(res, expect) - - res = df.query('strings == "a"', engine=engine, parser=parser) - assert_frame_equal(res, expect) - assert_frame_equal(res, df[df.strings.isin(['a'])]) - - expect = df[df.strings != 'a'] - res = df.query('strings != "a"', engine=engine, parser=parser) - assert_frame_equal(res, expect) - - res = df.query('"a" != strings', engine=engine, parser=parser) - assert_frame_equal(res, expect) - assert_frame_equal(res, df[~df.strings.isin(['a'])]) - - def test_str_query_method(self): - 
for parser, engine in product(PARSERS, ENGINES): - yield self.check_str_query_method, parser, engine - - def test_str_list_query_method(self): - for parser, engine in product(PARSERS, ENGINES): - yield self.check_str_list_query_method, parser, engine - - def check_str_list_query_method(self, parser, engine): - tm.skip_if_no_ne(engine) - df = DataFrame(randn(10, 1), columns=['b']) - df['strings'] = Series(list('aabbccddee')) - expect = df[df.strings.isin(['a', 'b'])] - - if parser != 'pandas': - col = 'strings' - lst = '["a", "b"]' - - lhs = [col] * 2 + [lst] * 2 - rhs = lhs[::-1] - - eq, ne = '==', '!=' - ops = 2 * ([eq] + [ne]) - - for lhs, op, rhs in zip(lhs, ops, rhs): - ex = '{lhs} {op} {rhs}'.format(lhs=lhs, op=op, rhs=rhs) - with tm.assertRaises(NotImplementedError): - df.query(ex, engine=engine, parser=parser) - else: - res = df.query('strings == ["a", "b"]', engine=engine, - parser=parser) - assert_frame_equal(res, expect) - - res = df.query('["a", "b"] == strings', engine=engine, - parser=parser) - assert_frame_equal(res, expect) - - expect = df[~df.strings.isin(['a', 'b'])] - - res = df.query('strings != ["a", "b"]', engine=engine, - parser=parser) - assert_frame_equal(res, expect) - - res = df.query('["a", "b"] != strings', engine=engine, - parser=parser) - assert_frame_equal(res, expect) - - def check_query_with_string_columns(self, parser, engine): - tm.skip_if_no_ne(engine) - df = DataFrame({'a': list('aaaabbbbcccc'), - 'b': list('aabbccddeeff'), - 'c': np.random.randint(5, size=12), - 'd': np.random.randint(9, size=12)}) - if parser == 'pandas': - res = df.query('a in b', parser=parser, engine=engine) - expec = df[df.a.isin(df.b)] - assert_frame_equal(res, expec) - - res = df.query('a in b and c < d', parser=parser, engine=engine) - expec = df[df.a.isin(df.b) & (df.c < df.d)] - assert_frame_equal(res, expec) - else: - with assertRaises(NotImplementedError): - df.query('a in b', parser=parser, engine=engine) - - with assertRaises(NotImplementedError): 
- df.query('a in b and c < d', parser=parser, engine=engine) - - def test_query_with_string_columns(self): - for parser, engine in product(PARSERS, ENGINES): - yield self.check_query_with_string_columns, parser, engine - - def check_object_array_eq_ne(self, parser, engine): - tm.skip_if_no_ne(engine) - df = DataFrame({'a': list('aaaabbbbcccc'), - 'b': list('aabbccddeeff'), - 'c': np.random.randint(5, size=12), - 'd': np.random.randint(9, size=12)}) - res = df.query('a == b', parser=parser, engine=engine) - exp = df[df.a == df.b] - assert_frame_equal(res, exp) - - res = df.query('a != b', parser=parser, engine=engine) - exp = df[df.a != df.b] - assert_frame_equal(res, exp) - - def test_object_array_eq_ne(self): - for parser, engine in product(PARSERS, ENGINES): - yield self.check_object_array_eq_ne, parser, engine - - def check_query_with_nested_strings(self, parser, engine): - tm.skip_if_no_ne(engine) - skip_if_no_pandas_parser(parser) - from pandas.compat import StringIO - raw = """id event timestamp - 1 "page 1 load" 1/1/2014 0:00:01 - 1 "page 1 exit" 1/1/2014 0:00:31 - 2 "page 2 load" 1/1/2014 0:01:01 - 2 "page 2 exit" 1/1/2014 0:01:31 - 3 "page 3 load" 1/1/2014 0:02:01 - 3 "page 3 exit" 1/1/2014 0:02:31 - 4 "page 1 load" 2/1/2014 1:00:01 - 4 "page 1 exit" 2/1/2014 1:00:31 - 5 "page 2 load" 2/1/2014 1:01:01 - 5 "page 2 exit" 2/1/2014 1:01:31 - 6 "page 3 load" 2/1/2014 1:02:01 - 6 "page 3 exit" 2/1/2014 1:02:31 - """ - df = pd.read_csv(StringIO(raw), sep=r'\s{2,}', engine='python', - parse_dates=['timestamp']) - expected = df[df.event == '"page 1 load"'] - res = df.query("""'"page 1 load"' in event""", parser=parser, - engine=engine) - assert_frame_equal(expected, res) - - def test_query_with_nested_string(self): - for parser, engine in product(PARSERS, ENGINES): - yield self.check_query_with_nested_strings, parser, engine - - def check_query_with_nested_special_character(self, parser, engine): - skip_if_no_pandas_parser(parser) - tm.skip_if_no_ne(engine) - df = 
DataFrame({'a': ['a', 'b', 'test & test'], - 'b': [1, 2, 3]}) - res = df.query('a == "test & test"', parser=parser, engine=engine) - expec = df[df.a == 'test & test'] - assert_frame_equal(res, expec) - - def test_query_with_nested_special_character(self): - for parser, engine in product(PARSERS, ENGINES): - yield self.check_query_with_nested_special_character, parser, engine - - def check_query_lex_compare_strings(self, parser, engine): - tm.skip_if_no_ne(engine=engine) - import operator as opr - - a = Series(tm.choice(list('abcde'), 20)) - b = Series(np.arange(a.size)) - df = DataFrame({'X': a, 'Y': b}) - - ops = {'<': opr.lt, '>': opr.gt, '<=': opr.le, '>=': opr.ge} - - for op, func in ops.items(): - res = df.query('X %s "d"' % op, engine=engine, parser=parser) - expected = df[func(df.X, 'd')] - assert_frame_equal(res, expected) - - def test_query_lex_compare_strings(self): - for parser, engine in product(PARSERS, ENGINES): - yield self.check_query_lex_compare_strings, parser, engine - - def check_query_single_element_booleans(self, parser, engine): - tm.skip_if_no_ne(engine) - columns = 'bid', 'bidsize', 'ask', 'asksize' - data = np.random.randint(2, size=(1, len(columns))).astype(bool) - df = DataFrame(data, columns=columns) - res = df.query('bid & ask', engine=engine, parser=parser) - expected = df[df.bid & df.ask] - assert_frame_equal(res, expected) - - def test_query_single_element_booleans(self): - for parser, engine in product(PARSERS, ENGINES): - yield self.check_query_single_element_booleans, parser, engine - - def check_query_string_scalar_variable(self, parser, engine): - tm.skip_if_no_ne(engine) - df = pd.DataFrame({'Symbol': ['BUD US', 'BUD US', 'IBM US', 'IBM US'], - 'Price': [109.70, 109.72, 183.30, 183.35]}) - e = df[df.Symbol == 'BUD US'] - symb = 'BUD US' # noqa - r = df.query('Symbol == @symb', parser=parser, engine=engine) - assert_frame_equal(e, r) - - def test_query_string_scalar_variable(self): - for parser, engine in product(['pandas'], 
ENGINES): - yield self.check_query_string_scalar_variable, parser, engine - - -class TestDataFrameEvalNumExprPandas(tm.TestCase): - - @classmethod - def setUpClass(cls): - super(TestDataFrameEvalNumExprPandas, cls).setUpClass() - cls.engine = 'numexpr' - cls.parser = 'pandas' - tm.skip_if_no_ne() - - def setUp(self): - self.frame = DataFrame(randn(10, 3), columns=list('abc')) - - def tearDown(self): - del self.frame - - def test_simple_expr(self): - res = self.frame.eval('a + b', engine=self.engine, parser=self.parser) - expect = self.frame.a + self.frame.b - assert_series_equal(res, expect) - - def test_bool_arith_expr(self): - res = self.frame.eval('a[a < 1] + b', engine=self.engine, - parser=self.parser) - expect = self.frame.a[self.frame.a < 1] + self.frame.b - assert_series_equal(res, expect) - - def test_invalid_type_for_operator_raises(self): - df = DataFrame({'a': [1, 2], 'b': ['c', 'd']}) - ops = '+', '-', '*', '/' - for op in ops: - with tm.assertRaisesRegexp(TypeError, - "unsupported operand type\(s\) for " - ".+: '.+' and '.+'"): - df.eval('a {0} b'.format(op), engine=self.engine, - parser=self.parser) - - -class TestDataFrameEvalNumExprPython(TestDataFrameEvalNumExprPandas): - - @classmethod - def setUpClass(cls): - super(TestDataFrameEvalNumExprPython, cls).setUpClass() - cls.engine = 'numexpr' - cls.parser = 'python' - tm.skip_if_no_ne(cls.engine) - - -class TestDataFrameEvalPythonPandas(TestDataFrameEvalNumExprPandas): - - @classmethod - def setUpClass(cls): - super(TestDataFrameEvalPythonPandas, cls).setUpClass() - cls.engine = 'python' - cls.parser = 'pandas' - - -class TestDataFrameEvalPythonPython(TestDataFrameEvalNumExprPython): - - @classmethod - def setUpClass(cls): - super(TestDataFrameEvalPythonPython, cls).tearDownClass() - cls.engine = cls.parser = 'python' - - -if __name__ == '__main__': - nose.runmodule(argv=[__file__, '-vvs', '-x', '--pdb', '--pdb-failure'], - exit=False) diff --git a/setup.cfg b/setup.cfg index 
3e13665f0ee22..5c07a44ff4f7f 100644 --- a/setup.cfg +++ b/setup.cfg @@ -12,4 +12,4 @@ tag_prefix = v parentdir_prefix = pandas- [flake8] -ignore = F401,E731 +ignore = E731 diff --git a/setup.py b/setup.py index 0f4492d9821ee..62d9062de1155 100755 --- a/setup.py +++ b/setup.py @@ -552,6 +552,7 @@ def pxd(name): 'pandas.stats', 'pandas.util', 'pandas.tests', + 'pandas.tests.frame', 'pandas.tests.test_msgpack', 'pandas.tools', 'pandas.tools.tests',
@jreback I re-enabled flake8 checking for unused imports. We should be explicit about what imports are part of a module API (and those warnings explicitly suppressed with `# noqa`) and delete all unused imports.
https://api.github.com/repos/pandas-dev/pandas/pulls/12032
2016-01-13T15:35:36Z
2016-01-14T14:11:00Z
2016-01-14T14:11:00Z
2016-01-16T04:14:26Z
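The relocated tests in the diff above (e.g. `test_query_scope` and `test_local_syntax`) exercise the scoping rules of `DataFrame.query`: bare names resolve to columns, while `@`-prefixed names resolve to Python locals. A minimal sketch of that behavior — the frame contents and the `threshold` variable here are illustrative, not taken from the PR:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame(np.arange(6).reshape(3, 2), columns=["a", "b"])
threshold = 3  # a Python local, not a column

# Bare names resolve to columns; '@'-prefixed names resolve to locals.
by_columns = df.query("a < b")           # compares column 'a' to column 'b'
by_local = df.query("a < @threshold")    # compares column 'a' to the local

assert by_columns.equals(df[df.a < df.b])
assert by_local.equals(df[df.a < threshold])
```

Referencing an undefined local with `@` raises `UndefinedVariableError`, which several of the relocated tests assert explicitly.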
CI: Revert disable of some window test on Windows
diff --git a/ci/run_tests.sh b/ci/run_tests.sh index 261d6364cb5e1..0d6f26d8c29f8 100755 --- a/ci/run_tests.sh +++ b/ci/run_tests.sh @@ -24,7 +24,7 @@ PYTEST_CMD="${XVFB}pytest -m \"$PATTERN\" -n $PYTEST_WORKERS --dist=loadfile $TE if [[ $(uname) != "Linux" && $(uname) != "Darwin" ]]; then # GH#37455 windows py38 build appears to be running out of memory # skip collection of window tests - PYTEST_CMD="$PYTEST_CMD --ignore=pandas/tests/window/ --ignore=pandas/tests/plotting/" + PYTEST_CMD="$PYTEST_CMD --ignore=pandas/tests/window/moments --ignore=pandas/tests/plotting/" fi echo $PYTEST_CMD
- [x] xref #37535 - [x] tests added / passed - [x] Ensure all linting tests pass, see [here](https://pandas.pydata.org/pandas-docs/dev/development/contributing.html#code-standards) for how to run them
https://api.github.com/repos/pandas-dev/pandas/pulls/41481
2021-05-15T03:18:46Z
2021-05-17T15:18:37Z
2021-05-17T15:18:37Z
2021-05-17T16:16:35Z
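The one-line change in the record above narrows the Windows skip from all of `pandas/tests/window/` to just its `moments` subdirectory. A hedged sketch of the surrounding logic in `ci/run_tests.sh` — paths are taken from the diff, but the script is simplified (the real one also configures markers, workers, and xvfb):

```shell
#!/usr/bin/env sh
# Build the pytest invocation, then widen the ignore list only on
# platforms that are neither Linux nor macOS (i.e. the Windows builds).
PYTEST_CMD="pytest pandas"

if [ "$(uname)" != "Linux" ] && [ "$(uname)" != "Darwin" ]; then
    # GH#37455: the Windows py38 build ran out of memory during collection,
    # so only the heavy 'moments' subset is skipped, not all window tests.
    PYTEST_CMD="$PYTEST_CMD --ignore=pandas/tests/window/moments --ignore=pandas/tests/plotting/"
fi

echo "$PYTEST_CMD"
```

Using `--ignore` at collection time (rather than a marker) matters here because the memory cost was incurred while *collecting* the tests, before any marker filtering runs.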
DEPR: dropping nuisance columns in DataFrame reductions
diff --git a/doc/source/whatsnew/v1.2.0.rst b/doc/source/whatsnew/v1.2.0.rst index 6dd011c588702..36b591c3c3142 100644 --- a/doc/source/whatsnew/v1.2.0.rst +++ b/doc/source/whatsnew/v1.2.0.rst @@ -381,6 +381,7 @@ this pathological behavior (:issue:`37827`): *New behavior*: .. ipython:: python + :okwarning: df.mean() @@ -394,6 +395,7 @@ instead of casting to a NumPy array which may have different semantics (:issue:` :issue:`28949`, :issue:`21020`). .. ipython:: python + :okwarning: ser = pd.Series([0, 1], dtype="category", name="A") df = ser.to_frame() @@ -411,6 +413,7 @@ instead of casting to a NumPy array which may have different semantics (:issue:` *New behavior*: .. ipython:: python + :okwarning: df.any() diff --git a/doc/source/whatsnew/v1.3.0.rst b/doc/source/whatsnew/v1.3.0.rst index c26f8288f59ab..5b92fbb730b35 100644 --- a/doc/source/whatsnew/v1.3.0.rst +++ b/doc/source/whatsnew/v1.3.0.rst @@ -678,6 +678,47 @@ Deprecations - Deprecated passing arguments as positional (except for ``"method"``) in :meth:`DataFrame.interpolate` and :meth:`Series.interpolate` (:issue:`41485`) - Deprecated passing arguments (apart from ``value``) as positional in :meth:`DataFrame.fillna` and :meth:`Series.fillna` (:issue:`41485`) +.. _whatsnew_130.deprecations.nuisance_columns: + +Deprecated Dropping Nuisance Columns in DataFrame Reductions and DataFrameGroupBy Operations +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ +When calling a reduction (``.min``, ``.max``, ``.sum``, ...) on a :class:`DataFrame` with +``numeric_only=None`` (the default), columns on which the reduction raises ``TypeError`` +are silently ignored and dropped from the result. + +This behavior is deprecated. In a future version, the ``TypeError`` will be raised, +and users will need to select only valid columns before calling the function. + +For example: + +.. 
ipython:: python + + df = pd.DataFrame({"A": [1, 2, 3, 4], "B": pd.date_range("2016-01-01", periods=4)}) + df + +*Old behavior*: + + .. code-block:: ipython + + In [3]: df.prod() + Out[3]: + A 24 + dtype: int64 + +*Future behavior*: + + .. code-block:: ipython + + In [4]: df.prod() + ... + TypeError: 'DatetimeArray' does not implement reduction 'prod' + + In [5]: df[["A"]].prod() + Out[5]: + A 24 + dtype: int64 + .. --------------------------------------------------------------------------- diff --git a/pandas/core/frame.py b/pandas/core/frame.py index e55b3984b1c39..18ee1ad9bcd96 100644 --- a/pandas/core/frame.py +++ b/pandas/core/frame.py @@ -9854,6 +9854,21 @@ def _get_data() -> DataFrame: # Even if we are object dtype, follow numpy and return # float64, see test_apply_funcs_over_empty out = out.astype(np.float64) + + if numeric_only is None and out.shape[0] != df.shape[1]: + # columns have been dropped GH#41480 + arg_name = "numeric_only" + if name in ["all", "any"]: + arg_name = "bool_only" + warnings.warn( + "Dropping of nuisance columns in DataFrame reductions " + f"(with '{arg_name}=None') is deprecated; in a future " + "version this will raise TypeError. Select only valid " + "columns before calling the reduction.", + FutureWarning, + stacklevel=5, + ) + return out assert numeric_only is None @@ -9874,6 +9889,19 @@ def _get_data() -> DataFrame: with np.errstate(all="ignore"): result = func(values) + # columns have been dropped GH#41480 + arg_name = "numeric_only" + if name in ["all", "any"]: + arg_name = "bool_only" + warnings.warn( + "Dropping of nuisance columns in DataFrame reductions " + f"(with '{arg_name}=None') is deprecated; in a future " + "version this will raise TypeError. 
Select only valid " + "columns before calling the reduction.", + FutureWarning, + stacklevel=5, + ) + if hasattr(result, "dtype"): if filter_type == "bool" and notna(result).all(): result = result.astype(np.bool_) diff --git a/pandas/tests/apply/test_frame_apply.py b/pandas/tests/apply/test_frame_apply.py index cc91cdae942fd..2511f6fc2563c 100644 --- a/pandas/tests/apply/test_frame_apply.py +++ b/pandas/tests/apply/test_frame_apply.py @@ -1209,7 +1209,10 @@ def test_nuiscance_columns(): ) tm.assert_frame_equal(result, expected) - result = df.agg("sum") + with tm.assert_produces_warning( + FutureWarning, match="Select only valid", check_stacklevel=False + ): + result = df.agg("sum") expected = Series([6, 6.0, "foobarbaz"], index=["A", "B", "C"]) tm.assert_series_equal(result, expected) @@ -1426,8 +1429,9 @@ def test_apply_datetime_tz_issue(): @pytest.mark.parametrize("method", ["min", "max", "sum"]) def test_consistency_of_aggregates_of_columns_with_missing_values(df, method): # GH 16832 - none_in_first_column_result = getattr(df[["A", "B"]], method)() - none_in_second_column_result = getattr(df[["B", "A"]], method)() + with tm.assert_produces_warning(FutureWarning, match="Select only valid"): + none_in_first_column_result = getattr(df[["A", "B"]], method)() + none_in_second_column_result = getattr(df[["B", "A"]], method)() tm.assert_series_equal(none_in_first_column_result, none_in_second_column_result) diff --git a/pandas/tests/apply/test_invalid_arg.py b/pandas/tests/apply/test_invalid_arg.py index 698f85a04a757..83a1baa9d13d6 100644 --- a/pandas/tests/apply/test_invalid_arg.py +++ b/pandas/tests/apply/test_invalid_arg.py @@ -342,6 +342,7 @@ def test_transform_wont_agg_series(string_series, func): @pytest.mark.parametrize( "op_wrapper", [lambda x: x, lambda x: [x], lambda x: {"A": x}, lambda x: {"A": [x]}] ) +@pytest.mark.filterwarnings("ignore:.*Select only valid:FutureWarning") def test_transform_reducer_raises(all_reductions, frame_or_series, op_wrapper): # GH 
35964 op = op_wrapper(all_reductions) diff --git a/pandas/tests/frame/methods/test_quantile.py b/pandas/tests/frame/methods/test_quantile.py index aca061cdd197b..c01195a6afff1 100644 --- a/pandas/tests/frame/methods/test_quantile.py +++ b/pandas/tests/frame/methods/test_quantile.py @@ -56,7 +56,8 @@ def test_quantile(self, datetime_frame): # non-numeric exclusion df = DataFrame({"col1": ["A", "A", "B", "B"], "col2": [1, 2, 3, 4]}) rs = df.quantile(0.5) - xp = df.median().rename(0.5) + with tm.assert_produces_warning(FutureWarning, match="Select only valid"): + xp = df.median().rename(0.5) tm.assert_series_equal(rs, xp) # axis diff --git a/pandas/tests/frame/methods/test_rank.py b/pandas/tests/frame/methods/test_rank.py index 6538eda8cdeff..5ba4ab4408f11 100644 --- a/pandas/tests/frame/methods/test_rank.py +++ b/pandas/tests/frame/methods/test_rank.py @@ -248,6 +248,7 @@ def test_rank_methods_frame(self): @td.skip_array_manager_not_yet_implemented @pytest.mark.parametrize("dtype", ["O", "f8", "i8"]) + @pytest.mark.filterwarnings("ignore:.*Select only valid:FutureWarning") def test_rank_descending(self, method, dtype): if "i" in dtype: diff --git a/pandas/tests/frame/test_arithmetic.py b/pandas/tests/frame/test_arithmetic.py index b9f6e72acf71b..7fe921571ee2e 100644 --- a/pandas/tests/frame/test_arithmetic.py +++ b/pandas/tests/frame/test_arithmetic.py @@ -1021,6 +1021,7 @@ def test_zero_len_frame_with_series_corner_cases(): tm.assert_frame_equal(result, expected) +@pytest.mark.filterwarnings("ignore:.*Select only valid:FutureWarning") def test_frame_single_columns_object_sum_axis_1(): # GH 13758 data = { diff --git a/pandas/tests/frame/test_reductions.py b/pandas/tests/frame/test_reductions.py index 0ca523db60889..564f5d20b0301 100644 --- a/pandas/tests/frame/test_reductions.py +++ b/pandas/tests/frame/test_reductions.py @@ -8,6 +8,8 @@ from pandas.compat import is_platform_windows import pandas.util._test_decorators as td +from pandas.core.dtypes.common import 
is_categorical_dtype + import pandas as pd from pandas import ( Categorical, @@ -90,7 +92,7 @@ def wrapper(x): tm.assert_series_equal( result0, frame.apply(wrapper), check_dtype=check_dtype, rtol=rtol, atol=atol ) - # HACK: win32 + # FIXME: HACK: win32 tm.assert_series_equal( result1, frame.apply(wrapper, axis=1), @@ -140,7 +142,7 @@ def wrapper(x): tm.assert_series_equal(r1, expected) -def assert_stat_op_api(opname, float_frame, float_string_frame, has_numeric_only=False): +def assert_stat_op_api(opname, float_frame, float_string_frame, has_numeric_only=True): """ Check that API for operator opname works as advertised on frame @@ -199,7 +201,7 @@ def wrapper(x): tm.assert_series_equal(result0, frame.apply(wrapper)) tm.assert_series_equal( result1, frame.apply(wrapper, axis=1), check_dtype=False - ) # HACK: win32 + ) # FIXME: HACK: win32 else: skipna_wrapper = alternative wrapper = alternative @@ -249,6 +251,7 @@ def assert_bool_op_api( # make sure op works on mixed-type frame mixed = float_string_frame mixed["_bool_"] = np.random.randn(len(mixed)) > 0.5 + getattr(mixed, opname)(axis=0) getattr(mixed, opname)(axis=1) @@ -264,21 +267,22 @@ class TestDataFrameAnalytics: # --------------------------------------------------------------------- # Reductions + @pytest.mark.filterwarnings("ignore:Dropping of nuisance:FutureWarning") def test_stat_op_api(self, float_frame, float_string_frame): + assert_stat_op_api("count", float_frame, float_string_frame) + assert_stat_op_api("sum", float_frame, float_string_frame) + assert_stat_op_api( - "count", float_frame, float_string_frame, has_numeric_only=True - ) - assert_stat_op_api( - "sum", float_frame, float_string_frame, has_numeric_only=True + "nunique", float_frame, float_string_frame, has_numeric_only=False ) - - assert_stat_op_api("nunique", float_frame, float_string_frame) assert_stat_op_api("mean", float_frame, float_string_frame) assert_stat_op_api("product", float_frame, float_string_frame) assert_stat_op_api("median", 
float_frame, float_string_frame) assert_stat_op_api("min", float_frame, float_string_frame) assert_stat_op_api("max", float_frame, float_string_frame) - assert_stat_op_api("mad", float_frame, float_string_frame) + assert_stat_op_api( + "mad", float_frame, float_string_frame, has_numeric_only=False + ) assert_stat_op_api("var", float_frame, float_string_frame) assert_stat_op_api("std", float_frame, float_string_frame) assert_stat_op_api("sem", float_frame, float_string_frame) @@ -435,12 +439,17 @@ def test_mixed_ops(self, op): "str": ["a", "b", "c", "d"], } ) - - result = getattr(df, op)() + with tm.assert_produces_warning( + FutureWarning, match="Select only valid columns" + ): + result = getattr(df, op)() assert len(result) == 2 with pd.option_context("use_bottleneck", False): - result = getattr(df, op)() + with tm.assert_produces_warning( + FutureWarning, match="Select only valid columns" + ): + result = getattr(df, op)() assert len(result) == 2 def test_reduce_mixed_frame(self): @@ -457,7 +466,8 @@ def test_reduce_mixed_frame(self): tm.assert_numpy_array_equal( test.values, np.array([2, 150, "abcde"], dtype=object) ) - tm.assert_series_equal(test, df.T.sum(axis=1)) + alt = df.T.sum(axis=1) + tm.assert_series_equal(test, alt) def test_nunique(self): df = DataFrame({"A": [1, 1, 1], "B": [1, 2, 3], "C": [1, np.nan, 3]}) @@ -510,7 +520,10 @@ def test_mean_mixed_string_decimal(self): df = DataFrame(d) - result = df.mean() + with tm.assert_produces_warning( + FutureWarning, match="Select only valid columns" + ): + result = df.mean() expected = Series([2.7, 681.6], index=["A", "C"]) tm.assert_series_equal(result, expected) @@ -740,7 +753,8 @@ def test_operators_timedelta64(self): tm.assert_series_equal(result, expected) # excludes numeric - result = mixed.min(axis=1) + with tm.assert_produces_warning(FutureWarning, match="Select only valid"): + result = mixed.min(axis=1) expected = Series([1, 1, 1.0], index=[0, 1, 2]) tm.assert_series_equal(result, expected) @@ -801,8 
+815,9 @@ def test_sum_prod_nanops(self, method, unit): idx = ["a", "b", "c"] df = DataFrame({"a": [unit, unit], "b": [unit, np.nan], "c": [np.nan, np.nan]}) # The default - result = getattr(df, method) + result = getattr(df, method)() expected = Series([unit, unit, unit], index=idx, dtype="float64") + tm.assert_series_equal(result, expected) # min_count=1 result = getattr(df, method)(min_count=1) @@ -873,20 +888,23 @@ def test_sum_mixed_datetime(self): df = DataFrame({"A": date_range("2000", periods=4), "B": [1, 2, 3, 4]}).reindex( [2, 3, 4] ) - result = df.sum() + with tm.assert_produces_warning(FutureWarning, match="Select only valid"): + result = df.sum() expected = Series({"B": 7.0}) tm.assert_series_equal(result, expected) def test_mean_corner(self, float_frame, float_string_frame): # unit test when have object data - the_mean = float_string_frame.mean(axis=0) + with tm.assert_produces_warning(FutureWarning, match="Select only valid"): + the_mean = float_string_frame.mean(axis=0) the_sum = float_string_frame.sum(axis=0, numeric_only=True) tm.assert_index_equal(the_sum.index, the_mean.index) assert len(the_mean.index) < len(float_string_frame.columns) # xs sum mixed type, just want to know it works... 
- the_mean = float_string_frame.mean(axis=1) + with tm.assert_produces_warning(FutureWarning, match="Select only valid"): + the_mean = float_string_frame.mean(axis=1) the_sum = float_string_frame.sum(axis=1, numeric_only=True) tm.assert_index_equal(the_sum.index, the_mean.index) @@ -947,10 +965,13 @@ def test_mean_extensionarray_numeric_only_true(self): def test_stats_mixed_type(self, float_string_frame): # don't blow up - float_string_frame.std(1) - float_string_frame.var(1) - float_string_frame.mean(1) - float_string_frame.skew(1) + with tm.assert_produces_warning( + FutureWarning, match="Select only valid columns" + ): + float_string_frame.std(1) + float_string_frame.var(1) + float_string_frame.mean(1) + float_string_frame.skew(1) def test_sum_bools(self): df = DataFrame(index=range(1), columns=range(10)) @@ -1125,7 +1146,6 @@ def test_any_all_object_dtype(self, axis, bool_agg_func, skipna): [np.nan, np.nan, "5", np.nan], ] ) - result = getattr(df, bool_agg_func)(axis=axis, skipna=skipna) expected = Series([True, True, True, True]) tm.assert_series_equal(result, expected) @@ -1224,12 +1244,23 @@ def test_any_all_bool_only(self): def test_any_all_np_func(self, func, data, expected): # GH 19976 data = DataFrame(data) - result = func(data) + + warn = None + if any(is_categorical_dtype(x) for x in data.dtypes): + warn = FutureWarning + + with tm.assert_produces_warning( + warn, match="Select only valid columns", check_stacklevel=False + ): + result = func(data) assert isinstance(result, np.bool_) assert result.item() is expected # method version - result = getattr(DataFrame(data), func.__name__)(axis=None) + with tm.assert_produces_warning( + warn, match="Select only valid columns", check_stacklevel=False + ): + result = getattr(DataFrame(data), func.__name__)(axis=None) assert isinstance(result, np.bool_) assert result.item() is expected @@ -1349,7 +1380,6 @@ def test_min_max_dt64_with_NaT_skipna_false(self, request, tz_naive_fixture): "b": [Timestamp("2020-02-01 
08:00:00", tz=tz), pd.NaT], } ) - res = df.min(axis=1, skipna=False) expected = Series([df.loc[0, "a"], pd.NaT]) assert expected.dtype == df["a"].dtype @@ -1411,12 +1441,12 @@ def test_frame_any_all_with_level(self): ], ) - with tm.assert_produces_warning(FutureWarning): + with tm.assert_produces_warning(FutureWarning, match="Using the level"): result = df.any(level=0) ex = DataFrame({"data": [False, True]}, index=["one", "two"]) tm.assert_frame_equal(result, ex) - with tm.assert_produces_warning(FutureWarning): + with tm.assert_produces_warning(FutureWarning, match="Using the level"): result = df.all(level=0) ex = DataFrame({"data": [False, False]}, index=["one", "two"]) tm.assert_frame_equal(result, ex) @@ -1463,7 +1493,7 @@ def test_reductions_deprecation_level_argument(self, frame_or_series, func): obj = frame_or_series( [1, 2, 3], index=MultiIndex.from_arrays([[1, 2, 3], [4, 5, 6]]) ) - with tm.assert_produces_warning(FutureWarning): + with tm.assert_produces_warning(FutureWarning, match="level"): getattr(obj, func)(level=0) @@ -1486,11 +1516,17 @@ def test_any_all_categorical_dtype_nuisance_column(self, method): # With bool_only=None, operating on this column raises and is ignored, # so we expect an empty result. 
- result = getattr(df, method)(bool_only=None) + with tm.assert_produces_warning( + FutureWarning, match="Select only valid columns" + ): + result = getattr(df, method)(bool_only=None) expected = Series([], index=Index([]), dtype=bool) tm.assert_series_equal(result, expected) - result = getattr(np, method)(df, axis=0) + with tm.assert_produces_warning( + FutureWarning, match="Select only valid columns", check_stacklevel=False + ): + result = getattr(np, method)(df, axis=0) tm.assert_series_equal(result, expected) def test_median_categorical_dtype_nuisance_column(self): @@ -1505,7 +1541,10 @@ def test_median_categorical_dtype_nuisance_column(self): with pytest.raises(TypeError, match="does not implement reduction"): df.median(numeric_only=False) - result = df.median() + with tm.assert_produces_warning( + FutureWarning, match="Select only valid columns" + ): + result = df.median() expected = Series([], index=Index([]), dtype=np.float64) tm.assert_series_equal(result, expected) @@ -1515,7 +1554,10 @@ def test_median_categorical_dtype_nuisance_column(self): with pytest.raises(TypeError, match="does not implement reduction"): df.median(numeric_only=False) - result = df.median() + with tm.assert_produces_warning( + FutureWarning, match="Select only valid columns" + ): + result = df.median() expected = Series([2.0], index=["B"]) tm.assert_series_equal(result, expected) @@ -1539,23 +1581,35 @@ def test_min_max_categorical_dtype_non_ordered_nuisance_column(self, method): with pytest.raises(TypeError, match="is not ordered for operation"): getattr(df, method)(numeric_only=False) - result = getattr(df, method)() + with tm.assert_produces_warning( + FutureWarning, match="Select only valid columns" + ): + result = getattr(df, method)() expected = Series([], index=Index([]), dtype=np.float64) tm.assert_series_equal(result, expected) - result = getattr(np, method)(df) + with tm.assert_produces_warning( + FutureWarning, match="Select only valid columns", check_stacklevel=False + 
): + result = getattr(np, method)(df) tm.assert_series_equal(result, expected) # same thing, but with an additional non-categorical column df["B"] = df["A"].astype(object) - result = getattr(df, method)() + with tm.assert_produces_warning( + FutureWarning, match="Select only valid columns" + ): + result = getattr(df, method)() if method == "min": expected = Series(["a"], index=["B"]) else: expected = Series(["c"], index=["B"]) tm.assert_series_equal(result, expected) - result = getattr(np, method)(df) + with tm.assert_produces_warning( + FutureWarning, match="Select only valid columns", check_stacklevel=False + ): + result = getattr(np, method)(df) tm.assert_series_equal(result, expected) def test_reduction_object_block_splits_nuisance_columns(self): @@ -1563,14 +1617,20 @@ def test_reduction_object_block_splits_nuisance_columns(self): df = DataFrame({"A": [0, 1, 2], "B": ["a", "b", "c"]}, dtype=object) # We should only exclude "B", not "A" - result = df.mean() + with tm.assert_produces_warning( + FutureWarning, match="Select only valid columns" + ): + result = df.mean() expected = Series([1.0], index=["A"]) tm.assert_series_equal(result, expected) # Same behavior but heterogeneous dtype df["C"] = df["A"].astype(int) + 4 - result = df.mean() + with tm.assert_produces_warning( + FutureWarning, match="Select only valid columns" + ): + result = df.mean() expected = Series([1.0, 5.0], index=["A", "C"]) tm.assert_series_equal(result, expected) @@ -1644,6 +1704,7 @@ def test_groupy_regular_arithmetic_equivalent(meth): def test_frame_mixed_numeric_object_with_timestamp(ts_value): # GH 13912 df = DataFrame({"a": [1], "b": [1.1], "c": ["foo"], "d": [ts_value]}) - result = df.sum() + with tm.assert_produces_warning(FutureWarning, match="Dropping of nuisance"): + result = df.sum() expected = Series([1, 1.1, "foo"], index=list("abc")) tm.assert_series_equal(result, expected) diff --git a/pandas/tests/frame/test_subclass.py b/pandas/tests/frame/test_subclass.py index 
3214290465832..42474ff00ad6d 100644 --- a/pandas/tests/frame/test_subclass.py +++ b/pandas/tests/frame/test_subclass.py @@ -567,6 +567,7 @@ def stretch(row): assert not isinstance(result, tm.SubclassedDataFrame) tm.assert_series_equal(result, expected) + @pytest.mark.filterwarnings("ignore:.*None will no longer:FutureWarning") def test_subclassed_reductions(self, all_reductions): # GH 25596 diff --git a/pandas/tests/groupby/test_apply.py b/pandas/tests/groupby/test_apply.py index 2f87f4a19b93f..cf4127da79bf9 100644 --- a/pandas/tests/groupby/test_apply.py +++ b/pandas/tests/groupby/test_apply.py @@ -1003,7 +1003,8 @@ def test_apply_function_with_indexing_return_column(): "foo2": [1, 2, 4, 4, 5, 6], } ) - result = df.groupby("foo1", as_index=False).apply(lambda x: x.mean()) + with tm.assert_produces_warning(FutureWarning, match="Select only valid"): + result = df.groupby("foo1", as_index=False).apply(lambda x: x.mean()) expected = DataFrame({"foo1": ["one", "three", "two"], "foo2": [3.0, 4.0, 4.0]}) tm.assert_frame_equal(result, expected) diff --git a/pandas/tests/groupby/test_categorical.py b/pandas/tests/groupby/test_categorical.py index 217aa0d7ede17..8ce7841bcc2c2 100644 --- a/pandas/tests/groupby/test_categorical.py +++ b/pandas/tests/groupby/test_categorical.py @@ -282,7 +282,10 @@ def test_apply(ordered): # GH#21636 tracking down the xfail, in some builds np.mean(df.loc[[0]]) # is coming back as Series([0., 1., 0.], index=["missing", "dense", "values"]) # when we expect Series(0., index=["values"]) - result = grouped.apply(lambda x: np.mean(x)) + with tm.assert_produces_warning( + FutureWarning, match="Select only valid", check_stacklevel=False + ): + result = grouped.apply(lambda x: np.mean(x)) tm.assert_frame_equal(result, expected) result = grouped.mean() @@ -1289,6 +1292,7 @@ def test_groupby_categorical_axis_1(code): tm.assert_frame_equal(result, expected) +@pytest.mark.filterwarnings("ignore:.*Select only valid:FutureWarning") def 
test_groupby_cat_preserves_structure(observed, ordered): # GH 28787 df = DataFrame( diff --git a/pandas/tests/groupby/test_function.py b/pandas/tests/groupby/test_function.py index 3f43c34b6eb34..4fa21a259e7cb 100644 --- a/pandas/tests/groupby/test_function.py +++ b/pandas/tests/groupby/test_function.py @@ -333,6 +333,7 @@ def gni(self, df): return gni # TODO: non-unique columns, as_index=False + @pytest.mark.filterwarnings("ignore:.*Select only valid:FutureWarning") def test_idxmax(self, gb): # object dtype so idxmax goes through _aggregate_item_by_item # GH#5610 @@ -342,6 +343,7 @@ def test_idxmax(self, gb): result = gb.idxmax() tm.assert_frame_equal(result, expected) + @pytest.mark.filterwarnings("ignore:.*Select only valid:FutureWarning") def test_idxmin(self, gb): # object dtype so idxmax goes through _aggregate_item_by_item # GH#5610 @@ -524,6 +526,7 @@ def test_groupby_non_arithmetic_agg_int_like_precision(i): ("idxmax", {"c_int": [1, 3], "c_float": [0, 2], "c_date": [0, 3]}), ], ) +@pytest.mark.filterwarnings("ignore:.*Select only valid:FutureWarning") def test_idxmin_idxmax_returns_int_types(func, values): # GH 25444 df = DataFrame( diff --git a/pandas/tests/groupby/test_groupby.py b/pandas/tests/groupby/test_groupby.py index afa28866f0833..c37dc17b85dd2 100644 --- a/pandas/tests/groupby/test_groupby.py +++ b/pandas/tests/groupby/test_groupby.py @@ -1757,6 +1757,7 @@ def test_pivot_table_values_key_error(): @pytest.mark.parametrize( "op", ["idxmax", "idxmin", "mad", "min", "max", "sum", "prod", "skew"] ) +@pytest.mark.filterwarnings("ignore:.*Select only valid:FutureWarning") def test_empty_groupby(columns, keys, values, method, op, request): # GH8093 & GH26411 override_dtype = None
- [ ] closes #xxxx - [x] tests added / passed - [x] Ensure all linting tests pass, see [here](https://pandas.pydata.org/pandas-docs/dev/development/contributing.html#code-standards) for how to run them - [x] whatsnew entry Discussed on this week's call
https://api.github.com/repos/pandas-dev/pandas/pulls/41480
2021-05-15T01:37:10Z
2021-05-21T20:59:56Z
2021-05-21T20:59:56Z
2021-05-21T21:18:11Z
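The record above tracks the deprecation of silently dropping nuisance (non-numeric) columns in DataFrame reductions; its tests assert a `FutureWarning` matching "Select only valid columns". A minimal stdlib-only sketch of the warn-then-drop pattern and the test-side warning capture — the function name and exact message here are illustrative stand-ins, not pandas internals:

```python
import warnings

def frame_sum(values):
    """Toy stand-in for DataFrame.sum(): silently drops non-numeric
    entries, but now emits a FutureWarning about that behavior
    (hypothetical simplification of the PR's change)."""
    numeric = [v for v in values if isinstance(v, (int, float))]
    if len(numeric) != len(values):
        warnings.warn(
            "Dropping of nuisance columns is deprecated; "
            "select only valid columns before calling the reduction.",
            FutureWarning,
            stacklevel=2,
        )
    return sum(numeric)

# Test-side pattern, analogous to the diff's
# tm.assert_produces_warning(FutureWarning, match="Select only valid"):
with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    result = frame_sum([1, 2.5, "foo"])

assert result == 3.5
assert any(issubclass(w.category, FutureWarning) for w in caught)
```

The `record=True` capture is the stdlib mechanism `tm.assert_produces_warning` builds on.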
REF: Various things preparing instantiable NumericIndex
diff --git a/pandas/core/indexes/base.py b/pandas/core/indexes/base.py index 83946491f32a8..08ca84228a301 100644 --- a/pandas/core/indexes/base.py +++ b/pandas/core/indexes/base.py @@ -347,7 +347,7 @@ def _outer_indexer( joined = self._from_join_target(joined_ndarray) return joined, lidx, ridx - _typ = "index" + _typ: str = "index" _data: ExtensionArray | np.ndarray _id: object | None = None _name: Hashable = None @@ -355,11 +355,11 @@ def _outer_indexer( # don't allow this anymore, and raise if it happens rather than # failing silently. _no_setting_name: bool = False - _comparables = ["name"] - _attributes = ["name"] - _is_numeric_dtype = False - _can_hold_na = True - _can_hold_strings = True + _comparables: list[str] = ["name"] + _attributes: list[str] = ["name"] + _is_numeric_dtype: bool = False + _can_hold_na: bool = True + _can_hold_strings: bool = True # would we like our indexing holder to defer to us _defer_to_indexing = False @@ -5465,7 +5465,7 @@ def map(self, mapper, na_action=None): """ from pandas.core.indexes.multi import MultiIndex - new_values = super()._map_values(mapper, na_action=na_action) + new_values = self._map_values(mapper, na_action=na_action) attributes = self._get_attributes_dict() diff --git a/pandas/tests/indexes/numeric/test_numeric.py b/pandas/tests/indexes/numeric/test_numeric.py index 4c2c38df601ce..c796a25faf0a6 100644 --- a/pandas/tests/indexes/numeric/test_numeric.py +++ b/pandas/tests/indexes/numeric/test_numeric.py @@ -18,12 +18,12 @@ class TestFloat64Index(NumericBase): _index_cls = Float64Index - @pytest.fixture + @pytest.fixture(params=[np.float64]) def dtype(self, request): - return np.float64 + return request.param @pytest.fixture( - params=["int64", "uint64", "category", "datetime64"], + params=["int64", "uint64", "category", "datetime64", "object"], ) def invalid_dtype(self, request): return request.param @@ -42,16 +42,16 @@ def simple_index(self, dtype): ], ids=["mixed", "float", "mixed_dec", "float_dec"], ) - def 
index(self, request): - return self._index_cls(request.param) + def index(self, request, dtype): + return self._index_cls(request.param, dtype=dtype) @pytest.fixture - def mixed_index(self): - return self._index_cls([1.5, 2, 3, 4, 5]) + def mixed_index(self, dtype): + return self._index_cls([1.5, 2, 3, 4, 5], dtype=dtype) @pytest.fixture - def float_index(self): - return self._index_cls([0.0, 2.5, 5.0, 7.5, 10.0]) + def float_index(self, dtype): + return self._index_cls([0.0, 2.5, 5.0, 7.5, 10.0], dtype=dtype) def test_repr_roundtrip(self, index): tm.assert_index_equal(eval(repr(index)), index) @@ -72,22 +72,23 @@ def test_constructor(self, dtype): index_cls = self._index_cls # explicit construction - index = index_cls([1, 2, 3, 4, 5]) + index = index_cls([1, 2, 3, 4, 5], dtype=dtype) assert isinstance(index, index_cls) - assert index.dtype.type is dtype + assert index.dtype == dtype expected = np.array([1, 2, 3, 4, 5], dtype=dtype) tm.assert_numpy_array_equal(index.values, expected) - index = index_cls(np.array([1, 2, 3, 4, 5])) + + index = index_cls(np.array([1, 2, 3, 4, 5]), dtype=dtype) assert isinstance(index, index_cls) assert index.dtype == dtype - index = index_cls([1.0, 2, 3, 4, 5]) + index = index_cls([1.0, 2, 3, 4, 5], dtype=dtype) assert isinstance(index, index_cls) assert index.dtype == dtype - index = index_cls(np.array([1.0, 2, 3, 4, 5])) + index = index_cls(np.array([1.0, 2, 3, 4, 5]), dtype=dtype) assert isinstance(index, index_cls) assert index.dtype == dtype @@ -100,13 +101,13 @@ def test_constructor(self, dtype): assert index.dtype == dtype # nan handling - result = index_cls([np.nan, np.nan]) + result = index_cls([np.nan, np.nan], dtype=dtype) assert pd.isna(result.values).all() - result = index_cls(np.array([np.nan])) + result = index_cls(np.array([np.nan]), dtype=dtype) assert pd.isna(result.values).all() - result = Index(np.array([np.nan])) + result = Index(np.array([np.nan], dtype=dtype)) assert isinstance(result, index_cls) assert 
result.dtype == dtype assert pd.isna(result.values).all() @@ -281,7 +282,7 @@ class NumericInt(NumericBase): def test_view(self, dtype): index_cls = self._index_cls - idx = index_cls([], name="Foo") + idx = index_cls([], dtype=dtype, name="Foo") idx_view = idx.view() assert idx_view.name == "Foo" @@ -382,12 +383,12 @@ def test_prevent_casting(self, simple_index): class TestInt64Index(NumericInt): _index_cls = Int64Index - @pytest.fixture - def dtype(self): - return np.int64 + @pytest.fixture(params=[np.int64]) + def dtype(self, request): + return request.param @pytest.fixture( - params=["uint64", "float64", "category", "datetime64"], + params=["uint64", "float64", "category", "datetime64", "object"], ) def invalid_dtype(self, request): return request.param @@ -399,14 +400,14 @@ def simple_index(self, dtype): @pytest.fixture( params=[range(0, 20, 2), range(19, -1, -1)], ids=["index_inc", "index_dec"] ) - def index(self, request): - return self._index_cls(request.param) + def index(self, request, dtype): + return self._index_cls(request.param, dtype=dtype) def test_constructor(self, dtype): index_cls = self._index_cls # pass list, coerce fine - index = index_cls([-5, 0, 1, 2]) + index = index_cls([-5, 0, 1, 2], dtype=dtype) expected = Index([-5, 0, 1, 2], dtype=dtype) tm.assert_index_equal(index, expected) @@ -486,7 +487,7 @@ def dtype(self): return np.uint64 @pytest.fixture( - params=["int64", "float64", "category", "datetime64"], + params=["int64", "float64", "category", "datetime64", "object"], ) def invalid_dtype(self, request): return request.param diff --git a/pandas/tests/indexes/ranges/test_range.py b/pandas/tests/indexes/ranges/test_range.py index d9b093cc97fda..e80868fb08a09 100644 --- a/pandas/tests/indexes/ranges/test_range.py +++ b/pandas/tests/indexes/ranges/test_range.py @@ -28,7 +28,7 @@ def dtype(self): return np.int64 @pytest.fixture( - params=["uint64", "float64", "category", "datetime64"], + params=["uint64", "float64", "category", "datetime64", 
"object"], ) def invalid_dtype(self, request): return request.param
Separating various pieces from #41153, making that PR easier to review after this is merged. Similar idea as #41472. Nothing in this PR changes behavior; it just makes it simpler to e.g. update test fixtures after this is merged.
https://api.github.com/repos/pandas-dev/pandas/pulls/41479
2021-05-14T23:13:21Z
2021-05-17T15:19:46Z
2021-05-17T15:19:46Z
2021-05-17T16:09:49Z
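One change in the diff above replaces bare class-attribute assignments on `Index` with explicitly annotated ones (`_typ: str = "index"` and friends), which type checkers can then verify. A self-contained sketch of that pattern — the class is heavily simplified for illustration, and `typing.List` is used here instead of the PR's built-in `list[str]` generics for compatibility with older interpreters:

```python
from typing import Hashable, List

class Index:
    # Annotated class attributes, mirroring the PR's change from bare
    # assignments (e.g. `_typ = "index"`) to explicit annotations.
    _typ: str = "index"
    _comparables: List[str] = ["name"]
    _attributes: List[str] = ["name"]
    _is_numeric_dtype: bool = False
    _can_hold_na: bool = True
    _can_hold_strings: bool = True
    _name: Hashable = None

# Annotations are recorded on the class and visible to type checkers,
# while the runtime values are unchanged:
assert Index.__annotations__["_typ"] is str
assert Index._is_numeric_dtype is False
assert Index._comparables == ["name"]
```

The runtime behavior is identical either way; the annotations only add static-typing information, which is why the PR body can say nothing changes.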
Backport PR #41462: CI: Fix changed flake8 error message after upgrade (#41462)
diff --git a/.pre-commit-config.yaml b/.pre-commit-config.yaml index 31c926233d5b6..ffe615a63b7e3 100644 --- a/.pre-commit-config.yaml +++ b/.pre-commit-config.yaml @@ -4,7 +4,7 @@ repos: hooks: - id: black - repo: https://gitlab.com/pycqa/flake8 - rev: 3.8.4 + rev: 3.9.2 hooks: - id: flake8 additional_dependencies: [flake8-comprehensions>=3.1.0] diff --git a/environment.yml b/environment.yml index 15c1611169427..72826124bc35d 100644 --- a/environment.yml +++ b/environment.yml @@ -20,7 +20,7 @@ dependencies: # code checks - black=20.8b1 - cpplint - - flake8 + - flake8=3.9.2 - flake8-comprehensions>=3.1.0 # used by flake8, linting of unnecessary comprehensions - isort>=5.2.1 # check that imports are in the right order - mypy=0.782 diff --git a/requirements-dev.txt b/requirements-dev.txt index f026fd421f937..5a64156fe997f 100644 --- a/requirements-dev.txt +++ b/requirements-dev.txt @@ -8,7 +8,7 @@ asv cython>=0.29.21 black==20.8b1 cpplint -flake8 +flake8==3.9.2 flake8-comprehensions>=3.1.0 isort>=5.2.1 mypy==0.782 diff --git a/scripts/tests/test_validate_docstrings.py b/scripts/tests/test_validate_docstrings.py index 7e4c68ddc183b..cbf3e84044d53 100644 --- a/scripts/tests/test_validate_docstrings.py +++ b/scripts/tests/test_validate_docstrings.py @@ -165,7 +165,7 @@ def test_bad_class(self, capsys): "indentation_is_not_a_multiple_of_four", # with flake8 3.9.0, the message ends with four spaces, # whereas in earlier versions, it ended with "four" - ("flake8 error: E111 indentation is not a multiple of ",), + ("flake8 error: E111 indentation is not a multiple of 4",), ), ( "BadDocstrings",
cc @jreback. We could avoid the version changes in the pre-commit config if you prefer.
https://api.github.com/repos/pandas-dev/pandas/pulls/41478
2021-05-14T20:25:23Z
2021-05-24T10:46:22Z
2021-05-24T10:46:22Z
2021-11-13T19:32:56Z
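The backport above pins `flake8==3.9.2` (and relaxes the docstring-validator match string) after a flake8 upgrade changed an error message the test suite matched on. A stdlib sketch of the exact-pin vs minimum-bound distinction the requirements files express — the tuple-based comparison is a deliberate simplification; real tooling uses a PEP 440 version parser:

```python
def parse(v: str) -> tuple:
    # Naive version parse; assumes purely numeric dotted versions.
    return tuple(int(p) for p in v.split("."))

def satisfies(installed: str, spec: str) -> bool:
    """Check an installed version against an '==X.Y.Z' pin or a
    '>=X.Y.Z' lower bound (illustrative subset of specifier syntax)."""
    if spec.startswith("=="):
        return parse(installed) == parse(spec[2:])
    if spec.startswith(">="):
        return parse(installed) >= parse(spec[2:])
    raise ValueError(f"unsupported spec: {spec}")

# An exact pin accepts only that release, unlike a lower bound:
assert satisfies("3.9.2", "==3.9.2")
assert not satisfies("3.8.4", "==3.9.2")
assert satisfies("3.9.2", ">=3.1.0")
```

Pinning trades forward compatibility for reproducible CI output, which is exactly the property the changed flake8 message broke.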
DEPS: bump pyarrow version to 0.17.0 #38870
diff --git a/.github/workflows/database.yml b/.github/workflows/database.yml index a5aef7825c770..69f2e689c0228 100644 --- a/.github/workflows/database.yml +++ b/.github/workflows/database.yml @@ -70,7 +70,7 @@ jobs: - uses: conda-incubator/setup-miniconda@v2 with: activate-environment: pandas-dev - channel-priority: strict + channel-priority: flexible environment-file: ${{ matrix.ENV_FILE }} use-only-tar-bz2: true diff --git a/ci/deps/actions-37-db-min.yaml b/ci/deps/actions-37-db-min.yaml index 1d3794576220a..65c4c5769b1a3 100644 --- a/ci/deps/actions-37-db-min.yaml +++ b/ci/deps/actions-37-db-min.yaml @@ -31,7 +31,8 @@ dependencies: - openpyxl - pandas-gbq - google-cloud-bigquery>=1.27.2 # GH 36436 - - pyarrow=0.17 # GH 38803 + - protobuf>=3.12.4 + - pyarrow=0.17.1 # GH 38803 - pytables>=3.5.1 - scipy - xarray=0.12.3 diff --git a/ci/deps/actions-37-db.yaml b/ci/deps/actions-37-db.yaml index 8755e1a02c3cf..fa58f412cebf4 100644 --- a/ci/deps/actions-37-db.yaml +++ b/ci/deps/actions-37-db.yaml @@ -31,7 +31,7 @@ dependencies: - pandas-gbq - google-cloud-bigquery>=1.27.2 # GH 36436 - psycopg2 - - pyarrow>=0.15.0 + - pyarrow>=0.17.0 - pymysql - pytables - python-snappy diff --git a/ci/deps/actions-37-minimum_versions.yaml b/ci/deps/actions-37-minimum_versions.yaml index 3237cf9770220..aa5284e4f35d1 100644 --- a/ci/deps/actions-37-minimum_versions.yaml +++ b/ci/deps/actions-37-minimum_versions.yaml @@ -23,7 +23,7 @@ dependencies: - pytables=3.5.1 - python-dateutil=2.7.3 - pytz=2017.3 - - pyarrow=0.15 + - pyarrow=0.17.0 - scipy=1.2 - xlrd=1.2.0 - xlsxwriter=1.0.2 diff --git a/ci/deps/actions-37.yaml b/ci/deps/actions-37.yaml index f29830e9b3e79..a209a9099d2bb 100644 --- a/ci/deps/actions-37.yaml +++ b/ci/deps/actions-37.yaml @@ -18,7 +18,7 @@ dependencies: - numpy=1.19 - python-dateutil - nomkl - - pyarrow=0.15.1 + - pyarrow - pytz - s3fs>=0.4.0 - moto>=1.3.14 diff --git a/ci/deps/azure-macos-37.yaml b/ci/deps/azure-macos-37.yaml index 8c8b49ff3df5b..a0b1cdc684d2c 
100644 --- a/ci/deps/azure-macos-37.yaml +++ b/ci/deps/azure-macos-37.yaml @@ -1,6 +1,7 @@ name: pandas-dev channels: - defaults + - conda-forge dependencies: - python=3.7.* @@ -21,7 +22,7 @@ dependencies: - numexpr - numpy=1.17.3 - openpyxl - - pyarrow=0.15.1 + - pyarrow=0.17.0 - pytables - python-dateutil==2.7.3 - pytz diff --git a/ci/deps/azure-windows-37.yaml b/ci/deps/azure-windows-37.yaml index c9d22ffbead45..8266e3bc4d07d 100644 --- a/ci/deps/azure-windows-37.yaml +++ b/ci/deps/azure-windows-37.yaml @@ -26,7 +26,7 @@ dependencies: - numexpr - numpy=1.17.* - openpyxl - - pyarrow=0.15 + - pyarrow=0.17.0 - pytables - python-dateutil - pytz diff --git a/ci/deps/azure-windows-38.yaml b/ci/deps/azure-windows-38.yaml index 661d8813d32d2..200e695a69d1f 100644 --- a/ci/deps/azure-windows-38.yaml +++ b/ci/deps/azure-windows-38.yaml @@ -25,7 +25,7 @@ dependencies: - numpy=1.18.* - openpyxl - jinja2 - - pyarrow>=0.15.0 + - pyarrow>=0.17.0 - pytables - python-dateutil - pytz diff --git a/doc/source/getting_started/install.rst b/doc/source/getting_started/install.rst index 16beb00d201b7..ce35e9e15976f 100644 --- a/doc/source/getting_started/install.rst +++ b/doc/source/getting_started/install.rst @@ -358,7 +358,7 @@ PyTables 3.5.1 HDF5-based reading / writing blosc 1.17.0 Compression for HDF5 zlib Compression for HDF5 fastparquet 0.4.0 Parquet reading / writing -pyarrow 0.15.0 Parquet, ORC, and feather reading / writing +pyarrow 0.17.0 Parquet, ORC, and feather reading / writing pyreadstat SPSS files (.sav) reading ========================= ================== ============================================================= diff --git a/doc/source/whatsnew/v1.3.0.rst b/doc/source/whatsnew/v1.3.0.rst index 622029adf357f..b83a5916a075e 100644 --- a/doc/source/whatsnew/v1.3.0.rst +++ b/doc/source/whatsnew/v1.3.0.rst @@ -578,7 +578,7 @@ Optional libraries below the lowest tested version may still work, but are not c +-----------------+-----------------+---------+ | openpyxl | 
3.0.0 | X | +-----------------+-----------------+---------+ -| pyarrow | 0.15.0 | | +| pyarrow | 0.17.0 | X | +-----------------+-----------------+---------+ | pymysql | 0.8.1 | X | +-----------------+-----------------+---------+ diff --git a/environment.yml b/environment.yml index 67b42d545af88..56a36c593a458 100644 --- a/environment.yml +++ b/environment.yml @@ -100,7 +100,7 @@ dependencies: - odfpy - fastparquet>=0.3.2 # pandas.read_parquet, DataFrame.to_parquet - - pyarrow>=0.15.0 # pandas.read_parquet, DataFrame.to_parquet, pandas.read_feather, DataFrame.to_feather + - pyarrow>=0.17.0 # pandas.read_parquet, DataFrame.to_parquet, pandas.read_feather, DataFrame.to_feather - python-snappy # required by pyarrow - pyqt>=5.9.2 # pandas.read_clipboard diff --git a/pandas/compat/_optional.py b/pandas/compat/_optional.py index 0ef6da53191c5..f8eccfeb2c60a 100644 --- a/pandas/compat/_optional.py +++ b/pandas/compat/_optional.py @@ -21,7 +21,7 @@ "odfpy": "1.3.0", "openpyxl": "3.0.0", "pandas_gbq": "0.12.0", - "pyarrow": "0.15.0", + "pyarrow": "0.17.0", "pytest": "5.0.1", "pyxlsb": "1.0.6", "s3fs": "0.4.0", diff --git a/pandas/tests/arrays/interval/test_interval.py b/pandas/tests/arrays/interval/test_interval.py index 6ae3f75069899..7d27b617c0e6e 100644 --- a/pandas/tests/arrays/interval/test_interval.py +++ b/pandas/tests/arrays/interval/test_interval.py @@ -165,7 +165,7 @@ def test_repr(): # Arrow interaction -pyarrow_skip = td.skip_if_no("pyarrow", min_version="0.16.0") +pyarrow_skip = td.skip_if_no("pyarrow") @pyarrow_skip diff --git a/pandas/tests/arrays/masked/test_arrow_compat.py b/pandas/tests/arrays/masked/test_arrow_compat.py index 193017ddfcadf..9f755412dbf39 100644 --- a/pandas/tests/arrays/masked/test_arrow_compat.py +++ b/pandas/tests/arrays/masked/test_arrow_compat.py @@ -6,7 +6,7 @@ import pandas as pd import pandas._testing as tm -pa = pytest.importorskip("pyarrow", minversion="0.15.0") +pa = pytest.importorskip("pyarrow", minversion="0.17.0") from 
pandas.core.arrays._arrow_utils import pyarrow_array_to_numpy_and_mask @@ -21,8 +21,6 @@ def data(request): def test_arrow_array(data): - # protocol added in 0.15.0 - arr = pa.array(data) expected = pa.array( data.to_numpy(object, na_value=None), @@ -31,10 +29,8 @@ def test_arrow_array(data): assert arr.equals(expected) -@td.skip_if_no("pyarrow", min_version="0.16.0") +@td.skip_if_no("pyarrow") def test_arrow_roundtrip(data): - # roundtrip possible from arrow 0.16.0 - df = pd.DataFrame({"a": data}) table = pa.table(df) assert table.field("a").type == str(data.dtype.numpy_dtype) @@ -43,7 +39,7 @@ def test_arrow_roundtrip(data): tm.assert_frame_equal(result, df) -@td.skip_if_no("pyarrow", min_version="0.16.0") +@td.skip_if_no("pyarrow") def test_arrow_load_from_zero_chunks(data): # GH-41040 @@ -58,7 +54,7 @@ def test_arrow_load_from_zero_chunks(data): tm.assert_frame_equal(result, df) -@td.skip_if_no("pyarrow", min_version="0.16.0") +@td.skip_if_no("pyarrow") def test_arrow_from_arrow_uint(): # https://github.com/pandas-dev/pandas/issues/31896 # possible mismatch in types @@ -70,7 +66,7 @@ def test_arrow_from_arrow_uint(): tm.assert_extension_array_equal(result, expected) -@td.skip_if_no("pyarrow", min_version="0.16.0") +@td.skip_if_no("pyarrow") def test_arrow_sliced(data): # https://github.com/pandas-dev/pandas/issues/38525 @@ -165,7 +161,7 @@ def test_pyarrow_array_to_numpy_and_mask(np_dtype_to_arrays): tm.assert_numpy_array_equal(mask, mask_expected_empty) -@td.skip_if_no("pyarrow", min_version="0.16.0") +@td.skip_if_no("pyarrow") def test_from_arrow_type_error(request, data): # ensure that __from_arrow__ returns a TypeError when getting a wrong # array type diff --git a/pandas/tests/arrays/period/test_arrow_compat.py b/pandas/tests/arrays/period/test_arrow_compat.py index d7b0704cdfb05..5211397f20c36 100644 --- a/pandas/tests/arrays/period/test_arrow_compat.py +++ b/pandas/tests/arrays/period/test_arrow_compat.py @@ -11,7 +11,7 @@ period_array, ) -pyarrow_skip = 
pyarrow_skip = td.skip_if_no("pyarrow", min_version="0.16.0") +pyarrow_skip = td.skip_if_no("pyarrow", min_version="0.17.0") @pyarrow_skip diff --git a/pandas/tests/arrays/string_/test_string.py b/pandas/tests/arrays/string_/test_string.py index 3205664e7c80a..e3b43c544a477 100644 --- a/pandas/tests/arrays/string_/test_string.py +++ b/pandas/tests/arrays/string_/test_string.py @@ -437,7 +437,7 @@ def test_fillna_args(dtype, request): arr.fillna(value=1) -@td.skip_if_no("pyarrow", min_version="0.15.0") +@td.skip_if_no("pyarrow") def test_arrow_array(dtype): # protocol added in 0.15.0 import pyarrow as pa @@ -451,7 +451,7 @@ def test_arrow_array(dtype): assert arr.equals(expected) -@td.skip_if_no("pyarrow", min_version="0.16.0") +@td.skip_if_no("pyarrow") def test_arrow_roundtrip(dtype, dtype_object): # roundtrip possible from arrow 1.0.0 import pyarrow as pa @@ -467,7 +467,7 @@ def test_arrow_roundtrip(dtype, dtype_object): assert result.loc[2, "a"] is pd.NA -@td.skip_if_no("pyarrow", min_version="0.16.0") +@td.skip_if_no("pyarrow") def test_arrow_load_from_zero_chunks(dtype, dtype_object): # GH-41040 import pyarrow as pa diff --git a/pandas/tests/io/test_feather.py b/pandas/tests/io/test_feather.py index a5254f5ff7988..ba8a9ed070236 100644 --- a/pandas/tests/io/test_feather.py +++ b/pandas/tests/io/test_feather.py @@ -6,14 +6,12 @@ import pandas as pd import pandas._testing as tm -from pandas.util.version import Version from pandas.io.feather_format import read_feather, to_feather # isort:skip pyarrow = pytest.importorskip("pyarrow") -pyarrow_version = Version(pyarrow.__version__) filter_sparse = pytest.mark.filterwarnings("ignore:The Sparse") @@ -89,12 +87,11 @@ def test_basic(self): ), } ) - if pyarrow_version >= Version("0.17.0"): - df["periods"] = pd.period_range("2013", freq="M", periods=3) - df["timedeltas"] = pd.timedelta_range("1 day", periods=3) - # TODO temporary disable due to regression in pyarrow 0.17.1 - # 
https://github.com/pandas-dev/pandas/issues/34255 - # df["intervals"] = pd.interval_range(0, 3, 3) + df["periods"] = pd.period_range("2013", freq="M", periods=3) + df["timedeltas"] = pd.timedelta_range("1 day", periods=3) + # TODO temporary disable due to regression in pyarrow 0.17.1 + # https://github.com/pandas-dev/pandas/issues/34255 + # df["intervals"] = pd.interval_range(0, 3, 3) assert df.dttz.dtype.tz.zone == "US/Eastern" self.check_round_trip(df) diff --git a/pandas/tests/io/test_parquet.py b/pandas/tests/io/test_parquet.py index f66451cd72309..ae6425cd93ac5 100644 --- a/pandas/tests/io/test_parquet.py +++ b/pandas/tests/io/test_parquet.py @@ -17,6 +17,10 @@ PY38, is_platform_windows, ) +from pandas.compat.pyarrow import ( + pa_version_under1p0, + pa_version_under2p0, +) import pandas.util._test_decorators as td import pandas as pd @@ -653,8 +657,6 @@ def test_categorical(self, pa): ) def test_s3_roundtrip_explicit_fs(self, df_compat, s3_resource, pa, s3so): s3fs = pytest.importorskip("s3fs") - if Version(pyarrow.__version__) <= Version("0.17.0"): - pytest.skip() s3 = s3fs.S3FileSystem(**s3so) kw = {"filesystem": s3} check_round_trip( @@ -666,8 +668,6 @@ def test_s3_roundtrip_explicit_fs(self, df_compat, s3_resource, pa, s3so): ) def test_s3_roundtrip(self, df_compat, s3_resource, pa, s3so): - if Version(pyarrow.__version__) <= Version("0.17.0"): - pytest.skip() # GH #19134 s3so = {"storage_options": s3so} check_round_trip( @@ -698,14 +698,12 @@ def test_s3_roundtrip_for_dir( # These are added to back of dataframe on read. 
In new API category dtype is # only used if partition field is string, but this changed again to use # category dtype for all types (not only strings) in pyarrow 2.0.0 - pa10 = (Version(pyarrow.__version__) >= Version("1.0.0")) and ( - Version(pyarrow.__version__) < Version("2.0.0") - ) if partition_col: - if pa10: - partition_col_type = "int32" - else: - partition_col_type = "category" + partition_col_type = ( + "int32" + if (not pa_version_under1p0) and pa_version_under2p0 + else "category" + ) expected_df[partition_col] = expected_df[partition_col].astype( partition_col_type @@ -795,7 +793,7 @@ def test_write_with_schema(self, pa): out_df = df.astype(bool) check_round_trip(df, pa, write_kwargs={"schema": schema}, expected=out_df) - @td.skip_if_no("pyarrow", min_version="0.15.0") + @td.skip_if_no("pyarrow") def test_additional_extension_arrays(self, pa): # test additional ExtensionArrays that are supported through the # __arrow_array__ protocol @@ -806,22 +804,10 @@ def test_additional_extension_arrays(self, pa): "c": pd.Series(["a", None, "c"], dtype="string"), } ) - if Version(pyarrow.__version__) >= Version("0.16.0"): - expected = df - else: - # de-serialized as plain int / object - expected = df.assign( - a=df.a.astype("int64"), b=df.b.astype("int64"), c=df.c.astype("object") - ) - check_round_trip(df, pa, expected=expected) + check_round_trip(df, pa) df = pd.DataFrame({"a": pd.Series([1, 2, 3, None], dtype="Int64")}) - if Version(pyarrow.__version__) >= Version("0.16.0"): - expected = df - else: - # if missing values in integer, currently de-serialized as float - expected = df.assign(a=df.a.astype("float64")) - check_round_trip(df, pa, expected=expected) + check_round_trip(df, pa) @td.skip_if_no("pyarrow", min_version="1.0.0") def test_pyarrow_backed_string_array(self, pa): @@ -831,7 +817,7 @@ def test_pyarrow_backed_string_array(self, pa): df = pd.DataFrame({"a": pd.Series(["a", None, "c"], dtype="arrow_string")}) check_round_trip(df, pa, expected=df) - 
@td.skip_if_no("pyarrow", min_version="0.16.0") + @td.skip_if_no("pyarrow") def test_additional_extension_types(self, pa): # test additional ExtensionArrays that are supported through the # __arrow_array__ protocol + by defining a custom ExtensionType @@ -844,7 +830,7 @@ def test_additional_extension_types(self, pa): ) check_round_trip(df, pa) - @td.skip_if_no("pyarrow", min_version="0.16.0") + @td.skip_if_no("pyarrow") def test_use_nullable_dtypes(self, pa): import pyarrow.parquet as pq @@ -880,7 +866,7 @@ def test_timestamp_nanoseconds(self, pa): check_round_trip(df, pa, write_kwargs={"version": "2.0"}) def test_timezone_aware_index(self, pa, timezone_aware_date_list): - if Version(pyarrow.__version__) >= Version("2.0.0"): + if not pa_version_under2p0: # temporary skip this test until it is properly resolved # https://github.com/pandas-dev/pandas/issues/37286 pytest.skip() diff --git a/requirements-dev.txt b/requirements-dev.txt index 35fb6d18a7e81..d1fafbbf9101d 100644 --- a/requirements-dev.txt +++ b/requirements-dev.txt @@ -65,7 +65,7 @@ xlsxwriter xlwt odfpy fastparquet>=0.3.2 -pyarrow>=0.15.0 +pyarrow>=0.17.0 python-snappy pyqt5>=5.9.2 tables>=3.5.1
closes #38870
https://api.github.com/repos/pandas-dev/pandas/pulls/41476
2021-05-14T16:58:11Z
2021-05-17T15:21:56Z
2021-05-17T15:21:56Z
2021-06-18T02:25:25Z
DEPR: dropping nuisance columns in DataFrameGroupby apply, agg, transform
diff --git a/doc/source/user_guide/groupby.rst b/doc/source/user_guide/groupby.rst index ef6d45fa0140b..7a55acbd3031d 100644 --- a/doc/source/user_guide/groupby.rst +++ b/doc/source/user_guide/groupby.rst @@ -1000,6 +1000,7 @@ instance method on each data group. This is pretty easy to do by passing lambda functions: .. ipython:: python + :okwarning: grouped = df.groupby("A") grouped.agg(lambda x: x.std()) @@ -1009,6 +1010,7 @@ arguments. Using a bit of metaprogramming cleverness, GroupBy now has the ability to "dispatch" method calls to the groups: .. ipython:: python + :okwarning: grouped.std() diff --git a/doc/source/whatsnew/v1.3.0.rst b/doc/source/whatsnew/v1.3.0.rst index d357e4a633347..e2e05d98845f6 100644 --- a/doc/source/whatsnew/v1.3.0.rst +++ b/doc/source/whatsnew/v1.3.0.rst @@ -720,6 +720,44 @@ For example: A 24 dtype: int64 + +Similarly, when applying a function to :class:`DataFrameGroupBy`, columns on which +the function raises ``TypeError`` are currently silently ignored and dropped +from the result. + +This behavior is deprecated. In a future version, the ``TypeError`` +will be raised, and users will need to select only valid columns before calling +the function. + +For example: + +.. ipython:: python + + df = pd.DataFrame({"A": [1, 2, 3, 4], "B": pd.date_range("2016-01-01", periods=4)}) + gb = df.groupby([1, 1, 2, 2]) + +*Old behavior*: + +.. code-block:: ipython + + In [4]: gb.prod(numeric_only=False) + Out[4]: + A + 1 2 + 2 12 + +.. code-block:: ipython + + In [5]: gb.prod(numeric_only=False) + ... + TypeError: datetime64 type does not support prod operations + + In [6]: gb[["A"]].prod(numeric_only=False) + Out[6]: + A + 1 2 + 2 12 + .. 
--------------------------------------------------------------------------- diff --git a/pandas/core/groupby/generic.py b/pandas/core/groupby/generic.py index c38c51d46f83e..dec68ab8f392d 100644 --- a/pandas/core/groupby/generic.py +++ b/pandas/core/groupby/generic.py @@ -1087,6 +1087,15 @@ def array_func(values: ArrayLike) -> ArrayLike: if not len(new_mgr) and len(orig): # If the original Manager was already empty, no need to raise raise DataError("No numeric types to aggregate") + if len(new_mgr) < len(data): + warnings.warn( + f"Dropping invalid columns in {type(self).__name__}.{how} " + "is deprecated. In a future version, a TypeError will be raised. " + f"Before calling .{how}, select only columns which should be " + "valid for the function.", + FutureWarning, + stacklevel=4, + ) return self._wrap_agged_manager(new_mgr) @@ -1283,6 +1292,16 @@ def arr_func(bvalues: ArrayLike) -> ArrayLike: res_mgr = mgr.grouped_reduce(arr_func, ignore_failures=True) res_mgr.set_axis(1, mgr.axes[1]) + if len(res_mgr) < len(mgr): + warnings.warn( + f"Dropping invalid columns in {type(self).__name__}.{how} " + "is deprecated. In a future version, a TypeError will be raised. " + f"Before calling .{how}, select only columns which should be " + "valid for the transforming function.", + FutureWarning, + stacklevel=4, + ) + res_df = self.obj._constructor(res_mgr) if self.axis == 1: res_df = res_df.T @@ -1420,7 +1439,14 @@ def _transform_item_by_item(self, obj: DataFrame, wrapper) -> DataFrame: output[i] = sgb.transform(wrapper) except TypeError: # e.g. trying to call nanmean with string values - pass + warnings.warn( + f"Dropping invalid columns in {type(self).__name__}.transform " + "is deprecated. In a future version, a TypeError will be raised. 
" + "Before calling .transform, select only columns which should be " + "valid for the transforming function.", + FutureWarning, + stacklevel=5, + ) else: inds.append(i) diff --git a/pandas/core/groupby/groupby.py b/pandas/core/groupby/groupby.py index b27eb4bb8f325..3ce9b9bececf0 100644 --- a/pandas/core/groupby/groupby.py +++ b/pandas/core/groupby/groupby.py @@ -30,6 +30,7 @@ class providing the base-class of operations. Union, cast, ) +import warnings import numpy as np @@ -1270,6 +1271,14 @@ def _python_agg_general(self, func, *args, **kwargs): # if this function is invalid for this dtype, we will ignore it. result = self.grouper.agg_series(obj, f) except TypeError: + warnings.warn( + f"Dropping invalid columns in {type(self).__name__}.agg " + "is deprecated. In a future version, a TypeError will be raised. " + "Before calling .agg, select only columns which should be " + "valid for the aggregating function.", + FutureWarning, + stacklevel=3, + ) continue key = base.OutputKey(label=name, position=idx) @@ -2829,6 +2838,16 @@ def _get_cythonized_result( vals, inferences = pre_processing(vals) except TypeError as err: error_msg = str(err) + howstr = how.replace("group_", "") + warnings.warn( + "Dropping invalid columns in " + f"{type(self).__name__}.{howstr} is deprecated. " + "In a future version, a TypeError will be raised. 
" + f"Before calling .{howstr}, select only columns which " + "should be valid for the function.", + FutureWarning, + stacklevel=3, + ) continue vals = vals.astype(cython_dtype, copy=False) if needs_2d: diff --git a/pandas/tests/groupby/aggregate/test_aggregate.py b/pandas/tests/groupby/aggregate/test_aggregate.py index 18739320d90d3..eb82e03aea82f 100644 --- a/pandas/tests/groupby/aggregate/test_aggregate.py +++ b/pandas/tests/groupby/aggregate/test_aggregate.py @@ -257,7 +257,8 @@ def func(ser): else: return ser.sum() - result = grouped.aggregate(func) + with tm.assert_produces_warning(FutureWarning, match="Dropping invalid columns"): + result = grouped.aggregate(func) exp_grouped = three_group.loc[:, three_group.columns != "C"] expected = exp_grouped.groupby(["A", "B"]).aggregate(func) tm.assert_frame_equal(result, expected) @@ -1020,6 +1021,7 @@ def test_mangle_series_groupby(self): tm.assert_frame_equal(result, expected) @pytest.mark.xfail(reason="GH-26611. kwargs for multi-agg.") + @pytest.mark.filterwarnings("ignore:Dropping invalid columns:FutureWarning") def test_with_kwargs(self): f1 = lambda x, y, b=1: x.sum() + y + b f2 = lambda x, y, b=2: x.sum() + y * b diff --git a/pandas/tests/groupby/aggregate/test_other.py b/pandas/tests/groupby/aggregate/test_other.py index 681192881c301..4d30543355d47 100644 --- a/pandas/tests/groupby/aggregate/test_other.py +++ b/pandas/tests/groupby/aggregate/test_other.py @@ -44,9 +44,16 @@ def test_agg_api(): def peak_to_peak(arr): return arr.max() - arr.min() - expected = grouped.agg([peak_to_peak]) + with tm.assert_produces_warning( + FutureWarning, match="Dropping invalid", check_stacklevel=False + ): + expected = grouped.agg([peak_to_peak]) expected.columns = ["data1", "data2"] - result = grouped.agg(peak_to_peak) + + with tm.assert_produces_warning( + FutureWarning, match="Dropping invalid", check_stacklevel=False + ): + result = grouped.agg(peak_to_peak) tm.assert_frame_equal(result, expected) @@ -294,7 +301,8 @@ def 
raiseException(df): raise TypeError("test") with pytest.raises(TypeError, match="test"): - df.groupby(0).agg(raiseException) + with tm.assert_produces_warning(FutureWarning, match="Dropping invalid"): + df.groupby(0).agg(raiseException) def test_series_agg_multikey(): diff --git a/pandas/tests/groupby/test_function.py b/pandas/tests/groupby/test_function.py index 4fa21a259e7cb..95bb010015f62 100644 --- a/pandas/tests/groupby/test_function.py +++ b/pandas/tests/groupby/test_function.py @@ -87,13 +87,15 @@ def test_max_min_object_multiple_columns(using_array_manager): gb = df.groupby("A") - result = gb.max(numeric_only=False) + with tm.assert_produces_warning(FutureWarning, match="Dropping invalid"): + result = gb.max(numeric_only=False) # "max" is valid for column "C" but not for "B" ei = Index([1, 2, 3], name="A") expected = DataFrame({"C": ["b", "d", "e"]}, index=ei) tm.assert_frame_equal(result, expected) - result = gb.min(numeric_only=False) + with tm.assert_produces_warning(FutureWarning, match="Dropping invalid"): + result = gb.min(numeric_only=False) # "min" is valid for column "C" but not for "B" ei = Index([1, 2, 3], name="A") expected = DataFrame({"C": ["a", "c", "e"]}, index=ei) @@ -221,7 +223,10 @@ def test_averages(self, df, method): ], ) - result = getattr(gb, method)(numeric_only=False) + with tm.assert_produces_warning( + FutureWarning, match="Dropping invalid", check_stacklevel=False + ): + result = getattr(gb, method)(numeric_only=False) tm.assert_frame_equal(result.reindex_like(expected), expected) expected_columns = expected.columns @@ -303,10 +308,27 @@ def test_cummin_cummax(self, df, method): def _check(self, df, method, expected_columns, expected_columns_numeric): gb = df.groupby("group") - result = getattr(gb, method)() + # cummin, cummax dont have numeric_only kwarg, always use False + warn = None + if method in ["cummin", "cummax"]: + # these dont have numeric_only kwarg, always use False + warn = FutureWarning + elif method in ["min", 
"max"]: + # these have numeric_only kwarg, but default to False + warn = FutureWarning + + with tm.assert_produces_warning(warn, match="Dropping invalid columns"): + result = getattr(gb, method)() + tm.assert_index_equal(result.columns, expected_columns_numeric) - result = getattr(gb, method)(numeric_only=False) + # GH#41475 deprecated silently ignoring nuisance columns + warn = None + if len(expected_columns) < len(gb._obj_with_exclusions.columns): + warn = FutureWarning + with tm.assert_produces_warning(warn, match="Dropping invalid columns"): + result = getattr(gb, method)(numeric_only=False) + tm.assert_index_equal(result.columns, expected_columns) diff --git a/pandas/tests/groupby/test_groupby.py b/pandas/tests/groupby/test_groupby.py index c37dc17b85dd2..29402d6b8d538 100644 --- a/pandas/tests/groupby/test_groupby.py +++ b/pandas/tests/groupby/test_groupby.py @@ -923,7 +923,8 @@ def aggfun(ser): else: return ser.sum() - agged2 = df.groupby(keys).aggregate(aggfun) + with tm.assert_produces_warning(FutureWarning, match="Dropping invalid columns"): + agged2 = df.groupby(keys).aggregate(aggfun) assert len(agged2.columns) + 1 == len(df.columns) @@ -1757,6 +1758,7 @@ def test_pivot_table_values_key_error(): @pytest.mark.parametrize( "op", ["idxmax", "idxmin", "mad", "min", "max", "sum", "prod", "skew"] ) +@pytest.mark.filterwarnings("ignore:Dropping invalid columns:FutureWarning") @pytest.mark.filterwarnings("ignore:.*Select only valid:FutureWarning") def test_empty_groupby(columns, keys, values, method, op, request): # GH8093 & GH26411 diff --git a/pandas/tests/groupby/test_quantile.py b/pandas/tests/groupby/test_quantile.py index 9c9d1aa881890..90437b9139594 100644 --- a/pandas/tests/groupby/test_quantile.py +++ b/pandas/tests/groupby/test_quantile.py @@ -155,7 +155,10 @@ def test_quantile_raises(): df = DataFrame([["foo", "a"], ["foo", "b"], ["foo", "c"]], columns=["key", "val"]) with pytest.raises(TypeError, match="cannot be performed against 'object' dtypes"): 
- df.groupby("key").quantile() + with tm.assert_produces_warning( + FutureWarning, match="Dropping invalid columns" + ): + df.groupby("key").quantile() def test_quantile_out_of_bounds_q_raises(): @@ -236,7 +239,11 @@ def test_groupby_quantile_nullable_array(values, q): @pytest.mark.parametrize("q", [0.5, [0.0, 0.5, 1.0]]) def test_groupby_quantile_skips_invalid_dtype(q): df = DataFrame({"a": [1], "b": [2.0], "c": ["x"]}) - result = df.groupby("a").quantile(q) + + warn = None if isinstance(q, list) else FutureWarning + with tm.assert_produces_warning(warn, match="Dropping invalid columns"): + result = df.groupby("a").quantile(q) + expected = df.groupby("a")[["b"]].quantile(q) tm.assert_frame_equal(result, expected) diff --git a/pandas/tests/groupby/transform/test_transform.py b/pandas/tests/groupby/transform/test_transform.py index 09317cbeec658..1949d03998512 100644 --- a/pandas/tests/groupby/transform/test_transform.py +++ b/pandas/tests/groupby/transform/test_transform.py @@ -409,7 +409,9 @@ def test_transform_exclude_nuisance(df, duplicates): grouped = df.groupby("A") gbc = grouped["C"] - expected["C"] = gbc.transform(np.mean) + warn = FutureWarning if duplicates else None + with tm.assert_produces_warning(warn, match="Dropping invalid columns"): + expected["C"] = gbc.transform(np.mean) if duplicates: # squeeze 1-column DataFrame down to Series expected["C"] = expected["C"]["C"] @@ -422,14 +424,16 @@ def test_transform_exclude_nuisance(df, duplicates): expected["D"] = grouped["D"].transform(np.mean) expected = DataFrame(expected) - result = df.groupby("A").transform(np.mean) + with tm.assert_produces_warning(FutureWarning, match="Dropping invalid columns"): + result = df.groupby("A").transform(np.mean) tm.assert_frame_equal(result, expected) def test_transform_function_aliases(df): - result = df.groupby("A").transform("mean") - expected = df.groupby("A").transform(np.mean) + with tm.assert_produces_warning(FutureWarning, match="Dropping invalid columns"): + 
result = df.groupby("A").transform("mean") + expected = df.groupby("A").transform(np.mean) tm.assert_frame_equal(result, expected) result = df.groupby("A")["C"].transform("mean") @@ -498,7 +502,10 @@ def test_groupby_transform_with_int(): } ) with np.errstate(all="ignore"): - result = df.groupby("A").transform(lambda x: (x - x.mean()) / x.std()) + with tm.assert_produces_warning( + FutureWarning, match="Dropping invalid columns" + ): + result = df.groupby("A").transform(lambda x: (x - x.mean()) / x.std()) expected = DataFrame( {"B": np.nan, "C": Series([-1, 0, 1, -1, 0, 1], dtype="float64")} ) @@ -514,7 +521,10 @@ def test_groupby_transform_with_int(): } ) with np.errstate(all="ignore"): - result = df.groupby("A").transform(lambda x: (x - x.mean()) / x.std()) + with tm.assert_produces_warning( + FutureWarning, match="Dropping invalid columns" + ): + result = df.groupby("A").transform(lambda x: (x - x.mean()) / x.std()) expected = DataFrame({"B": np.nan, "C": [-1.0, 0.0, 1.0, -1.0, 0.0, 1.0]}) tm.assert_frame_equal(result, expected) @@ -522,7 +532,10 @@ def test_groupby_transform_with_int(): s = Series([2, 3, 4, 10, 5, -1]) df = DataFrame({"A": [1, 1, 1, 2, 2, 2], "B": 1, "C": s, "D": "foo"}) with np.errstate(all="ignore"): - result = df.groupby("A").transform(lambda x: (x - x.mean()) / x.std()) + with tm.assert_produces_warning( + FutureWarning, match="Dropping invalid columns" + ): + result = df.groupby("A").transform(lambda x: (x - x.mean()) / x.std()) s1 = s.iloc[0:3] s1 = (s1 - s1.mean()) / s1.std() @@ -532,7 +545,8 @@ def test_groupby_transform_with_int(): tm.assert_frame_equal(result, expected) # int doesn't get downcasted - result = df.groupby("A").transform(lambda x: x * 2 / 2) + with tm.assert_produces_warning(FutureWarning, match="Dropping invalid columns"): + result = df.groupby("A").transform(lambda x: x * 2 / 2) expected = DataFrame({"B": 1.0, "C": [2.0, 3.0, 4.0, 10.0, 5.0, -1.0]}) tm.assert_frame_equal(result, expected) @@ -791,7 +805,11 @@ def 
test_transform_numeric_ret(cols, exp, comp_func, agg_func, request): {"a": date_range("2018-01-01", periods=3), "b": range(3), "c": range(7, 10)} ) - result = df.groupby("b")[cols].transform(agg_func) + warn = FutureWarning + if isinstance(exp, Series) or agg_func != "size": + warn = None + with tm.assert_produces_warning(warn, match="Dropping invalid columns"): + result = df.groupby("b")[cols].transform(agg_func) if agg_func == "rank": exp = exp.astype("float") @@ -1103,7 +1121,12 @@ def test_transform_agg_by_name(request, reduction_func, obj): args = {"nth": [0], "quantile": [0.5], "corrwith": [obj]}.get(func, []) - result = g.transform(func, *args) + warn = None + if isinstance(obj, DataFrame) and func == "size": + warn = FutureWarning + + with tm.assert_produces_warning(warn, match="Dropping invalid columns"): + result = g.transform(func, *args) # this is the *definition* of a transformation tm.assert_index_equal(result.index, obj.index)
- [x] closes #21664 - [x] tests added / passed - [x] Ensure all linting tests pass, see [here](https://pandas.pydata.org/pandas-docs/dev/development/contributing.html#code-standards) for how to run them - [x] whatsnew entry Discussed on this week's call.
https://api.github.com/repos/pandas-dev/pandas/pulls/41475
2021-05-14T16:52:37Z
2021-05-26T01:53:59Z
2021-05-26T01:53:59Z
2022-02-19T17:29:08Z
TST/CLN: move some tests
diff --git a/pandas/tests/strings/test_case_justify.py b/pandas/tests/strings/test_case_justify.py index d6e2ca7399b4e..24e1f99ad7956 100644 --- a/pandas/tests/strings/test_case_justify.py +++ b/pandas/tests/strings/test_case_justify.py @@ -81,6 +81,15 @@ def test_swapcase_mixed_object(): tm.assert_series_equal(result, expected) +def test_casefold(): + # GH25405 + expected = Series(["ss", np.nan, "case", "ssd"]) + s = Series(["ß", np.nan, "case", "ßd"]) + result = s.str.casefold() + + tm.assert_series_equal(result, expected) + + def test_casemethods(any_string_dtype): values = ["aaa", "bbb", "CCC", "Dddd", "eEEE"] s = Series(values, dtype=any_string_dtype) diff --git a/pandas/tests/strings/test_get_dummies.py b/pandas/tests/strings/test_get_dummies.py new file mode 100644 index 0000000000000..31386e4e342ae --- /dev/null +++ b/pandas/tests/strings/test_get_dummies.py @@ -0,0 +1,53 @@ +import numpy as np + +from pandas import ( + DataFrame, + Index, + MultiIndex, + Series, + _testing as tm, +) + + +def test_get_dummies(any_string_dtype): + s = Series(["a|b", "a|c", np.nan], dtype=any_string_dtype) + result = s.str.get_dummies("|") + expected = DataFrame([[1, 1, 0], [1, 0, 1], [0, 0, 0]], columns=list("abc")) + tm.assert_frame_equal(result, expected) + + s = Series(["a;b", "a", 7], dtype=any_string_dtype) + result = s.str.get_dummies(";") + expected = DataFrame([[0, 1, 1], [0, 1, 0], [1, 0, 0]], columns=list("7ab")) + tm.assert_frame_equal(result, expected) + + +def test_get_dummies_index(): + # GH9980, GH8028 + idx = Index(["a|b", "a|c", "b|c"]) + result = idx.str.get_dummies("|") + + expected = MultiIndex.from_tuples( + [(1, 1, 0), (1, 0, 1), (0, 1, 1)], names=("a", "b", "c") + ) + tm.assert_index_equal(result, expected) + + +def test_get_dummies_with_name_dummy(any_string_dtype): + # GH 12180 + # Dummies named 'name' should work as expected + s = Series(["a", "b,name", "b"], dtype=any_string_dtype) + result = s.str.get_dummies(",") + expected = DataFrame([[1, 0, 
0], [0, 1, 1], [0, 1, 0]], columns=["a", "b", "name"]) + tm.assert_frame_equal(result, expected) + + +def test_get_dummies_with_name_dummy_index(): + # GH 12180 + # Dummies named 'name' should work as expected + idx = Index(["a|b", "name|c", "b|name"]) + result = idx.str.get_dummies("|") + + expected = MultiIndex.from_tuples( + [(1, 1, 0, 0), (0, 0, 1, 1), (0, 1, 0, 1)], names=("a", "b", "c", "name") + ) + tm.assert_index_equal(result, expected) diff --git a/pandas/tests/strings/test_strings.py b/pandas/tests/strings/test_strings.py index 80010de047cd5..42d81154dea0f 100644 --- a/pandas/tests/strings/test_strings.py +++ b/pandas/tests/strings/test_strings.py @@ -303,50 +303,6 @@ def test_isnumeric(any_string_dtype): tm.assert_series_equal(s.str.isdecimal(), Series(decimal_e, dtype=dtype)) -def test_get_dummies(any_string_dtype): - s = Series(["a|b", "a|c", np.nan], dtype=any_string_dtype) - result = s.str.get_dummies("|") - expected = DataFrame([[1, 1, 0], [1, 0, 1], [0, 0, 0]], columns=list("abc")) - tm.assert_frame_equal(result, expected) - - s = Series(["a;b", "a", 7], dtype=any_string_dtype) - result = s.str.get_dummies(";") - expected = DataFrame([[0, 1, 1], [0, 1, 0], [1, 0, 0]], columns=list("7ab")) - tm.assert_frame_equal(result, expected) - - -def test_get_dummies_index(): - # GH9980, GH8028 - idx = Index(["a|b", "a|c", "b|c"]) - result = idx.str.get_dummies("|") - - expected = MultiIndex.from_tuples( - [(1, 1, 0), (1, 0, 1), (0, 1, 1)], names=("a", "b", "c") - ) - tm.assert_index_equal(result, expected) - - -def test_get_dummies_with_name_dummy(any_string_dtype): - # GH 12180 - # Dummies named 'name' should work as expected - s = Series(["a", "b,name", "b"], dtype=any_string_dtype) - result = s.str.get_dummies(",") - expected = DataFrame([[1, 0, 0], [0, 1, 1], [0, 1, 0]], columns=["a", "b", "name"]) - tm.assert_frame_equal(result, expected) - - -def test_get_dummies_with_name_dummy_index(): - # GH 12180 - # Dummies named 'name' should work as expected - 
idx = Index(["a|b", "name|c", "b|name"]) - result = idx.str.get_dummies("|") - - expected = MultiIndex.from_tuples( - [(1, 1, 0, 0), (0, 0, 1, 1), (0, 1, 0, 1)], names=("a", "b", "c", "name") - ) - tm.assert_index_equal(result, expected) - - def test_join(): values = Series(["a_b_c", "c_d_e", np.nan, "f_g_h"]) result = values.str.split("_").str.join("_") @@ -782,15 +738,6 @@ def test_method_on_bytes(): lhs.str.cat(rhs) -def test_casefold(): - # GH25405 - expected = Series(["ss", np.nan, "case", "ssd"]) - s = Series(["ß", np.nan, "case", "ßd"]) - result = s.str.casefold() - - tm.assert_series_equal(result, expected) - - def test_str_accessor_in_apply_func(): # https://github.com/pandas-dev/pandas/issues/38979 df = DataFrame(zip("abc", "def"))
no changes, only moved. maybe could split pandas/tests/strings/test_strings.py by method
https://api.github.com/repos/pandas-dev/pandas/pulls/41474
2021-05-14T16:04:44Z
2021-05-14T17:11:30Z
2021-05-14T17:11:30Z
2021-05-14T17:14:48Z
[ArrowStringArray] TST: move/combine a couple of tests
diff --git a/pandas/tests/strings/test_case_justify.py b/pandas/tests/strings/test_case_justify.py index d6e2ca7399b4e..73f6b2e9a1deb 100644 --- a/pandas/tests/strings/test_case_justify.py +++ b/pandas/tests/strings/test_case_justify.py @@ -49,10 +49,21 @@ def test_lower_upper_mixed_object(): tm.assert_series_equal(result, expected) -def test_capitalize(any_string_dtype): - s = Series(["FOO", "BAR", np.nan, "Blah", "blurg"], dtype=any_string_dtype) +@pytest.mark.parametrize( + "data, expected", + [ + ( + ["FOO", "BAR", np.nan, "Blah", "blurg"], + ["Foo", "Bar", np.nan, "Blah", "Blurg"], + ), + (["a", "b", "c"], ["A", "B", "C"]), + (["a b", "a bc. de"], ["A b", "A bc. de"]), + ], +) +def test_capitalize(data, expected, any_string_dtype): + s = Series(data, dtype=any_string_dtype) result = s.str.capitalize() - expected = Series(["Foo", "Bar", np.nan, "Blah", "Blurg"], dtype=any_string_dtype) + expected = Series(expected, dtype=any_string_dtype) tm.assert_series_equal(result, expected) diff --git a/pandas/tests/strings/test_split_partition.py b/pandas/tests/strings/test_split_partition.py index e59105eccc67c..f3f5acd0d2f1c 100644 --- a/pandas/tests/strings/test_split_partition.py +++ b/pandas/tests/strings/test_split_partition.py @@ -614,56 +614,61 @@ def test_partition_sep_kwarg(any_string_dtype): def test_get(): - values = Series(["a_b_c", "c_d_e", np.nan, "f_g_h"]) - - result = values.str.split("_").str.get(1) + ser = Series(["a_b_c", "c_d_e", np.nan, "f_g_h"]) + result = ser.str.split("_").str.get(1) expected = Series(["b", "d", np.nan, "g"]) tm.assert_series_equal(result, expected) - # mixed - mixed = Series(["a_b_c", np.nan, "c_d_e", True, datetime.today(), None, 1, 2.0]) - rs = Series(mixed).str.split("_").str.get(1) - xp = Series(["b", np.nan, "d", np.nan, np.nan, np.nan, np.nan, np.nan]) +def test_get_mixed_object(): + ser = Series(["a_b_c", np.nan, "c_d_e", True, datetime.today(), None, 1, 2.0]) + result = ser.str.split("_").str.get(1) + expected = 
Series(["b", np.nan, "d", np.nan, np.nan, np.nan, np.nan, np.nan]) + tm.assert_series_equal(result, expected) - assert isinstance(rs, Series) - tm.assert_almost_equal(rs, xp) - # bounds testing - values = Series(["1_2_3_4_5", "6_7_8_9_10", "11_12"]) +def test_get_bounds(): + ser = Series(["1_2_3_4_5", "6_7_8_9_10", "11_12"]) # positive index - result = values.str.split("_").str.get(2) + result = ser.str.split("_").str.get(2) expected = Series(["3", "8", np.nan]) tm.assert_series_equal(result, expected) # negative index - result = values.str.split("_").str.get(-3) + result = ser.str.split("_").str.get(-3) expected = Series(["3", "8", np.nan]) tm.assert_series_equal(result, expected) def test_get_complex(): # GH 20671, getting value not in dict raising `KeyError` - values = Series([(1, 2, 3), [1, 2, 3], {1, 2, 3}, {1: "a", 2: "b", 3: "c"}]) + ser = Series([(1, 2, 3), [1, 2, 3], {1, 2, 3}, {1: "a", 2: "b", 3: "c"}]) - result = values.str.get(1) + result = ser.str.get(1) expected = Series([2, 2, np.nan, "a"]) tm.assert_series_equal(result, expected) - result = values.str.get(-1) + result = ser.str.get(-1) expected = Series([3, 3, np.nan, np.nan]) tm.assert_series_equal(result, expected) @pytest.mark.parametrize("to_type", [tuple, list, np.array]) def test_get_complex_nested(to_type): - values = Series([to_type([to_type([1, 2])])]) + ser = Series([to_type([to_type([1, 2])])]) - result = values.str.get(0) + result = ser.str.get(0) expected = Series([to_type([1, 2])]) tm.assert_series_equal(result, expected) - result = values.str.get(1) + result = ser.str.get(1) expected = Series([np.nan]) tm.assert_series_equal(result, expected) + + +def test_get_strings(any_string_dtype): + ser = Series(["a", "ab", np.nan, "abc"], dtype=any_string_dtype) + result = ser.str.get(2) + expected = Series([np.nan, np.nan, np.nan, "c"], dtype=any_string_dtype) + tm.assert_series_equal(result, expected) diff --git a/pandas/tests/strings/test_string_array.py 
b/pandas/tests/strings/test_string_array.py index f90d219159c7e..0de93b479e43e 100644 --- a/pandas/tests/strings/test_string_array.py +++ b/pandas/tests/strings/test_string_array.py @@ -1,11 +1,8 @@ -import operator - import numpy as np import pytest from pandas._libs import lib -import pandas as pd from pandas import ( DataFrame, Series, @@ -99,27 +96,3 @@ def test_string_array_extract(nullable_string_dtype): result = result.astype(object) tm.assert_equal(result, expected) - - -def test_str_get_stringarray_multiple_nans(nullable_string_dtype): - s = Series(pd.array(["a", "ab", pd.NA, "abc"], dtype=nullable_string_dtype)) - result = s.str.get(2) - expected = Series(pd.array([pd.NA, pd.NA, pd.NA, "c"], dtype=nullable_string_dtype)) - tm.assert_series_equal(result, expected) - - -@pytest.mark.parametrize( - "input, method", - [ - (["a", "b", "c"], operator.methodcaller("capitalize")), - (["a b", "a bc. de"], operator.methodcaller("capitalize")), - ], -) -def test_capitalize(input, method, nullable_string_dtype): - a = Series(input, dtype=nullable_string_dtype) - b = Series(input, dtype="object") - result = method(a.str) - expected = method(b.str) - - assert result.dtype.name == nullable_string_dtype - tm.assert_series_equal(result.astype(object), expected)
separate commits to simplify review (we may be able to remove pandas/tests/strings/test_string_array.py eventually)
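The consolidation here follows one pattern throughout: rather than a separate test per string dtype, a single test body is run against every dtype value via the `any_string_dtype` fixture. A minimal dependency-free sketch of that idea (hypothetical helper names, no pytest machinery):

```python
# Hypothetical stand-in for pytest's fixture/parametrize machinery: run one
# test body once per string dtype, as the any_string_dtype fixture does.
STRING_DTYPES = ["object", "string[python]", "string[pyarrow]"]

def run_for_all_dtypes(test_body):
    """Call test_body once per dtype and collect (dtype, outcome) pairs."""
    results = []
    for dtype in STRING_DTYPES:
        try:
            test_body(dtype)
            results.append((dtype, "passed"))
        except AssertionError:
            results.append((dtype, "failed"))
    return results

def capitalize_test_body(dtype):
    # The dtype argument is what Series(..., dtype=any_string_dtype) would
    # receive; here we only check the str behaviour the tests exercise.
    assert "foo".capitalize() == "Foo"
    assert "a bc. de".capitalize() == "A bc. de"

outcomes = run_for_all_dtypes(capitalize_test_body)
```

Writing the body once keeps the object-dtype and nullable-dtype expectations next to each other, which is what makes the removal of the dtype-specific duplicates in `test_string_array.py` possible.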
https://api.github.com/repos/pandas-dev/pandas/pulls/41473
2021-05-14T15:48:45Z
2021-05-17T15:31:31Z
2021-05-17T15:31:31Z
2021-05-17T15:56:23Z
REF: collect methods in NumericIndex
diff --git a/pandas/core/indexes/numeric.py b/pandas/core/indexes/numeric.py index 88b0b019324ea..de7c522b4fbec 100644 --- a/pandas/core/indexes/numeric.py +++ b/pandas/core/indexes/numeric.py @@ -16,7 +16,10 @@ Dtype, DtypeObj, ) -from pandas.util._decorators import doc +from pandas.util._decorators import ( + cache_readonly, + doc, +) from pandas.core.dtypes.cast import astype_nansafe from pandas.core.dtypes.common import ( @@ -43,6 +46,40 @@ _num_index_shared_docs = {} +_num_index_shared_docs[ + "class_descr" +] = """ + Immutable sequence used for indexing and alignment. The basic object + storing axis labels for all pandas objects. %(klass)s is a special case + of `Index` with purely %(ltype)s labels. %(extra)s. + + Parameters + ---------- + data : array-like (1-dimensional) + dtype : NumPy dtype (default: %(dtype)s) + copy : bool + Make a copy of input ndarray. + name : object + Name to be stored in the index. + + Attributes + ---------- + None + + Methods + ------- + None + + See Also + -------- + Index : The base pandas Index type. + + Notes + ----- + An Index instance can **only** contain hashable objects. +""" + + class NumericIndex(Index): """ Provide numeric type operations. @@ -50,6 +87,12 @@ class NumericIndex(Index): This is an abstract class. 
""" + _index_descr_args = { + "klass": "NumericIndex", + "ltype": "integer or float", + "dtype": "inferred", + "extra": "", + } _values: np.ndarray _default_dtype: np.dtype _dtype_validation_metadata: tuple[Callable[..., bool], str] @@ -57,6 +100,36 @@ class NumericIndex(Index): _is_numeric_dtype = True _can_hold_strings = False + @cache_readonly + def _can_hold_na(self) -> bool: + if is_float_dtype(self.dtype): + return True + else: + return False + + @cache_readonly + def _engine_type(self): + return { + np.int8: libindex.Int8Engine, + np.int16: libindex.Int16Engine, + np.int32: libindex.Int32Engine, + np.int64: libindex.Int64Engine, + np.uint8: libindex.UInt8Engine, + np.uint16: libindex.UInt16Engine, + np.uint32: libindex.UInt32Engine, + np.uint64: libindex.UInt64Engine, + np.float32: libindex.Float32Engine, + np.float64: libindex.Float64Engine, + }[self.dtype.type] + + @cache_readonly + def inferred_type(self) -> str: + return { + "i": "integer", + "u": "integer", + "f": "floating", + }[self.dtype.kind] + def __new__(cls, data=None, dtype: Dtype | None = None, copy=False, name=None): name = maybe_extract_name(name, data, cls) @@ -84,8 +157,10 @@ def _ensure_array(cls, data, dtype, copy: bool): if issubclass(data.dtype.type, str): cls._string_data_error(data) - if copy or not is_dtype_equal(data.dtype, cls._default_dtype): - subarr = np.array(data, dtype=cls._default_dtype, copy=copy) + dtype = cls._ensure_dtype(dtype) + + if copy or not is_dtype_equal(data.dtype, dtype): + subarr = np.array(data, dtype=dtype, copy=copy) cls._assert_safe_casting(data, subarr) else: subarr = data @@ -108,9 +183,65 @@ def _validate_dtype(cls, dtype: Dtype | None) -> None: f"Incorrect `dtype` passed: expected {expected}, received {dtype}" ) + @classmethod + def _ensure_dtype( + cls, + dtype: Dtype | None, + ) -> np.dtype | None: + """Ensure int64 dtype for Int64Index, etc. 
Assumed dtype is validated.""" + return cls._default_dtype + + def __contains__(self, key) -> bool: + """ + Check if key is a float and has a decimal. If it has, return False. + """ + if not is_integer_dtype(self.dtype): + return super().__contains__(key) + + hash(key) + try: + if is_float(key) and int(key) != key: + # otherwise the `key in self._engine` check casts e.g. 1.1 -> 1 + return False + return key in self._engine + except (OverflowError, TypeError, ValueError): + return False + + @doc(Index.astype) + def astype(self, dtype, copy=True): + if is_float_dtype(self.dtype): + dtype = pandas_dtype(dtype) + if needs_i8_conversion(dtype): + raise TypeError( + f"Cannot convert Float64Index to dtype {dtype}; integer " + "values are required for conversion" + ) + elif is_integer_dtype(dtype) and not is_extension_array_dtype(dtype): + # TODO(jreback); this can change once we have an EA Index type + # GH 13149 + arr = astype_nansafe(self._values, dtype=dtype) + return Int64Index(arr, name=self.name) + + return super().astype(dtype, copy=copy) + # ---------------------------------------------------------------- # Indexing Methods + @doc(Index._should_fallback_to_positional) + def _should_fallback_to_positional(self) -> bool: + return False + + @doc(Index._convert_slice_indexer) + def _convert_slice_indexer(self, key: slice, kind: str): + if is_float_dtype(self.dtype): + assert kind in ["loc", "getitem"] + + # We always treat __getitem__ slicing as label-based + # translate to locations + return self.slice_indexer(key.start, key.stop, key.step, kind=kind) + + return super()._convert_slice_indexer(key, kind=kind) + @doc(Index._maybe_cast_slice_bound) def _maybe_cast_slice_bound(self, label, side: str, kind=lib.no_default): assert kind in ["loc", "getitem", None, lib.no_default] @@ -119,6 +250,21 @@ def _maybe_cast_slice_bound(self, label, side: str, kind=lib.no_default): # we will try to coerce to integers return self._maybe_cast_indexer(label) + 
@doc(Index._convert_arr_indexer) + def _convert_arr_indexer(self, keyarr) -> np.ndarray: + if not is_unsigned_integer_dtype(self.dtype): + return super()._convert_arr_indexer(keyarr) + + # Cast the indexer to uint64 if possible so that the values returned + # from indexing are also uint64. + dtype = None + if is_integer_dtype(keyarr) or ( + lib.infer_dtype(keyarr, skipna=False) == "integer" + ): + dtype = np.dtype(np.uint64) + + return com.asarray_tuplesafe(keyarr, dtype=dtype) + # ---------------------------------------------------------------- @doc(Index._shallow_copy) @@ -150,13 +296,16 @@ def _is_comparable_dtype(self, dtype: DtypeObj) -> bool: return is_numeric_dtype(dtype) @classmethod - def _assert_safe_casting(cls, data, subarr): + def _assert_safe_casting(cls, data: np.ndarray, subarr: np.ndarray) -> None: """ - Subclasses need to override this only if the process of casting data - from some accepted dtype to the internal dtype(s) bears the risk of - truncation (e.g. float to int). + Ensure incoming data can be represented with matching signed-ness. + + Needed if the process of casting data from some accepted dtype to the internal + dtype(s) bears the risk of truncation (e.g. float to int). """ - pass + if is_integer_dtype(subarr.dtype): + if not np.array_equal(data, subarr): + raise TypeError("Unsafe NumPy casting, you must explicitly cast") @property def _is_all_dates(self) -> bool: @@ -165,46 +314,29 @@ def _is_all_dates(self) -> bool: """ return False + def _format_native_types( + self, na_rep="", float_format=None, decimal=".", quoting=None, **kwargs + ): + from pandas.io.formats.format import FloatArrayFormatter -_num_index_shared_docs[ - "class_descr" -] = """ - Immutable sequence used for indexing and alignment. The basic object - storing axis labels for all pandas objects. %(klass)s is a special case - of `Index` with purely %(ltype)s labels. %(extra)s. 
- - Parameters - ---------- - data : array-like (1-dimensional) - dtype : NumPy dtype (default: %(dtype)s) - copy : bool - Make a copy of input ndarray. - name : object - Name to be stored in the index. - - Attributes - ---------- - None - - Methods - ------- - None - - See Also - -------- - Index : The base pandas Index type. - - Notes - ----- - An Index instance can **only** contain hashable objects. -""" + if is_float_dtype(self.dtype): + formatter = FloatArrayFormatter( + self._values, + na_rep=na_rep, + float_format=float_format, + decimal=decimal, + quoting=quoting, + fixed_width=False, + ) + return formatter.get_result_as_array() -_int64_descr_args = { - "klass": "Int64Index", - "ltype": "integer", - "dtype": "int64", - "extra": "", -} + return super()._format_native_types( + na_rep=na_rep, + float_format=float_format, + decimal=decimal, + quoting=quoting, + **kwargs, + ) class IntegerIndex(NumericIndex): @@ -212,38 +344,6 @@ class IntegerIndex(NumericIndex): This is an abstract class for Int64Index, UInt64Index. """ - _default_dtype: np.dtype - _can_hold_na = False - - @classmethod - def _assert_safe_casting(cls, data, subarr): - """ - Ensure incoming data can be represented with matching signed-ness. - """ - if data.dtype.kind != cls._default_dtype.kind: - if not np.array_equal(data, subarr): - raise TypeError("Unsafe NumPy casting, you must explicitly cast") - - def __contains__(self, key) -> bool: - """ - Check if key is a float and has a decimal. If it has, return False. - """ - hash(key) - try: - if is_float(key) and int(key) != key: - # otherwise the `key in self._engine` check casts e.g. 
1.1 -> 1 - return False - return key in self._engine - except (OverflowError, TypeError, ValueError): - return False - - @property - def inferred_type(self) -> str: - """ - Always 'integer' for ``Int64Index`` and ``UInt64Index`` - """ - return "integer" - @property def asi8(self) -> np.ndarray: # do not cache or you'll create a memory leak @@ -256,7 +356,13 @@ def asi8(self) -> np.ndarray: class Int64Index(IntegerIndex): - __doc__ = _num_index_shared_docs["class_descr"] % _int64_descr_args + _index_descr_args = { + "klass": "Int64Index", + "ltype": "integer", + "dtype": "int64", + "extra": "", + } + __doc__ = _num_index_shared_docs["class_descr"] % _index_descr_args _typ = "int64index" _engine_type = libindex.Int64Engine @@ -264,104 +370,31 @@ class Int64Index(IntegerIndex): _dtype_validation_metadata = (is_signed_integer_dtype, "signed integer") -_uint64_descr_args = { - "klass": "UInt64Index", - "ltype": "unsigned integer", - "dtype": "uint64", - "extra": "", -} - - class UInt64Index(IntegerIndex): - __doc__ = _num_index_shared_docs["class_descr"] % _uint64_descr_args + _index_descr_args = { + "klass": "UInt64Index", + "ltype": "unsigned integer", + "dtype": "uint64", + "extra": "", + } + __doc__ = _num_index_shared_docs["class_descr"] % _index_descr_args _typ = "uint64index" _engine_type = libindex.UInt64Engine _default_dtype = np.dtype(np.uint64) _dtype_validation_metadata = (is_unsigned_integer_dtype, "unsigned integer") - # ---------------------------------------------------------------- - # Indexing Methods - - @doc(Index._convert_arr_indexer) - def _convert_arr_indexer(self, keyarr): - # Cast the indexer to uint64 if possible so that the values returned - # from indexing are also uint64. 
- dtype = None - if is_integer_dtype(keyarr) or ( - lib.infer_dtype(keyarr, skipna=False) == "integer" - ): - dtype = np.dtype(np.uint64) - - return com.asarray_tuplesafe(keyarr, dtype=dtype) - - -_float64_descr_args = { - "klass": "Float64Index", - "dtype": "float64", - "ltype": "float", - "extra": "", -} - class Float64Index(NumericIndex): - __doc__ = _num_index_shared_docs["class_descr"] % _float64_descr_args + _index_descr_args = { + "klass": "Float64Index", + "dtype": "float64", + "ltype": "float", + "extra": "", + } + __doc__ = _num_index_shared_docs["class_descr"] % _index_descr_args _typ = "float64index" _engine_type = libindex.Float64Engine _default_dtype = np.dtype(np.float64) _dtype_validation_metadata = (is_float_dtype, "float") - - @property - def inferred_type(self) -> str: - """ - Always 'floating' for ``Float64Index`` - """ - return "floating" - - @doc(Index.astype) - def astype(self, dtype, copy=True): - dtype = pandas_dtype(dtype) - if needs_i8_conversion(dtype): - raise TypeError( - f"Cannot convert Float64Index to dtype {dtype}; integer " - "values are required for conversion" - ) - elif is_integer_dtype(dtype) and not is_extension_array_dtype(dtype): - # TODO(jreback); this can change once we have an EA Index type - # GH 13149 - arr = astype_nansafe(self._values, dtype=dtype) - return Int64Index(arr, name=self.name) - return super().astype(dtype, copy=copy) - - # ---------------------------------------------------------------- - # Indexing Methods - - @doc(Index._should_fallback_to_positional) - def _should_fallback_to_positional(self) -> bool: - return False - - @doc(Index._convert_slice_indexer) - def _convert_slice_indexer(self, key: slice, kind: str): - assert kind in ["loc", "getitem"] - - # We always treat __getitem__ slicing as label-based - # translate to locations - return self.slice_indexer(key.start, key.stop, key.step) - - # ---------------------------------------------------------------- - - def _format_native_types( - self, 
na_rep="", float_format=None, decimal=".", quoting=None, **kwargs - ): - from pandas.io.formats.format import FloatArrayFormatter - - formatter = FloatArrayFormatter( - self._values, - na_rep=na_rep, - float_format=float_format, - decimal=decimal, - quoting=quoting, - fixed_width=False, - ) - return formatter.get_result_as_array() diff --git a/pandas/core/indexes/range.py b/pandas/core/indexes/range.py index 8a91ba22fcba1..0e6fb77e8b51b 100644 --- a/pandas/core/indexes/range.py +++ b/pandas/core/indexes/range.py @@ -97,7 +97,6 @@ class RangeIndex(NumericIndex): _typ = "rangeindex" _engine_type = libindex.Int64Engine _dtype_validation_metadata = (is_signed_integer_dtype, "signed integer") - _can_hold_na = False _range: range # --------------------------------------------------------------------
Separates the refactoring of existing methods out of #41153, making it easier to see what #41153 actually adds. This PR changes no functionality; it only makes the existing numeric indexes into thin subclasses of the existing `NumericIndex`.
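The docstring handling moved into the subclasses can be sketched as follows (class names here are hypothetical stand-ins, not the real pandas classes): a shared `%`-style template plus a per-class `_index_descr_args` dict, re-rendered in each thin subclass.

```python
# Minimal sketch of the shared-docstring pattern used by the numeric indexes.
_shared_descr = """
    Immutable sequence used for indexing and alignment. %(klass)s is a
    special case of `Index` with purely %(ltype)s labels.
"""

class NumericIndexSketch:
    # The base class carries generic description arguments ...
    _index_descr_args = {"klass": "NumericIndexSketch", "ltype": "integer or float"}
    __doc__ = _shared_descr % _index_descr_args

class Int64IndexSketch(NumericIndexSketch):
    # ... and each thin subclass overrides them and re-renders the template,
    # so the subclass body stays a handful of class attributes.
    _index_descr_args = {"klass": "Int64IndexSketch", "ltype": "integer"}
    __doc__ = _shared_descr % _index_descr_args
```

Keeping the args as a class attribute (rather than module-level `_int64_descr_args` etc.) is what lets each subclass shrink to a few attribute declarations.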
https://api.github.com/repos/pandas-dev/pandas/pulls/41472
2021-05-14T14:58:28Z
2021-05-18T13:05:52Z
2021-05-18T13:05:52Z
2021-05-18T13:56:07Z
[ArrowStringArray] TST: parametrize tests/strings/test_find_replace.py
diff --git a/pandas/tests/strings/test_find_replace.py b/pandas/tests/strings/test_find_replace.py index 06a7c6d56a61d..843b0ba55e691 100644 --- a/pandas/tests/strings/test_find_replace.py +++ b/pandas/tests/strings/test_find_replace.py @@ -6,7 +6,6 @@ import pandas as pd from pandas import ( - Index, Series, _testing as tm, ) @@ -273,15 +272,14 @@ def test_replace_unicode(any_string_dtype): tm.assert_series_equal(result, expected) -@pytest.mark.parametrize("klass", [Series, Index]) @pytest.mark.parametrize("repl", [None, 3, {"a": "b"}]) @pytest.mark.parametrize("data", [["a", "b", None], ["a", "b", "c", "ad"]]) -def test_replace_raises(any_string_dtype, klass, repl, data): +def test_replace_raises(any_string_dtype, index_or_series, repl, data): # https://github.com/pandas-dev/pandas/issues/13438 msg = "repl must be a string or callable" - values = klass(data, dtype=any_string_dtype) + obj = index_or_series(data, dtype=any_string_dtype) with pytest.raises(TypeError, match=msg): - values.str.replace("a", repl) + obj.str.replace("a", repl) def test_replace_callable(any_string_dtype): @@ -486,39 +484,32 @@ def test_match_case_kwarg(any_string_dtype): tm.assert_series_equal(result, expected) -def test_fullmatch(): +def test_fullmatch(any_string_dtype): # GH 32806 - ser = Series(["fooBAD__barBAD", "BAD_BADleroybrown", np.nan, "foo"]) + ser = Series( + ["fooBAD__barBAD", "BAD_BADleroybrown", np.nan, "foo"], dtype=any_string_dtype + ) result = ser.str.fullmatch(".*BAD[_]+.*BAD") - expected = Series([True, False, np.nan, False]) + expected_dtype = "object" if any_string_dtype == "object" else "boolean" + expected = Series([True, False, np.nan, False], dtype=expected_dtype) tm.assert_series_equal(result, expected) - ser = Series(["ab", "AB", "abc", "ABC"]) + ser = Series(["ab", "AB", "abc", "ABC"], dtype=any_string_dtype) result = ser.str.fullmatch("ab", case=False) - expected = Series([True, True, False, False]) + expected_dtype = np.bool_ if any_string_dtype == "object" 
else "boolean" + expected = Series([True, True, False, False], dtype=expected_dtype) tm.assert_series_equal(result, expected) -def test_fullmatch_nullable_string_dtype(nullable_string_dtype): - ser = Series( - ["fooBAD__barBAD", "BAD_BADleroybrown", None, "foo"], - dtype=nullable_string_dtype, - ) - result = ser.str.fullmatch(".*BAD[_]+.*BAD") - # Result is nullable boolean - expected = Series([True, False, np.nan, False], dtype="boolean") +def test_findall(any_string_dtype): + ser = Series(["fooBAD__barBAD", np.nan, "foo", "BAD"], dtype=any_string_dtype) + result = ser.str.findall("BAD[_]*") + expected = Series([["BAD__", "BAD"], np.nan, [], ["BAD"]]) tm.assert_series_equal(result, expected) -def test_findall(): - values = Series(["fooBAD__barBAD", np.nan, "foo", "BAD"]) - - result = values.str.findall("BAD[_]*") - exp = Series([["BAD__", "BAD"], np.nan, [], ["BAD"]]) - tm.assert_almost_equal(result, exp) - - # mixed - mixed = Series( +def test_findall_mixed_object(): + ser = Series( [ "fooBAD__barBAD", np.nan, @@ -532,8 +523,8 @@ def test_findall(): ] ) - rs = Series(mixed).str.findall("BAD[_]*") - xp = Series( + result = ser.str.findall("BAD[_]*") + expected = Series( [ ["BAD__", "BAD"], np.nan, @@ -547,86 +538,111 @@ def test_findall(): ] ) - assert isinstance(rs, Series) - tm.assert_almost_equal(rs, xp) + tm.assert_series_equal(result, expected) -def test_find(): - values = Series(["ABCDEFG", "BCDEFEF", "DEFGHIJEF", "EFGHEF", "XXXX"]) - result = values.str.find("EF") - tm.assert_series_equal(result, Series([4, 3, 1, 0, -1])) - expected = np.array([v.find("EF") for v in values.values], dtype=np.int64) - tm.assert_numpy_array_equal(result.values, expected) +def test_find(any_string_dtype): + ser = Series( + ["ABCDEFG", "BCDEFEF", "DEFGHIJEF", "EFGHEF", "XXXX"], dtype=any_string_dtype + ) + expected_dtype = np.int64 if any_string_dtype == "object" else "Int64" - result = values.str.rfind("EF") - tm.assert_series_equal(result, Series([4, 5, 7, 4, -1])) - expected 
= np.array([v.rfind("EF") for v in values.values], dtype=np.int64) - tm.assert_numpy_array_equal(result.values, expected) + result = ser.str.find("EF") + expected = Series([4, 3, 1, 0, -1], dtype=expected_dtype) + tm.assert_series_equal(result, expected) + expected = np.array([v.find("EF") for v in np.array(ser)], dtype=np.int64) + tm.assert_numpy_array_equal(np.array(result, dtype=np.int64), expected) - result = values.str.find("EF", 3) - tm.assert_series_equal(result, Series([4, 3, 7, 4, -1])) - expected = np.array([v.find("EF", 3) for v in values.values], dtype=np.int64) - tm.assert_numpy_array_equal(result.values, expected) + result = ser.str.rfind("EF") + expected = Series([4, 5, 7, 4, -1], dtype=expected_dtype) + tm.assert_series_equal(result, expected) + expected = np.array([v.rfind("EF") for v in np.array(ser)], dtype=np.int64) + tm.assert_numpy_array_equal(np.array(result, dtype=np.int64), expected) + + result = ser.str.find("EF", 3) + expected = Series([4, 3, 7, 4, -1], dtype=expected_dtype) + tm.assert_series_equal(result, expected) + expected = np.array([v.find("EF", 3) for v in np.array(ser)], dtype=np.int64) + tm.assert_numpy_array_equal(np.array(result, dtype=np.int64), expected) + + result = ser.str.rfind("EF", 3) + expected = Series([4, 5, 7, 4, -1], dtype=expected_dtype) + tm.assert_series_equal(result, expected) + expected = np.array([v.rfind("EF", 3) for v in np.array(ser)], dtype=np.int64) + tm.assert_numpy_array_equal(np.array(result, dtype=np.int64), expected) - result = values.str.rfind("EF", 3) - tm.assert_series_equal(result, Series([4, 5, 7, 4, -1])) - expected = np.array([v.rfind("EF", 3) for v in values.values], dtype=np.int64) - tm.assert_numpy_array_equal(result.values, expected) + result = ser.str.find("EF", 3, 6) + expected = Series([4, 3, -1, 4, -1], dtype=expected_dtype) + tm.assert_series_equal(result, expected) + expected = np.array([v.find("EF", 3, 6) for v in np.array(ser)], dtype=np.int64) + 
tm.assert_numpy_array_equal(np.array(result, dtype=np.int64), expected) - result = values.str.find("EF", 3, 6) - tm.assert_series_equal(result, Series([4, 3, -1, 4, -1])) - expected = np.array([v.find("EF", 3, 6) for v in values.values], dtype=np.int64) - tm.assert_numpy_array_equal(result.values, expected) + result = ser.str.rfind("EF", 3, 6) + expected = Series([4, 3, -1, 4, -1], dtype=expected_dtype) + tm.assert_series_equal(result, expected) + expected = np.array([v.rfind("EF", 3, 6) for v in np.array(ser)], dtype=np.int64) + tm.assert_numpy_array_equal(np.array(result, dtype=np.int64), expected) - result = values.str.rfind("EF", 3, 6) - tm.assert_series_equal(result, Series([4, 3, -1, 4, -1])) - expected = np.array([v.rfind("EF", 3, 6) for v in values.values], dtype=np.int64) - tm.assert_numpy_array_equal(result.values, expected) +def test_find_bad_arg_raises(any_string_dtype): + ser = Series([], dtype=any_string_dtype) with pytest.raises(TypeError, match="expected a string object, not int"): - result = values.str.find(0) + ser.str.find(0) with pytest.raises(TypeError, match="expected a string object, not int"): - result = values.str.rfind(0) + ser.str.rfind(0) -def test_find_nan(): - values = Series(["ABCDEFG", np.nan, "DEFGHIJEF", np.nan, "XXXX"]) - result = values.str.find("EF") - tm.assert_series_equal(result, Series([4, np.nan, 1, np.nan, -1])) +def test_find_nan(any_string_dtype): + ser = Series( + ["ABCDEFG", np.nan, "DEFGHIJEF", np.nan, "XXXX"], dtype=any_string_dtype + ) + expected_dtype = np.float64 if any_string_dtype == "object" else "Int64" - result = values.str.rfind("EF") - tm.assert_series_equal(result, Series([4, np.nan, 7, np.nan, -1])) + result = ser.str.find("EF") + expected = Series([4, np.nan, 1, np.nan, -1], dtype=expected_dtype) + tm.assert_series_equal(result, expected) - result = values.str.find("EF", 3) - tm.assert_series_equal(result, Series([4, np.nan, 7, np.nan, -1])) + result = ser.str.rfind("EF") + expected = Series([4, np.nan, 
7, np.nan, -1], dtype=expected_dtype) + tm.assert_series_equal(result, expected) - result = values.str.rfind("EF", 3) - tm.assert_series_equal(result, Series([4, np.nan, 7, np.nan, -1])) + result = ser.str.find("EF", 3) + expected = Series([4, np.nan, 7, np.nan, -1], dtype=expected_dtype) + tm.assert_series_equal(result, expected) - result = values.str.find("EF", 3, 6) - tm.assert_series_equal(result, Series([4, np.nan, -1, np.nan, -1])) + result = ser.str.rfind("EF", 3) + expected = Series([4, np.nan, 7, np.nan, -1], dtype=expected_dtype) + tm.assert_series_equal(result, expected) - result = values.str.rfind("EF", 3, 6) - tm.assert_series_equal(result, Series([4, np.nan, -1, np.nan, -1])) + result = ser.str.find("EF", 3, 6) + expected = Series([4, np.nan, -1, np.nan, -1], dtype=expected_dtype) + tm.assert_series_equal(result, expected) + result = ser.str.rfind("EF", 3, 6) + expected = Series([4, np.nan, -1, np.nan, -1], dtype=expected_dtype) + tm.assert_series_equal(result, expected) -def test_translate(): - def _check(result, expected): - if isinstance(result, Series): - tm.assert_series_equal(result, expected) - else: - tm.assert_index_equal(result, expected) - for klass in [Series, Index]: - s = klass(["abcdefg", "abcc", "cdddfg", "cdefggg"]) - table = str.maketrans("abc", "cde") - result = s.str.translate(table) - expected = klass(["cdedefg", "cdee", "edddfg", "edefggg"]) - _check(result, expected) +def test_translate(index_or_series, any_string_dtype): + obj = index_or_series( + ["abcdefg", "abcc", "cdddfg", "cdefggg"], dtype=any_string_dtype + ) + table = str.maketrans("abc", "cde") + result = obj.str.translate(table) + expected = index_or_series( + ["cdedefg", "cdee", "edddfg", "edefggg"], dtype=any_string_dtype + ) + if index_or_series is Series: + tm.assert_series_equal(result, expected) + else: + tm.assert_index_equal(result, expected) + +def test_translate_mixed_object(): # Series with non-string values s = Series(["a", "b", "c", 1.2]) + table = 
str.maketrans("abc", "cde") expected = Series(["c", "d", "e", np.nan]) result = s.str.translate(table) tm.assert_series_equal(result, expected)
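The parametrized assertions above all rely on one result-dtype convention, captured here as a hedged sketch (a hypothetical helper, not pandas API): object-backed string Series keep NumPy result dtypes, while nullable string dtypes return masked extension dtypes.

```python
# Hypothetical helper encoding the expected_dtype selection repeated in the
# tests, e.g. `np.int64 if any_string_dtype == "object" else "Int64"`.
def expected_result_dtype(any_string_dtype: str, result_kind: str) -> str:
    if any_string_dtype == "object":
        # e.g. str.find -> int64, str.fullmatch -> bool; a NaN in the input
        # promotes integer results to float64 for object-backed Series.
        return {"bool": "bool", "int": "int64", "float": "float64"}[result_kind]
    # Nullable string dtypes propagate NA through masked dtypes instead.
    return {"bool": "boolean", "int": "Int64", "float": "Int64"}[result_kind]
```

Note the asymmetry in the `"float"` case: the masked `Int64` dtype can hold NA directly, so no float promotion is needed (see `test_find_nan` above).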
https://api.github.com/repos/pandas-dev/pandas/pulls/41471
2021-05-14T14:52:56Z
2021-05-14T16:05:55Z
2021-05-14T16:05:54Z
2021-05-14T17:10:49Z
[ArrowStringArray] TST: parametrize (part) pandas/tests/strings/test_api.py
diff --git a/pandas/tests/strings/test_api.py b/pandas/tests/strings/test_api.py index 7f864a503486e..ec8b5bfa11ad5 100644 --- a/pandas/tests/strings/test_api.py +++ b/pandas/tests/strings/test_api.py @@ -10,11 +10,11 @@ from pandas.core import strings as strings -def test_api(): +def test_api(any_string_dtype): # GH 6106, GH 9322 assert Series.str is strings.StringMethods - assert isinstance(Series([""]).str, strings.StringMethods) + assert isinstance(Series([""], dtype=any_string_dtype).str, strings.StringMethods) def test_api_mi_raises(): @@ -74,18 +74,26 @@ def test_api_per_method( reason = None if box is Index and values.size == 0: if method_name in ["partition", "rpartition"] and kwargs.get("expand", True): + raises = TypeError reason = "Method cannot deal with empty Index" elif method_name == "split" and kwargs.get("expand", None): + raises = TypeError reason = "Split fails on empty Series when expand=True" elif method_name == "get_dummies": + raises = ValueError reason = "Need to fortify get_dummies corner cases" - elif box is Index and inferred_dtype == "empty" and dtype == object: - if method_name == "get_dummies": - reason = "Need to fortify get_dummies corner cases" + elif ( + box is Index + and inferred_dtype == "empty" + and dtype == object + and method_name == "get_dummies" + ): + raises = ValueError + reason = "Need to fortify get_dummies corner cases" if reason is not None: - mark = pytest.mark.xfail(reason=reason) + mark = pytest.mark.xfail(raises=raises, reason=reason) request.node.add_marker(mark) t = box(values, dtype=dtype) # explicit dtype to avoid casting @@ -117,9 +125,15 @@ def test_api_per_method( method(*args, **kwargs) -def test_api_for_categorical(any_string_method): +def test_api_for_categorical(any_string_method, any_string_dtype, request): # https://github.com/pandas-dev/pandas/issues/10661 - s = Series(list("aabb")) + + if any_string_dtype == "arrow_string": + # unsupported operand type(s) for +: 'ArrowStringArray' and 'str' + mark 
= pytest.mark.xfail(raises=TypeError, reason="Not Implemented") + request.node.add_marker(mark) + + s = Series(list("aabb"), dtype=any_string_dtype) s = s + " " + s c = s.astype("category") assert isinstance(c.str, strings.StringMethods) @@ -127,7 +141,7 @@ def test_api_for_categorical(any_string_method): method_name, args, kwargs = any_string_method result = getattr(c.str, method_name)(*args, **kwargs) - expected = getattr(s.str, method_name)(*args, **kwargs) + expected = getattr(s.astype("object").str, method_name)(*args, **kwargs) if isinstance(result, DataFrame): tm.assert_frame_equal(result, expected)
https://api.github.com/repos/pandas-dev/pandas/pulls/41470
2021-05-14T13:13:19Z
2021-05-14T16:07:34Z
2021-05-14T16:07:34Z
2021-05-14T17:04:52Z
TST/CLN: tighten some xfails
diff --git a/doc/source/development/code_style.rst b/doc/source/development/code_style.rst index 8f399ef6f1192..77c8d56765e5e 100644 --- a/doc/source/development/code_style.rst +++ b/doc/source/development/code_style.rst @@ -57,7 +57,8 @@ xfail during the testing phase. To do so, use the ``request`` fixture: import pytest def test_xfail(request): - request.node.add_marker(pytest.mark.xfail(reason="Indicate why here")) + mark = pytest.mark.xfail(raises=TypeError, reason="Indicate why here") + request.node.add_marker(mark) xfail is not to be used for tests involving failure due to invalid user arguments. For these tests, we need to verify the correct exception type and error message diff --git a/pandas/tests/apply/test_frame_apply.py b/pandas/tests/apply/test_frame_apply.py index cee8a0218e9e8..a724f3d9f2a7d 100644 --- a/pandas/tests/apply/test_frame_apply.py +++ b/pandas/tests/apply/test_frame_apply.py @@ -164,8 +164,9 @@ def test_apply_with_string_funcs(request, float_frame, func, args, kwds, how): if len(args) > 1 and how == "agg": request.node.add_marker( pytest.mark.xfail( + raises=TypeError, reason="agg/apply signature mismatch - agg passes 2nd " - "argument to func" + "argument to func", ) ) result = getattr(float_frame, how)(func, *args, **kwds) diff --git a/pandas/tests/apply/test_frame_transform.py b/pandas/tests/apply/test_frame_transform.py index 18c36e4096b2b..9050fab702881 100644 --- a/pandas/tests/apply/test_frame_transform.py +++ b/pandas/tests/apply/test_frame_transform.py @@ -173,7 +173,9 @@ def test_transform_bad_dtype(op, frame_or_series, request): # GH 35964 if op == "rank": request.node.add_marker( - pytest.mark.xfail(reason="GH 40418: rank does not raise a TypeError") + pytest.mark.xfail( + raises=ValueError, reason="GH 40418: rank does not raise a TypeError" + ) ) obj = DataFrame({"A": 3 * [object]}) # DataFrame that will fail on most transforms diff --git a/pandas/tests/apply/test_series_apply.py b/pandas/tests/apply/test_series_apply.py 
index bfa88f54e4f10..88c3ad228f8c3 100644 --- a/pandas/tests/apply/test_series_apply.py +++ b/pandas/tests/apply/test_series_apply.py @@ -262,7 +262,9 @@ def test_transform_partial_failure(op, request): # GH 35964 if op in ("ffill", "bfill", "pad", "backfill", "shift"): request.node.add_marker( - pytest.mark.xfail(reason=f"{op} is successful on any dtype") + pytest.mark.xfail( + raises=AssertionError, reason=f"{op} is successful on any dtype" + ) ) if op in ("rank", "fillna"): pytest.skip(f"{op} doesn't raise TypeError on object") diff --git a/pandas/tests/arithmetic/test_interval.py b/pandas/tests/arithmetic/test_interval.py index f1815b3e05367..1bbe90f3cb58c 100644 --- a/pandas/tests/arithmetic/test_interval.py +++ b/pandas/tests/arithmetic/test_interval.py @@ -135,7 +135,8 @@ def test_compare_scalar_na(self, op, interval_array, nulls_fixture, request): if nulls_fixture is pd.NA and interval_array.dtype.subtype != "int64": mark = pytest.mark.xfail( - reason="broken for non-integer IntervalArray; see GH 31882" + raises=AssertionError, + reason="broken for non-integer IntervalArray; see GH 31882", ) request.node.add_marker(mark) @@ -220,7 +221,7 @@ def test_compare_list_like_nan(self, op, interval_array, nulls_fixture, request) if nulls_fixture is pd.NA and interval_array.dtype.subtype != "i8": reason = "broken for non-integer IntervalArray; see GH 31882" - mark = pytest.mark.xfail(reason=reason) + mark = pytest.mark.xfail(raises=AssertionError, reason=reason) request.node.add_marker(mark) tm.assert_numpy_array_equal(result, expected) diff --git a/pandas/tests/arrays/masked/test_arithmetic.py b/pandas/tests/arrays/masked/test_arithmetic.py index adb52fce17f8b..29998831777f8 100644 --- a/pandas/tests/arrays/masked/test_arithmetic.py +++ b/pandas/tests/arrays/masked/test_arithmetic.py @@ -170,7 +170,9 @@ def test_unary_op_does_not_propagate_mask(data, op, request): data, _ = data if data.dtype in ["Float32", "Float64"] and op == "__invert__": request.node.add_marker( 
- pytest.mark.xfail(reason="invert is not implemented for float ea dtypes") + pytest.mark.xfail( + raises=TypeError, reason="invert is not implemented for float ea dtypes" + ) ) s = pd.Series(data) result = getattr(s, op)() diff --git a/pandas/tests/arrays/masked/test_arrow_compat.py b/pandas/tests/arrays/masked/test_arrow_compat.py index e06b8749fbf11..193017ddfcadf 100644 --- a/pandas/tests/arrays/masked/test_arrow_compat.py +++ b/pandas/tests/arrays/masked/test_arrow_compat.py @@ -173,7 +173,7 @@ def test_from_arrow_type_error(request, data): # TODO numeric dtypes cast any incoming array to the correct dtype # instead of erroring request.node.add_marker( - pytest.mark.xfail(reason="numeric dtypes don't error but cast") + pytest.mark.xfail(raises=None, reason="numeric dtypes don't error but cast") ) arr = pa.array(data).cast("string") diff --git a/pandas/tests/arrays/sparse/test_arithmetics.py b/pandas/tests/arrays/sparse/test_arithmetics.py index 79cf8298ab1a6..2ae60a90fee60 100644 --- a/pandas/tests/arrays/sparse/test_arithmetics.py +++ b/pandas/tests/arrays/sparse/test_arithmetics.py @@ -132,7 +132,7 @@ def test_float_scalar( elif op is ops.rfloordiv and scalar == 0: pass else: - mark = pytest.mark.xfail(reason="GH#38172") + mark = pytest.mark.xfail(raises=AssertionError, reason="GH#38172") request.node.add_marker(mark) values = self._base([np.nan, 1, 2, 0, np.nan, 0, 1, 2, 1, np.nan]) @@ -177,11 +177,13 @@ def test_float_same_index_with_nans( # when sp_index are the same op = all_arithmetic_functions - if not np_version_under1p20: - if op is ops.rfloordiv: - if not (mix and kind == "block"): - mark = pytest.mark.xfail(reason="GH#38172") - request.node.add_marker(mark) + if ( + not np_version_under1p20 + and op is ops.rfloordiv + and not (mix and kind == "block") + ): + mark = pytest.mark.xfail(raises=AssertionError, reason="GH#38172") + request.node.add_marker(mark) values = self._base([np.nan, 1, 2, 0, np.nan, 0, 1, 2, 1, np.nan]) rvalues = 
self._base([np.nan, 2, 3, 4, np.nan, 0, 1, 3, 2, np.nan]) @@ -358,10 +360,13 @@ def test_bool_array_logical(self, kind, fill_value): def test_mixed_array_float_int(self, kind, mix, all_arithmetic_functions, request): op = all_arithmetic_functions - if not np_version_under1p20: - if op in [operator.floordiv, ops.rfloordiv] and mix: - mark = pytest.mark.xfail(reason="GH#38172") - request.node.add_marker(mark) + if ( + not np_version_under1p20 + and op in [operator.floordiv, ops.rfloordiv] + and mix + ): + mark = pytest.mark.xfail(raises=AssertionError, reason="GH#38172") + request.node.add_marker(mark) rdtype = "int64" diff --git a/pandas/tests/arrays/string_/test_string.py b/pandas/tests/arrays/string_/test_string.py index 17d05ebeb0fc5..3205664e7c80a 100644 --- a/pandas/tests/arrays/string_/test_string.py +++ b/pandas/tests/arrays/string_/test_string.py @@ -115,10 +115,10 @@ def test_astype_roundtrip(dtype, request): def test_add(dtype, request): if dtype == "arrow_string": reason = ( - "TypeError: unsupported operand type(s) for +: 'ArrowStringArray' and " + "unsupported operand type(s) for +: 'ArrowStringArray' and " "'ArrowStringArray'" ) - mark = pytest.mark.xfail(reason=reason) + mark = pytest.mark.xfail(raises=TypeError, reason=reason) request.node.add_marker(mark) a = pd.Series(["a", "b", "c", None, None], dtype=dtype) @@ -143,7 +143,7 @@ def test_add(dtype, request): def test_add_2d(dtype, request): if dtype == "arrow_string": reason = "Failed: DID NOT RAISE <class 'ValueError'>" - mark = pytest.mark.xfail(reason=reason) + mark = pytest.mark.xfail(raises=None, reason=reason) request.node.add_marker(mark) a = pd.array(["a", "b", "c"], dtype=dtype) @@ -158,11 +158,8 @@ def test_add_2d(dtype, request): def test_add_sequence(dtype, request): if dtype == "arrow_string": - reason = ( - "TypeError: unsupported operand type(s) for +: 'ArrowStringArray' " - "and 'list'" - ) - mark = pytest.mark.xfail(reason=reason) + reason = "unsupported operand type(s) for +: 
'ArrowStringArray' and 'list'" + mark = pytest.mark.xfail(raises=TypeError, reason=reason) request.node.add_marker(mark) a = pd.array(["a", "b", None, None], dtype=dtype) @@ -179,10 +176,8 @@ def test_add_sequence(dtype, request): def test_mul(dtype, request): if dtype == "arrow_string": - reason = ( - "TypeError: unsupported operand type(s) for *: 'ArrowStringArray' and 'int'" - ) - mark = pytest.mark.xfail(reason=reason) + reason = "unsupported operand type(s) for *: 'ArrowStringArray' and 'int'" + mark = pytest.mark.xfail(raises=TypeError, reason=reason) request.node.add_marker(mark) a = pd.array(["a", "b", None], dtype=dtype) @@ -246,7 +241,7 @@ def test_comparison_methods_scalar_pd_na(all_compare_operators, dtype): def test_comparison_methods_scalar_not_string(all_compare_operators, dtype, request): if all_compare_operators not in ["__eq__", "__ne__"]: reason = "comparison op not supported between instances of 'str' and 'int'" - mark = pytest.mark.xfail(reason=reason) + mark = pytest.mark.xfail(raises=TypeError, reason=reason) request.node.add_marker(mark) op_name = all_compare_operators @@ -262,11 +257,9 @@ def test_comparison_methods_scalar_not_string(all_compare_operators, dtype, requ def test_comparison_methods_array(all_compare_operators, dtype, request): if dtype == "arrow_string": - if all_compare_operators in ["__eq__", "__ne__"]: - reason = "NotImplementedError: Neither scalar nor ArrowStringArray" - else: - reason = "AssertionError: left is not an ExtensionArray" - mark = pytest.mark.xfail(reason=reason) + mark = pytest.mark.xfail( + raises=AssertionError, reason="left is not an ExtensionArray" + ) request.node.add_marker(mark) op_name = all_compare_operators @@ -309,8 +302,9 @@ def test_constructor_raises(cls): @pytest.mark.parametrize("copy", [True, False]) def test_from_sequence_no_mutate(copy, cls, request): if cls is ArrowStringArray and copy is False: - reason = "AssertionError: numpy array are different" - mark = 
pytest.mark.xfail(reason=reason) + mark = pytest.mark.xfail( + raises=AssertionError, reason="numpy array are different" + ) request.node.add_marker(mark) nan_arr = np.array(["a", np.nan], dtype=object) @@ -333,8 +327,8 @@ def test_from_sequence_no_mutate(copy, cls, request): def test_astype_int(dtype, request): if dtype == "arrow_string": - reason = "TypeError: Cannot interpret 'Int64Dtype()' as a data type" - mark = pytest.mark.xfail(reason=reason) + reason = "Cannot interpret 'Int64Dtype()' as a data type" + mark = pytest.mark.xfail(raises=TypeError, reason=reason) request.node.add_marker(mark) arr = pd.array(["1", pd.NA, "3"], dtype=dtype) @@ -349,12 +343,10 @@ def test_astype_float(dtype, any_float_allowed_nullable_dtype, request): if dtype == "arrow_string": if any_float_allowed_nullable_dtype in {"Float32", "Float64"}: - reason = "TypeError: Cannot interpret 'Float32Dtype()' as a data type" + reason = "Cannot interpret 'Float32Dtype()' as a data type" else: - reason = ( - "TypeError: float() argument must be a string or a number, not 'NAType'" - ) - mark = pytest.mark.xfail(reason=reason) + reason = "float() argument must be a string or a number, not 'NAType'" + mark = pytest.mark.xfail(raises=TypeError, reason=reason) request.node.add_marker(mark) ser = pd.Series(["1.1", pd.NA, "3.3"], dtype=dtype) @@ -376,8 +368,8 @@ def test_reduce(skipna, dtype): @pytest.mark.parametrize("skipna", [True, False]) def test_min_max(method, skipna, dtype, request): if dtype == "arrow_string": - reason = "AttributeError: 'ArrowStringArray' object has no attribute 'max'" - mark = pytest.mark.xfail(reason=reason) + reason = "'ArrowStringArray' object has no attribute 'max'" + mark = pytest.mark.xfail(raises=AttributeError, reason=reason) request.node.add_marker(mark) arr = pd.Series(["a", "b", "c", None], dtype=dtype) @@ -394,13 +386,12 @@ def test_min_max(method, skipna, dtype, request): def test_min_max_numpy(method, box, dtype, request): if dtype == "arrow_string": if box is 
pd.array: - reason = ( - "TypeError: '<=' not supported between instances of 'str' and " - "'NoneType'" - ) + raises = TypeError + reason = "'<=' not supported between instances of 'str' and 'NoneType'" else: - reason = "AttributeError: 'ArrowStringArray' object has no attribute 'max'" - mark = pytest.mark.xfail(reason=reason) + raises = AttributeError + reason = "'ArrowStringArray' object has no attribute 'max'" + mark = pytest.mark.xfail(raises=raises, reason=reason) request.node.add_marker(mark) arr = box(["a", "b", "c", None], dtype=dtype) @@ -425,10 +416,10 @@ def test_fillna_args(dtype, request): if dtype == "arrow_string": reason = ( - "AssertionError: Regex pattern \"Cannot set non-string value '1' into " + "Regex pattern \"Cannot set non-string value '1' into " "a StringArray.\" does not match 'Scalar must be NA or str'" ) - mark = pytest.mark.xfail(reason=reason) + mark = pytest.mark.xfail(raises=AssertionError, reason=reason) request.node.add_marker(mark) arr = pd.array(["a", pd.NA], dtype=dtype) diff --git a/pandas/tests/arrays/test_datetimelike.py b/pandas/tests/arrays/test_datetimelike.py index 771d60b000a7d..8581e9a20526f 100644 --- a/pandas/tests/arrays/test_datetimelike.py +++ b/pandas/tests/arrays/test_datetimelike.py @@ -306,7 +306,9 @@ def test_searchsorted_castable_strings(self, arr1d, box, request): # If we have e.g. tzutc(), when we cast to string and parse # back we get pytz.UTC, and then consider them different timezones # so incorrectly raise. - mark = pytest.mark.xfail(reason="timezone comparisons inconsistent") + mark = pytest.mark.xfail( + raises=TypeError, reason="timezone comparisons inconsistent" + ) request.node.add_marker(mark) arr = arr1d @@ -471,7 +473,9 @@ def test_setitem_strs(self, arr1d, request): # If we have e.g. tzutc(), when we cast to string and parse # back we get pytz.UTC, and then consider them different timezones # so incorrectly raise. 
- mark = pytest.mark.xfail(reason="timezone comparisons inconsistent") + mark = pytest.mark.xfail( + raises=TypeError, reason="timezone comparisons inconsistent" + ) request.node.add_marker(mark) # Setting list-like of strs
https://api.github.com/repos/pandas-dev/pandas/pulls/41469
2021-05-14T12:53:14Z
2021-05-14T16:07:01Z
2021-05-14T16:07:01Z
2021-05-14T17:07:25Z
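The pattern tightened throughout the diff above — passing `raises=` to `pytest.mark.xfail` — can be sketched outside pytest. This is a hypothetical illustration of why the stricter marker is useful, not pytest's actual internals:

```python
# Hypothetical sketch (not pytest internals): with raises=None an xfail
# accepts any failure, while a declared exception class restricts which
# failures count as "expected" -- anything else surfaces as a real error.
def failure_is_expected(exc: BaseException, raises=None) -> bool:
    # raises=None matches any failure; a class narrows the match
    return raises is None or isinstance(exc, raises)

# A TypeError satisfies xfail(raises=TypeError)...
assert failure_is_expected(TypeError("unsupported operand"), TypeError)
# ...but an unrelated AssertionError would no longer be silently masked.
assert not failure_is_expected(AssertionError("left is not an ExtensionArray"), TypeError)
```

The payoff is that a test which starts failing for a *new* reason is reported instead of being hidden under a stale xfail.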
DOC: freeze old whatsnew notes part 1 #6856
diff --git a/doc/source/whatsnew/v0.5.0.rst b/doc/source/whatsnew/v0.5.0.rst index 7447a10fa1d6b..8757d9c887785 100644 --- a/doc/source/whatsnew/v0.5.0.rst +++ b/doc/source/whatsnew/v0.5.0.rst @@ -6,12 +6,6 @@ Version 0.5.0 (October 24, 2011) {{ header }} -.. ipython:: python - :suppress: - - from pandas import * # noqa F401, F403 - - New features ~~~~~~~~~~~~ diff --git a/doc/source/whatsnew/v0.6.0.rst b/doc/source/whatsnew/v0.6.0.rst index 253ca4d4188e5..19e2e85c09a87 100644 --- a/doc/source/whatsnew/v0.6.0.rst +++ b/doc/source/whatsnew/v0.6.0.rst @@ -5,12 +5,6 @@ Version 0.6.0 (November 25, 2011) {{ header }} -.. ipython:: python - :suppress: - - from pandas import * # noqa F401, F403 - - New features ~~~~~~~~~~~~ - :ref:`Added <reshaping.melt>` ``melt`` function to ``pandas.core.reshape`` diff --git a/doc/source/whatsnew/v0.7.0.rst b/doc/source/whatsnew/v0.7.0.rst index 2fe686d8858a2..52747f2992dc4 100644 --- a/doc/source/whatsnew/v0.7.0.rst +++ b/doc/source/whatsnew/v0.7.0.rst @@ -31,10 +31,22 @@ New features - Handle differently-indexed output values in ``DataFrame.apply`` (:issue:`498`) -.. ipython:: python +.. 
code-block:: ipython - df = pd.DataFrame(np.random.randn(10, 4)) - df.apply(lambda x: x.describe()) + In [1]: df = pd.DataFrame(np.random.randn(10, 4)) + In [2]: df.apply(lambda x: x.describe()) + Out[2]: + 0 1 2 3 + count 10.000000 10.000000 10.000000 10.000000 + mean 0.190912 -0.395125 -0.731920 -0.403130 + std 0.730951 0.813266 1.112016 0.961912 + min -0.861849 -2.104569 -1.776904 -1.469388 + 25% -0.411391 -0.698728 -1.501401 -1.076610 + 50% 0.380863 -0.228039 -1.191943 -1.004091 + 75% 0.658444 0.057974 -0.034326 0.461706 + max 1.212112 0.577046 1.643563 1.071804 + + [8 rows x 4 columns] - :ref:`Add<advanced.reorderlevels>` ``reorder_levels`` method to Series and DataFrame (:issue:`534`) @@ -116,13 +128,31 @@ One of the potentially riskiest API changes in 0.7.0, but also one of the most important, was a complete review of how **integer indexes** are handled with regard to label-based indexing. Here is an example: -.. ipython:: python +.. code-block:: ipython - s = pd.Series(np.random.randn(10), index=range(0, 20, 2)) - s - s[0] - s[2] - s[4] + In [3]: s = pd.Series(np.random.randn(10), index=range(0, 20, 2)) + In [4]: s + Out[4]: + 0 -1.294524 + 2 0.413738 + 4 0.276662 + 6 -0.472035 + 8 -0.013960 + 10 -0.362543 + 12 -0.006154 + 14 -0.923061 + 16 0.895717 + 18 0.805244 + Length: 10, dtype: float64 + + In [5]: s[0] + Out[5]: -1.2945235902555294 + + In [6]: s[2] + Out[6]: 0.41373810535784006 + + In [7]: s[4] + Out[7]: 0.2766617129497566 This is all exactly identical to the behavior before. However, if you ask for a key **not** contained in the Series, in versions 0.6.1 and prior, Series would @@ -235,22 +265,65 @@ slice to a Series when getting and setting values via ``[]`` (i.e. the ``__getitem__`` and ``__setitem__`` methods). The behavior will be the same as passing similar input to ``ix`` **except in the case of integer indexing**: -.. ipython:: python +.. 
code-block:: ipython - s = pd.Series(np.random.randn(6), index=list('acegkm')) - s - s[['m', 'a', 'c', 'e']] - s['b':'l'] - s['c':'k'] + In [8]: s = pd.Series(np.random.randn(6), index=list('acegkm')) + + In [9]: s + Out[9]: + a -1.206412 + c 2.565646 + e 1.431256 + g 1.340309 + k -1.170299 + m -0.226169 + Length: 6, dtype: float64 + + In [10]: s[['m', 'a', 'c', 'e']] + Out[10]: + m -0.226169 + a -1.206412 + c 2.565646 + e 1.431256 + Length: 4, dtype: float64 + + In [11]: s['b':'l'] + Out[11]: + c 2.565646 + e 1.431256 + g 1.340309 + k -1.170299 + Length: 4, dtype: float64 + + In [12]: s['c':'k'] + Out[12]: + c 2.565646 + e 1.431256 + g 1.340309 + k -1.170299 + Length: 4, dtype: float64 In the case of integer indexes, the behavior will be exactly as before (shadowing ``ndarray``): -.. ipython:: python +.. code-block:: ipython - s = pd.Series(np.random.randn(6), index=range(0, 12, 2)) - s[[4, 0, 2]] - s[1:5] + In [13]: s = pd.Series(np.random.randn(6), index=range(0, 12, 2)) + + In [14]: s[[4, 0, 2]] + Out[14]: + 4 0.132003 + 0 0.410835 + 2 0.813850 + Length: 3, dtype: float64 + + In [15]: s[1:5] + Out[15]: + 2 0.813850 + 4 0.132003 + 6 -0.827317 + 8 -0.076467 + Length: 4, dtype: float64 If you wish to do indexing with sequences and slicing on an integer index with label semantics, use ``ix``. diff --git a/doc/source/whatsnew/v0.7.3.rst b/doc/source/whatsnew/v0.7.3.rst index 4ca31baf560bb..5da6bef0c4f03 100644 --- a/doc/source/whatsnew/v0.7.3.rst +++ b/doc/source/whatsnew/v0.7.3.rst @@ -51,21 +51,37 @@ NA boolean comparison API change Reverted some changes to how NA values (represented typically as ``NaN`` or ``None``) are handled in non-numeric Series: -.. ipython:: python +.. 
code-block:: ipython - series = pd.Series(["Steve", np.nan, "Joe"]) - series == "Steve" - series != "Steve" + In [1]: series = pd.Series(["Steve", np.nan, "Joe"]) + + In [2]: series == "Steve" + Out[2]: + 0 True + 1 False + 2 False + Length: 3, dtype: bool + + In [3]: series != "Steve" + Out[3]: + 0 False + 1 True + 2 True + Length: 3, dtype: bool In comparisons, NA / NaN will always come through as ``False`` except with ``!=`` which is ``True``. *Be very careful* with boolean arithmetic, especially negation, in the presence of NA data. You may wish to add an explicit NA filter into boolean array operations if you are worried about this: -.. ipython:: python +.. code-block:: ipython + + In [4]: mask = series == "Steve" - mask = series == "Steve" - series[mask & series.notnull()] + In [5]: series[mask & series.notnull()] + Out[5]: + 0 Steve + Length: 1, dtype: object While propagating NA in comparisons may seem like the right behavior to some users (and you could argue on purely technical grounds that this is the right @@ -80,21 +96,51 @@ Other API changes When calling ``apply`` on a grouped Series, the return value will also be a Series, to be more consistent with the ``groupby`` behavior with DataFrame: -.. ipython:: python - :okwarning: - - df = pd.DataFrame( - { - "A": ["foo", "bar", "foo", "bar", "foo", "bar", "foo", "foo"], - "B": ["one", "one", "two", "three", "two", "two", "one", "three"], - "C": np.random.randn(8), - "D": np.random.randn(8), - } - ) - df - grouped = df.groupby("A")["C"] - grouped.describe() - grouped.apply(lambda x: x.sort_values()[-2:]) # top 2 values +.. 
code-block:: ipython + + In [6]: df = pd.DataFrame( + ...: { + ...: "A": ["foo", "bar", "foo", "bar", "foo", "bar", "foo", "foo"], + ...: "B": ["one", "one", "two", "three", "two", "two", "one", "three"], + ...: "C": np.random.randn(8), + ...: "D": np.random.randn(8), + ...: } + ...: ) + ...: + + In [7]: df + Out[7]: + A B C D + 0 foo one 0.469112 -0.861849 + 1 bar one -0.282863 -2.104569 + 2 foo two -1.509059 -0.494929 + 3 bar three -1.135632 1.071804 + 4 foo two 1.212112 0.721555 + 5 bar two -0.173215 -0.706771 + 6 foo one 0.119209 -1.039575 + 7 foo three -1.044236 0.271860 + + [8 rows x 4 columns] + + In [8]: grouped = df.groupby("A")["C"] + + In [9]: grouped.describe() + Out[9]: + count mean std min 25% 50% 75% max + A + bar 3.0 -0.530570 0.526860 -1.135632 -0.709248 -0.282863 -0.228039 -0.173215 + foo 5.0 -0.150572 1.113308 -1.509059 -1.044236 0.119209 0.469112 1.212112 + + [2 rows x 8 columns] + + In [10]: grouped.apply(lambda x: x.sort_values()[-2:]) # top 2 values + Out[10]: + A + bar 1 -0.282863 + 5 -0.173215 + foo 0 0.469112 + 4 1.212112 + Name: C, Length: 4, dtype: float64 .. _whatsnew_0.7.3.contributors:
xref #6856
https://api.github.com/repos/pandas-dev/pandas/pulls/41464
2021-05-14T03:02:11Z
2021-05-14T16:08:45Z
2021-05-14T16:08:45Z
2022-11-18T02:21:36Z
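The whatsnew freeze above follows a mechanical recipe: replace each executable `.. ipython:: python` directive with a static `.. code-block:: ipython` carrying captured `In`/`Out` prompts. A rough stdlib sketch of the directive swap (illustrative only — not the tooling actually used for the PR):

```python
# Illustrative only: swap the executable directive for a static one and
# prefix bare source lines with IPython-style input prompts.
def freeze_directive(lines):
    out = []
    counter = 0
    for line in lines:
        if line.strip() == ".. ipython:: python":
            out.append(".. code-block:: ipython")
        elif line.strip() and not line.startswith(".."):
            counter += 1
            indent = line[: len(line) - len(line.lstrip())]
            out.append(f"{indent}In [{counter}]: {line.strip()}")
        else:
            out.append(line)
    return out

frozen = freeze_directive([".. ipython:: python", "", "   s = pd.Series([1, 2])"])
```

In the real PR the `Out[...]` blocks were captured from an actual run and pasted in by hand, since frozen docs must show the historical output verbatim.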
Fix deprecation warnings for empty series in docstrings
diff --git a/pandas/core/generic.py b/pandas/core/generic.py index 0d39f13afc426..a09cc0a6324c0 100644 --- a/pandas/core/generic.py +++ b/pandas/core/generic.py @@ -11359,7 +11359,7 @@ def _doc_params(cls): True >>> pd.Series([True, False]).all() False ->>> pd.Series([]).all() +>>> pd.Series([], dtype="float64").all() True >>> pd.Series([np.nan]).all() True @@ -11727,7 +11727,7 @@ def _doc_params(cls): False >>> pd.Series([True, False]).any() True ->>> pd.Series([]).any() +>>> pd.Series([], dtype="float64").any() False >>> pd.Series([np.nan]).any() False @@ -11815,13 +11815,13 @@ def _doc_params(cls): By default, the sum of an empty or all-NA Series is ``0``. ->>> pd.Series([]).sum() # min_count=0 is the default +>>> pd.Series([], dtype="float64").sum() # min_count=0 is the default 0.0 This can be controlled with the ``min_count`` parameter. For example, if you'd like the sum of an empty series to be NaN, pass ``min_count=1``. ->>> pd.Series([]).sum(min_count=1) +>>> pd.Series([], dtype="float64").sum(min_count=1) nan Thanks to the ``skipna`` parameter, ``min_count`` handles all-NA and @@ -11862,12 +11862,12 @@ def _doc_params(cls): -------- By default, the product of an empty or all-NA Series is ``1`` ->>> pd.Series([]).prod() +>>> pd.Series([], dtype="float64").prod() 1.0 This can be controlled with the ``min_count`` parameter ->>> pd.Series([]).prod(min_count=1) +>>> pd.Series([], dtype="float64").prod(min_count=1) nan Thanks to the ``skipna`` parameter, ``min_count`` handles all-NA and
Fixing the deprecation warnings
https://api.github.com/repos/pandas-dev/pandas/pulls/41463
2021-05-13T22:35:42Z
2021-05-13T23:29:13Z
2021-05-13T23:29:13Z
2021-05-14T20:15:33Z
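For context on the docstring fix above, a short sketch of the empty-`Series` reductions the corrected examples document (assumes pandas is installed; the explicit dtype is what avoids the deprecation warning the PR fixes):

```python
import math

import pandas as pd

# An explicit dtype avoids the deprecation warning raised by pd.Series([]).
empty = pd.Series([], dtype="float64")

total = empty.sum()               # min_count=0 by default, so the empty sum is 0.0
product = empty.prod()            # likewise, the empty product is 1.0
guarded = empty.sum(min_count=1)  # requiring at least one value yields NaN

assert total == 0.0 and product == 1.0 and math.isnan(guarded)
```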
CI: Fix changed flake8 error message after upgrade
diff --git a/.pre-commit-config.yaml b/.pre-commit-config.yaml index 2f46190ef5eb7..db3fc1853ea71 100644 --- a/.pre-commit-config.yaml +++ b/.pre-commit-config.yaml @@ -35,7 +35,7 @@ repos: exclude: ^pandas/_libs/src/(klib|headers)/ args: [--quiet, '--extensions=c,h', '--headers=h', --recursive, '--filter=-readability/casting,-runtime/int,-build/include_subdir'] - repo: https://gitlab.com/pycqa/flake8 - rev: 3.9.1 + rev: 3.9.2 hooks: - id: flake8 additional_dependencies: @@ -75,7 +75,7 @@ repos: hooks: - id: yesqa additional_dependencies: - - flake8==3.9.1 + - flake8==3.9.2 - flake8-comprehensions==3.1.0 - flake8-bugbear==21.3.2 - pandas-dev-flaker==0.2.0 diff --git a/environment.yml b/environment.yml index 338331b54c824..1368c402d9c68 100644 --- a/environment.yml +++ b/environment.yml @@ -20,7 +20,7 @@ dependencies: # code checks - black=20.8b1 - cpplint - - flake8=3.9.1 + - flake8=3.9.2 - flake8-bugbear=21.3.2 # used by flake8, find likely bugs - flake8-comprehensions=3.1.0 # used by flake8, linting of unnecessary comprehensions - isort>=5.2.1 # check that imports are in the right order diff --git a/requirements-dev.txt b/requirements-dev.txt index 3c1b91220c3fe..2c109c2d3aac0 100644 --- a/requirements-dev.txt +++ b/requirements-dev.txt @@ -8,7 +8,7 @@ asv cython>=0.29.21 black==20.8b1 cpplint -flake8==3.9.1 +flake8==3.9.2 flake8-bugbear==21.3.2 flake8-comprehensions==3.1.0 isort>=5.2.1 diff --git a/scripts/tests/test_validate_docstrings.py b/scripts/tests/test_validate_docstrings.py index 7e4c68ddc183b..cbf3e84044d53 100644 --- a/scripts/tests/test_validate_docstrings.py +++ b/scripts/tests/test_validate_docstrings.py @@ -165,7 +165,7 @@ def test_bad_class(self, capsys): "indentation_is_not_a_multiple_of_four", # with flake8 3.9.0, the message ends with four spaces, # whereas in earlier versions, it ended with "four" - ("flake8 error: E111 indentation is not a multiple of ",), + ("flake8 error: E111 indentation is not a multiple of 4",), ), ( "BadDocstrings",
Upgrade caused CI on 1.2.x to fail. Should probably backport.

https://api.github.com/repos/pandas-dev/pandas/pulls/41462
2021-05-13T22:25:55Z
2021-05-14T00:27:45Z
2021-05-14T00:27:45Z
2021-05-14T22:38:03Z
REF: remove _wrap_frame_output
diff --git a/pandas/core/groupby/generic.py b/pandas/core/groupby/generic.py index 71d19cdd877a6..5c28a15532174 100644 --- a/pandas/core/groupby/generic.py +++ b/pandas/core/groupby/generic.py @@ -1101,22 +1101,28 @@ def _aggregate_frame(self, func, *args, **kwargs) -> DataFrame: if self.grouper.nkeys != 1: raise AssertionError("Number of keys must be 1") - axis = self.axis obj = self._obj_with_exclusions result: dict[Hashable, NDFrame | np.ndarray] = {} - if axis != obj._info_axis_number: + if self.axis == 0: # test_pass_args_kwargs_duplicate_columns gets here with non-unique columns for name, data in self: fres = func(data, *args, **kwargs) result[name] = fres else: + # we get here in a number of test_multilevel tests for name in self.indices: data = self.get_group(name, obj=obj) fres = func(data, *args, **kwargs) result[name] = fres - return self._wrap_frame_output(result, obj) + result_index = self.grouper.result_index + other_ax = obj.axes[1 - self.axis] + out = self.obj._constructor(result, index=other_ax, columns=result_index) + if self.axis == 0: + out = out.T + + return out def _aggregate_item_by_item(self, func, *args, **kwargs) -> DataFrame: # only for axis==0 @@ -1568,16 +1574,6 @@ def _gotitem(self, key, ndim: int, subset=None): raise AssertionError("invalid ndim for _gotitem") - def _wrap_frame_output(self, result: dict, obj: DataFrame) -> DataFrame: - result_index = self.grouper.levels[0] - - if self.axis == 0: - return self.obj._constructor( - result, index=obj.columns, columns=result_index - ).T - else: - return self.obj._constructor(result, index=obj.index, columns=result_index) - def _get_data_to_aggregate(self) -> Manager2D: obj = self._obj_with_exclusions if self.axis == 1:
There are too many `_wrap_foo_output` methods, and this one is only used in one place.
https://api.github.com/repos/pandas-dev/pandas/pulls/41461
2021-05-13T22:02:52Z
2021-05-13T23:23:22Z
2021-05-13T23:23:22Z
2021-05-13T23:29:41Z
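The inlined `_wrap_frame_output` logic above boils down to building a frame from a dict of per-group results and transposing in the `axis == 0` case. A minimal standalone sketch (hypothetical group names; assumes pandas):

```python
import pandas as pd

# Per-group results keyed by group name; the index is the "other" axis
# (the columns of the original frame when axis == 0).
result = {
    "g1": pd.Series([1.0, 2.0], index=["A", "B"]),
    "g2": pd.Series([3.0, 4.0], index=["A", "B"]),
}
result_index = ["g1", "g2"]  # stands in for grouper.result_index
other_ax = ["A", "B"]        # stands in for obj.axes[1 - axis]

out = pd.DataFrame(result, index=other_ax, columns=result_index)
out = out.T  # axis == 0: transpose so groups become the row index
```

The refactor keeps exactly this construction but reads `result_index` from `grouper.result_index` instead of `grouper.levels[0]`.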
Backport PR #41452 on branch 1.2.x (CI: Pin jinja2 to version lower than 3.0)
diff --git a/environment.yml b/environment.yml index 61c8351070de9..15c1611169427 100644 --- a/environment.yml +++ b/environment.yml @@ -77,7 +77,7 @@ dependencies: - bottleneck>=1.2.1 - ipykernel - ipython>=7.11.1 - - jinja2 # pandas.Styler + - jinja2<3.0.0 # pandas.Styler - matplotlib>=2.2.2 # pandas.plotting, Series.plot, DataFrame.plot - numexpr>=2.6.8 - scipy>=1.2 diff --git a/requirements-dev.txt b/requirements-dev.txt index 595b2ee537e63..f026fd421f937 100644 --- a/requirements-dev.txt +++ b/requirements-dev.txt @@ -49,7 +49,7 @@ blosc bottleneck>=1.2.1 ipykernel ipython>=7.11.1 -jinja2 +jinja2<3.0.0 matplotlib>=2.2.2 numexpr>=2.6.8 scipy>=1.2
Backport PR #41452: CI: Pin jinja2 to version lower than 3.0
https://api.github.com/repos/pandas-dev/pandas/pulls/41460
2021-05-13T20:49:45Z
2021-05-13T22:17:36Z
2021-05-13T22:17:36Z
2021-05-24T10:13:12Z
DEPR: DatetimeIndex.union with mixed timezones
diff --git a/doc/source/whatsnew/v1.3.0.rst b/doc/source/whatsnew/v1.3.0.rst index 6ceae4dfd8a91..427b16cc02953 100644 --- a/doc/source/whatsnew/v1.3.0.rst +++ b/doc/source/whatsnew/v1.3.0.rst @@ -645,6 +645,7 @@ Deprecations - The ``inplace`` parameter of :meth:`Categorical.remove_categories`, :meth:`Categorical.add_categories`, :meth:`Categorical.reorder_categories`, :meth:`Categorical.rename_categories`, :meth:`Categorical.set_categories` is deprecated and will be removed in a future version (:issue:`37643`) - Deprecated :func:`merge` producing duplicated columns through the ``suffixes`` keyword and already existing columns (:issue:`22818`) - Deprecated setting :attr:`Categorical._codes`, create a new :class:`Categorical` with the desired codes instead (:issue:`40606`) +- Deprecated behavior of :meth:`DatetimeIndex.union` with mixed timezones; in a future version both will be cast to UTC instead of object dtype (:issue:`39328`) - Deprecated using ``usecols`` with out of bounds indices for ``read_csv`` with ``engine="c"`` (:issue:`25623`) .. --------------------------------------------------------------------------- diff --git a/pandas/core/indexes/base.py b/pandas/core/indexes/base.py index 83946491f32a8..a366b49ce3c55 100644 --- a/pandas/core/indexes/base.py +++ b/pandas/core/indexes/base.py @@ -2930,6 +2930,21 @@ def union(self, other, sort=None): "Can only union MultiIndex with MultiIndex or Index of tuples, " "try mi.to_flat_index().union(other) instead." ) + if ( + isinstance(self, ABCDatetimeIndex) + and isinstance(other, ABCDatetimeIndex) + and self.tz is not None + and other.tz is not None + ): + # GH#39328 + warnings.warn( + "In a future version, the union of DatetimeIndex objects " + "with mismatched timezones will cast both to UTC instead of " + "object dtype. 
To retain the old behavior, " + "use `index.astype(object).union(other)`", + FutureWarning, + stacklevel=2, + ) dtype = find_common_type([self.dtype, other.dtype]) if self._is_numeric_dtype and other._is_numeric_dtype: diff --git a/pandas/tests/indexes/datetimes/test_timezones.py b/pandas/tests/indexes/datetimes/test_timezones.py index bc01d44de0529..8bba11786e3e5 100644 --- a/pandas/tests/indexes/datetimes/test_timezones.py +++ b/pandas/tests/indexes/datetimes/test_timezones.py @@ -1146,7 +1146,10 @@ def test_dti_union_aware(self): rng2 = date_range("2012-11-15 12:00:00", periods=6, freq="H", tz="US/Eastern") - result = rng.union(rng2) + with tm.assert_produces_warning(FutureWarning): + # # GH#39328 will cast both to UTC + result = rng.union(rng2) + expected = rng.astype("O").union(rng2.astype("O")) tm.assert_index_equal(result, expected) assert result[0].tz.zone == "US/Central"
- [x] closes #39328 - [x] tests added / passed - [x] Ensure all linting tests pass, see [here](https://pandas.pydata.org/pandas-docs/dev/development/contributing.html#code-standards) for how to run them - [x] whatsnew entry
https://api.github.com/repos/pandas-dev/pandas/pulls/41458
2021-05-13T19:59:06Z
2021-05-14T16:09:46Z
2021-05-14T16:09:46Z
2021-05-14T16:26:06Z
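A minimal reproduction of the behavior being deprecated (assumes pandas; the result dtype depends on the pandas version — object dtype before the change, both sides cast to UTC after it):

```python
import pandas as pd

left = pd.date_range("2021-01-01", periods=3, tz="US/Eastern")
right = pd.date_range("2021-01-01", periods=3, tz="UTC")

# Mixed timezones: older pandas returns an object-dtype Index (emitting a
# FutureWarning once this PR landed); newer pandas casts both sides to UTC.
# Either way the union contains every distinct instant from both sides.
result = left.union(right)
```

Here none of the Eastern midnights coincide with the UTC midnights, so the union holds all six instants regardless of which dtype the version produces.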
[ArrowStringArray] PERF: Series.str.get_dummies
diff --git a/asv_bench/benchmarks/strings.py b/asv_bench/benchmarks/strings.py index 79ea2a4fba284..0f68d1043b49d 100644 --- a/asv_bench/benchmarks/strings.py +++ b/asv_bench/benchmarks/strings.py @@ -249,10 +249,18 @@ def time_rsplit(self, dtype, expand): class Dummies: - def setup(self): - self.s = Series(tm.makeStringIndex(10 ** 5)).str.join("|") + params = ["str", "string", "arrow_string"] + param_names = ["dtype"] + + def setup(self, dtype): + from pandas.core.arrays.string_arrow import ArrowStringDtype # noqa: F401 + + try: + self.s = Series(tm.makeStringIndex(10 ** 5), dtype=dtype).str.join("|") + except ImportError: + raise NotImplementedError - def time_get_dummies(self): + def time_get_dummies(self, dtype): self.s.str.get_dummies("|") diff --git a/pandas/core/strings/accessor.py b/pandas/core/strings/accessor.py index 2646ddfa45b58..3b4549b55d1aa 100644 --- a/pandas/core/strings/accessor.py +++ b/pandas/core/strings/accessor.py @@ -24,6 +24,7 @@ is_categorical_dtype, is_integer, is_list_like, + is_object_dtype, is_re, ) from pandas.core.dtypes.generic import ( @@ -265,7 +266,11 @@ def _wrap_result( # infer from ndim if expand is not specified expand = result.ndim != 1 - elif expand is True and not isinstance(self._orig, ABCIndex): + elif ( + expand is True + and is_object_dtype(result) + and not isinstance(self._orig, ABCIndex) + ): # required when expand=True is explicitly specified # not needed when inferred diff --git a/pandas/tests/strings/test_strings.py b/pandas/tests/strings/test_strings.py index 5d8a63fe481f8..86c90398d0259 100644 --- a/pandas/tests/strings/test_strings.py +++ b/pandas/tests/strings/test_strings.py @@ -301,17 +301,19 @@ def test_isnumeric(any_string_dtype): tm.assert_series_equal(s.str.isdecimal(), Series(decimal_e, dtype=dtype)) -def test_get_dummies(): - s = Series(["a|b", "a|c", np.nan]) +def test_get_dummies(any_string_dtype): + s = Series(["a|b", "a|c", np.nan], dtype=any_string_dtype) result = s.str.get_dummies("|") expected 
= DataFrame([[1, 1, 0], [1, 0, 1], [0, 0, 0]], columns=list("abc")) tm.assert_frame_equal(result, expected) - s = Series(["a;b", "a", 7]) + s = Series(["a;b", "a", 7], dtype=any_string_dtype) result = s.str.get_dummies(";") expected = DataFrame([[0, 1, 1], [0, 1, 0], [1, 0, 0]], columns=list("7ab")) tm.assert_frame_equal(result, expected) + +def test_get_dummies_index(): # GH9980, GH8028 idx = Index(["a|b", "a|c", "b|c"]) result = idx.str.get_dummies("|") @@ -322,14 +324,18 @@ def test_get_dummies(): tm.assert_index_equal(result, expected) -def test_get_dummies_with_name_dummy(): +def test_get_dummies_with_name_dummy(any_string_dtype): # GH 12180 # Dummies named 'name' should work as expected - s = Series(["a", "b,name", "b"]) + s = Series(["a", "b,name", "b"], dtype=any_string_dtype) result = s.str.get_dummies(",") expected = DataFrame([[1, 0, 0], [0, 1, 1], [0, 1, 0]], columns=["a", "b", "name"]) tm.assert_frame_equal(result, expected) + +def test_get_dummies_with_name_dummy_index(): + # GH 12180 + # Dummies named 'name' should work as expected idx = Index(["a|b", "name|c", "b|name"]) result = idx.str.get_dummies("|")
Planning to remove the padding code from `_wrap_result` eventually, but until then we can skip it when we return an integer array from `get_dummies`. Adding tests and benchmarks as a precursor to potential changes to `_wrap_result` (#41372). I have a working implementation for ArrowStringArray using pyarrow native functions, but it is slower than the object fallback, so am leaving that for a follow-up. ``` before after ratio [4ec6925c] [091b0b02] <master> <get_dummies> - 2.58±0.02s 655±10ms 0.25 strings.Dummies.time_get_dummies('arrow_string') - 2.58±0.03s 643±9ms 0.25 strings.Dummies.time_get_dummies('string') - 2.59±0.07s 638±7ms 0.25 strings.Dummies.time_get_dummies('str') SOME BENCHMARKS HAVE CHANGED SIGNIFICANTLY. PERFORMANCE INCREASED. ```
https://api.github.com/repos/pandas-dev/pandas/pulls/41455
2021-05-13T18:28:36Z
2021-05-13T23:25:14Z
2021-05-13T23:25:14Z
2021-05-14T09:17:56Z
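As a companion to the tests above, the core `str.get_dummies` behavior being benchmarked (assumes pandas; the PR's point is that the result is already an integer frame, so no object-dtype padding is needed):

```python
import pandas as pd

s = pd.Series(["a|b", "a|c", None])
dummies = s.str.get_dummies("|")
# Columns are the sorted distinct tokens split on the separator;
# missing values produce all-zero rows.
```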
DOC: Complete first sentence in DataFrame.hist (#41421).
diff --git a/pandas/plotting/_core.py b/pandas/plotting/_core.py index 55097054fec88..27f8835968b54 100644 --- a/pandas/plotting/_core.py +++ b/pandas/plotting/_core.py @@ -132,7 +132,7 @@ def hist_frame( **kwargs, ): """ - Make a histogram of the DataFrame's. + Make a histogram of the DataFrame's columns. A `histogram`_ is a representation of the distribution of data. This function calls :meth:`matplotlib.pyplot.hist`, on each series in @@ -144,7 +144,7 @@ def hist_frame( ---------- data : DataFrame The pandas object holding the data. - column : str or sequence + column : str or sequence, optional If passed, will be used to limit data to a subset of columns. by : object, optional If passed, then used to form histograms for separate groups. @@ -171,7 +171,7 @@ def hist_frame( sharey : bool, default False In case subplots=True, share y axis and set some y axis labels to invisible. - figsize : tuple + figsize : tuple, optional The size in inches of the figure to create. Uses the value in `matplotlib.rcParams` by default. layout : tuple, optional
Complete sentence and mark column and figsize as optional parameters, following suggested fix. - [ ] closes #41421 - [ ] tests added / passed - [ ] Ensure all linting tests pass, see [here](https://pandas.pydata.org/pandas-docs/dev/development/contributing.html#code-standards) for how to run them - [ ] whatsnew entry
https://api.github.com/repos/pandas-dev/pandas/pulls/41454
2021-05-13T17:34:26Z
2021-05-14T16:09:24Z
2021-05-14T16:09:23Z
2021-05-14T16:09:27Z
CI: Unpin nbformat after bugfix releases
diff --git a/environment.yml b/environment.yml index 30fa7c0dea696..1347ed696f1c2 100644 --- a/environment.yml +++ b/environment.yml @@ -70,7 +70,7 @@ dependencies: # unused (required indirectly may be?) - ipywidgets - - nbformat=5.0.8 + - nbformat - notebook>=5.7.5 - pip diff --git a/requirements-dev.txt b/requirements-dev.txt index 3e421c7715566..a53bedb87241d 100644 --- a/requirements-dev.txt +++ b/requirements-dev.txt @@ -44,7 +44,7 @@ pytest-instafail seaborn statsmodels ipywidgets -nbformat==5.0.8 +nbformat notebook>=5.7.5 pip blosc
- [x] closes #39176 Pinned this a few months ago, checking if new releases fixed the bugs
https://api.github.com/repos/pandas-dev/pandas/pulls/41453
2021-05-13T13:52:58Z
2021-05-13T17:48:57Z
2021-05-13T17:48:56Z
2021-05-13T17:50:16Z
CI: Pin jinja2 to version lower than 3.0
diff --git a/environment.yml b/environment.yml index 30fa7c0dea696..99ce0d9f9ea01 100644 --- a/environment.yml +++ b/environment.yml @@ -79,7 +79,7 @@ dependencies: - bottleneck>=1.2.1 - ipykernel - ipython>=7.11.1 - - jinja2 # pandas.Styler + - jinja2<3.0.0 # pandas.Styler - matplotlib>=2.2.2 # pandas.plotting, Series.plot, DataFrame.plot - numexpr>=2.6.8 - scipy>=1.2 diff --git a/requirements-dev.txt b/requirements-dev.txt index 3e421c7715566..6ba867f470f8f 100644 --- a/requirements-dev.txt +++ b/requirements-dev.txt @@ -51,7 +51,7 @@ blosc bottleneck>=1.2.1 ipykernel ipython>=7.11.1 -jinja2 +jinja2<3.0.0 matplotlib>=2.2.2 numexpr>=2.6.8 scipy>=1.2
- [x] xref #41450
https://api.github.com/repos/pandas-dev/pandas/pulls/41452
2021-05-13T12:39:19Z
2021-05-13T13:43:03Z
2021-05-13T13:43:03Z
2021-05-24T10:12:29Z
WEB: Fix maintainers grid not displaying correctly (GH41438)
diff --git a/web/pandas/about/team.md b/web/pandas/about/team.md index 39f63202e1986..c8318dd8758ed 100644 --- a/web/pandas/about/team.md +++ b/web/pandas/about/team.md @@ -8,30 +8,22 @@ If you want to support pandas development, you can find information in the [dona ## Maintainers -<div class="row maintainers"> - {% for row in maintainers.people | batch(6, "") %} - <div class="card-group maintainers"> - {% for person in row %} - {% if person %} - <div class="card"> - <img class="card-img-top" alt="" src="{{ person.avatar_url }}"/> - <div class="card-body"> - <h6 class="card-title"> - {% if person.blog %} - <a href="{{ person.blog }}"> - {{ person.name or person.login }} - </a> - {% else %} - {{ person.name or person.login }} - {% endif %} - </h6> - <p class="card-text small"><a href="{{ person.html_url }}">{{ person.login }}</a></p> - </div> - </div> - {% else %} - <div class="card border-0"></div> - {% endif %} - {% endfor %} +<div class="card-group maintainers"> + {% for person in maintainers.people %} + <div class="card"> + <img class="card-img-top" alt="" src="{{ person.avatar_url }}"/> + <div class="card-body"> + <h6 class="card-title"> + {% if person.blog %} + <a href="{{ person.blog }}"> + {{ person.name or person.login }} + </a> + {% else %} + {{ person.name or person.login }} + {% endif %} + </h6> + <p class="card-text small"><a href="{{ person.html_url }}">{{ person.login }}</a></p> + </div> </div> {% endfor %} </div> diff --git a/web/pandas/static/css/pandas.css b/web/pandas/static/css/pandas.css index d76d1a0befeba..459f006db5727 100644 --- a/web/pandas/static/css/pandas.css +++ b/web/pandas/static/css/pandas.css @@ -45,6 +45,12 @@ a.navbar-brand img { div.card { margin: 0 0 .2em .2em !important; } +@media (min-width: 576px) { + .card-group.maintainers div.card { + min-width: 10rem; + max-width: 10rem; + } +} div.card .card-title { font-weight: 500; color: #130654;
- [x] closes #41438 - [x] Ensure all linting tests pass, see [here](https://pandas.pydata.org/pandas-docs/dev/development/contributing.html#code-standards) for how to run them Screenshot taken on Google Chrome 90.0.4430.93 ![screenshot](https://user-images.githubusercontent.com/18680207/118068760-ef740800-b370-11eb-902d-237da39e2b7f.png) To reproduce, - go to https://pandas.pydata.org/about/team.html - open developer tools - go to elements tab - navigate to <div class="row maintainers"> - edit as HTML - delete <div class="row maintainers"> and the closing tag at the end
https://api.github.com/repos/pandas-dev/pandas/pulls/41447
2021-05-13T02:33:03Z
2021-05-14T20:06:42Z
2021-05-14T20:06:42Z
2021-05-14T21:24:37Z
BUG: Raise ValueError if names and prefix are both defined
diff --git a/doc/source/whatsnew/v1.3.0.rst b/doc/source/whatsnew/v1.3.0.rst index 84f9dae8a0850..6fbaa85c7adcd 100644 --- a/doc/source/whatsnew/v1.3.0.rst +++ b/doc/source/whatsnew/v1.3.0.rst @@ -840,6 +840,7 @@ I/O - Bug in :func:`read_excel` raising ``AttributeError`` with ``MultiIndex`` header followed by two empty rows and no index, and bug affecting :func:`read_excel`, :func:`read_csv`, :func:`read_table`, :func:`read_fwf`, and :func:`read_clipboard` where one blank row after a ``MultiIndex`` header with no index would be dropped (:issue:`40442`) - Bug in :meth:`DataFrame.to_string` misplacing the truncation column when ``index=False`` (:issue:`40907`) - Bug in :func:`read_orc` always raising ``AttributeError`` (:issue:`40918`) +- Bug in :func:`read_csv` and :func:`read_table` silently ignoring ``prefix`` if ``names`` and ``prefix`` are defined, now raising ``ValueError`` (:issue:`39123`) - Bug in :func:`read_csv` and :func:`read_excel` not respecting dtype for duplicated column name when ``mangle_dupe_cols`` is set to ``True`` (:issue:`35211`) - Bug in :func:`read_csv` and :func:`read_table` misinterpreting arguments when ``sys.setprofile`` had been previously called (:issue:`41069`) - Bug in the conversion from pyarrow to pandas (e.g. 
for reading Parquet) with nullable dtypes and a pyarrow array whose data buffer size is not a multiple of dtype size (:issue:`40896`) diff --git a/pandas/io/parsers/readers.py b/pandas/io/parsers/readers.py index 55e3e14a0969d..9f7539f575308 100644 --- a/pandas/io/parsers/readers.py +++ b/pandas/io/parsers/readers.py @@ -20,6 +20,7 @@ import pandas._libs.lib as lib from pandas._libs.parsers import STR_NA_VALUES from pandas._typing import ( + ArrayLike, DtypeArg, FilePathOrBuffer, StorageOptions, @@ -485,11 +486,11 @@ def read_csv( delimiter=None, # Column and Index Locations and Names header="infer", - names=None, + names=lib.no_default, index_col=None, usecols=None, squeeze=False, - prefix=None, + prefix=lib.no_default, mangle_dupe_cols=True, # General Parsing Configuration dtype: Optional[DtypeArg] = None, @@ -546,7 +547,14 @@ def read_csv( del kwds["sep"] kwds_defaults = _refine_defaults_read( - dialect, delimiter, delim_whitespace, engine, sep, defaults={"delimiter": ","} + dialect, + delimiter, + delim_whitespace, + engine, + sep, + names, + prefix, + defaults={"delimiter": ","}, ) kwds.update(kwds_defaults) @@ -567,11 +575,11 @@ def read_table( delimiter=None, # Column and Index Locations and Names header="infer", - names=None, + names=lib.no_default, index_col=None, usecols=None, squeeze=False, - prefix=None, + prefix=lib.no_default, mangle_dupe_cols=True, # General Parsing Configuration dtype: Optional[DtypeArg] = None, @@ -627,7 +635,14 @@ def read_table( del kwds["sep"] kwds_defaults = _refine_defaults_read( - dialect, delimiter, delim_whitespace, engine, sep, defaults={"delimiter": "\t"} + dialect, + delimiter, + delim_whitespace, + engine, + sep, + names, + prefix, + defaults={"delimiter": "\t"}, ) kwds.update(kwds_defaults) @@ -1174,6 +1189,8 @@ def _refine_defaults_read( delim_whitespace: bool, engine: str, sep: Union[str, object], + names: Union[Optional[ArrayLike], object], + prefix: Union[Optional[str], object], defaults: Dict[str, Any], ): 
"""Validate/refine default values of input parameters of read_csv, read_table. @@ -1199,6 +1216,12 @@ def _refine_defaults_read( sep : str or object A delimiter provided by the user (str) or a sentinel value, i.e. pandas._libs.lib.no_default. + names : array-like, optional + List of column names to use. If the file contains a header row, + then you should explicitly pass ``header=0`` to override the column names. + Duplicates in this list are not allowed. + prefix : str, optional + Prefix to add to column numbers when no header, e.g. 'X' for X0, X1, ... defaults: dict Default values of input parameters. @@ -1232,6 +1255,12 @@ def _refine_defaults_read( sep is lib.no_default or sep == delim_default ) + if names is not lib.no_default and prefix is not lib.no_default: + raise ValueError("Specified named and prefix; you can only specify one.") + + kwds["names"] = None if names is lib.no_default else names + kwds["prefix"] = None if prefix is lib.no_default else prefix + # Alias sep -> delimiter. if delimiter is None: delimiter = sep diff --git a/pandas/tests/io/parser/common/test_common_basic.py b/pandas/tests/io/parser/common/test_common_basic.py index fe68597d11f0b..ed395df53432e 100644 --- a/pandas/tests/io/parser/common/test_common_basic.py +++ b/pandas/tests/io/parser/common/test_common_basic.py @@ -740,6 +740,18 @@ def test_read_table_delim_whitespace_non_default_sep(all_parsers, delimiter): parser.read_table(f, delim_whitespace=True, delimiter=delimiter) +@pytest.mark.parametrize("func", ["read_csv", "read_table"]) +@pytest.mark.parametrize("prefix", [None, "x"]) +@pytest.mark.parametrize("names", [None, ["a"]]) +def test_names_and_prefix_not_lib_no_default(all_parsers, names, prefix, func): + # GH#39123 + f = StringIO("a,b\n1,2") + parser = all_parsers + msg = "Specified named and prefix; you can only specify one." 
+ with pytest.raises(ValueError, match=msg): + getattr(parser, func)(f, names=names, prefix=prefix) + + def test_dict_keys_as_names(all_parsers): # GH: 36928 data = "1,2"
- [x] closes #39123 - [x] tests added / passed - [x] Ensure all linting tests pass, see [here](https://pandas.pydata.org/pandas-docs/dev/development/contributing.html#code-standards) for how to run them - [x] whatsnew entry
https://api.github.com/repos/pandas-dev/pandas/pulls/41446
2021-05-12T22:52:11Z
2021-05-14T16:05:15Z
2021-05-14T16:05:15Z
2023-04-27T19:52:24Z
BUG: resample.apply with non-unique columns
diff --git a/doc/source/whatsnew/v1.3.0.rst b/doc/source/whatsnew/v1.3.0.rst index 84f9dae8a0850..793818419c910 100644 --- a/doc/source/whatsnew/v1.3.0.rst +++ b/doc/source/whatsnew/v1.3.0.rst @@ -901,6 +901,7 @@ Groupby/resample/rolling - Bug in :meth:`DataFrameGroupBy.rank` with the GroupBy object's ``axis=0`` and the ``rank`` method's keyword ``axis=1`` (:issue:`41320`) - Bug in :meth:`DataFrameGroupBy.__getitem__` with non-unique columns incorrectly returning a malformed :class:`SeriesGroupBy` instead of :class:`DataFrameGroupBy` (:issue:`41427`) - Bug in :meth:`DataFrameGroupBy.transform` with non-unique columns incorrectly raising ``AttributeError`` (:issue:`41427`) +- Bug in :meth:`Resampler.apply` with non-unique columns incorrectly dropping duplicated columns (:issue:`41445`) Reshaping ^^^^^^^^^ diff --git a/pandas/core/groupby/generic.py b/pandas/core/groupby/generic.py index cb5b54ca0c598..71d19cdd877a6 100644 --- a/pandas/core/groupby/generic.py +++ b/pandas/core/groupby/generic.py @@ -1106,6 +1106,7 @@ def _aggregate_frame(self, func, *args, **kwargs) -> DataFrame: result: dict[Hashable, NDFrame | np.ndarray] = {} if axis != obj._info_axis_number: + # test_pass_args_kwargs_duplicate_columns gets here with non-unique columns for name, data in self: fres = func(data, *args, **kwargs) result[name] = fres @@ -1119,18 +1120,23 @@ def _aggregate_frame(self, func, *args, **kwargs) -> DataFrame: def _aggregate_item_by_item(self, func, *args, **kwargs) -> DataFrame: # only for axis==0 + # tests that get here with non-unique cols: + # test_resample_with_timedelta_yields_no_empty_groups, + # test_resample_apply_product obj = self._obj_with_exclusions result: dict[int | str, NDFrame] = {} - for item in obj: - data = obj[item] - colg = SeriesGroupBy(data, selection=item, grouper=self.grouper) - - result[item] = colg.aggregate(func, *args, **kwargs) + for i, item in enumerate(obj): + ser = obj.iloc[:, i] + colg = SeriesGroupBy( + ser, selection=item, 
grouper=self.grouper, exclusions=self.exclusions + ) - result_columns = obj.columns + result[i] = colg.aggregate(func, *args, **kwargs) - return self.obj._constructor(result, columns=result_columns) + res_df = self.obj._constructor(result) + res_df.columns = obj.columns + return res_df def _wrap_applied_output(self, data, keys, values, not_indexed_same=False): if len(keys) == 0: @@ -1401,6 +1407,7 @@ def _choose_path(self, fast_path: Callable, slow_path: Callable, group: DataFram def _transform_item_by_item(self, obj: DataFrame, wrapper) -> DataFrame: # iterate through columns, see test_transform_exclude_nuisance + # gets here with non-unique columns output = {} inds = [] for i, col in enumerate(obj): diff --git a/pandas/tests/groupby/test_groupby.py b/pandas/tests/groupby/test_groupby.py index 4368e57a7da4d..83aeb29ec53df 100644 --- a/pandas/tests/groupby/test_groupby.py +++ b/pandas/tests/groupby/test_groupby.py @@ -248,6 +248,26 @@ def f(x, q=None, axis=0): tm.assert_frame_equal(apply_result, expected, check_names=False) +@pytest.mark.parametrize("as_index", [True, False]) +def test_pass_args_kwargs_duplicate_columns(tsframe, as_index): + # go through _aggregate_frame with self.axis == 0 and duplicate columns + tsframe.columns = ["A", "B", "A", "C"] + gb = tsframe.groupby(lambda x: x.month, as_index=as_index) + + res = gb.agg(np.percentile, 80, axis=0) + + ex_data = { + 1: tsframe[tsframe.index.month == 1].quantile(0.8), + 2: tsframe[tsframe.index.month == 2].quantile(0.8), + } + expected = DataFrame(ex_data).T + if not as_index: + # TODO: try to get this more consistent? 
+ expected.index = Index(range(2)) + + tm.assert_frame_equal(res, expected) + + def test_len(): df = tm.makeTimeDataFrame() grouped = df.groupby([lambda x: x.year, lambda x: x.month, lambda x: x.day]) diff --git a/pandas/tests/resample/test_datetime_index.py b/pandas/tests/resample/test_datetime_index.py index 66cb2f2291e98..1c7aa5c444da9 100644 --- a/pandas/tests/resample/test_datetime_index.py +++ b/pandas/tests/resample/test_datetime_index.py @@ -1748,19 +1748,23 @@ def test_get_timestamp_range_edges(first, last, freq, exp_first, exp_last): assert result == expected -def test_resample_apply_product(): +@pytest.mark.parametrize("duplicates", [True, False]) +def test_resample_apply_product(duplicates): # GH 5586 index = date_range(start="2012-01-31", freq="M", periods=12) ts = Series(range(12), index=index) df = DataFrame({"A": ts, "B": ts + 2}) + if duplicates: + df.columns = ["A", "A"] + result = df.resample("Q").apply(np.product) expected = DataFrame( np.array([[0, 24], [60, 210], [336, 720], [990, 1716]], dtype=np.int64), index=DatetimeIndex( ["2012-03-31", "2012-06-30", "2012-09-30", "2012-12-31"], freq="Q-DEC" ), - columns=["A", "B"], + columns=df.columns, ) tm.assert_frame_equal(result, expected) diff --git a/pandas/tests/resample/test_timedelta.py b/pandas/tests/resample/test_timedelta.py index b1560623cd871..e127f69b12674 100644 --- a/pandas/tests/resample/test_timedelta.py +++ b/pandas/tests/resample/test_timedelta.py @@ -153,18 +153,24 @@ def test_resample_timedelta_edge_case(start, end, freq, resample_freq): assert not np.isnan(result[-1]) -def test_resample_with_timedelta_yields_no_empty_groups(): +@pytest.mark.parametrize("duplicates", [True, False]) +def test_resample_with_timedelta_yields_no_empty_groups(duplicates): # GH 10603 df = DataFrame( np.random.normal(size=(10000, 4)), index=timedelta_range(start="0s", periods=10000, freq="3906250n"), ) + if duplicates: + # case with non-unique columns + df.columns = ["A", "B", "A", "C"] + result = 
df.loc["1s":, :].resample("3s").apply(lambda x: len(x)) expected = DataFrame( [[768] * 4] * 12 + [[528] * 4], index=timedelta_range(start="1s", periods=13, freq="3s"), ) + expected.columns = df.columns tm.assert_frame_equal(result, expected)
- [ ] closes #xxxx - [x] tests added / passed - [x] Ensure all linting tests pass, see [here](https://pandas.pydata.org/pandas-docs/dev/development/contributing.html#code-standards) for how to run them - [x] whatsnew entry
https://api.github.com/repos/pandas-dev/pandas/pulls/41445
2021-05-12T22:44:38Z
2021-05-13T17:53:29Z
2021-05-13T17:53:29Z
2021-05-13T17:59:00Z
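The core of the fix in the record above is iterating columns by position (`enumerate` plus `iloc[:, i]`) rather than by label, so duplicated column names are not collapsed when results are reassembled. A minimal pure-Python sketch of that idea (hypothetical helper, not pandas code):

```python
def aggregate_by_position(columns, rows, func):
    """Aggregate each column positionally so duplicate labels survive.

    `columns` may contain repeated names; selecting columns by label
    (dict-style, as the old code effectively did) would collapse
    duplicates -- the bug the PR fixes.
    """
    n = len(columns)
    out = [func([row[i] for row in rows]) for i in range(n)]
    return list(columns), out


cols, agg = aggregate_by_position(["A", "A", "C"], [[1, 2, 3], [4, 5, 6]], sum)
print(cols)  # ['A', 'A', 'C']
print(agg)   # [5, 7, 9]
```

Keying intermediate results by integer position and reattaching the original labels afterwards mirrors the `result[i] = ...` / `res_df.columns = obj.columns` steps in the diff.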
DOC: Improve examples of df.append to better show the ignore_index param
diff --git a/pandas/core/frame.py b/pandas/core/frame.py index 2941b6ac01904..d90487647d35b 100644 --- a/pandas/core/frame.py +++ b/pandas/core/frame.py @@ -8793,6 +8793,7 @@ def append( Returns ------- DataFrame + A new DataFrame consisting of the rows of caller and the rows of `other`. See Also -------- @@ -8811,18 +8812,18 @@ def append( Examples -------- - >>> df = pd.DataFrame([[1, 2], [3, 4]], columns=list('AB')) + >>> df = pd.DataFrame([[1, 2], [3, 4]], columns=list('AB'), index=['x', 'y']) >>> df A B - 0 1 2 - 1 3 4 - >>> df2 = pd.DataFrame([[5, 6], [7, 8]], columns=list('AB')) + x 1 2 + y 3 4 + >>> df2 = pd.DataFrame([[5, 6], [7, 8]], columns=list('AB'), index=['x', 'y']) >>> df.append(df2) A B - 0 1 2 - 1 3 4 - 0 5 6 - 1 7 8 + x 1 2 + y 3 4 + x 5 6 + y 7 8 With `ignore_index` set to True:
Change index to [0, 2] to stress that ignore_index=True resets the index of both dataframes - [x] closes #41407 - [x] tests passed - [x] Ensure all linting tests pass, see [here](https://pandas.pydata.org/pandas-docs/dev/development/contributing.html#code-standards) for how to run them - [x] whatsnew entry Test Output: ``` ################################################################################ ################################## Validation ################################## ################################################################################ 1 Errors found: Return value has no description ```
https://api.github.com/repos/pandas-dev/pandas/pulls/41444
2021-05-12T21:26:53Z
2021-05-17T13:18:28Z
2021-05-17T13:18:28Z
2021-05-17T13:18:33Z
Revert "Pin fastparquet to leq 0.5.0"
diff --git a/ci/deps/actions-37-db.yaml b/ci/deps/actions-37-db.yaml index edca7b51a3420..8755e1a02c3cf 100644 --- a/ci/deps/actions-37-db.yaml +++ b/ci/deps/actions-37-db.yaml @@ -15,7 +15,7 @@ dependencies: - beautifulsoup4 - botocore>=1.11 - dask - - fastparquet>=0.4.0, <=0.5.0 + - fastparquet>=0.4.0 - fsspec>=0.7.4 - gcsfs>=0.6.0 - geopandas diff --git a/ci/deps/azure-windows-38.yaml b/ci/deps/azure-windows-38.yaml index fdea34d573340..661d8813d32d2 100644 --- a/ci/deps/azure-windows-38.yaml +++ b/ci/deps/azure-windows-38.yaml @@ -15,7 +15,7 @@ dependencies: # pandas dependencies - blosc - bottleneck - - fastparquet>=0.4.0, <=0.5.0 + - fastparquet>=0.4.0 - flask - fsspec>=0.8.0 - matplotlib=3.1.3 diff --git a/environment.yml b/environment.yml index 30fa7c0dea696..2e0228a15272e 100644 --- a/environment.yml +++ b/environment.yml @@ -99,7 +99,7 @@ dependencies: - xlwt - odfpy - - fastparquet>=0.3.2, <=0.5.0 # pandas.read_parquet, DataFrame.to_parquet + - fastparquet>=0.3.2 # pandas.read_parquet, DataFrame.to_parquet - pyarrow>=0.15.0 # pandas.read_parquet, DataFrame.to_parquet, pandas.read_feather, DataFrame.to_feather - python-snappy # required by pyarrow diff --git a/pandas/io/parquet.py b/pandas/io/parquet.py index 5ad014a334c27..34d5edee06791 100644 --- a/pandas/io/parquet.py +++ b/pandas/io/parquet.py @@ -327,9 +327,14 @@ def read( if is_fsspec_url(path): fsspec = import_optional_dependency("fsspec") - parquet_kwargs["open_with"] = lambda path, _: fsspec.open( - path, "rb", **(storage_options or {}) - ).open() + if Version(self.api.__version__) > Version("0.6.1"): + parquet_kwargs["fs"] = fsspec.open( + path, "rb", **(storage_options or {}) + ).fs + else: + parquet_kwargs["open_with"] = lambda path, _: fsspec.open( + path, "rb", **(storage_options or {}) + ).open() elif isinstance(path, str) and not os.path.isdir(path): # use get_handle only when we are very certain that it is not a directory # fsspec resources can also point to directories diff --git 
a/requirements-dev.txt b/requirements-dev.txt index 3e421c7715566..ea7ca43742934 100644 --- a/requirements-dev.txt +++ b/requirements-dev.txt @@ -64,7 +64,7 @@ xlrd xlsxwriter xlwt odfpy -fastparquet>=0.3.2, <=0.5.0 +fastparquet>=0.3.2 pyarrow>=0.15.0 python-snappy pyqt5>=5.9.2
closes #41366 Reverts pandas-dev/pandas#41370 Looks like fastparquet released a new version.
https://api.github.com/repos/pandas-dev/pandas/pulls/41443
2021-05-12T20:42:36Z
2021-05-14T17:13:47Z
2021-05-14T17:13:47Z
2021-06-01T14:56:11Z
BUG: Series.str.extract with StringArray returning object dtype
diff --git a/doc/source/whatsnew/v1.3.0.rst b/doc/source/whatsnew/v1.3.0.rst index 84f9dae8a0850..73a6360c361db 100644 --- a/doc/source/whatsnew/v1.3.0.rst +++ b/doc/source/whatsnew/v1.3.0.rst @@ -750,6 +750,7 @@ Strings - Bug in the conversion from ``pyarrow.ChunkedArray`` to :class:`~arrays.StringArray` when the original had zero chunks (:issue:`41040`) - Bug in :meth:`Series.replace` and :meth:`DataFrame.replace` ignoring replacements with ``regex=True`` for ``StringDType`` data (:issue:`41333`, :issue:`35977`) +- Bug in :meth:`Series.str.extract` with :class:`~arrays.StringArray` returning object dtype for empty :class:`DataFrame` (:issue:`41441`) Interval ^^^^^^^^ diff --git a/pandas/core/strings/accessor.py b/pandas/core/strings/accessor.py index 2646ddfa45b58..025ec232adcb5 100644 --- a/pandas/core/strings/accessor.py +++ b/pandas/core/strings/accessor.py @@ -3108,17 +3108,16 @@ def _str_extract_noexpand(arr, pat, flags=0): # error: Incompatible types in assignment (expression has type # "DataFrame", variable has type "ndarray") result = DataFrame( # type: ignore[assignment] - columns=columns, dtype=object + columns=columns, dtype=result_dtype ) else: - dtype = _result_dtype(arr) # error: Incompatible types in assignment (expression has type # "DataFrame", variable has type "ndarray") result = DataFrame( # type:ignore[assignment] [groups_or_na(val) for val in arr], columns=columns, index=arr.index, - dtype=dtype, + dtype=result_dtype, ) return result, name @@ -3135,19 +3134,19 @@ def _str_extract_frame(arr, pat, flags=0): regex = re.compile(pat, flags=flags) groups_or_na = _groups_or_na_fun(regex) columns = _get_group_names(regex) + result_dtype = _result_dtype(arr) if len(arr) == 0: - return DataFrame(columns=columns, dtype=object) + return DataFrame(columns=columns, dtype=result_dtype) try: result_index = arr.index except AttributeError: result_index = None - dtype = _result_dtype(arr) return DataFrame( [groups_or_na(val) for val in arr], columns=columns, 
index=result_index, - dtype=dtype, + dtype=result_dtype, ) diff --git a/pandas/tests/strings/test_strings.py b/pandas/tests/strings/test_strings.py index 5d8a63fe481f8..a18d54b4de44d 100644 --- a/pandas/tests/strings/test_strings.py +++ b/pandas/tests/strings/test_strings.py @@ -175,17 +175,19 @@ def test_empty_str_methods(any_string_dtype): tm.assert_series_equal(empty_str, empty.str.repeat(3)) tm.assert_series_equal(empty_bool, empty.str.match("^a")) tm.assert_frame_equal( - DataFrame(columns=[0], dtype=str), empty.str.extract("()", expand=True) + DataFrame(columns=[0], dtype=any_string_dtype), + empty.str.extract("()", expand=True), ) tm.assert_frame_equal( - DataFrame(columns=[0, 1], dtype=str), empty.str.extract("()()", expand=True) + DataFrame(columns=[0, 1], dtype=any_string_dtype), + empty.str.extract("()()", expand=True), ) tm.assert_series_equal(empty_str, empty.str.extract("()", expand=False)) tm.assert_frame_equal( - DataFrame(columns=[0, 1], dtype=str), + DataFrame(columns=[0, 1], dtype=any_string_dtype), empty.str.extract("()()", expand=False), ) - tm.assert_frame_equal(DataFrame(dtype=str), empty.str.get_dummies()) + tm.assert_frame_equal(DataFrame(), empty.str.get_dummies()) tm.assert_series_equal(empty_str, empty_str.str.join("")) tm.assert_series_equal(empty_int, empty.str.len()) tm.assert_series_equal(empty_object, empty_str.str.findall("a"))
test changes needed in #41372 revealed an issue with the current implementation. fixing for completeness and to add release note. the dtype for an empty DataFrame from .str.get_dummies looks iffy so marking this as draft until further investigation.
https://api.github.com/repos/pandas-dev/pandas/pulls/41441
2021-05-12T17:47:15Z
2021-05-14T00:26:37Z
2021-05-14T00:26:36Z
2021-05-14T09:14:28Z
TST: Added regression test
diff --git a/pandas/tests/indexing/test_datetime.py b/pandas/tests/indexing/test_datetime.py index e46eed05caa86..2aea2cc9b37cd 100644 --- a/pandas/tests/indexing/test_datetime.py +++ b/pandas/tests/indexing/test_datetime.py @@ -10,6 +10,21 @@ class TestDatetimeIndex: + def test_datetimeindex_transpose_empty_df(self): + """ + Regression test for: + https://github.com/pandas-dev/pandas/issues/41382 + """ + df = DataFrame(index=pd.DatetimeIndex([])) + + expected = pd.DatetimeIndex([], dtype="datetime64[ns]", freq=None) + + result1 = df.T.sum().index + result2 = df.sum(axis=1).index + + tm.assert_index_equal(result1, expected) + tm.assert_index_equal(result2, expected) + def test_indexing_with_datetime_tz(self): # GH#8260
- [x] closes #41382 - [x] tests added / passed - [x] Ensure all linting tests pass, see [here](https://pandas.pydata.org/pandas-docs/dev/development/contributing.html#code-standards) for how to run them
https://api.github.com/repos/pandas-dev/pandas/pulls/41436
2021-05-12T12:45:38Z
2021-05-13T18:47:29Z
2021-05-13T18:47:29Z
2021-05-19T12:11:26Z
TYP: define `subset` arg in `Styler`
diff --git a/pandas/io/formats/style.py b/pandas/io/formats/style.py index 8fc2825ffcfc5..a96196a698f43 100644 --- a/pandas/io/formats/style.py +++ b/pandas/io/formats/style.py @@ -43,6 +43,7 @@ CSSProperties, CSSStyles, StylerRenderer, + Subset, Tooltips, maybe_convert_css_to_tuples, non_reducing_slice, @@ -545,7 +546,7 @@ def _apply( self, func: Callable[..., Styler], axis: Axis | None = 0, - subset=None, + subset: Subset | None = None, **kwargs, ) -> Styler: subset = slice(None) if subset is None else subset @@ -590,7 +591,7 @@ def apply( self, func: Callable[..., Styler], axis: Axis | None = 0, - subset=None, + subset: Subset | None = None, **kwargs, ) -> Styler: """ @@ -651,7 +652,9 @@ def apply( ) return self - def _applymap(self, func: Callable, subset=None, **kwargs) -> Styler: + def _applymap( + self, func: Callable, subset: Subset | None = None, **kwargs + ) -> Styler: func = partial(func, **kwargs) # applymap doesn't take kwargs? if subset is None: subset = pd.IndexSlice[:] @@ -660,7 +663,9 @@ def _applymap(self, func: Callable, subset=None, **kwargs) -> Styler: self._update_ctx(result) return self - def applymap(self, func: Callable, subset=None, **kwargs) -> Styler: + def applymap( + self, func: Callable, subset: Subset | None = None, **kwargs + ) -> Styler: """ Apply a CSS-styling function elementwise. @@ -707,7 +712,7 @@ def where( cond: Callable, value: str, other: str | None = None, - subset=None, + subset: Subset | None = None, **kwargs, ) -> Styler: """ @@ -1061,7 +1066,7 @@ def hide_index(self) -> Styler: self.hidden_index = True return self - def hide_columns(self, subset) -> Styler: + def hide_columns(self, subset: Subset) -> Styler: """ Hide columns from rendering. 
@@ -1093,7 +1098,7 @@ def background_gradient( low: float = 0, high: float = 0, axis: Axis | None = 0, - subset=None, + subset: Subset | None = None, text_color_threshold: float = 0.408, vmin: float | None = None, vmax: float | None = None, @@ -1239,7 +1244,7 @@ def background_gradient( ) return self - def set_properties(self, subset=None, **kwargs) -> Styler: + def set_properties(self, subset: Subset | None = None, **kwargs) -> Styler: """ Set defined CSS-properties to each ``<td>`` HTML element within the given subset. @@ -1331,7 +1336,7 @@ def css(x): def bar( self, - subset=None, + subset: Subset | None = None, axis: Axis | None = 0, color="#d65f5f", width: float = 100, @@ -1417,7 +1422,7 @@ def bar( def highlight_null( self, null_color: str = "red", - subset: IndexLabel | None = None, + subset: Subset | None = None, props: str | None = None, ) -> Styler: """ @@ -1462,7 +1467,7 @@ def f(data: DataFrame, props: str) -> np.ndarray: def highlight_max( self, - subset: IndexLabel | None = None, + subset: Subset | None = None, color: str = "yellow", axis: Axis | None = 0, props: str | None = None, @@ -1511,7 +1516,7 @@ def f(data: FrameOrSeries, props: str) -> np.ndarray: def highlight_min( self, - subset: IndexLabel | None = None, + subset: Subset | None = None, color: str = "yellow", axis: Axis | None = 0, props: str | None = None, @@ -1560,7 +1565,7 @@ def f(data: FrameOrSeries, props: str) -> np.ndarray: def highlight_between( self, - subset: IndexLabel | None = None, + subset: Subset | None = None, color: str = "yellow", axis: Axis | None = 0, left: Scalar | Sequence | None = None, @@ -1667,7 +1672,7 @@ def highlight_between( def highlight_quantile( self, - subset: IndexLabel | None = None, + subset: Subset | None = None, color: str = "yellow", axis: Axis | None = 0, q_left: float = 0.0, diff --git a/pandas/io/formats/style_render.py b/pandas/io/formats/style_render.py index 6917daaede2c6..6f7d298c7dec0 100644 --- a/pandas/io/formats/style_render.py +++ 
b/pandas/io/formats/style_render.py @@ -55,6 +55,7 @@ class CSSDict(TypedDict): CSSStyles = List[CSSDict] +Subset = Union[slice, Sequence, Index] class StylerRenderer: @@ -402,7 +403,7 @@ def _translate_body( def format( self, formatter: ExtFormatter | None = None, - subset: slice | Sequence[Any] | None = None, + subset: Subset | None = None, na_rep: str | None = None, precision: int | None = None, decimal: str = ".", @@ -772,7 +773,7 @@ def _maybe_wrap_formatter( return lambda x: na_rep if isna(x) else func_2(x) -def non_reducing_slice(slice_): +def non_reducing_slice(slice_: Subset): """ Ensure that a slice doesn't reduce to a Series or Scalar. @@ -809,7 +810,9 @@ def pred(part) -> bool: # slice(a, b, c) slice_ = [slice_] # to tuplize later else: - slice_ = [part if pred(part) else [part] for part in slice_] + # error: Item "slice" of "Union[slice, Sequence[Any]]" has no attribute + # "__iter__" (not iterable) -> is specifically list_like in conditional + slice_ = [p if pred(p) else [p] for p in slice_] # type: ignore[union-attr] return tuple(slice_)
This works but someone with more knowledge might be able to point out a better definition of the type for `Subset`, i.e. here `Subset = Union[slice, Sequence, Index]` It types all the `subset` args in `Styler`
https://api.github.com/repos/pandas-dev/pandas/pulls/41433
2021-05-12T09:34:33Z
2021-05-17T17:15:59Z
2021-05-17T17:15:59Z
2021-05-22T15:09:03Z
TST: Add test for old issues
diff --git a/pandas/tests/arrays/sparse/test_array.py b/pandas/tests/arrays/sparse/test_array.py index a96e5b07b7f7e..b29855caf6c1d 100644 --- a/pandas/tests/arrays/sparse/test_array.py +++ b/pandas/tests/arrays/sparse/test_array.py @@ -1313,6 +1313,14 @@ def test_dropna(fill_value): tm.assert_equal(df.dropna(), expected_df) +def test_drop_duplicates_fill_value(): + # GH 11726 + df = pd.DataFrame(np.zeros((5, 5))).apply(lambda x: SparseArray(x, fill_value=0)) + result = df.drop_duplicates() + expected = pd.DataFrame({i: SparseArray([0.0], fill_value=0) for i in range(5)}) + tm.assert_frame_equal(result, expected) + + class TestMinMax: plain_data = np.arange(5).astype(float) data_neg = plain_data * (-1) diff --git a/pandas/tests/frame/methods/test_to_dict.py b/pandas/tests/frame/methods/test_to_dict.py index 6d0d4e045e491..022b0f273493b 100644 --- a/pandas/tests/frame/methods/test_to_dict.py +++ b/pandas/tests/frame/methods/test_to_dict.py @@ -304,3 +304,10 @@ def test_to_dict_scalar_constructor_orient_dtype(self, data, expected_dtype): d = df.to_dict(orient="records") result = type(d[0]["a"]) assert result is expected_dtype + + def test_to_dict_mixed_numeric_frame(self): + # GH 12859 + df = DataFrame({"a": [1.0], "b": [9.0]}) + result = df.reset_index().to_dict("records") + expected = [{"index": 0, "a": 1.0, "b": 9.0}] + assert result == expected diff --git a/pandas/tests/groupby/test_apply.py b/pandas/tests/groupby/test_apply.py index 117612696df11..2f87f4a19b93f 100644 --- a/pandas/tests/groupby/test_apply.py +++ b/pandas/tests/groupby/test_apply.py @@ -1121,3 +1121,27 @@ def test_apply_dropna_with_indexed_same(): ) tm.assert_frame_equal(result, expected) + + +@pytest.mark.parametrize( + "as_index, expected", + [ + [ + False, + DataFrame( + [[1, 1, 1], [2, 2, 1]], columns=Index(["a", "b", None], dtype=object) + ), + ], + [ + True, + Series( + [1, 1], index=MultiIndex.from_tuples([(1, 1), (2, 2)], names=["a", "b"]) + ), + ], + ], +) +def 
test_apply_as_index_constant_lambda(as_index, expected): + # GH 13217 + df = DataFrame({"a": [1, 1, 2, 2], "b": [1, 1, 2, 2], "c": [1, 1, 1, 1]}) + result = df.groupby(["a", "b"], as_index=as_index).apply(lambda x: 1) + tm.assert_equal(result, expected) diff --git a/pandas/tests/groupby/test_apply_mutate.py b/pandas/tests/groupby/test_apply_mutate.py index 529f76bf692ce..05c1f5b716f40 100644 --- a/pandas/tests/groupby/test_apply_mutate.py +++ b/pandas/tests/groupby/test_apply_mutate.py @@ -68,3 +68,63 @@ def fn(x): name="col2", ) tm.assert_series_equal(result, expected) + + +def test_apply_mutate_columns_multiindex(): + # GH 12652 + df = pd.DataFrame( + { + ("C", "julian"): [1, 2, 3], + ("B", "geoffrey"): [1, 2, 3], + ("A", "julian"): [1, 2, 3], + ("B", "julian"): [1, 2, 3], + ("A", "geoffrey"): [1, 2, 3], + ("C", "geoffrey"): [1, 2, 3], + }, + columns=pd.MultiIndex.from_tuples( + [ + ("A", "julian"), + ("A", "geoffrey"), + ("B", "julian"), + ("B", "geoffrey"), + ("C", "julian"), + ("C", "geoffrey"), + ] + ), + ) + + def add_column(grouped): + name = grouped.columns[0][1] + grouped["sum", name] = grouped.sum(axis=1) + return grouped + + result = df.groupby(level=1, axis=1).apply(add_column) + expected = pd.DataFrame( + [ + [1, 1, 1, 3, 1, 1, 1, 3], + [2, 2, 2, 6, 2, 2, 2, 6], + [ + 3, + 3, + 3, + 9, + 3, + 3, + 3, + 9, + ], + ], + columns=pd.MultiIndex.from_tuples( + [ + ("geoffrey", "A", "geoffrey"), + ("geoffrey", "B", "geoffrey"), + ("geoffrey", "C", "geoffrey"), + ("geoffrey", "sum", "geoffrey"), + ("julian", "A", "julian"), + ("julian", "B", "julian"), + ("julian", "C", "julian"), + ("julian", "sum", "julian"), + ] + ), + ) + tm.assert_frame_equal(result, expected) diff --git a/pandas/tests/indexes/object/__init__.py b/pandas/tests/indexes/object/__init__.py new file mode 100644 index 0000000000000..e69de29bb2d1d diff --git a/pandas/tests/indexes/test_base.py b/pandas/tests/indexes/test_base.py index b5822b768fdde..47657fff56ceb 100644 --- 
a/pandas/tests/indexes/test_base.py +++ b/pandas/tests/indexes/test_base.py @@ -1366,7 +1366,7 @@ async def test_tab_complete_warning(self, ip): pytest.importorskip("IPython", minversion="6.0.0") from IPython.core.completer import provisionalcompleter - code = "import pandas as pd; idx = Index([1, 2])" + code = "import pandas as pd; idx = pd.Index([1, 2])" await ip.run_code(code) # GH 31324 newer jedi version raises Deprecation warning; @@ -1720,3 +1720,21 @@ def test_validate_1d_input(): ser = Series(0, range(4)) with pytest.raises(ValueError, match=msg): ser.index = np.array([[2, 3]] * 4) + + +@pytest.mark.parametrize( + "klass, extra_kwargs", + [ + [Index, {}], + [Int64Index, {}], + [Float64Index, {}], + [DatetimeIndex, {}], + [TimedeltaIndex, {}], + [PeriodIndex, {"freq": "Y"}], + ], +) +def test_construct_from_memoryview(klass, extra_kwargs): + # GH 13120 + result = klass(memoryview(np.arange(2000, 2005)), **extra_kwargs) + expected = klass(range(2000, 2005), **extra_kwargs) + tm.assert_index_equal(result, expected) diff --git a/pandas/tests/reshape/merge/test_merge.py b/pandas/tests/reshape/merge/test_merge.py index 1495a34274a94..edd100219143c 100644 --- a/pandas/tests/reshape/merge/test_merge.py +++ b/pandas/tests/reshape/merge/test_merge.py @@ -2446,3 +2446,14 @@ def test_merge_duplicate_columns_with_suffix_causing_another_duplicate(): result = merge(left, right, on="a") expected = DataFrame([[1, 1, 1, 1, 2]], columns=["a", "b_x", "b_x", "b_x", "b_y"]) tm.assert_frame_equal(result, expected) + + +def test_merge_string_float_column_result(): + # GH 13353 + df1 = DataFrame([[1, 2], [3, 4]], columns=pd.Index(["a", 114.0])) + df2 = DataFrame([[9, 10], [11, 12]], columns=["x", "y"]) + result = merge(df2, df1, how="inner", left_index=True, right_index=True) + expected = DataFrame( + [[9, 10, 1, 2], [11, 12, 3, 4]], columns=pd.Index(["x", "y", "a", 114.0]) + ) + tm.assert_frame_equal(result, expected) diff --git a/pandas/tests/series/indexing/test_getitem.py 
b/pandas/tests/series/indexing/test_getitem.py index 0e43e351bc082..8793026ee74ab 100644 --- a/pandas/tests/series/indexing/test_getitem.py +++ b/pandas/tests/series/indexing/test_getitem.py @@ -662,3 +662,11 @@ def test_getitem_categorical_str(): def test_slice_can_reorder_not_uniquely_indexed(): ser = Series(1, index=["a", "a", "b", "b", "c"]) ser[::-1] # it works! + + +@pytest.mark.parametrize("index_vals", ["aabcd", "aadcb"]) +def test_duplicated_index_getitem_positional_indexer(index_vals): + # GH 11747 + s = Series(range(5), index=list(index_vals)) + result = s[3] + assert result == 3 diff --git a/pandas/tests/series/indexing/test_setitem.py b/pandas/tests/series/indexing/test_setitem.py index 675120e03d821..3f850dfbc6a39 100644 --- a/pandas/tests/series/indexing/test_setitem.py +++ b/pandas/tests/series/indexing/test_setitem.py @@ -286,6 +286,13 @@ def test_setitem_with_bool_mask_and_values_matching_n_trues_in_length(self): expected = Series([None] * 3 + list(range(5)) + [None] * 2).astype("object") tm.assert_series_equal(result, expected) + def test_setitem_nan_with_bool(self): + # GH 13034 + result = Series([True, False, True]) + result[0] = np.nan + expected = Series([np.nan, False, True], dtype=object) + tm.assert_series_equal(result, expected) + class TestSetitemViewCopySemantics: def test_setitem_invalidates_datetime_index_freq(self):
- [x] closes #11747 - [x] closes #11726 - [x] closes #12652 - [x] closes #12859 - [x] closes #13034 - [x] closes #13120 - [x] closes #13217 - [x] closes #13353 - [x] tests added / passed - [x] Ensure all linting tests pass, see [here](https://pandas.pydata.org/pandas-docs/dev/development/contributing.html#code-standards) for how to run them
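One of the closed issues (GH 13120) concerns constructing an index from a `memoryview`; a minimal reproduction of the now-passing behavior might look like this (assuming a pandas version that includes this fix, i.e. 1.3 or later):

```python
import numpy as np
import pandas as pd

# GH 13120: Index construction from a memoryview should behave the same
# as construction from the equivalent range of integers.
result = pd.Index(memoryview(np.arange(2000, 2005)))
expected = pd.Index(range(2000, 2005))
assert result.equals(expected)
```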
https://api.github.com/repos/pandas-dev/pandas/pulls/41431
2021-05-12T04:43:22Z
2021-05-12T13:27:56Z
2021-05-12T13:27:55Z
2021-05-12T16:32:33Z
CLN: Create tests/window/moments/conftest for specific fixtures
diff --git a/pandas/tests/window/conftest.py b/pandas/tests/window/conftest.py index b1f1bb7086149..24b28356a3099 100644 --- a/pandas/tests/window/conftest.py +++ b/pandas/tests/window/conftest.py @@ -1,18 +1,11 @@ -from datetime import ( - datetime, - timedelta, -) +from datetime import timedelta -import numpy as np import pytest import pandas.util._test_decorators as td from pandas import ( DataFrame, - Series, - bdate_range, - notna, to_datetime, ) @@ -141,168 +134,6 @@ def engine_and_raw(request): return request.param -# create the data only once as we are not setting it -def _create_consistency_data(): - def create_series(): - return [ - Series(dtype=object), - Series([np.nan]), - Series([np.nan, np.nan]), - Series([3.0]), - Series([np.nan, 3.0]), - Series([3.0, np.nan]), - Series([1.0, 3.0]), - Series([2.0, 2.0]), - Series([3.0, 1.0]), - Series( - [5.0, 5.0, 5.0, 5.0, np.nan, np.nan, np.nan, 5.0, 5.0, np.nan, np.nan] - ), - Series( - [ - np.nan, - 5.0, - 5.0, - 5.0, - np.nan, - np.nan, - np.nan, - 5.0, - 5.0, - np.nan, - np.nan, - ] - ), - Series( - [ - np.nan, - np.nan, - 5.0, - 5.0, - np.nan, - np.nan, - np.nan, - 5.0, - 5.0, - np.nan, - np.nan, - ] - ), - Series( - [ - np.nan, - 3.0, - np.nan, - 3.0, - 4.0, - 5.0, - 6.0, - np.nan, - np.nan, - 7.0, - 12.0, - 13.0, - 14.0, - 15.0, - ] - ), - Series( - [ - np.nan, - 5.0, - np.nan, - 2.0, - 4.0, - 0.0, - 9.0, - np.nan, - np.nan, - 3.0, - 12.0, - 13.0, - 14.0, - 15.0, - ] - ), - Series( - [ - 2.0, - 3.0, - np.nan, - 3.0, - 4.0, - 5.0, - 6.0, - np.nan, - np.nan, - 7.0, - 12.0, - 13.0, - 14.0, - 15.0, - ] - ), - Series( - [ - 2.0, - 5.0, - np.nan, - 2.0, - 4.0, - 0.0, - 9.0, - np.nan, - np.nan, - 3.0, - 12.0, - 13.0, - 14.0, - 15.0, - ] - ), - Series(range(10)), - Series(range(20, 0, -2)), - ] - - def create_dataframes(): - return [ - DataFrame(), - DataFrame(columns=["a"]), - DataFrame(columns=["a", "a"]), - DataFrame(columns=["a", "b"]), - DataFrame(np.arange(10).reshape((5, 2))), - 
DataFrame(np.arange(25).reshape((5, 5))), - DataFrame(np.arange(25).reshape((5, 5)), columns=["a", "b", 99, "d", "d"]), - ] + [DataFrame(s) for s in create_series()] - - def is_constant(x): - values = x.values.ravel("K") - return len(set(values[notna(values)])) == 1 - - def no_nans(x): - return x.notna().all().all() - - # data is a tuple(object, is_constant, no_nans) - data = create_series() + create_dataframes() - - return [(x, is_constant(x), no_nans(x)) for x in data] - - -@pytest.fixture(params=_create_consistency_data()) -def consistency_data(request): - """Create consistency data""" - return request.param - - -@pytest.fixture -def frame(): - """Make mocked frame as fixture.""" - return DataFrame( - np.random.randn(100, 10), - index=bdate_range(datetime(2009, 1, 1), periods=100), - columns=np.arange(10), - ) - - @pytest.fixture def times_frame(): """Frame for testing times argument in EWM groupby.""" @@ -328,16 +159,6 @@ def times_frame(): ) -@pytest.fixture -def series(): - """Make mocked series as fixture.""" - arr = np.random.randn(100) - locs = np.arange(20, 40) - arr[locs] = np.NaN - series = Series(arr, index=bdate_range(datetime(2009, 1, 1), periods=100)) - return series - - @pytest.fixture(params=["1 day", timedelta(days=1)]) def halflife_with_times(request): """Halflife argument for EWM when times is specified.""" diff --git a/pandas/tests/window/moments/conftest.py b/pandas/tests/window/moments/conftest.py new file mode 100644 index 0000000000000..829df1f3bfe2f --- /dev/null +++ b/pandas/tests/window/moments/conftest.py @@ -0,0 +1,183 @@ +from datetime import datetime + +import numpy as np +import pytest + +from pandas import ( + DataFrame, + Series, + bdate_range, + notna, +) + + +@pytest.fixture +def series(): + """Make mocked series as fixture.""" + arr = np.random.randn(100) + locs = np.arange(20, 40) + arr[locs] = np.NaN + series = Series(arr, index=bdate_range(datetime(2009, 1, 1), periods=100)) + return series + + +@pytest.fixture +def 
frame(): + """Make mocked frame as fixture.""" + return DataFrame( + np.random.randn(100, 10), + index=bdate_range(datetime(2009, 1, 1), periods=100), + columns=np.arange(10), + ) + + +# create the data only once as we are not setting it +def _create_consistency_data(): + def create_series(): + return [ + Series(dtype=object), + Series([np.nan]), + Series([np.nan, np.nan]), + Series([3.0]), + Series([np.nan, 3.0]), + Series([3.0, np.nan]), + Series([1.0, 3.0]), + Series([2.0, 2.0]), + Series([3.0, 1.0]), + Series( + [5.0, 5.0, 5.0, 5.0, np.nan, np.nan, np.nan, 5.0, 5.0, np.nan, np.nan] + ), + Series( + [ + np.nan, + 5.0, + 5.0, + 5.0, + np.nan, + np.nan, + np.nan, + 5.0, + 5.0, + np.nan, + np.nan, + ] + ), + Series( + [ + np.nan, + np.nan, + 5.0, + 5.0, + np.nan, + np.nan, + np.nan, + 5.0, + 5.0, + np.nan, + np.nan, + ] + ), + Series( + [ + np.nan, + 3.0, + np.nan, + 3.0, + 4.0, + 5.0, + 6.0, + np.nan, + np.nan, + 7.0, + 12.0, + 13.0, + 14.0, + 15.0, + ] + ), + Series( + [ + np.nan, + 5.0, + np.nan, + 2.0, + 4.0, + 0.0, + 9.0, + np.nan, + np.nan, + 3.0, + 12.0, + 13.0, + 14.0, + 15.0, + ] + ), + Series( + [ + 2.0, + 3.0, + np.nan, + 3.0, + 4.0, + 5.0, + 6.0, + np.nan, + np.nan, + 7.0, + 12.0, + 13.0, + 14.0, + 15.0, + ] + ), + Series( + [ + 2.0, + 5.0, + np.nan, + 2.0, + 4.0, + 0.0, + 9.0, + np.nan, + np.nan, + 3.0, + 12.0, + 13.0, + 14.0, + 15.0, + ] + ), + Series(range(10)), + Series(range(20, 0, -2)), + ] + + def create_dataframes(): + return [ + DataFrame(), + DataFrame(columns=["a"]), + DataFrame(columns=["a", "a"]), + DataFrame(columns=["a", "b"]), + DataFrame(np.arange(10).reshape((5, 2))), + DataFrame(np.arange(25).reshape((5, 5))), + DataFrame(np.arange(25).reshape((5, 5)), columns=["a", "b", 99, "d", "d"]), + ] + [DataFrame(s) for s in create_series()] + + def is_constant(x): + values = x.values.ravel("K") + return len(set(values[notna(values)])) == 1 + + def no_nans(x): + return x.notna().all().all() + + # data is a tuple(object, is_constant, no_nans) + 
data = create_series() + create_dataframes() + + return [(x, is_constant(x), no_nans(x)) for x in data] + + +@pytest.fixture(params=_create_consistency_data()) +def consistency_data(request): + """Create consistency data""" + return request.param diff --git a/pandas/tests/window/test_api.py b/pandas/tests/window/test_api.py index 300f3f5729614..e70d079739003 100644 --- a/pandas/tests/window/test_api.py +++ b/pandas/tests/window/test_api.py @@ -16,7 +16,8 @@ from pandas.core.base import SpecificationError -def test_getitem(frame): +def test_getitem(): + frame = DataFrame(np.random.randn(5, 5)) r = frame.rolling(window=5) tm.assert_index_equal(r._selected_obj.columns, frame.columns)
- [x] tests added / passed - [x] Ensure all linting tests pass, see [here](https://pandas.pydata.org/pandas-docs/dev/development/contributing.html#code-standards) for how to run them These fixtures were only specific to the tests in this directory.
https://api.github.com/repos/pandas-dev/pandas/pulls/41430
2021-05-12T03:17:28Z
2021-05-12T13:29:36Z
2021-05-12T13:29:36Z
2021-05-12T16:32:21Z
DEPR: setting Categorical._codes
diff --git a/doc/source/whatsnew/v1.3.0.rst b/doc/source/whatsnew/v1.3.0.rst index 7ec74b7045437..e5eb8ccf7cf65 100644 --- a/doc/source/whatsnew/v1.3.0.rst +++ b/doc/source/whatsnew/v1.3.0.rst @@ -644,6 +644,7 @@ Deprecations - Deprecated the ``level`` keyword for :class:`DataFrame` and :class:`Series` aggregations; use groupby instead (:issue:`39983`) - The ``inplace`` parameter of :meth:`Categorical.remove_categories`, :meth:`Categorical.add_categories`, :meth:`Categorical.reorder_categories`, :meth:`Categorical.rename_categories`, :meth:`Categorical.set_categories` is deprecated and will be removed in a future version (:issue:`37643`) - Deprecated :func:`merge` producing duplicated columns through the ``suffixes`` keyword and already existing columns (:issue:`22818`) +- Deprecated setting :attr:`Categorical._codes`, create a new :class:`Categorical` with the desired codes instead (:issue:`40606`) .. --------------------------------------------------------------------------- diff --git a/pandas/core/arrays/categorical.py b/pandas/core/arrays/categorical.py index 26c582561cd3d..cb8a08f5668ac 100644 --- a/pandas/core/arrays/categorical.py +++ b/pandas/core/arrays/categorical.py @@ -1861,6 +1861,12 @@ def _codes(self) -> np.ndarray: @_codes.setter def _codes(self, value: np.ndarray): + warn( + "Setting the codes on a Categorical is deprecated and will raise in " + "a future version. 
Create a new Categorical object instead", + FutureWarning, + stacklevel=2, + ) # GH#40606 NDArrayBacked.__init__(self, value, self.dtype) def _box_func(self, i: int): diff --git a/pandas/tests/arrays/categorical/test_api.py b/pandas/tests/arrays/categorical/test_api.py index a063491cd08fa..bde75051389ca 100644 --- a/pandas/tests/arrays/categorical/test_api.py +++ b/pandas/tests/arrays/categorical/test_api.py @@ -489,6 +489,15 @@ def test_set_categories_inplace(self): tm.assert_index_equal(cat.categories, Index(["a", "b", "c", "d"])) + def test_codes_setter_deprecated(self): + cat = Categorical([1, 2, 3, 1, 2, 3, 3, 2, 1, 1, 1]) + new_codes = cat._codes + 1 + with tm.assert_produces_warning(FutureWarning): + # GH#40606 + cat._codes = new_codes + + assert cat._codes is new_codes + class TestPrivateCategoricalAPI: def test_codes_immutable(self):
- [x] closes #40606 - [x] tests added / passed - [x] Ensure all linting tests pass, see [here](https://pandas.pydata.org/pandas-docs/dev/development/contributing.html#code-standards) for how to run them - [x] whatsnew entry
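The deprecation message points users at constructing a new `Categorical` rather than mutating `_codes` in place; with the public API, that recommendation looks roughly like this:

```python
import pandas as pd

cat = pd.Categorical(["a", "b", "a", "c"], categories=["a", "b", "c"])

# Instead of assigning to the private `cat._codes` attribute (deprecated
# by this PR), build a fresh Categorical from the desired codes:
new_codes = [2, 1, 0, 0]
new_cat = pd.Categorical.from_codes(new_codes, categories=cat.categories)
```

`Categorical.from_codes` is the sanctioned constructor for a codes/categories pair, so downstream code never needs to touch `_codes` directly.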
https://api.github.com/repos/pandas-dev/pandas/pulls/41429
2021-05-12T01:49:28Z
2021-05-12T13:52:59Z
2021-05-12T13:52:59Z
2021-05-12T13:58:28Z
REG: quantile with IntegerArray/FloatingArray
diff --git a/pandas/core/array_algos/quantile.py b/pandas/core/array_algos/quantile.py index efa36a5bd3ae9..32c50ed38eba0 100644 --- a/pandas/core/array_algos/quantile.py +++ b/pandas/core/array_algos/quantile.py @@ -37,7 +37,18 @@ def quantile_compat(values: ArrayLike, qs: np.ndarray, interpolation: str) -> Ar mask = isna(values) return _quantile_with_mask(values, mask, fill_value, qs, interpolation) else: - return _quantile_ea_compat(values, qs, interpolation) + # In general we don't want to import from arrays here; + # this is temporary pending discussion in GH#41428 + from pandas.core.arrays import BaseMaskedArray + + if isinstance(values, BaseMaskedArray): + # e.g. IntegerArray, does not implement _from_factorized + out = _quantile_ea_fallback(values, qs, interpolation) + + else: + out = _quantile_ea_compat(values, qs, interpolation) + + return out def _quantile_with_mask( @@ -144,3 +155,31 @@ def _quantile_ea_compat( # error: Incompatible return value type (got "ndarray", expected "ExtensionArray") return result # type: ignore[return-value] + + +def _quantile_ea_fallback( + values: ExtensionArray, qs: np.ndarray, interpolation: str +) -> ExtensionArray: + """ + quantile compatibility for ExtensionArray subclasses that do not + implement `_from_factorized`, e.g. IntegerArray. + + Notes + ----- + We assume that all impacted cases are 1D-only. 
+ """ + mask = np.atleast_2d(np.asarray(values.isna())) + npvalues = np.atleast_2d(np.asarray(values)) + + res = _quantile_with_mask( + npvalues, + mask=mask, + fill_value=values.dtype.na_value, + qs=qs, + interpolation=interpolation, + ) + assert res.ndim == 2 + assert res.shape[0] == 1 + res = res[0] + out = type(values)._from_sequence(res, dtype=values.dtype) + return out diff --git a/pandas/core/internals/blocks.py b/pandas/core/internals/blocks.py index bd4dfdb4ebad0..e051e765b2ba3 100644 --- a/pandas/core/internals/blocks.py +++ b/pandas/core/internals/blocks.py @@ -1316,7 +1316,6 @@ def quantile( assert is_list_like(qs) # caller is responsible for this result = quantile_compat(self.values, np.asarray(qs._values), interpolation) - return new_block(result, placement=self._mgr_locs, ndim=2) diff --git a/pandas/tests/frame/methods/test_quantile.py b/pandas/tests/frame/methods/test_quantile.py index dbb5cb357de47..7926ec52b1f28 100644 --- a/pandas/tests/frame/methods/test_quantile.py +++ b/pandas/tests/frame/methods/test_quantile.py @@ -548,22 +548,28 @@ class TestQuantileExtensionDtype: ), pd.period_range("2016-01-01", periods=9, freq="D"), pd.date_range("2016-01-01", periods=9, tz="US/Pacific"), - pytest.param( - pd.array(np.arange(9), dtype="Int64"), - marks=pytest.mark.xfail(reason="doesn't implement from_factorized"), - ), - pytest.param( - pd.array(np.arange(9), dtype="Float64"), - marks=pytest.mark.xfail(reason="doesn't implement from_factorized"), - ), + pd.array(np.arange(9), dtype="Int64"), + pd.array(np.arange(9), dtype="Float64"), ], ids=lambda x: str(x.dtype), ) def index(self, request): + # NB: not actually an Index object idx = request.param idx.name = "A" return idx + @pytest.fixture + def obj(self, index, frame_or_series): + # bc index is not always an Index (yet), we need to re-patch .name + obj = frame_or_series(index).copy() + + if frame_or_series is Series: + obj.name = "A" + else: + obj.columns = ["A"] + return obj + def 
compute_quantile(self, obj, qs): if isinstance(obj, Series): result = obj.quantile(qs) @@ -571,8 +577,7 @@ def compute_quantile(self, obj, qs): result = obj.quantile(qs, numeric_only=False) return result - def test_quantile_ea(self, index, frame_or_series): - obj = frame_or_series(index).copy() + def test_quantile_ea(self, obj, index): # result should be invariant to shuffling indexer = np.arange(len(index), dtype=np.intp) @@ -583,13 +588,14 @@ def test_quantile_ea(self, index, frame_or_series): result = self.compute_quantile(obj, qs) # expected here assumes len(index) == 9 - expected = Series([index[4], index[0], index[-1]], index=qs, name="A") - expected = frame_or_series(expected) + expected = Series( + [index[4], index[0], index[-1]], dtype=index.dtype, index=qs, name="A" + ) + expected = type(obj)(expected) tm.assert_equal(result, expected) - def test_quantile_ea_with_na(self, index, frame_or_series): - obj = frame_or_series(index).copy() + def test_quantile_ea_with_na(self, obj, index): obj.iloc[0] = index._na_value obj.iloc[-1] = index._na_value @@ -603,15 +609,15 @@ def test_quantile_ea_with_na(self, index, frame_or_series): result = self.compute_quantile(obj, qs) # expected here assumes len(index) == 9 - expected = Series([index[4], index[1], index[-2]], index=qs, name="A") - expected = frame_or_series(expected) + expected = Series( + [index[4], index[1], index[-2]], dtype=index.dtype, index=qs, name="A" + ) + expected = type(obj)(expected) tm.assert_equal(result, expected) # TODO: filtering can be removed after GH#39763 is fixed @pytest.mark.filterwarnings("ignore:Using .astype to convert:FutureWarning") - def test_quantile_ea_all_na(self, index, frame_or_series): - - obj = frame_or_series(index).copy() + def test_quantile_ea_all_na(self, obj, index, frame_or_series): obj.iloc[:] = index._na_value @@ -628,13 +634,12 @@ def test_quantile_ea_all_na(self, index, frame_or_series): result = self.compute_quantile(obj, qs) expected = index.take([-1, -1, -1], 
allow_fill=True, fill_value=index._na_value) - expected = Series(expected, index=qs) - expected = frame_or_series(expected) + expected = Series(expected, index=qs, name="A") + expected = type(obj)(expected) tm.assert_equal(result, expected) - def test_quantile_ea_scalar(self, index, frame_or_series): + def test_quantile_ea_scalar(self, obj, index): # scalar qs - obj = frame_or_series(index).copy() # result should be invariant to shuffling indexer = np.arange(len(index), dtype=np.intp) @@ -644,8 +649,8 @@ def test_quantile_ea_scalar(self, index, frame_or_series): qs = 0.5 result = self.compute_quantile(obj, qs) - expected = Series({"A": index[4]}, name=0.5) - if frame_or_series is Series: + expected = Series({"A": index[4]}, dtype=index.dtype, name=0.5) + if isinstance(obj, Series): expected = expected["A"] assert result == expected else:
- [x] closes #39771 - [x] tests added / passed - [ ] Ensure all linting tests pass, see [here](https://pandas.pydata.org/pandas-docs/dev/development/contributing.html#code-standards) for how to run them - [ ] whatsnew entry Not present in any released version, so no release note is needed as long as this is merged before 1.3.
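The `_quantile_ea_fallback` path essentially converts the masked array to a plain ndarray, drops the masked (NA) entries, and interpolates. Stripped of pandas internals, the core computation could be sketched as follows; this is a simplified stand-in for `_quantile_with_mask`, which additionally handles 2D blocks and fill values:

```python
import numpy as np


def masked_quantile(values: np.ndarray, mask: np.ndarray, qs: np.ndarray) -> np.ndarray:
    """Quantiles over the unmasked entries, with linear interpolation.

    `mask` marks positions that should be treated as NA and excluded.
    """
    valid = values[~mask]
    # np.percentile expects quantiles on a 0-100 scale
    return np.percentile(valid, np.asarray(qs) * 100)


vals = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
mask = np.array([False, False, False, False, True])  # last value is "NA"
res = masked_quantile(vals, mask, np.array([0.0, 0.5, 1.0]))
```

Here the masked fifth value is ignored, so the quantiles are computed over `[1, 2, 3, 4]`, giving `[1.0, 2.5, 4.0]`.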
https://api.github.com/repos/pandas-dev/pandas/pulls/41428
2021-05-11T22:46:46Z
2021-05-31T17:41:02Z
2021-05-31T17:41:02Z
2021-05-31T18:57:57Z
BUG: DataFrameGroupBy.__getitem__ with non-unique columns
diff --git a/doc/source/whatsnew/v1.3.0.rst b/doc/source/whatsnew/v1.3.0.rst index 5adc8540e6864..14f33cc2c8535 100644 --- a/doc/source/whatsnew/v1.3.0.rst +++ b/doc/source/whatsnew/v1.3.0.rst @@ -895,6 +895,8 @@ Groupby/resample/rolling - Bug in :meth:`SeriesGroupBy.agg` failing to retain ordered :class:`CategoricalDtype` on order-preserving aggregations (:issue:`41147`) - Bug in :meth:`DataFrameGroupBy.min` and :meth:`DataFrameGroupBy.max` with multiple object-dtype columns and ``numeric_only=False`` incorrectly raising ``ValueError`` (:issue:41111`) - Bug in :meth:`DataFrameGroupBy.rank` with the GroupBy object's ``axis=0`` and the ``rank`` method's keyword ``axis=1`` (:issue:`41320`) +- Bug in :meth:`DataFrameGroupBy.__getitem__` with non-unique columns incorrectly returning a malformed :class:`SeriesGroupBy` instead of :class:`DataFrameGroupBy` (:issue:`41427`) +- Bug in :meth:`DataFrameGroupBy.transform` with non-unique columns incorrectly raising ``AttributeError`` (:issue:`41427`) Reshaping ^^^^^^^^^ diff --git a/pandas/core/base.py b/pandas/core/base.py index e45c4bf514973..28f19a6beeec0 100644 --- a/pandas/core/base.py +++ b/pandas/core/base.py @@ -214,7 +214,7 @@ def ndim(self) -> int: @cache_readonly def _obj_with_exclusions(self): if self._selection is not None and isinstance(self.obj, ABCDataFrame): - return self.obj.reindex(columns=self._selection_list) + return self.obj[self._selection_list] if len(self.exclusions) > 0: return self.obj.drop(self.exclusions, axis=1) @@ -239,7 +239,9 @@ def __getitem__(self, key): else: if key not in self.obj: raise KeyError(f"Column not found: {key}") - return self._gotitem(key, ndim=1) + subset = self.obj[key] + ndim = subset.ndim + return self._gotitem(key, ndim=ndim, subset=subset) def _gotitem(self, key, ndim: int, subset=None): """ diff --git a/pandas/core/groupby/generic.py b/pandas/core/groupby/generic.py index c5d9144893f48..2997deb41c78b 100644 --- a/pandas/core/groupby/generic.py +++ 
b/pandas/core/groupby/generic.py @@ -1417,12 +1417,19 @@ def _choose_path(self, fast_path: Callable, slow_path: Callable, group: DataFram return path, res def _transform_item_by_item(self, obj: DataFrame, wrapper) -> DataFrame: - # iterate through columns + # iterate through columns, see test_transform_exclude_nuisance output = {} inds = [] for i, col in enumerate(obj): + subset = obj.iloc[:, i] + sgb = SeriesGroupBy( + subset, + selection=col, + grouper=self.grouper, + exclusions=self.exclusions, + ) try: - output[col] = self[col].transform(wrapper) + output[i] = sgb.transform(wrapper) except TypeError: # e.g. trying to call nanmean with string values pass @@ -1434,7 +1441,9 @@ def _transform_item_by_item(self, obj: DataFrame, wrapper) -> DataFrame: columns = obj.columns.take(inds) - return self.obj._constructor(output, index=obj.index, columns=columns) + result = self.obj._constructor(output, index=obj.index) + result.columns = columns + return result def filter(self, func, dropna=True, *args, **kwargs): """ @@ -1504,7 +1513,7 @@ def filter(self, func, dropna=True, *args, **kwargs): return self._apply_filter(indices, dropna) - def __getitem__(self, key): + def __getitem__(self, key) -> DataFrameGroupBy | SeriesGroupBy: if self.axis == 1: # GH 37725 raise ValueError("Cannot subset columns when using axis=1") diff --git a/pandas/tests/groupby/transform/test_transform.py b/pandas/tests/groupby/transform/test_transform.py index b22e4749bfdfc..09317cbeec658 100644 --- a/pandas/tests/groupby/transform/test_transform.py +++ b/pandas/tests/groupby/transform/test_transform.py @@ -20,6 +20,10 @@ date_range, ) import pandas._testing as tm +from pandas.core.groupby.generic import ( + DataFrameGroupBy, + SeriesGroupBy, +) from pandas.core.groupby.groupby import DataError @@ -391,13 +395,31 @@ def test_transform_select_columns(df): tm.assert_frame_equal(result, expected) -def test_transform_exclude_nuisance(df): +@pytest.mark.parametrize("duplicates", [True, False]) +def 
test_transform_exclude_nuisance(df, duplicates): + # case that goes through _transform_item_by_item + + if duplicates: + # make sure we work with duplicate columns GH#41427 + df.columns = ["A", "C", "C", "D"] # this also tests orderings in transform between # series/frame to make sure it's consistent expected = {} grouped = df.groupby("A") - expected["C"] = grouped["C"].transform(np.mean) + + gbc = grouped["C"] + expected["C"] = gbc.transform(np.mean) + if duplicates: + # squeeze 1-column DataFrame down to Series + expected["C"] = expected["C"]["C"] + + assert isinstance(gbc.obj, DataFrame) + assert isinstance(gbc, DataFrameGroupBy) + else: + assert isinstance(gbc, SeriesGroupBy) + assert isinstance(gbc.obj, Series) + expected["D"] = grouped["D"].transform(np.mean) expected = DataFrame(expected) result = df.groupby("A").transform(np.mean)
- [ ] closes #xxxx - [x] tests added / passed - [x] Ensure all linting tests pass, see [here](https://pandas.pydata.org/pandas-docs/dev/development/contributing.html#code-standards) for how to run them - [x] whatsnew entry
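The core of the bug is that with duplicated column labels, `df[key]` returns a DataFrame rather than a Series, so the groupby selection must produce a `DataFrameGroupBy`. The label-selection behavior the fix dispatches on can be seen directly (assuming any reasonably recent pandas):

```python
import pandas as pd

df = pd.DataFrame([[1, 2, 3, 4]], columns=["A", "C", "C", "D"])

# With a duplicated label, plain indexing returns a 2D object ...
subset = df["C"]
# ... which is why the fixed `__getitem__` inspects `subset.ndim` and
# passes it to `_gotitem(key, ndim=ndim, subset=subset)` instead of
# hard-coding ndim=1.
```

A unique label like `"D"` still yields a 1D Series, so the single-column path is unchanged.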
https://api.github.com/repos/pandas-dev/pandas/pulls/41427
2021-05-11T20:27:50Z
2021-05-12T01:10:28Z
2021-05-12T01:10:28Z
2021-05-12T01:36:48Z
REF: remove name arg from Grouping
diff --git a/pandas/core/groupby/grouper.py b/pandas/core/groupby/grouper.py index 598750475f3e8..4aac2630feb2c 100644 --- a/pandas/core/groupby/grouper.py +++ b/pandas/core/groupby/grouper.py @@ -446,15 +446,14 @@ def __init__( index: Index, grouper=None, obj: FrameOrSeries | None = None, - name: Hashable = None, level=None, sort: bool = True, observed: bool = False, in_axis: bool = False, dropna: bool = True, ): - self.name = name self.level = level + self._orig_grouper = grouper self.grouper = _convert_grouper(index, grouper) self.all_grouper = None self.index = index @@ -466,18 +465,11 @@ def __init__( self._passed_categorical = False - # right place for this? - if isinstance(grouper, (Series, Index)) and name is None: - self.name = grouper.name - # we have a single grouper which may be a myriad of things, # some of which are dependent on the passing in level ilevel = self._ilevel if ilevel is not None: - if self.name is None: - self.name = index.names[ilevel] - ( self.grouper, # Index self._codes, @@ -491,16 +483,22 @@ def __init__( # what key/level refer to exactly, don't need to # check again as we have by this point converted these # to an actual value (rather than a pd.Grouper) - _, grouper, _ = self.grouper._get_grouper( + _, newgrouper, newobj = self.grouper._get_grouper( # error: Value of type variable "FrameOrSeries" of "_get_grouper" # of "Grouper" cannot be "Optional[FrameOrSeries]" self.obj, # type: ignore[type-var] validate=False, ) - if self.name is None: - self.name = grouper.result_index.name - self.obj = self.grouper.obj - self.grouper = grouper._get_grouper() + self.obj = newobj + + ng = newgrouper._get_grouper() + if isinstance(newgrouper, ops.BinGrouper): + # in this case we have `ng is newgrouper` + self.grouper = ng + else: + # ops.BaseGrouper + # use Index instead of ndarray so we can recover the name + self.grouper = Index(ng, name=newgrouper.result_index.name) else: # a passed Categorical @@ -511,10 +509,6 @@ def __init__( self.grouper, 
self.sort, observed ) - # we are done - elif isinstance(self.grouper, Grouping): - self.grouper = self.grouper.grouper - # no level passed elif not isinstance( self.grouper, (Series, Index, ExtensionArray, np.ndarray) @@ -546,6 +540,23 @@ def __repr__(self) -> str: def __iter__(self): return iter(self.indices) + @cache_readonly + def name(self) -> Hashable: + ilevel = self._ilevel + if ilevel is not None: + return self.index.names[ilevel] + + if isinstance(self._orig_grouper, (Index, Series)): + return self._orig_grouper.name + + elif isinstance(self.grouper, ops.BaseGrouper): + return self.grouper.result_index.name + + elif isinstance(self.grouper, Index): + return self.grouper.name + + return None + @cache_readonly def _ilevel(self) -> int | None: """ @@ -814,25 +825,29 @@ def is_in_obj(gpr) -> bool: for gpr, level in zip(keys, levels): if is_in_obj(gpr): # df.groupby(df['name']) - in_axis, name = True, gpr.name - exclusions.add(name) + in_axis = True + exclusions.add(gpr.name) elif is_in_axis(gpr): # df.groupby('name') if gpr in obj: if validate: obj._check_label_or_level_ambiguity(gpr, axis=axis) in_axis, name, gpr = True, gpr, obj[gpr] + if gpr.ndim != 1: + # non-unique columns; raise here to get the name in the + # exception message + raise ValueError(f"Grouper for '{name}' not 1-dimensional") exclusions.add(name) elif obj._is_level_reference(gpr, axis=axis): - in_axis, name, level, gpr = False, None, gpr, None + in_axis, level, gpr = False, gpr, None else: raise KeyError(gpr) elif isinstance(gpr, Grouper) and gpr.key is not None: # Add key to exclusions exclusions.add(gpr.key) - in_axis, name = False, None + in_axis = False else: - in_axis, name = False, None + in_axis = False if is_categorical_dtype(gpr) and len(gpr) != obj.shape[axis]: raise ValueError( @@ -847,7 +862,6 @@ def is_in_obj(gpr) -> bool: group_axis, gpr, obj=obj, - name=name, level=level, sort=sort, observed=observed, diff --git a/pandas/core/groupby/ops.py b/pandas/core/groupby/ops.py index 
3045451974ee7..4226158f53ffa 100644 --- a/pandas/core/groupby/ops.py +++ b/pandas/core/groupby/ops.py @@ -1200,7 +1200,7 @@ def names(self) -> list[Hashable]: @property def groupings(self) -> list[grouper.Grouping]: lev = self.binlabels - ping = grouper.Grouping(lev, lev, in_axis=False, level=None, name=lev.name) + ping = grouper.Grouping(lev, lev, in_axis=False, level=None) return [ping] def _aggregate_series_fast(self, obj: Series, func: F) -> np.ndarray:
- [ ] closes #xxxx - [ ] tests added / passed - [ ] Ensure all linting tests pass, see [here](https://pandas.pydata.org/pandas-docs/dev/development/contributing.html#code-standards) for how to run them - [ ] whatsnew entry
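The refactor in this PR turns `Grouping.name` into a lazily computed `cache_readonly` property derived from the original grouper (or the index level), instead of an eagerly assigned attribute. User-visible behavior should be unchanged; a minimal sketch (assuming a pandas version where this landed) of the result-index name still being recovered from the grouper:

```python
import pandas as pd

df = pd.DataFrame({"key": ["a", "b", "a"], "x": [1, 2, 3]})

# Grouping by a Series: the name is recovered from the original grouper
out = df.groupby(df["key"])["x"].sum()
assert out.index.name == "key"
assert out["a"] == 4 and out["b"] == 2

# Grouping by a level: the name comes from the index's level name
out2 = df.set_index("key").groupby(level="key")["x"].sum()
assert out2.index.name == "key"
```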
https://api.github.com/repos/pandas-dev/pandas/pulls/41426
2021-05-11T20:22:52Z
2021-05-17T18:05:47Z
2021-05-17T18:05:47Z
2021-05-17T18:06:45Z
BUG: always strip .freq when putting DTI/TDI into Series/DataFrame
diff --git a/doc/source/whatsnew/v1.3.0.rst b/doc/source/whatsnew/v1.3.0.rst index 9968a103a13bf..c2ff7361ff48b 100644 --- a/doc/source/whatsnew/v1.3.0.rst +++ b/doc/source/whatsnew/v1.3.0.rst @@ -972,6 +972,7 @@ Other - Bug in :func:`pandas.util.show_versions` where console JSON output was not proper JSON (:issue:`39701`) - Bug in :meth:`DataFrame.convert_dtypes` incorrectly raised ValueError when called on an empty DataFrame (:issue:`40393`) - Bug in :meth:`DataFrame.clip` not interpreting missing values as no threshold (:issue:`40420`) +- Bug in :class:`Series` backed by :class:`DatetimeArray` or :class:`TimedeltaArray` sometimes failing to set the array's ``freq`` to ``None`` (:issue:`41425`) .. --------------------------------------------------------------------------- diff --git a/pandas/core/base.py b/pandas/core/base.py index 7a48b1fdfda1e..e2720fcbc7ec4 100644 --- a/pandas/core/base.py +++ b/pandas/core/base.py @@ -498,8 +498,8 @@ def to_numpy( >>> ser = pd.Series(pd.date_range('2000', periods=2, tz="CET")) >>> ser.to_numpy(dtype=object) - array([Timestamp('2000-01-01 00:00:00+0100', tz='CET', freq='D'), - Timestamp('2000-01-02 00:00:00+0100', tz='CET', freq='D')], + array([Timestamp('2000-01-01 00:00:00+0100', tz='CET'), + Timestamp('2000-01-02 00:00:00+0100', tz='CET')], dtype=object) Or ``dtype='datetime64[ns]'`` to return an ndarray of native diff --git a/pandas/core/internals/array_manager.py b/pandas/core/internals/array_manager.py index 0541b76b377f7..31e32b053367b 100644 --- a/pandas/core/internals/array_manager.py +++ b/pandas/core/internals/array_manager.py @@ -85,6 +85,7 @@ from pandas.core.internals.blocks import ( ensure_block_shape, external_values, + maybe_coerce_values, new_block, to_native_types, ) @@ -701,7 +702,7 @@ def __init__( if verify_integrity: self._axes = [ensure_index(ax) for ax in axes] - self.arrays = [ensure_wrapped_if_datetimelike(arr) for arr in arrays] + self.arrays = [maybe_coerce_values(arr) for arr in arrays] 
self._verify_integrity() def _verify_integrity(self) -> None: @@ -814,7 +815,7 @@ def iset(self, loc: int | slice | np.ndarray, value: ArrayLike): # TODO we receive a datetime/timedelta64 ndarray from DataFrame._iset_item # but we should avoid that and pass directly the proper array - value = ensure_wrapped_if_datetimelike(value) + value = maybe_coerce_values(value) assert isinstance(value, (np.ndarray, ExtensionArray)) assert value.ndim == 1 @@ -873,7 +874,7 @@ def insert(self, loc: int, item: Hashable, value: ArrayLike) -> None: raise ValueError( f"Expected a 1D array, got an array with shape {value.shape}" ) - value = ensure_wrapped_if_datetimelike(value) + value = maybe_coerce_values(value) # TODO self.arrays can be empty # assert len(value) == len(self.arrays[0]) @@ -1188,7 +1189,7 @@ def __init__( assert len(arrays) == 1 self._axes = [ensure_index(ax) for ax in self._axes] arr = arrays[0] - arr = ensure_wrapped_if_datetimelike(arr) + arr = maybe_coerce_values(arr) if isinstance(arr, ABCPandasArray): arr = arr.to_numpy() self.arrays = [arr] diff --git a/pandas/core/internals/blocks.py b/pandas/core/internals/blocks.py index bd4dfdb4ebad0..4f1b16e747394 100644 --- a/pandas/core/internals/blocks.py +++ b/pandas/core/internals/blocks.py @@ -1860,6 +1860,10 @@ def maybe_coerce_values(values) -> ArrayLike: if issubclass(values.dtype.type, str): values = np.array(values, dtype=object) + if isinstance(values, (DatetimeArray, TimedeltaArray)) and values.freq is not None: + # freq is only stored in DatetimeIndex/TimedeltaIndex, not in Series/DataFrame + values = values._with_freq(None) + return values diff --git a/pandas/core/series.py b/pandas/core/series.py index c8e9898f9462a..d0ff50cca5355 100644 --- a/pandas/core/series.py +++ b/pandas/core/series.py @@ -813,8 +813,8 @@ def __array__(self, dtype: NpDtype | None = None) -> np.ndarray: >>> tzser = pd.Series(pd.date_range('2000', periods=2, tz="CET")) >>> np.asarray(tzser, dtype="object") - 
array([Timestamp('2000-01-01 00:00:00+0100', tz='CET', freq='D'), - Timestamp('2000-01-02 00:00:00+0100', tz='CET', freq='D')], + array([Timestamp('2000-01-01 00:00:00+0100', tz='CET'), + Timestamp('2000-01-02 00:00:00+0100', tz='CET')], dtype=object) Or the values may be localized to UTC and the tzinfo discarded with diff --git a/pandas/tests/extension/test_datetime.py b/pandas/tests/extension/test_datetime.py index 33589027c0d0f..bb8347f0a0122 100644 --- a/pandas/tests/extension/test_datetime.py +++ b/pandas/tests/extension/test_datetime.py @@ -97,7 +97,10 @@ class TestDatetimeDtype(BaseDatetimeTests, base.BaseDtypeTests): class TestConstructors(BaseDatetimeTests, base.BaseConstructorsTests): - pass + def test_series_constructor(self, data): + # Series construction drops any .freq attr + data = data._with_freq(None) + super().test_series_constructor(data) class TestGetitem(BaseDatetimeTests, base.BaseGetitemTests): diff --git a/pandas/tests/frame/methods/test_set_index.py b/pandas/tests/frame/methods/test_set_index.py index 62dc400f8de9f..51f66128b1500 100644 --- a/pandas/tests/frame/methods/test_set_index.py +++ b/pandas/tests/frame/methods/test_set_index.py @@ -96,7 +96,7 @@ def test_set_index_cast_datetimeindex(self): idf = df.set_index("A") assert isinstance(idf.index, DatetimeIndex) - def test_set_index_dst(self, using_array_manager): + def test_set_index_dst(self): di = date_range("2006-10-29 00:00:00", periods=3, freq="H", tz="US/Pacific") df = DataFrame(data={"a": [0, 1, 2], "b": [3, 4, 5]}, index=di).reset_index() @@ -106,8 +106,7 @@ def test_set_index_dst(self, using_array_manager): data={"a": [0, 1, 2], "b": [3, 4, 5]}, index=Index(di, name="index"), ) - if not using_array_manager: - exp.index = exp.index._with_freq(None) + exp.index = exp.index._with_freq(None) tm.assert_frame_equal(res, exp) # GH#12920 diff --git a/pandas/tests/window/test_rolling.py b/pandas/tests/window/test_rolling.py index 28465e3a979a7..4846e15da039f 100644 --- 
a/pandas/tests/window/test_rolling.py +++ b/pandas/tests/window/test_rolling.py @@ -586,7 +586,7 @@ def test_rolling_datetime(axis_frame, tz_naive_fixture): ), ], ) -def test_rolling_window_as_string(center, expected_data, using_array_manager): +def test_rolling_window_as_string(center, expected_data): # see gh-22590 date_today = datetime.now() days = date_range(date_today, date_today + timedelta(365), freq="D") @@ -602,9 +602,7 @@ def test_rolling_window_as_string(center, expected_data, using_array_manager): ].agg("max") index = days.rename("DateCol") - if not using_array_manager: - # INFO(ArrayManager) preserves the frequence of the index - index = index._with_freq(None) + index = index._with_freq(None) expected = Series(expected_data, index=index, name="metric") tm.assert_series_equal(result, expected)
At the moment, when we put a DatetimeArray/TimedeltaArray/DatetimeIndex/TimedeltaIndex (from here on, just "DTI") into a Series or DataFrame, we drop its `.freq` in the BlockManager-DataFrame case, but not in the other three cases (BlockManager-Series, ArrayManager-DataFrame, ArrayManager-Series). The long-term behavior is definitely going to be always dropping the freq (more specifically, DTA/TDA won't _have_ a freq, xref #31218). So this PR standardizes on always dropping it. cc @mroeschke this should unblock your window PR cc @jorisvandenbossche because this touches ArrayManager
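The standardized behavior can be checked directly (a sketch assuming pandas >= 1.3, where this change landed): the `freq` lives on the DatetimeIndex, while the DatetimeArray backing a Series built from it has `freq=None`:

```python
import pandas as pd

dti = pd.date_range("2000-01-01", periods=3, freq="D")
assert dti.freq is not None  # the index keeps its freq

ser = pd.Series(dti)
# Series construction strips the freq from the backing DatetimeArray
assert ser.array.freq is None
```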
https://api.github.com/repos/pandas-dev/pandas/pulls/41425
2021-05-11T20:14:32Z
2021-05-17T16:31:51Z
2021-05-17T16:31:51Z
2021-05-17T16:35:51Z
CLN: Grouping.__init__ logic belonging in _convert_grouper
diff --git a/pandas/core/groupby/grouper.py b/pandas/core/groupby/grouper.py index 4650dbea27de1..1b5c11b363457 100644 --- a/pandas/core/groupby/grouper.py +++ b/pandas/core/groupby/grouper.py @@ -16,12 +16,11 @@ from pandas.errors import InvalidIndexError from pandas.util._decorators import cache_readonly +from pandas.core.dtypes.cast import sanitize_to_nanoseconds from pandas.core.dtypes.common import ( is_categorical_dtype, - is_datetime64_dtype, is_list_like, is_scalar, - is_timedelta64_dtype, ) import pandas.core.algorithms as algorithms @@ -466,9 +465,6 @@ def __init__( if isinstance(grouper, (Series, Index)) and name is None: self.name = grouper.name - if isinstance(grouper, MultiIndex): - self.grouper = grouper._values - # we have a single grouper which may be a myriad of things, # some of which are dependent on the passing in level @@ -506,14 +502,9 @@ def __init__( self.grouper = grouper._get_grouper() else: - if self.grouper is None and self.name is not None and self.obj is not None: - self.grouper = self.obj[self.name] - - elif isinstance(self.grouper, (list, tuple)): - self.grouper = com.asarray_tuplesafe(self.grouper) # a passed Categorical - elif is_categorical_dtype(self.grouper): + if is_categorical_dtype(self.grouper): self.grouper, self.all_grouper = recode_for_groupby( self.grouper, self.sort, observed @@ -539,7 +530,7 @@ def __init__( ) # we are done - if isinstance(self.grouper, Grouping): + elif isinstance(self.grouper, Grouping): self.grouper = self.grouper.grouper # no level passed @@ -562,14 +553,10 @@ def __init__( self.grouper = None # Try for sanity raise AssertionError(errmsg) - # if we have a date/time-like grouper, make sure that we have - # Timestamps like - if getattr(self.grouper, "dtype", None) is not None: - if is_datetime64_dtype(self.grouper): - self.grouper = self.grouper.astype("datetime64[ns]") - elif is_timedelta64_dtype(self.grouper): - - self.grouper = self.grouper.astype("timedelta64[ns]") + if isinstance(self.grouper, 
np.ndarray): + # if we have a date/time-like grouper, make sure that we have + # Timestamps like + self.grouper = sanitize_to_nanoseconds(self.grouper) def __repr__(self) -> str: return f"Grouping({self.name})" @@ -876,9 +863,14 @@ def _convert_grouper(axis: Index, grouper): return grouper._values else: return grouper.reindex(axis)._values - elif isinstance(grouper, (list, Series, Index, np.ndarray)): + elif isinstance(grouper, MultiIndex): + return grouper._values + elif isinstance(grouper, (list, tuple, Series, Index, np.ndarray)): if len(grouper) != len(axis): raise ValueError("Grouper and axis must be same length") + + if isinstance(grouper, (list, tuple)): + grouper = com.asarray_tuplesafe(grouper) return grouper else: return grouper
- [ ] closes #xxxx - [ ] tests added / passed - [ ] Ensure all linting tests pass, see [here](https://pandas.pydata.org/pandas-docs/dev/development/contributing.html#code-standards) for how to run them - [ ] whatsnew entry
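The cleanup in this PR moves list/tuple conversion (via `com.asarray_tuplesafe`) and the MultiIndex `._values` extraction out of `Grouping.__init__` and into `_convert_grouper`. Behavior should be unchanged; for example, grouping by a raw array-like the same length as the axis still works, and a length mismatch still raises the `ValueError` seen in the patch:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({"x": [1, 2, 3, 4]})

# A plain list grouper matching the axis length is converted internally
out = df.groupby(["a", "b", "a", "b"])["x"].sum()
assert out["a"] == 4 and out["b"] == 6

# A mismatched array-like grouper raises
try:
    df.groupby(np.array(["a", "b"]))["x"].sum()
except ValueError as err:
    assert "must be same length" in str(err)
```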
https://api.github.com/repos/pandas-dev/pandas/pulls/41424
2021-05-11T19:59:13Z
2021-05-12T01:08:32Z
2021-05-12T01:08:32Z
2021-05-12T01:36:06Z
[ArrowStringArray] TST: parametrize tests/strings/test_case_justify.py
diff --git a/pandas/tests/strings/test_case_justify.py b/pandas/tests/strings/test_case_justify.py index b46f50e430b54..d6e2ca7399b4e 100644 --- a/pandas/tests/strings/test_case_justify.py +++ b/pandas/tests/strings/test_case_justify.py @@ -1,4 +1,5 @@ from datetime import datetime +import operator import numpy as np import pytest @@ -9,68 +10,80 @@ ) -def test_title(): - values = Series(["FOO", "BAR", np.nan, "Blah", "blurg"]) +def test_title(any_string_dtype): + s = Series(["FOO", "BAR", np.nan, "Blah", "blurg"], dtype=any_string_dtype) + result = s.str.title() + expected = Series(["Foo", "Bar", np.nan, "Blah", "Blurg"], dtype=any_string_dtype) + tm.assert_series_equal(result, expected) - result = values.str.title() - exp = Series(["Foo", "Bar", np.nan, "Blah", "Blurg"]) - tm.assert_series_equal(result, exp) - # mixed - mixed = Series(["FOO", np.nan, "bar", True, datetime.today(), "blah", None, 1, 2.0]) - mixed = mixed.str.title() - exp = Series(["Foo", np.nan, "Bar", np.nan, np.nan, "Blah", np.nan, np.nan, np.nan]) - tm.assert_almost_equal(mixed, exp) +def test_title_mixed_object(): + s = Series(["FOO", np.nan, "bar", True, datetime.today(), "blah", None, 1, 2.0]) + result = s.str.title() + expected = Series( + ["Foo", np.nan, "Bar", np.nan, np.nan, "Blah", np.nan, np.nan, np.nan] + ) + tm.assert_almost_equal(result, expected) -def test_lower_upper(): - values = Series(["om", np.nan, "nom", "nom"]) +def test_lower_upper(any_string_dtype): + s = Series(["om", np.nan, "nom", "nom"], dtype=any_string_dtype) - result = values.str.upper() - exp = Series(["OM", np.nan, "NOM", "NOM"]) - tm.assert_series_equal(result, exp) + result = s.str.upper() + expected = Series(["OM", np.nan, "NOM", "NOM"], dtype=any_string_dtype) + tm.assert_series_equal(result, expected) result = result.str.lower() - tm.assert_series_equal(result, values) + tm.assert_series_equal(result, s) + - # mixed - mixed = Series(["a", np.nan, "b", True, datetime.today(), "foo", None, 1, 2.0]) - mixed = 
mixed.str.upper() - rs = Series(mixed).str.lower() - xp = Series(["a", np.nan, "b", np.nan, np.nan, "foo", np.nan, np.nan, np.nan]) - assert isinstance(rs, Series) - tm.assert_series_equal(rs, xp) +def test_lower_upper_mixed_object(): + s = Series(["a", np.nan, "b", True, datetime.today(), "foo", None, 1, 2.0]) + result = s.str.upper() + expected = Series(["A", np.nan, "B", np.nan, np.nan, "FOO", np.nan, np.nan, np.nan]) + tm.assert_series_equal(result, expected) -def test_capitalize(): - values = Series(["FOO", "BAR", np.nan, "Blah", "blurg"]) - result = values.str.capitalize() - exp = Series(["Foo", "Bar", np.nan, "Blah", "Blurg"]) - tm.assert_series_equal(result, exp) + result = s.str.lower() + expected = Series(["a", np.nan, "b", np.nan, np.nan, "foo", np.nan, np.nan, np.nan]) + tm.assert_series_equal(result, expected) - # mixed - mixed = Series(["FOO", np.nan, "bar", True, datetime.today(), "blah", None, 1, 2.0]) - mixed = mixed.str.capitalize() - exp = Series(["Foo", np.nan, "Bar", np.nan, np.nan, "Blah", np.nan, np.nan, np.nan]) - tm.assert_almost_equal(mixed, exp) +def test_capitalize(any_string_dtype): + s = Series(["FOO", "BAR", np.nan, "Blah", "blurg"], dtype=any_string_dtype) + result = s.str.capitalize() + expected = Series(["Foo", "Bar", np.nan, "Blah", "Blurg"], dtype=any_string_dtype) + tm.assert_series_equal(result, expected) -def test_swapcase(): - values = Series(["FOO", "BAR", np.nan, "Blah", "blurg"]) - result = values.str.swapcase() - exp = Series(["foo", "bar", np.nan, "bLAH", "BLURG"]) - tm.assert_series_equal(result, exp) - # mixed - mixed = Series(["FOO", np.nan, "bar", True, datetime.today(), "Blah", None, 1, 2.0]) - mixed = mixed.str.swapcase() - exp = Series(["foo", np.nan, "BAR", np.nan, np.nan, "bLAH", np.nan, np.nan, np.nan]) - tm.assert_almost_equal(mixed, exp) +def test_capitalize_mixed_object(): + s = Series(["FOO", np.nan, "bar", True, datetime.today(), "blah", None, 1, 2.0]) + result = s.str.capitalize() + expected = Series( + 
["Foo", np.nan, "Bar", np.nan, np.nan, "Blah", np.nan, np.nan, np.nan] + ) + tm.assert_series_equal(result, expected) + + +def test_swapcase(any_string_dtype): + s = Series(["FOO", "BAR", np.nan, "Blah", "blurg"], dtype=any_string_dtype) + result = s.str.swapcase() + expected = Series(["foo", "bar", np.nan, "bLAH", "BLURG"], dtype=any_string_dtype) + tm.assert_series_equal(result, expected) + + +def test_swapcase_mixed_object(): + s = Series(["FOO", np.nan, "bar", True, datetime.today(), "Blah", None, 1, 2.0]) + result = s.str.swapcase() + expected = Series( + ["foo", np.nan, "BAR", np.nan, np.nan, "bLAH", np.nan, np.nan, np.nan] + ) + tm.assert_series_equal(result, expected) -def test_casemethods(): +def test_casemethods(any_string_dtype): values = ["aaa", "bbb", "CCC", "Dddd", "eEEE"] - s = Series(values) + s = Series(values, dtype=any_string_dtype) assert s.str.lower().tolist() == [v.lower() for v in values] assert s.str.upper().tolist() == [v.upper() for v in values] assert s.str.title().tolist() == [v.title() for v in values] @@ -78,108 +91,122 @@ def test_casemethods(): assert s.str.swapcase().tolist() == [v.swapcase() for v in values] -def test_pad(): - values = Series(["a", "b", np.nan, "c", np.nan, "eeeeee"]) +def test_pad(any_string_dtype): + s = Series(["a", "b", np.nan, "c", np.nan, "eeeeee"], dtype=any_string_dtype) + + result = s.str.pad(5, side="left") + expected = Series( + [" a", " b", np.nan, " c", np.nan, "eeeeee"], dtype=any_string_dtype + ) + tm.assert_series_equal(result, expected) - result = values.str.pad(5, side="left") - exp = Series([" a", " b", np.nan, " c", np.nan, "eeeeee"]) - tm.assert_almost_equal(result, exp) + result = s.str.pad(5, side="right") + expected = Series( + ["a ", "b ", np.nan, "c ", np.nan, "eeeeee"], dtype=any_string_dtype + ) + tm.assert_series_equal(result, expected) - result = values.str.pad(5, side="right") - exp = Series(["a ", "b ", np.nan, "c ", np.nan, "eeeeee"]) - tm.assert_almost_equal(result, exp) + result = 
s.str.pad(5, side="both") + expected = Series( + [" a ", " b ", np.nan, " c ", np.nan, "eeeeee"], dtype=any_string_dtype + ) + tm.assert_series_equal(result, expected) - result = values.str.pad(5, side="both") - exp = Series([" a ", " b ", np.nan, " c ", np.nan, "eeeeee"]) - tm.assert_almost_equal(result, exp) - # mixed - mixed = Series(["a", np.nan, "b", True, datetime.today(), "ee", None, 1, 2.0]) +def test_pad_mixed_object(): + s = Series(["a", np.nan, "b", True, datetime.today(), "ee", None, 1, 2.0]) - rs = Series(mixed).str.pad(5, side="left") - xp = Series( + result = s.str.pad(5, side="left") + expected = Series( [" a", np.nan, " b", np.nan, np.nan, " ee", np.nan, np.nan, np.nan] ) + tm.assert_series_equal(result, expected) - assert isinstance(rs, Series) - tm.assert_almost_equal(rs, xp) - - mixed = Series(["a", np.nan, "b", True, datetime.today(), "ee", None, 1, 2.0]) - - rs = Series(mixed).str.pad(5, side="right") - xp = Series( + result = s.str.pad(5, side="right") + expected = Series( ["a ", np.nan, "b ", np.nan, np.nan, "ee ", np.nan, np.nan, np.nan] ) + tm.assert_series_equal(result, expected) - assert isinstance(rs, Series) - tm.assert_almost_equal(rs, xp) - - mixed = Series(["a", np.nan, "b", True, datetime.today(), "ee", None, 1, 2.0]) - - rs = Series(mixed).str.pad(5, side="both") - xp = Series( + result = s.str.pad(5, side="both") + expected = Series( [" a ", np.nan, " b ", np.nan, np.nan, " ee ", np.nan, np.nan, np.nan] ) + tm.assert_series_equal(result, expected) - assert isinstance(rs, Series) - tm.assert_almost_equal(rs, xp) +def test_pad_fillchar(any_string_dtype): + s = Series(["a", "b", np.nan, "c", np.nan, "eeeeee"], dtype=any_string_dtype) -def test_pad_fillchar(): + result = s.str.pad(5, side="left", fillchar="X") + expected = Series( + ["XXXXa", "XXXXb", np.nan, "XXXXc", np.nan, "eeeeee"], dtype=any_string_dtype + ) + tm.assert_series_equal(result, expected) - values = Series(["a", "b", np.nan, "c", np.nan, "eeeeee"]) + result = 
s.str.pad(5, side="right", fillchar="X") + expected = Series( + ["aXXXX", "bXXXX", np.nan, "cXXXX", np.nan, "eeeeee"], dtype=any_string_dtype + ) + tm.assert_series_equal(result, expected) - result = values.str.pad(5, side="left", fillchar="X") - exp = Series(["XXXXa", "XXXXb", np.nan, "XXXXc", np.nan, "eeeeee"]) - tm.assert_almost_equal(result, exp) + result = s.str.pad(5, side="both", fillchar="X") + expected = Series( + ["XXaXX", "XXbXX", np.nan, "XXcXX", np.nan, "eeeeee"], dtype=any_string_dtype + ) + tm.assert_series_equal(result, expected) - result = values.str.pad(5, side="right", fillchar="X") - exp = Series(["aXXXX", "bXXXX", np.nan, "cXXXX", np.nan, "eeeeee"]) - tm.assert_almost_equal(result, exp) - result = values.str.pad(5, side="both", fillchar="X") - exp = Series(["XXaXX", "XXbXX", np.nan, "XXcXX", np.nan, "eeeeee"]) - tm.assert_almost_equal(result, exp) +def test_pad_fillchar_bad_arg_raises(any_string_dtype): + s = Series(["a", "b", np.nan, "c", np.nan, "eeeeee"], dtype=any_string_dtype) msg = "fillchar must be a character, not str" with pytest.raises(TypeError, match=msg): - result = values.str.pad(5, fillchar="XY") + s.str.pad(5, fillchar="XY") msg = "fillchar must be a character, not int" with pytest.raises(TypeError, match=msg): - result = values.str.pad(5, fillchar=5) + s.str.pad(5, fillchar=5) -@pytest.mark.parametrize("f", ["center", "ljust", "rjust", "zfill", "pad"]) -def test_pad_width(f): +@pytest.mark.parametrize("method_name", ["center", "ljust", "rjust", "zfill", "pad"]) +def test_pad_width_bad_arg_raises(method_name, any_string_dtype): # see gh-13598 - s = Series(["1", "22", "a", "bb"]) - msg = "width must be of integer type, not*" + s = Series(["1", "22", "a", "bb"], dtype=any_string_dtype) + op = operator.methodcaller(method_name, "f") + msg = "width must be of integer type, not str" with pytest.raises(TypeError, match=msg): - getattr(s.str, f)("f") + op(s.str) -def test_center_ljust_rjust(): - values = Series(["a", "b", np.nan, "c", 
np.nan, "eeeeee"]) +def test_center_ljust_rjust(any_string_dtype): + s = Series(["a", "b", np.nan, "c", np.nan, "eeeeee"], dtype=any_string_dtype) - result = values.str.center(5) - exp = Series([" a ", " b ", np.nan, " c ", np.nan, "eeeeee"]) - tm.assert_almost_equal(result, exp) + result = s.str.center(5) + expected = Series( + [" a ", " b ", np.nan, " c ", np.nan, "eeeeee"], dtype=any_string_dtype + ) + tm.assert_series_equal(result, expected) - result = values.str.ljust(5) - exp = Series(["a ", "b ", np.nan, "c ", np.nan, "eeeeee"]) - tm.assert_almost_equal(result, exp) + result = s.str.ljust(5) + expected = Series( + ["a ", "b ", np.nan, "c ", np.nan, "eeeeee"], dtype=any_string_dtype + ) + tm.assert_series_equal(result, expected) - result = values.str.rjust(5) - exp = Series([" a", " b", np.nan, " c", np.nan, "eeeeee"]) - tm.assert_almost_equal(result, exp) + result = s.str.rjust(5) + expected = Series( + [" a", " b", np.nan, " c", np.nan, "eeeeee"], dtype=any_string_dtype + ) + tm.assert_series_equal(result, expected) - # mixed - mixed = Series(["a", np.nan, "b", True, datetime.today(), "c", "eee", None, 1, 2.0]) - rs = Series(mixed).str.center(5) - xp = Series( +def test_center_ljust_rjust_mixed_object(): + s = Series(["a", np.nan, "b", True, datetime.today(), "c", "eee", None, 1, 2.0]) + + result = s.str.center(5) + expected = Series( [ " a ", np.nan, @@ -193,11 +220,10 @@ def test_center_ljust_rjust(): np.nan, ] ) - assert isinstance(rs, Series) - tm.assert_almost_equal(rs, xp) + tm.assert_series_equal(result, expected) - rs = Series(mixed).str.ljust(5) - xp = Series( + result = s.str.ljust(5) + expected = Series( [ "a ", np.nan, @@ -211,11 +237,10 @@ def test_center_ljust_rjust(): np.nan, ] ) - assert isinstance(rs, Series) - tm.assert_almost_equal(rs, xp) + tm.assert_series_equal(result, expected) - rs = Series(mixed).str.rjust(5) - xp = Series( + result = s.str.rjust(5) + expected = Series( [ " a", np.nan, @@ -229,82 +254,95 @@ def 
test_center_ljust_rjust(): np.nan, ] ) - assert isinstance(rs, Series) - tm.assert_almost_equal(rs, xp) + tm.assert_series_equal(result, expected) -def test_center_ljust_rjust_fillchar(): - values = Series(["a", "bb", "cccc", "ddddd", "eeeeee"]) +def test_center_ljust_rjust_fillchar(any_string_dtype): + s = Series(["a", "bb", "cccc", "ddddd", "eeeeee"], dtype=any_string_dtype) - result = values.str.center(5, fillchar="X") - expected = Series(["XXaXX", "XXbbX", "Xcccc", "ddddd", "eeeeee"]) + result = s.str.center(5, fillchar="X") + expected = Series( + ["XXaXX", "XXbbX", "Xcccc", "ddddd", "eeeeee"], dtype=any_string_dtype + ) tm.assert_series_equal(result, expected) - expected = np.array([v.center(5, "X") for v in values.values], dtype=np.object_) - tm.assert_numpy_array_equal(result.values, expected) + expected = np.array([v.center(5, "X") for v in np.array(s)], dtype=np.object_) + tm.assert_numpy_array_equal(np.array(result, dtype=np.object_), expected) - result = values.str.ljust(5, fillchar="X") - expected = Series(["aXXXX", "bbXXX", "ccccX", "ddddd", "eeeeee"]) + result = s.str.ljust(5, fillchar="X") + expected = Series( + ["aXXXX", "bbXXX", "ccccX", "ddddd", "eeeeee"], dtype=any_string_dtype + ) tm.assert_series_equal(result, expected) - expected = np.array([v.ljust(5, "X") for v in values.values], dtype=np.object_) - tm.assert_numpy_array_equal(result.values, expected) + expected = np.array([v.ljust(5, "X") for v in np.array(s)], dtype=np.object_) + tm.assert_numpy_array_equal(np.array(result, dtype=np.object_), expected) - result = values.str.rjust(5, fillchar="X") - expected = Series(["XXXXa", "XXXbb", "Xcccc", "ddddd", "eeeeee"]) + result = s.str.rjust(5, fillchar="X") + expected = Series( + ["XXXXa", "XXXbb", "Xcccc", "ddddd", "eeeeee"], dtype=any_string_dtype + ) tm.assert_series_equal(result, expected) - expected = np.array([v.rjust(5, "X") for v in values.values], dtype=np.object_) - tm.assert_numpy_array_equal(result.values, expected) + expected = 
np.array([v.rjust(5, "X") for v in np.array(s)], dtype=np.object_) + tm.assert_numpy_array_equal(np.array(result, dtype=np.object_), expected) - # If fillchar is not a charatter, normal str raises TypeError + +def test_center_ljust_rjust_fillchar_bad_arg_raises(any_string_dtype): + s = Series(["a", "bb", "cccc", "ddddd", "eeeeee"], dtype=any_string_dtype) + + # If fillchar is not a character, normal str raises TypeError # 'aaa'.ljust(5, 'XY') # TypeError: must be char, not str template = "fillchar must be a character, not {dtype}" with pytest.raises(TypeError, match=template.format(dtype="str")): - values.str.center(5, fillchar="XY") + s.str.center(5, fillchar="XY") with pytest.raises(TypeError, match=template.format(dtype="str")): - values.str.ljust(5, fillchar="XY") + s.str.ljust(5, fillchar="XY") with pytest.raises(TypeError, match=template.format(dtype="str")): - values.str.rjust(5, fillchar="XY") + s.str.rjust(5, fillchar="XY") with pytest.raises(TypeError, match=template.format(dtype="int")): - values.str.center(5, fillchar=1) + s.str.center(5, fillchar=1) with pytest.raises(TypeError, match=template.format(dtype="int")): - values.str.ljust(5, fillchar=1) + s.str.ljust(5, fillchar=1) with pytest.raises(TypeError, match=template.format(dtype="int")): - values.str.rjust(5, fillchar=1) + s.str.rjust(5, fillchar=1) -def test_zfill(): - values = Series(["1", "22", "aaa", "333", "45678"]) +def test_zfill(any_string_dtype): + s = Series(["1", "22", "aaa", "333", "45678"], dtype=any_string_dtype) - result = values.str.zfill(5) - expected = Series(["00001", "00022", "00aaa", "00333", "45678"]) + result = s.str.zfill(5) + expected = Series( + ["00001", "00022", "00aaa", "00333", "45678"], dtype=any_string_dtype + ) tm.assert_series_equal(result, expected) - expected = np.array([v.zfill(5) for v in values.values], dtype=np.object_) - tm.assert_numpy_array_equal(result.values, expected) + expected = np.array([v.zfill(5) for v in np.array(s)], dtype=np.object_) + 
tm.assert_numpy_array_equal(np.array(result, dtype=np.object_), expected) - result = values.str.zfill(3) - expected = Series(["001", "022", "aaa", "333", "45678"]) + result = s.str.zfill(3) + expected = Series(["001", "022", "aaa", "333", "45678"], dtype=any_string_dtype) tm.assert_series_equal(result, expected) - expected = np.array([v.zfill(3) for v in values.values], dtype=np.object_) - tm.assert_numpy_array_equal(result.values, expected) + expected = np.array([v.zfill(3) for v in np.array(s)], dtype=np.object_) + tm.assert_numpy_array_equal(np.array(result, dtype=np.object_), expected) - values = Series(["1", np.nan, "aaa", np.nan, "45678"]) - result = values.str.zfill(5) - expected = Series(["00001", np.nan, "00aaa", np.nan, "45678"]) + s = Series(["1", np.nan, "aaa", np.nan, "45678"], dtype=any_string_dtype) + result = s.str.zfill(5) + expected = Series( + ["00001", np.nan, "00aaa", np.nan, "45678"], dtype=any_string_dtype + ) tm.assert_series_equal(result, expected) -def test_wrap(): +def test_wrap(any_string_dtype): # test values are: two words less than width, two words equal to width, # two words greater than width, one word less than width, one word # equal to width, one word greater than width, multiple tokens with # trailing whitespace equal to width - values = Series( + s = Series( [ "hello world", "hello world!", @@ -315,11 +353,12 @@ def test_wrap(): "ab ab ab ab ", "ab ab ab ab a", "\t", - ] + ], + dtype=any_string_dtype, ) # expected values - xp = Series( + expected = Series( [ "hello world", "hello world!", @@ -330,15 +369,21 @@ def test_wrap(): "ab ab ab ab", "ab ab ab ab\na", "", - ] + ], + dtype=any_string_dtype, ) - rs = values.str.wrap(12, break_long_words=True) - tm.assert_series_equal(rs, xp) + result = s.str.wrap(12, break_long_words=True) + tm.assert_series_equal(result, expected) + - # test with pre and post whitespace (non-unicode), NaN, and non-ascii - # Unicode - values = Series([" pre ", np.nan, "\xac\u20ac\U00008000 abadcafe"]) - 
xp = Series([" pre", np.nan, "\xac\u20ac\U00008000 ab\nadcafe"]) - rs = values.str.wrap(6) - tm.assert_series_equal(rs, xp) +def test_wrap_unicode(any_string_dtype): + # test with pre and post whitespace (non-unicode), NaN, and non-ascii Unicode + s = Series( + [" pre ", np.nan, "\xac\u20ac\U00008000 abadcafe"], dtype=any_string_dtype + ) + expected = Series( + [" pre", np.nan, "\xac\u20ac\U00008000 ab\nadcafe"], dtype=any_string_dtype + ) + result = s.str.wrap(6) + tm.assert_series_equal(result, expected)
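The migration pattern throughout this diff is to thread the `any_string_dtype` fixture into both the input and the expected Series, so each test runs against `object` and the nullable string dtypes. A minimal standalone sketch of the same idea, using an explicit dtype list in place of the pandas fixture:

```python
import numpy as np
import pandas as pd
import pandas._testing as tm


def check_title(dtype):
    # Build input and expected with the same dtype, as the parametrized
    # tests in this PR do via the any_string_dtype fixture
    s = pd.Series(["FOO", "BAR", np.nan, "Blah", "blurg"], dtype=dtype)
    result = s.str.title()
    expected = pd.Series(["Foo", "Bar", np.nan, "Blah", "Blurg"], dtype=dtype)
    tm.assert_series_equal(result, expected)


# "string[pyarrow]" could be added here too, when pyarrow is installed
for dtype in ["object", "string"]:
    check_title(dtype)
```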
https://api.github.com/repos/pandas-dev/pandas/pulls/41420
2021-05-11T15:28:33Z
2021-05-12T13:30:18Z
2021-05-12T13:30:18Z
2021-05-12T13:37:41Z
[ArrowStringArray] TST: parametrize str.extractall tests
diff --git a/pandas/tests/strings/test_extract.py b/pandas/tests/strings/test_extract.py index c1564a5c256a1..83401de0e5443 100644 --- a/pandas/tests/strings/test_extract.py +++ b/pandas/tests/strings/test_extract.py @@ -358,8 +358,8 @@ def test_extract_single_group_returns_frame(): tm.assert_frame_equal(r, e) -def test_extractall(): - subject_list = [ +def test_extractall(any_string_dtype): + data = [ "dave@google.com", "tdhock5@gmail.com", "maudelaperriere@gmail.com", @@ -378,7 +378,7 @@ def test_extractall(): ("c", "d", "com"), ("e", "f", "com"), ] - named_pattern = r""" + pat = r""" (?P<user>[a-z0-9]+) @ (?P<domain>[a-z]+) @@ -386,20 +386,22 @@ def test_extractall(): (?P<tld>[a-z]{2,4}) """ expected_columns = ["user", "domain", "tld"] - S = Series(subject_list) - # extractall should return a DataFrame with one row for each - # match, indexed by the subject from which the match came. + s = Series(data, dtype=any_string_dtype) + # extractall should return a DataFrame with one row for each match, indexed by the + # subject from which the match came. 
expected_index = MultiIndex.from_tuples( [(0, 0), (1, 0), (2, 0), (3, 0), (3, 1), (4, 0), (4, 1), (4, 2)], names=(None, "match"), ) - expected_df = DataFrame(expected_tuples, expected_index, expected_columns) - computed_df = S.str.extractall(named_pattern, re.VERBOSE) - tm.assert_frame_equal(computed_df, expected_df) + expected = DataFrame( + expected_tuples, expected_index, expected_columns, dtype=any_string_dtype + ) + result = s.str.extractall(pat, flags=re.VERBOSE) + tm.assert_frame_equal(result, expected) - # The index of the input Series should be used to construct - # the index of the output DataFrame: - series_index = MultiIndex.from_tuples( + # The index of the input Series should be used to construct the index of the output + # DataFrame: + mi = MultiIndex.from_tuples( [ ("single", "Dave"), ("single", "Toby"), @@ -410,7 +412,7 @@ def test_extractall(): ("none", "empty"), ] ) - Si = Series(subject_list, series_index) + s = Series(data, index=mi, dtype=any_string_dtype) expected_index = MultiIndex.from_tuples( [ ("single", "Dave", 0), @@ -424,67 +426,80 @@ def test_extractall(): ], names=(None, None, "match"), ) - expected_df = DataFrame(expected_tuples, expected_index, expected_columns) - computed_df = Si.str.extractall(named_pattern, re.VERBOSE) - tm.assert_frame_equal(computed_df, expected_df) + expected = DataFrame( + expected_tuples, expected_index, expected_columns, dtype=any_string_dtype + ) + result = s.str.extractall(pat, flags=re.VERBOSE) + tm.assert_frame_equal(result, expected) # MultiIndexed subject with names. 
- Sn = Series(subject_list, series_index) - Sn.index.names = ("matches", "description") + s = Series(data, index=mi, dtype=any_string_dtype) + s.index.names = ("matches", "description") expected_index.names = ("matches", "description", "match") - expected_df = DataFrame(expected_tuples, expected_index, expected_columns) - computed_df = Sn.str.extractall(named_pattern, re.VERBOSE) - tm.assert_frame_equal(computed_df, expected_df) - - # optional groups. - subject_list = ["", "A1", "32"] - named_pattern = "(?P<letter>[AB])?(?P<number>[123])" - computed_df = Series(subject_list).str.extractall(named_pattern) - expected_index = MultiIndex.from_tuples( - [(1, 0), (2, 0), (2, 1)], names=(None, "match") - ) - expected_df = DataFrame( - [("A", "1"), (np.nan, "3"), (np.nan, "2")], - expected_index, - columns=["letter", "number"], + expected = DataFrame( + expected_tuples, expected_index, expected_columns, dtype=any_string_dtype ) - tm.assert_frame_equal(computed_df, expected_df) + result = s.str.extractall(pat, flags=re.VERBOSE) + tm.assert_frame_equal(result, expected) + + +@pytest.mark.parametrize( + "pat,expected_names", + [ + # optional groups. + ("(?P<letter>[AB])?(?P<number>[123])", ["letter", "number"]), + # only one of two groups has a name. + ("([AB])?(?P<number>[123])", [0, "number"]), + ], +) +def test_extractall_column_names(pat, expected_names, any_string_dtype): + s = Series(["", "A1", "32"], dtype=any_string_dtype) - # only one of two groups has a name. 
- pattern = "([AB])?(?P<number>[123])" - computed_df = Series(subject_list).str.extractall(pattern) - expected_df = DataFrame( + result = s.str.extractall(pat) + expected = DataFrame( [("A", "1"), (np.nan, "3"), (np.nan, "2")], - expected_index, - columns=[0, "number"], + index=MultiIndex.from_tuples([(1, 0), (2, 0), (2, 1)], names=(None, "match")), + columns=expected_names, + dtype=any_string_dtype, ) - tm.assert_frame_equal(computed_df, expected_df) + tm.assert_frame_equal(result, expected) -def test_extractall_single_group(): - # extractall(one named group) returns DataFrame with one named - # column. - s = Series(["a3", "b3", "d4c2"], name="series_name") - r = s.str.extractall(r"(?P<letter>[a-z])") - i = MultiIndex.from_tuples([(0, 0), (1, 0), (2, 0), (2, 1)], names=(None, "match")) - e = DataFrame({"letter": ["a", "b", "d", "c"]}, i) - tm.assert_frame_equal(r, e) +def test_extractall_single_group(any_string_dtype): + s = Series(["a3", "b3", "d4c2"], name="series_name", dtype=any_string_dtype) + expected_index = MultiIndex.from_tuples( + [(0, 0), (1, 0), (2, 0), (2, 1)], names=(None, "match") + ) - # extractall(one un-named group) returns DataFrame with one - # un-named column. - r = s.str.extractall(r"([a-z])") - e = DataFrame(["a", "b", "d", "c"], i) - tm.assert_frame_equal(r, e) + # extractall(one named group) returns DataFrame with one named column. + result = s.str.extractall(r"(?P<letter>[a-z])") + expected = DataFrame( + {"letter": ["a", "b", "d", "c"]}, index=expected_index, dtype=any_string_dtype + ) + tm.assert_frame_equal(result, expected) + + # extractall(one un-named group) returns DataFrame with one un-named column. 
+ result = s.str.extractall(r"([a-z])") + expected = DataFrame( + ["a", "b", "d", "c"], index=expected_index, dtype=any_string_dtype + ) + tm.assert_frame_equal(result, expected) -def test_extractall_single_group_with_quantifier(): - # extractall(one un-named group with quantifier) returns - # DataFrame with one un-named column (GH13382). - s = Series(["ab3", "abc3", "d4cd2"], name="series_name") - r = s.str.extractall(r"([a-z]+)") - i = MultiIndex.from_tuples([(0, 0), (1, 0), (2, 0), (2, 1)], names=(None, "match")) - e = DataFrame(["ab", "abc", "d", "cd"], i) - tm.assert_frame_equal(r, e) +def test_extractall_single_group_with_quantifier(any_string_dtype): + # GH#13382 + # extractall(one un-named group with quantifier) returns DataFrame with one un-named + # column. + s = Series(["ab3", "abc3", "d4cd2"], name="series_name", dtype=any_string_dtype) + result = s.str.extractall(r"([a-z]+)") + expected = DataFrame( + ["ab", "abc", "d", "cd"], + index=MultiIndex.from_tuples( + [(0, 0), (1, 0), (2, 0), (2, 1)], names=(None, "match") + ), + dtype=any_string_dtype, + ) + tm.assert_frame_equal(result, expected) @pytest.mark.parametrize( @@ -500,78 +515,91 @@ def test_extractall_single_group_with_quantifier(): (["a3", "b3", "d4c2"], ("i1", "i2")), ], ) -def test_extractall_no_matches(data, names): +def test_extractall_no_matches(data, names, any_string_dtype): # GH19075 extractall with no matches should return a valid MultiIndex n = len(data) if len(names) == 1: - i = Index(range(n), name=names[0]) + index = Index(range(n), name=names[0]) else: - a = (tuple([i] * (n - 1)) for i in range(n)) - i = MultiIndex.from_tuples(a, names=names) - s = Series(data, name="series_name", index=i, dtype="object") - ei = MultiIndex.from_tuples([], names=(names + ("match",))) + tuples = (tuple([i] * (n - 1)) for i in range(n)) + index = MultiIndex.from_tuples(tuples, names=names) + s = Series(data, name="series_name", index=index, dtype=any_string_dtype) + expected_index = 
MultiIndex.from_tuples([], names=(names + ("match",))) # one un-named group. - r = s.str.extractall("(z)") - e = DataFrame(columns=[0], index=ei) - tm.assert_frame_equal(r, e) + result = s.str.extractall("(z)") + expected = DataFrame(columns=[0], index=expected_index, dtype=any_string_dtype) + tm.assert_frame_equal(result, expected) # two un-named groups. - r = s.str.extractall("(z)(z)") - e = DataFrame(columns=[0, 1], index=ei) - tm.assert_frame_equal(r, e) + result = s.str.extractall("(z)(z)") + expected = DataFrame(columns=[0, 1], index=expected_index, dtype=any_string_dtype) + tm.assert_frame_equal(result, expected) # one named group. - r = s.str.extractall("(?P<first>z)") - e = DataFrame(columns=["first"], index=ei) - tm.assert_frame_equal(r, e) + result = s.str.extractall("(?P<first>z)") + expected = DataFrame( + columns=["first"], index=expected_index, dtype=any_string_dtype + ) + tm.assert_frame_equal(result, expected) # two named groups. - r = s.str.extractall("(?P<first>z)(?P<second>z)") - e = DataFrame(columns=["first", "second"], index=ei) - tm.assert_frame_equal(r, e) + result = s.str.extractall("(?P<first>z)(?P<second>z)") + expected = DataFrame( + columns=["first", "second"], index=expected_index, dtype=any_string_dtype + ) + tm.assert_frame_equal(result, expected) # one named, one un-named. 
- r = s.str.extractall("(z)(?P<second>z)") - e = DataFrame(columns=[0, "second"], index=ei) - tm.assert_frame_equal(r, e) + result = s.str.extractall("(z)(?P<second>z)") + expected = DataFrame( + columns=[0, "second"], index=expected_index, dtype=any_string_dtype + ) + tm.assert_frame_equal(result, expected) -def test_extractall_stringindex(): - s = Series(["a1a2", "b1", "c1"], name="xxx") - res = s.str.extractall(r"[ab](?P<digit>\d)") - exp_idx = MultiIndex.from_tuples([(0, 0), (0, 1), (1, 0)], names=[None, "match"]) - exp = DataFrame({"digit": ["1", "2", "1"]}, index=exp_idx) - tm.assert_frame_equal(res, exp) +def test_extractall_stringindex(any_string_dtype): + s = Series(["a1a2", "b1", "c1"], name="xxx", dtype=any_string_dtype) + result = s.str.extractall(r"[ab](?P<digit>\d)") + expected = DataFrame( + {"digit": ["1", "2", "1"]}, + index=MultiIndex.from_tuples([(0, 0), (0, 1), (1, 0)], names=[None, "match"]), + dtype=any_string_dtype, + ) + tm.assert_frame_equal(result, expected) - # index should return the same result as the default index without name - # thus index.name doesn't affect to the result - for idx in [ - Index(["a1a2", "b1", "c1"]), - Index(["a1a2", "b1", "c1"], name="xxx"), - ]: + # index should return the same result as the default index without name thus + # index.name doesn't affect to the result + if any_string_dtype == "object": + for idx in [ + Index(["a1a2", "b1", "c1"]), + Index(["a1a2", "b1", "c1"], name="xxx"), + ]: - res = idx.str.extractall(r"[ab](?P<digit>\d)") - tm.assert_frame_equal(res, exp) + result = idx.str.extractall(r"[ab](?P<digit>\d)") + tm.assert_frame_equal(result, expected) s = Series( ["a1a2", "b1", "c1"], name="s_name", index=Index(["XX", "yy", "zz"], name="idx_name"), + dtype=any_string_dtype, ) - res = s.str.extractall(r"[ab](?P<digit>\d)") - exp_idx = MultiIndex.from_tuples( - [("XX", 0), ("XX", 1), ("yy", 0)], names=["idx_name", "match"] + result = s.str.extractall(r"[ab](?P<digit>\d)") + expected = DataFrame( + 
{"digit": ["1", "2", "1"]}, + index=MultiIndex.from_tuples( + [("XX", 0), ("XX", 1), ("yy", 0)], names=["idx_name", "match"] + ), + dtype=any_string_dtype, ) - exp = DataFrame({"digit": ["1", "2", "1"]}, index=exp_idx) - tm.assert_frame_equal(res, exp) + tm.assert_frame_equal(result, expected) -def test_extractall_errors(): - # Does not make sense to use extractall with a regex that has - # no capture groups. (it returns DataFrame with one column for - # each capture group) - s = Series(["a3", "b3", "d4c2"], name="series_name") +def test_extractall_no_capture_groups_raises(any_string_dtype): + # Does not make sense to use extractall with a regex that has no capture groups. + # (it returns DataFrame with one column for each capture group) + s = Series(["a3", "b3", "d4c2"], name="series_name", dtype=any_string_dtype) with pytest.raises(ValueError, match="no capture groups"): s.str.extractall(r"[a-z]") @@ -591,8 +619,8 @@ def test_extract_index_one_two_groups(): tm.assert_frame_equal(r, e) -def test_extractall_same_as_extract(): - s = Series(["a3", "b3", "c2"], name="series_name") +def test_extractall_same_as_extract(any_string_dtype): + s = Series(["a3", "b3", "c2"], name="series_name", dtype=any_string_dtype) pattern_two_noname = r"([a-z])([0-9])" extract_two_noname = s.str.extract(pattern_two_noname, expand=True) @@ -619,13 +647,13 @@ def test_extractall_same_as_extract(): tm.assert_frame_equal(extract_one_noname, no_multi_index) -def test_extractall_same_as_extract_subject_index(): +def test_extractall_same_as_extract_subject_index(any_string_dtype): # same as above tests, but s has an MultiIndex. 
- i = MultiIndex.from_tuples( + mi = MultiIndex.from_tuples( [("A", "first"), ("B", "second"), ("C", "third")], names=("capital", "ordinal"), ) - s = Series(["a3", "b3", "c2"], i, name="series_name") + s = Series(["a3", "b3", "c2"], index=mi, name="series_name", dtype=any_string_dtype) pattern_two_noname = r"([a-z])([0-9])" extract_two_noname = s.str.extract(pattern_two_noname, expand=True)
https://api.github.com/repos/pandas-dev/pandas/pulls/41419
2021-05-11T14:01:24Z
2021-05-11T21:50:46Z
2021-05-11T21:50:46Z
2021-05-12T16:01:17Z
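The tests refactored in the diff above pin down the core `str.extractall` contract: one output row per match, indexed by the subject's index plus a `"match"` level. A minimal sketch of that behavior (same data as the `test_extractall_stringindex` case, run against an installed pandas):

```python
import pandas as pd

# extractall returns one row per regex match; the index gains a "match"
# level counting matches within each subject string.
s = pd.Series(["a1a2", "b1", "c1"])
result = s.str.extractall(r"[ab](?P<digit>\d)")
print(result)
# "a1a2" matched twice -> rows (0, 0) and (0, 1); "c1" matched nothing.
```

The named group `digit` becomes the column label, and subjects with no matches (here `"c1"`) simply contribute no rows.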
[ArrowStringArray] REF: str.extract argument validation
diff --git a/pandas/core/strings/accessor.py b/pandas/core/strings/accessor.py index 55a12a301c6e6..2646ddfa45b58 100644 --- a/pandas/core/strings/accessor.py +++ b/pandas/core/strings/accessor.py @@ -2,17 +2,20 @@ from functools import wraps import re from typing import ( + TYPE_CHECKING, Dict, Hashable, List, Optional, Pattern, + Union, ) import warnings import numpy as np import pandas._libs.lib as lib +from pandas._typing import FrameOrSeriesUnion from pandas.util._decorators import Appender from pandas.core.dtypes.common import ( @@ -33,6 +36,9 @@ from pandas.core.base import NoNewAttributesMixin +if TYPE_CHECKING: + from pandas import Index + _shared_docs: Dict[str, str] = {} _cpython_optimized_encoders = ( "utf-8", @@ -2276,7 +2282,9 @@ def findall(self, pat, flags=0): return self._wrap_result(result, returns_string=False) @forbid_nonstring_types(["bytes"]) - def extract(self, pat, flags=0, expand=True): + def extract( + self, pat: str, flags: int = 0, expand: bool = True + ) -> Union[FrameOrSeriesUnion, "Index"]: r""" Extract capture groups in the regex `pat` as columns in a DataFrame. 
@@ -2357,6 +2365,16 @@ def extract(self, pat, flags=0, expand=True): 2 NaN dtype: object """ + if not isinstance(expand, bool): + raise ValueError("expand must be True or False") + + regex = re.compile(pat, flags=flags) + if regex.groups == 0: + raise ValueError("pattern contains no capture groups") + + if not expand and regex.groups > 1 and isinstance(self._data, ABCIndex): + raise ValueError("only one regex group is supported with Index") + # TODO: dispatch return str_extract(self, pat, flags, expand=expand) @@ -3010,8 +3028,6 @@ def cat_core(list_of_columns: List, sep: str): def _groups_or_na_fun(regex): """Used in both extract_noexpand and extract_frame""" - if regex.groups == 0: - raise ValueError("pattern contains no capture groups") empty_row = [np.nan] * regex.groups def f(x): @@ -3086,8 +3102,6 @@ def _str_extract_noexpand(arr, pat, flags=0): # not dispatching, so we have to reconstruct here. result = pd_array(result, dtype=result_dtype) else: - if isinstance(arr, ABCIndex): - raise ValueError("only one regex group is supported with Index") name = None columns = _get_group_names(regex) if arr.size == 0: @@ -3138,8 +3152,6 @@ def _str_extract_frame(arr, pat, flags=0): def str_extract(arr, pat, flags=0, expand=True): - if not isinstance(expand, bool): - raise ValueError("expand must be True or False") if expand: result = _str_extract_frame(arr._orig, pat, flags=flags) return result.__finalize__(arr._orig, method="str_extract")
https://api.github.com/repos/pandas-dev/pandas/pulls/41418
2021-05-11T11:48:32Z
2021-05-12T13:30:56Z
2021-05-12T13:30:56Z
2021-05-12T13:34:03Z
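The refactor above hoists argument validation (boolean `expand`, at least one capture group, single-group restriction for `Index`) into the `extract` accessor entry point, so bad input fails before any matching work. A small sketch of the user-visible check, against an installed pandas:

```python
import pandas as pd

s = pd.Series(["a3", "b3", "d4c2"])

# extract requires at least one capture group; with the refactor this
# ValueError is raised up front in the accessor.
msg = ""
try:
    s.str.extract(r"[a-z]")  # no parentheses -> no capture groups
except ValueError as err:
    msg = str(err)
print(msg)
```

The same pattern wrapped in parentheses, `r"([a-z])"`, extracts normally.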
[ArrowStringArray] REF: extract/extractall column names
diff --git a/pandas/core/strings/accessor.py b/pandas/core/strings/accessor.py index 9f8c9fa2f0515..55a12a301c6e6 100644 --- a/pandas/core/strings/accessor.py +++ b/pandas/core/strings/accessor.py @@ -3,8 +3,10 @@ import re from typing import ( Dict, + Hashable, List, Optional, + Pattern, ) import warnings @@ -3036,13 +3038,31 @@ def _result_dtype(arr): return object -def _get_single_group_name(rx): - try: - return list(rx.groupindex.keys()).pop() - except IndexError: +def _get_single_group_name(regex: Pattern) -> Hashable: + if regex.groupindex: + return next(iter(regex.groupindex)) + else: return None +def _get_group_names(regex: Pattern) -> List[Hashable]: + """ + Get named groups from compiled regex. + + Unnamed groups are numbered. + + Parameters + ---------- + regex : compiled regex + + Returns + ------- + list of column labels + """ + names = {v: k for k, v in regex.groupindex.items()} + return [names.get(1 + i, i) for i in range(regex.groups)] + + def _str_extract_noexpand(arr, pat, flags=0): """ Find groups in each string in the Series using passed regular @@ -3069,8 +3089,7 @@ def _str_extract_noexpand(arr, pat, flags=0): if isinstance(arr, ABCIndex): raise ValueError("only one regex group is supported with Index") name = None - names = dict(zip(regex.groupindex.values(), regex.groupindex.keys())) - columns = [names.get(1 + i, i) for i in range(regex.groups)] + columns = _get_group_names(regex) if arr.size == 0: # error: Incompatible types in assignment (expression has type # "DataFrame", variable has type "ndarray") @@ -3101,8 +3120,7 @@ def _str_extract_frame(arr, pat, flags=0): regex = re.compile(pat, flags=flags) groups_or_na = _groups_or_na_fun(regex) - names = dict(zip(regex.groupindex.values(), regex.groupindex.keys())) - columns = [names.get(1 + i, i) for i in range(regex.groups)] + columns = _get_group_names(regex) if len(arr) == 0: return DataFrame(columns=columns, dtype=object) @@ -3139,8 +3157,7 @@ def str_extractall(arr, pat, flags=0): if 
isinstance(arr, ABCIndex): arr = arr.to_series().reset_index(drop=True) - names = dict(zip(regex.groupindex.values(), regex.groupindex.keys())) - columns = [names.get(1 + i, i) for i in range(regex.groups)] + columns = _get_group_names(regex) match_list = [] index_list = [] is_mi = arr.index.nlevels > 1
https://api.github.com/repos/pandas-dev/pandas/pulls/41417
2021-05-11T10:43:19Z
2021-05-12T01:07:02Z
2021-05-12T01:07:02Z
2021-05-12T08:39:43Z
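The `_get_group_names` helper introduced above maps a compiled regex's groups to column labels: named groups keep their name, unnamed ones fall back to their 0-based position. A standalone sketch of that logic (the function name mirrors the helper; this is a re-implementation, not an import of the private pandas function):

```python
import re

def get_group_names(regex):
    # regex.groupindex maps name -> 1-based group number; invert it so we
    # can look up each group's name, defaulting to the 0-based position.
    names = {v: k for k, v in regex.groupindex.items()}
    return [names.get(1 + i, i) for i in range(regex.groups)]

# Mixed named/unnamed groups, as in the "only one of two groups has a
# name" test case above:
pat = re.compile(r"([AB])?(?P<number>[123])")
print(get_group_names(pat))  # [0, 'number']
```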
CLN: fix numpy FutureWarning in tests
diff --git a/pandas/tests/arithmetic/test_object.py b/pandas/tests/arithmetic/test_object.py index c5e086d24ec0c..1961a2d9d89f8 100644 --- a/pandas/tests/arithmetic/test_object.py +++ b/pandas/tests/arithmetic/test_object.py @@ -311,7 +311,7 @@ def test_sub_object(self): index - "foo" with pytest.raises(TypeError, match=msg): - index - np.array([2, "foo"]) + index - np.array([2, "foo"], dtype=object) def test_rsub_object(self): # GH#19369 diff --git a/pandas/tests/dtypes/test_inference.py b/pandas/tests/dtypes/test_inference.py index 076cc155f3626..d16dda370c498 100644 --- a/pandas/tests/dtypes/test_inference.py +++ b/pandas/tests/dtypes/test_inference.py @@ -1042,7 +1042,7 @@ def test_infer_dtype_datetime64_with_na(self, na_value): np.array([np.datetime64("2011-01-01"), Timestamp("2011-01-02")]), np.array([Timestamp("2011-01-02"), np.datetime64("2011-01-01")]), np.array([np.nan, Timestamp("2011-01-02"), 1.1]), - np.array([np.nan, "2011-01-01", Timestamp("2011-01-02")]), + np.array([np.nan, "2011-01-01", Timestamp("2011-01-02")], dtype=object), np.array([np.datetime64("nat"), np.timedelta64(1, "D")], dtype=object), np.array([np.timedelta64(1, "D"), np.datetime64("nat")], dtype=object), ], diff --git a/pandas/tests/test_common.py b/pandas/tests/test_common.py index 0b2a4cfb94d18..93c95b3004876 100644 --- a/pandas/tests/test_common.py +++ b/pandas/tests/test_common.py @@ -163,6 +163,5 @@ def test_serializable(obj): class TestIsBoolIndexer: def test_non_bool_array_with_na(self): # in particular, this should not raise - arr = np.array(["A", "B", np.nan]) - + arr = np.array(["A", "B", np.nan], dtype=object) assert not com.is_bool_indexer(arr) diff --git a/pandas/tests/tools/test_to_datetime.py b/pandas/tests/tools/test_to_datetime.py index 3e0c12c6a22cc..eb38f2f95d2d5 100644 --- a/pandas/tests/tools/test_to_datetime.py +++ b/pandas/tests/tools/test_to_datetime.py @@ -1899,7 +1899,10 @@ def test_to_datetime_infer_datetime_format_inconsistent_format(self, cache): 
@pytest.mark.parametrize("cache", [True, False]) def test_to_datetime_infer_datetime_format_series_with_nans(self, cache): s = Series( - np.array(["01/01/2011 00:00:00", np.nan, "01/03/2011 00:00:00", np.nan]) + np.array( + ["01/01/2011 00:00:00", np.nan, "01/03/2011 00:00:00", np.nan], + dtype=object, + ) ) tm.assert_series_equal( to_datetime(s, infer_datetime_format=False, cache=cache), @@ -1916,7 +1919,8 @@ def test_to_datetime_infer_datetime_format_series_start_with_nans(self, cache): "01/01/2011 00:00:00", "01/02/2011 00:00:00", "01/03/2011 00:00:00", - ] + ], + dtype=object, ) ) diff --git a/pandas/tests/util/test_hashing.py b/pandas/tests/util/test_hashing.py index e373323dfb6e1..8ce24dc963dc5 100644 --- a/pandas/tests/util/test_hashing.py +++ b/pandas/tests/util/test_hashing.py @@ -90,7 +90,7 @@ def test_hash_array(series): @pytest.mark.parametrize( - "arr2", [np.array([3, 4, "All"]), np.array([3, 4, "All"], dtype=object)] + "arr2", [np.array([3, 4, "All"], dtype="U"), np.array([3, 4, "All"], dtype=object)] ) def test_hash_array_mixed(arr2): result1 = hash_array(np.array(["3", "4", "All"]))
https://api.github.com/repos/pandas-dev/pandas/pulls/41414
2021-05-11T02:32:54Z
2021-05-12T01:06:21Z
2021-05-12T01:06:21Z
2022-11-18T02:21:38Z
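The changes above all add `dtype=object` to mixed-type array constructions. The reason: without it, NumPy coerces the mixed inputs to a single string dtype (and newer NumPy versions emit a FutureWarning for some of these constructions). A quick illustration of the coercion being avoided:

```python
import numpy as np

# Without an explicit dtype, mixed int/str input is coerced to a unicode
# string dtype -- the int 2 silently becomes the string "2".
implicit = np.array([2, "foo"])
print(implicit.dtype)  # a '<U..' string dtype

# dtype=object keeps each element's original Python type.
explicit = np.array([2, "foo"], dtype=object)
print(explicit.dtype, type(explicit[0]))
```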
Bug in read_csv and read_excel not applying dtype to second col with dup cols
diff --git a/doc/source/whatsnew/v1.3.0.rst b/doc/source/whatsnew/v1.3.0.rst index 5adc8540e6864..d9f8bee3acdec 100644 --- a/doc/source/whatsnew/v1.3.0.rst +++ b/doc/source/whatsnew/v1.3.0.rst @@ -837,6 +837,7 @@ I/O - Bug in :func:`read_excel` raising ``AttributeError`` with ``MultiIndex`` header followed by two empty rows and no index, and bug affecting :func:`read_excel`, :func:`read_csv`, :func:`read_table`, :func:`read_fwf`, and :func:`read_clipboard` where one blank row after a ``MultiIndex`` header with no index would be dropped (:issue:`40442`) - Bug in :meth:`DataFrame.to_string` misplacing the truncation column when ``index=False`` (:issue:`40907`) - Bug in :func:`read_orc` always raising ``AttributeError`` (:issue:`40918`) +- Bug in :func:`read_csv` and :func:`read_excel` not respecting dtype for duplicated column name when ``mangle_dupe_cols`` is set to ``True`` (:issue:`35211`) - Bug in :func:`read_csv` and :func:`read_table` misinterpreting arguments when ``sys.setprofile`` had been previously called (:issue:`41069`) - Bug in the conversion from pyarrow to pandas (e.g. 
for reading Parquet) with nullable dtypes and a pyarrow array whose data buffer size is not a multiple of dtype size (:issue:`40896`) diff --git a/pandas/_libs/parsers.pyx b/pandas/_libs/parsers.pyx index 8d9f1773590b0..0878aff562c12 100644 --- a/pandas/_libs/parsers.pyx +++ b/pandas/_libs/parsers.pyx @@ -685,10 +685,17 @@ cdef class TextReader: count = counts.get(name, 0) if not self.has_mi_columns and self.mangle_dupe_cols: - while count > 0: - counts[name] = count + 1 - name = f'{name}.{count}' - count = counts.get(name, 0) + if count > 0: + while count > 0: + counts[name] = count + 1 + name = f'{name}.{count}' + count = counts.get(name, 0) + if ( + self.dtype is not None + and self.dtype.get(old_name) is not None + and self.dtype.get(name) is None + ): + self.dtype.update({name: self.dtype.get(old_name)}) if old_name == '': unnamed_cols.add(name) diff --git a/pandas/io/parsers/python_parser.py b/pandas/io/parsers/python_parser.py index 0055f3123f3c0..1d70b6e59c51b 100644 --- a/pandas/io/parsers/python_parser.py +++ b/pandas/io/parsers/python_parser.py @@ -421,12 +421,20 @@ def _infer_columns(self): counts: DefaultDict = defaultdict(int) for i, col in enumerate(this_columns): + old_col = col cur_count = counts[col] - while cur_count > 0: - counts[col] = cur_count + 1 - col = f"{col}.{cur_count}" - cur_count = counts[col] + if cur_count > 0: + while cur_count > 0: + counts[col] = cur_count + 1 + col = f"{col}.{cur_count}" + cur_count = counts[col] + if ( + self.dtype is not None + and self.dtype.get(old_col) is not None + and self.dtype.get(col) is None + ): + self.dtype.update({col: self.dtype.get(old_col)}) this_columns[i] = col counts[col] = cur_count + 1 diff --git a/pandas/tests/io/data/excel/df_mangle_dup_col_dtypes.ods b/pandas/tests/io/data/excel/df_mangle_dup_col_dtypes.ods new file mode 100644 index 0000000000000..66558c16319fc Binary files /dev/null and b/pandas/tests/io/data/excel/df_mangle_dup_col_dtypes.ods differ diff --git 
a/pandas/tests/io/data/excel/df_mangle_dup_col_dtypes.xls b/pandas/tests/io/data/excel/df_mangle_dup_col_dtypes.xls new file mode 100644 index 0000000000000..472ad75901286 Binary files /dev/null and b/pandas/tests/io/data/excel/df_mangle_dup_col_dtypes.xls differ diff --git a/pandas/tests/io/data/excel/df_mangle_dup_col_dtypes.xlsb b/pandas/tests/io/data/excel/df_mangle_dup_col_dtypes.xlsb new file mode 100755 index 0000000000000..5052102c6655d Binary files /dev/null and b/pandas/tests/io/data/excel/df_mangle_dup_col_dtypes.xlsb differ diff --git a/pandas/tests/io/data/excel/df_mangle_dup_col_dtypes.xlsm b/pandas/tests/io/data/excel/df_mangle_dup_col_dtypes.xlsm new file mode 100644 index 0000000000000..51edc7f94f9d8 Binary files /dev/null and b/pandas/tests/io/data/excel/df_mangle_dup_col_dtypes.xlsm differ diff --git a/pandas/tests/io/data/excel/df_mangle_dup_col_dtypes.xlsx b/pandas/tests/io/data/excel/df_mangle_dup_col_dtypes.xlsx new file mode 100644 index 0000000000000..ec4e49add4233 Binary files /dev/null and b/pandas/tests/io/data/excel/df_mangle_dup_col_dtypes.xlsx differ diff --git a/pandas/tests/io/excel/test_readers.py b/pandas/tests/io/excel/test_readers.py index 632de5f70f64a..a46cb70097bd8 100644 --- a/pandas/tests/io/excel/test_readers.py +++ b/pandas/tests/io/excel/test_readers.py @@ -555,6 +555,14 @@ def test_reader_dtype_str(self, read_ext, dtype, expected): actual = pd.read_excel(basename + read_ext, dtype=dtype) tm.assert_frame_equal(actual, expected) + @pytest.mark.parametrize("dtypes, exp_value", [({}, "1"), ({"a.1": "int64"}, 1)]) + def test_dtype_mangle_dup_cols(self, read_ext, dtypes, exp_value): + # GH#35211 + basename = "df_mangle_dup_col_dtypes" + result = pd.read_excel(basename + read_ext, dtype={"a": str, **dtypes}) + expected = DataFrame({"a": ["1"], "a.1": [exp_value]}) + tm.assert_frame_equal(result, expected) + def test_reader_spaces(self, read_ext): # see gh-32207 basename = "test_spaces" diff --git 
a/pandas/tests/io/parser/dtypes/test_dtypes_basic.py b/pandas/tests/io/parser/dtypes/test_dtypes_basic.py index e452159189d4a..59fd3de60e0bf 100644 --- a/pandas/tests/io/parser/dtypes/test_dtypes_basic.py +++ b/pandas/tests/io/parser/dtypes/test_dtypes_basic.py @@ -238,3 +238,13 @@ def test_true_values_cast_to_bool(all_parsers): ) expected["a"] = expected["a"].astype("boolean") tm.assert_frame_equal(result, expected) + + +@pytest.mark.parametrize("dtypes, exp_value", [({}, "1"), ({"a.1": "int64"}, 1)]) +def test_dtype_mangle_dup_cols(all_parsers, dtypes, exp_value): + # GH#35211 + parser = all_parsers + data = """a,a\n1,1""" + result = parser.read_csv(StringIO(data), dtype={"a": str, **dtypes}) + expected = DataFrame({"a": ["1"], "a.1": [exp_value]}) + tm.assert_frame_equal(result, expected)
- [x] closes #35211 - [x] tests added / passed - [x] Ensure all linting tests pass, see [here](https://pandas.pydata.org/pandas-docs/dev/development/contributing.html#code-standards) for how to run them - [x] whatsnew entry
https://api.github.com/repos/pandas-dev/pandas/pulls/41411
2021-05-10T22:03:52Z
2021-05-12T13:55:50Z
2021-05-12T13:55:50Z
2021-06-15T18:26:58Z
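The fix above makes the dtype mapping follow column mangling: when duplicate headers are renamed (`a`, `a.1`), the dtype given for `a` now carries over to `a.1` unless `a.1` is given its own entry. A sketch of the fixed behavior (requires a pandas release containing this fix, 1.3 or later):

```python
from io import StringIO
import pandas as pd

data = "a,a\n1,1"

# dtype for "a" now also applies to the mangled duplicate "a.1" ...
df1 = pd.read_csv(StringIO(data), dtype={"a": str})
# ... unless "a.1" has its own dtype, which takes precedence.
df2 = pd.read_csv(StringIO(data), dtype={"a": str, "a.1": "int64"})
print(df1.dtypes.to_dict(), df2.dtypes.to_dict())
```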
BUG: fix TypeError when looking up a str subclass on a DataFrame with…
diff --git a/doc/source/whatsnew/v1.3.0.rst b/doc/source/whatsnew/v1.3.0.rst index cf3dd1b0e3226..6b59349ce52b2 100644 --- a/doc/source/whatsnew/v1.3.0.rst +++ b/doc/source/whatsnew/v1.3.0.rst @@ -790,6 +790,7 @@ Indexing - Bug in :meth:`DataFrame.loc.__setitem__` when setting-with-expansion incorrectly raising when the index in the expanding axis contains duplicates (:issue:`40096`) - Bug in :meth:`DataFrame.loc` incorrectly matching non-boolean index elements (:issue:`20432`) - Bug in :meth:`Series.__delitem__` with ``ExtensionDtype`` incorrectly casting to ``ndarray`` (:issue:`40386`) +- Bug in :meth:`DataFrame.__setitem__` raising ``TypeError`` when using a str subclass as the column name with a :class:`DatetimeIndex` (:issue:`37366`) Missing ^^^^^^^ diff --git a/pandas/core/indexing.py b/pandas/core/indexing.py index 96aeda955df01..2f8919644486b 100644 --- a/pandas/core/indexing.py +++ b/pandas/core/indexing.py @@ -2289,7 +2289,7 @@ def convert_to_index_sliceable(obj: DataFrame, key): # slice here via partial string indexing if idx._supports_partial_string_indexing: try: - res = idx._get_string_slice(key) + res = idx._get_string_slice(str(key)) warnings.warn( "Indexing a DataFrame with a datetimelike index using a single " "string to slice the rows, like `frame[string]`, is deprecated " diff --git a/pandas/tests/indexing/test_datetime.py b/pandas/tests/indexing/test_datetime.py index 29a037c1d3b52..e46eed05caa86 100644 --- a/pandas/tests/indexing/test_datetime.py +++ b/pandas/tests/indexing/test_datetime.py @@ -152,3 +152,16 @@ def test_getitem_millisecond_resolution(self, frame_or_series): ], ) tm.assert_equal(result, expected) + + def test_str_subclass(self): + # GH 37366 + class mystring(str): + pass + + data = ["2020-10-22 01:21:00+00:00"] + index = pd.DatetimeIndex(data) + df = DataFrame({"a": [1]}, index=index) + df["b"] = 2 + df[mystring("c")] = 3 + expected = DataFrame({"a": [1], "b": [2], mystring("c"): [3]}, index=index) + tm.assert_equal(df, 
expected)
… DatetimeIndex (#37366) - [X] closes #xxxx - [X] tests added / passed - [X] Ensure all linting tests pass, see [here](https://pandas.pydata.org/pandas-docs/dev/development/contributing.html#code-standards) for how to run them - [X] whatsnew entry
https://api.github.com/repos/pandas-dev/pandas/pulls/41406
2021-05-10T04:52:06Z
2021-05-10T14:29:06Z
2021-05-10T14:29:06Z
2021-05-10T14:29:11Z
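The one-line fix above (`str(key)` before the partial-string-slice lookup) means a `str` subclass used as a column name no longer raises `TypeError` on a frame with a `DatetimeIndex`. A sketch of the now-working usage, mirroring the added test (requires a pandas release containing this fix):

```python
import pandas as pd

class MyString(str):
    # a str subclass, as in the regression test's `mystring`
    pass

idx = pd.DatetimeIndex(["2020-10-22 01:21:00+00:00"])
df = pd.DataFrame({"a": [1]}, index=idx)

# Previously this assignment hit _get_string_slice and raised TypeError;
# now the key is coerced with str() before the slice attempt.
df[MyString("c")] = 3
print(df)
```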
BUG: Rolling.__iter__ includes on index columns in the result
diff --git a/doc/source/whatsnew/v1.3.0.rst b/doc/source/whatsnew/v1.3.0.rst index 3a67e848024ac..2a0e9f44740bb 100644 --- a/doc/source/whatsnew/v1.3.0.rst +++ b/doc/source/whatsnew/v1.3.0.rst @@ -994,6 +994,7 @@ Groupby/resample/rolling - Bug in :meth:`DataFrameGroupBy.__getitem__` with non-unique columns incorrectly returning a malformed :class:`SeriesGroupBy` instead of :class:`DataFrameGroupBy` (:issue:`41427`) - Bug in :meth:`DataFrameGroupBy.transform` with non-unique columns incorrectly raising ``AttributeError`` (:issue:`41427`) - Bug in :meth:`Resampler.apply` with non-unique columns incorrectly dropping duplicated columns (:issue:`41445`) +- Bug in :meth:`DataFrame.rolling.__iter__` where ``on`` was not assigned to the index of the resulting objects (:issue:`40373`) - Bug in :meth:`DataFrameGroupBy.transform` and :meth:`DataFrameGroupBy.agg` with ``engine="numba"`` where ``*args`` were being cached with the user passed function (:issue:`41647`) Reshaping diff --git a/pandas/core/window/rolling.py b/pandas/core/window/rolling.py index 0ef0896df8d44..dfb74b38cd9cf 100644 --- a/pandas/core/window/rolling.py +++ b/pandas/core/window/rolling.py @@ -291,6 +291,7 @@ def __repr__(self) -> str: def __iter__(self): obj = self._create_data(self._selected_obj) + obj = obj.set_axis(self._on) indexer = self._get_window_indexer() start, end = indexer.get_window_bounds( diff --git a/pandas/tests/window/test_rolling.py b/pandas/tests/window/test_rolling.py index 4846e15da039f..7a3e1e002759d 100644 --- a/pandas/tests/window/test_rolling.py +++ b/pandas/tests/window/test_rolling.py @@ -742,7 +742,7 @@ def test_iter_rolling_dataframe(df, expected, window, min_periods): ], ) def test_iter_rolling_on_dataframe(expected, window): - # GH 11704 + # GH 11704, 40373 df = DataFrame( { "A": [1, 2, 3, 4, 5], @@ -751,7 +751,9 @@ def test_iter_rolling_on_dataframe(expected, window): } ) - expected = [DataFrame(values, index=index) for (values, index) in expected] + expected = [ + 
DataFrame(values, index=df.loc[index, "C"]) for (values, index) in expected + ] for (expected, actual) in zip(expected, df.rolling(window, on="C")): tm.assert_frame_equal(actual, expected)
- [x] closes #40373 - [x] tests added / passed - [x] Ensure all linting tests pass, see [here](https://pandas.pydata.org/pandas-docs/dev/development/contributing.html#code-standards) for how to run them - [x] whatsnew entry
https://api.github.com/repos/pandas-dev/pandas/pulls/41405
2021-05-10T03:49:50Z
2021-05-26T03:27:26Z
2021-05-26T03:27:26Z
2021-05-26T03:27:29Z
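The `set_axis(self._on)` line added above changes what iteration over a rolling window yields: each window frame is now indexed by the `on` column rather than the frame's original index. A sketch of the fixed behavior (requires a pandas release containing this fix, 1.3 or later):

```python
import pandas as pd

df = pd.DataFrame(
    {"A": [1, 2, 3], "C": pd.date_range("2016-01-01", periods=3)}
)

# Each yielded window is now indexed by the "on" column's values.
windows = list(df.rolling(2, on="C"))
print(windows[-1].index)
```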
REF: set _selection only in __init__
diff --git a/pandas/core/resample.py b/pandas/core/resample.py index aae6314968695..1ab2b90d6564a 100644 --- a/pandas/core/resample.py +++ b/pandas/core/resample.py @@ -139,6 +139,8 @@ def __init__( groupby: TimeGrouper, axis: int = 0, kind=None, + *, + selection=None, **kwargs, ): self.groupby = groupby @@ -152,6 +154,7 @@ def __init__( self.groupby._set_grouper(self._convert_obj(obj), sort=True) self.binner, self.grouper = self._get_binner() + self._selection = selection @final def _shallow_copy(self, obj, **kwargs): @@ -1080,13 +1083,16 @@ def _gotitem(self, key, ndim, subset=None): except IndexError: groupby = self._groupby - self = type(self)(subset, groupby=groupby, parent=self, **kwargs) - self._reset_cache() + selection = None if subset.ndim == 2 and ( - lib.is_scalar(key) and key in subset or lib.is_list_like(key) + (lib.is_scalar(key) and key in subset) or lib.is_list_like(key) ): - self._selection = key - return self + selection = key + + new_rs = type(self)( + subset, groupby=groupby, parent=self, selection=selection, **kwargs + ) + return new_rs class DatetimeIndexResampler(Resampler): diff --git a/pandas/core/window/ewm.py b/pandas/core/window/ewm.py index 08a65964f278e..4187c56079060 100644 --- a/pandas/core/window/ewm.py +++ b/pandas/core/window/ewm.py @@ -268,6 +268,8 @@ def __init__( ignore_na: bool = False, axis: Axis = 0, times: str | np.ndarray | FrameOrSeries | None = None, + *, + selection=None, ): super().__init__( obj=obj, @@ -277,6 +279,7 @@ def __init__( closed=None, method="single", axis=axis, + selection=selection, ) self.com = com self.span = span diff --git a/pandas/core/window/expanding.py b/pandas/core/window/expanding.py index ac1ebfd4b0825..cddb3ef56250d 100644 --- a/pandas/core/window/expanding.py +++ b/pandas/core/window/expanding.py @@ -102,9 +102,15 @@ def __init__( center=None, axis: Axis = 0, method: str = "single", + selection=None, ): super().__init__( - obj=obj, min_periods=min_periods, center=center, axis=axis, 
method=method + obj=obj, + min_periods=min_periods, + center=center, + axis=axis, + method=method, + selection=selection, ) def _get_window_indexer(self) -> BaseIndexer: diff --git a/pandas/core/window/rolling.py b/pandas/core/window/rolling.py index 490cdca8519e6..5e038635305f1 100644 --- a/pandas/core/window/rolling.py +++ b/pandas/core/window/rolling.py @@ -123,6 +123,8 @@ def __init__( on: str | Index | None = None, closed: str | None = None, method: str = "single", + *, + selection=None, ): self.obj = obj self.on = on @@ -150,6 +152,8 @@ def __init__( f"invalid on specified as {self.on}, " "must be a column (of DataFrame), an Index or None" ) + + self._selection = selection self.validate() @property @@ -242,16 +246,22 @@ def _gotitem(self, key, ndim, subset=None): # create a new object to prevent aliasing if subset is None: subset = self.obj - # TODO: Remove once win_type deprecation is enforced + + # we need to make a shallow copy of ourselves + # with the same groupby with warnings.catch_warnings(): + # TODO: Remove once win_type deprecation is enforced warnings.filterwarnings("ignore", "win_type", FutureWarning) - self = type(self)( - subset, **{attr: getattr(self, attr) for attr in self._attributes} - ) - if subset.ndim == 2: - if is_scalar(key) and key in subset or is_list_like(key): - self._selection = key - return self + kwargs = {attr: getattr(self, attr) for attr in self._attributes} + + selection = None + if subset.ndim == 2 and ( + (is_scalar(key) and key in subset) or is_list_like(key) + ): + selection = key + + new_win = type(self)(subset, selection=selection, **kwargs) + return new_win def __getattr__(self, attr: str): if attr in self._internal_names_set:
Trying to make these less stateful.
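The refactor above replaces post-construction mutation of `_selection` with a keyword-only constructor argument. A minimal sketch of that pattern (stand-in classes, not the real pandas `Resampler`/`BaseWindow` internals):

```python
# Sketch of the "less stateful" pattern this PR applies in _gotitem:
# instead of building an object and then patching its _selection attribute,
# the selection is passed to __init__ as a keyword-only argument, so every
# instance is fully initialized at construction time.


class Grouper:
    def __init__(self, obj, *, selection=None):
        self.obj = obj
        # _selection is set exactly once, here, never mutated afterwards
        self._selection = selection

    def _gotitem(self, key):
        # column subsetting returns a *new* object; self is left untouched
        selection = key if key in self.obj else None
        return Grouper(self.obj, selection=selection)


g = Grouper({"a": [1, 2], "b": [3, 4]})
g2 = g._gotitem("a")
assert g._selection is None   # original object unchanged
assert g2._selection == "a"   # selection carried by the new instance
```

Because nothing is mutated after `__init__`, there is no need for the old `_reset_cache()` dance that the diff removes.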
https://api.github.com/repos/pandas-dev/pandas/pulls/41403
2021-05-09T19:56:50Z
2021-05-17T18:03:56Z
2021-05-17T18:03:56Z
2021-05-17T18:06:09Z
DOC: Adding Cylon under ecosystem/out of core
diff --git a/doc/source/ecosystem.rst b/doc/source/ecosystem.rst index d53d0556dca04..bc2325f15852c 100644 --- a/doc/source/ecosystem.rst +++ b/doc/source/ecosystem.rst @@ -405,6 +405,35 @@ Blaze provides a standard API for doing computations with various in-memory and on-disk backends: NumPy, pandas, SQLAlchemy, MongoDB, PyTables, PySpark. +`Cylon <https://cylondata.org/>`__ +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +Cylon is a fast, scalable, distributed memory parallel runtime with a pandas +like Python DataFrame API. ”Core Cylon” is implemented with C++ using Apache +Arrow format to represent the data in-memory. Cylon DataFrame API implements +most of the core operators of pandas such as merge, filter, join, concat, +group-by, drop_duplicates, etc. These operators are designed to work across +thousands of cores to scale applications. It can interoperate with pandas +DataFrame by reading data from pandas or converting data to pandas so users +can selectively scale parts of their pandas DataFrame applications. + +.. code:: python + + from pycylon import read_csv, DataFrame, CylonEnv + from pycylon.net import MPIConfig + + # Initialize Cylon distributed environment + config: MPIConfig = MPIConfig() + env: CylonEnv = CylonEnv(config=config, distributed=True) + + df1: DataFrame = read_csv('/tmp/csv1.csv') + df2: DataFrame = read_csv('/tmp/csv2.csv') + + # Using 1000s of cores across the cluster to compute the join + df3: Table = df1.join(other=df2, on=[0], algorithm="hash", env=env) + + print(df3) + `Dask <https://dask.readthedocs.io/en/latest/>`__ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
- [x] Ensure all linting tests pass, see [here](https://pandas.pydata.org/pandas-docs/dev/development/contributing.html#code-standards) for how to run them This PR adds a link to and a description of Cylon on the ecosystem page. Github: [https://github.com/cylondata/cylon](https://github.com/cylondata/cylon) Web: [https://cylondata.org/](https://cylondata.org/)
https://api.github.com/repos/pandas-dev/pandas/pulls/41402
2021-05-09T16:57:45Z
2021-05-12T14:07:30Z
2021-05-12T14:07:30Z
2021-05-12T14:07:35Z
REF: groupby remove _selection_name
diff --git a/pandas/core/apply.py b/pandas/core/apply.py index 9d3437fe08b24..d0c6a1a841edb 100644 --- a/pandas/core/apply.py +++ b/pandas/core/apply.py @@ -322,6 +322,7 @@ def agg_list_like(self) -> FrameOrSeriesUnion: # i.e. obj is Series or DataFrame selected_obj = obj elif obj._selected_obj.ndim == 1: + # For SeriesGroupBy this matches _obj_with_exclusions selected_obj = obj._selected_obj else: selected_obj = obj._obj_with_exclusions diff --git a/pandas/core/base.py b/pandas/core/base.py index e45c4bf514973..ff94c833d5268 100644 --- a/pandas/core/base.py +++ b/pandas/core/base.py @@ -180,16 +180,6 @@ class SelectionMixin(Generic[FrameOrSeries]): _internal_names = ["_cache", "__setstate__"] _internal_names_set = set(_internal_names) - @property - def _selection_name(self): - """ - Return a name for myself; - - This would ideally be called the 'name' property, - but we cannot conflict with the Series.name property which can be set. - """ - return self._selection - @final @property def _selection_list(self): diff --git a/pandas/core/groupby/generic.py b/pandas/core/groupby/generic.py index c5d9144893f48..4fff12d45af7d 100644 --- a/pandas/core/groupby/generic.py +++ b/pandas/core/groupby/generic.py @@ -11,7 +11,6 @@ abc, namedtuple, ) -import copy from functools import partial from textwrap import dedent from typing import ( @@ -169,18 +168,6 @@ class SeriesGroupBy(GroupBy[Series]): def _iterate_slices(self) -> Iterable[Series]: yield self._selected_obj - @property - def _selection_name(self): - """ - since we are a series, we by definition only have - a single name, but may be the result of a selection or - the name of our object - """ - if self._selection is None: - return self.obj.name - else: - return self._selection - _agg_examples_doc = dedent( """ Examples @@ -314,15 +301,9 @@ def _aggregate_multiple_funcs(self, arg) -> DataFrame: results: dict[base.OutputKey, FrameOrSeriesUnion] = {} for idx, (name, func) in enumerate(arg): - obj = self - # reset the cache 
so that we - # only include the named selection - if name in self._selected_obj: - obj = copy.copy(obj) - obj._reset_cache() - obj._selection = name - results[base.OutputKey(label=name, position=idx)] = obj.aggregate(func) + key = base.OutputKey(label=name, position=idx) + results[key] = self.aggregate(func) if any(isinstance(x, DataFrame) for x in results.values()): from pandas import concat @@ -464,7 +445,7 @@ def _wrap_applied_output( # GH #6265 return self.obj._constructor( [], - name=self._selection_name, + name=self.obj.name, index=self.grouper.result_index, dtype=data.dtype, ) @@ -485,14 +466,14 @@ def _get_index() -> Index: # if self.observed is False, # keep all-NaN rows created while re-indexing res_ser = res_df.stack(dropna=self.observed) - res_ser.name = self._selection_name + res_ser.name = self.obj.name return res_ser elif isinstance(values[0], (Series, DataFrame)): return self._concat_objects(keys, values, not_indexed_same=not_indexed_same) else: # GH #6265 #24880 result = self.obj._constructor( - data=values, index=_get_index(), name=self._selection_name + data=values, index=_get_index(), name=self.obj.name ) return self._reindex_output(result) @@ -550,7 +531,7 @@ def _transform_general(self, func: Callable, *args, **kwargs) -> Series: Transform with a callable func`. 
""" assert callable(func) - klass = type(self._selected_obj) + klass = type(self.obj) results = [] for name, group in self: @@ -572,8 +553,10 @@ def _transform_general(self, func: Callable, *args, **kwargs) -> Series: else: result = self.obj._constructor(dtype=np.float64) - result.name = self._selected_obj.name - return result + result.name = self.obj.name + # error: Incompatible return value type (got "Union[DataFrame, Series]", + # expected "Series") + return result # type: ignore[return-value] def _can_use_transform_fast(self, result) -> bool: return True @@ -693,7 +676,7 @@ def nunique(self, dropna: bool = True) -> Series: res, out = np.zeros(len(ri), dtype=out.dtype), res res[ids[idx]] = out - result = self.obj._constructor(res, index=ri, name=self._selection_name) + result = self.obj._constructor(res, index=ri, name=self.obj.name) return self._reindex_output(result, fill_value=0) @doc(Series.describe) @@ -799,7 +782,7 @@ def apply_series_value_counts(): levels = [ping.group_index for ping in self.grouper.groupings] + [ lev # type: ignore[list-item] ] - names = self.grouper.names + [self._selection_name] + names = self.grouper.names + [self.obj.name] if dropna: mask = codes[-1] != -1 @@ -855,7 +838,7 @@ def build_codes(lev_codes: np.ndarray) -> np.ndarray: if is_integer_dtype(out.dtype): out = ensure_int64(out) - return self.obj._constructor(out, index=mi, name=self._selection_name) + return self.obj._constructor(out, index=mi, name=self.obj.name) def count(self) -> Series: """ @@ -876,7 +859,7 @@ def count(self) -> Series: result = self.obj._constructor( out, index=self.grouper.result_index, - name=self._selection_name, + name=self.obj.name, dtype="int64", ) return self._reindex_output(result, fill_value=0) @@ -1048,7 +1031,7 @@ def aggregate(self, func=None, *args, engine=None, engine_kwargs=None, **kwargs) if isinstance(sobj, Series): # GH#35246 test_groupby_as_index_select_column_sum_empty_df - result.columns = [self._selected_obj.name] + result.columns = 
[sobj.name] else: # select everything except for the last level, which is the one # containing the name of the function(s), see GH#32040 @@ -1174,11 +1157,11 @@ def _wrap_applied_output(self, data, keys, values, not_indexed_same=False): # TODO: sure this is right? we used to do this # after raising AttributeError above return self.obj._constructor_sliced( - values, index=key_index, name=self._selection_name + values, index=key_index, name=self._selection ) elif not isinstance(first_not_none, Series): # values are not series or array-like but scalars - # self._selection_name not passed through to Series as the + # self._selection not passed through to Series as the # result should not take the name of original selection # of columns if self.as_index: diff --git a/pandas/core/groupby/groupby.py b/pandas/core/groupby/groupby.py index 2510fcaa84f1c..2091d2fc484e1 100644 --- a/pandas/core/groupby/groupby.py +++ b/pandas/core/groupby/groupby.py @@ -1052,9 +1052,10 @@ def reset_identity(values): values = reset_identity(values) result = concat(values, axis=self.axis) - if isinstance(result, Series) and self._selection_name is not None: + name = self.obj.name if self.obj.ndim == 1 else self._selection + if isinstance(result, Series) and name is not None: - result.name = self._selection_name + result.name = name return result
We use self._selected_obj and self._obj_with_exclusions in different places with no clear distinction as to when to use which. I _think_ in pretty much all cases we really want obj_with_exclusions, and am moving towards pinning that down. The only non-trivial thing here is getting rid of pinning _selection in [here](https://github.com/pandas-dev/pandas/compare/master...jbrockmendel:cln-selection_name?expand=1#diff-f1ec980d06b0b54c8263f663f767636c3f5921f04f8a7ee91dfc71a2100b05cdL326). We have no tests that reach that. I can cook up a case that gets there, but haven't found any cases where removing it actually changes anything.
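The diff collapses the old `_selection_name` property into the one-liner `self.obj.name if self.obj.ndim == 1 else self._selection`. A hedged sketch of that resolution rule, with stand-in classes rather than the real pandas groupby objects:

```python
# For a SeriesGroupBy (ndim == 1) the result name is just the Series' own
# name; for a DataFrameGroupBy it is whatever column selection is active.
# FakeSeries / FakeFrame below are illustrative stand-ins only.


class FakeSeries:
    ndim = 1

    def __init__(self, name):
        self.name = name


class FakeFrame:
    ndim = 2


def result_name(obj, selection):
    # mirrors: name = self.obj.name if self.obj.ndim == 1 else self._selection
    return obj.name if obj.ndim == 1 else selection


assert result_name(FakeSeries("legs"), None) == "legs"
assert result_name(FakeFrame(), "b") == "b"
```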
https://api.github.com/repos/pandas-dev/pandas/pulls/41401
2021-05-09T16:33:47Z
2021-05-12T01:11:45Z
2021-05-12T01:11:45Z
2021-05-12T01:37:26Z
DOC: Remove multiple blank lines in ipython directives
diff --git a/doc/source/user_guide/basics.rst b/doc/source/user_guide/basics.rst index 8d38c12252df4..70cfa3500f6b4 100644 --- a/doc/source/user_guide/basics.rst +++ b/doc/source/user_guide/basics.rst @@ -1184,11 +1184,9 @@ a single value and returning a single value. For example: df4 - def f(x): return len(str(x)) - df4["one"].map(f) df4.applymap(f) diff --git a/doc/source/user_guide/cookbook.rst b/doc/source/user_guide/cookbook.rst index 94a5f807d2262..e1aae0fd481b1 100644 --- a/doc/source/user_guide/cookbook.rst +++ b/doc/source/user_guide/cookbook.rst @@ -494,15 +494,12 @@ Unlike agg, apply's callable is passed a sub-DataFrame which gives you access to S = pd.Series([i / 100.0 for i in range(1, 11)]) - def cum_ret(x, y): return x * (1 + y) - def red(x): return functools.reduce(cum_ret, x, 1.0) - S.expanding().apply(red, raw=True) @@ -514,12 +511,10 @@ Unlike agg, apply's callable is passed a sub-DataFrame which gives you access to df = pd.DataFrame({"A": [1, 1, 2, 2], "B": [1, -1, 1, 2]}) gb = df.groupby("A") - def replace(g): mask = g < 0 return g.where(mask, g[~mask].mean()) - gb.transform(replace) `Sort groups by aggregated data @@ -551,13 +546,11 @@ Unlike agg, apply's callable is passed a sub-DataFrame which gives you access to rng = pd.date_range(start="2014-10-07", periods=10, freq="2min") ts = pd.Series(data=list(range(10)), index=rng) - def MyCust(x): if len(x) > 2: return x[1] * 1.234 return pd.NaT - mhc = {"Mean": np.mean, "Max": np.max, "Custom": MyCust} ts.resample("5min").apply(mhc) ts @@ -803,11 +796,9 @@ Apply index=["I", "II", "III"], ) - def SeriesFromSubList(aList): return pd.Series(aList) - df_orgz = pd.concat( {ind: row.apply(SeriesFromSubList) for ind, row in df.iterrows()} ) @@ -827,12 +818,10 @@ Rolling Apply to multiple columns where function calculates a Series before a Sc ) df - def gm(df, const): v = ((((df["A"] + df["B"]) + 1).cumprod()) - 1) * const return v.iloc[-1] - s = pd.Series( { df.index[i]: gm(df.iloc[i: min(i + 51, len(df) 
- 1)], 5) @@ -859,11 +848,9 @@ Rolling Apply to multiple columns where function returns a Scalar (Volume Weight ) df - def vwap(bars): return (bars.Close * bars.Volume).sum() / bars.Volume.sum() - window = 5 s = pd.concat( [ diff --git a/doc/source/user_guide/groupby.rst b/doc/source/user_guide/groupby.rst index 3f596388ca226..ef6d45fa0140b 100644 --- a/doc/source/user_guide/groupby.rst +++ b/doc/source/user_guide/groupby.rst @@ -1617,12 +1617,10 @@ column index name will be used as the name of the inserted column: } ) - def compute_metrics(x): result = {"b_sum": x["b"].sum(), "c_mean": x["c"].mean()} return pd.Series(result, name="metrics") - result = df.groupby("a").apply(compute_metrics) result diff --git a/doc/source/user_guide/io.rst b/doc/source/user_guide/io.rst index 5148bb87b0eb0..7f0cd613726dc 100644 --- a/doc/source/user_guide/io.rst +++ b/doc/source/user_guide/io.rst @@ -4648,11 +4648,9 @@ chunks. store.append("dfeq", dfeq, data_columns=["number"]) - def chunks(l, n): return [l[i: i + n] for i in range(0, len(l), n)] - evens = [2, 4, 6, 8, 10] coordinates = store.select_as_coordinates("dfeq", "number=evens") for c in chunks(coordinates, 2): diff --git a/doc/source/user_guide/merging.rst b/doc/source/user_guide/merging.rst index d8998a9a0a6e1..09b3d3a8c96df 100644 --- a/doc/source/user_guide/merging.rst +++ b/doc/source/user_guide/merging.rst @@ -1578,4 +1578,5 @@ to ``True``. You may also keep all the original values even if they are equal. .. 
ipython:: python + df.compare(df2, keep_shape=True, keep_equal=True) diff --git a/doc/source/user_guide/reshaping.rst b/doc/source/user_guide/reshaping.rst index 77cf43b2e2b19..7d1d03fe020a6 100644 --- a/doc/source/user_guide/reshaping.rst +++ b/doc/source/user_guide/reshaping.rst @@ -18,7 +18,6 @@ Reshaping by pivoting DataFrame objects import pandas._testing as tm - def unpivot(frame): N, K = frame.shape data = { @@ -29,7 +28,6 @@ Reshaping by pivoting DataFrame objects columns = ["date", "variable", "value"] return pd.DataFrame(data, columns=columns) - df = unpivot(tm.makeTimeDataFrame(3)) Data is often stored in so-called "stacked" or "record" format: diff --git a/doc/source/user_guide/sparse.rst b/doc/source/user_guide/sparse.rst index e4eea57c43dbb..982a5b0a70b55 100644 --- a/doc/source/user_guide/sparse.rst +++ b/doc/source/user_guide/sparse.rst @@ -325,7 +325,6 @@ In the example below, we transform the ``Series`` to a sparse representation of row_levels=["A", "B"], column_levels=["C", "D"], sort_labels=True ) - A A.todense() rows diff --git a/doc/source/user_guide/text.rst b/doc/source/user_guide/text.rst index 9b1c9b8d04270..db9485f3f2348 100644 --- a/doc/source/user_guide/text.rst +++ b/doc/source/user_guide/text.rst @@ -297,24 +297,19 @@ positional argument (a regex object) and return a string. 
# Reverse every lowercase alphabetic word pat = r"[a-z]+" - def repl(m): return m.group(0)[::-1] - pd.Series(["foo 123", "bar baz", np.nan], dtype="string").str.replace( pat, repl, regex=True ) - # Using regex groups pat = r"(?P<one>\w+) (?P<two>\w+) (?P<three>\w+)" - def repl(m): return m.group("two").swapcase() - pd.Series(["Foo Bar Baz", np.nan], dtype="string").str.replace( pat, repl, regex=True ) diff --git a/doc/source/user_guide/timeseries.rst b/doc/source/user_guide/timeseries.rst index 01ff62d984544..6f005f912fe37 100644 --- a/doc/source/user_guide/timeseries.rst +++ b/doc/source/user_guide/timeseries.rst @@ -1422,7 +1422,6 @@ An example of how holidays and holiday calendars are defined: MO, ) - class ExampleCalendar(AbstractHolidayCalendar): rules = [ USMemorialDay, @@ -1435,7 +1434,6 @@ An example of how holidays and holiday calendars are defined: ), ] - cal = ExampleCalendar() cal.holidays(datetime.datetime(2012, 1, 1), datetime.datetime(2012, 12, 31)) @@ -1707,13 +1705,11 @@ We can instead only resample those groups where we have points as follows: from functools import partial from pandas.tseries.frequencies import to_offset - def round(t, freq): # round a Timestamp to a specified freq freq = to_offset(freq) return pd.Timestamp((t.value // freq.delta.value) * freq.delta.value) - ts.groupby(partial(round, freq="3T")).sum() .. _timeseries.aggregate: @@ -2255,11 +2251,9 @@ To convert from an ``int64`` based YYYYMMDD representation. s = pd.Series([20121231, 20141130, 99991231]) s - def conv(x): return pd.Period(year=x // 10000, month=x // 100 % 100, day=x % 100, freq="D") - s.apply(conv) s.apply(conv)[2] diff --git a/doc/source/user_guide/window.rst b/doc/source/user_guide/window.rst index b692685b90234..c8687f808a802 100644 --- a/doc/source/user_guide/window.rst +++ b/doc/source/user_guide/window.rst @@ -212,7 +212,6 @@ from present information back to past information. This allows the rolling windo df - .. 
_window.custom_rolling_window: Custom window rolling @@ -294,13 +293,12 @@ conditions. In these cases it can be useful to perform forward-looking rolling w This :func:`BaseIndexer <pandas.api.indexers.BaseIndexer>` subclass implements a closed fixed-width forward-looking rolling window, and we can use it as follows: -.. ipython:: ipython +.. ipython:: python from pandas.api.indexers import FixedForwardWindowIndexer indexer = FixedForwardWindowIndexer(window_size=2) df.rolling(indexer, min_periods=1).sum() - .. _window.rolling_apply: Rolling apply @@ -319,7 +317,6 @@ the windows are cast as :class:`Series` objects (``raw=False``) or ndarray objec s = pd.Series(range(10)) s.rolling(window=4).apply(mad, raw=True) - .. _window.numba_engine: Numba engine
- [x] Ensure all linting tests pass, see [here](https://pandas.pydata.org/pandas-docs/dev/development/contributing.html#code-standards) for how to run them Related to #26788 This PR gets rid of all warnings of the form "UserWarning: Code input with no code at..." when building the docs. When there are multiple blank lines, e.g. ``` foo = 5 def bar(): pass ``` one of the blank lines is reported as having no code. That said, I'm not sure if this is the right approach, as the double blank lines are PEP8 compliant. Perhaps this should be reported upstream instead and the multiple blank lines left in the docs. cc @jorisvandenbossche @TomAugspurger
https://api.github.com/repos/pandas-dev/pandas/pulls/41400
2021-05-09T16:20:45Z
2021-05-10T14:37:40Z
2021-05-10T14:37:40Z
2021-05-10T20:05:35Z
DOC: Remove deprecated usage of Index.__xor__
diff --git a/pandas/core/indexes/base.py b/pandas/core/indexes/base.py index 84f1245299d53..cc044e52d0be1 100644 --- a/pandas/core/indexes/base.py +++ b/pandas/core/indexes/base.py @@ -3239,11 +3239,6 @@ def symmetric_difference(self, other, result_name=None, sort=None): >>> idx2 = pd.Index([2, 3, 4, 5]) >>> idx1.symmetric_difference(idx2) Int64Index([1, 5], dtype='int64') - - You can also use the ``^`` operator: - - >>> idx1 ^ idx2 - Int64Index([1, 5], dtype='int64') """ self._validate_sort_keyword(sort) self._assert_can_do_setop(other)
- [x] Ensure all linting tests pass, see [here](https://pandas.pydata.org/pandas-docs/dev/development/contributing.html#code-standards) for how to run them ref #37374 cc @jbrockmendel
https://api.github.com/repos/pandas-dev/pandas/pulls/41398
2021-05-09T15:14:39Z
2021-05-10T23:19:58Z
2021-05-10T23:19:58Z
2021-05-10T23:54:04Z
DOC: specify regex=True in str.replace
diff --git a/pandas/core/strings/accessor.py b/pandas/core/strings/accessor.py index f8df05a7022d1..9f8c9fa2f0515 100644 --- a/pandas/core/strings/accessor.py +++ b/pandas/core/strings/accessor.py @@ -1231,7 +1231,7 @@ def replace(self, pat, repl, n=-1, case=None, flags=0, regex=None): Regex module flags, e.g. re.IGNORECASE. Cannot be set if `pat` is a compiled regex. regex : bool, default True - Determines if assumes the passed-in pattern is a regular expression: + Determines if the passed-in pattern is a regular expression: - If True, assumes the passed-in pattern is a regular expression. - If False, treats the pattern as a literal string @@ -1287,7 +1287,7 @@ def replace(self, pat, repl, n=-1, case=None, flags=0, regex=None): To get the idea: - >>> pd.Series(['foo', 'fuz', np.nan]).str.replace('f', repr) + >>> pd.Series(['foo', 'fuz', np.nan]).str.replace('f', repr, regex=True) 0 <re.Match object; span=(0, 1), match='f'>oo 1 <re.Match object; span=(0, 1), match='f'>uz 2 NaN @@ -1296,7 +1296,8 @@ def replace(self, pat, repl, n=-1, case=None, flags=0, regex=None): Reverse every lowercase alphabetic word: >>> repl = lambda m: m.group(0)[::-1] - >>> pd.Series(['foo 123', 'bar baz', np.nan]).str.replace(r'[a-z]+', repl) + >>> ser = pd.Series(['foo 123', 'bar baz', np.nan]) + >>> ser.str.replace(r'[a-z]+', repl, regex=True) 0 oof 123 1 rab zab 2 NaN @@ -1306,7 +1307,8 @@ def replace(self, pat, repl, n=-1, case=None, flags=0, regex=None): >>> pat = r"(?P<one>\w+) (?P<two>\w+) (?P<three>\w+)" >>> repl = lambda m: m.group('two').swapcase() - >>> pd.Series(['One Two Three', 'Foo Bar Baz']).str.replace(pat, repl) + >>> ser = pd.Series(['One Two Three', 'Foo Bar Baz']) + >>> ser.str.replace(pat, repl, regex=True) 0 tWO 1 bAR dtype: object @@ -1315,7 +1317,7 @@ def replace(self, pat, repl, n=-1, case=None, flags=0, regex=None): >>> import re >>> regex_pat = re.compile(r'FUZ', flags=re.IGNORECASE) - >>> pd.Series(['foo', 'fuz', np.nan]).str.replace(regex_pat, 'bar') + >>> 
pd.Series(['foo', 'fuz', np.nan]).str.replace(regex_pat, 'bar', regex=True) 0 foo 1 bar 2 NaN
- [x] Ensure all linting tests pass, see [here](https://pandas.pydata.org/pandas-docs/dev/development/contributing.html#code-standards) for how to run them ref #36695 When passing a callable repl, str.replace will raise if regex is False. So it seems to me that in the future, when the default is changed, we should be ignoring the regex argument completely (and update the documentation to that effect). This is slightly magical, but I think better than having a default that raises. If this is the correct way forward, then we don't need to warn when repl is callable. cc @dsaxton @jorisvandenbossche
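The reason a callable `repl` requires regex mode can be seen with the stdlib `re` module alone (this is an illustration, not pandas itself): `re.sub` hands the callable a match object, which only exists when the pattern is treated as a regular expression, mirroring why `Series.str.replace` raises for a callable `repl` with `regex=False`.

```python
import re

# Callable replacement receives a match object and returns the replacement
# string; this is what Series.str.replace delegates to when regex=True.
def swap_two(m):
    return m.group("two").swapcase()

pat = r"(?P<one>\w+) (?P<two>\w+) (?P<three>\w+)"

# The whole three-word match is replaced by the swapped middle group,
# matching the docstring example in the diff above.
out = re.sub(pat, swap_two, "One Two Three")
assert out == "tWO"
```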
https://api.github.com/repos/pandas-dev/pandas/pulls/41397
2021-05-09T15:00:55Z
2021-05-10T21:57:20Z
2021-05-10T21:57:20Z
2021-05-10T22:12:56Z
[ArrowStringArray] TST: parametrize str.partition tests
diff --git a/pandas/tests/strings/test_split_partition.py b/pandas/tests/strings/test_split_partition.py index 6df8fa805955d..f8804d6dd6266 100644 --- a/pandas/tests/strings/test_split_partition.py +++ b/pandas/tests/strings/test_split_partition.py @@ -335,90 +335,90 @@ def test_split_with_name(): tm.assert_index_equal(res, exp) -def test_partition_series(): +def test_partition_series(any_string_dtype): # https://github.com/pandas-dev/pandas/issues/23558 - values = Series(["a_b_c", "c_d_e", np.nan, "f_g_h", None]) + s = Series(["a_b_c", "c_d_e", np.nan, "f_g_h", None], dtype=any_string_dtype) - result = values.str.partition("_", expand=False) - exp = Series( + result = s.str.partition("_", expand=False) + expected = Series( [("a", "_", "b_c"), ("c", "_", "d_e"), np.nan, ("f", "_", "g_h"), None] ) - tm.assert_series_equal(result, exp) + tm.assert_series_equal(result, expected) - result = values.str.rpartition("_", expand=False) - exp = Series( + result = s.str.rpartition("_", expand=False) + expected = Series( [("a_b", "_", "c"), ("c_d", "_", "e"), np.nan, ("f_g", "_", "h"), None] ) - tm.assert_series_equal(result, exp) + tm.assert_series_equal(result, expected) # more than one char - values = Series(["a__b__c", "c__d__e", np.nan, "f__g__h", None]) - result = values.str.partition("__", expand=False) - exp = Series( + s = Series(["a__b__c", "c__d__e", np.nan, "f__g__h", None]) + result = s.str.partition("__", expand=False) + expected = Series( [ ("a", "__", "b__c"), ("c", "__", "d__e"), np.nan, ("f", "__", "g__h"), None, - ] + ], ) - tm.assert_series_equal(result, exp) + tm.assert_series_equal(result, expected) - result = values.str.rpartition("__", expand=False) - exp = Series( + result = s.str.rpartition("__", expand=False) + expected = Series( [ ("a__b", "__", "c"), ("c__d", "__", "e"), np.nan, ("f__g", "__", "h"), None, - ] + ], ) - tm.assert_series_equal(result, exp) + tm.assert_series_equal(result, expected) # None - values = Series(["a b c", "c d e", np.nan, 
"f g h", None]) - result = values.str.partition(expand=False) - exp = Series( + s = Series(["a b c", "c d e", np.nan, "f g h", None], dtype=any_string_dtype) + result = s.str.partition(expand=False) + expected = Series( [("a", " ", "b c"), ("c", " ", "d e"), np.nan, ("f", " ", "g h"), None] ) - tm.assert_series_equal(result, exp) + tm.assert_series_equal(result, expected) - result = values.str.rpartition(expand=False) - exp = Series( + result = s.str.rpartition(expand=False) + expected = Series( [("a b", " ", "c"), ("c d", " ", "e"), np.nan, ("f g", " ", "h"), None] ) - tm.assert_series_equal(result, exp) + tm.assert_series_equal(result, expected) # Not split - values = Series(["abc", "cde", np.nan, "fgh", None]) - result = values.str.partition("_", expand=False) - exp = Series([("abc", "", ""), ("cde", "", ""), np.nan, ("fgh", "", ""), None]) - tm.assert_series_equal(result, exp) + s = Series(["abc", "cde", np.nan, "fgh", None], dtype=any_string_dtype) + result = s.str.partition("_", expand=False) + expected = Series([("abc", "", ""), ("cde", "", ""), np.nan, ("fgh", "", ""), None]) + tm.assert_series_equal(result, expected) - result = values.str.rpartition("_", expand=False) - exp = Series([("", "", "abc"), ("", "", "cde"), np.nan, ("", "", "fgh"), None]) - tm.assert_series_equal(result, exp) + result = s.str.rpartition("_", expand=False) + expected = Series([("", "", "abc"), ("", "", "cde"), np.nan, ("", "", "fgh"), None]) + tm.assert_series_equal(result, expected) # unicode - values = Series(["a_b_c", "c_d_e", np.nan, "f_g_h"]) + s = Series(["a_b_c", "c_d_e", np.nan, "f_g_h"], dtype=any_string_dtype) - result = values.str.partition("_", expand=False) - exp = Series([("a", "_", "b_c"), ("c", "_", "d_e"), np.nan, ("f", "_", "g_h")]) - tm.assert_series_equal(result, exp) + result = s.str.partition("_", expand=False) + expected = Series([("a", "_", "b_c"), ("c", "_", "d_e"), np.nan, ("f", "_", "g_h")]) + tm.assert_series_equal(result, expected) - result = 
values.str.rpartition("_", expand=False) - exp = Series([("a_b", "_", "c"), ("c_d", "_", "e"), np.nan, ("f_g", "_", "h")]) - tm.assert_series_equal(result, exp) + result = s.str.rpartition("_", expand=False) + expected = Series([("a_b", "_", "c"), ("c_d", "_", "e"), np.nan, ("f_g", "_", "h")]) + tm.assert_series_equal(result, expected) # compare to standard lib - values = Series(["A_B_C", "B_C_D", "E_F_G", "EFGHEF"]) - result = values.str.partition("_", expand=False).tolist() - assert result == [v.partition("_") for v in values] - result = values.str.rpartition("_", expand=False).tolist() - assert result == [v.rpartition("_") for v in values] + s = Series(["A_B_C", "B_C_D", "E_F_G", "EFGHEF"], dtype=any_string_dtype) + result = s.str.partition("_", expand=False).tolist() + assert result == [v.partition("_") for v in s] + result = s.str.rpartition("_", expand=False).tolist() + assert result == [v.rpartition("_") for v in s] def test_partition_index(): @@ -475,88 +475,96 @@ def test_partition_index(): assert result.nlevels == 3 -def test_partition_to_dataframe(): +def test_partition_to_dataframe(any_string_dtype): # https://github.com/pandas-dev/pandas/issues/23558 - values = Series(["a_b_c", "c_d_e", np.nan, "f_g_h", None]) - result = values.str.partition("_") - exp = DataFrame( + s = Series(["a_b_c", "c_d_e", np.nan, "f_g_h", None], dtype=any_string_dtype) + result = s.str.partition("_") + expected = DataFrame( { 0: ["a", "c", np.nan, "f", None], 1: ["_", "_", np.nan, "_", None], 2: ["b_c", "d_e", np.nan, "g_h", None], - } + }, + dtype=any_string_dtype, ) - tm.assert_frame_equal(result, exp) + tm.assert_frame_equal(result, expected) - result = values.str.rpartition("_") - exp = DataFrame( + result = s.str.rpartition("_") + expected = DataFrame( { 0: ["a_b", "c_d", np.nan, "f_g", None], 1: ["_", "_", np.nan, "_", None], 2: ["c", "e", np.nan, "h", None], - } + }, + dtype=any_string_dtype, ) - tm.assert_frame_equal(result, exp) + tm.assert_frame_equal(result, 
expected) - values = Series(["a_b_c", "c_d_e", np.nan, "f_g_h", None]) - result = values.str.partition("_", expand=True) - exp = DataFrame( + s = Series(["a_b_c", "c_d_e", np.nan, "f_g_h", None], dtype=any_string_dtype) + result = s.str.partition("_", expand=True) + expected = DataFrame( { 0: ["a", "c", np.nan, "f", None], 1: ["_", "_", np.nan, "_", None], 2: ["b_c", "d_e", np.nan, "g_h", None], - } + }, + dtype=any_string_dtype, ) - tm.assert_frame_equal(result, exp) + tm.assert_frame_equal(result, expected) - result = values.str.rpartition("_", expand=True) - exp = DataFrame( + result = s.str.rpartition("_", expand=True) + expected = DataFrame( { 0: ["a_b", "c_d", np.nan, "f_g", None], 1: ["_", "_", np.nan, "_", None], 2: ["c", "e", np.nan, "h", None], - } + }, + dtype=any_string_dtype, ) - tm.assert_frame_equal(result, exp) + tm.assert_frame_equal(result, expected) -def test_partition_with_name(): +def test_partition_with_name(any_string_dtype): # GH 12617 - s = Series(["a,b", "c,d"], name="xxx") - res = s.str.partition(",") - exp = DataFrame({0: ["a", "c"], 1: [",", ","], 2: ["b", "d"]}) - tm.assert_frame_equal(res, exp) + s = Series(["a,b", "c,d"], name="xxx", dtype=any_string_dtype) + result = s.str.partition(",") + expected = DataFrame( + {0: ["a", "c"], 1: [",", ","], 2: ["b", "d"]}, dtype=any_string_dtype + ) + tm.assert_frame_equal(result, expected) # should preserve name - res = s.str.partition(",", expand=False) - exp = Series([("a", ",", "b"), ("c", ",", "d")], name="xxx") - tm.assert_series_equal(res, exp) + result = s.str.partition(",", expand=False) + expected = Series([("a", ",", "b"), ("c", ",", "d")], name="xxx") + tm.assert_series_equal(result, expected) + +def test_partition_index_with_name(): idx = Index(["a,b", "c,d"], name="xxx") - res = idx.str.partition(",") - exp = MultiIndex.from_tuples([("a", ",", "b"), ("c", ",", "d")]) - assert res.nlevels == 3 - tm.assert_index_equal(res, exp) + result = idx.str.partition(",") + expected = 
MultiIndex.from_tuples([("a", ",", "b"), ("c", ",", "d")]) + assert result.nlevels == 3 + tm.assert_index_equal(result, expected) # should preserve name - res = idx.str.partition(",", expand=False) - exp = Index(np.array([("a", ",", "b"), ("c", ",", "d")]), name="xxx") - assert res.nlevels == 1 - tm.assert_index_equal(res, exp) + result = idx.str.partition(",", expand=False) + expected = Index(np.array([("a", ",", "b"), ("c", ",", "d")]), name="xxx") + assert result.nlevels == 1 + tm.assert_index_equal(result, expected) -def test_partition_sep_kwarg(): +def test_partition_sep_kwarg(any_string_dtype): # GH 22676; depr kwarg "pat" in favor of "sep" - values = Series(["a_b_c", "c_d_e", np.nan, "f_g_h"]) + s = Series(["a_b_c", "c_d_e", np.nan, "f_g_h"], dtype=any_string_dtype) - expected = values.str.partition(sep="_") - result = values.str.partition("_") + expected = s.str.partition(sep="_") + result = s.str.partition("_") tm.assert_frame_equal(result, expected) - expected = values.str.rpartition(sep="_") - result = values.str.rpartition("_") + expected = s.str.rpartition(sep="_") + result = s.str.rpartition("_") tm.assert_frame_equal(result, expected)
https://api.github.com/repos/pandas-dev/pandas/pulls/41396
2021-05-09T14:48:14Z
2021-05-10T09:42:16Z
2021-05-10T09:42:16Z
2021-05-10T11:09:57Z
DOC: Remove deprecated use of level for agg functions
diff --git a/pandas/core/frame.py b/pandas/core/frame.py index 9a57d86d62fdc..368d39ceac78b 100644 --- a/pandas/core/frame.py +++ b/pandas/core/frame.py @@ -9605,15 +9605,6 @@ def count( 3 3 4 3 dtype: int64 - - Counts for one level of a `MultiIndex`: - - >>> df.set_index(["Person", "Single"]).count(level="Person") - Age - Person - John 2 - Lewis 1 - Myla 1 """ axis = self._get_axis_number(axis) if level is not None: diff --git a/pandas/core/generic.py b/pandas/core/generic.py index d225ac6e6881b..bf87ed74dde2f 100644 --- a/pandas/core/generic.py +++ b/pandas/core/generic.py @@ -11806,21 +11806,7 @@ def _doc_params(cls): Name: legs, dtype: int64 >>> s.{stat_func}() -{default_output} - -{verb} using level names, as well as indices. - ->>> s.{stat_func}(level='blooded') -blooded -warm {level_output_0} -cold {level_output_1} -Name: legs, dtype: int64 - ->>> s.{stat_func}(level=0) -blooded -warm {level_output_0} -cold {level_output_1} -Name: legs, dtype: int64""" +{default_output}""" _sum_examples = _shared_docs["stat_func_example"].format( stat_func="sum", verb="Sum", default_output=14, level_output_0=6, level_output_1=8
- [x] Ensure all linting tests pass, see [here](https://pandas.pydata.org/pandas-docs/dev/development/contributing.html#code-standards) for how to run them ref #40869, cc @phofl
https://api.github.com/repos/pandas-dev/pandas/pulls/41394
2021-05-09T14:13:46Z
2021-05-10T23:21:43Z
2021-05-10T23:21:42Z
2021-05-11T02:00:34Z
[ArrowStringArray] TST: parametrize str.extract tests
diff --git a/pandas/tests/strings/test_extract.py b/pandas/tests/strings/test_extract.py index c1564a5c256a1..09a40728500e9 100644 --- a/pandas/tests/strings/test_extract.py +++ b/pandas/tests/strings/test_extract.py @@ -13,30 +13,31 @@ ) -def test_extract_expand_None(): - values = Series(["fooBAD__barBAD", np.nan, "foo"]) +def test_extract_expand_kwarg_wrong_type_raises(any_string_dtype): + # TODO: should this raise TypeError + values = Series(["fooBAD__barBAD", np.nan, "foo"], dtype=any_string_dtype) with pytest.raises(ValueError, match="expand must be True or False"): values.str.extract(".*(BAD[_]+).*(BAD)", expand=None) -def test_extract_expand_unspecified(): - values = Series(["fooBAD__barBAD", np.nan, "foo"]) - result_unspecified = values.str.extract(".*(BAD[_]+).*") - assert isinstance(result_unspecified, DataFrame) - result_true = values.str.extract(".*(BAD[_]+).*", expand=True) - tm.assert_frame_equal(result_unspecified, result_true) +def test_extract_expand_kwarg(any_string_dtype): + s = Series(["fooBAD__barBAD", np.nan, "foo"], dtype=any_string_dtype) + expected = DataFrame(["BAD__", np.nan, np.nan], dtype=any_string_dtype) + result = s.str.extract(".*(BAD[_]+).*") + tm.assert_frame_equal(result, expected) -def test_extract_expand_False(): - # Contains tests like those in test_match and some others. 
- values = Series(["fooBAD__barBAD", np.nan, "foo"]) - er = [np.nan, np.nan] # empty row + result = s.str.extract(".*(BAD[_]+).*", expand=True) + tm.assert_frame_equal(result, expected) + + expected = DataFrame( + [["BAD__", "BAD"], [np.nan, np.nan], [np.nan, np.nan]], dtype=any_string_dtype + ) + result = s.str.extract(".*(BAD[_]+).*(BAD)", expand=False) + tm.assert_frame_equal(result, expected) - result = values.str.extract(".*(BAD[_]+).*(BAD)", expand=False) - exp = DataFrame([["BAD__", "BAD"], er, er]) - tm.assert_frame_equal(result, exp) - # mixed +def test_extract_expand_False_mixed_object(): mixed = Series( [ "aBAD_BAD", @@ -51,163 +52,175 @@ def test_extract_expand_False(): ] ) - rs = Series(mixed).str.extract(".*(BAD[_]+).*(BAD)", expand=False) - exp = DataFrame([["BAD_", "BAD"], er, ["BAD_", "BAD"], er, er, er, er, er, er]) - tm.assert_frame_equal(rs, exp) - - # unicode - values = Series(["fooBAD__barBAD", np.nan, "foo"]) + result = Series(mixed).str.extract(".*(BAD[_]+).*(BAD)", expand=False) + er = [np.nan, np.nan] # empty row + expected = DataFrame([["BAD_", "BAD"], er, ["BAD_", "BAD"], er, er, er, er, er, er]) + tm.assert_frame_equal(result, expected) - result = values.str.extract(".*(BAD[_]+).*(BAD)", expand=False) - exp = DataFrame([["BAD__", "BAD"], er, er]) - tm.assert_frame_equal(result, exp) +def test_extract_expand_index_raises(): # GH9980 # Index only works with one regex group since # multi-group would expand to a frame idx = Index(["A1", "A2", "A3", "A4", "B5"]) - with pytest.raises(ValueError, match="supported"): + msg = "only one regex group is supported with Index" + with pytest.raises(ValueError, match=msg): idx.str.extract("([AB])([123])", expand=False) - # these should work for both Series and Index - for klass in [Series, Index]: - # no groups - s_or_idx = klass(["A1", "B2", "C3"]) - msg = "pattern contains no capture groups" - with pytest.raises(ValueError, match=msg): - s_or_idx.str.extract("[ABC][123]", expand=False) - - # only 
non-capturing groups - with pytest.raises(ValueError, match=msg): - s_or_idx.str.extract("(?:[AB]).*", expand=False) - - # single group renames series/index properly - s_or_idx = klass(["A1", "A2"]) - result = s_or_idx.str.extract(r"(?P<uno>A)\d", expand=False) - assert result.name == "uno" - - exp = klass(["A", "A"], name="uno") - if klass == Series: - tm.assert_series_equal(result, exp) - else: - tm.assert_index_equal(result, exp) - - s = Series(["A1", "B2", "C3"]) + +def test_extract_expand_no_capture_groups_raises(index_or_series, any_string_dtype): + s_or_idx = index_or_series(["A1", "B2", "C3"], dtype=any_string_dtype) + msg = "pattern contains no capture groups" + + # no groups + with pytest.raises(ValueError, match=msg): + s_or_idx.str.extract("[ABC][123]", expand=False) + + # only non-capturing groups + with pytest.raises(ValueError, match=msg): + s_or_idx.str.extract("(?:[AB]).*", expand=False) + + +def test_extract_expand_single_capture_group(index_or_series, any_string_dtype): + # single group renames series/index properly + s_or_idx = index_or_series(["A1", "A2"], dtype=any_string_dtype) + result = s_or_idx.str.extract(r"(?P<uno>A)\d", expand=False) + + expected = index_or_series(["A", "A"], name="uno", dtype=any_string_dtype) + if index_or_series == Series: + tm.assert_series_equal(result, expected) + else: + tm.assert_index_equal(result, expected) + + +def test_extract_expand_capture_groups(any_string_dtype): + s = Series(["A1", "B2", "C3"], dtype=any_string_dtype) # one group, no matches result = s.str.extract("(_)", expand=False) - exp = Series([np.nan, np.nan, np.nan], dtype=object) - tm.assert_series_equal(result, exp) + expected = Series([np.nan, np.nan, np.nan], dtype=any_string_dtype) + tm.assert_series_equal(result, expected) # two groups, no matches result = s.str.extract("(_)(_)", expand=False) - exp = DataFrame( - [[np.nan, np.nan], [np.nan, np.nan], [np.nan, np.nan]], dtype=object + expected = DataFrame( + [[np.nan, np.nan], [np.nan, 
np.nan], [np.nan, np.nan]], dtype=any_string_dtype ) - tm.assert_frame_equal(result, exp) + tm.assert_frame_equal(result, expected) # one group, some matches result = s.str.extract("([AB])[123]", expand=False) - exp = Series(["A", "B", np.nan]) - tm.assert_series_equal(result, exp) + expected = Series(["A", "B", np.nan], dtype=any_string_dtype) + tm.assert_series_equal(result, expected) # two groups, some matches result = s.str.extract("([AB])([123])", expand=False) - exp = DataFrame([["A", "1"], ["B", "2"], [np.nan, np.nan]]) - tm.assert_frame_equal(result, exp) + expected = DataFrame( + [["A", "1"], ["B", "2"], [np.nan, np.nan]], dtype=any_string_dtype + ) + tm.assert_frame_equal(result, expected) # one named group result = s.str.extract("(?P<letter>[AB])", expand=False) - exp = Series(["A", "B", np.nan], name="letter") - tm.assert_series_equal(result, exp) + expected = Series(["A", "B", np.nan], name="letter", dtype=any_string_dtype) + tm.assert_series_equal(result, expected) # two named groups result = s.str.extract("(?P<letter>[AB])(?P<number>[123])", expand=False) - exp = DataFrame( - [["A", "1"], ["B", "2"], [np.nan, np.nan]], columns=["letter", "number"] + expected = DataFrame( + [["A", "1"], ["B", "2"], [np.nan, np.nan]], + columns=["letter", "number"], + dtype=any_string_dtype, ) - tm.assert_frame_equal(result, exp) + tm.assert_frame_equal(result, expected) # mix named and unnamed groups result = s.str.extract("([AB])(?P<number>[123])", expand=False) - exp = DataFrame([["A", "1"], ["B", "2"], [np.nan, np.nan]], columns=[0, "number"]) - tm.assert_frame_equal(result, exp) + expected = DataFrame( + [["A", "1"], ["B", "2"], [np.nan, np.nan]], + columns=[0, "number"], + dtype=any_string_dtype, + ) + tm.assert_frame_equal(result, expected) # one normal group, one non-capturing group result = s.str.extract("([AB])(?:[123])", expand=False) - exp = Series(["A", "B", np.nan]) - tm.assert_series_equal(result, exp) + expected = Series(["A", "B", np.nan], 
dtype=any_string_dtype) + tm.assert_series_equal(result, expected) # two normal groups, one non-capturing group - result = Series(["A11", "B22", "C33"]).str.extract( - "([AB])([123])(?:[123])", expand=False + s = Series(["A11", "B22", "C33"], dtype=any_string_dtype) + result = s.str.extract("([AB])([123])(?:[123])", expand=False) + expected = DataFrame( + [["A", "1"], ["B", "2"], [np.nan, np.nan]], dtype=any_string_dtype ) - exp = DataFrame([["A", "1"], ["B", "2"], [np.nan, np.nan]]) - tm.assert_frame_equal(result, exp) + tm.assert_frame_equal(result, expected) # one optional group followed by one normal group - result = Series(["A1", "B2", "3"]).str.extract( - "(?P<letter>[AB])?(?P<number>[123])", expand=False - ) - exp = DataFrame( - [["A", "1"], ["B", "2"], [np.nan, "3"]], columns=["letter", "number"] + s = Series(["A1", "B2", "3"], dtype=any_string_dtype) + result = s.str.extract("(?P<letter>[AB])?(?P<number>[123])", expand=False) + expected = DataFrame( + [["A", "1"], ["B", "2"], [np.nan, "3"]], + columns=["letter", "number"], + dtype=any_string_dtype, ) - tm.assert_frame_equal(result, exp) + tm.assert_frame_equal(result, expected) # one normal group followed by one optional group - result = Series(["A1", "B2", "C"]).str.extract( - "(?P<letter>[ABC])(?P<number>[123])?", expand=False - ) - exp = DataFrame( - [["A", "1"], ["B", "2"], ["C", np.nan]], columns=["letter", "number"] + s = Series(["A1", "B2", "C"], dtype=any_string_dtype) + result = s.str.extract("(?P<letter>[ABC])(?P<number>[123])?", expand=False) + expected = DataFrame( + [["A", "1"], ["B", "2"], ["C", np.nan]], + columns=["letter", "number"], + dtype=any_string_dtype, ) - tm.assert_frame_equal(result, exp) + tm.assert_frame_equal(result, expected) - # GH6348 + +def test_extract_expand_capture_groups_index(index, any_string_dtype): + # https://github.com/pandas-dev/pandas/issues/6348 # not passing index to the extractor - def check_index(index): - data = ["A1", "B2", "C"] - index = index[: 
len(data)] - s = Series(data, index=index) - result = s.str.extract(r"(\d)", expand=False) - exp = Series(["1", "2", np.nan], index=index) - tm.assert_series_equal(result, exp) - - result = Series(data, index=index).str.extract( - r"(?P<letter>\D)(?P<number>\d)?", expand=False - ) - e_list = [["A", "1"], ["B", "2"], ["C", np.nan]] - exp = DataFrame(e_list, columns=["letter", "number"], index=index) - tm.assert_frame_equal(result, exp) - - i_funs = [ - tm.makeStringIndex, - tm.makeUnicodeIndex, - tm.makeIntIndex, - tm.makeDateIndex, - tm.makePeriodIndex, - tm.makeRangeIndex, - ] - for index in i_funs: - check_index(index()) + data = ["A1", "B2", "C"] + + if len(index) < len(data): + pytest.skip("Index too short") + + index = index[: len(data)] + s = Series(data, index=index, dtype=any_string_dtype) + + result = s.str.extract(r"(\d)", expand=False) + expected = Series(["1", "2", np.nan], index=index, dtype=any_string_dtype) + tm.assert_series_equal(result, expected) + + result = s.str.extract(r"(?P<letter>\D)(?P<number>\d)?", expand=False) + expected = DataFrame( + [["A", "1"], ["B", "2"], ["C", np.nan]], + columns=["letter", "number"], + index=index, + dtype=any_string_dtype, + ) + tm.assert_frame_equal(result, expected) + - # single_series_name_is_preserved. - s = Series(["a3", "b3", "c2"], name="bob") - r = s.str.extract(r"(?P<sue>[a-z])", expand=False) - e = Series(["a", "b", "c"], name="sue") - tm.assert_series_equal(r, e) - assert r.name == e.name +def test_extract_single_series_name_is_preserved(any_string_dtype): + s = Series(["a3", "b3", "c2"], name="bob", dtype=any_string_dtype) + result = s.str.extract(r"(?P<sue>[a-z])", expand=False) + expected = Series(["a", "b", "c"], name="sue", dtype=any_string_dtype) + tm.assert_series_equal(result, expected) -def test_extract_expand_True(): +def test_extract_expand_True(any_string_dtype): # Contains tests like those in test_match and some others. 
- values = Series(["fooBAD__barBAD", np.nan, "foo"]) - er = [np.nan, np.nan] # empty row + s = Series(["fooBAD__barBAD", np.nan, "foo"], dtype=any_string_dtype) + + result = s.str.extract(".*(BAD[_]+).*(BAD)", expand=True) + expected = DataFrame( + [["BAD__", "BAD"], [np.nan, np.nan], [np.nan, np.nan]], dtype=any_string_dtype + ) + tm.assert_frame_equal(result, expected) - result = values.str.extract(".*(BAD[_]+).*(BAD)", expand=True) - exp = DataFrame([["BAD__", "BAD"], er, er]) - tm.assert_frame_equal(result, exp) - # mixed +def test_extract_expand_True_mixed_object(): + er = [np.nan, np.nan] # empty row mixed = Series( [ "aBAD_BAD", @@ -222,140 +235,158 @@ def test_extract_expand_True(): ] ) - rs = Series(mixed).str.extract(".*(BAD[_]+).*(BAD)", expand=True) - exp = DataFrame([["BAD_", "BAD"], er, ["BAD_", "BAD"], er, er, er, er, er, er]) - tm.assert_frame_equal(rs, exp) + result = mixed.str.extract(".*(BAD[_]+).*(BAD)", expand=True) + expected = DataFrame([["BAD_", "BAD"], er, ["BAD_", "BAD"], er, er, er, er, er, er]) + tm.assert_frame_equal(result, expected) + +def test_extract_expand_True_single_capture_group_raises( + index_or_series, any_string_dtype +): # these should work for both Series and Index - for klass in [Series, Index]: - # no groups - s_or_idx = klass(["A1", "B2", "C3"]) - msg = "pattern contains no capture groups" - with pytest.raises(ValueError, match=msg): - s_or_idx.str.extract("[ABC][123]", expand=True) - - # only non-capturing groups - with pytest.raises(ValueError, match=msg): - s_or_idx.str.extract("(?:[AB]).*", expand=True) - - # single group renames series/index properly - s_or_idx = klass(["A1", "A2"]) - result_df = s_or_idx.str.extract(r"(?P<uno>A)\d", expand=True) - assert isinstance(result_df, DataFrame) - result_series = result_df["uno"] - tm.assert_series_equal(result_series, Series(["A", "A"], name="uno")) - - -def test_extract_series(): - # extract should give the same result whether or not the - # series has a name. 
- for series_name in None, "series_name": - s = Series(["A1", "B2", "C3"], name=series_name) - # one group, no matches - result = s.str.extract("(_)", expand=True) - exp = DataFrame([np.nan, np.nan, np.nan], dtype=object) - tm.assert_frame_equal(result, exp) - - # two groups, no matches - result = s.str.extract("(_)(_)", expand=True) - exp = DataFrame( - [[np.nan, np.nan], [np.nan, np.nan], [np.nan, np.nan]], dtype=object - ) - tm.assert_frame_equal(result, exp) - - # one group, some matches - result = s.str.extract("([AB])[123]", expand=True) - exp = DataFrame(["A", "B", np.nan]) - tm.assert_frame_equal(result, exp) - - # two groups, some matches - result = s.str.extract("([AB])([123])", expand=True) - exp = DataFrame([["A", "1"], ["B", "2"], [np.nan, np.nan]]) - tm.assert_frame_equal(result, exp) - - # one named group - result = s.str.extract("(?P<letter>[AB])", expand=True) - exp = DataFrame({"letter": ["A", "B", np.nan]}) - tm.assert_frame_equal(result, exp) - - # two named groups - result = s.str.extract("(?P<letter>[AB])(?P<number>[123])", expand=True) - e_list = [["A", "1"], ["B", "2"], [np.nan, np.nan]] - exp = DataFrame(e_list, columns=["letter", "number"]) - tm.assert_frame_equal(result, exp) - - # mix named and unnamed groups - result = s.str.extract("([AB])(?P<number>[123])", expand=True) - exp = DataFrame(e_list, columns=[0, "number"]) - tm.assert_frame_equal(result, exp) - - # one normal group, one non-capturing group - result = s.str.extract("([AB])(?:[123])", expand=True) - exp = DataFrame(["A", "B", np.nan]) - tm.assert_frame_equal(result, exp) - - -def test_extract_optional_groups(): + # no groups + s_or_idx = index_or_series(["A1", "B2", "C3"], dtype=any_string_dtype) + msg = "pattern contains no capture groups" + with pytest.raises(ValueError, match=msg): + s_or_idx.str.extract("[ABC][123]", expand=True) + + # only non-capturing groups + with pytest.raises(ValueError, match=msg): + s_or_idx.str.extract("(?:[AB]).*", expand=True) + + +def 
test_extract_expand_True_single_capture_group(index_or_series, any_string_dtype): + # single group renames series/index properly + s_or_idx = index_or_series(["A1", "A2"], dtype=any_string_dtype) + result = s_or_idx.str.extract(r"(?P<uno>A)\d", expand=True) + expected_dtype = "object" if index_or_series is Index else any_string_dtype + expected = DataFrame({"uno": ["A", "A"]}, dtype=expected_dtype) + tm.assert_frame_equal(result, expected) + + +@pytest.mark.parametrize("name", [None, "series_name"]) +def test_extract_series(name, any_string_dtype): + # extract should give the same result whether or not the series has a name. + s = Series(["A1", "B2", "C3"], name=name, dtype=any_string_dtype) + + # one group, no matches + result = s.str.extract("(_)", expand=True) + expected = DataFrame([np.nan, np.nan, np.nan], dtype=any_string_dtype) + tm.assert_frame_equal(result, expected) + + # two groups, no matches + result = s.str.extract("(_)(_)", expand=True) + expected = DataFrame( + [[np.nan, np.nan], [np.nan, np.nan], [np.nan, np.nan]], dtype=any_string_dtype + ) + tm.assert_frame_equal(result, expected) + + # one group, some matches + result = s.str.extract("([AB])[123]", expand=True) + expected = DataFrame(["A", "B", np.nan], dtype=any_string_dtype) + tm.assert_frame_equal(result, expected) + + # two groups, some matches + result = s.str.extract("([AB])([123])", expand=True) + expected = DataFrame( + [["A", "1"], ["B", "2"], [np.nan, np.nan]], dtype=any_string_dtype + ) + tm.assert_frame_equal(result, expected) + + # one named group + result = s.str.extract("(?P<letter>[AB])", expand=True) + expected = DataFrame({"letter": ["A", "B", np.nan]}, dtype=any_string_dtype) + tm.assert_frame_equal(result, expected) + + # two named groups + result = s.str.extract("(?P<letter>[AB])(?P<number>[123])", expand=True) + expected = DataFrame( + [["A", "1"], ["B", "2"], [np.nan, np.nan]], + columns=["letter", "number"], + dtype=any_string_dtype, + ) + tm.assert_frame_equal(result, 
expected) + + # mix named and unnamed groups + result = s.str.extract("([AB])(?P<number>[123])", expand=True) + expected = DataFrame( + [["A", "1"], ["B", "2"], [np.nan, np.nan]], + columns=[0, "number"], + dtype=any_string_dtype, + ) + tm.assert_frame_equal(result, expected) + + # one normal group, one non-capturing group + result = s.str.extract("([AB])(?:[123])", expand=True) + expected = DataFrame(["A", "B", np.nan], dtype=any_string_dtype) + tm.assert_frame_equal(result, expected) + + +def test_extract_optional_groups(any_string_dtype): # two normal groups, one non-capturing group - result = Series(["A11", "B22", "C33"]).str.extract( - "([AB])([123])(?:[123])", expand=True + s = Series(["A11", "B22", "C33"], dtype=any_string_dtype) + result = s.str.extract("([AB])([123])(?:[123])", expand=True) + expected = DataFrame( + [["A", "1"], ["B", "2"], [np.nan, np.nan]], dtype=any_string_dtype ) - exp = DataFrame([["A", "1"], ["B", "2"], [np.nan, np.nan]]) - tm.assert_frame_equal(result, exp) + tm.assert_frame_equal(result, expected) # one optional group followed by one normal group - result = Series(["A1", "B2", "3"]).str.extract( - "(?P<letter>[AB])?(?P<number>[123])", expand=True + s = Series(["A1", "B2", "3"], dtype=any_string_dtype) + result = s.str.extract("(?P<letter>[AB])?(?P<number>[123])", expand=True) + expected = DataFrame( + [["A", "1"], ["B", "2"], [np.nan, "3"]], + columns=["letter", "number"], + dtype=any_string_dtype, ) - e_list = [["A", "1"], ["B", "2"], [np.nan, "3"]] - exp = DataFrame(e_list, columns=["letter", "number"]) - tm.assert_frame_equal(result, exp) + tm.assert_frame_equal(result, expected) # one normal group followed by one optional group - result = Series(["A1", "B2", "C"]).str.extract( - "(?P<letter>[ABC])(?P<number>[123])?", expand=True + s = Series(["A1", "B2", "C"], dtype=any_string_dtype) + result = s.str.extract("(?P<letter>[ABC])(?P<number>[123])?", expand=True) + expected = DataFrame( + [["A", "1"], ["B", "2"], ["C", np.nan]], + 
columns=["letter", "number"], + dtype=any_string_dtype, ) - e_list = [["A", "1"], ["B", "2"], ["C", np.nan]] - exp = DataFrame(e_list, columns=["letter", "number"]) - tm.assert_frame_equal(result, exp) + tm.assert_frame_equal(result, expected) + +def test_extract_dataframe_capture_groups_index(index, any_string_dtype): # GH6348 # not passing index to the extractor - def check_index(index): - data = ["A1", "B2", "C"] - index = index[: len(data)] - result = Series(data, index=index).str.extract(r"(\d)", expand=True) - exp = DataFrame(["1", "2", np.nan], index=index) - tm.assert_frame_equal(result, exp) - - result = Series(data, index=index).str.extract( - r"(?P<letter>\D)(?P<number>\d)?", expand=True - ) - e_list = [["A", "1"], ["B", "2"], ["C", np.nan]] - exp = DataFrame(e_list, columns=["letter", "number"], index=index) - tm.assert_frame_equal(result, exp) - - i_funs = [ - tm.makeStringIndex, - tm.makeUnicodeIndex, - tm.makeIntIndex, - tm.makeDateIndex, - tm.makePeriodIndex, - tm.makeRangeIndex, - ] - for index in i_funs: - check_index(index()) + + data = ["A1", "B2", "C"] + + if len(index) < len(data): + pytest.skip("Index too short") + + index = index[: len(data)] + s = Series(data, index=index, dtype=any_string_dtype) + + result = s.str.extract(r"(\d)", expand=True) + expected = DataFrame(["1", "2", np.nan], index=index, dtype=any_string_dtype) + tm.assert_frame_equal(result, expected) + + result = s.str.extract(r"(?P<letter>\D)(?P<number>\d)?", expand=True) + expected = DataFrame( + [["A", "1"], ["B", "2"], ["C", np.nan]], + columns=["letter", "number"], + index=index, + dtype=any_string_dtype, + ) + tm.assert_frame_equal(result, expected) -def test_extract_single_group_returns_frame(): +def test_extract_single_group_returns_frame(any_string_dtype): # GH11386 extract should always return DataFrame, even when # there is only one group. Prior to v0.18.0, extract returned # Series when there was only one group in the regex. 
- s = Series(["a3", "b3", "c2"], name="series_name") - r = s.str.extract(r"(?P<letter>[a-z])", expand=True) - e = DataFrame({"letter": ["a", "b", "c"]}) - tm.assert_frame_equal(r, e) + s = Series(["a3", "b3", "c2"], name="series_name", dtype=any_string_dtype) + result = s.str.extract(r"(?P<letter>[a-z])", expand=True) + expected = DataFrame({"letter": ["a", "b", "c"]}, dtype=any_string_dtype) + tm.assert_frame_equal(result, expected) def test_extractall():
https://api.github.com/repos/pandas-dev/pandas/pulls/41393
2021-05-09T13:54:19Z
2021-05-12T01:12:50Z
2021-05-12T01:12:50Z
2021-05-12T08:03:31Z
[ArrowStringArray] TST: parametrize str.split tests
diff --git a/asv_bench/benchmarks/strings.py b/asv_bench/benchmarks/strings.py index 45a9053954569..79ea2a4fba284 100644 --- a/asv_bench/benchmarks/strings.py +++ b/asv_bench/benchmarks/strings.py @@ -230,16 +230,21 @@ def time_contains(self, dtype, regex): class Split: - params = [True, False] - param_names = ["expand"] + params = (["str", "string", "arrow_string"], [True, False]) + param_names = ["dtype", "expand"] + + def setup(self, dtype, expand): + from pandas.core.arrays.string_arrow import ArrowStringDtype # noqa: F401 - def setup(self, expand): - self.s = Series(tm.makeStringIndex(10 ** 5)).str.join("--") + try: + self.s = Series(tm.makeStringIndex(10 ** 5), dtype=dtype).str.join("--") + except ImportError: + raise NotImplementedError - def time_split(self, expand): + def time_split(self, dtype, expand): self.s.str.split("--", expand=expand) - def time_rsplit(self, expand): + def time_rsplit(self, dtype, expand): self.s.str.rsplit("--", expand=expand) diff --git a/pandas/tests/strings/test_split_partition.py b/pandas/tests/strings/test_split_partition.py index f8804d6dd6266..e59105eccc67c 100644 --- a/pandas/tests/strings/test_split_partition.py +++ b/pandas/tests/strings/test_split_partition.py @@ -13,22 +13,29 @@ ) -def test_split(): - values = Series(["a_b_c", "c_d_e", np.nan, "f_g_h"]) +def test_split(any_string_dtype): + values = Series(["a_b_c", "c_d_e", np.nan, "f_g_h"], dtype=any_string_dtype) result = values.str.split("_") exp = Series([["a", "b", "c"], ["c", "d", "e"], np.nan, ["f", "g", "h"]]) tm.assert_series_equal(result, exp) # more than one char - values = Series(["a__b__c", "c__d__e", np.nan, "f__g__h"]) + values = Series(["a__b__c", "c__d__e", np.nan, "f__g__h"], dtype=any_string_dtype) result = values.str.split("__") tm.assert_series_equal(result, exp) result = values.str.split("__", expand=False) tm.assert_series_equal(result, exp) - # mixed + # regex split + values = Series(["a,b_c", "c_d,e", np.nan, "f,g,h"], dtype=any_string_dtype) + 
result = values.str.split("[,_]") + exp = Series([["a", "b", "c"], ["c", "d", "e"], np.nan, ["f", "g", "h"]]) + tm.assert_series_equal(result, exp) + + +def test_split_object_mixed(): mixed = Series(["a_b_c", np.nan, "d_e_f", True, datetime.today(), None, 1, 2.0]) result = mixed.str.split("_") exp = Series( @@ -50,17 +57,10 @@ def test_split(): assert isinstance(result, Series) tm.assert_almost_equal(result, exp) - # regex split - values = Series(["a,b_c", "c_d,e", np.nan, "f,g,h"]) - result = values.str.split("[,_]") - exp = Series([["a", "b", "c"], ["c", "d", "e"], np.nan, ["f", "g", "h"]]) - tm.assert_series_equal(result, exp) - -@pytest.mark.parametrize("dtype", [object, "string"]) @pytest.mark.parametrize("method", ["split", "rsplit"]) -def test_split_n(dtype, method): - s = Series(["a b", pd.NA, "b c"], dtype=dtype) +def test_split_n(any_string_dtype, method): + s = Series(["a b", pd.NA, "b c"], dtype=any_string_dtype) expected = Series([["a", "b"], pd.NA, ["b", "c"]]) result = getattr(s.str, method)(" ", n=None) @@ -70,20 +70,34 @@ def test_split_n(dtype, method): tm.assert_series_equal(result, expected) -def test_rsplit(): - values = Series(["a_b_c", "c_d_e", np.nan, "f_g_h"]) +def test_rsplit(any_string_dtype): + values = Series(["a_b_c", "c_d_e", np.nan, "f_g_h"], dtype=any_string_dtype) result = values.str.rsplit("_") exp = Series([["a", "b", "c"], ["c", "d", "e"], np.nan, ["f", "g", "h"]]) tm.assert_series_equal(result, exp) # more than one char - values = Series(["a__b__c", "c__d__e", np.nan, "f__g__h"]) + values = Series(["a__b__c", "c__d__e", np.nan, "f__g__h"], dtype=any_string_dtype) result = values.str.rsplit("__") tm.assert_series_equal(result, exp) result = values.str.rsplit("__", expand=False) tm.assert_series_equal(result, exp) + # regex split is not supported by rsplit + values = Series(["a,b_c", "c_d,e", np.nan, "f,g,h"], dtype=any_string_dtype) + result = values.str.rsplit("[,_]") + exp = Series([["a,b_c"], ["c_d,e"], np.nan, ["f,g,h"]]) + 
tm.assert_series_equal(result, exp) + + # setting max number of splits, make sure it's from reverse + values = Series(["a_b_c", "c_d_e", np.nan, "f_g_h"], dtype=any_string_dtype) + result = values.str.rsplit("_", n=1) + exp = Series([["a_b", "c"], ["c_d", "e"], np.nan, ["f_g", "h"]]) + tm.assert_series_equal(result, exp) + + +def test_rsplit_object_mixed(): # mixed mixed = Series(["a_b_c", np.nan, "d_e_f", True, datetime.today(), None, 1, 2.0]) result = mixed.str.rsplit("_") @@ -106,27 +120,15 @@ def test_rsplit(): assert isinstance(result, Series) tm.assert_almost_equal(result, exp) - # regex split is not supported by rsplit - values = Series(["a,b_c", "c_d,e", np.nan, "f,g,h"]) - result = values.str.rsplit("[,_]") - exp = Series([["a,b_c"], ["c_d,e"], np.nan, ["f,g,h"]]) - tm.assert_series_equal(result, exp) - # setting max number of splits, make sure it's from reverse - values = Series(["a_b_c", "c_d_e", np.nan, "f_g_h"]) - result = values.str.rsplit("_", n=1) - exp = Series([["a_b", "c"], ["c_d", "e"], np.nan, ["f_g", "h"]]) - tm.assert_series_equal(result, exp) - - -def test_split_blank_string(): +def test_split_blank_string(any_string_dtype): # expand blank split GH 20067 - values = Series([""], name="test") + values = Series([""], name="test", dtype=any_string_dtype) result = values.str.split(expand=True) - exp = DataFrame([[]]) # NOTE: this is NOT an empty DataFrame + exp = DataFrame([[]], dtype=any_string_dtype) # NOTE: this is NOT an empty df tm.assert_frame_equal(result, exp) - values = Series(["a b c", "a b", "", " "], name="test") + values = Series(["a b c", "a b", "", " "], name="test", dtype=any_string_dtype) result = values.str.split(expand=True) exp = DataFrame( [ @@ -134,14 +136,15 @@ def test_split_blank_string(): ["a", "b", np.nan], [np.nan, np.nan, np.nan], [np.nan, np.nan, np.nan], - ] + ], + dtype=any_string_dtype, ) tm.assert_frame_equal(result, exp) -def test_split_noargs(): +def test_split_noargs(any_string_dtype): # #1859 - s = 
Series(["Wes McKinney", "Travis Oliphant"]) + s = Series(["Wes McKinney", "Travis Oliphant"], dtype=any_string_dtype) result = s.str.split() expected = ["Travis", "Oliphant"] assert result[1] == expected @@ -149,44 +152,64 @@ def test_split_noargs(): assert result[1] == expected -def test_split_maxsplit(): +@pytest.mark.parametrize( + "data, pat", + [ + (["bd asdf jfg", "kjasdflqw asdfnfk"], None), + (["bd asdf jfg", "kjasdflqw asdfnfk"], "asdf"), + (["bd_asdf_jfg", "kjasdflqw_asdfnfk"], "_"), + ], +) +def test_split_maxsplit(data, pat, any_string_dtype): # re.split 0, str.split -1 - s = Series(["bd asdf jfg", "kjasdflqw asdfnfk"]) - - result = s.str.split(n=-1) - xp = s.str.split() - tm.assert_series_equal(result, xp) + s = Series(data, dtype=any_string_dtype) - result = s.str.split(n=0) + result = s.str.split(pat=pat, n=-1) + xp = s.str.split(pat=pat) tm.assert_series_equal(result, xp) - xp = s.str.split("asdf") - result = s.str.split("asdf", n=0) + result = s.str.split(pat=pat, n=0) tm.assert_series_equal(result, xp) - result = s.str.split("asdf", n=-1) - tm.assert_series_equal(result, xp) - -def test_split_no_pat_with_nonzero_n(): - s = Series(["split once", "split once too!"]) - result = s.str.split(n=1) - expected = Series({0: ["split", "once"], 1: ["split", "once too!"]}) +@pytest.mark.parametrize( + "data, pat, expected", + [ + ( + ["split once", "split once too!"], + None, + Series({0: ["split", "once"], 1: ["split", "once too!"]}), + ), + ( + ["split_once", "split_once_too!"], + "_", + Series({0: ["split", "once"], 1: ["split", "once_too!"]}), + ), + ], +) +def test_split_no_pat_with_nonzero_n(data, pat, expected, any_string_dtype): + s = Series(data, dtype=any_string_dtype) + result = s.str.split(pat=pat, n=1) tm.assert_series_equal(expected, result, check_index_type=False) -def test_split_to_dataframe(): - s = Series(["nosplit", "alsonosplit"]) +def test_split_to_dataframe(any_string_dtype): + s = Series(["nosplit", "alsonosplit"], 
dtype=any_string_dtype) result = s.str.split("_", expand=True) - exp = DataFrame({0: Series(["nosplit", "alsonosplit"])}) + exp = DataFrame({0: Series(["nosplit", "alsonosplit"], dtype=any_string_dtype)}) tm.assert_frame_equal(result, exp) - s = Series(["some_equal_splits", "with_no_nans"]) + s = Series(["some_equal_splits", "with_no_nans"], dtype=any_string_dtype) result = s.str.split("_", expand=True) - exp = DataFrame({0: ["some", "with"], 1: ["equal", "no"], 2: ["splits", "nans"]}) + exp = DataFrame( + {0: ["some", "with"], 1: ["equal", "no"], 2: ["splits", "nans"]}, + dtype=any_string_dtype, + ) tm.assert_frame_equal(result, exp) - s = Series(["some_unequal_splits", "one_of_these_things_is_not"]) + s = Series( + ["some_unequal_splits", "one_of_these_things_is_not"], dtype=any_string_dtype + ) result = s.str.split("_", expand=True) exp = DataFrame( { @@ -196,14 +219,19 @@ def test_split_to_dataframe(): 3: [np.nan, "things"], 4: [np.nan, "is"], 5: [np.nan, "not"], - } + }, + dtype=any_string_dtype, ) tm.assert_frame_equal(result, exp) - s = Series(["some_splits", "with_index"], index=["preserve", "me"]) + s = Series( + ["some_splits", "with_index"], index=["preserve", "me"], dtype=any_string_dtype + ) result = s.str.split("_", expand=True) exp = DataFrame( - {0: ["some", "with"], 1: ["splits", "index"]}, index=["preserve", "me"] + {0: ["some", "with"], 1: ["splits", "index"]}, + index=["preserve", "me"], + dtype=any_string_dtype, ) tm.assert_frame_equal(result, exp) @@ -250,29 +278,41 @@ def test_split_to_multiindex_expand(): idx.str.split("_", expand="not_a_boolean") -def test_rsplit_to_dataframe_expand(): - s = Series(["nosplit", "alsonosplit"]) +def test_rsplit_to_dataframe_expand(any_string_dtype): + s = Series(["nosplit", "alsonosplit"], dtype=any_string_dtype) result = s.str.rsplit("_", expand=True) - exp = DataFrame({0: Series(["nosplit", "alsonosplit"])}) + exp = DataFrame({0: Series(["nosplit", "alsonosplit"])}, dtype=any_string_dtype) 
tm.assert_frame_equal(result, exp) - s = Series(["some_equal_splits", "with_no_nans"]) + s = Series(["some_equal_splits", "with_no_nans"], dtype=any_string_dtype) result = s.str.rsplit("_", expand=True) - exp = DataFrame({0: ["some", "with"], 1: ["equal", "no"], 2: ["splits", "nans"]}) + exp = DataFrame( + {0: ["some", "with"], 1: ["equal", "no"], 2: ["splits", "nans"]}, + dtype=any_string_dtype, + ) tm.assert_frame_equal(result, exp) result = s.str.rsplit("_", expand=True, n=2) - exp = DataFrame({0: ["some", "with"], 1: ["equal", "no"], 2: ["splits", "nans"]}) + exp = DataFrame( + {0: ["some", "with"], 1: ["equal", "no"], 2: ["splits", "nans"]}, + dtype=any_string_dtype, + ) tm.assert_frame_equal(result, exp) result = s.str.rsplit("_", expand=True, n=1) - exp = DataFrame({0: ["some_equal", "with_no"], 1: ["splits", "nans"]}) + exp = DataFrame( + {0: ["some_equal", "with_no"], 1: ["splits", "nans"]}, dtype=any_string_dtype + ) tm.assert_frame_equal(result, exp) - s = Series(["some_splits", "with_index"], index=["preserve", "me"]) + s = Series( + ["some_splits", "with_index"], index=["preserve", "me"], dtype=any_string_dtype + ) result = s.str.rsplit("_", expand=True) exp = DataFrame( - {0: ["some", "with"], 1: ["splits", "index"]}, index=["preserve", "me"] + {0: ["some", "with"], 1: ["splits", "index"]}, + index=["preserve", "me"], + dtype=any_string_dtype, ) tm.assert_frame_equal(result, exp) @@ -297,30 +337,35 @@ def test_rsplit_to_multiindex_expand(): assert result.nlevels == 2 -def test_split_nan_expand(): +def test_split_nan_expand(any_string_dtype): # gh-18450 - s = Series(["foo,bar,baz", np.nan]) + s = Series(["foo,bar,baz", np.nan], dtype=any_string_dtype) result = s.str.split(",", expand=True) - exp = DataFrame([["foo", "bar", "baz"], [np.nan, np.nan, np.nan]]) + exp = DataFrame( + [["foo", "bar", "baz"], [np.nan, np.nan, np.nan]], dtype=any_string_dtype + ) tm.assert_frame_equal(result, exp) - # check that these are actually np.nan and not None + # check 
that these are actually np.nan/pd.NA and not None # TODO see GH 18463 # tm.assert_frame_equal does not differentiate - assert all(np.isnan(x) for x in result.iloc[1]) + if any_string_dtype == "object": + assert all(np.isnan(x) for x in result.iloc[1]) + else: + assert all(x is pd.NA for x in result.iloc[1]) -def test_split_with_name(): +def test_split_with_name(any_string_dtype): # GH 12617 # should preserve name - s = Series(["a,b", "c,d"], name="xxx") + s = Series(["a,b", "c,d"], name="xxx", dtype=any_string_dtype) res = s.str.split(",") exp = Series([["a", "b"], ["c", "d"]], name="xxx") tm.assert_series_equal(res, exp) res = s.str.split(",", expand=True) - exp = DataFrame([["a", "b"], ["c", "d"]]) + exp = DataFrame([["a", "b"], ["c", "d"]], dtype=any_string_dtype) tm.assert_frame_equal(res, exp) idx = Index(["a,b", "c,d"], name="xxx")
The test (and benchmark) changes broken off from #41085, as a precursor to #41085 and #41372 (which currently makes changes to the str.split path, although I may break that PR up also)
https://api.github.com/repos/pandas-dev/pandas/pulls/41392
2021-05-09T10:13:04Z
2021-05-10T11:58:53Z
2021-05-10T11:58:53Z
2021-05-10T12:14:31Z
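The parametrized tests above exercise how `pat` and `n` interact in `Series.str.split`; a minimal standalone sketch of the behavior being covered (plain pandas usage, not the test harness itself):

```python
import pandas as pd

s = pd.Series(["bd asdf jfg", "kjasdflqw asdfnfk"])

# n=-1 is equivalent to the default (split on every occurrence)
assert s.str.split(n=-1).equals(s.str.split())

# a nonzero n caps the number of splits, like str.split(sep, maxsplit)
once = pd.Series(["split once", "split once too!"]).str.split(n=1)
assert once[1] == ["split", "once too!"]

# expand=True returns a DataFrame, padding ragged rows with missing values
frame = pd.Series(["a_b_c", "x_y"]).str.split("_", expand=True)
assert frame.shape == (2, 3)
```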
REF: add internal mechanics to separate index/columns sparsification in Styler
diff --git a/pandas/io/formats/style_render.py b/pandas/io/formats/style_render.py index bd768f4f0a1d4..6917daaede2c6 100644 --- a/pandas/io/formats/style_render.py +++ b/pandas/io/formats/style_render.py @@ -132,11 +132,33 @@ def _compute(self): r = func(self)(*args, **kwargs) return r - def _translate(self): + def _translate( + self, sparsify_index: bool | None = None, sparsify_cols: bool | None = None + ): """ - Convert the DataFrame in `self.data` and the attrs from `_build_styles` - into a dictionary of {head, body, uuid, cellstyle}. + Process Styler data and settings into a dict for template rendering. + + Convert data and settings from ``Styler`` attributes such as ``self.data``, + ``self.tooltips`` including applying any methods in ``self._todo``. + + Parameters + ---------- + sparsify_index : bool, optional + Whether to sparsify the index or print all hierarchical index elements + sparsify_cols : bool, optional + Whether to sparsify the columns or print all hierarchical column elements + + Returns + ------- + d : dict + The following structure: {uuid, table_styles, caption, head, body, + cellstyle, table_attributes} """ + if sparsify_index is None: + sparsify_index = get_option("display.multi_sparse") + if sparsify_cols is None: + sparsify_cols = get_option("display.multi_sparse") + ROW_HEADING_CLASS = "row_heading" COL_HEADING_CLASS = "col_heading" INDEX_NAME_CLASS = "index_name" @@ -153,14 +175,14 @@ def _translate(self): } head = self._translate_header( - BLANK_CLASS, BLANK_VALUE, INDEX_NAME_CLASS, COL_HEADING_CLASS + BLANK_CLASS, BLANK_VALUE, INDEX_NAME_CLASS, COL_HEADING_CLASS, sparsify_cols ) d.update({"head": head}) self.cellstyle_map: DefaultDict[tuple[CSSPair, ...], list[str]] = defaultdict( list ) - body = self._translate_body(DATA_CLASS, ROW_HEADING_CLASS) + body = self._translate_body(DATA_CLASS, ROW_HEADING_CLASS, sparsify_index) d.update({"body": body}) cellstyle: list[dict[str, CSSList | list[str]]] = [ @@ -185,10 +207,17 @@ def 
_translate(self): return d def _translate_header( - self, blank_class, blank_value, index_name_class, col_heading_class + self, + blank_class: str, + blank_value: str, + index_name_class: str, + col_heading_class: str, + sparsify_cols: bool, ): """ - Build each <tr> within table <head>, using the structure: + Build each <tr> within table <head> as a list + + Using the structure: +----------------------------+---------------+---------------------------+ | index_blanks ... | column_name_0 | column_headers (level_0) | 1) | .. | .. | .. | @@ -196,9 +225,29 @@ def _translate_header( +----------------------------+---------------+---------------------------+ 2) | index_names (level_0 to level_n) ... | column_blanks ... | +----------------------------+---------------+---------------------------+ + + Parameters + ---------- + blank_class : str + CSS class added to elements within blank sections of the structure. + blank_value : str + HTML display value given to elements within blank sections of the structure. + index_name_class : str + CSS class added to elements within the index_names section of the structure. + col_heading_class : str + CSS class added to elements within the column_names section of structure. + sparsify_cols : bool + Whether column_headers section will add colspan attributes (>1) to elements. + + Returns + ------- + head : list + The associated HTML elements needed for template rendering. 
""" # for sparsifying a MultiIndex - col_lengths = _get_level_lengths(self.columns, self.hidden_columns) + col_lengths = _get_level_lengths( + self.columns, sparsify_cols, self.hidden_columns + ) clabels = self.data.columns.tolist() if self.data.columns.nlevels == 1: @@ -268,18 +317,36 @@ def _translate_header( return head - def _translate_body(self, data_class, row_heading_class): + def _translate_body( + self, data_class: str, row_heading_class: str, sparsify_index: bool + ): """ - Build each <tr> in table <body> in the following format: + Build each <tr> within table <body> as a list + + Use the following structure: +--------------------------------------------+---------------------------+ | index_header_0 ... index_header_n | data_by_column | +--------------------------------------------+---------------------------+ Also add elements to the cellstyle_map for more efficient grouped elements in <style></style> block + + Parameters + ---------- + data_class : str + CSS class added to elements within data_by_column sections of the structure. + row_heading_class : str + CSS class added to elements within the index_header section of structure. + sparsify_index : bool + Whether index_headers section will add rowspan attributes (>1) to elements. + + Returns + ------- + body : list + The associated HTML elements needed for template rendering. """ # for sparsifying a MultiIndex - idx_lengths = _get_level_lengths(self.index) + idx_lengths = _get_level_lengths(self.index, sparsify_index) rlabels = self.data.index.tolist() if self.data.index.nlevels == 1: @@ -520,14 +587,26 @@ def _element( } -def _get_level_lengths(index, hidden_elements=None): +def _get_level_lengths( + index: Index, sparsify: bool, hidden_elements: Sequence[int] | None = None +): """ Given an index, find the level length for each element. - Optional argument is a list of index positions which - should not be visible. 
+ Parameters + ---------- + index : Index + Index or columns to determine lengths of each element + sparsify : bool + Whether to hide or show each distinct element in a MultiIndex + hidden_elements : sequence of int + Index positions of elements hidden from display in the index affecting + length - Result is a dictionary of (level, initial_position): span + Returns + ------- + Dict : + Result is a dictionary of (level, initial_position): span """ if isinstance(index, MultiIndex): levels = index.format(sparsify=lib.no_default, adjoin=False) @@ -546,7 +625,7 @@ def _get_level_lengths(index, hidden_elements=None): for i, lvl in enumerate(levels): for j, row in enumerate(lvl): - if not get_option("display.multi_sparse"): + if not sparsify: lengths[(i, j)] = 1 elif (row is not lib.no_default) and (j not in hidden_elements): last_label = j diff --git a/pandas/tests/io/formats/style/test_style.py b/pandas/tests/io/formats/style/test_style.py index 855def916c2cd..31877b3f33482 100644 --- a/pandas/tests/io/formats/style/test_style.py +++ b/pandas/tests/io/formats/style/test_style.py @@ -844,7 +844,24 @@ def test_get_level_lengths(self): (1, 4): 1, (1, 5): 1, } - result = _get_level_lengths(index) + result = _get_level_lengths(index, sparsify=True) + tm.assert_dict_equal(result, expected) + + expected = { + (0, 0): 1, + (0, 1): 1, + (0, 2): 1, + (0, 3): 1, + (0, 4): 1, + (0, 5): 1, + (1, 0): 1, + (1, 1): 1, + (1, 2): 1, + (1, 3): 1, + (1, 4): 1, + (1, 5): 1, + } + result = _get_level_lengths(index, sparsify=False) tm.assert_dict_equal(result, expected) def test_get_level_lengths_un_sorted(self): @@ -858,7 +875,20 @@ def test_get_level_lengths_un_sorted(self): (1, 2): 1, (1, 3): 1, } - result = _get_level_lengths(index) + result = _get_level_lengths(index, sparsify=True) + tm.assert_dict_equal(result, expected) + + expected = { + (0, 0): 1, + (0, 1): 1, + (0, 2): 1, + (0, 3): 1, + (1, 0): 1, + (1, 1): 1, + (1, 2): 1, + (1, 3): 1, + } + result = _get_level_lengths(index, 
sparsify=False) tm.assert_dict_equal(result, expected) def test_mi_sparse(self):
This is not a user-facing change. It adds only the necessary code to permit separate control of index/column MultiIndex sparsification. Essentially it moves the `get_option("display.multi_sparse")` call from a low level method to a higher level method where it can be overwritten by user args later. Partly addresses #41142
https://api.github.com/repos/pandas-dev/pandas/pulls/41391
2021-05-09T07:51:33Z
2021-05-12T15:03:20Z
2021-05-12T15:03:20Z
2021-05-12T16:09:17Z
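The `_get_level_lengths` change above hoists the sparsify decision out of the low-level loop into an explicit argument. A toy sketch of the span computation it performs over one level of a MultiIndex (a hypothetical simplified helper, not the pandas internal):

```python
def level_lengths(labels, sparsify):
    """Return {position: rowspan} for the cells that would be rendered.

    labels is one level of a MultiIndex as a flat list, e.g. ["a", "a", "b"].
    With sparsify=False every label gets its own cell of span 1; with
    sparsify=True consecutive repeats collapse into the first cell's span.
    """
    if not sparsify:
        return {j: 1 for j in range(len(labels))}
    lengths = {}
    last = None
    for j, lab in enumerate(labels):
        if j == 0 or lab != labels[j - 1]:
            last = j
            lengths[j] = 1
        else:
            lengths[last] += 1
    return lengths

assert level_lengths(["a", "a", "b"], sparsify=True) == {0: 2, 2: 1}
assert level_lengths(["a", "a", "b"], sparsify=False) == {0: 1, 1: 1, 2: 1}
```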
REF: re-use machinery for DataFrameGroupBy.nunique
diff --git a/pandas/core/groupby/generic.py b/pandas/core/groupby/generic.py index 9287163053cac..76c53f2888a8f 100644 --- a/pandas/core/groupby/generic.py +++ b/pandas/core/groupby/generic.py @@ -22,7 +22,6 @@ Mapping, TypeVar, Union, - cast, ) import warnings @@ -1576,6 +1575,10 @@ def _wrap_aggregated_output( if self.axis == 1: result = result.T + if result.index.equals(self.obj.index): + # Retain e.g. DatetimeIndex/TimedeltaIndex freq + result.index = self.obj.index.copy() + # TODO: Do this more systematically return self._reindex_output(result) @@ -1627,21 +1630,21 @@ def _wrap_agged_manager(self, mgr: Manager2D) -> DataFrame: return self._reindex_output(result)._convert(datetime=True) - def _iterate_column_groupbys(self): - for i, colname in enumerate(self._selected_obj.columns): + def _iterate_column_groupbys(self, obj: FrameOrSeries): + for i, colname in enumerate(obj.columns): yield colname, SeriesGroupBy( - self._selected_obj.iloc[:, i], + obj.iloc[:, i], selection=colname, grouper=self.grouper, exclusions=self.exclusions, ) - def _apply_to_column_groupbys(self, func) -> DataFrame: + def _apply_to_column_groupbys(self, func, obj: FrameOrSeries) -> DataFrame: from pandas.core.reshape.concat import concat - columns = self._selected_obj.columns + columns = obj.columns results = [ - func(col_groupby) for _, col_groupby in self._iterate_column_groupbys() + func(col_groupby) for _, col_groupby in self._iterate_column_groupbys(obj) ] if not len(results): @@ -1728,41 +1731,21 @@ def nunique(self, dropna: bool = True) -> DataFrame: 4 ham 5 x 5 ham 5 y """ - from pandas.core.reshape.concat import concat - # TODO: this is duplicative of how GroupBy naturally works - # Try to consolidate with normal wrapping functions + if self.axis != 0: + # see test_groupby_crash_on_nunique + return self._python_agg_general(lambda sgb: sgb.nunique(dropna)) obj = self._obj_with_exclusions - if self.axis == 0: - iter_func = obj.items - else: - iter_func = obj.iterrows - - res_list = 
[ - SeriesGroupBy(content, selection=label, grouper=self.grouper).nunique( - dropna - ) - for label, content in iter_func() - ] - if res_list: - results = concat(res_list, axis=1) - results = cast(DataFrame, results) - else: - # concat would raise - results = DataFrame( - [], index=self.grouper.result_index, columns=obj.columns[:0] - ) - - if self.axis == 1: - results = results.T - - other_axis = 1 - self.axis - results._get_axis(other_axis).names = obj._get_axis(other_axis).names + results = self._apply_to_column_groupbys( + lambda sgb: sgb.nunique(dropna), obj=obj + ) + results.columns.names = obj.columns.names # TODO: do at higher level? if not self.as_index: results.index = ibase.default_index(len(results)) self._insert_inaxis_grouper_inplace(results) + return results @Appender(DataFrame.idxmax.__doc__) diff --git a/pandas/core/groupby/groupby.py b/pandas/core/groupby/groupby.py index 1105c1bd1d782..d6b0e118cc7ce 100644 --- a/pandas/core/groupby/groupby.py +++ b/pandas/core/groupby/groupby.py @@ -1904,7 +1904,9 @@ def ohlc(self) -> DataFrame: ) return self._reindex_output(result) - return self._apply_to_column_groupbys(lambda x: x.ohlc()) + return self._apply_to_column_groupbys( + lambda x: x.ohlc(), self._obj_with_exclusions + ) @final @doc(DataFrame.describe) diff --git a/pandas/tests/groupby/test_groupby.py b/pandas/tests/groupby/test_groupby.py index f716a3a44cd54..67d2af46ac8ee 100644 --- a/pandas/tests/groupby/test_groupby.py +++ b/pandas/tests/groupby/test_groupby.py @@ -2060,24 +2060,36 @@ def test_dup_labels_output_shape(groupby_func, idx): def test_groupby_crash_on_nunique(axis): # Fix following 30253 + dti = date_range("2016-01-01", periods=2, name="foo") df = DataFrame({("A", "B"): [1, 2], ("A", "C"): [1, 3], ("D", "B"): [0, 0]}) + df.columns.names = ("bar", "baz") + df.index = dti axis_number = df._get_axis_number(axis) if not axis_number: df = df.T - result = df.groupby(axis=axis_number, level=0).nunique() + gb = df.groupby(axis=axis_number, 
level=0) + result = gb.nunique() - expected = DataFrame({"A": [1, 2], "D": [1, 1]}) + expected = DataFrame({"A": [1, 2], "D": [1, 1]}, index=dti) + expected.columns.name = "bar" if not axis_number: expected = expected.T tm.assert_frame_equal(result, expected) - # same thing, but empty columns - gb = df[[]].groupby(axis=axis_number, level=0) - res = gb.nunique() - exp = expected[[]] + if axis_number == 0: + # same thing, but empty columns + gb2 = df[[]].groupby(axis=axis_number, level=0) + exp = expected[[]] + else: + # same thing, but empty rows + gb2 = df.loc[[]].groupby(axis=axis_number, level=0) + # default for empty when we can't infer a dtype is float64 + exp = expected.loc[[]].astype(np.float64) + + res = gb2.nunique() tm.assert_frame_equal(res, exp) diff --git a/pandas/tests/resample/test_time_grouper.py b/pandas/tests/resample/test_time_grouper.py index 5dc64a33098f3..7cc2b7f72fb69 100644 --- a/pandas/tests/resample/test_time_grouper.py +++ b/pandas/tests/resample/test_time_grouper.py @@ -121,12 +121,8 @@ def test_aaa_group_order(): tm.assert_frame_equal(grouped.get_group(datetime(2013, 1, 5)), df[4::5]) -def test_aggregate_normal(request, resample_method): +def test_aggregate_normal(resample_method): """Check TimeGrouper's aggregation is identical as normal groupby.""" - if resample_method == "ohlc": - request.node.add_marker( - pytest.mark.xfail(reason="DataError: No numeric types to aggregate") - ) data = np.random.randn(20, 4) normal_df = DataFrame(data, columns=["A", "B", "C", "D"])
- [ ] closes #xxxx - [ ] tests added / passed - [ ] Ensure all linting tests pass, see [here](https://pandas.pydata.org/pandas-docs/dev/development/contributing.html#code-standards) for how to run them - [ ] whatsnew entry
https://api.github.com/repos/pandas-dev/pandas/pulls/41390
2021-05-09T03:28:14Z
2021-05-10T23:26:13Z
2021-05-10T23:26:13Z
2021-05-11T00:12:57Z
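The refactor above routes `DataFrameGroupBy.nunique` through `_apply_to_column_groupbys` instead of hand-rolled concatenation; its user-facing behavior is unchanged, e.g.:

```python
import pandas as pd

df = pd.DataFrame({"id": ["a", "a", "b"], "x": [1, 1, 2], "y": [1, 2, 2]})
result = df.groupby("id").nunique()

# one distinct-count per column, per group
assert result.loc["a", "x"] == 1  # group "a" has x values {1}
assert result.loc["a", "y"] == 2  # group "a" has y values {1, 2}
assert result.loc["b", "x"] == 1
```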
TST: Add regression tests for old issues
diff --git a/pandas/tests/arithmetic/test_numeric.py b/pandas/tests/arithmetic/test_numeric.py index 9e1d13eac5039..9ca7d0b465250 100644 --- a/pandas/tests/arithmetic/test_numeric.py +++ b/pandas/tests/arithmetic/test_numeric.py @@ -1407,3 +1407,18 @@ def test_integer_array_add_list_like( assert_function(left, expected) assert_function(right, expected) + + +def test_sub_multiindex_swapped_levels(): + # GH 9952 + df = pd.DataFrame( + {"a": np.random.randn(6)}, + index=pd.MultiIndex.from_product( + [["a", "b"], [0, 1, 2]], names=["levA", "levB"] + ), + ) + df2 = df.copy() + df2.index = df2.index.swaplevel(0, 1) + result = df - df2 + expected = pd.DataFrame([0.0] * 6, columns=["a"], index=df.index) + tm.assert_frame_equal(result, expected) diff --git a/pandas/tests/frame/indexing/test_where.py b/pandas/tests/frame/indexing/test_where.py index 7ffe2fb9ab1ff..7244624e563e3 100644 --- a/pandas/tests/frame/indexing/test_where.py +++ b/pandas/tests/frame/indexing/test_where.py @@ -729,3 +729,19 @@ def test_where_string_dtype(frame_or_series): dtype=StringDtype(), ) tm.assert_equal(result, expected) + + +def test_where_bool_comparison(): + # GH 10336 + df_mask = DataFrame( + {"AAA": [True] * 4, "BBB": [False] * 4, "CCC": [True, False, True, False]} + ) + result = df_mask.where(df_mask == False) # noqa:E712 + expected = DataFrame( + { + "AAA": np.array([np.nan] * 4, dtype=object), + "BBB": [False] * 4, + "CCC": [np.nan, False, np.nan, False], + } + ) + tm.assert_frame_equal(result, expected) diff --git a/pandas/tests/frame/methods/test_reset_index.py b/pandas/tests/frame/methods/test_reset_index.py index 924059f634fca..5a87803ddc21e 100644 --- a/pandas/tests/frame/methods/test_reset_index.py +++ b/pandas/tests/frame/methods/test_reset_index.py @@ -657,3 +657,17 @@ def test_reset_index_empty_frame_with_datetime64_multiindex_from_groupby(): expected["c3"] = expected["c3"].astype("datetime64[ns]") expected["c1"] = expected["c1"].astype("float64") tm.assert_frame_equal(result, 
expected) + + +def test_reset_index_multiindex_nat(): + # GH 11479 + idx = range(3) + tstamp = date_range("2015-07-01", freq="D", periods=3) + df = DataFrame({"id": idx, "tstamp": tstamp, "a": list("abc")}) + df.loc[2, "tstamp"] = pd.NaT + result = df.set_index(["id", "tstamp"]).reset_index("id") + expected = DataFrame( + {"id": range(3), "a": list("abc")}, + index=pd.DatetimeIndex(["2015-07-01", "2015-07-02", "NaT"], name="tstamp"), + ) + tm.assert_frame_equal(result, expected) diff --git a/pandas/tests/frame/test_constructors.py b/pandas/tests/frame/test_constructors.py index ab240531a7505..03376bdce26f8 100644 --- a/pandas/tests/frame/test_constructors.py +++ b/pandas/tests/frame/test_constructors.py @@ -2398,6 +2398,12 @@ def check_views(): # assert b[0] == 0 assert df.iloc[0, 2] == 0 + def test_from_series_with_name_with_columns(self): + # GH 7893 + result = DataFrame(Series(1, name="foo"), columns=["bar"]) + expected = DataFrame(columns=["bar"]) + tm.assert_frame_equal(result, expected) + class TestDataFrameConstructorWithDatetimeTZ: @pytest.mark.parametrize("tz", ["US/Eastern", "dateutil/US/Eastern"]) diff --git a/pandas/tests/frame/test_stack_unstack.py b/pandas/tests/frame/test_stack_unstack.py index cc4042822bc8b..365d8abcb6bac 100644 --- a/pandas/tests/frame/test_stack_unstack.py +++ b/pandas/tests/frame/test_stack_unstack.py @@ -1998,3 +1998,23 @@ def test_stack_nan_in_multiindex_columns(self): columns=Index([(0, None), (0, 2), (0, 3)]), ) tm.assert_frame_equal(result, expected) + + def test_stack_nan_level(self): + # GH 9406 + df_nan = DataFrame( + np.arange(4).reshape(2, 2), + columns=MultiIndex.from_tuples( + [("A", np.nan), ("B", "b")], names=["Upper", "Lower"] + ), + index=Index([0, 1], name="Num"), + dtype=np.float64, + ) + result = df_nan.stack() + expected = DataFrame( + [[0.0, np.nan], [np.nan, 1], [2.0, np.nan], [np.nan, 3.0]], + columns=Index(["A", "B"], name="Upper"), + index=MultiIndex.from_tuples( + [(0, np.nan), (0, "b"), (1, np.nan), (1, 
"b")], names=["Num", "Lower"] + ), + ) + tm.assert_frame_equal(result, expected) diff --git a/pandas/tests/indexing/multiindex/test_loc.py b/pandas/tests/indexing/multiindex/test_loc.py index 96d2c246dd0ee..c87efdbd35fa4 100644 --- a/pandas/tests/indexing/multiindex/test_loc.py +++ b/pandas/tests/indexing/multiindex/test_loc.py @@ -776,3 +776,15 @@ def test_loc_getitem_drops_levels_for_one_row_dataframe(): result = ser.loc["x", :, "z"] expected = Series([0], index=Index(["y"], name="b")) tm.assert_series_equal(result, expected) + + +def test_mi_columns_loc_list_label_order(): + # GH 10710 + cols = MultiIndex.from_product([["A", "B", "C"], [1, 2]]) + df = DataFrame(np.zeros((5, 6)), columns=cols) + result = df.loc[:, ["B", "A"]] + expected = DataFrame( + np.zeros((5, 4)), + columns=MultiIndex.from_tuples([("B", 1), ("B", 2), ("A", 1), ("A", 2)]), + ) + tm.assert_frame_equal(result, expected) diff --git a/pandas/tests/reshape/concat/test_datetimes.py b/pandas/tests/reshape/concat/test_datetimes.py index 2b8233388d328..0a1ba17949ddb 100644 --- a/pandas/tests/reshape/concat/test_datetimes.py +++ b/pandas/tests/reshape/concat/test_datetimes.py @@ -470,6 +470,14 @@ def test_concat_tz_NaT(self, t1): tm.assert_frame_equal(result, expected) + def test_concat_tz_with_empty(self): + # GH 9188 + result = concat( + [DataFrame(date_range("2000", periods=1, tz="UTC")), DataFrame()] + ) + expected = DataFrame(date_range("2000", periods=1, tz="UTC")) + tm.assert_frame_equal(result, expected) + class TestPeriodConcat: def test_concat_period_series(self): diff --git a/pandas/tests/series/indexing/test_setitem.py b/pandas/tests/series/indexing/test_setitem.py index 0372bf740640c..675120e03d821 100644 --- a/pandas/tests/series/indexing/test_setitem.py +++ b/pandas/tests/series/indexing/test_setitem.py @@ -201,6 +201,14 @@ def test_setitem_slicestep(self): series[::2] = 0 assert (series[::2] == 0).all() + def test_setitem_multiindex_slice(self, indexer_sli): + # GH 8856 + mi = 
MultiIndex.from_product(([0, 1], list("abcde"))) + result = Series(np.arange(10, dtype=np.int64), mi) + indexer_sli(result)[::4] = 100 + expected = Series([100, 1, 2, 3, 100, 5, 6, 7, 100, 9], mi) + tm.assert_series_equal(result, expected) + class TestSetitemBooleanMask: def test_setitem_boolean(self, string_series):
- [x] closes #7893 - [x] closes #8856 - [x] closes #9188 - [x] closes #9406 - [x] closes #9952 - [x] closes #10336 - [x] closes #10710 - [x] closes #11479 - [x] tests added / passed - [x] Ensure all linting tests pass, see [here](https://pandas.pydata.org/pandas-docs/dev/development/contributing.html#code-standards) for how to run them
https://api.github.com/repos/pandas-dev/pandas/pulls/41389
2021-05-09T00:57:27Z
2021-05-10T21:45:33Z
2021-05-10T21:45:30Z
2021-05-11T00:03:19Z
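One of the regressions covered above (GH 9952) is label-based alignment of arithmetic across a MultiIndex whose levels have been swapped; a minimal standalone reproduction:

```python
import numpy as np
import pandas as pd

mi = pd.MultiIndex.from_product([["a", "b"], [0, 1]], names=["levA", "levB"])
df = pd.DataFrame({"v": np.arange(4.0)}, index=mi)
df2 = df.copy()
df2.index = df2.index.swaplevel(0, 1)

# arithmetic aligns on labels (matched by level name), so swapping the
# level order still pairs each row with itself
result = df - df2
assert (result["v"] == 0).all()
```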
TST: fix groupby xfails
diff --git a/pandas/tests/groupby/test_groupby.py b/pandas/tests/groupby/test_groupby.py index f716a3a44cd54..39121e92dcd83 100644 --- a/pandas/tests/groupby/test_groupby.py +++ b/pandas/tests/groupby/test_groupby.py @@ -1738,6 +1738,7 @@ def test_pivot_table_values_key_error(): ) def test_empty_groupby(columns, keys, values, method, op, request): # GH8093 & GH26411 + override_dtype = None if isinstance(values, Categorical) and len(keys) == 1 and method == "apply": mark = pytest.mark.xfail(raises=TypeError, match="'str' object is not callable") @@ -1784,12 +1785,9 @@ def test_empty_groupby(columns, keys, values, method, op, request): and op in ["sum", "prod"] and method != "apply" ): - mark = pytest.mark.xfail( - raises=AssertionError, match="(DataFrame|Series) are different" - ) - request.node.add_marker(mark) + # We expect to get Int64 back for these + override_dtype = "Int64" - override_dtype = None if isinstance(values[0], bool) and op in ("prod", "sum") and method != "apply": # sum/product of bools is an integer override_dtype = "int64" diff --git a/pandas/tests/groupby/test_rank.py b/pandas/tests/groupby/test_rank.py index 20edf03c5b96c..6a73d540c7088 100644 --- a/pandas/tests/groupby/test_rank.py +++ b/pandas/tests/groupby/test_rank.py @@ -11,7 +11,6 @@ concat, ) import pandas._testing as tm -from pandas.core.base import DataError def test_rank_apply(): @@ -462,7 +461,6 @@ def test_rank_avg_even_vals(dtype, upper): tm.assert_frame_equal(result, exp_df) -@pytest.mark.xfail(reason="Works now, needs tests") @pytest.mark.parametrize("ties_method", ["average", "min", "max", "first", "dense"]) @pytest.mark.parametrize("ascending", [True, False]) @pytest.mark.parametrize("na_option", ["keep", "top", "bottom"]) @@ -470,13 +468,25 @@ def test_rank_avg_even_vals(dtype, upper): @pytest.mark.parametrize( "vals", [["bar", "bar", "foo", "bar", "baz"], ["bar", np.nan, "foo", np.nan, "baz"]] ) -def test_rank_object_raises(ties_method, ascending, na_option, pct, vals): +def 
test_rank_object_dtype(ties_method, ascending, na_option, pct, vals): df = DataFrame({"key": ["foo"] * 5, "val": vals}) + mask = df["val"].isna() - with pytest.raises(DataError, match="No numeric types to aggregate"): - df.groupby("key").rank( - method=ties_method, ascending=ascending, na_option=na_option, pct=pct - ) + gb = df.groupby("key") + res = gb.rank(method=ties_method, ascending=ascending, na_option=na_option, pct=pct) + + # construct our expected by using numeric values with the same ordering + if mask.any(): + df2 = DataFrame({"key": ["foo"] * 5, "val": [0, np.nan, 2, np.nan, 1]}) + else: + df2 = DataFrame({"key": ["foo"] * 5, "val": [0, 0, 2, 0, 1]}) + + gb2 = df2.groupby("key") + alt = gb2.rank( + method=ties_method, ascending=ascending, na_option=na_option, pct=pct + ) + + tm.assert_frame_equal(res, alt) @pytest.mark.parametrize("na_option", [True, "bad", 1])
https://api.github.com/repos/pandas-dev/pandas/pulls/41387
2021-05-08T22:35:25Z
2021-05-10T20:11:14Z
2021-05-10T20:11:14Z
2021-05-10T21:23:00Z
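After the change above, `groupby(...).rank` on object dtype no longer raises `DataError` and instead ranks lexicographically, consistent with the numeric-proxy expectation the rewritten test builds. A sketch assuming a pandas version that includes this fix:

```python
import pandas as pd

df = pd.DataFrame({"key": ["foo"] * 5, "val": ["bar", "bar", "foo", "bar", "baz"]})
result = df.groupby("key").rank(method="dense")["val"].tolist()

# dense ranks follow lexicographic order: bar < baz < foo
assert result == [1.0, 1.0, 3.0, 1.0, 2.0]
```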
TYP: SelectionMixin
diff --git a/pandas/_typing.py b/pandas/_typing.py index 1e1fffdd60676..7763b0ceb610a 100644 --- a/pandas/_typing.py +++ b/pandas/_typing.py @@ -56,6 +56,7 @@ from pandas.core.generic import NDFrame from pandas.core.groupby.generic import ( DataFrameGroupBy, + GroupBy, SeriesGroupBy, ) from pandas.core.indexes.base import Index @@ -158,6 +159,7 @@ AggObjType = Union[ "Series", "DataFrame", + "GroupBy", "SeriesGroupBy", "DataFrameGroupBy", "BaseWindow", diff --git a/pandas/core/apply.py b/pandas/core/apply.py index ad25eb6fbcaa8..9d3437fe08b24 100644 --- a/pandas/core/apply.py +++ b/pandas/core/apply.py @@ -24,6 +24,7 @@ AggFuncTypeDict, AggObjType, Axis, + FrameOrSeries, FrameOrSeriesUnion, ) from pandas.util._decorators import cache_readonly @@ -60,10 +61,7 @@ Index, Series, ) - from pandas.core.groupby import ( - DataFrameGroupBy, - SeriesGroupBy, - ) + from pandas.core.groupby import GroupBy from pandas.core.resample import Resampler from pandas.core.window.rolling import BaseWindow @@ -1089,11 +1087,9 @@ def apply_standard(self) -> FrameOrSeriesUnion: class GroupByApply(Apply): - obj: SeriesGroupBy | DataFrameGroupBy - def __init__( self, - obj: SeriesGroupBy | DataFrameGroupBy, + obj: GroupBy[FrameOrSeries], func: AggFuncType, args, kwargs, diff --git a/pandas/core/base.py b/pandas/core/base.py index 3270e3dd82f7d..542fd54ce0ac7 100644 --- a/pandas/core/base.py +++ b/pandas/core/base.py @@ -8,6 +8,8 @@ from typing import ( TYPE_CHECKING, Any, + Generic, + Hashable, TypeVar, cast, ) @@ -19,6 +21,7 @@ ArrayLike, Dtype, DtypeObj, + FrameOrSeries, IndexLabel, Shape, final, @@ -163,13 +166,15 @@ class SpecificationError(Exception): pass -class SelectionMixin: +class SelectionMixin(Generic[FrameOrSeries]): """ mixin implementing the selection & aggregation interface on a group-like object sub-classes need to define: obj, exclusions """ + obj: FrameOrSeries _selection: IndexLabel | None = None + exclusions: frozenset[Hashable] _internal_names = ["_cache", 
"__setstate__"] _internal_names_set = set(_internal_names) @@ -194,15 +199,10 @@ def _selection_list(self): @cache_readonly def _selected_obj(self): - # error: "SelectionMixin" has no attribute "obj" - if self._selection is None or isinstance( - self.obj, ABCSeries # type: ignore[attr-defined] - ): - # error: "SelectionMixin" has no attribute "obj" - return self.obj # type: ignore[attr-defined] + if self._selection is None or isinstance(self.obj, ABCSeries): + return self.obj else: - # error: "SelectionMixin" has no attribute "obj" - return self.obj[self._selection] # type: ignore[attr-defined] + return self.obj[self._selection] @cache_readonly def ndim(self) -> int: @@ -211,49 +211,31 @@ def ndim(self) -> int: @final @cache_readonly def _obj_with_exclusions(self): - # error: "SelectionMixin" has no attribute "obj" - if self._selection is not None and isinstance( - self.obj, ABCDataFrame # type: ignore[attr-defined] - ): - # error: "SelectionMixin" has no attribute "obj" - return self.obj.reindex( # type: ignore[attr-defined] - columns=self._selection_list - ) + if self._selection is not None and isinstance(self.obj, ABCDataFrame): + return self.obj.reindex(columns=self._selection_list) - # error: "SelectionMixin" has no attribute "exclusions" - if len(self.exclusions) > 0: # type: ignore[attr-defined] - # error: "SelectionMixin" has no attribute "obj" - # error: "SelectionMixin" has no attribute "exclusions" - return self.obj.drop(self.exclusions, axis=1) # type: ignore[attr-defined] + if len(self.exclusions) > 0: + return self.obj.drop(self.exclusions, axis=1) else: - # error: "SelectionMixin" has no attribute "obj" - return self.obj # type: ignore[attr-defined] + return self.obj def __getitem__(self, key): if self._selection is not None: raise IndexError(f"Column(s) {self._selection} already selected") if isinstance(key, (list, tuple, ABCSeries, ABCIndex, np.ndarray)): - # error: "SelectionMixin" has no attribute "obj" - if len( - 
self.obj.columns.intersection(key) # type: ignore[attr-defined] - ) != len(key): - # error: "SelectionMixin" has no attribute "obj" - bad_keys = list( - set(key).difference(self.obj.columns) # type: ignore[attr-defined] - ) + if len(self.obj.columns.intersection(key)) != len(key): + bad_keys = list(set(key).difference(self.obj.columns)) raise KeyError(f"Columns not found: {str(bad_keys)[1:-1]}") return self._gotitem(list(key), ndim=2) elif not getattr(self, "as_index", False): - # error: "SelectionMixin" has no attribute "obj" - if key not in self.obj.columns: # type: ignore[attr-defined] + if key not in self.obj.columns: raise KeyError(f"Column not found: {key}") return self._gotitem(key, ndim=2) else: - # error: "SelectionMixin" has no attribute "obj" - if key not in self.obj: # type: ignore[attr-defined] + if key not in self.obj: raise KeyError(f"Column not found: {key}") return self._gotitem(key, ndim=1) diff --git a/pandas/core/groupby/groupby.py b/pandas/core/groupby/groupby.py index 1105c1bd1d782..0a176dafb34bb 100644 --- a/pandas/core/groupby/groupby.py +++ b/pandas/core/groupby/groupby.py @@ -20,7 +20,6 @@ class providing the base-class of operations. 
from typing import ( TYPE_CHECKING, Callable, - Generic, Hashable, Iterable, Iterator, @@ -567,7 +566,7 @@ def group_selection_context(groupby: GroupBy) -> Iterator[GroupBy]: ] -class BaseGroupBy(PandasObject, SelectionMixin, Generic[FrameOrSeries]): +class BaseGroupBy(PandasObject, SelectionMixin[FrameOrSeries]): _group_selection: IndexLabel | None = None _apply_allowlist: frozenset[str] = frozenset() _hidden_attrs = PandasObject._hidden_attrs | { @@ -588,7 +587,6 @@ class BaseGroupBy(PandasObject, SelectionMixin, Generic[FrameOrSeries]): axis: int grouper: ops.BaseGrouper - obj: FrameOrSeries group_keys: bool @final @@ -840,7 +838,6 @@ class GroupBy(BaseGroupBy[FrameOrSeries]): more """ - obj: FrameOrSeries grouper: ops.BaseGrouper as_index: bool @@ -852,7 +849,7 @@ def __init__( axis: int = 0, level: IndexLabel | None = None, grouper: ops.BaseGrouper | None = None, - exclusions: set[Hashable] | None = None, + exclusions: frozenset[Hashable] | None = None, selection: IndexLabel | None = None, as_index: bool = True, sort: bool = True, @@ -901,7 +898,7 @@ def __init__( self.obj = obj self.axis = obj._get_axis_number(axis) self.grouper = grouper - self.exclusions = exclusions or set() + self.exclusions = frozenset(exclusions) if exclusions else frozenset() def __getattr__(self, attr: str): if attr in self._internal_names_set: diff --git a/pandas/core/groupby/grouper.py b/pandas/core/groupby/grouper.py index f1762a2535ff7..4650dbea27de1 100644 --- a/pandas/core/groupby/grouper.py +++ b/pandas/core/groupby/grouper.py @@ -652,7 +652,7 @@ def get_grouper( mutated: bool = False, validate: bool = True, dropna: bool = True, -) -> tuple[ops.BaseGrouper, set[Hashable], FrameOrSeries]: +) -> tuple[ops.BaseGrouper, frozenset[Hashable], FrameOrSeries]: """ Create and return a BaseGrouper, which is an internal mapping of how to create the grouper indexers. 
@@ -728,13 +728,13 @@ def get_grouper( if isinstance(key, Grouper): binner, grouper, obj = key._get_grouper(obj, validate=False) if key.key is None: - return grouper, set(), obj + return grouper, frozenset(), obj else: - return grouper, {key.key}, obj + return grouper, frozenset({key.key}), obj # already have a BaseGrouper, just return it elif isinstance(key, ops.BaseGrouper): - return key, set(), obj + return key, frozenset(), obj if not isinstance(key, list): keys = [key] @@ -861,7 +861,7 @@ def is_in_obj(gpr) -> bool: grouper = ops.BaseGrouper( group_axis, groupings, sort=sort, mutated=mutated, dropna=dropna ) - return grouper, exclusions, obj + return grouper, frozenset(exclusions), obj def _is_label_like(val) -> bool: diff --git a/pandas/core/window/rolling.py b/pandas/core/window/rolling.py index b51875134c614..490cdca8519e6 100644 --- a/pandas/core/window/rolling.py +++ b/pandas/core/window/rolling.py @@ -13,6 +13,7 @@ TYPE_CHECKING, Any, Callable, + Hashable, ) import warnings @@ -109,7 +110,7 @@ class BaseWindow(SelectionMixin): """Provides utilities for performing windowing operations.""" _attributes: list[str] = [] - exclusions: set[str] = set() + exclusions: frozenset[Hashable] = frozenset() def __init__( self,
- [ ] closes #xxxx - [ ] tests added / passed - [ ] Ensure all linting tests pass, see [here](https://pandas.pydata.org/pandas-docs/dev/development/contributing.html#code-standards) for how to run them - [ ] whatsnew entry @simonjayhawkins thoughts on the FrameOrSeries vs FrameOrSeriesUnion that I've ignored here?
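A minimal sketch of the pattern this PR applies: making the mixin itself `Generic` over the object type, so `obj` can be declared once on the mixin instead of being re-declared (behind `type: ignore` comments) on every subclass. The class names below are illustrative stand-ins, not the actual pandas types.

```python
from typing import Generic, TypeVar

# Stand-ins for pandas' Series/DataFrame; names here are hypothetical.
class FakeSeries: ...
class FakeDataFrame: ...

T = TypeVar("T", FakeSeries, FakeDataFrame)

class SelectionMixin(Generic[T]):
    # Declaring ``obj`` on the generic mixin means subclasses no longer
    # need their own ``obj: FrameOrSeries`` annotation or type-ignores.
    obj: T
    _selection = None

    @property
    def _selected_obj(self) -> T:
        # Mirrors the simplified branch in the diff above.
        return self.obj

class SeriesGroupBy(SelectionMixin[FakeSeries]):
    def __init__(self, obj: FakeSeries) -> None:
        self.obj = obj
```

Subclasses such as `BaseGroupBy(PandasObject, SelectionMixin[FrameOrSeries])` then inherit a correctly-typed `obj`, which is why the diff removes the duplicate `obj: FrameOrSeries` lines.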
https://api.github.com/repos/pandas-dev/pandas/pulls/41384
2021-05-08T18:37:08Z
2021-05-10T23:28:48Z
2021-05-10T23:28:48Z
2021-05-11T00:13:48Z
DOC: Remove deprecated example for astype
diff --git a/pandas/core/generic.py b/pandas/core/generic.py index d225ac6e6881b..c61fe49663920 100644 --- a/pandas/core/generic.py +++ b/pandas/core/generic.py @@ -5686,6 +5686,14 @@ def astype( to_numeric : Convert argument to a numeric type. numpy.ndarray.astype : Cast a numpy array to a specified type. + Notes + ----- + .. deprecated:: 1.3.0 + + Using ``astype`` to convert from timezone-naive dtype to + timezone-aware dtype is deprecated and will raise in a + future version. Use :meth:`Series.dt.tz_localize` instead. + Examples -------- Create a DataFrame: @@ -5761,15 +5769,6 @@ def astype( 1 2020-01-02 2 2020-01-03 dtype: datetime64[ns] - - Datetimes are localized to UTC first before - converting to the specified timezone: - - >>> ser_date.astype('datetime64[ns, US/Eastern]') - 0 2019-12-31 19:00:00-05:00 - 1 2020-01-01 19:00:00-05:00 - 2 2020-01-02 19:00:00-05:00 - dtype: datetime64[ns, US/Eastern] """ if is_dict_like(dtype): if self.ndim == 1: # i.e. Series
- [x] Ensure all linting tests pass, see [here](https://pandas.pydata.org/pandas-docs/dev/development/contributing.html#code-standards) for how to run them Followup to #39258
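For reference, the replacement the new deprecation note points to can be sketched like this (assumes pandas with the 1.3.0 deprecation semantics): the old single-step `astype` localized to UTC first, so the explicit equivalent is localize-then-convert.

```python
import pandas as pd

ser_date = pd.Series(pd.date_range("2020-01-01", periods=3))

# Deprecated (removed example): ser_date.astype("datetime64[ns, US/Eastern]")
# Explicit replacement: localize the naive values to UTC, then convert.
result = ser_date.dt.tz_localize("UTC").dt.tz_convert("US/Eastern")
```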
https://api.github.com/repos/pandas-dev/pandas/pulls/41381
2021-05-08T15:23:40Z
2021-05-10T23:29:35Z
2021-05-10T23:29:35Z
2021-05-11T02:00:13Z
TST/CLN: use dtype fixture in numeric tests
diff --git a/pandas/tests/indexes/common.py b/pandas/tests/indexes/common.py index 857b136b67a0c..1cef932f7bf0a 100644 --- a/pandas/tests/indexes/common.py +++ b/pandas/tests/indexes/common.py @@ -39,7 +39,7 @@ class Base: _index_cls: Type[Index] @pytest.fixture - def simple_index(self) -> Index: + def simple_index(self): raise NotImplementedError("Method not implemented") def create_index(self) -> Index: @@ -772,6 +772,12 @@ class NumericBase(Base): Base class for numeric index (incl. RangeIndex) sub-class tests. """ + def test_constructor_unwraps_index(self, dtype): + idx = Index([1, 2], dtype=dtype) + result = self._index_cls(idx) + expected = np.array([1, 2], dtype=dtype) + tm.assert_numpy_array_equal(result._data, expected) + def test_where(self): # Tested in numeric.test_indexing pass diff --git a/pandas/tests/indexes/numeric/test_numeric.py b/pandas/tests/indexes/numeric/test_numeric.py index e63aeba54fccd..4c2c38df601ce 100644 --- a/pandas/tests/indexes/numeric/test_numeric.py +++ b/pandas/tests/indexes/numeric/test_numeric.py @@ -17,7 +17,10 @@ class TestFloat64Index(NumericBase): _index_cls = Float64Index - _dtype = np.float64 + + @pytest.fixture + def dtype(self, request): + return np.float64 @pytest.fixture( params=["int64", "uint64", "category", "datetime64"], @@ -26,8 +29,8 @@ def invalid_dtype(self, request): return request.param @pytest.fixture - def simple_index(self) -> Index: - values = np.arange(5, dtype=self._dtype) + def simple_index(self, dtype): + values = np.arange(5, dtype=dtype) return self._index_cls(values) @pytest.fixture( @@ -65,9 +68,8 @@ def check_coerce(self, a, b, is_float_index=True): else: self.check_is_index(b) - def test_constructor(self): + def test_constructor(self, dtype): index_cls = self._index_cls - dtype = self._dtype # explicit construction index = index_cls([1, 2, 3, 4, 5]) @@ -204,8 +206,7 @@ def test_equals_numeric_other_index_type(self, other): pd.timedelta_range("1 Day", periods=3), ], ) - def 
test_lookups_datetimelike_values(self, vals): - dtype = self._dtype + def test_lookups_datetimelike_values(self, vals, dtype): # If we have datetime64 or timedelta64 values, make sure they are # wrappped correctly GH#31163 @@ -277,14 +278,14 @@ def test_fillna_float64(self): class NumericInt(NumericBase): - def test_view(self): + def test_view(self, dtype): index_cls = self._index_cls idx = index_cls([], name="Foo") idx_view = idx.view() assert idx_view.name == "Foo" - idx_view = idx.view(self._dtype) + idx_view = idx.view(dtype) tm.assert_index_equal(idx, index_cls(idx_view, name="Foo")) idx_view = idx.view(index_cls) @@ -334,7 +335,7 @@ def test_logical_compat(self, simple_index): assert idx.all() == idx.values.all() assert idx.any() == idx.values.any() - def test_identical(self, simple_index): + def test_identical(self, simple_index, dtype): index = simple_index idx = Index(index.copy()) @@ -351,7 +352,7 @@ def test_identical(self, simple_index): assert not idx.identical(index) assert Index(same_values, name="foo", dtype=object).identical(idx) - assert not index.astype(dtype=object).identical(index.astype(dtype=self._dtype)) + assert not index.astype(dtype=object).identical(index.astype(dtype=dtype)) def test_cant_or_shouldnt_cast(self): msg = ( @@ -380,7 +381,10 @@ def test_prevent_casting(self, simple_index): class TestInt64Index(NumericInt): _index_cls = Int64Index - _dtype = np.int64 + + @pytest.fixture + def dtype(self): + return np.int64 @pytest.fixture( params=["uint64", "float64", "category", "datetime64"], @@ -389,8 +393,8 @@ def invalid_dtype(self, request): return request.param @pytest.fixture - def simple_index(self) -> Index: - return self._index_cls(range(0, 20, 2), dtype=self._dtype) + def simple_index(self, dtype): + return self._index_cls(range(0, 20, 2), dtype=dtype) @pytest.fixture( params=[range(0, 20, 2), range(19, -1, -1)], ids=["index_inc", "index_dec"] @@ -398,9 +402,8 @@ def simple_index(self) -> Index: def index(self, request): return 
self._index_cls(request.param) - def test_constructor(self): + def test_constructor(self, dtype): index_cls = self._index_cls - dtype = self._dtype # pass list, coerce fine index = index_cls([-5, 0, 1, 2]) @@ -439,9 +442,8 @@ def test_constructor(self): ]: tm.assert_index_equal(idx, expected) - def test_constructor_corner(self): + def test_constructor_corner(self, dtype): index_cls = self._index_cls - dtype = self._dtype arr = np.array([1, 2, 3, 4], dtype=object) index = index_cls(arr) @@ -465,12 +467,6 @@ def test_constructor_coercion_signed_to_unsigned(self, uint_dtype): with pytest.raises(OverflowError, match=msg): Index([-1], dtype=uint_dtype) - def test_constructor_unwraps_index(self): - idx = Index([1, 2]) - result = self._index_cls(idx) - expected = np.array([1, 2], dtype=self._dtype) - tm.assert_numpy_array_equal(result._data, expected) - def test_coerce_list(self): # coerce things arr = Index([1, 2, 3, 4]) @@ -478,13 +474,16 @@ def test_coerce_list(self): # but not if explicit dtype passed arr = Index([1, 2, 3, 4], dtype=object) - assert isinstance(arr, Index) + assert type(arr) is Index class TestUInt64Index(NumericInt): _index_cls = UInt64Index - _dtype = np.uint64 + + @pytest.fixture + def dtype(self): + return np.uint64 @pytest.fixture( params=["int64", "float64", "category", "datetime64"], @@ -493,9 +492,9 @@ def invalid_dtype(self, request): return request.param @pytest.fixture - def simple_index(self) -> Index: + def simple_index(self, dtype): # compat with shared Int64/Float64 tests - return self._index_cls(np.arange(5, dtype=self._dtype)) + return self._index_cls(np.arange(5, dtype=dtype)) @pytest.fixture( params=[ @@ -507,9 +506,8 @@ def simple_index(self) -> Index: def index(self, request): return self._index_cls(request.param) - def test_constructor(self): + def test_constructor(self, dtype): index_cls = self._index_cls - dtype = self._dtype idx = index_cls([1, 2, 3]) res = Index([1, 2, 3], dtype=dtype) diff --git 
a/pandas/tests/indexes/ranges/test_range.py b/pandas/tests/indexes/ranges/test_range.py index 12fce56ffcb21..d9b093cc97fda 100644 --- a/pandas/tests/indexes/ranges/test_range.py +++ b/pandas/tests/indexes/ranges/test_range.py @@ -23,6 +23,10 @@ class TestRangeIndex(NumericBase): _index_cls = RangeIndex + @pytest.fixture + def dtype(self): + return np.int64 + @pytest.fixture( params=["uint64", "float64", "category", "datetime64"], ) @@ -43,6 +47,11 @@ def simple_index(self) -> Index: def index(self, request): return request.param + def test_constructor_unwraps_index(self, dtype): + result = self._index_cls(1, 3) + expected = np.array([1, 2], dtype=dtype) + tm.assert_numpy_array_equal(result._data, expected) + def test_can_hold_identifiers(self, simple_index): idx = simple_index key = idx[0]
Small cleanup to make tests amenable to running against several dtypes.
https://api.github.com/repos/pandas-dev/pandas/pulls/41380
2021-05-08T08:46:25Z
2021-05-10T23:30:42Z
2021-05-10T23:30:42Z
2021-05-10T23:42:46Z
CLN: groupby assorted
diff --git a/pandas/core/groupby/generic.py b/pandas/core/groupby/generic.py index 9287163053cac..df0a413b7a76a 100644 --- a/pandas/core/groupby/generic.py +++ b/pandas/core/groupby/generic.py @@ -90,7 +90,6 @@ MultiIndex, all_indexes_same, ) -import pandas.core.indexes.base as ibase from pandas.core.series import Series from pandas.core.util.numba_ import maybe_use_numba @@ -482,14 +481,13 @@ def _get_index() -> Index: if isinstance(values[0], dict): # GH #823 #24880 index = _get_index() - result: FrameOrSeriesUnion = self._reindex_output( - self.obj._constructor_expanddim(values, index=index) - ) + res_df = self.obj._constructor_expanddim(values, index=index) + res_df = self._reindex_output(res_df) # if self.observed is False, # keep all-NaN rows created while re-indexing - result = result.stack(dropna=self.observed) - result.name = self._selection_name - return result + res_ser = res_df.stack(dropna=self.observed) + res_ser.name = self._selection_name + return res_ser elif isinstance(values[0], (Series, DataFrame)): return self._concat_objects(keys, values, not_indexed_same=not_indexed_same) else: @@ -1000,13 +998,18 @@ def aggregate(self, func=None, *args, engine=None, engine_kwargs=None, **kwargs) # grouper specific aggregations if self.grouper.nkeys > 1: + # test_groupby_as_index_series_scalar gets here with 'not self.as_index' return self._python_agg_general(func, *args, **kwargs) elif args or kwargs: + # test_pass_args_kwargs gets here (with and without as_index) + # can't return early result = self._aggregate_frame(func, *args, **kwargs) elif self.axis == 1: # _aggregate_multiple_funcs does not allow self.axis == 1 + # Note: axis == 1 precludes 'not self.as_index', see __init__ result = self._aggregate_frame(func) + return result else: @@ -1036,7 +1039,7 @@ def aggregate(self, func=None, *args, engine=None, engine_kwargs=None, **kwargs) if not self.as_index: self._insert_inaxis_grouper_inplace(result) - result.index = np.arange(len(result)) + result.index 
= Index(range(len(result))) return result._convert(datetime=True) @@ -1162,7 +1165,9 @@ def _wrap_applied_output(self, data, keys, values, not_indexed_same=False): if self.as_index: return self.obj._constructor_sliced(values, index=key_index) else: - result = DataFrame(values, index=key_index, columns=[self._selection]) + result = self.obj._constructor( + values, index=key_index, columns=[self._selection] + ) self._insert_inaxis_grouper_inplace(result) return result else: @@ -1611,8 +1616,8 @@ def _wrap_transformed_output( def _wrap_agged_manager(self, mgr: Manager2D) -> DataFrame: if not self.as_index: - index = np.arange(mgr.shape[1]) - mgr.set_axis(1, ibase.Index(index)) + index = Index(range(mgr.shape[1])) + mgr.set_axis(1, index) result = self.obj._constructor(mgr) self._insert_inaxis_grouper_inplace(result) @@ -1761,7 +1766,7 @@ def nunique(self, dropna: bool = True) -> DataFrame: results._get_axis(other_axis).names = obj._get_axis(other_axis).names if not self.as_index: - results.index = ibase.default_index(len(results)) + results.index = Index(range(len(results))) self._insert_inaxis_grouper_inplace(results) return results diff --git a/pandas/core/groupby/ops.py b/pandas/core/groupby/ops.py index 46b47bc29d8a6..3045451974ee7 100644 --- a/pandas/core/groupby/ops.py +++ b/pandas/core/groupby/ops.py @@ -889,9 +889,8 @@ def codes_info(self) -> np.ndarray: @final def _get_compressed_codes(self) -> tuple[np.ndarray, np.ndarray]: - all_codes = self.codes - if len(all_codes) > 1: - group_index = get_group_index(all_codes, self.shape, sort=True, xnull=True) + if len(self.groupings) > 1: + group_index = get_group_index(self.codes, self.shape, sort=True, xnull=True) return compress_group_index(group_index, sort=self.sort) ping = self.groupings[0] @@ -1111,6 +1110,7 @@ def groups(self): @property def nkeys(self) -> int: + # still matches len(self.groupings), but we can hard-code return 1 def _get_grouper(self): diff --git a/pandas/tests/groupby/test_groupby.py 
b/pandas/tests/groupby/test_groupby.py index f716a3a44cd54..44d48c45e1fd1 100644 --- a/pandas/tests/groupby/test_groupby.py +++ b/pandas/tests/groupby/test_groupby.py @@ -234,17 +234,18 @@ def f(x, q=None, axis=0): tm.assert_series_equal(trans_result, trans_expected) # DataFrame - df_grouped = tsframe.groupby(lambda x: x.month) - agg_result = df_grouped.agg(np.percentile, 80, axis=0) - apply_result = df_grouped.apply(DataFrame.quantile, 0.8) - expected = df_grouped.quantile(0.8) - tm.assert_frame_equal(apply_result, expected, check_names=False) - tm.assert_frame_equal(agg_result, expected) - - agg_result = df_grouped.agg(f, q=80) - apply_result = df_grouped.apply(DataFrame.quantile, q=0.8) - tm.assert_frame_equal(agg_result, expected) - tm.assert_frame_equal(apply_result, expected, check_names=False) + for as_index in [True, False]: + df_grouped = tsframe.groupby(lambda x: x.month, as_index=as_index) + agg_result = df_grouped.agg(np.percentile, 80, axis=0) + apply_result = df_grouped.apply(DataFrame.quantile, 0.8) + expected = df_grouped.quantile(0.8) + tm.assert_frame_equal(apply_result, expected, check_names=False) + tm.assert_frame_equal(agg_result, expected) + + agg_result = df_grouped.agg(f, q=80) + apply_result = df_grouped.apply(DataFrame.quantile, q=0.8) + tm.assert_frame_equal(agg_result, expected) + tm.assert_frame_equal(apply_result, expected, check_names=False) def test_len():
- [ ] closes #xxxx - [ ] tests added / passed - [ ] Ensure all linting tests pass, see [here](https://pandas.pydata.org/pandas-docs/dev/development/contributing.html#code-standards) for how to run them - [ ] whatsnew entry
https://api.github.com/repos/pandas-dev/pandas/pulls/41379
2021-05-08T05:39:35Z
2021-05-10T23:31:58Z
2021-05-10T23:31:58Z
2021-05-10T23:59:05Z
DEPR: kind kwarg in _maybe_cast_slice_bound
diff --git a/doc/source/user_guide/scale.rst b/doc/source/user_guide/scale.rst index 7f2419bc7f19d..71aef4fdd75f6 100644 --- a/doc/source/user_guide/scale.rst +++ b/doc/source/user_guide/scale.rst @@ -345,6 +345,7 @@ we need to supply the divisions manually. Now we can do things like fast random access with ``.loc``. .. ipython:: python + :okwarning: ddf.loc["2002-01-01 12:01":"2002-01-01 12:05"].compute() diff --git a/pandas/core/indexes/base.py b/pandas/core/indexes/base.py index 84f1245299d53..539881deff705 100644 --- a/pandas/core/indexes/base.py +++ b/pandas/core/indexes/base.py @@ -3685,7 +3685,7 @@ def is_int(v): ) indexer = key else: - indexer = self.slice_indexer(start, stop, step, kind=kind) + indexer = self.slice_indexer(start, stop, step) return indexer @@ -5648,7 +5648,7 @@ def slice_indexer( >>> idx.slice_indexer(start='b', end=('c', 'g')) slice(1, 3, None) """ - start_slice, end_slice = self.slice_locs(start, end, step=step, kind=kind) + start_slice, end_slice = self.slice_locs(start, end, step=step) # return a slice if not is_scalar(start_slice): @@ -5678,7 +5678,7 @@ def _validate_indexer(self, form: str_t, key, kind: str_t): if key is not None and not is_integer(key): raise self._invalid_indexer(form, key) - def _maybe_cast_slice_bound(self, label, side: str_t, kind): + def _maybe_cast_slice_bound(self, label, side: str_t, kind=no_default): """ This function should be overloaded in subclasses that allow non-trivial casting on label-slice bounds, e.g. datetime-like indices allowing @@ -5698,7 +5698,8 @@ def _maybe_cast_slice_bound(self, label, side: str_t, kind): ----- Value of `side` parameter should be validated in caller. """ - assert kind in ["loc", "getitem", None] + assert kind in ["loc", "getitem", None, no_default] + self._deprecated_arg(kind, "kind", "_maybe_cast_slice_bound") # We are a plain index here (sub-class override this method if they # wish to have special treatment for floats/ints, e.g. 
Float64Index and @@ -5723,7 +5724,7 @@ def _searchsorted_monotonic(self, label, side: str_t = "left"): raise ValueError("index must be monotonic increasing or decreasing") - def get_slice_bound(self, label, side: str_t, kind) -> int: + def get_slice_bound(self, label, side: str_t, kind=None) -> int: """ Calculate slice bound that corresponds to given label. @@ -5753,7 +5754,7 @@ def get_slice_bound(self, label, side: str_t, kind) -> int: # For datetime indices label may be a string that has to be converted # to datetime boundary according to its resolution. - label = self._maybe_cast_slice_bound(label, side, kind) + label = self._maybe_cast_slice_bound(label, side) # we need to look up the label try: @@ -5843,13 +5844,13 @@ def slice_locs(self, start=None, end=None, step=None, kind=None): start_slice = None if start is not None: - start_slice = self.get_slice_bound(start, "left", kind) + start_slice = self.get_slice_bound(start, "left") if start_slice is None: start_slice = 0 end_slice = None if end is not None: - end_slice = self.get_slice_bound(end, "right", kind) + end_slice = self.get_slice_bound(end, "right") if end_slice is None: end_slice = len(self) @@ -6181,6 +6182,18 @@ def shape(self) -> Shape: # See GH#27775, GH#27384 for history/reasoning in how this is defined. return (len(self),) + def _deprecated_arg(self, value, name: str_t, methodname: str_t) -> None: + """ + Issue a FutureWarning if the arg/kwarg is not no_default. + """ + if value is not no_default: + warnings.warn( + f"'{name}' argument in {methodname} is deprecated " + "and will be removed in a future version. 
Do not pass it.", + FutureWarning, + stacklevel=3, + ) + def ensure_index_from_sequences(sequences, names=None): """ diff --git a/pandas/core/indexes/datetimes.py b/pandas/core/indexes/datetimes.py index f77f28deecf57..e8b21f3cec668 100644 --- a/pandas/core/indexes/datetimes.py +++ b/pandas/core/indexes/datetimes.py @@ -724,7 +724,7 @@ def _maybe_cast_for_get_loc(self, key) -> Timestamp: key = key.tz_convert(self.tz) return key - def _maybe_cast_slice_bound(self, label, side: str, kind): + def _maybe_cast_slice_bound(self, label, side: str, kind=lib.no_default): """ If label is a string, cast it to datetime according to resolution. @@ -742,7 +742,8 @@ def _maybe_cast_slice_bound(self, label, side: str, kind): ----- Value of `side` parameter should be validated in caller. """ - assert kind in ["loc", "getitem", None] + assert kind in ["loc", "getitem", None, lib.no_default] + self._deprecated_arg(kind, "kind", "_maybe_cast_slice_bound") if isinstance(label, str): freq = getattr(self, "freqstr", getattr(self, "inferred_freq", None)) @@ -823,12 +824,12 @@ def check_str_or_none(point): mask = np.array(True) deprecation_mask = np.array(True) if start is not None: - start_casted = self._maybe_cast_slice_bound(start, "left", kind) + start_casted = self._maybe_cast_slice_bound(start, "left") mask = start_casted <= self deprecation_mask = start_casted == self if end is not None: - end_casted = self._maybe_cast_slice_bound(end, "right", kind) + end_casted = self._maybe_cast_slice_bound(end, "right") mask = (self <= end_casted) & mask deprecation_mask = (end_casted == self) | deprecation_mask diff --git a/pandas/core/indexes/interval.py b/pandas/core/indexes/interval.py index d7b5f66bd385f..fc92a1b3afe53 100644 --- a/pandas/core/indexes/interval.py +++ b/pandas/core/indexes/interval.py @@ -825,8 +825,9 @@ def _should_fallback_to_positional(self) -> bool: # positional in this case return self.dtype.subtype.kind in ["m", "M"] - def _maybe_cast_slice_bound(self, label, side: 
str, kind): - return getattr(self, side)._maybe_cast_slice_bound(label, side, kind) + def _maybe_cast_slice_bound(self, label, side: str, kind=lib.no_default): + self._deprecated_arg(kind, "kind", "_maybe_cast_slice_bound") + return getattr(self, side)._maybe_cast_slice_bound(label, side) @Appender(Index._convert_list_indexer.__doc__) def _convert_list_indexer(self, keyarr): diff --git a/pandas/core/indexes/multi.py b/pandas/core/indexes/multi.py index a68238af003e4..d143af4e53c60 100644 --- a/pandas/core/indexes/multi.py +++ b/pandas/core/indexes/multi.py @@ -2716,7 +2716,7 @@ def _get_indexer( return ensure_platform_int(indexer) def get_slice_bound( - self, label: Hashable | Sequence[Hashable], side: str, kind: str + self, label: Hashable | Sequence[Hashable], side: str, kind: str | None = None ) -> int: """ For an ordered MultiIndex, compute slice bound @@ -2729,7 +2729,7 @@ def get_slice_bound( ---------- label : object or tuple of objects side : {'left', 'right'} - kind : {'loc', 'getitem'} + kind : {'loc', 'getitem', None} Returns ------- @@ -2747,13 +2747,13 @@ def get_slice_bound( Get the locations from the leftmost 'b' in the first level until the end of the multiindex: - >>> mi.get_slice_bound('b', side="left", kind="loc") + >>> mi.get_slice_bound('b', side="left") 1 Like above, but if you get the locations from the rightmost 'b' in the first level and 'f' in the second level: - >>> mi.get_slice_bound(('b','f'), side="right", kind="loc") + >>> mi.get_slice_bound(('b','f'), side="right") 3 See Also @@ -2820,7 +2820,7 @@ def slice_locs(self, start=None, end=None, step=None, kind=None): """ # This function adds nothing to its parent implementation (the magic # happens in get_slice_bound method), but it adds meaningful doc. 
- return super().slice_locs(start, end, step, kind=kind) + return super().slice_locs(start, end, step) def _partial_tup_index(self, tup, side="left"): if len(tup) > self._lexsort_depth: @@ -3206,9 +3206,7 @@ def convert_indexer(start, stop, step, indexer=indexer, codes=level_codes): # we have a partial slice (like looking up a partial date # string) - start = stop = level_index.slice_indexer( - key.start, key.stop, key.step, kind="loc" - ) + start = stop = level_index.slice_indexer(key.start, key.stop, key.step) step = start.step if isinstance(start, slice) or isinstance(stop, slice): diff --git a/pandas/core/indexes/numeric.py b/pandas/core/indexes/numeric.py index 28f563764ef10..88b0b019324ea 100644 --- a/pandas/core/indexes/numeric.py +++ b/pandas/core/indexes/numeric.py @@ -112,8 +112,9 @@ def _validate_dtype(cls, dtype: Dtype | None) -> None: # Indexing Methods @doc(Index._maybe_cast_slice_bound) - def _maybe_cast_slice_bound(self, label, side: str, kind): - assert kind in ["loc", "getitem", None] + def _maybe_cast_slice_bound(self, label, side: str, kind=lib.no_default): + assert kind in ["loc", "getitem", None, lib.no_default] + self._deprecated_arg(kind, "kind", "_maybe_cast_slice_bound") # we will try to coerce to integers return self._maybe_cast_indexer(label) @@ -346,7 +347,7 @@ def _convert_slice_indexer(self, key: slice, kind: str): # We always treat __getitem__ slicing as label-based # translate to locations - return self.slice_indexer(key.start, key.stop, key.step, kind=kind) + return self.slice_indexer(key.start, key.stop, key.step) # ---------------------------------------------------------------- diff --git a/pandas/core/indexes/period.py b/pandas/core/indexes/period.py index 18e441ef165c9..136843938b683 100644 --- a/pandas/core/indexes/period.py +++ b/pandas/core/indexes/period.py @@ -531,7 +531,7 @@ def get_loc(self, key, method=None, tolerance=None): except KeyError as err: raise KeyError(orig_key) from err - def _maybe_cast_slice_bound(self, 
label, side: str, kind: str): + def _maybe_cast_slice_bound(self, label, side: str, kind=lib.no_default): """ If label is a string or a datetime, cast it to Period.ordinal according to resolution. @@ -540,7 +540,7 @@ def _maybe_cast_slice_bound(self, label, side: str, kind: str): ---------- label : object side : {'left', 'right'} - kind : {'loc', 'getitem'} + kind : {'loc', 'getitem'}, or None Returns ------- @@ -551,7 +551,8 @@ def _maybe_cast_slice_bound(self, label, side: str, kind: str): Value of `side` parameter should be validated in caller. """ - assert kind in ["loc", "getitem"] + assert kind in ["loc", "getitem", None, lib.no_default] + self._deprecated_arg(kind, "kind", "_maybe_cast_slice_bound") if isinstance(label, datetime): return Period(label, freq=self.freq) diff --git a/pandas/core/indexes/timedeltas.py b/pandas/core/indexes/timedeltas.py index a23dd10bc3c0e..ec97fa1e05851 100644 --- a/pandas/core/indexes/timedeltas.py +++ b/pandas/core/indexes/timedeltas.py @@ -192,7 +192,7 @@ def get_loc(self, key, method=None, tolerance=None): return Index.get_loc(self, key, method, tolerance) - def _maybe_cast_slice_bound(self, label, side: str, kind): + def _maybe_cast_slice_bound(self, label, side: str, kind=lib.no_default): """ If label is a string, cast it to timedelta according to resolution. 
@@ -206,7 +206,8 @@ def _maybe_cast_slice_bound(self, label, side: str, kind): ------- label : object """ - assert kind in ["loc", "getitem", None] + assert kind in ["loc", "getitem", None, lib.no_default] + self._deprecated_arg(kind, "kind", "_maybe_cast_slice_bound") if isinstance(label, str): parsed = Timedelta(label) diff --git a/pandas/core/indexing.py b/pandas/core/indexing.py index 96aeda955df01..3d36387588e73 100644 --- a/pandas/core/indexing.py +++ b/pandas/core/indexing.py @@ -1170,9 +1170,7 @@ def _get_slice_axis(self, slice_obj: slice, axis: int): return obj.copy(deep=False) labels = obj._get_axis(axis) - indexer = labels.slice_indexer( - slice_obj.start, slice_obj.stop, slice_obj.step, kind="loc" - ) + indexer = labels.slice_indexer(slice_obj.start, slice_obj.stop, slice_obj.step) if isinstance(indexer, slice): return self.obj._slice(indexer, axis=axis) diff --git a/pandas/tests/indexes/datetimes/test_indexing.py b/pandas/tests/indexes/datetimes/test_indexing.py index 3da6414332cb8..2773543b74764 100644 --- a/pandas/tests/indexes/datetimes/test_indexing.py +++ b/pandas/tests/indexes/datetimes/test_indexing.py @@ -679,18 +679,18 @@ def test_maybe_cast_slice_bounds_empty(self): # GH#14354 empty_idx = date_range(freq="1H", periods=0, end="2015") - right = empty_idx._maybe_cast_slice_bound("2015-01-02", "right", "loc") + right = empty_idx._maybe_cast_slice_bound("2015-01-02", "right") exp = Timestamp("2015-01-02 23:59:59.999999999") assert right == exp - left = empty_idx._maybe_cast_slice_bound("2015-01-02", "left", "loc") + left = empty_idx._maybe_cast_slice_bound("2015-01-02", "left") exp = Timestamp("2015-01-02 00:00:00") assert left == exp def test_maybe_cast_slice_duplicate_monotonic(self): # https://github.com/pandas-dev/pandas/issues/16515 idx = DatetimeIndex(["2017", "2017"]) - result = idx._maybe_cast_slice_bound("2017-01-01", "left", "loc") + result = idx._maybe_cast_slice_bound("2017-01-01", "left") expected = Timestamp("2017-01-01") assert 
result == expected diff --git a/pandas/tests/indexes/period/test_partial_slicing.py b/pandas/tests/indexes/period/test_partial_slicing.py index b0e573250d02e..148999d90d554 100644 --- a/pandas/tests/indexes/period/test_partial_slicing.py +++ b/pandas/tests/indexes/period/test_partial_slicing.py @@ -110,9 +110,9 @@ def test_maybe_cast_slice_bound(self, make_range, frame_or_series): # Check the lower-level calls are raising where expected. with pytest.raises(TypeError, match=msg): - idx._maybe_cast_slice_bound("foo", "left", "loc") + idx._maybe_cast_slice_bound("foo", "left") with pytest.raises(TypeError, match=msg): - idx.get_slice_bound("foo", "left", "loc") + idx.get_slice_bound("foo", "left") with pytest.raises(TypeError, match=msg): obj["2013/09/30":"foo"] diff --git a/pandas/tests/indexes/test_indexing.py b/pandas/tests/indexes/test_indexing.py index 8d2637e4a06f6..379c766b94d6c 100644 --- a/pandas/tests/indexes/test_indexing.py +++ b/pandas/tests/indexes/test_indexing.py @@ -259,3 +259,16 @@ def test_getitem_deprecated_float(idx): expected = idx[1] assert result == expected + + +def test_maybe_cast_slice_bound_kind_deprecated(index): + if not len(index): + return + + with tm.assert_produces_warning(FutureWarning): + # passed as keyword + index._maybe_cast_slice_bound(index[0], "left", kind="loc") + + with tm.assert_produces_warning(FutureWarning): + # pass as positional + index._maybe_cast_slice_bound(index[0], "left", "loc")
- [ ] closes #xxxx - [ ] tests added / passed - [ ] Ensure all linting tests pass, see [here](https://pandas.pydata.org/pandas-docs/dev/development/contributing.html#code-standards) for how to run them - [ ] whatsnew entry When I tried just removing it, that broke some dask tests.
https://api.github.com/repos/pandas-dev/pandas/pulls/41378
2021-05-08T04:29:48Z
2021-05-10T23:33:09Z
2021-05-10T23:33:09Z
2021-07-22T21:59:56Z
REF: document casting behavior in groupby
diff --git a/pandas/core/groupby/generic.py b/pandas/core/groupby/generic.py index 7796df98395a7..123b9e3350fda 100644 --- a/pandas/core/groupby/generic.py +++ b/pandas/core/groupby/generic.py @@ -539,9 +539,6 @@ def _transform_general(self, func: Callable, *args, **kwargs) -> Series: object.__setattr__(group, "name", name) res = func(group, *args, **kwargs) - if isinstance(res, (DataFrame, Series)): - res = res._values - results.append(klass(res, index=group.index)) # check for empty "results" to avoid concat ValueError @@ -1251,12 +1248,11 @@ def _wrap_applied_output_series( columns = key_index stacked_values = stacked_values.T + if stacked_values.dtype == object: + # We'll have the DataFrame constructor do inference + stacked_values = stacked_values.tolist() result = self.obj._constructor(stacked_values, index=index, columns=columns) - # if we have date/time like in the original, then coerce dates - # as we are stacking can easily have object dtypes here - result = result._convert(datetime=True) - if not self.as_index: self._insert_inaxis_grouper_inplace(result) diff --git a/pandas/core/groupby/groupby.py b/pandas/core/groupby/groupby.py index 2091d2fc484e1..0b07668a9fea2 100644 --- a/pandas/core/groupby/groupby.py +++ b/pandas/core/groupby/groupby.py @@ -1329,7 +1329,10 @@ def _agg_py_fallback( # reductions; see GH#28949 ser = df.iloc[:, 0] - res_values = self.grouper.agg_series(ser, alt) + # We do not get here with UDFs, so we know that our dtype + # should always be preserved by the implemented aggregations + # TODO: Is this exactly right; see WrappedCythonOp get_result_dtype? 
+ res_values = self.grouper.agg_series(ser, alt, preserve_dtype=True) if isinstance(values, Categorical): # Because we only get here with known dtype-preserving diff --git a/pandas/core/groupby/ops.py b/pandas/core/groupby/ops.py index 3045451974ee7..8b6136b3abc42 100644 --- a/pandas/core/groupby/ops.py +++ b/pandas/core/groupby/ops.py @@ -966,11 +966,24 @@ def _cython_operation( ) @final - def agg_series(self, obj: Series, func: F) -> ArrayLike: + def agg_series( + self, obj: Series, func: F, preserve_dtype: bool = False + ) -> ArrayLike: + """ + Parameters + ---------- + obj : Series + func : function taking a Series and returning a scalar-like + preserve_dtype : bool + Whether the aggregation is known to be dtype-preserving. + + Returns + ------- + np.ndarray or ExtensionArray + """ # test_groupby_empty_with_category gets here with self.ngroups == 0 # and len(obj) > 0 - cast_back = True if len(obj) == 0: # SeriesGrouper would raise if we were to call _aggregate_series_fast result = self._aggregate_series_pure_python(obj, func) @@ -982,17 +995,21 @@ def agg_series(self, obj: Series, func: F) -> ArrayLike: # TODO: can we get a performant workaround for EAs backed by ndarray? result = self._aggregate_series_pure_python(obj, func) + # we can preserve a little bit more aggressively with EA dtype + # because maybe_cast_pointwise_result will do a try/except + # with _from_sequence. NB we are assuming here that _from_sequence + # is sufficiently strict that it casts appropriately. + preserve_dtype = True + elif obj.index._has_complex_internals: # Preempt TypeError in _aggregate_series_fast result = self._aggregate_series_pure_python(obj, func) else: result = self._aggregate_series_fast(obj, func) - cast_back = False npvalues = lib.maybe_convert_objects(result, try_float=False) - if cast_back: - # TODO: Is there a documented reason why we dont always cast_back? 
+ if preserve_dtype: out = maybe_cast_pointwise_result(npvalues, obj.dtype, numeric_only=True) else: out = npvalues
- [ ] closes #xxxx - [ ] tests added / passed - [ ] Ensure all linting tests pass, see [here](https://pandas.pydata.org/pandas-docs/dev/development/contributing.html#code-standards) for how to run them - [ ] whatsnew entry
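As a sketch of the dtype preservation that the ``preserve_dtype`` flag above is documenting (assuming a recent pandas; whether a given reduction takes the object fallback path is an internal detail):

```python
import pandas as pd

# A nullable-integer column: aggregations that fall back to the
# pure-Python path can cast back via maybe_cast_pointwise_result,
# so the Int64 dtype survives the groupby reduction
df = pd.DataFrame(
    {"key": ["a", "a", "b"], "val": pd.array([1, 2, 3], dtype="Int64")}
)
result = df.groupby("key")["val"].max()
```

The result keeps the ``Int64`` extension dtype rather than degrading to a NumPy dtype or object.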
https://api.github.com/repos/pandas-dev/pandas/pulls/41376
2021-05-08T01:02:29Z
2021-05-17T19:18:11Z
2021-05-17T19:18:11Z
2021-05-17T19:25:12Z
REF: do less in Grouping.__init__
diff --git a/pandas/core/groupby/generic.py b/pandas/core/groupby/generic.py index cb5b54ca0c598..d4d20809ada85 100644 --- a/pandas/core/groupby/generic.py +++ b/pandas/core/groupby/generic.py @@ -777,11 +777,7 @@ def apply_series_value_counts(): # multi-index components codes = self.grouper.reconstructed_codes codes = [rep(level_codes) for level_codes in codes] + [llab(lab, inc)] - # error: List item 0 has incompatible type "Union[ndarray, Any]"; - # expected "Index" - levels = [ping.group_index for ping in self.grouper.groupings] + [ - lev # type: ignore[list-item] - ] + levels = [ping.group_index for ping in self.grouper.groupings] + [lev] names = self.grouper.names + [self.obj.name] if dropna: diff --git a/pandas/core/groupby/grouper.py b/pandas/core/groupby/grouper.py index 1b5c11b363457..598750475f3e8 100644 --- a/pandas/core/groupby/grouper.py +++ b/pandas/core/groupby/grouper.py @@ -438,6 +438,9 @@ class Grouping: * groups : dict of {group -> label_list} """ + _codes: np.ndarray | None = None + _group_index: Index | None = None + def __init__( self, index: Index, @@ -461,6 +464,8 @@ def __init__( self.in_axis = in_axis self.dropna = dropna + self._passed_categorical = False + # right place for this? 
if isinstance(grouper, (Series, Index)) and name is None: self.name = grouper.name @@ -468,20 +473,16 @@ def __init__( # we have a single grouper which may be a myriad of things, # some of which are dependent on the passing in level - if level is not None: - if not isinstance(level, int): - if level not in index.names: - raise AssertionError(f"Level {level} not in index") - level = index.names.index(level) - + ilevel = self._ilevel + if ilevel is not None: if self.name is None: - self.name = index.names[level] + self.name = index.names[ilevel] ( - self.grouper, + self.grouper, # Index self._codes, self._group_index, - ) = index._get_grouper_for_level(self.grouper, level) + ) = index._get_grouper_for_level(self.grouper, ilevel) # a passed Grouper like, directly get the grouper in the same way # as single grouper groupby, use the group_info to get codes @@ -502,32 +503,13 @@ def __init__( self.grouper = grouper._get_grouper() else: - # a passed Categorical if is_categorical_dtype(self.grouper): + self._passed_categorical = True self.grouper, self.all_grouper = recode_for_groupby( self.grouper, self.sort, observed ) - categories = self.grouper.categories - - # we make a CategoricalIndex out of the cat grouper - # preserving the categories / ordered attributes - self._codes = self.grouper.codes - if observed: - codes = algorithms.unique1d(self.grouper.codes) - codes = codes[codes != -1] - if sort or self.grouper.ordered: - codes = np.sort(codes) - else: - codes = np.arange(len(categories)) - - self._group_index = CategoricalIndex( - Categorical.from_codes( - codes=codes, categories=categories, ordered=self.grouper.ordered - ), - name=self.name, - ) # we are done elif isinstance(self.grouper, Grouping): @@ -564,8 +546,20 @@ def __repr__(self) -> str: def __iter__(self): return iter(self.indices) - _codes: np.ndarray | None = None - _group_index: Index | None = None + @cache_readonly + def _ilevel(self) -> int | None: + """ + If necessary, converted index level name to 
index level position. + """ + level = self.level + if level is None: + return None + if not isinstance(level, int): + index = self.index + if level not in index.names: + raise AssertionError(f"Level {level} not in index") + return index.names.index(level) + return level @property def ngroups(self) -> int: @@ -582,6 +576,12 @@ def indices(self): @property def codes(self) -> np.ndarray: + if self._passed_categorical: + # we make a CategoricalIndex out of the cat grouper + # preserving the categories / ordered attributes + cat = self.grouper + return cat.codes + if self._codes is None: self._make_codes() # error: Incompatible return value type (got "Optional[ndarray]", @@ -592,12 +592,33 @@ def codes(self) -> np.ndarray: def result_index(self) -> Index: if self.all_grouper is not None: group_idx = self.group_index - assert isinstance(group_idx, CategoricalIndex) # set in __init__ + assert isinstance(group_idx, CategoricalIndex) return recode_from_groupby(self.all_grouper, self.sort, group_idx) return self.group_index - @property + @cache_readonly def group_index(self) -> Index: + if self._passed_categorical: + # we make a CategoricalIndex out of the cat grouper + # preserving the categories / ordered attributes + cat = self.grouper + categories = cat.categories + + if self.observed: + codes = algorithms.unique1d(cat.codes) + codes = codes[codes != -1] + if self.sort or cat.ordered: + codes = np.sort(codes) + else: + codes = np.arange(len(categories)) + + return CategoricalIndex( + Categorical.from_codes( + codes=codes, categories=categories, ordered=cat.ordered + ), + name=self.name, + ) + if self._group_index is None: self._make_codes() assert self._group_index is not None
Making it incrementally easier to reason about what `self.grouper` is.
https://api.github.com/repos/pandas-dev/pandas/pulls/41375
2021-05-07T21:53:45Z
2021-05-17T14:25:50Z
2021-05-17T14:25:50Z
2021-05-17T14:41:07Z
BUG: loc casting to float for scalar with MultiIndex df
diff --git a/doc/source/whatsnew/v1.3.0.rst b/doc/source/whatsnew/v1.3.0.rst index 258e391b9220c..9b96c6b95301d 100644 --- a/doc/source/whatsnew/v1.3.0.rst +++ b/doc/source/whatsnew/v1.3.0.rst @@ -872,6 +872,7 @@ Indexing - Bug in :meth:`DataFrame.__setitem__` and :meth:`DataFrame.iloc.__setitem__` raising ``ValueError`` when trying to index with a row-slice and setting a list as values (:issue:`40440`) - Bug in :meth:`DataFrame.loc` not raising ``KeyError`` when key was not found in :class:`MultiIndex` when levels contain more values than used (:issue:`41170`) - Bug in :meth:`DataFrame.loc.__setitem__` when setting-with-expansion incorrectly raising when the index in the expanding axis contains duplicates (:issue:`40096`) +- Bug in :meth:`DataFrame.loc.__getitem__` with :class:`MultiIndex` casting to float when at least one column is from has float dtype and we retrieve a scalar (:issue:`41369`) - Bug in :meth:`DataFrame.loc` incorrectly matching non-boolean index elements (:issue:`20432`) - Bug in :meth:`Series.__delitem__` with ``ExtensionDtype`` incorrectly casting to ``ndarray`` (:issue:`40386`) - Bug in :meth:`DataFrame.__setitem__` raising ``TypeError`` when using a str subclass as the column name with a :class:`DatetimeIndex` (:issue:`37366`) diff --git a/pandas/core/indexing.py b/pandas/core/indexing.py index 0a06dff790cbf..be5b89f08b5ca 100644 --- a/pandas/core/indexing.py +++ b/pandas/core/indexing.py @@ -886,26 +886,22 @@ def _getitem_nested_tuple(self, tup: tuple): # handle the multi-axis by taking sections and reducing # this is iterative obj = self.obj - axis = 0 - for key in tup: + # GH#41369 Loop in reverse order ensures indexing along columns before rows + # which selects only necessary blocks which avoids dtype conversion if possible + axis = len(tup) - 1 + for key in tup[::-1]: if com.is_null_slice(key): - axis += 1 + axis -= 1 continue - current_ndim = obj.ndim obj = getattr(obj, self.name)._getitem_axis(key, axis=axis) - axis += 1 + axis -= 1 
# if we have a scalar, we are done if is_scalar(obj) or not hasattr(obj, "ndim"): break - # has the dim of the obj changed? - # GH 7199 - if obj.ndim < current_ndim: - axis -= 1 - return obj def _convert_to_indexer(self, key, axis: int, is_setter: bool = False): diff --git a/pandas/tests/indexing/multiindex/test_loc.py b/pandas/tests/indexing/multiindex/test_loc.py index 558270ac86532..f9e2d1280b33a 100644 --- a/pandas/tests/indexing/multiindex/test_loc.py +++ b/pandas/tests/indexing/multiindex/test_loc.py @@ -831,3 +831,16 @@ def test_mi_add_cell_missing_row_non_unique(): columns=MultiIndex.from_product([[1, 2], ["A", "B"]]), ) tm.assert_frame_equal(result, expected) + + +def test_loc_get_scalar_casting_to_float(): + # GH#41369 + df = DataFrame( + {"a": 1.0, "b": 2}, index=MultiIndex.from_arrays([[3], [4]], names=["c", "d"]) + ) + result = df.loc[(3, 4), "b"] + assert result == 2 + assert isinstance(result, np.int64) + result = df.loc[[(3, 4)], "b"].iloc[0] + assert result == 2 + assert isinstance(result, np.int64)
- [x] closes #41369 - [x] tests added / passed - [x] Ensure all linting tests pass, see [here](https://pandas.pydata.org/pandas-docs/dev/development/contributing.html#code-standards) for how to run them - [x] whatsnew entry In theory this would solve the issue, but I am not sure if this is desirable. We cast the row to a series, which forces the dtype conversion. If we loop in reverse, we retrieve a column as a series, which would avoid the conversion. Would you mind having a look, @jbrockmendel?
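A minimal reproduction of the behavior described above, taken from the test added in the diff (before this change the scalar lookup came back upcast to ``2.0``):

```python
import numpy as np
import pandas as pd

# A frame with one float and one int column under a MultiIndex
df = pd.DataFrame(
    {"a": 1.0, "b": 2},
    index=pd.MultiIndex.from_arrays([[3], [4]], names=["c", "d"]),
)

# Looping over the tuple in reverse indexes along columns first,
# selecting only the int block, so the scalar keeps its integer dtype
result = df.loc[(3, 4), "b"]
```

On a pandas version with this fix, ``result`` is an integer scalar rather than a float.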
https://api.github.com/repos/pandas-dev/pandas/pulls/41374
2021-05-07T21:53:12Z
2021-05-25T12:28:55Z
2021-05-25T12:28:54Z
2021-06-03T22:08:41Z
ENH: retain masked EA dtypes in groupby with as_index=False
diff --git a/doc/source/whatsnew/v1.4.0.rst b/doc/source/whatsnew/v1.4.0.rst index 08f30f467dfa7..c0f05108ff464 100644 --- a/doc/source/whatsnew/v1.4.0.rst +++ b/doc/source/whatsnew/v1.4.0.rst @@ -29,6 +29,7 @@ enhancement2 Other enhancements ^^^^^^^^^^^^^^^^^^ +- :class:`DataFrameGroupBy` operations with ``as_index=False`` now correctly retain ``ExtensionDtype`` dtypes for columns being grouped on (:issue:`41373`) - Add support for assigning values to ``by`` argument in :meth:`DataFrame.plot.hist` and :meth:`DataFrame.plot.box` (:issue:`15079`) - :meth:`Series.sample`, :meth:`DataFrame.sample`, and :meth:`.GroupBy.sample` now accept a ``np.random.Generator`` as input to ``random_state``. A generator will be more performant, especially with ``replace=False`` (:issue:`38100`) - Additional options added to :meth:`.Styler.bar` to control alignment and display, with keyword only arguments (:issue:`26070`, :issue:`36419`) diff --git a/pandas/core/groupby/generic.py b/pandas/core/groupby/generic.py index a6be85bf2be2a..59c57cf4a1ea0 100644 --- a/pandas/core/groupby/generic.py +++ b/pandas/core/groupby/generic.py @@ -1033,7 +1033,7 @@ def aggregate(self, func=None, *args, engine=None, engine_kwargs=None, **kwargs) self._insert_inaxis_grouper_inplace(result) result.index = Index(range(len(result))) - return result._convert(datetime=True) + return result agg = aggregate @@ -1684,6 +1684,8 @@ def _wrap_agged_manager(self, mgr: Manager2D) -> DataFrame: if self.axis == 1: result = result.T + # Note: we only need to pass datetime=True in order to get numeric + # values converted return self._reindex_output(result)._convert(datetime=True) def _iterate_column_groupbys(self, obj: FrameOrSeries): diff --git a/pandas/core/groupby/grouper.py b/pandas/core/groupby/grouper.py index 3307558deec33..76815d780a1ad 100644 --- a/pandas/core/groupby/grouper.py +++ b/pandas/core/groupby/grouper.py @@ -619,11 +619,20 @@ def group_arraylike(self) -> ArrayLike: Analogous to result_index, but 
holding an ArrayLike to ensure we can can retain ExtensionDtypes. """ + if self._group_index is not None: + # _group_index is set in __init__ for MultiIndex cases + return self._group_index._values + + elif self._all_grouper is not None: + # retain dtype for categories, including unobserved ones + return self.result_index._values + return self._codes_and_uniques[1] @cache_readonly def result_index(self) -> Index: - # TODO: what's the difference between result_index vs group_index? + # result_index retains dtype for categories, including unobserved ones, + # which group_index does not if self._all_grouper is not None: group_idx = self.group_index assert isinstance(group_idx, CategoricalIndex) @@ -635,7 +644,8 @@ def group_index(self) -> Index: if self._group_index is not None: # _group_index is set in __init__ for MultiIndex cases return self._group_index - uniques = self.group_arraylike + + uniques = self._codes_and_uniques[1] return Index(uniques, name=self.name) @cache_readonly diff --git a/pandas/core/groupby/ops.py b/pandas/core/groupby/ops.py index 36fbda5974ea0..6d8881d12dbb7 100644 --- a/pandas/core/groupby/ops.py +++ b/pandas/core/groupby/ops.py @@ -885,6 +885,7 @@ def result_arraylike(self) -> ArrayLike: if len(self.groupings) == 1: return self.groupings[0].group_arraylike + # result_index is MultiIndex return self.result_index._values @cache_readonly @@ -903,12 +904,12 @@ def get_group_levels(self) -> list[ArrayLike]: # Note: only called from _insert_inaxis_grouper_inplace, which # is only called for BaseGrouper, never for BinGrouper if len(self.groupings) == 1: - return [self.groupings[0].result_index] + return [self.groupings[0].group_arraylike] name_list = [] for ping, codes in zip(self.groupings, self.reconstructed_codes): codes = ensure_platform_int(codes) - levels = ping.result_index.take(codes) + levels = ping.group_arraylike.take(codes) name_list.append(levels) diff --git a/pandas/tests/extension/base/groupby.py 
b/pandas/tests/extension/base/groupby.py index 1a045fa33f487..4c8dc6ca1ad9c 100644 --- a/pandas/tests/extension/base/groupby.py +++ b/pandas/tests/extension/base/groupby.py @@ -22,14 +22,14 @@ def test_grouping_grouper(self, data_for_grouping): def test_groupby_extension_agg(self, as_index, data_for_grouping): df = pd.DataFrame({"A": [1, 1, 2, 2, 3, 3, 1, 4], "B": data_for_grouping}) result = df.groupby("B", as_index=as_index).A.mean() - _, index = pd.factorize(data_for_grouping, sort=True) + _, uniques = pd.factorize(data_for_grouping, sort=True) - index = pd.Index(index, name="B") - expected = pd.Series([3.0, 1.0, 4.0], index=index, name="A") if as_index: + index = pd.Index(uniques, name="B") + expected = pd.Series([3.0, 1.0, 4.0], index=index, name="A") self.assert_series_equal(result, expected) else: - expected = expected.reset_index() + expected = pd.DataFrame({"B": uniques, "A": [3.0, 1.0, 4.0]}) self.assert_frame_equal(result, expected) def test_groupby_agg_extension(self, data_for_grouping): diff --git a/pandas/tests/extension/json/test_json.py b/pandas/tests/extension/json/test_json.py index b8fa158083327..b5bb68e8a9a12 100644 --- a/pandas/tests/extension/json/test_json.py +++ b/pandas/tests/extension/json/test_json.py @@ -312,10 +312,6 @@ def test_groupby_extension_apply(self): we'll be able to dispatch unique. 
""" - @pytest.mark.parametrize("as_index", [True, False]) - def test_groupby_extension_agg(self, as_index, data_for_grouping): - super().test_groupby_extension_agg(as_index, data_for_grouping) - @pytest.mark.xfail(reason="GH#39098: Converts agg result to object") def test_groupby_agg_extension(self, data_for_grouping): super().test_groupby_agg_extension(data_for_grouping) diff --git a/pandas/tests/extension/test_boolean.py b/pandas/tests/extension/test_boolean.py index 172137ff3a5a2..395540993dc15 100644 --- a/pandas/tests/extension/test_boolean.py +++ b/pandas/tests/extension/test_boolean.py @@ -269,14 +269,14 @@ def test_grouping_grouper(self, data_for_grouping): def test_groupby_extension_agg(self, as_index, data_for_grouping): df = pd.DataFrame({"A": [1, 1, 2, 2, 3, 3, 1], "B": data_for_grouping}) result = df.groupby("B", as_index=as_index).A.mean() - _, index = pd.factorize(data_for_grouping, sort=True) + _, uniques = pd.factorize(data_for_grouping, sort=True) - index = pd.Index(index, name="B") - expected = pd.Series([3.0, 1.0], index=index, name="A") if as_index: + index = pd.Index(uniques, name="B") + expected = pd.Series([3.0, 1.0], index=index, name="A") self.assert_series_equal(result, expected) else: - expected = expected.reset_index() + expected = pd.DataFrame({"B": uniques, "A": [3.0, 1.0]}) self.assert_frame_equal(result, expected) def test_groupby_agg_extension(self, data_for_grouping): diff --git a/pandas/tests/groupby/test_groupby.py b/pandas/tests/groupby/test_groupby.py index bcdb6817c0321..538a707aa3580 100644 --- a/pandas/tests/groupby/test_groupby.py +++ b/pandas/tests/groupby/test_groupby.py @@ -717,6 +717,10 @@ def test_ops_not_as_index(reduction_func): expected = expected.rename("size") expected = expected.reset_index() + if reduction_func != "size": + # 32 bit compat -> groupby preserves dtype whereas reset_index casts to int64 + expected["a"] = expected["a"].astype(df["a"].dtype) + g = df.groupby("a", as_index=False) result = getattr(g, 
reduction_func)()
- [ ] closes #xxxx - [x] tests added / passed - [x] Ensure all linting tests pass, see [here](https://pandas.pydata.org/pandas-docs/dev/development/contributing.html#code-standards) for how to run them - [x] whatsnew entry
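A sketch of the enhancement in the whatsnew entry above (assuming a pandas version that includes this change):

```python
import pandas as pd

df = pd.DataFrame(
    {"a": pd.array([1, 1, 2], dtype="Int64"), "b": [1.0, 2.0, 3.0]}
)

# With as_index=False the grouped-on column "a" is re-inserted as a
# regular column; after this change it retains its Int64 extension
# dtype instead of being cast to a NumPy dtype
result = df.groupby("a", as_index=False).sum()
```

The grouping column keeps its masked extension dtype in the output frame.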
https://api.github.com/repos/pandas-dev/pandas/pulls/41373
2021-05-07T20:27:00Z
2021-07-25T14:23:16Z
2021-07-25T14:23:16Z
2021-07-25T14:42:04Z
CLN: Remove raise if missing only controlling the error message
diff --git a/pandas/core/frame.py b/pandas/core/frame.py index a1b46c42f6d68..9a57d86d62fdc 100644 --- a/pandas/core/frame.py +++ b/pandas/core/frame.py @@ -3412,7 +3412,7 @@ def __getitem__(self, key): else: if is_iterator(key): key = list(key) - indexer = self.loc._get_listlike_indexer(key, axis=1, raise_missing=True)[1] + indexer = self.loc._get_listlike_indexer(key, axis=1)[1] # take() does not accept boolean indexers if getattr(indexer, "dtype", None) == bool: diff --git a/pandas/core/indexing.py b/pandas/core/indexing.py index b267472eba573..96aeda955df01 100644 --- a/pandas/core/indexing.py +++ b/pandas/core/indexing.py @@ -11,8 +11,6 @@ import numpy as np -from pandas._config.config import option_context - from pandas._libs.indexing import NDFrameIndexerBase from pandas._libs.lib import item_from_zerodim from pandas.errors import ( @@ -1089,7 +1087,7 @@ def _getitem_iterable(self, key, axis: int): self._validate_key(key, axis) # A collection of keys - keyarr, indexer = self._get_listlike_indexer(key, axis, raise_missing=False) + keyarr, indexer = self._get_listlike_indexer(key, axis) return self.obj._reindex_with_indexers( {axis: [keyarr, indexer]}, copy=True, allow_dups=True ) @@ -1255,8 +1253,7 @@ def _convert_to_indexer(self, key, axis: int, is_setter: bool = False): (inds,) = key.nonzero() return inds else: - # When setting, missing keys are not allowed, even with .loc: - return self._get_listlike_indexer(key, axis, raise_missing=True)[1] + return self._get_listlike_indexer(key, axis)[1] else: try: return labels.get_loc(key) @@ -1266,7 +1263,7 @@ def _convert_to_indexer(self, key, axis: int, is_setter: bool = False): return {"key": key} raise - def _get_listlike_indexer(self, key, axis: int, raise_missing: bool = False): + def _get_listlike_indexer(self, key, axis: int): """ Transform a list-like of keys into a new index and an indexer. @@ -1276,16 +1273,11 @@ def _get_listlike_indexer(self, key, axis: int, raise_missing: bool = False): Targeted labels. 
axis: int Dimension on which the indexing is being made. - raise_missing: bool, default False - Whether to raise a KeyError if some labels were not found. - Will be removed in the future, and then this method will always behave as - if ``raise_missing=True``. Raises ------ KeyError - If at least one key was requested but none was found, and - raise_missing=True. + If at least one key was requested but none was found. Returns ------- @@ -1310,12 +1302,10 @@ def _get_listlike_indexer(self, key, axis: int, raise_missing: bool = False): else: keyarr, indexer, new_indexer = ax._reindex_non_unique(keyarr) - self._validate_read_indexer(keyarr, indexer, axis, raise_missing=raise_missing) + self._validate_read_indexer(keyarr, indexer, axis) return keyarr, indexer - def _validate_read_indexer( - self, key, indexer, axis: int, raise_missing: bool = False - ): + def _validate_read_indexer(self, key, indexer, axis: int): """ Check that indexer can be used to return a result. @@ -1331,16 +1321,11 @@ def _validate_read_indexer( (with -1 indicating not found). axis : int Dimension on which the indexing is being made. - raise_missing: bool - Whether to raise a KeyError if some labels are not found. Will be - removed in the future, and then this method will always behave as - if raise_missing=True. Raises ------ KeyError - If at least one key was requested but none was found, and - raise_missing=True. + If at least one key was requested but none was found. """ if len(key) == 0: return @@ -1356,21 +1341,8 @@ def _validate_read_indexer( ax = self.obj._get_axis(axis) - # We (temporarily) allow for some missing keys with .loc, except in - # some cases (e.g. 
setting) in which "raise_missing" will be False - if raise_missing: - not_found = list(set(key) - set(ax)) - raise KeyError(f"{not_found} not in index") - - not_found = key[missing_mask] - - with option_context("display.max_seq_items", 10, "display.width", 80): - raise KeyError( - "Passing list-likes to .loc or [] with any missing labels " - "is no longer supported. " - f"The following labels were missing: {not_found}. " - "See https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#deprecate-loc-reindex-listlike" # noqa:E501 - ) + not_found = list(set(key) - set(ax)) + raise KeyError(f"{not_found} not in index") @doc(IndexingMixin.iloc) diff --git a/pandas/tests/indexing/test_categorical.py b/pandas/tests/indexing/test_categorical.py index 11943d353e8c8..cd49620f45fae 100644 --- a/pandas/tests/indexing/test_categorical.py +++ b/pandas/tests/indexing/test_categorical.py @@ -296,12 +296,7 @@ def test_loc_getitem_listlike_labels(self): def test_loc_getitem_listlike_unused_category(self): # GH#37901 a label that is in index.categories but not in index # listlike containing an element in the categories but not in the values - msg = ( - "The following labels were missing: CategoricalIndex(['e'], " - "categories=['c', 'a', 'b', 'e'], ordered=False, name='B', " - "dtype='category')" - ) - with pytest.raises(KeyError, match=re.escape(msg)): + with pytest.raises(KeyError, match=re.escape("['e'] not in index")): self.df2.loc[["a", "b", "e"]] def test_loc_getitem_label_unused_category(self): @@ -311,10 +306,7 @@ def test_loc_getitem_label_unused_category(self): def test_loc_getitem_non_category(self): # not all labels in the categories - msg = ( - "The following labels were missing: Index(['d'], dtype='object', name='B')" - ) - with pytest.raises(KeyError, match=re.escape(msg)): + with pytest.raises(KeyError, match=re.escape("['d'] not in index")): self.df2.loc[["a", "d"]] def test_loc_setitem_expansion_label_unused_category(self): @@ -346,8 +338,7 @@ def 
test_loc_listlike_dtypes(self): exp = DataFrame({"A": [1, 1, 2], "B": [4, 4, 5]}, index=exp_index) tm.assert_frame_equal(res, exp, check_index_type=True) - msg = "The following labels were missing: Index(['x'], dtype='object')" - with pytest.raises(KeyError, match=re.escape(msg)): + with pytest.raises(KeyError, match=re.escape("['x'] not in index")): df.loc[["a", "x"]] def test_loc_listlike_dtypes_duplicated_categories_and_codes(self): @@ -370,8 +361,7 @@ def test_loc_listlike_dtypes_duplicated_categories_and_codes(self): ) tm.assert_frame_equal(res, exp, check_index_type=True) - msg = "The following labels were missing: Index(['x'], dtype='object')" - with pytest.raises(KeyError, match=re.escape(msg)): + with pytest.raises(KeyError, match=re.escape("['x'] not in index")): df.loc[["a", "x"]] def test_loc_listlike_dtypes_unused_category(self): @@ -394,11 +384,10 @@ def test_loc_listlike_dtypes_unused_category(self): ) tm.assert_frame_equal(res, exp, check_index_type=True) - msg = "The following labels were missing: Index(['x'], dtype='object')" - with pytest.raises(KeyError, match=re.escape(msg)): + with pytest.raises(KeyError, match=re.escape("['x'] not in index")): df.loc[["a", "x"]] - def test_loc_getitem_listlike_unused_category_raises_keyerro(self): + def test_loc_getitem_listlike_unused_category_raises_keyerror(self): # key that is an *unused* category raises index = CategoricalIndex(["a", "b", "a", "c"], categories=list("abcde")) df = DataFrame({"A": [1, 2, 3, 4], "B": [5, 6, 7, 8]}, index=index) @@ -407,13 +396,7 @@ def test_loc_getitem_listlike_unused_category_raises_keyerro(self): # For comparison, check the scalar behavior df.loc["e"] - msg = ( - "Passing list-likes to .loc or [] with any missing labels is no " - "longer supported. The following labels were missing: " - "CategoricalIndex(['e'], categories=['a', 'b', 'c', 'd', 'e'], " - "ordered=False, dtype='category'). 
See https" - ) - with pytest.raises(KeyError, match=re.escape(msg)): + with pytest.raises(KeyError, match=re.escape("['e'] not in index")): df.loc[["a", "e"]] def test_ix_categorical_index(self): diff --git a/pandas/tests/indexing/test_floats.py b/pandas/tests/indexing/test_floats.py index a84be049ebff4..6116c34f238e2 100644 --- a/pandas/tests/indexing/test_floats.py +++ b/pandas/tests/indexing/test_floats.py @@ -535,10 +535,10 @@ def test_floating_misc(self, indexer_sl): result2 = s.iloc[[0, 2, 4]] tm.assert_series_equal(result1, result2) - with pytest.raises(KeyError, match="with any missing labels"): + with pytest.raises(KeyError, match="not in index"): indexer_sl(s)[[1.6, 5, 10]] - with pytest.raises(KeyError, match="with any missing labels"): + with pytest.raises(KeyError, match="not in index"): indexer_sl(s)[[0, 1, 2]] result = indexer_sl(s)[[2.5, 5]] diff --git a/pandas/tests/indexing/test_iloc.py b/pandas/tests/indexing/test_iloc.py index 446b616111e9e..1f50dacc4dffd 100644 --- a/pandas/tests/indexing/test_iloc.py +++ b/pandas/tests/indexing/test_iloc.py @@ -800,7 +800,7 @@ def test_iloc_non_unique_indexing(self): df2 = DataFrame({"A": [0.1] * 1000, "B": [1] * 1000}) df2 = concat([df2, 2 * df2, 3 * df2]) - with pytest.raises(KeyError, match="with any missing labels"): + with pytest.raises(KeyError, match="not in index"): df2.loc[idx] def test_iloc_empty_list_indexer_is_ok(self): diff --git a/pandas/tests/indexing/test_indexing.py b/pandas/tests/indexing/test_indexing.py index df688d6745096..0c20622311e1f 100644 --- a/pandas/tests/indexing/test_indexing.py +++ b/pandas/tests/indexing/test_indexing.py @@ -247,12 +247,12 @@ def test_dups_fancy_indexing_not_in_order(self): tm.assert_frame_equal(result, expected) rows = ["C", "B", "E"] - with pytest.raises(KeyError, match="with any missing labels"): + with pytest.raises(KeyError, match="not in index"): df.loc[rows] # see GH5553, make sure we use the right indexer rows = ["F", "G", "H", "C", "B", "E"] - with 
pytest.raises(KeyError, match="with any missing labels"): + with pytest.raises(KeyError, match="not in index"): df.loc[rows] def test_dups_fancy_indexing_only_missing_label(self): @@ -274,14 +274,14 @@ def test_dups_fancy_indexing_missing_label(self, vals): # GH 4619; duplicate indexer with missing label df = DataFrame({"A": vals}) - with pytest.raises(KeyError, match="with any missing labels"): + with pytest.raises(KeyError, match="not in index"): df.loc[[0, 8, 0]] def test_dups_fancy_indexing_non_unique(self): # non unique with non unique selector df = DataFrame({"test": [5, 7, 9, 11]}, index=["A", "A", "B", "C"]) - with pytest.raises(KeyError, match="with any missing labels"): + with pytest.raises(KeyError, match="not in index"): df.loc[["A", "A", "E"]] def test_dups_fancy_indexing2(self): @@ -289,7 +289,7 @@ def test_dups_fancy_indexing2(self): # dups on index and missing values df = DataFrame(np.random.randn(5, 5), columns=["A", "B", "B", "B", "A"]) - with pytest.raises(KeyError, match="with any missing labels"): + with pytest.raises(KeyError, match="not in index"): df.loc[:, ["A", "B", "C"]] def test_dups_fancy_indexing3(self): diff --git a/pandas/tests/indexing/test_loc.py b/pandas/tests/indexing/test_loc.py index 11391efde4956..382ea8c382824 100644 --- a/pandas/tests/indexing/test_loc.py +++ b/pandas/tests/indexing/test_loc.py @@ -293,11 +293,11 @@ def test_getitem_label_list_with_missing(self): s = Series(range(3), index=["a", "b", "c"]) # consistency - with pytest.raises(KeyError, match="with any missing labels"): + with pytest.raises(KeyError, match="not in index"): s[["a", "d"]] s = Series(range(3)) - with pytest.raises(KeyError, match="with any missing labels"): + with pytest.raises(KeyError, match="not in index"): s[[0, 3]] @pytest.mark.parametrize("index", [[True, False], [True, False, True, False]]) @@ -349,7 +349,7 @@ def test_loc_to_fail(self): s.loc[["4"]] s.loc[-1] = 3 - with pytest.raises(KeyError, match="with any missing labels"): + with 
pytest.raises(KeyError, match="not in index"): s.loc[[-1, -2]] s["a"] = 2 @@ -396,7 +396,7 @@ def test_loc_getitem_list_with_fail(self): s.loc[[3]] # a non-match and a match - with pytest.raises(KeyError, match="with any missing labels"): + with pytest.raises(KeyError, match="not in index"): s.loc[[2, 3]] def test_loc_index(self): @@ -2249,12 +2249,7 @@ def test_loc_getitem_list_of_labels_categoricalindex_with_na(self, box): ser2 = ser[:-1] ci2 = ci[1:] # but if there are no NAs present, this should raise KeyError - msg = ( - r"Passing list-likes to .loc or \[\] with any missing labels is no " - "longer supported. The following labels were missing: " - r"(Categorical)?Index\(\[nan\], .*\). " - "See https" - ) + msg = "not in index" with pytest.raises(KeyError, match=msg): ser2.loc[box(ci2)] @@ -2264,41 +2259,13 @@ def test_loc_getitem_list_of_labels_categoricalindex_with_na(self, box): with pytest.raises(KeyError, match=msg): ser2.to_frame().loc[box(ci2)] - def test_loc_getitem_many_missing_labels_inside_error_message_limited(self): - # GH#34272 - n = 10000 - missing_labels = [f"missing_{label}" for label in range(n)] - ser = Series({"a": 1, "b": 2, "c": 3}) - # regex checks labels between 4 and 9995 are replaced with ellipses - error_message_regex = "missing_4.*\\.\\.\\..*missing_9995" - with pytest.raises(KeyError, match=error_message_regex): - ser.loc[["a", "c"] + missing_labels] - - def test_loc_getitem_missing_labels_inside_matched_in_error_message(self): - # GH#34272 - ser = Series({"a": 1, "b": 2, "c": 3}) - error_message_regex = "missing_0.*missing_1.*missing_2" - with pytest.raises(KeyError, match=error_message_regex): - ser.loc[["a", "b", "missing_0", "c", "missing_1", "missing_2"]] - - def test_loc_getitem_long_text_missing_labels_inside_error_message_limited(self): - # GH#34272 - ser = Series({"a": 1, "b": 2, "c": 3}) - missing_labels = [f"long_missing_label_text_{i}" * 5 for i in range(3)] - # regex checks for very long labels there are new lines 
between each - error_message_regex = ( - "long_missing_label_text_0.*\\\\n.*long_missing_label_text_1" - ) - with pytest.raises(KeyError, match=error_message_regex): - ser.loc[["a", "c"] + missing_labels] - def test_loc_getitem_series_label_list_missing_values(self): # gh-11428 key = np.array( ["2001-01-04", "2001-01-02", "2001-01-04", "2001-01-14"], dtype="datetime64" ) ser = Series([2, 5, 8, 11], date_range("2001-01-01", freq="D", periods=4)) - with pytest.raises(KeyError, match="with any missing labels"): + with pytest.raises(KeyError, match="not in index"): ser.loc[key] def test_loc_getitem_series_label_list_missing_integer_values(self): @@ -2307,7 +2274,7 @@ def test_loc_getitem_series_label_list_missing_integer_values(self): index=np.array([9730701000001104, 10049011000001109]), data=np.array([999000011000001104, 999000011000001104]), ) - with pytest.raises(KeyError, match="with any missing labels"): + with pytest.raises(KeyError, match="not in index"): ser.loc[np.array([9730701000001104, 10047311000001102])] @pytest.mark.parametrize("to_period", [True, False]) @@ -2349,7 +2316,7 @@ def test_loc_getitem_listlike_of_datetimelike_keys(self, to_period): if to_period: keys = [x.to_period("D") for x in keys] - with pytest.raises(KeyError, match="with any missing labels"): + with pytest.raises(KeyError, match="not in index"): ser.loc[keys] diff --git a/pandas/tests/indexing/test_partial.py b/pandas/tests/indexing/test_partial.py index b8680cc4e611e..dd26a978fe81d 100644 --- a/pandas/tests/indexing/test_partial.py +++ b/pandas/tests/indexing/test_partial.py @@ -199,14 +199,14 @@ def test_series_partial_set(self): # loc equiv to .reindex expected = Series([np.nan, 0.2, np.nan], index=[3, 2, 3]) - with pytest.raises(KeyError, match="with any missing labels"): + with pytest.raises(KeyError, match=r"not in index"): ser.loc[[3, 2, 3]] result = ser.reindex([3, 2, 3]) tm.assert_series_equal(result, expected, check_index_type=True) expected = Series([np.nan, 0.2, np.nan, 
np.nan], index=[3, 2, 3, "x"]) - with pytest.raises(KeyError, match="with any missing labels"): + with pytest.raises(KeyError, match="not in index"): ser.loc[[3, 2, 3, "x"]] result = ser.reindex([3, 2, 3, "x"]) @@ -217,7 +217,7 @@ def test_series_partial_set(self): tm.assert_series_equal(result, expected, check_index_type=True) expected = Series([0.2, 0.2, np.nan, 0.1], index=[2, 2, "x", 1]) - with pytest.raises(KeyError, match="with any missing labels"): + with pytest.raises(KeyError, match="not in index"): ser.loc[[2, 2, "x", 1]] result = ser.reindex([2, 2, "x", 1]) @@ -232,7 +232,7 @@ def test_series_partial_set(self): ser.loc[[3, 3, 3]] expected = Series([0.2, 0.2, np.nan], index=[2, 2, 3]) - with pytest.raises(KeyError, match="with any missing labels"): + with pytest.raises(KeyError, match="not in index"): ser.loc[[2, 2, 3]] result = ser.reindex([2, 2, 3]) @@ -240,7 +240,7 @@ def test_series_partial_set(self): s = Series([0.1, 0.2, 0.3], index=[1, 2, 3]) expected = Series([0.3, np.nan, np.nan], index=[3, 4, 4]) - with pytest.raises(KeyError, match="with any missing labels"): + with pytest.raises(KeyError, match="not in index"): s.loc[[3, 4, 4]] result = s.reindex([3, 4, 4]) @@ -248,7 +248,7 @@ def test_series_partial_set(self): s = Series([0.1, 0.2, 0.3, 0.4], index=[1, 2, 3, 4]) expected = Series([np.nan, 0.3, 0.3], index=[5, 3, 3]) - with pytest.raises(KeyError, match="with any missing labels"): + with pytest.raises(KeyError, match="not in index"): s.loc[[5, 3, 3]] result = s.reindex([5, 3, 3]) @@ -256,7 +256,7 @@ def test_series_partial_set(self): s = Series([0.1, 0.2, 0.3, 0.4], index=[1, 2, 3, 4]) expected = Series([np.nan, 0.4, 0.4], index=[5, 4, 4]) - with pytest.raises(KeyError, match="with any missing labels"): + with pytest.raises(KeyError, match="not in index"): s.loc[[5, 4, 4]] result = s.reindex([5, 4, 4]) @@ -264,7 +264,7 @@ def test_series_partial_set(self): s = Series([0.1, 0.2, 0.3, 0.4], index=[4, 5, 6, 7]) expected = Series([0.4, np.nan, 
np.nan], index=[7, 2, 2]) - with pytest.raises(KeyError, match="with any missing labels"): + with pytest.raises(KeyError, match="not in index"): s.loc[[7, 2, 2]] result = s.reindex([7, 2, 2]) @@ -272,7 +272,7 @@ def test_series_partial_set(self): s = Series([0.1, 0.2, 0.3, 0.4], index=[1, 2, 3, 4]) expected = Series([0.4, np.nan, np.nan], index=[4, 5, 5]) - with pytest.raises(KeyError, match="with any missing labels"): + with pytest.raises(KeyError, match="not in index"): s.loc[[4, 5, 5]] result = s.reindex([4, 5, 5]) @@ -290,10 +290,10 @@ def test_series_partial_set_with_name(self): ser = Series([0.1, 0.2], index=idx, name="s") # loc - with pytest.raises(KeyError, match="with any missing labels"): + with pytest.raises(KeyError, match=r"\[3\] not in index"): ser.loc[[3, 2, 3]] - with pytest.raises(KeyError, match="with any missing labels"): + with pytest.raises(KeyError, match=r"not in index"): ser.loc[[3, 2, 3, "x"]] exp_idx = Index([2, 2, 1], dtype="int64", name="idx") @@ -301,7 +301,7 @@ def test_series_partial_set_with_name(self): result = ser.loc[[2, 2, 1]] tm.assert_series_equal(result, expected, check_index_type=True) - with pytest.raises(KeyError, match="with any missing labels"): + with pytest.raises(KeyError, match=r"\['x'\] not in index"): ser.loc[[2, 2, "x", 1]] # raises as nothing is in the index @@ -312,27 +312,27 @@ def test_series_partial_set_with_name(self): with pytest.raises(KeyError, match=msg): ser.loc[[3, 3, 3]] - with pytest.raises(KeyError, match="with any missing labels"): + with pytest.raises(KeyError, match="not in index"): ser.loc[[2, 2, 3]] idx = Index([1, 2, 3], dtype="int64", name="idx") - with pytest.raises(KeyError, match="with any missing labels"): + with pytest.raises(KeyError, match="not in index"): Series([0.1, 0.2, 0.3], index=idx, name="s").loc[[3, 4, 4]] idx = Index([1, 2, 3, 4], dtype="int64", name="idx") - with pytest.raises(KeyError, match="with any missing labels"): + with pytest.raises(KeyError, match="not in index"): 
Series([0.1, 0.2, 0.3, 0.4], index=idx, name="s").loc[[5, 3, 3]] idx = Index([1, 2, 3, 4], dtype="int64", name="idx") - with pytest.raises(KeyError, match="with any missing labels"): + with pytest.raises(KeyError, match="not in index"): Series([0.1, 0.2, 0.3, 0.4], index=idx, name="s").loc[[5, 4, 4]] idx = Index([4, 5, 6, 7], dtype="int64", name="idx") - with pytest.raises(KeyError, match="with any missing labels"): + with pytest.raises(KeyError, match="not in index"): Series([0.1, 0.2, 0.3, 0.4], index=idx, name="s").loc[[7, 2, 2]] idx = Index([1, 2, 3, 4], dtype="int64", name="idx") - with pytest.raises(KeyError, match="with any missing labels"): + with pytest.raises(KeyError, match="not in index"): Series([0.1, 0.2, 0.3, 0.4], index=idx, name="s").loc[[4, 5, 5]] # iloc @@ -591,7 +591,7 @@ def test_loc_with_list_of_strings_representing_datetimes_missing_value( # GH 11278 s = Series(range(20), index=idx) df = DataFrame(range(20), index=idx) - msg = r"with any missing labels" + msg = r"not in index" with pytest.raises(KeyError, match=msg): s.loc[labels] diff --git a/pandas/tests/series/indexing/test_getitem.py b/pandas/tests/series/indexing/test_getitem.py index 9a166fc8057ed..0e43e351bc082 100644 --- a/pandas/tests/series/indexing/test_getitem.py +++ b/pandas/tests/series/indexing/test_getitem.py @@ -604,10 +604,10 @@ def test_getitem_with_integer_labels(): ser = Series(np.random.randn(10), index=list(range(0, 20, 2))) inds = [0, 2, 5, 7, 8] arr_inds = np.array([0, 2, 5, 7, 8]) - with pytest.raises(KeyError, match="with any missing labels"): + with pytest.raises(KeyError, match="not in index"): ser[inds] - with pytest.raises(KeyError, match="with any missing labels"): + with pytest.raises(KeyError, match="not in index"): ser[arr_inds] diff --git a/pandas/tests/series/indexing/test_indexing.py b/pandas/tests/series/indexing/test_indexing.py index 30c37113f6b8f..6c3587c7eeada 100644 --- a/pandas/tests/series/indexing/test_indexing.py +++ 
b/pandas/tests/series/indexing/test_indexing.py @@ -1,6 +1,6 @@ """ test get/set & misc """ - from datetime import timedelta +import re import numpy as np import pytest @@ -149,7 +149,7 @@ def test_getitem_dups_with_missing(indexer_sl): # breaks reindex, so need to use .loc internally # GH 4246 ser = Series([1, 2, 3, 4], ["foo", "bar", "foo", "bah"]) - with pytest.raises(KeyError, match="with any missing labels"): + with pytest.raises(KeyError, match=re.escape("['bam'] not in index")): indexer_sl(ser)[["foo", "bar", "bah", "bam"]]
- [x] xref #41170 The removed tests don't make sense anymore, since the full list of missing labels is printed now, but without a fixed order
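A quick way to see the shortened message the updated tests match against (a minimal sketch; the frame and labels here are illustrative, assuming a pandas build that includes this change):

```python
import pandas as pd

# Indexing with a list that contains a missing label raises KeyError,
# and the message now reads "['e'] not in index" rather than the long
# "Passing list-likes to .loc ... with any missing labels ..." text.
df = pd.DataFrame({"A": [1, 2, 3]}, index=["a", "b", "c"])
try:
    df.loc[["a", "e"]]
except KeyError as exc:
    print(exc)
```

This is the behavior the `match="not in index"` assertions throughout the diff rely on.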
https://api.github.com/repos/pandas-dev/pandas/pulls/41371
2021-05-07T16:32:36Z
2021-05-07T22:07:35Z
2021-05-07T22:07:35Z
2021-05-12T01:57:02Z
Pin fastparquet to leq 0.5.0
diff --git a/ci/deps/actions-37-db.yaml b/ci/deps/actions-37-db.yaml index 8755e1a02c3cf..edca7b51a3420 100644 --- a/ci/deps/actions-37-db.yaml +++ b/ci/deps/actions-37-db.yaml @@ -15,7 +15,7 @@ dependencies: - beautifulsoup4 - botocore>=1.11 - dask - - fastparquet>=0.4.0 + - fastparquet>=0.4.0, <=0.5.0 - fsspec>=0.7.4 - gcsfs>=0.6.0 - geopandas diff --git a/ci/deps/azure-windows-38.yaml b/ci/deps/azure-windows-38.yaml index 661d8813d32d2..fdea34d573340 100644 --- a/ci/deps/azure-windows-38.yaml +++ b/ci/deps/azure-windows-38.yaml @@ -15,7 +15,7 @@ dependencies: # pandas dependencies - blosc - bottleneck - - fastparquet>=0.4.0 + - fastparquet>=0.4.0, <=0.5.0 - flask - fsspec>=0.8.0 - matplotlib=3.1.3 diff --git a/environment.yml b/environment.yml index 2e0228a15272e..30fa7c0dea696 100644 --- a/environment.yml +++ b/environment.yml @@ -99,7 +99,7 @@ dependencies: - xlwt - odfpy - - fastparquet>=0.3.2 # pandas.read_parquet, DataFrame.to_parquet + - fastparquet>=0.3.2, <=0.5.0 # pandas.read_parquet, DataFrame.to_parquet - pyarrow>=0.15.0 # pandas.read_parquet, DataFrame.to_parquet, pandas.read_feather, DataFrame.to_feather - python-snappy # required by pyarrow diff --git a/requirements-dev.txt b/requirements-dev.txt index ea7ca43742934..3e421c7715566 100644 --- a/requirements-dev.txt +++ b/requirements-dev.txt @@ -64,7 +64,7 @@ xlrd xlsxwriter xlwt odfpy -fastparquet>=0.3.2 +fastparquet>=0.3.2, <=0.5.0 pyarrow>=0.15.0 python-snappy pyqt5>=5.9.2
- [x] xref #41366 Pinning for now to get CI passing. Labeled the issue with 1.3 for now so that we don't forget it.
https://api.github.com/repos/pandas-dev/pandas/pulls/41370
2021-05-07T14:59:58Z
2021-05-07T16:41:07Z
2021-05-07T16:41:07Z
2021-06-01T14:58:14Z
CLN: groupby
diff --git a/pandas/core/groupby/generic.py b/pandas/core/groupby/generic.py index 55e8578b2cef4..9287163053cac 100644 --- a/pandas/core/groupby/generic.py +++ b/pandas/core/groupby/generic.py @@ -688,9 +688,9 @@ def describe(self, **kwargs): def value_counts( self, - normalize=False, - sort=True, - ascending=False, + normalize: bool = False, + sort: bool = True, + ascending: bool = False, bins=None, dropna: bool = True, ): @@ -715,7 +715,7 @@ def apply_series_value_counts(): # scalar bins cannot be done at top level # in a backward compatible way return apply_series_value_counts() - elif is_categorical_dtype(val): + elif is_categorical_dtype(val.dtype): # GH38672 return apply_series_value_counts() @@ -807,44 +807,36 @@ def apply_series_value_counts(): sorter = np.lexsort((out if ascending else -out, cat)) out, codes[-1] = out[sorter], codes[-1][sorter] - if bins is None: - mi = MultiIndex( - levels=levels, codes=codes, names=names, verify_integrity=False - ) - - if is_integer_dtype(out): - out = ensure_int64(out) - return self.obj._constructor(out, index=mi, name=self._selection_name) - - # for compat. with libgroupby.value_counts need to ensure every - # bin is present at every index level, null filled with zeros - diff = np.zeros(len(out), dtype="bool") - for level_codes in codes[:-1]: - diff |= np.r_[True, level_codes[1:] != level_codes[:-1]] + if bins is not None: + # for compat. 
with libgroupby.value_counts need to ensure every + # bin is present at every index level, null filled with zeros + diff = np.zeros(len(out), dtype="bool") + for level_codes in codes[:-1]: + diff |= np.r_[True, level_codes[1:] != level_codes[:-1]] - ncat, nbin = diff.sum(), len(levels[-1]) + ncat, nbin = diff.sum(), len(levels[-1]) - left = [np.repeat(np.arange(ncat), nbin), np.tile(np.arange(nbin), ncat)] + left = [np.repeat(np.arange(ncat), nbin), np.tile(np.arange(nbin), ncat)] - right = [diff.cumsum() - 1, codes[-1]] + right = [diff.cumsum() - 1, codes[-1]] - _, idx = get_join_indexers(left, right, sort=False, how="left") - out = np.where(idx != -1, out[idx], 0) + _, idx = get_join_indexers(left, right, sort=False, how="left") + out = np.where(idx != -1, out[idx], 0) - if sort: - sorter = np.lexsort((out if ascending else -out, left[0])) - out, left[-1] = out[sorter], left[-1][sorter] + if sort: + sorter = np.lexsort((out if ascending else -out, left[0])) + out, left[-1] = out[sorter], left[-1][sorter] - # build the multi-index w/ full levels - def build_codes(lev_codes: np.ndarray) -> np.ndarray: - return np.repeat(lev_codes[diff], nbin) + # build the multi-index w/ full levels + def build_codes(lev_codes: np.ndarray) -> np.ndarray: + return np.repeat(lev_codes[diff], nbin) - codes = [build_codes(lev_codes) for lev_codes in codes[:-1]] - codes.append(left[-1]) + codes = [build_codes(lev_codes) for lev_codes in codes[:-1]] + codes.append(left[-1]) mi = MultiIndex(levels=levels, codes=codes, names=names, verify_integrity=False) - if is_integer_dtype(out): + if is_integer_dtype(out.dtype): out = ensure_int64(out) return self.obj._constructor(out, index=mi, name=self._selection_name) diff --git a/pandas/core/groupby/groupby.py b/pandas/core/groupby/groupby.py index c20a9b7ad2210..1105c1bd1d782 100644 --- a/pandas/core/groupby/groupby.py +++ b/pandas/core/groupby/groupby.py @@ -1837,7 +1837,7 @@ def first(x: Series): return obj.apply(first, axis=axis) elif 
isinstance(obj, Series): return first(obj) - else: + else: # pragma: no cover raise TypeError(type(obj)) return self._agg_general( @@ -1862,7 +1862,7 @@ def last(x: Series): return obj.apply(last, axis=axis) elif isinstance(obj, Series): return last(obj) - else: + else: # pragma: no cover raise TypeError(type(obj)) return self._agg_general( @@ -3271,7 +3271,7 @@ def get_groupby( from pandas.core.groupby.generic import DataFrameGroupBy klass = DataFrameGroupBy - else: + else: # pragma: no cover raise TypeError(f"invalid type: {obj}") return klass( diff --git a/pandas/core/groupby/ops.py b/pandas/core/groupby/ops.py index b88f2b0200768..46b47bc29d8a6 100644 --- a/pandas/core/groupby/ops.py +++ b/pandas/core/groupby/ops.py @@ -276,11 +276,11 @@ def get_out_dtype(self, dtype: np.dtype) -> np.dtype: @overload def _get_result_dtype(self, dtype: np.dtype) -> np.dtype: - ... + ... # pragma: no cover @overload def _get_result_dtype(self, dtype: ExtensionDtype) -> ExtensionDtype: - ... + ... # pragma: no cover def _get_result_dtype(self, dtype: DtypeObj) -> DtypeObj: """
- [ ] closes #xxxx - [ ] tests added / passed - [ ] Ensure all linting tests pass, see [here](https://pandas.pydata.org/pandas-docs/dev/development/contributing.html#code-standards) for how to run them - [ ] whatsnew entry
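The largest piece of this cleanup reorganizes the `bins` branch of `SeriesGroupBy.value_counts`. A minimal sketch of the path it touches (column and group names here are illustrative): per the in-code comment, with `bins` every bin must be present for every group, with missing combinations zero-filled.

```python
import pandas as pd

# Exercise the bins branch of SeriesGroupBy.value_counts that this
# cleanup inverts from "if bins is None" to "if bins is not None":
# each of the 2 bins is reported for each of the 2 groups, and counts
# for empty (group, bin) pairs are filled with zeros.
df = pd.DataFrame({"key": ["x", "x", "x", "y"], "val": [1, 2, 2, 9]})
counts = df.groupby("key")["val"].value_counts(bins=2)
print(counts)
```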
https://api.github.com/repos/pandas-dev/pandas/pulls/41363
2021-05-07T04:18:19Z
2021-05-07T19:34:32Z
2021-05-07T19:34:32Z
2021-05-07T19:38:41Z
CLN: trim unreachable groupby paths
diff --git a/pandas/core/groupby/generic.py b/pandas/core/groupby/generic.py index 18506b871bda6..b2041a3e1380a 100644 --- a/pandas/core/groupby/generic.py +++ b/pandas/core/groupby/generic.py @@ -266,7 +266,11 @@ def aggregate(self, func=None, *args, engine=None, engine_kwargs=None, **kwargs) func = maybe_mangle_lambdas(func) ret = self._aggregate_multiple_funcs(func) if relabeling: - ret.columns = columns + # error: Incompatible types in assignment (expression has type + # "Optional[List[str]]", variable has type "Index") + ret.columns = columns # type: ignore[assignment] + return ret + else: cyfunc = com.get_cython_func(func) if cyfunc and not args and not kwargs: @@ -282,33 +286,21 @@ def aggregate(self, func=None, *args, engine=None, engine_kwargs=None, **kwargs) # see test_groupby.test_basic result = self._aggregate_named(func, *args, **kwargs) - index = Index(sorted(result), name=self.grouper.names[0]) - ret = create_series_with_explicit_dtype( - result, index=index, dtype_if_empty=object - ) - - if not self.as_index: # pragma: no cover - print("Warning, ignoring as_index=True") - - if isinstance(ret, dict): - from pandas import concat - - ret = concat(ret.values(), axis=1, keys=[key.label for key in ret.keys()]) - return ret + index = Index(sorted(result), name=self.grouper.names[0]) + return create_series_with_explicit_dtype( + result, index=index, dtype_if_empty=object + ) agg = aggregate - def _aggregate_multiple_funcs(self, arg): + def _aggregate_multiple_funcs(self, arg) -> DataFrame: if isinstance(arg, dict): # show the deprecation, but only if we # have not shown a higher level one # GH 15931 - if isinstance(self._selected_obj, Series): - raise SpecificationError("nested renamer is not supported") + raise SpecificationError("nested renamer is not supported") - columns = list(arg.keys()) - arg = arg.items() elif any(isinstance(x, (tuple, list)) for x in arg): arg = [(x, x) if not isinstance(x, (tuple, list)) else x for x in arg] @@ -335,8 +327,14 @@ 
def _aggregate_multiple_funcs(self, arg): results[base.OutputKey(label=name, position=idx)] = obj.aggregate(func) if any(isinstance(x, DataFrame) for x in results.values()): - # let higher level handle - return results + from pandas import concat + + res_df = concat( + results.values(), axis=1, keys=[key.label for key in results.keys()] + ) + # error: Incompatible return value type (got "Union[DataFrame, Series]", + # expected "DataFrame") + return res_df # type: ignore[return-value] indexed_output = {key.position: val for key, val in results.items()} output = self.obj._constructor_expanddim(indexed_output, index=None) @@ -1000,6 +998,11 @@ def aggregate(self, func=None, *args, engine=None, engine_kwargs=None, **kwargs) result = op.agg() if not is_dict_like(func) and result is not None: return result + elif relabeling and result is not None: + # this should be the only (non-raising) case with relabeling + # used reordered index of columns + result = result.iloc[:, order] + result.columns = columns if result is None: @@ -1039,12 +1042,6 @@ def aggregate(self, func=None, *args, engine=None, engine_kwargs=None, **kwargs) [sobj.columns.name] * result.columns.nlevels ).droplevel(-1) - if relabeling: - - # used reordered index of columns - result = result.iloc[:, order] - result.columns = columns - if not self.as_index: self._insert_inaxis_grouper_inplace(result) result.index = np.arange(len(result)) @@ -1389,9 +1386,7 @@ def _transform_item_by_item(self, obj: DataFrame, wrapper) -> DataFrame: if not output: raise TypeError("Transform function invalid for data types") - columns = obj.columns - if len(output) < len(obj.columns): - columns = columns.take(inds) + columns = obj.columns.take(inds) return self.obj._constructor(output, index=obj.index, columns=columns) diff --git a/pandas/core/groupby/groupby.py b/pandas/core/groupby/groupby.py index c5ef18c51a533..d222bc1c083ed 100644 --- a/pandas/core/groupby/groupby.py +++ b/pandas/core/groupby/groupby.py @@ -2174,7 +2174,9 
@@ def backfill(self, limit=None): @final @Substitution(name="groupby") @Substitution(see_also=_common_see_also) - def nth(self, n: int | list[int], dropna: str | None = None) -> DataFrame: + def nth( + self, n: int | list[int], dropna: Literal["any", "all", None] = None + ) -> DataFrame: """ Take the nth row from each group if n is an int, or a subset of rows if n is a list of ints. @@ -2187,9 +2189,9 @@ def nth(self, n: int | list[int], dropna: str | None = None) -> DataFrame: ---------- n : int or list of ints A single nth value for the row or a list of nth values. - dropna : None or str, optional + dropna : {'any', 'all', None}, default None Apply the specified dropna operation before counting which row is - the nth row. Needs to be None, 'any' or 'all'. + the nth row. Returns -------
- [ ] closes #xxxx - [ ] tests added / passed - [ ] Ensure all linting tests pass, see [here](https://pandas.pydata.org/pandas-docs/dev/development/contributing.html#code-standards) for how to run them - [ ] whatsnew entry
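One user-visible touch in this PR is tightening `GroupBy.nth`'s `dropna` annotation to `Literal["any", "all", None]`. A minimal sketch of what that parameter does (the frame here is illustrative, not from the PR):

```python
import pandas as pd

# dropna controls whether NA rows are dropped *before* the nth row
# of each group is taken; valid values are "any", "all", or None.
df = pd.DataFrame({"g": ["x", "x", "y"], "v": [None, 2.0, 3.0]})
print(df.groupby("g").nth(0))                # group "x" contributes its NaN row
print(df.groupby("g").nth(0, dropna="any"))  # NaN rows are dropped first
```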
https://api.github.com/repos/pandas-dev/pandas/pulls/41361
2021-05-06T23:10:06Z
2021-05-07T15:53:05Z
2021-05-07T15:53:05Z
2023-02-09T04:18:20Z
REF: test_invalid_dtype in numeric index tests
diff --git a/pandas/tests/indexes/common.py b/pandas/tests/indexes/common.py index 45e1b615b1ade..857b136b67a0c 100644 --- a/pandas/tests/indexes/common.py +++ b/pandas/tests/indexes/common.py @@ -831,3 +831,10 @@ def test_arithmetic_explicit_conversions(self): a = np.zeros(5, dtype="float64") result = a - fidx tm.assert_index_equal(result, expected) + + def test_invalid_dtype(self, invalid_dtype): + # GH 29539 + dtype = invalid_dtype + msg = fr"Incorrect `dtype` passed: expected \w+(?: \w+)?, received {dtype}" + with pytest.raises(ValueError, match=msg): + self._index_cls([1, 2, 3], dtype=dtype) diff --git a/pandas/tests/indexes/numeric/test_numeric.py b/pandas/tests/indexes/numeric/test_numeric.py index bfe06d74570da..e63aeba54fccd 100644 --- a/pandas/tests/indexes/numeric/test_numeric.py +++ b/pandas/tests/indexes/numeric/test_numeric.py @@ -8,7 +8,6 @@ Float64Index, Index, Int64Index, - RangeIndex, Series, UInt64Index, ) @@ -20,6 +19,12 @@ class TestFloat64Index(NumericBase): _index_cls = Float64Index _dtype = np.float64 + @pytest.fixture( + params=["int64", "uint64", "category", "datetime64"], + ) + def invalid_dtype(self, request): + return request.param + @pytest.fixture def simple_index(self) -> Index: values = np.arange(5, dtype=self._dtype) @@ -104,23 +109,6 @@ def test_constructor(self): assert result.dtype == dtype assert pd.isna(result.values).all() - @pytest.mark.parametrize( - "index, dtype", - [ - (Int64Index, "float64"), - (UInt64Index, "categorical"), - (Float64Index, "datetime64"), - (RangeIndex, "float64"), - ], - ) - def test_invalid_dtype(self, index, dtype): - # GH 29539 - with pytest.raises( - ValueError, - match=rf"Incorrect `dtype` passed: expected \w+(?: \w+)?, received {dtype}", - ): - index([1, 2, 3], dtype=dtype) - def test_constructor_invalid(self): index_cls = self._index_cls cls_name = index_cls.__name__ @@ -394,6 +382,12 @@ class TestInt64Index(NumericInt): _index_cls = Int64Index _dtype = np.int64 + @pytest.fixture( + 
params=["uint64", "float64", "category", "datetime64"], + ) + def invalid_dtype(self, request): + return request.param + @pytest.fixture def simple_index(self) -> Index: return self._index_cls(range(0, 20, 2), dtype=self._dtype) @@ -492,6 +486,12 @@ class TestUInt64Index(NumericInt): _index_cls = UInt64Index _dtype = np.uint64 + @pytest.fixture( + params=["int64", "float64", "category", "datetime64"], + ) + def invalid_dtype(self, request): + return request.param + @pytest.fixture def simple_index(self) -> Index: # compat with shared Int64/Float64 tests diff --git a/pandas/tests/indexes/ranges/test_range.py b/pandas/tests/indexes/ranges/test_range.py index 3a4aa29ea620e..12fce56ffcb21 100644 --- a/pandas/tests/indexes/ranges/test_range.py +++ b/pandas/tests/indexes/ranges/test_range.py @@ -23,6 +23,12 @@ class TestRangeIndex(NumericBase): _index_cls = RangeIndex + @pytest.fixture( + params=["uint64", "float64", "category", "datetime64"], + ) + def invalid_dtype(self, request): + return request.param + @pytest.fixture def simple_index(self) -> Index: return self._index_cls(start=0, stop=20, step=2)
The invalid dtype tests for all the numeric index types are currently done in `pandas/tests/indexes/numeric/test_numeric.py::TestFloat64Index::test_invalid_dtype`. This PR moves them to the appropriate test class (`TestInt64Index` etc.).
https://api.github.com/repos/pandas-dev/pandas/pulls/41359
2021-05-06T20:55:17Z
2021-05-07T12:43:27Z
2021-05-07T12:43:27Z
2021-05-07T13:09:43Z
Bug in loc not raising KeyError with MultiIndex containing no longer used levels
diff --git a/doc/source/whatsnew/v1.3.0.rst b/doc/source/whatsnew/v1.3.0.rst index cf3dd1b0e3226..495ed86f2cc4e 100644 --- a/doc/source/whatsnew/v1.3.0.rst +++ b/doc/source/whatsnew/v1.3.0.rst @@ -787,6 +787,7 @@ Indexing - Bug in setting ``numpy.timedelta64`` values into an object-dtype :class:`Series` using a boolean indexer (:issue:`39488`) - Bug in setting numeric values into a into a boolean-dtypes :class:`Series` using ``at`` or ``iat`` failing to cast to object-dtype (:issue:`39582`) - Bug in :meth:`DataFrame.__setitem__` and :meth:`DataFrame.iloc.__setitem__` raising ``ValueError`` when trying to index with a row-slice and setting a list as values (:issue:`40440`) +- Bug in :meth:`DataFrame.loc` not raising ``KeyError`` when key was not found in :class:`MultiIndex` when levels contain more values than used (:issue:`41170`) - Bug in :meth:`DataFrame.loc.__setitem__` when setting-with-expansion incorrectly raising when the index in the expanding axis contains duplicates (:issue:`40096`) - Bug in :meth:`DataFrame.loc` incorrectly matching non-boolean index elements (:issue:`20432`) - Bug in :meth:`Series.__delitem__` with ``ExtensionDtype`` incorrectly casting to ``ndarray`` (:issue:`40386`) @@ -806,6 +807,7 @@ MultiIndex - Bug in :meth:`MultiIndex.intersection` duplicating ``NaN`` in result (:issue:`38623`) - Bug in :meth:`MultiIndex.equals` incorrectly returning ``True`` when :class:`MultiIndex` containing ``NaN`` even when they are differently ordered (:issue:`38439`) - Bug in :meth:`MultiIndex.intersection` always returning empty when intersecting with :class:`CategoricalIndex` (:issue:`38653`) +- Bug in :meth:`MultiIndex.reindex` raising ``ValueError`` with empty MultiIndex and indexing only a specific level (:issue:`41170`) I/O ^^^ diff --git a/pandas/core/indexes/base.py b/pandas/core/indexes/base.py index 84f1245299d53..3785ade4688d2 100644 --- a/pandas/core/indexes/base.py +++ b/pandas/core/indexes/base.py @@ -4229,7 +4229,8 @@ def 
_get_leaf_sorter(labels: list[np.ndarray]) -> np.ndarray: else: # tie out the order with other if level == 0: # outer most level, take the fast route - ngroups = 1 + new_lev_codes.max() + max_new_lev = 0 if len(new_lev_codes) == 0 else new_lev_codes.max() + ngroups = 1 + max_new_lev left_indexer, counts = libalgos.groupsort_indexer( new_lev_codes, ngroups ) diff --git a/pandas/core/indexes/multi.py b/pandas/core/indexes/multi.py index a68238af003e4..c1295a98bf357 100644 --- a/pandas/core/indexes/multi.py +++ b/pandas/core/indexes/multi.py @@ -72,6 +72,7 @@ from pandas.core.arrays import Categorical from pandas.core.arrays.categorical import factorize_from_iterables import pandas.core.common as com +from pandas.core.indexers import is_empty_indexer import pandas.core.indexes.base as ibase from pandas.core.indexes.base import ( Index, @@ -2634,6 +2635,10 @@ def _convert_listlike_indexer(self, keyarr): mask = check == -1 if mask.any(): raise KeyError(f"{keyarr[mask]} not in index") + elif is_empty_indexer(indexer, keyarr): + # We get here when levels still contain values which are not + # actually in Index anymore + raise KeyError(f"{keyarr} not in index") return indexer, keyarr diff --git a/pandas/tests/indexes/multi/test_reindex.py b/pandas/tests/indexes/multi/test_reindex.py index 5ed34cd766bce..3b0fcd72f3123 100644 --- a/pandas/tests/indexes/multi/test_reindex.py +++ b/pandas/tests/indexes/multi/test_reindex.py @@ -104,3 +104,14 @@ def test_reindex_non_unique(): msg = "cannot handle a non-unique multi-index!" 
with pytest.raises(ValueError, match=msg): a.reindex(new_idx) + + +@pytest.mark.parametrize("values", [[["a"], ["x"]], [[], []]]) +def test_reindex_empty_with_level(values): + # GH41170 + idx = MultiIndex.from_arrays(values) + result, result_indexer = idx.reindex(np.array(["b"]), level=0) + expected = MultiIndex(levels=[["b"], values[1]], codes=[[], []]) + expected_indexer = np.array([], dtype=result_indexer.dtype) + tm.assert_index_equal(result, expected) + tm.assert_numpy_array_equal(result_indexer, expected_indexer) diff --git a/pandas/tests/indexing/test_loc.py b/pandas/tests/indexing/test_loc.py index 11391efde4956..a1c646b4dc0b5 100644 --- a/pandas/tests/indexing/test_loc.py +++ b/pandas/tests/indexing/test_loc.py @@ -1624,6 +1624,13 @@ def test_loc_getitem_preserves_index_level_category_dtype(self): result = df.loc[["a"]].index.levels[0] tm.assert_index_equal(result, expected) + @pytest.mark.parametrize("lt_value", [30, 10]) + def test_loc_multiindex_levels_contain_values_not_in_index_anymore(self, lt_value): + # GH#41170 + df = DataFrame({"a": [12, 23, 34, 45]}, index=[list("aabb"), [0, 1, 2, 3]]) + with pytest.raises(KeyError, match=r"\['b'\] not in index"): + df.loc[df["a"] < lt_value, :].loc[["b"], :] + class TestLocSetitemWithExpansion: @pytest.mark.slow diff --git a/pandas/tests/series/methods/test_reindex.py b/pandas/tests/series/methods/test_reindex.py index 8e54cbeb313c4..36d3971d10a3d 100644 --- a/pandas/tests/series/methods/test_reindex.py +++ b/pandas/tests/series/methods/test_reindex.py @@ -4,6 +4,7 @@ from pandas import ( Categorical, Index, + MultiIndex, NaT, Period, PeriodIndex, @@ -345,3 +346,16 @@ def test_reindex_periodindex_with_object(p_values, o_values, values, expected_va result = ser.reindex(object_index) expected = Series(expected_values, index=object_index) tm.assert_series_equal(result, expected) + + +@pytest.mark.parametrize("values", [[["a"], ["x"]], [[], []]]) +def test_reindex_empty_with_level(values): + # GH41170 + ser = 
Series( + range(len(values[0])), index=MultiIndex.from_arrays(values), dtype="object" + ) + result = ser.reindex(np.array(["b"]), level=0) + expected = Series( + index=MultiIndex(levels=[["b"], values[1]], codes=[[], []]), dtype="object" + ) + tm.assert_series_equal(result, expected)
- [x] closes #41170 - [x] closes #40235 - [x] tests added / passed - [x] Ensure all linting tests pass, see [here](https://pandas.pydata.org/pandas-docs/dev/development/contributing.html#code-standards) for how to run them - [x] whatsnew entry Along the way, stumbled across a bug in MultiIndex.reindex, which caused the ValueError from the OP
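The new test in the diff reproduces the report directly; as a standalone sketch (assuming a pandas build that includes this fix):

```python
import pandas as pd

# After the boolean filter, the "b" rows are gone from the frame, but
# the MultiIndex *levels* still contain "b". Before this fix, the
# second .loc silently returned an empty frame; now it raises
# KeyError: "['b'] not in index".
df = pd.DataFrame({"a": [12, 23, 34, 45]}, index=[list("aabb"), [0, 1, 2, 3]])
sub = df.loc[df["a"] < 30, :]   # keeps only the "a" rows
try:
    sub.loc[["b"], :]
except KeyError as exc:
    print(exc)
```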
https://api.github.com/repos/pandas-dev/pandas/pulls/41358
2021-05-06T20:44:43Z
2021-05-12T01:20:35Z
2021-05-12T01:20:34Z
2021-05-12T09:19:14Z
REF: preserve Index dtype in BlockManager._combine
diff --git a/pandas/core/internals/array_manager.py b/pandas/core/internals/array_manager.py index 71e6d14e6a716..dd02771f735a6 100644 --- a/pandas/core/internals/array_manager.py +++ b/pandas/core/internals/array_manager.py @@ -274,9 +274,6 @@ def apply( else: new_axes = self._axes - if len(result_arrays) == 0: - return self.make_empty(new_axes) - # error: Argument 1 to "ArrayManager" has incompatible type "List[ndarray]"; # expected "List[Union[ndarray, ExtensionArray]]" return type(self)(result_arrays, new_axes) # type: ignore[arg-type] @@ -487,7 +484,7 @@ def _get_data_subset(self: T, predicate: Callable) -> T: indices = [i for i, arr in enumerate(self.arrays) if predicate(arr)] arrays = [self.arrays[i] for i in indices] # TODO copy? - new_axes = [self._axes[0], self._axes[1][np.array(indices, dtype="int64")]] + new_axes = [self._axes[0], self._axes[1][np.array(indices, dtype="intp")]] return type(self)(arrays, new_axes, verify_integrity=False) def get_bool_data(self: T, copy: bool = False) -> T: @@ -696,7 +693,6 @@ def _equal_values(self, other) -> bool: return True # TODO - # equals # to_dict diff --git a/pandas/core/internals/managers.py b/pandas/core/internals/managers.py index 73f463997c085..cdb7d8a6ccd45 100644 --- a/pandas/core/internals/managers.py +++ b/pandas/core/internals/managers.py @@ -345,9 +345,6 @@ def apply( if ignore_failures: return self._combine(result_blocks) - if len(result_blocks) == 0: - return self.make_empty(self.axes) - return type(self).from_blocks(result_blocks, self.axes) def where(self: T, other, cond, align: bool, errors: str) -> T: @@ -532,6 +529,13 @@ def _combine( ) -> T: """ return a new manager with the blocks """ if len(blocks) == 0: + if self.ndim == 2: + # retain our own Index dtype + if index is not None: + axes = [self.items[:0], index] + else: + axes = [self.items[:0]] + self.axes[1:] + return self.make_empty(axes) return self.make_empty() # FIXME: optimization potential @@ -1233,7 +1237,7 @@ def grouped_reduce(self: 
T, func: Callable, ignore_failures: bool = False) -> T: index = Index(range(result_blocks[0].values.shape[-1])) if ignore_failures: - return self._combine(result_blocks, index=index) + return self._combine(result_blocks, copy=False, index=index) return type(self).from_blocks(result_blocks, [self.axes[0], index]) @@ -1270,7 +1274,7 @@ def reduce( new_mgr = self._combine(res_blocks, copy=False, index=index) else: indexer = [] - new_mgr = type(self).from_blocks([], [Index([]), index]) + new_mgr = type(self).from_blocks([], [self.items[:0], index]) else: indexer = np.arange(self.shape[0]) new_mgr = type(self).from_blocks(res_blocks, [self.items, index]) diff --git a/pandas/tests/generic/test_generic.py b/pandas/tests/generic/test_generic.py index d07f843f4acfc..771d31aa6865b 100644 --- a/pandas/tests/generic/test_generic.py +++ b/pandas/tests/generic/test_generic.py @@ -85,7 +85,7 @@ def test_rename(self): # multiple axes at once - def test_get_numeric_data(self, using_array_manager): + def test_get_numeric_data(self): n = 4 kwargs = { @@ -100,9 +100,9 @@ def test_get_numeric_data(self, using_array_manager): # non-inclusion result = o._get_bool_data() expected = self._construct(n, value="empty", **kwargs) - if using_array_manager and isinstance(o, DataFrame): - # INFO(ArrayManager) preserve the dtype of the columns Index - expected.columns = expected.columns.astype("int64") + if isinstance(o, DataFrame): + # preserve columns dtype + expected.columns = o.columns[:0] self._compare(result, expected) # get the bool data
needed for fixing the DataFrameGroupBy part of #41291
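One public-facing way to observe the dtype preservation (assuming `select_dtypes` routes through the same empty-result `_combine` path as the internal `_get_bool_data` used in the test) is that selecting a dtype with no matches keeps the original columns Index dtype rather than falling back to an object/float Index:

```python
import numpy as np
import pandas as pd

# Columns Index has int64 dtype; there are no bool columns to select.
df = pd.DataFrame(np.random.rand(3, 2), columns=pd.Index([1, 2]))
empty = df.select_dtypes(include=bool)

# The empty result should keep the int64 columns dtype after the fix.
print(empty.shape, empty.columns.dtype)
```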
https://api.github.com/repos/pandas-dev/pandas/pulls/41354
2021-05-06T17:48:46Z
2021-05-06T23:09:39Z
2021-05-06T23:09:39Z
2021-05-06T23:19:47Z
whatsnew about new engine param for to_sql (follow up to #40556)
diff --git a/doc/source/whatsnew/v1.3.0.rst b/doc/source/whatsnew/v1.3.0.rst index a81dda4e7dfdd..2b560f7f2499f 100644 --- a/doc/source/whatsnew/v1.3.0.rst +++ b/doc/source/whatsnew/v1.3.0.rst @@ -608,7 +608,7 @@ Other API changes ^^^^^^^^^^^^^^^^^ - Partially initialized :class:`CategoricalDtype` (i.e. those with ``categories=None`` objects will no longer compare as equal to fully initialized dtype objects. - Accessing ``_constructor_expanddim`` on a :class:`DataFrame` and ``_constructor_sliced`` on a :class:`Series` now raise an ``AttributeError``. Previously a ``NotImplementedError`` was raised (:issue:`38782`) -- +- Added new ``engine`` and ``**engine_kwargs`` parameters to :meth:`DataFrame.to_sql` to support other future "SQL engines". Currently we still only use ``SQLAlchemy`` under the hood, but more engines are planned to be supported such as ``turbodbc`` (:issue:`36893`) Build =====
- [ ] xref #36893 - [ ] tests added / passed - [x] Ensure all linting tests pass, see [here](https://pandas.pydata.org/pandas-docs/dev/development/contributing.html#code-standards) for how to run them - [x] whatsnew entry Follow up to #40556, per this comment https://github.com/pandas-dev/pandas/pull/40556#issuecomment-831413175
https://api.github.com/repos/pandas-dev/pandas/pulls/41353
2021-05-06T15:41:02Z
2021-05-06T23:20:15Z
2021-05-06T23:20:15Z
2021-05-06T23:38:44Z
TST/REF: split out replace regex into class
diff --git a/pandas/tests/frame/methods/test_replace.py b/pandas/tests/frame/methods/test_replace.py index e6ed60dc2bb08..d2974a5d08a60 100644 --- a/pandas/tests/frame/methods/test_replace.py +++ b/pandas/tests/frame/methods/test_replace.py @@ -57,217 +57,6 @@ def test_replace_inplace(self, datetime_frame, float_string_frame): assert return_value is None tm.assert_frame_equal(tsframe, datetime_frame.fillna(0)) - def test_regex_replace_scalar(self, mix_ab): - obj = {"a": list("ab.."), "b": list("efgh")} - dfobj = DataFrame(obj) - dfmix = DataFrame(mix_ab) - - # simplest cases - # regex -> value - # obj frame - res = dfobj.replace(r"\s*\.\s*", np.nan, regex=True) - tm.assert_frame_equal(dfobj, res.fillna(".")) - - # mixed - res = dfmix.replace(r"\s*\.\s*", np.nan, regex=True) - tm.assert_frame_equal(dfmix, res.fillna(".")) - - # regex -> regex - # obj frame - res = dfobj.replace(r"\s*(\.)\s*", r"\1\1\1", regex=True) - objc = obj.copy() - objc["a"] = ["a", "b", "...", "..."] - expec = DataFrame(objc) - tm.assert_frame_equal(res, expec) - - # with mixed - res = dfmix.replace(r"\s*(\.)\s*", r"\1\1\1", regex=True) - mixc = mix_ab.copy() - mixc["b"] = ["a", "b", "...", "..."] - expec = DataFrame(mixc) - tm.assert_frame_equal(res, expec) - - # everything with compiled regexs as well - res = dfobj.replace(re.compile(r"\s*\.\s*"), np.nan, regex=True) - tm.assert_frame_equal(dfobj, res.fillna(".")) - - # mixed - res = dfmix.replace(re.compile(r"\s*\.\s*"), np.nan, regex=True) - tm.assert_frame_equal(dfmix, res.fillna(".")) - - # regex -> regex - # obj frame - res = dfobj.replace(re.compile(r"\s*(\.)\s*"), r"\1\1\1") - objc = obj.copy() - objc["a"] = ["a", "b", "...", "..."] - expec = DataFrame(objc) - tm.assert_frame_equal(res, expec) - - # with mixed - res = dfmix.replace(re.compile(r"\s*(\.)\s*"), r"\1\1\1") - mixc = mix_ab.copy() - mixc["b"] = ["a", "b", "...", "..."] - expec = DataFrame(mixc) - tm.assert_frame_equal(res, expec) - - res = 
dfmix.replace(regex=re.compile(r"\s*(\.)\s*"), value=r"\1\1\1") - mixc = mix_ab.copy() - mixc["b"] = ["a", "b", "...", "..."] - expec = DataFrame(mixc) - tm.assert_frame_equal(res, expec) - - res = dfmix.replace(regex=r"\s*(\.)\s*", value=r"\1\1\1") - mixc = mix_ab.copy() - mixc["b"] = ["a", "b", "...", "..."] - expec = DataFrame(mixc) - tm.assert_frame_equal(res, expec) - - def test_regex_replace_scalar_inplace(self, mix_ab): - obj = {"a": list("ab.."), "b": list("efgh")} - dfobj = DataFrame(obj) - dfmix = DataFrame(mix_ab) - - # simplest cases - # regex -> value - # obj frame - res = dfobj.copy() - return_value = res.replace(r"\s*\.\s*", np.nan, regex=True, inplace=True) - assert return_value is None - tm.assert_frame_equal(dfobj, res.fillna(".")) - - # mixed - res = dfmix.copy() - return_value = res.replace(r"\s*\.\s*", np.nan, regex=True, inplace=True) - assert return_value is None - tm.assert_frame_equal(dfmix, res.fillna(".")) - - # regex -> regex - # obj frame - res = dfobj.copy() - return_value = res.replace(r"\s*(\.)\s*", r"\1\1\1", regex=True, inplace=True) - assert return_value is None - objc = obj.copy() - objc["a"] = ["a", "b", "...", "..."] - expec = DataFrame(objc) - tm.assert_frame_equal(res, expec) - - # with mixed - res = dfmix.copy() - return_value = res.replace(r"\s*(\.)\s*", r"\1\1\1", regex=True, inplace=True) - assert return_value is None - mixc = mix_ab.copy() - mixc["b"] = ["a", "b", "...", "..."] - expec = DataFrame(mixc) - tm.assert_frame_equal(res, expec) - - # everything with compiled regexs as well - res = dfobj.copy() - return_value = res.replace( - re.compile(r"\s*\.\s*"), np.nan, regex=True, inplace=True - ) - assert return_value is None - tm.assert_frame_equal(dfobj, res.fillna(".")) - - # mixed - res = dfmix.copy() - return_value = res.replace( - re.compile(r"\s*\.\s*"), np.nan, regex=True, inplace=True - ) - assert return_value is None - tm.assert_frame_equal(dfmix, res.fillna(".")) - - # regex -> regex - # obj frame - res = 
dfobj.copy() - return_value = res.replace( - re.compile(r"\s*(\.)\s*"), r"\1\1\1", regex=True, inplace=True - ) - assert return_value is None - objc = obj.copy() - objc["a"] = ["a", "b", "...", "..."] - expec = DataFrame(objc) - tm.assert_frame_equal(res, expec) - - # with mixed - res = dfmix.copy() - return_value = res.replace( - re.compile(r"\s*(\.)\s*"), r"\1\1\1", regex=True, inplace=True - ) - assert return_value is None - mixc = mix_ab.copy() - mixc["b"] = ["a", "b", "...", "..."] - expec = DataFrame(mixc) - tm.assert_frame_equal(res, expec) - - res = dfobj.copy() - return_value = res.replace(regex=r"\s*\.\s*", value=np.nan, inplace=True) - assert return_value is None - tm.assert_frame_equal(dfobj, res.fillna(".")) - - # mixed - res = dfmix.copy() - return_value = res.replace(regex=r"\s*\.\s*", value=np.nan, inplace=True) - assert return_value is None - tm.assert_frame_equal(dfmix, res.fillna(".")) - - # regex -> regex - # obj frame - res = dfobj.copy() - return_value = res.replace(regex=r"\s*(\.)\s*", value=r"\1\1\1", inplace=True) - assert return_value is None - objc = obj.copy() - objc["a"] = ["a", "b", "...", "..."] - expec = DataFrame(objc) - tm.assert_frame_equal(res, expec) - - # with mixed - res = dfmix.copy() - return_value = res.replace(regex=r"\s*(\.)\s*", value=r"\1\1\1", inplace=True) - assert return_value is None - mixc = mix_ab.copy() - mixc["b"] = ["a", "b", "...", "..."] - expec = DataFrame(mixc) - tm.assert_frame_equal(res, expec) - - # everything with compiled regexs as well - res = dfobj.copy() - return_value = res.replace( - regex=re.compile(r"\s*\.\s*"), value=np.nan, inplace=True - ) - assert return_value is None - tm.assert_frame_equal(dfobj, res.fillna(".")) - - # mixed - res = dfmix.copy() - return_value = res.replace( - regex=re.compile(r"\s*\.\s*"), value=np.nan, inplace=True - ) - assert return_value is None - tm.assert_frame_equal(dfmix, res.fillna(".")) - - # regex -> regex - # obj frame - res = dfobj.copy() - return_value = 
res.replace( - regex=re.compile(r"\s*(\.)\s*"), value=r"\1\1\1", inplace=True - ) - assert return_value is None - objc = obj.copy() - objc["a"] = ["a", "b", "...", "..."] - expec = DataFrame(objc) - tm.assert_frame_equal(res, expec) - - # with mixed - res = dfmix.copy() - return_value = res.replace( - regex=re.compile(r"\s*(\.)\s*"), value=r"\1\1\1", inplace=True - ) - assert return_value is None - mixc = mix_ab.copy() - mixc["b"] = ["a", "b", "...", "..."] - expec = DataFrame(mixc) - tm.assert_frame_equal(res, expec) - def test_regex_replace_list_obj(self): obj = {"a": list("ab.."), "b": list("efgh"), "c": list("helo")} dfobj = DataFrame(obj) @@ -1689,3 +1478,216 @@ def test_replace_bytes(self, frame_or_series): expected = obj.copy() obj = obj.replace({None: np.nan}) tm.assert_equal(obj, expected) + + +class TestDataFrameReplaceRegex: + def test_regex_replace_scalar(self, mix_ab): + obj = {"a": list("ab.."), "b": list("efgh")} + dfobj = DataFrame(obj) + dfmix = DataFrame(mix_ab) + + # simplest cases + # regex -> value + # obj frame + res = dfobj.replace(r"\s*\.\s*", np.nan, regex=True) + tm.assert_frame_equal(dfobj, res.fillna(".")) + + # mixed + res = dfmix.replace(r"\s*\.\s*", np.nan, regex=True) + tm.assert_frame_equal(dfmix, res.fillna(".")) + + # regex -> regex + # obj frame + res = dfobj.replace(r"\s*(\.)\s*", r"\1\1\1", regex=True) + objc = obj.copy() + objc["a"] = ["a", "b", "...", "..."] + expec = DataFrame(objc) + tm.assert_frame_equal(res, expec) + + # with mixed + res = dfmix.replace(r"\s*(\.)\s*", r"\1\1\1", regex=True) + mixc = mix_ab.copy() + mixc["b"] = ["a", "b", "...", "..."] + expec = DataFrame(mixc) + tm.assert_frame_equal(res, expec) + + # everything with compiled regexs as well + res = dfobj.replace(re.compile(r"\s*\.\s*"), np.nan, regex=True) + tm.assert_frame_equal(dfobj, res.fillna(".")) + + # mixed + res = dfmix.replace(re.compile(r"\s*\.\s*"), np.nan, regex=True) + tm.assert_frame_equal(dfmix, res.fillna(".")) + + # regex -> regex + # 
obj frame + res = dfobj.replace(re.compile(r"\s*(\.)\s*"), r"\1\1\1") + objc = obj.copy() + objc["a"] = ["a", "b", "...", "..."] + expec = DataFrame(objc) + tm.assert_frame_equal(res, expec) + + # with mixed + res = dfmix.replace(re.compile(r"\s*(\.)\s*"), r"\1\1\1") + mixc = mix_ab.copy() + mixc["b"] = ["a", "b", "...", "..."] + expec = DataFrame(mixc) + tm.assert_frame_equal(res, expec) + + res = dfmix.replace(regex=re.compile(r"\s*(\.)\s*"), value=r"\1\1\1") + mixc = mix_ab.copy() + mixc["b"] = ["a", "b", "...", "..."] + expec = DataFrame(mixc) + tm.assert_frame_equal(res, expec) + + res = dfmix.replace(regex=r"\s*(\.)\s*", value=r"\1\1\1") + mixc = mix_ab.copy() + mixc["b"] = ["a", "b", "...", "..."] + expec = DataFrame(mixc) + tm.assert_frame_equal(res, expec) + + def test_regex_replace_scalar_inplace(self, mix_ab): + obj = {"a": list("ab.."), "b": list("efgh")} + dfobj = DataFrame(obj) + dfmix = DataFrame(mix_ab) + + # simplest cases + # regex -> value + # obj frame + res = dfobj.copy() + return_value = res.replace(r"\s*\.\s*", np.nan, regex=True, inplace=True) + assert return_value is None + tm.assert_frame_equal(dfobj, res.fillna(".")) + + # mixed + res = dfmix.copy() + return_value = res.replace(r"\s*\.\s*", np.nan, regex=True, inplace=True) + assert return_value is None + tm.assert_frame_equal(dfmix, res.fillna(".")) + + # regex -> regex + # obj frame + res = dfobj.copy() + return_value = res.replace(r"\s*(\.)\s*", r"\1\1\1", regex=True, inplace=True) + assert return_value is None + objc = obj.copy() + objc["a"] = ["a", "b", "...", "..."] + expec = DataFrame(objc) + tm.assert_frame_equal(res, expec) + + # with mixed + res = dfmix.copy() + return_value = res.replace(r"\s*(\.)\s*", r"\1\1\1", regex=True, inplace=True) + assert return_value is None + mixc = mix_ab.copy() + mixc["b"] = ["a", "b", "...", "..."] + expec = DataFrame(mixc) + tm.assert_frame_equal(res, expec) + + # everything with compiled regexs as well + res = dfobj.copy() + return_value = 
res.replace( + re.compile(r"\s*\.\s*"), np.nan, regex=True, inplace=True + ) + assert return_value is None + tm.assert_frame_equal(dfobj, res.fillna(".")) + + # mixed + res = dfmix.copy() + return_value = res.replace( + re.compile(r"\s*\.\s*"), np.nan, regex=True, inplace=True + ) + assert return_value is None + tm.assert_frame_equal(dfmix, res.fillna(".")) + + # regex -> regex + # obj frame + res = dfobj.copy() + return_value = res.replace( + re.compile(r"\s*(\.)\s*"), r"\1\1\1", regex=True, inplace=True + ) + assert return_value is None + objc = obj.copy() + objc["a"] = ["a", "b", "...", "..."] + expec = DataFrame(objc) + tm.assert_frame_equal(res, expec) + + # with mixed + res = dfmix.copy() + return_value = res.replace( + re.compile(r"\s*(\.)\s*"), r"\1\1\1", regex=True, inplace=True + ) + assert return_value is None + mixc = mix_ab.copy() + mixc["b"] = ["a", "b", "...", "..."] + expec = DataFrame(mixc) + tm.assert_frame_equal(res, expec) + + res = dfobj.copy() + return_value = res.replace(regex=r"\s*\.\s*", value=np.nan, inplace=True) + assert return_value is None + tm.assert_frame_equal(dfobj, res.fillna(".")) + + # mixed + res = dfmix.copy() + return_value = res.replace(regex=r"\s*\.\s*", value=np.nan, inplace=True) + assert return_value is None + tm.assert_frame_equal(dfmix, res.fillna(".")) + + # regex -> regex + # obj frame + res = dfobj.copy() + return_value = res.replace(regex=r"\s*(\.)\s*", value=r"\1\1\1", inplace=True) + assert return_value is None + objc = obj.copy() + objc["a"] = ["a", "b", "...", "..."] + expec = DataFrame(objc) + tm.assert_frame_equal(res, expec) + + # with mixed + res = dfmix.copy() + return_value = res.replace(regex=r"\s*(\.)\s*", value=r"\1\1\1", inplace=True) + assert return_value is None + mixc = mix_ab.copy() + mixc["b"] = ["a", "b", "...", "..."] + expec = DataFrame(mixc) + tm.assert_frame_equal(res, expec) + + # everything with compiled regexs as well + res = dfobj.copy() + return_value = res.replace( + 
regex=re.compile(r"\s*\.\s*"), value=np.nan, inplace=True + ) + assert return_value is None + tm.assert_frame_equal(dfobj, res.fillna(".")) + + # mixed + res = dfmix.copy() + return_value = res.replace( + regex=re.compile(r"\s*\.\s*"), value=np.nan, inplace=True + ) + assert return_value is None + tm.assert_frame_equal(dfmix, res.fillna(".")) + + # regex -> regex + # obj frame + res = dfobj.copy() + return_value = res.replace( + regex=re.compile(r"\s*(\.)\s*"), value=r"\1\1\1", inplace=True + ) + assert return_value is None + objc = obj.copy() + objc["a"] = ["a", "b", "...", "..."] + expec = DataFrame(objc) + tm.assert_frame_equal(res, expec) + + # with mixed + res = dfmix.copy() + return_value = res.replace( + regex=re.compile(r"\s*(\.)\s*"), value=r"\1\1\1", inplace=True + ) + assert return_value is None + mixc = mix_ab.copy() + mixc["b"] = ["a", "b", "...", "..."] + expec = DataFrame(mixc) + tm.assert_frame_equal(res, expec)
Can move a bunch more, but the diff gets really garbled when moving more at once, so it's easier to do in chunks
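The moved tests all revolve around `DataFrame.replace` with `regex=True`; a minimal sketch of the two core cases they cover (regex -> value and regex -> regex, using the same frames as the tests):

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({"a": list("ab.."), "b": list("efgh")})

# regex -> value: "." entries (with optional whitespace) become NaN
res = df.replace(r"\s*\.\s*", np.nan, regex=True)

# regex -> regex: backreference \1 repeats the captured "." three times
res2 = df.replace(r"\s*(\.)\s*", r"\1\1\1", regex=True)
```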
https://api.github.com/repos/pandas-dev/pandas/pulls/41352
2021-05-06T15:02:42Z
2021-05-12T01:22:16Z
2021-05-12T01:22:16Z
2021-05-12T01:52:52Z
BUG: Min/max markers on box plot are not visible with 'dark_background' (#40769)
diff --git a/doc/source/whatsnew/v1.3.0.rst b/doc/source/whatsnew/v1.3.0.rst index a81dda4e7dfdd..84144f234d7a9 100644 --- a/doc/source/whatsnew/v1.3.0.rst +++ b/doc/source/whatsnew/v1.3.0.rst @@ -849,6 +849,7 @@ Plotting - Prevent warnings when matplotlib's ``constrained_layout`` is enabled (:issue:`25261`) - Bug in :func:`DataFrame.plot` was showing the wrong colors in the legend if the function was called repeatedly and some calls used ``yerr`` while others didn't (partial fix of :issue:`39522`) - Bug in :func:`DataFrame.plot` was showing the wrong colors in the legend if the function was called repeatedly and some calls used ``secondary_y`` and others use ``legend=False`` (:issue:`40044`) +- Bug in :meth:`DataFrame.plot.box` in box plot when ``dark_background`` theme was selected, caps or min/max markers for the plot was not visible (:issue:`40769`) Groupby/resample/rolling diff --git a/pandas/plotting/_matplotlib/boxplot.py b/pandas/plotting/_matplotlib/boxplot.py index 6a81e3ae43b5d..21f30c1311e17 100644 --- a/pandas/plotting/_matplotlib/boxplot.py +++ b/pandas/plotting/_matplotlib/boxplot.py @@ -101,7 +101,7 @@ def _validate_color_args(self): self._boxes_c = colors[0] self._whiskers_c = colors[0] self._medians_c = colors[2] - self._caps_c = "k" # mpl default + self._caps_c = colors[0] def _get_colors(self, num_colors=None, color_kwds="color"): pass diff --git a/pandas/tests/plotting/frame/test_frame_color.py b/pandas/tests/plotting/frame/test_frame_color.py index 6844124d15f9d..a9b691f2a42b9 100644 --- a/pandas/tests/plotting/frame/test_frame_color.py +++ b/pandas/tests/plotting/frame/test_frame_color.py @@ -546,7 +546,13 @@ def _check_colors(bp, box_c, whiskers_c, medians_c, caps_c="k", fliers_c=None): df = DataFrame(np.random.randn(5, 5)) bp = df.plot.box(return_type="dict") - _check_colors(bp, default_colors[0], default_colors[0], default_colors[2]) + _check_colors( + bp, + default_colors[0], + default_colors[0], + default_colors[2], + default_colors[0], 
+ ) tm.close() dict_colors = { @@ -569,7 +575,7 @@ def _check_colors(bp, box_c, whiskers_c, medians_c, caps_c="k", fliers_c=None): # partial colors dict_colors = {"whiskers": "c", "medians": "m"} bp = df.plot.box(color=dict_colors, return_type="dict") - _check_colors(bp, default_colors[0], "c", "m") + _check_colors(bp, default_colors[0], "c", "m", default_colors[0]) tm.close() from matplotlib import cm @@ -577,12 +583,12 @@ def _check_colors(bp, box_c, whiskers_c, medians_c, caps_c="k", fliers_c=None): # Test str -> colormap functionality bp = df.plot.box(colormap="jet", return_type="dict") jet_colors = [cm.jet(n) for n in np.linspace(0, 1, 3)] - _check_colors(bp, jet_colors[0], jet_colors[0], jet_colors[2]) + _check_colors(bp, jet_colors[0], jet_colors[0], jet_colors[2], jet_colors[0]) tm.close() # Test colormap functionality bp = df.plot.box(colormap=cm.jet, return_type="dict") - _check_colors(bp, jet_colors[0], jet_colors[0], jet_colors[2]) + _check_colors(bp, jet_colors[0], jet_colors[0], jet_colors[2], jet_colors[0]) tm.close() # string color is applied to all artists except fliers diff --git a/pandas/tests/plotting/test_boxplot_method.py b/pandas/tests/plotting/test_boxplot_method.py index 448679d562a4a..dbceeae44a493 100644 --- a/pandas/tests/plotting/test_boxplot_method.py +++ b/pandas/tests/plotting/test_boxplot_method.py @@ -195,6 +195,39 @@ def test_color_kwd(self, colors_kwd, expected): for k, v in expected.items(): assert result[k][0].get_color() == v + @pytest.mark.parametrize( + "scheme,expected", + [ + ( + "dark_background", + { + "boxes": "#8dd3c7", + "whiskers": "#8dd3c7", + "medians": "#bfbbd9", + "caps": "#8dd3c7", + }, + ), + ( + "default", + { + "boxes": "#1f77b4", + "whiskers": "#1f77b4", + "medians": "#2ca02c", + "caps": "#1f77b4", + }, + ), + ], + ) + def test_colors_in_theme(self, scheme, expected): + # GH: 40769 + df = DataFrame(np.random.rand(10, 2)) + import matplotlib.pyplot as plt + + plt.style.use(scheme) + result = 
df.plot.box(return_type="dict") + for k, v in expected.items(): + assert result[k][0].get_color() == v + @pytest.mark.parametrize( "dict_colors, msg", [({"boxes": "r", "invalid_key": "r"}, "invalid key 'invalid_key'")],
- [x] closes #40769 - [x] tests added / passed - [x] Ensure all linting tests pass, see [here](https://pandas.pydata.org/pandas-docs/dev/development/contributing.html#code-standards) for how to run them Color in boxplot.py should be a hex value; letter-based colors are handled at the matplotlib level. Use the same color as the boxes for the caps as well, fetched based on the selected theme. We were overriding the color with "k"; instead we need to use the one set at the matplotlib level. Before fix: caps for the box plot are not visible: [![image](https://user-images.githubusercontent.com/9417467/117279259-2634b480-ae7f-11eb-933c-350cff0133de.png)](https://user-images.githubusercontent.com/9417467/117279207-1a48f280-ae7f-11eb-9f0f-fe477d80dc77.png) After fix: caps for the box plot are visible in dark background mode: ![image](https://user-images.githubusercontent.com/9417467/117279282-2df45900-ae7f-11eb-8107-9fe3ae2fe15f.png) What's new: box plot caps will have the same color as the boxes unless a color is explicitly specified by user arguments, so theme changes will not adversely affect the caps color.
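A minimal reproduction of the fixed behavior (assuming a pandas version with this fix and a headless matplotlib backend): under any style, the caps should now pick up the same theme color as the boxes rather than hard-coded black:

```python
import matplotlib

matplotlib.use("Agg")  # headless backend for this sketch
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd

plt.style.use("dark_background")
df = pd.DataFrame(np.random.rand(10, 2))
bp = df.plot.box(return_type="dict")

# After the fix, caps reuse the boxes' theme color instead of "k".
caps_color = bp["caps"][0].get_color()
boxes_color = bp["boxes"][0].get_color()
```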
https://api.github.com/repos/pandas-dev/pandas/pulls/41349
2021-05-06T09:55:00Z
2021-05-10T09:34:29Z
2021-05-10T09:34:28Z
2021-05-10T09:34:38Z
REF: Share Block/ArrayManager set_axis
diff --git a/pandas/core/groupby/generic.py b/pandas/core/groupby/generic.py index 18506b871bda6..dcb962d737903 100644 --- a/pandas/core/groupby/generic.py +++ b/pandas/core/groupby/generic.py @@ -1625,14 +1625,14 @@ def _wrap_transformed_output( def _wrap_agged_manager(self, mgr: Manager2D) -> DataFrame: if not self.as_index: index = np.arange(mgr.shape[1]) - mgr.set_axis(1, ibase.Index(index), verify_integrity=False) + mgr.set_axis(1, ibase.Index(index)) result = self.obj._constructor(mgr) self._insert_inaxis_grouper_inplace(result) result = result._consolidate() else: index = self.grouper.result_index - mgr.set_axis(1, index, verify_integrity=False) + mgr.set_axis(1, index) result = self.obj._constructor(mgr) if self.axis == 1: diff --git a/pandas/core/internals/array_manager.py b/pandas/core/internals/array_manager.py index 71e6d14e6a716..d4dee81bf8081 100644 --- a/pandas/core/internals/array_manager.py +++ b/pandas/core/internals/array_manager.py @@ -160,21 +160,10 @@ def _normalize_axis(axis: int) -> int: axis = 1 if axis == 0 else 0 return axis - def set_axis( - self, axis: int, new_labels: Index, verify_integrity: bool = True - ) -> None: + def set_axis(self, axis: int, new_labels: Index) -> None: # Caller is responsible for ensuring we have an Index object. 
+ self._validate_set_axis(axis, new_labels) axis = self._normalize_axis(axis) - if verify_integrity: - old_len = len(self._axes[axis]) - new_len = len(new_labels) - - if new_len != old_len: - raise ValueError( - f"Length mismatch: Expected axis has {old_len} elements, new " - f"values have {new_len} elements" - ) - self._axes[axis] = new_labels def consolidate(self: T) -> T: diff --git a/pandas/core/internals/base.py b/pandas/core/internals/base.py index 3a8ff8237b62f..f8ccb10655ea1 100644 --- a/pandas/core/internals/base.py +++ b/pandas/core/internals/base.py @@ -44,6 +44,23 @@ def ndim(self) -> int: def shape(self) -> Shape: return tuple(len(ax) for ax in self.axes) + @final + def _validate_set_axis(self, axis: int, new_labels: Index) -> None: + # Caller is responsible for ensuring we have an Index object. + old_len = len(self.axes[axis]) + new_len = len(new_labels) + + if axis == 1 and len(self.items) == 0: + # If we are setting the index on a DataFrame with no columns, + # it is OK to change the length. + pass + + elif new_len != old_len: + raise ValueError( + f"Length mismatch: Expected axis has {old_len} elements, new " + f"values have {new_len} elements" + ) + def reindex_indexer( self: T, new_axis, diff --git a/pandas/core/internals/managers.py b/pandas/core/internals/managers.py index 73f463997c085..836a903248f1d 100644 --- a/pandas/core/internals/managers.py +++ b/pandas/core/internals/managers.py @@ -211,20 +211,9 @@ def _normalize_axis(self, axis: int) -> int: axis = 1 if axis == 0 else 0 return axis - def set_axis( - self, axis: int, new_labels: Index, verify_integrity: bool = True - ) -> None: + def set_axis(self, axis: int, new_labels: Index) -> None: # Caller is responsible for ensuring we have an Index object. 
- if verify_integrity: - old_len = len(self.axes[axis]) - new_len = len(new_labels) - - if new_len != old_len: - raise ValueError( - f"Length mismatch: Expected axis has {old_len} elements, new " - f"values have {new_len} elements" - ) - + self._validate_set_axis(axis, new_labels) self.axes[axis] = new_labels @property
Fix the need for verify_integrity=False
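The shared `_validate_set_axis` check surfaces publicly when assigning a new axis of the wrong length, while the carve-out in the diff (no columns means the index length may change) keeps empty-frame index assignment working. A small sketch:

```python
import pandas as pd

# Length validation: assigning a wrong-length index raises ValueError.
df = pd.DataFrame({"a": [1, 2, 3]})
try:
    df.index = [0, 1]  # 2 labels for 3 rows
except ValueError as err:
    msg = str(err)

# Carve-out: with no columns, changing the index length is allowed.
empty = pd.DataFrame()
empty.index = [0, 1, 2]
```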
https://api.github.com/repos/pandas-dev/pandas/pulls/41348
2021-05-06T04:07:04Z
2021-05-06T23:21:27Z
2021-05-06T23:21:27Z
2021-05-06T23:23:13Z
REF: move masked dispatch inside _ea_wrap_cython_operation
diff --git a/pandas/core/groupby/ops.py b/pandas/core/groupby/ops.py index d1a46c1c36439..26b71b396f4b5 100644 --- a/pandas/core/groupby/ops.py +++ b/pandas/core/groupby/ops.py @@ -327,6 +327,14 @@ def _ea_wrap_cython_operation( re-wrap if appropriate. """ # TODO: general case implementation overridable by EAs. + if isinstance(values, BaseMaskedArray) and self.uses_mask(): + return self._masked_ea_wrap_cython_operation( + values, + min_count=min_count, + ngroups=ngroups, + comp_ids=comp_ids, + **kwargs, + ) orig_values = values if isinstance(orig_values, (DatetimeArray, PeriodArray)): @@ -614,22 +622,13 @@ def cython_operation( if not isinstance(values, np.ndarray): # i.e. ExtensionArray - if isinstance(values, BaseMaskedArray) and self.uses_mask(): - return self._masked_ea_wrap_cython_operation( - values, - min_count=min_count, - ngroups=ngroups, - comp_ids=comp_ids, - **kwargs, - ) - else: - return self._ea_wrap_cython_operation( - values, - min_count=min_count, - ngroups=ngroups, - comp_ids=comp_ids, - **kwargs, - ) + return self._ea_wrap_cython_operation( + values, + min_count=min_count, + ngroups=ngroups, + comp_ids=comp_ids, + **kwargs, + ) return self._cython_op_ndim_compat( values,
After the groupby cleanups, the masked dispatch can be moved inside `_ea_wrap_cython_operation` without adding arguments. This seemed like the preferred place for it in the original PR.
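The masked path this dispatch selects is what lets nullable (masked) extension dtypes keep their dtype through a cython groupby reduction; a user-visible sketch:

```python
import pandas as pd

# Int64 is a BaseMaskedArray-backed dtype, so it takes the masked
# cython path and the extension dtype survives the reduction.
ser = pd.Series([1, 2, pd.NA, 4], dtype="Int64")
result = ser.groupby([0, 0, 1, 1]).sum()

print(result.dtype)
```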
https://api.github.com/repos/pandas-dev/pandas/pulls/41347
2021-05-06T03:47:47Z
2021-05-06T13:19:40Z
2021-05-06T13:19:40Z
2021-05-06T15:07:01Z
TST: catch/suppress test warnings
diff --git a/pandas/tests/generic/test_finalize.py b/pandas/tests/generic/test_finalize.py index dbe2df5238c7e..50ecb74924e2a 100644 --- a/pandas/tests/generic/test_finalize.py +++ b/pandas/tests/generic/test_finalize.py @@ -226,7 +226,10 @@ ), pytest.param( (pd.DataFrame, frame_mi_data, operator.methodcaller("count", level="A")), - marks=not_implemented_mark, + marks=[ + not_implemented_mark, + pytest.mark.filterwarnings("ignore:Using the level keyword:FutureWarning"), + ], ), pytest.param( (pd.DataFrame, frame_data, operator.methodcaller("nunique")), diff --git a/pandas/tests/io/__init__.py b/pandas/tests/io/__init__.py index 39474dedba78c..3231e38b985af 100644 --- a/pandas/tests/io/__init__.py +++ b/pandas/tests/io/__init__.py @@ -5,6 +5,12 @@ pytest.mark.filterwarnings( "ignore:PY_SSIZE_T_CLEAN will be required.*:DeprecationWarning" ), + pytest.mark.filterwarnings( + "ignore:Block.is_categorical is deprecated:DeprecationWarning" + ), + pytest.mark.filterwarnings( + r"ignore:`np\.bool` is a deprecated alias:DeprecationWarning" + ), # xlrd pytest.mark.filterwarnings( "ignore:This method will be removed in future versions:DeprecationWarning" diff --git a/pandas/tests/io/formats/style/test_highlight.py b/pandas/tests/io/formats/style/test_highlight.py index 9e956e055d1aa..a681d7c65a190 100644 --- a/pandas/tests/io/formats/style/test_highlight.py +++ b/pandas/tests/io/formats/style/test_highlight.py @@ -5,6 +5,7 @@ DataFrame, IndexSlice, ) +import pandas._testing as tm pytest.importorskip("jinja2") @@ -54,7 +55,9 @@ def test_highlight_minmax_basic(df, f): } if f == "highlight_min": df = -df - result = getattr(df.style, f)(axis=1, color="red")._compute().ctx + with tm.assert_produces_warning(RuntimeWarning): + # All-NaN slice encountered + result = getattr(df.style, f)(axis=1, color="red")._compute().ctx assert result == expected diff --git a/pandas/tests/io/pytables/__init__.py b/pandas/tests/io/pytables/__init__.py index d3735f8863c3b..cbf848a401dc4 100644 --- 
a/pandas/tests/io/pytables/__init__.py +++ b/pandas/tests/io/pytables/__init__.py @@ -9,4 +9,7 @@ pytest.mark.filterwarnings( r"ignore:`np\.object` is a deprecated alias:DeprecationWarning" ), + pytest.mark.filterwarnings( + r"ignore:`np\.bool` is a deprecated alias:DeprecationWarning" + ), ] diff --git a/pandas/tests/io/pytables/test_categorical.py b/pandas/tests/io/pytables/test_categorical.py index 4928a70f90960..0b3d56ebf959e 100644 --- a/pandas/tests/io/pytables/test_categorical.py +++ b/pandas/tests/io/pytables/test_categorical.py @@ -216,7 +216,7 @@ def test_convert_value(setup_path, where: str, df: DataFrame, expected: DataFram max_widths = {"col": 1} categorical_values = sorted(df.col.unique()) expected.col = expected.col.astype("category") - expected.col.cat.set_categories(categorical_values, inplace=True) + expected.col = expected.col.cat.set_categories(categorical_values) with ensure_clean_path(setup_path) as path: df.to_hdf(path, "df", format="table", min_itemsize=max_widths) diff --git a/pandas/tests/io/test_parquet.py b/pandas/tests/io/test_parquet.py index 30666a716859a..f66451cd72309 100644 --- a/pandas/tests/io/test_parquet.py +++ b/pandas/tests/io/test_parquet.py @@ -3,7 +3,10 @@ from io import BytesIO import os import pathlib -from warnings import catch_warnings +from warnings import ( + catch_warnings, + filterwarnings, +) import numpy as np import pytest @@ -36,7 +39,10 @@ _HAVE_PYARROW = False try: - import fastparquet + with catch_warnings(): + # `np.bool` is a deprecated alias... + filterwarnings("ignore", "`np.bool`", category=DeprecationWarning) + import fastparquet _HAVE_FASTPARQUET = True except ImportError:
- [ ] closes #xxxx - [ ] tests added / passed - [ ] Ensure all linting tests pass, see [here](https://pandas.pydata.org/pandas-docs/dev/development/contributing.html#code-standards) for how to run them - [ ] whatsnew entry
https://api.github.com/repos/pandas-dev/pandas/pulls/41346
2021-05-06T03:13:11Z
2021-05-06T13:19:05Z
2021-05-06T13:19:05Z
2021-05-06T15:13:02Z
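The `catch_warnings`/`filterwarnings` pattern this PR wraps around the fastparquet import can be sketched in isolation with only the stdlib; `noisy_import` here is a hypothetical stand-in for an import that emits the deprecated-alias warning:

```python
import warnings

def noisy_import():
    # stand-in for an import that warns on module load
    warnings.warn("`np.bool` is a deprecated alias", DeprecationWarning)
    return "fastparquet"

# without a filter, the warning surfaces
with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    noisy_import()
assert len(caught) == 1

# a targeted filter suppresses just that message, nothing else
with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    warnings.filterwarnings("ignore", "`np.bool`", category=DeprecationWarning)
    module = noisy_import()
assert module == "fastparquet"
assert len(caught) == 0
```

Because `filterwarnings` prepends its entry, the ignore rule takes precedence over the earlier `simplefilter("always")` inside the context manager.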
REF: _cython_transform operate blockwise
diff --git a/pandas/_libs/groupby.pyx b/pandas/_libs/groupby.pyx index 8637d50745195..7a286188c4e74 100644 --- a/pandas/_libs/groupby.pyx +++ b/pandas/_libs/groupby.pyx @@ -1136,19 +1136,24 @@ def group_rank(float64_t[:, ::1] out, This method modifies the `out` parameter rather than returning an object """ cdef: + Py_ssize_t i, k, N ndarray[float64_t, ndim=1] result - result = rank_1d( - values=values[:, 0], - labels=labels, - is_datetimelike=is_datetimelike, - ties_method=ties_method, - ascending=ascending, - pct=pct, - na_option=na_option - ) - for i in range(len(result)): - out[i, 0] = result[i] + N = values.shape[1] + + for k in range(N): + result = rank_1d( + values=values[:, k], + labels=labels, + is_datetimelike=is_datetimelike, + ties_method=ties_method, + ascending=ascending, + pct=pct, + na_option=na_option + ) + for i in range(len(result)): + # TODO: why cant we do out[:, k] = result? + out[i, k] = result[i] # ---------------------------------------------------------------------- diff --git a/pandas/core/groupby/generic.py b/pandas/core/groupby/generic.py index 18506b871bda6..c394390f051de 100644 --- a/pandas/core/groupby/generic.py +++ b/pandas/core/groupby/generic.py @@ -530,6 +530,26 @@ def transform(self, func, *args, engine=None, engine_kwargs=None, **kwargs): func, *args, engine=engine, engine_kwargs=engine_kwargs, **kwargs ) + def _cython_transform( + self, how: str, numeric_only: bool = True, axis: int = 0, **kwargs + ): + assert axis == 0 # handled by caller + + obj = self._selected_obj + + is_numeric = is_numeric_dtype(obj.dtype) + if numeric_only and not is_numeric: + raise DataError("No numeric types to aggregate") + + try: + result = self.grouper._cython_operation( + "transform", obj._values, how, axis, **kwargs + ) + except (NotImplementedError, TypeError): + raise DataError("No numeric types to aggregate") + + return obj._constructor(result, index=self.obj.index, name=obj.name) + def _transform_general(self, func: Callable, *args, 
**kwargs) -> Series: """ Transform with a callable func`. @@ -1258,6 +1278,36 @@ def _wrap_applied_output_series( return self._reindex_output(result) + def _cython_transform( + self, how: str, numeric_only: bool = True, axis: int = 0, **kwargs + ) -> DataFrame: + assert axis == 0 # handled by caller + # TODO: no tests with self.ndim == 1 for DataFrameGroupBy + + # With self.axis == 0, we have multi-block tests + # e.g. test_rank_min_int, test_cython_transform_frame + # test_transform_numeric_ret + # With self.axis == 1, _get_data_to_aggregate does a transpose + # so we always have a single block. + mgr: Manager2D = self._get_data_to_aggregate() + if numeric_only: + mgr = mgr.get_numeric_data(copy=False) + + def arr_func(bvalues: ArrayLike) -> ArrayLike: + return self.grouper._cython_operation( + "transform", bvalues, how, 1, **kwargs + ) + + # We could use `mgr.apply` here and not have to set_axis, but + # we would have to do shape gymnastics for ArrayManager compat + res_mgr = mgr.grouped_reduce(arr_func, ignore_failures=True) + res_mgr.set_axis(1, mgr.axes[1]) + + res_df = self.obj._constructor(res_mgr) + if self.axis == 1: + res_df = res_df.T + return res_df + def _transform_general(self, func, *args, **kwargs): from pandas.core.reshape.concat import concat diff --git a/pandas/core/groupby/groupby.py b/pandas/core/groupby/groupby.py index c5ef18c51a533..0d2be53dc3e0e 100644 --- a/pandas/core/groupby/groupby.py +++ b/pandas/core/groupby/groupby.py @@ -1361,32 +1361,10 @@ def _cython_agg_general( ): raise AbstractMethodError(self) - @final def _cython_transform( self, how: str, numeric_only: bool = True, axis: int = 0, **kwargs ): - output: dict[base.OutputKey, ArrayLike] = {} - - for idx, obj in enumerate(self._iterate_slices()): - name = obj.name - is_numeric = is_numeric_dtype(obj.dtype) - if numeric_only and not is_numeric: - continue - - try: - result = self.grouper._cython_operation( - "transform", obj._values, how, axis, **kwargs - ) - except 
(NotImplementedError, TypeError): - continue - - key = base.OutputKey(label=name, position=idx) - output[key] = result - - if not output: - raise DataError("No numeric types to aggregate") - - return self._wrap_transformed_output(output) + raise AbstractMethodError(self) @final def _transform(self, func, *args, engine=None, engine_kwargs=None, **kwargs): diff --git a/pandas/tests/apply/test_frame_transform.py b/pandas/tests/apply/test_frame_transform.py index d56c8c1e83ab4..18c36e4096b2b 100644 --- a/pandas/tests/apply/test_frame_transform.py +++ b/pandas/tests/apply/test_frame_transform.py @@ -51,6 +51,19 @@ def test_transform_groupby_kernel(axis, float_frame, op, request): result = float_frame.transform(op, axis, *args) tm.assert_frame_equal(result, expected) + # same thing, but ensuring we have multiple blocks + assert "E" not in float_frame.columns + float_frame["E"] = float_frame["A"].copy() + assert len(float_frame._mgr.arrays) > 1 + + if axis == 0 or axis == "index": + ones = np.ones(float_frame.shape[0]) + else: + ones = np.ones(float_frame.shape[1]) + expected2 = float_frame.groupby(ones, axis=axis).transform(op, *args) + result2 = float_frame.transform(op, axis, *args) + tm.assert_frame_equal(result2, expected2) + @pytest.mark.parametrize( "ops, names", diff --git a/pandas/tests/groupby/test_rank.py b/pandas/tests/groupby/test_rank.py index 20edf03c5b96c..aafdffba43388 100644 --- a/pandas/tests/groupby/test_rank.py +++ b/pandas/tests/groupby/test_rank.py @@ -584,21 +584,23 @@ def test_rank_multiindex(): # GH27721 df = concat( { - "a": DataFrame({"col1": [1, 2], "col2": [3, 4]}), + "a": DataFrame({"col1": [3, 4], "col2": [1, 2]}), "b": DataFrame({"col3": [5, 6], "col4": [7, 8]}), }, axis=1, ) - result = df.groupby(level=0, axis=1).rank(axis=1, ascending=False, method="first") + gb = df.groupby(level=0, axis=1) + result = gb.rank(axis=1) + expected = concat( - { - "a": DataFrame({"col1": [2.0, 2.0], "col2": [1.0, 1.0]}), - "b": DataFrame({"col3": [2.0, 
2.0], "col4": [1.0, 1.0]}), - }, + [ + df["a"].rank(axis=1), + df["b"].rank(axis=1), + ], axis=1, + keys=["a", "b"], ) - tm.assert_frame_equal(result, expected) @@ -615,3 +617,24 @@ def test_groupby_axis0_rank_axis1(): # This should match what we get when "manually" operating group-by-group expected = concat([df.loc["a"].rank(axis=1), df.loc["b"].rank(axis=1)], axis=0) tm.assert_frame_equal(res, expected) + + # check that we haven't accidentally written a case that coincidentally + # matches rank(axis=0) + alt = gb.rank(axis=0) + assert not alt.equals(expected) + + +def test_groupby_axis0_cummax_axis1(): + # case where groupby axis is 0 and axis keyword in transform is 1 + + # df has mixed dtype -> multiple blocks + df = DataFrame( + {0: [1, 3, 5, 7], 1: [2, 4, 6, 8], 2: [1.5, 3.5, 5.5, 7.5]}, + index=["a", "a", "b", "b"], + ) + gb = df.groupby(level=0, axis=0) + + cmax = gb.cummax(axis=1) + expected = df[[0, 1]].astype(np.float64) + expected[2] = expected[1] + tm.assert_frame_equal(cmax, expected)
In the process of doing this, I found that `group_rank` was silently assuming its input is single-column.
https://api.github.com/repos/pandas-dev/pandas/pulls/41344
2021-05-06T02:51:33Z
2021-05-10T14:34:45Z
2021-05-10T14:34:44Z
2021-05-10T14:39:19Z
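As a quick illustration of the multi-column case the `group_rank` loop above now handles (the data values here are made up), ranking is computed within each group, column by column:

```python
import pandas as pd

df = pd.DataFrame(
    {
        "key": ["a", "a", "b", "b"],
        "x": [3, 1, 2, 4],
        "y": [1.0, 2.0, 4.0, 3.0],
    }
)
# rank within each group; both columns are ranked independently
result = df.groupby("key").rank()
assert list(result["x"]) == [2.0, 1.0, 1.0, 2.0]
assert list(result["y"]) == [1.0, 2.0, 2.0, 1.0]
```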
BUG: replace with regex raising for StringDType
diff --git a/doc/source/whatsnew/v1.3.0.rst b/doc/source/whatsnew/v1.3.0.rst index 5adc8540e6864..7ec74b7045437 100644 --- a/doc/source/whatsnew/v1.3.0.rst +++ b/doc/source/whatsnew/v1.3.0.rst @@ -748,7 +748,7 @@ Strings ^^^^^^^ - Bug in the conversion from ``pyarrow.ChunkedArray`` to :class:`~arrays.StringArray` when the original had zero chunks (:issue:`41040`) -- +- Bug in :meth:`Series.replace` and :meth:`DataFrame.replace` ignoring replacements with ``regex=True`` for ``StringDType`` data (:issue:`41333`, :issue:`35977`) Interval ^^^^^^^^ diff --git a/pandas/conftest.py b/pandas/conftest.py index 7b29c41ef70f5..f948dc11bc014 100644 --- a/pandas/conftest.py +++ b/pandas/conftest.py @@ -1153,6 +1153,27 @@ def object_dtype(request): return request.param +@pytest.fixture( + params=[ + "object", + "string", + pytest.param( + "arrow_string", marks=td.skip_if_no("pyarrow", min_version="1.0.0") + ), + ] +) +def any_string_dtype(request): + """ + Parametrized fixture for string dtypes. + * 'object' + * 'string' + * 'arrow_string' + """ + from pandas.core.arrays.string_arrow import ArrowStringDtype # noqa: F401 + + return request.param + + @pytest.fixture(params=tm.DATETIME64_DTYPES) def datetime64_dtype(request): """ diff --git a/pandas/core/array_algos/replace.py b/pandas/core/array_algos/replace.py index 201b9fdcc51cc..2d3a168a31e1e 100644 --- a/pandas/core/array_algos/replace.py +++ b/pandas/core/array_algos/replace.py @@ -149,7 +149,7 @@ def re_replacer(s): else: return s - f = np.vectorize(re_replacer, otypes=[values.dtype]) + f = np.vectorize(re_replacer, otypes=[np.object_]) if mask is None: values[:] = f(values) diff --git a/pandas/core/internals/blocks.py b/pandas/core/internals/blocks.py index 92f9d803d1ebe..bd4dfdb4ebad0 100644 --- a/pandas/core/internals/blocks.py +++ b/pandas/core/internals/blocks.py @@ -49,6 +49,7 @@ is_extension_array_dtype, is_list_like, is_sparse, + is_string_dtype, pandas_dtype, ) from pandas.core.dtypes.dtypes import ( @@ -788,7 
+789,7 @@ def _replace_list( src_len = len(pairs) - 1 - if values.dtype == _dtype_obj: + if is_string_dtype(values): # Calculate the mask once, prior to the call of comp # in order to avoid repeating the same computations mask = ~isna(values) diff --git a/pandas/tests/frame/methods/test_replace.py b/pandas/tests/frame/methods/test_replace.py index e6ed60dc2bb08..3ffaf67c656d9 100644 --- a/pandas/tests/frame/methods/test_replace.py +++ b/pandas/tests/frame/methods/test_replace.py @@ -563,10 +563,11 @@ def test_regex_replace_dict_nested(self, mix_abc): tm.assert_frame_equal(res3, expec) tm.assert_frame_equal(res4, expec) - def test_regex_replace_dict_nested_non_first_character(self): + def test_regex_replace_dict_nested_non_first_character(self, any_string_dtype): # GH 25259 - df = DataFrame({"first": ["abc", "bca", "cab"]}) - expected = DataFrame({"first": [".bc", "bc.", "c.b"]}) + dtype = any_string_dtype + df = DataFrame({"first": ["abc", "bca", "cab"]}, dtype=dtype) + expected = DataFrame({"first": [".bc", "bc.", "c.b"]}, dtype=dtype) result = df.replace({"a": "."}, regex=True) tm.assert_frame_equal(result, expected) @@ -685,6 +686,24 @@ def test_replace_regex_metachar(self, metachar): expected = DataFrame({"a": ["paren", "else"]}) tm.assert_frame_equal(result, expected) + @pytest.mark.parametrize( + "data,to_replace,expected", + [ + (["xax", "xbx"], {"a": "c", "b": "d"}, ["xcx", "xdx"]), + (["d", "", ""], {r"^\s*$": pd.NA}, ["d", pd.NA, pd.NA]), + ], + ) + def test_regex_replace_string_types( + self, data, to_replace, expected, frame_or_series, any_string_dtype + ): + # GH-41333, GH-35977 + dtype = any_string_dtype + obj = frame_or_series(data, dtype=dtype) + result = obj.replace(to_replace, regex=True) + expected = frame_or_series(expected, dtype=dtype) + + tm.assert_equal(result, expected) + def test_replace(self, datetime_frame): datetime_frame["A"][:5] = np.nan datetime_frame["A"][-5:] = np.nan diff --git a/pandas/tests/strings/conftest.py 
b/pandas/tests/strings/conftest.py index 17703d970e29e..4fedbee91f649 100644 --- a/pandas/tests/strings/conftest.py +++ b/pandas/tests/strings/conftest.py @@ -1,8 +1,6 @@ import numpy as np import pytest -import pandas.util._test_decorators as td - from pandas import Series from pandas.core import strings as strings @@ -175,24 +173,3 @@ def any_allowed_skipna_inferred_dtype(request): # correctness of inference tested in tests/dtypes/test_inference.py return inferred_dtype, values - - -@pytest.fixture( - params=[ - "object", - "string", - pytest.param( - "arrow_string", marks=td.skip_if_no("pyarrow", min_version="1.0.0") - ), - ] -) -def any_string_dtype(request): - """ - Parametrized fixture for string dtypes. - * 'object' - * 'string' - * 'arrow_string' - """ - from pandas.core.arrays.string_arrow import ArrowStringDtype # noqa: F401 - - return request.param
- [x] closes #35977, closes #41333 - [x] tests added / passed - [x] Ensure all linting tests pass, see [here](https://pandas.pydata.org/pandas-docs/dev/development/contributing.html#code-standards) for how to run them - [x] whatsnew entry Two separate issues are at play here: regex was ignored in #41333 (specific to the `replace_list` path), and `replace_regex` would simply raise when hit with anything but object dtype.
https://api.github.com/repos/pandas-dev/pandas/pulls/41343
2021-05-06T02:46:02Z
2021-05-11T20:23:12Z
2021-05-11T20:23:12Z
2021-05-11T20:27:16Z
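The behavior this PR extends to `StringDtype` can be seen with object dtype, mirroring `test_regex_replace_dict_nested_non_first_character` in the diff above:

```python
import pandas as pd

s = pd.Series(["abc", "bca", "cab"], dtype="object")
# with regex=True, the dict key is treated as a pattern, so a
# non-leading character can be replaced too
result = s.replace({"a": "."}, regex=True)
assert list(result) == [".bc", "bc.", "c.b"]
```

With the fix, the same call on `dtype="string"` data returns a `StringDtype` Series instead of being silently ignored.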
BUG/API: SeriesGroupBy reduction with numeric_only=True
diff --git a/doc/source/whatsnew/v1.3.0.rst b/doc/source/whatsnew/v1.3.0.rst index d654cf5715bdf..a72cfce6e7477 100644 --- a/doc/source/whatsnew/v1.3.0.rst +++ b/doc/source/whatsnew/v1.3.0.rst @@ -1037,6 +1037,7 @@ Groupby/resample/rolling - Bug in :meth:`DataFrameGroupBy.__getitem__` with non-unique columns incorrectly returning a malformed :class:`SeriesGroupBy` instead of :class:`DataFrameGroupBy` (:issue:`41427`) - Bug in :meth:`DataFrameGroupBy.transform` with non-unique columns incorrectly raising ``AttributeError`` (:issue:`41427`) - Bug in :meth:`Resampler.apply` with non-unique columns incorrectly dropping duplicated columns (:issue:`41445`) +- Bug in :meth:`SeriesGroupBy` aggregations incorrectly returning empty :class:`Series` instead of raising ``TypeError`` on aggregations that are invalid for its dtype, e.g. ``.prod`` with ``datetime64[ns]`` dtype (:issue:`41342`) - Bug in :meth:`DataFrame.rolling.__iter__` where ``on`` was not assigned to the index of the resulting objects (:issue:`40373`) - Bug in :meth:`DataFrameGroupBy.transform` and :meth:`DataFrameGroupBy.agg` with ``engine="numba"`` where ``*args`` were being cached with the user passed function (:issue:`41647`) diff --git a/pandas/core/groupby/generic.py b/pandas/core/groupby/generic.py index dec68ab8f392d..b51fb2234e148 100644 --- a/pandas/core/groupby/generic.py +++ b/pandas/core/groupby/generic.py @@ -323,7 +323,7 @@ def _aggregate_multiple_funcs(self, arg) -> DataFrame: return output def _cython_agg_general( - self, how: str, alt=None, numeric_only: bool = True, min_count: int = -1 + self, how: str, alt: Callable, numeric_only: bool, min_count: int = -1 ): obj = self._selected_obj @@ -331,7 +331,10 @@ def _cython_agg_general( data = obj._mgr if numeric_only and not is_numeric_dtype(obj.dtype): - raise DataError("No numeric types to aggregate") + # GH#41291 match Series behavior + raise NotImplementedError( + f"{type(self).__name__}.{how} does not implement numeric_only." 
+ ) # This is overkill because it is only called once, but is here to # mirror the array_func used in DataFrameGroupBy._cython_agg_general @@ -1056,7 +1059,7 @@ def _iterate_slices(self) -> Iterable[Series]: yield values def _cython_agg_general( - self, how: str, alt=None, numeric_only: bool = True, min_count: int = -1 + self, how: str, alt: Callable, numeric_only: bool, min_count: int = -1 ) -> DataFrame: # Note: we never get here with how="ohlc"; that goes through SeriesGroupBy diff --git a/pandas/core/groupby/groupby.py b/pandas/core/groupby/groupby.py index 7c8d83f83e20f..b00a1160fb01b 100644 --- a/pandas/core/groupby/groupby.py +++ b/pandas/core/groupby/groupby.py @@ -1101,6 +1101,34 @@ def _wrap_transformed_output(self, output: Mapping[base.OutputKey, ArrayLike]): def _wrap_applied_output(self, data, keys, values, not_indexed_same: bool = False): raise AbstractMethodError(self) + def _resolve_numeric_only(self, numeric_only: bool | lib.NoDefault) -> bool: + """ + Determine subclass-specific default value for 'numeric_only'. + + For SeriesGroupBy we want the default to be False (to match Series behavior). + For DataFrameGroupBy we want it to be True (for backwards-compat). + + Parameters + ---------- + numeric_only : bool or lib.no_default + + Returns + ------- + bool + """ + # GH#41291 + if numeric_only is lib.no_default: + # i.e. not explicitly passed by user + if self.obj.ndim == 2: + # i.e. 
DataFrameGroupBy + numeric_only = True + else: + numeric_only = False + + # error: Incompatible return value type (got "Union[bool, NoDefault]", + # expected "bool") + return numeric_only # type: ignore[return-value] + # ----------------------------------------------------------------- # numba @@ -1308,6 +1336,7 @@ def _agg_general( alias: str, npfunc: Callable, ): + with group_selection_context(self): # try a cython aggregation if we can result = None @@ -1367,7 +1396,7 @@ def _agg_py_fallback( return ensure_block_shape(res_values, ndim=ndim) def _cython_agg_general( - self, how: str, alt=None, numeric_only: bool = True, min_count: int = -1 + self, how: str, alt: Callable, numeric_only: bool, min_count: int = -1 ): raise AbstractMethodError(self) @@ -1587,7 +1616,7 @@ def count(self): @final @Substitution(name="groupby") @Substitution(see_also=_common_see_also) - def mean(self, numeric_only: bool = True): + def mean(self, numeric_only: bool | lib.NoDefault = lib.no_default): """ Compute mean of groups, excluding missing values. @@ -1635,6 +1664,8 @@ def mean(self, numeric_only: bool = True): 2 4.0 Name: B, dtype: float64 """ + numeric_only = self._resolve_numeric_only(numeric_only) + result = self._cython_agg_general( "mean", alt=lambda x: Series(x).mean(numeric_only=numeric_only), @@ -1645,7 +1676,7 @@ def mean(self, numeric_only: bool = True): @final @Substitution(name="groupby") @Appender(_common_see_also) - def median(self, numeric_only=True): + def median(self, numeric_only: bool | lib.NoDefault = lib.no_default): """ Compute median of groups, excluding missing values. @@ -1662,6 +1693,8 @@ def median(self, numeric_only=True): Series or DataFrame Median of values within each group. """ + numeric_only = self._resolve_numeric_only(numeric_only) + result = self._cython_agg_general( "median", alt=lambda x: Series(x).median(numeric_only=numeric_only), @@ -1719,8 +1752,9 @@ def var(self, ddof: int = 1): Variance of values within each group. 
""" if ddof == 1: + numeric_only = self._resolve_numeric_only(lib.no_default) return self._cython_agg_general( - "var", alt=lambda x: Series(x).var(ddof=ddof) + "var", alt=lambda x: Series(x).var(ddof=ddof), numeric_only=numeric_only ) else: func = lambda x: x.var(ddof=ddof) @@ -1785,7 +1819,10 @@ def size(self) -> FrameOrSeriesUnion: @final @doc(_groupby_agg_method_template, fname="sum", no=True, mc=0) - def sum(self, numeric_only: bool = True, min_count: int = 0): + def sum( + self, numeric_only: bool | lib.NoDefault = lib.no_default, min_count: int = 0 + ): + numeric_only = self._resolve_numeric_only(numeric_only) # If we are grouping on categoricals we want unobserved categories to # return zero, rather than the default of NaN which the reindexing in @@ -1802,7 +1839,11 @@ def sum(self, numeric_only: bool = True, min_count: int = 0): @final @doc(_groupby_agg_method_template, fname="prod", no=True, mc=0) - def prod(self, numeric_only: bool = True, min_count: int = 0): + def prod( + self, numeric_only: bool | lib.NoDefault = lib.no_default, min_count: int = 0 + ): + numeric_only = self._resolve_numeric_only(numeric_only) + return self._agg_general( numeric_only=numeric_only, min_count=min_count, alias="prod", npfunc=np.prod ) @@ -2731,7 +2772,7 @@ def _get_cythonized_result( how: str, cython_dtype: np.dtype, aggregate: bool = False, - numeric_only: bool = True, + numeric_only: bool | lib.NoDefault = lib.no_default, needs_counts: bool = False, needs_values: bool = False, needs_2d: bool = False, @@ -2799,6 +2840,8 @@ def _get_cythonized_result( ------- `Series` or `DataFrame` with filled values """ + numeric_only = self._resolve_numeric_only(numeric_only) + if result_is_index and aggregate: raise ValueError("'result_is_index' and 'aggregate' cannot both be True!") if post_processing and not callable(post_processing): diff --git a/pandas/tests/groupby/aggregate/test_cython.py b/pandas/tests/groupby/aggregate/test_cython.py index 4a8aabe41b754..cf1177d231e37 100644 
--- a/pandas/tests/groupby/aggregate/test_cython.py +++ b/pandas/tests/groupby/aggregate/test_cython.py @@ -89,12 +89,16 @@ def test_cython_agg_boolean(): def test_cython_agg_nothing_to_agg(): frame = DataFrame({"a": np.random.randint(0, 5, 50), "b": ["foo", "bar"] * 25}) - msg = "No numeric types to aggregate" - with pytest.raises(DataError, match=msg): + with pytest.raises(NotImplementedError, match="does not implement"): + frame.groupby("a")["b"].mean(numeric_only=True) + + with pytest.raises(TypeError, match="Could not convert (foo|bar)*"): frame.groupby("a")["b"].mean() frame = DataFrame({"a": np.random.randint(0, 5, 50), "b": ["foo", "bar"] * 25}) + + msg = "No numeric types to aggregate" with pytest.raises(DataError, match=msg): frame[["b"]].groupby(frame["a"]).mean() @@ -107,9 +111,8 @@ def test_cython_agg_nothing_to_agg_with_dates(): "dates": pd.date_range("now", periods=50, freq="T"), } ) - msg = "No numeric types to aggregate" - with pytest.raises(DataError, match=msg): - frame.groupby("b").dates.mean() + with pytest.raises(NotImplementedError, match="does not implement"): + frame.groupby("b").dates.mean(numeric_only=True) def test_cython_agg_frame_columns(): @@ -170,7 +173,7 @@ def test__cython_agg_general(op, targop): df = DataFrame(np.random.randn(1000)) labels = np.random.randint(0, 50, size=1000).astype(float) - result = df.groupby(labels)._cython_agg_general(op) + result = df.groupby(labels)._cython_agg_general(op, alt=None, numeric_only=True) expected = df.groupby(labels).agg(targop) tm.assert_frame_equal(result, expected) @@ -192,7 +195,7 @@ def test_cython_agg_empty_buckets(op, targop, observed): # calling _cython_agg_general directly, instead of via the user API # which sets different values for min_count, so do that here. 
g = df.groupby(pd.cut(df[0], grps), observed=observed) - result = g._cython_agg_general(op) + result = g._cython_agg_general(op, alt=None, numeric_only=True) g = df.groupby(pd.cut(df[0], grps), observed=observed) expected = g.agg(lambda x: targop(x)) @@ -206,7 +209,7 @@ def test_cython_agg_empty_buckets_nanops(observed): grps = range(0, 25, 5) # add / sum result = df.groupby(pd.cut(df["a"], grps), observed=observed)._cython_agg_general( - "add" + "add", alt=None, numeric_only=True ) intervals = pd.interval_range(0, 20, freq=5) expected = DataFrame( @@ -220,7 +223,7 @@ def test_cython_agg_empty_buckets_nanops(observed): # prod result = df.groupby(pd.cut(df["a"], grps), observed=observed)._cython_agg_general( - "prod" + "prod", alt=None, numeric_only=True ) expected = DataFrame( {"a": [1, 1, 1716, 1]}, diff --git a/pandas/tests/groupby/test_groupby.py b/pandas/tests/groupby/test_groupby.py index 4d8ae8d269eb6..70bdfe92602b2 100644 --- a/pandas/tests/groupby/test_groupby.py +++ b/pandas/tests/groupby/test_groupby.py @@ -1764,7 +1764,15 @@ def test_empty_groupby(columns, keys, values, method, op, request): # GH8093 & GH26411 override_dtype = None - if isinstance(values, Categorical) and len(keys) == 1 and method == "apply": + if ( + isinstance(values, Categorical) + and not isinstance(columns, list) + and op in ["sum", "prod"] + and method != "apply" + ): + # handled below GH#41291 + pass + elif isinstance(values, Categorical) and len(keys) == 1 and method == "apply": mark = pytest.mark.xfail(raises=TypeError, match="'str' object is not callable") request.node.add_marker(mark) elif ( @@ -1825,11 +1833,36 @@ def test_empty_groupby(columns, keys, values, method, op, request): df = df.iloc[:0] gb = df.groupby(keys)[columns] - if method == "attr": - result = getattr(gb, op)() - else: - result = getattr(gb, method)(op) + def get_result(): + if method == "attr": + return getattr(gb, op)() + else: + return getattr(gb, method)(op) + + if columns == "C": + # i.e. 
SeriesGroupBy + if op in ["prod", "sum"]: + # ops that require more than just ordered-ness + if method != "apply": + # FIXME: apply goes through different code path + if df.dtypes[0].kind == "M": + # GH#41291 + # datetime64 -> prod and sum are invalid + msg = "datetime64 type does not support" + with pytest.raises(TypeError, match=msg): + get_result() + + return + elif isinstance(values, Categorical): + # GH#41291 + msg = "category type does not support" + with pytest.raises(TypeError, match=msg): + get_result() + + return + + result = get_result() expected = df.set_index(keys)[columns] if override_dtype is not None: expected = expected.astype(override_dtype) diff --git a/pandas/tests/resample/test_datetime_index.py b/pandas/tests/resample/test_datetime_index.py index 296182a6bdbea..abe834b9fff17 100644 --- a/pandas/tests/resample/test_datetime_index.py +++ b/pandas/tests/resample/test_datetime_index.py @@ -61,7 +61,7 @@ def test_custom_grouper(index): g.ohlc() # doesn't use _cython_agg_general funcs = ["add", "mean", "prod", "min", "max", "var"] for f in funcs: - g._cython_agg_general(f) + g._cython_agg_general(f, alt=None, numeric_only=True) b = Grouper(freq=Minute(5), closed="right", label="right") g = s.groupby(b) @@ -69,7 +69,7 @@ def test_custom_grouper(index): g.ohlc() # doesn't use _cython_agg_general funcs = ["add", "mean", "prod", "min", "max", "var"] for f in funcs: - g._cython_agg_general(f) + g._cython_agg_general(f, alt=None, numeric_only=True) assert g.ngroups == 2593 assert notna(g.mean()).all() @@ -417,7 +417,7 @@ def test_resample_frame_basic(): # check all cython functions work funcs = ["add", "mean", "prod", "min", "max", "var"] for f in funcs: - g._cython_agg_general(f) + g._cython_agg_general(f, alt=None, numeric_only=True) result = df.resample("A").mean() tm.assert_series_equal(result["A"], df["A"].resample("A").mean())
- [ ] closes #xxxx - [x] tests added / passed - [x] Ensure all linting tests pass, see [here](https://pandas.pydata.org/pandas-docs/dev/development/contributing.html#code-standards) for how to run them - [x] whatsnew entry This addresses the SeriesGroupBy (easier) half of #41291: 1) When we have numeric_only=True and non-numeric data in a SeriesGroupBy reduction, raise NotImplementedError (matching Series behavior) instead of falling back to an often-incorrect alternative (xref #41341) 2) When the user doesn't explicitly pass `numeric_only`, change the default to False for SeriesGroupBy, leaving DataFrameGroupBy unaffected.
https://api.github.com/repos/pandas-dev/pandas/pulls/41342
2021-05-05T22:18:43Z
2021-05-26T10:16:16Z
2021-05-26T10:16:16Z
2021-05-26T17:15:55Z
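The Series behavior that the SeriesGroupBy change aims to match — a dtype-invalid reduction raising `TypeError` rather than returning an empty result — can be sketched as:

```python
import pandas as pd

s = pd.Series(pd.date_range("2021-01-01", periods=3))
# prod is not a meaningful reduction for datetime64 data
try:
    s.prod()
    raised = False
except TypeError:
    raised = True
assert raised
```

After this PR, `s.groupby(...).prod()` raises the same `TypeError` instead of silently returning an empty Series.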
TST: xfail incorrect test_empty_groupby
diff --git a/pandas/tests/groupby/test_groupby.py b/pandas/tests/groupby/test_groupby.py index abfa2a23a4402..f716a3a44cd54 100644 --- a/pandas/tests/groupby/test_groupby.py +++ b/pandas/tests/groupby/test_groupby.py @@ -1724,27 +1724,82 @@ def test_pivot_table_values_key_error(): [0], [0.0], ["a"], - [Categorical([0])], + Categorical([0]), [to_datetime(0)], - [date_range(0, 1, 1, tz="US/Eastern")], - [pd.array([0], dtype="Int64")], - [pd.array([0], dtype="Float64")], - [pd.array([False], dtype="boolean")], + date_range(0, 1, 1, tz="US/Eastern"), + pd.array([0], dtype="Int64"), + pd.array([0], dtype="Float64"), + pd.array([False], dtype="boolean"), ], ) @pytest.mark.parametrize("method", ["attr", "agg", "apply"]) @pytest.mark.parametrize( "op", ["idxmax", "idxmin", "mad", "min", "max", "sum", "prod", "skew"] ) -def test_empty_groupby(columns, keys, values, method, op): +def test_empty_groupby(columns, keys, values, method, op, request): # GH8093 & GH26411 + if isinstance(values, Categorical) and len(keys) == 1 and method == "apply": + mark = pytest.mark.xfail(raises=TypeError, match="'str' object is not callable") + request.node.add_marker(mark) + elif ( + isinstance(values, Categorical) + and len(keys) == 1 + and op in ["idxmax", "idxmin"] + ): + mark = pytest.mark.xfail( + raises=ValueError, match="attempt to get arg(min|max) of an empty sequence" + ) + request.node.add_marker(mark) + elif ( + isinstance(values, Categorical) + and len(keys) == 1 + and not isinstance(columns, list) + ): + mark = pytest.mark.xfail( + raises=TypeError, match="'Categorical' does not implement" + ) + request.node.add_marker(mark) + elif ( + isinstance(values, Categorical) + and len(keys) == 1 + and op in ["mad", "min", "max", "sum", "prod", "skew"] + ): + mark = pytest.mark.xfail( + raises=AssertionError, match="(DataFrame|Series) are different" + ) + request.node.add_marker(mark) + elif ( + isinstance(values, Categorical) + and len(keys) == 2 + and op in ["min", "max", "sum"] + and 
method != "apply" + ): + mark = pytest.mark.xfail( + raises=AssertionError, match="(DataFrame|Series) are different" + ) + request.node.add_marker(mark) + elif ( + isinstance(values, pd.core.arrays.BooleanArray) + and op in ["sum", "prod"] + and method != "apply" + ): + mark = pytest.mark.xfail( + raises=AssertionError, match="(DataFrame|Series) are different" + ) + request.node.add_marker(mark) + override_dtype = None if isinstance(values[0], bool) and op in ("prod", "sum") and method != "apply": # sum/product of bools is an integer override_dtype = "int64" - df = DataFrame([3 * values], columns=list("ABC")) + df = DataFrame({"A": values, "B": values, "C": values}, columns=list("ABC")) + + if hasattr(values, "dtype"): + # check that we did the construction right + assert (df.dtypes == values.dtype).all() + df = df.iloc[:0] gb = df.groupby(keys)[columns]
In addressing #41291, I found that this test isn't constructing the DataFrame it (I think) intends to. When the construction is fixed, a number of the cases fail. This fixes the construction and xfails those cases.
https://api.github.com/repos/pandas-dev/pandas/pulls/41341
2021-05-05T21:40:16Z
2021-05-06T13:50:05Z
2021-05-06T13:50:05Z
2021-05-06T15:08:04Z
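The construction fix in the diff — building the frame column-by-column so extension dtypes survive, rather than via a single list-of-rows — can be checked in isolation:

```python
import pandas as pd

values = pd.array([0], dtype="Int64")
# dict construction keeps the extension dtype in every column,
# matching the `assert (df.dtypes == values.dtype).all()` added above
df = pd.DataFrame({"A": values, "B": values, "C": values})
assert (df.dtypes == values.dtype).all()
```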
TYP: tslibs.parsing.pyi
diff --git a/pandas/_libs/tslibs/parsing.pyi b/pandas/_libs/tslibs/parsing.pyi
new file mode 100644
index 0000000000000..f346204d69d25
--- /dev/null
+++ b/pandas/_libs/tslibs/parsing.pyi
@@ -0,0 +1,81 @@
+from datetime import datetime
+
+import numpy as np
+
+from pandas._libs.tslibs.offsets import BaseOffset
+
+class DateParseError(ValueError): ...
+
+
+def parse_datetime_string(
+    date_string: str,
+    dayfirst: bool = ...,
+    yearfirst: bool = ...,
+    **kwargs,
+) -> datetime: ...
+
+
+def parse_time_string(
+    arg: str,
+    freq: BaseOffset | str | None = ...,
+    dayfirst: bool | None = ...,
+    yearfirst: bool | None = ...,
+) -> tuple[datetime, str]: ...
+
+
+def _does_string_look_like_datetime(py_string: str) -> bool: ...
+
+def quarter_to_myear(year: int, quarter: int, freq: str) -> tuple[int, int]: ...
+
+
+def try_parse_dates(
+    values: np.ndarray,  # object[:]
+    parser=...,
+    dayfirst: bool = ...,
+    default: datetime | None = ...,
+) -> np.ndarray: ...  # np.ndarray[object]
+
+def try_parse_date_and_time(
+    dates: np.ndarray,  # object[:]
+    times: np.ndarray,  # object[:]
+    date_parser=...,
+    time_parser=...,
+    dayfirst: bool = ...,
+    default: datetime | None = ...,
+) -> np.ndarray: ...  # np.ndarray[object]
+
+def try_parse_year_month_day(
+    years: np.ndarray,  # object[:]
+    months: np.ndarray,  # object[:]
+    days: np.ndarray,  # object[:]
+) -> np.ndarray: ...  # np.ndarray[object]
+
+
+def try_parse_datetime_components(
+    years: np.ndarray,  # object[:]
+    months: np.ndarray,  # object[:]
+    days: np.ndarray,  # object[:]
+    hours: np.ndarray,  # object[:]
+    minutes: np.ndarray,  # object[:]
+    seconds: np.ndarray,  # object[:]
+) -> np.ndarray: ...  # np.ndarray[object]
+
+
+def format_is_iso(f: str) -> bool: ...
+
+
+def guess_datetime_format(
+    dt_str,
+    dayfirst: bool = ...,
+    dt_str_parse=...,
+    dt_str_split=...,
+) -> str | None: ...
+
+
+def concat_date_cols(
+    date_cols: tuple,
+    keep_trivial_numbers: bool = ...,
+) -> np.ndarray: ...  # np.ndarray[object]
+
+
+def get_rule_month(source: str) -> str: ...
diff --git a/pandas/_libs/tslibs/parsing.pyx b/pandas/_libs/tslibs/parsing.pyx
index 50b1804e1c5f9..9892671f5c18c 100644
--- a/pandas/_libs/tslibs/parsing.pyx
+++ b/pandas/_libs/tslibs/parsing.pyx
@@ -219,7 +219,7 @@ def parse_datetime_string(
     bint dayfirst=False,
     bint yearfirst=False,
     **kwargs,
-):
+) -> datetime:
     """
     Parse datetime string, only returns datetime.
     Also cares special handling matching time patterns.
@@ -281,7 +281,9 @@ def parse_time_string(arg: str, freq=None, dayfirst=None, yearfirst=None):

     Returns
     -------
-    datetime, datetime/dateutil.parser._result, str
+    datetime
+    str
+        Describing resolution of parsed string.
     """
     if is_offset_object(freq):
         freq = freq.rule_code
@@ -595,7 +597,7 @@ cdef dateutil_parse(

 def try_parse_dates(
     object[:] values, parser=None, bint dayfirst=False, default=None,
-):
+) -> np.ndarray:
     cdef:
         Py_ssize_t i, n
         object[:] result
@@ -639,7 +641,7 @@ def try_parse_date_and_time(
     time_parser=None,
     bint dayfirst=False,
     default=None,
-):
+) -> np.ndarray:
     cdef:
         Py_ssize_t i, n
         object[:] result
@@ -675,7 +677,9 @@ def try_parse_date_and_time(
     return result.base  # .base to access underlying ndarray


-def try_parse_year_month_day(object[:] years, object[:] months, object[:] days):
+def try_parse_year_month_day(
+    object[:] years, object[:] months, object[:] days
+) -> np.ndarray:
     cdef:
         Py_ssize_t i, n
         object[:] result
@@ -697,7 +701,7 @@ def try_parse_datetime_components(object[:] years,
                                   object[:] days,
                                   object[:] hours,
                                   object[:] minutes,
-                                  object[:] seconds):
+                                  object[:] seconds) -> np.ndarray:
     cdef:
         Py_ssize_t i, n

@@ -988,7 +992,7 @@ cdef inline object convert_to_unicode(object item, bint keep_trivial_numbers):

 @cython.wraparound(False)
 @cython.boundscheck(False)
-def concat_date_cols(tuple date_cols, bint keep_trivial_numbers=True):
+def concat_date_cols(tuple date_cols, bint keep_trivial_numbers=True) -> np.ndarray:
     """
     Concatenates elements from numpy arrays
     in `date_cols` into strings.
diff --git a/pandas/core/indexes/datetimes.py b/pandas/core/indexes/datetimes.py
index ed0856f3d30a3..0b80e863ef3ea 100644
--- a/pandas/core/indexes/datetimes.py
+++ b/pandas/core/indexes/datetimes.py
@@ -636,7 +636,7 @@ def _validate_partial_date_slice(self, reso: Resolution):
             # See also GH14826
             raise KeyError

-        if reso == "microsecond":
+        if reso.attrname == "microsecond":
             # _partial_date_slice doesn't allow microsecond resolution, but
             # _parsed_string_to_bounds allows it.
             raise KeyError
@@ -748,11 +748,11 @@ def _maybe_cast_slice_bound(self, label, side: str, kind):
         if isinstance(label, str):
             freq = getattr(self, "freqstr", getattr(self, "inferred_freq", None))
             try:
-                parsed, reso = parsing.parse_time_string(label, freq)
+                parsed, reso_str = parsing.parse_time_string(label, freq)
             except parsing.DateParseError as err:
                 raise self._invalid_indexer("slice", label) from err

-            reso = Resolution.from_attrname(reso)
+            reso = Resolution.from_attrname(reso_str)
             lower, upper = self._parsed_string_to_bounds(reso, parsed)
             # lower, upper form the half-open interval:
             #   [parsed, parsed + 1 freq)
@@ -772,8 +772,8 @@ def _maybe_cast_slice_bound(self, label, side: str, kind):

     def _get_string_slice(self, key: str):
         freq = getattr(self, "freqstr", getattr(self, "inferred_freq", None))
-        parsed, reso = parsing.parse_time_string(key, freq)
-        reso = Resolution.from_attrname(reso)
+        parsed, reso_str = parsing.parse_time_string(key, freq)
+        reso = Resolution.from_attrname(reso_str)
         return self._partial_date_slice(reso, parsed)

     def slice_indexer(self, start=None, end=None, step=None, kind=None):
diff --git a/pandas/core/indexes/period.py b/pandas/core/indexes/period.py
index 0c5dbec2094e5..66f2b757438a7 100644
--- a/pandas/core/indexes/period.py
+++ b/pandas/core/indexes/period.py
@@ -490,12 +490,12 @@ def get_loc(self, key, method=None, tolerance=None):
                 pass

             try:
-                asdt, reso = parse_time_string(key, self.freq)
+                asdt, reso_str = parse_time_string(key, self.freq)
             except (ValueError, DateParseError) as err:
                 # A string with invalid format
                 raise KeyError(f"Cannot interpret '{key}' as period") from err

-            reso = Resolution.from_attrname(reso)
+            reso = Resolution.from_attrname(reso_str)
             grp = reso.freq_group.value
             freqn = self.dtype.freq_group_code

@@ -556,8 +556,8 @@ def _maybe_cast_slice_bound(self, label, side: str, kind: str):
             return Period(label, freq=self.freq)
         elif isinstance(label, str):
             try:
-                parsed, reso = parse_time_string(label, self.freq)
-                reso = Resolution.from_attrname(reso)
+                parsed, reso_str = parse_time_string(label, self.freq)
+                reso = Resolution.from_attrname(reso_str)
                 bounds = self._parsed_string_to_bounds(reso, parsed)
                 return bounds[0 if side == "left" else 1]
             except ValueError as err:
@@ -585,8 +585,8 @@ def _validate_partial_date_slice(self, reso: Resolution):
         raise ValueError

     def _get_string_slice(self, key: str):
-        parsed, reso = parse_time_string(key, self.freq)
-        reso = Resolution.from_attrname(reso)
+        parsed, reso_str = parse_time_string(key, self.freq)
+        reso = Resolution.from_attrname(reso_str)
         try:
             return self._partial_date_slice(reso, parsed)
         except KeyError as err:
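Beyond pure annotations, the diff fixes a real bug in `datetimes.py`: `reso` there is a `Resolution` enum member, so the old check `if reso == "microsecond":` could never be true, because an enum member does not compare equal to a plain string. A minimal sketch of the pitfall (the `Resolution` stand-in and its value below are invented for illustration, not pandas' actual class):

```python
from enum import Enum


class Resolution(Enum):
    """Tiny stand-in for pandas' internal Resolution enum (value invented)."""

    RESO_US = 4  # microsecond resolution

    @property
    def attrname(self) -> str:
        # map the enum member back to its string name
        return {Resolution.RESO_US: "microsecond"}[self]


reso = Resolution.RESO_US

# Enum members never compare equal to plain strings, so the old
# `reso == "microsecond"` branch was silently dead code:
print(reso == "microsecond")           # False
# Comparing the string attribute behaves as intended:
print(reso.attrname == "microsecond")  # True
```

Converting the parser's string result into the enum immediately, under a distinct name (`reso_str` vs `reso`), also lets a type checker flag this kind of mix-up, which is what the renames throughout the diff accomplish.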
- [ ] closes #xxxx
- [ ] tests added / passed
- [ ] Ensure all linting tests pass, see [here](https://pandas.pydata.org/pandas-docs/dev/development/contributing.html#code-standards) for how to run them
- [ ] whatsnew entry
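The caller pattern the diff standardizes — `parse_time_string` returns a `(datetime, str)` pair per the new stub, and each caller immediately lifts the string into the `Resolution` enum under a separate name — can be sketched in plain Python (everything below is a simplified stand-in, not pandas' actual implementation):

```python
from datetime import datetime
from enum import Enum


class Resolution(Enum):
    # simplified stand-in for pandas' Resolution enum
    YEAR = "year"
    MONTH = "month"
    DAY = "day"

    @classmethod
    def from_attrname(cls, attrname: str) -> "Resolution":
        return cls(attrname)


def parse_time_string(arg: str) -> tuple[datetime, str]:
    """Toy parser: infer resolution from the number of '-'-separated parts."""
    parts = arg.split("-")
    reso = {1: "year", 2: "month", 3: "day"}[len(parts)]
    padded = parts + ["1"] * (3 - len(parts))
    return datetime(*map(int, padded)), reso


# Keep the raw string and the enum in separately named variables, so a
# type checker sees exactly one type per name:
parsed, reso_str = parse_time_string("2021-03")
reso = Resolution.from_attrname(reso_str)
print(parsed)  # 2021-03-01 00:00:00
print(reso)    # Resolution.MONTH
```

Reusing one variable for both types (`reso = from_attrname(reso)`, as the old code did) forces the annotation `str | Resolution` on it, so splitting the names is what lets the `.pyi` stub's `tuple[datetime, str]` return type flow cleanly through the callers.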
https://api.github.com/repos/pandas-dev/pandas/pulls/40576
2021-03-22T23:28:01Z
2021-03-30T12:57:59Z
2021-03-30T12:57:59Z
2021-03-30T14:02:30Z