BUG/ENH: provide better .loc based semantics for float based indices, continuing not to fallback (related GH236)

diff --git a/doc/source/indexing.rst b/doc/source/indexing.rst
index 9f238c22850b7..bc9dcdccfc2e1 100644
--- a/doc/source/indexing.rst
+++ b/doc/source/indexing.rst
@@ -1199,22 +1199,109 @@ numpy array. For instance,
dflookup = DataFrame(np.random.rand(20,4), columns = ['A','B','C','D'])
dflookup.lookup(list(range(0,10,2)), ['B','C','A','B','D'])
-Setting values in mixed-type DataFrame
---------------------------------------
+.. _indexing.float64index:
-.. _indexing.mixed_type_setting:
+Float64Index
+------------
+
+.. versionadded:: 0.13.0
-Setting values on a mixed-type DataFrame or Panel is supported when using
-scalar values, though setting arbitrary vectors is not yet supported:
+By default, a ``Float64Index`` will be automatically created when passing floating-point or mixed integer-and-floating values in index creation.
+This enables a pure label-based slicing paradigm that makes ``[],ix,loc`` for scalar indexing and slicing work exactly the
+same.
.. ipython:: python
- df2 = df[:4]
- df2['foo'] = 'bar'
- print(df2)
- df2.ix[2] = np.nan
- print(df2)
- print(df2.dtypes)
+ indexf = Index([1.5, 2, 3, 4.5, 5])
+ indexf
+ sf = Series(range(5),index=indexf)
+ sf
+
+Scalar selection for ``[],.ix,.loc`` will always be label-based. An integer will match an equal float index value (e.g. ``3`` is equivalent to ``3.0``).
+
+.. ipython:: python
+
+ sf[3]
+ sf[3.0]
+ sf.ix[3]
+ sf.ix[3.0]
+ sf.loc[3]
+ sf.loc[3.0]
+
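As a rough illustration of the label matching described above (a plain-Python sketch, not pandas internals; the names ``labels``, ``values`` and ``label_lookup`` are invented for the example):

```python
# Hypothetical sketch of label-based scalar lookup: an integer key
# matches an equal float label because Python's == already treats
# 3 and 3.0 as equal.
labels = [1.5, 2.0, 3.0, 4.5, 5.0]
values = [0, 1, 2, 3, 4]

def label_lookup(key):
    # scan the labels by equality; a missing label raises KeyError,
    # mirroring the scalar behavior described in the docs above
    for pos, lab in enumerate(labels):
        if lab == key:
            return values[pos]
    raise KeyError(key)

print(label_lookup(3), label_lookup(3.0))  # both resolve to 2
```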
+The only positional indexing is via ``iloc``
+
+.. ipython:: python
+
+ sf.iloc[3]
+
+A scalar index value that is not found will raise a ``KeyError``
+
+Slicing is ALWAYS on the values of the index for ``[],ix,loc``, and ALWAYS positional with ``iloc``
+
+.. ipython:: python
+
+ sf[2:4]
+ sf.ix[2:4]
+ sf.loc[2:4]
+ sf.iloc[2:4]
+
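The two slicing regimes above can be sketched in plain Python, assuming a sorted float index (the helper names are invented; this is not the pandas implementation):

```python
import bisect

labels = [1.5, 2.0, 3.0, 4.5, 5.0]
values = [0, 1, 2, 3, 4]

def loc_style_slice(start, stop):
    # value-based: resolve both endpoints against the sorted labels;
    # the stop label is included, as with .loc
    i = bisect.bisect_left(labels, start)
    j = bisect.bisect_right(labels, stop)
    return values[i:j]

def iloc_style_slice(start, stop):
    # positional: ordinary Python slicing, stop excluded
    return values[start:stop]

print(loc_style_slice(2, 4))   # labels 2.0 and 3.0 -> [1, 2]
print(iloc_style_slice(2, 4))  # positions 2 and 3  -> [2, 3]
```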
+In float indexes, slicing using floats is allowed
+
+.. ipython:: python
+
+ sf[2.1:4.6]
+ sf.loc[2.1:4.6]
+
+In non-float indexes, slicing using floats will raise a ``TypeError``
+
+.. code-block:: python
+
+ In [1]: Series(range(5))[3.5]
+ TypeError: the label [3.5] is not a proper indexer for this index type (Int64Index)
+
+ In [1]: Series(range(5))[3.5:4.5]
+ TypeError: the slice start [3.5] is not a proper indexer for this index type (Int64Index)
+
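The error path above can be sketched standalone (the helper name is invented; the message format is copied from the docs):

```python
def validate_int_index_slice(sl):
    # floats are rejected as slice bounds on an integer index,
    # mirroring the TypeError shown above
    for part, v in (("start", sl.start), ("stop", sl.stop), ("step", sl.step)):
        if v is not None and isinstance(v, float):
            raise TypeError(
                "the slice %s [%s] is not a proper indexer for this "
                "index type (Int64Index)" % (part, v))
    return sl

validate_int_index_slice(slice(1, 3))       # integer bounds pass through
# validate_int_index_slice(slice(3.5, 4.5)) # would raise TypeError
```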
+Using a scalar float indexer will be deprecated in a future version, but is allowed for now.
+
+.. code-block:: python
+
+ In [3]: Series(range(5))[3.0]
+ Out[3]: 3
+
+Here is a typical use-case for this type of indexing. Imagine that you have a somewhat
+irregular timedelta-like indexing scheme, but the data is recorded as floats. This could,
+for example, be millisecond offsets.
+
+.. ipython:: python
+
+ dfir = concat([DataFrame(randn(5,2),
+ index=np.arange(5) * 250.0,
+ columns=list('AB')),
+ DataFrame(randn(6,2),
+ index=np.arange(4,10) * 250.1,
+ columns=list('AB'))])
+ dfir
+
+Selection operations will then always work on a value basis, for all selection operators.
+
+.. ipython:: python
+
+ dfir[0:1000.4]
+ dfir.loc[0:1001,'A']
+ dfir.loc[1000.4]
+
+You could then easily pick out the first 1 second (1000 ms) of data.
+
+.. ipython:: python
+
+ dfir[0:1000]
+
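The "first second" selection above amounts to a value-based cut on the sorted offsets; a minimal sketch with invented names, assuming the index is sorted:

```python
import bisect

# irregular millisecond-like offsets, as in the example above
offsets = [0.0, 250.0, 500.0, 750.0, 1000.0, 1000.4, 1250.5]

def take_by_value(start, stop):
    # both endpoints resolved by value, stop inclusive (loc-style)
    i = bisect.bisect_left(offsets, start)
    j = bisect.bisect_right(offsets, stop)
    return offsets[i:j]

print(take_by_value(0, 1000))  # everything up to and including 1000.0
```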
+Of course, if you need integer-based selection, use ``iloc``
+
+.. ipython:: python
+
+ dfir.iloc[0:5]
.. _indexing.view_versus_copy:
diff --git a/doc/source/release.rst b/doc/source/release.rst
index eec2e91f0a755..e39116f9023e1 100644
--- a/doc/source/release.rst
+++ b/doc/source/release.rst
@@ -226,6 +226,10 @@ API Changes
add top-level ``to_timedelta`` function
- ``NDFrame`` now is compatible with Python's toplevel ``abs()`` function (:issue:`4821`).
- raise a ``TypeError`` on invalid comparison ops on Series/DataFrame (e.g. integer/datetime) (:issue:`4968`)
+ - Added a new index type, ``Float64Index``. This will be automatically created when passing floating values in index creation.
+ This enables a pure label-based slicing paradigm that makes ``[],ix,loc`` for scalar indexing and slicing work exactly the same.
+   Indexing on other index types is preserved (with positional fallback for ``[],ix``), with the exception that floating-point slicing
+   on a non-``Float64Index`` will raise a ``TypeError``, e.g. ``Series(range(5))[3.5:4.5]`` (:issue:`263`)
Internal Refactoring
~~~~~~~~~~~~~~~~~~~~
diff --git a/doc/source/v0.13.0.txt b/doc/source/v0.13.0.txt
index bda6fa4cdf021..95e5ff62a2abd 100644
--- a/doc/source/v0.13.0.txt
+++ b/doc/source/v0.13.0.txt
@@ -116,6 +116,72 @@ Indexing API Changes
p
p.loc[:,:,'C']
+Float64Index API Change
+~~~~~~~~~~~~~~~~~~~~~~~
+
+ - Added a new index type, ``Float64Index``. This will be automatically created when passing floating values in index creation.
+ This enables a pure label-based slicing paradigm that makes ``[],ix,loc`` for scalar indexing and slicing work exactly the
+ same. See :ref:`the docs<indexing.float64index>`, (:issue:`263`)
+
+ Construction is by default for floating type values.
+
+ .. ipython:: python
+
+ index = Index([1.5, 2, 3, 4.5, 5])
+ index
+ s = Series(range(5),index=index)
+ s
+
+ Scalar selection for ``[],.ix,.loc`` will always be label based. An integer will match an equal float index (e.g. ``3`` is equivalent to ``3.0``)
+
+ .. ipython:: python
+
+ s[3]
+ s.ix[3]
+ s.loc[3]
+
+ The only positional indexing is via ``iloc``
+
+ .. ipython:: python
+
+ s.iloc[3]
+
+ A scalar index that is not found will raise ``KeyError``
+
+ Slicing is ALWAYS on the values of the index, for ``[],ix,loc`` and ALWAYS positional with ``iloc``
+
+ .. ipython:: python
+
+ s[2:4]
+ s.ix[2:4]
+ s.loc[2:4]
+ s.iloc[2:4]
+
+  In float indexes, slicing using floats is allowed
+
+ .. ipython:: python
+
+ s[2.1:4.6]
+ s.loc[2.1:4.6]
+
+ - Indexing on other index types is preserved (with positional fallback for ``[],ix``), with the exception that floating-point slicing
+   on a non-``Float64Index`` will now raise a ``TypeError``.
+
+ .. code-block:: python
+
+ In [1]: Series(range(5))[3.5]
+ TypeError: the label [3.5] is not a proper indexer for this index type (Int64Index)
+
+ In [1]: Series(range(5))[3.5:4.5]
+ TypeError: the slice start [3.5] is not a proper indexer for this index type (Int64Index)
+
+ Using a scalar float indexer will be deprecated in a future version, but is allowed for now.
+
+ .. code-block:: python
+
+ In [3]: Series(range(5))[3.0]
+ Out[3]: 3
+
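The deprecation path described above might look roughly like this in plain Python (invented helper name; a sketch, not the actual pandas implementation):

```python
import warnings

def getitem_with_float_deprecation(values, key):
    # an int-valued float key (3.0) still works but warns;
    # a non-integral float is rejected outright
    if isinstance(key, float):
        if key != int(key):
            raise TypeError("the label [%s] is not a proper indexer" % key)
        warnings.warn("scalar float indexers are deprecated",
                      FutureWarning)
        key = int(key)
    return values[key]

getitem_with_float_deprecation(list(range(5)), 3.0)  # -> 3, with a warning
```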
HDFStore API Changes
~~~~~~~~~~~~~~~~~~~~
diff --git a/doc/source/v0.4.x.txt b/doc/source/v0.4.x.txt
index 249dec5fd647b..5333bb9ffb157 100644
--- a/doc/source/v0.4.x.txt
+++ b/doc/source/v0.4.x.txt
@@ -15,8 +15,7 @@ New Features
with choice of join method (ENH56_)
- :ref:`Added <indexing.get_level_values>` method ``get_level_values`` to
``MultiIndex`` (:issue:`188`)
-- :ref:`Set <indexing.mixed_type_setting>` values in mixed-type
- ``DataFrame`` objects via ``.ix`` indexing attribute (:issue:`135`)
+- Set values in mixed-type ``DataFrame`` objects via ``.ix`` indexing attribute (:issue:`135`)
- Added new ``DataFrame`` :ref:`methods <basics.dtypes>`
``get_dtype_counts`` and property ``dtypes`` (ENHdc_)
- Added :ref:`ignore_index <merging.ignore_index>` option to
diff --git a/pandas/computation/tests/test_eval.py b/pandas/computation/tests/test_eval.py
index 7dc2ebc3d54e1..3554b8a3f81e1 100755
--- a/pandas/computation/tests/test_eval.py
+++ b/pandas/computation/tests/test_eval.py
@@ -778,7 +778,7 @@ def check_chained_cmp_op(self, lhs, cmp1, mid, cmp2, rhs):
class TestAlignment(object):
index_types = 'i', 'u', 'dt'
- lhs_index_types = index_types + ('f', 's') # 'p'
+ lhs_index_types = index_types + ('s',) # 'p'
def check_align_nested_unary_op(self, engine, parser):
skip_if_no_ne(engine)
diff --git a/pandas/core/api.py b/pandas/core/api.py
index 14af72a2a762a..b4afe90d46842 100644
--- a/pandas/core/api.py
+++ b/pandas/core/api.py
@@ -8,7 +8,7 @@
from pandas.core.categorical import Categorical, Factor
from pandas.core.format import (set_printoptions, reset_printoptions,
set_eng_float_format)
-from pandas.core.index import Index, Int64Index, MultiIndex
+from pandas.core.index import Index, Int64Index, Float64Index, MultiIndex
from pandas.core.series import Series, TimeSeries
from pandas.core.frame import DataFrame
diff --git a/pandas/core/frame.py b/pandas/core/frame.py
index 0fd02c2bdc3a4..c98790fdc38ff 100644
--- a/pandas/core/frame.py
+++ b/pandas/core/frame.py
@@ -2050,7 +2050,7 @@ def eval(self, expr, **kwargs):
kwargs['local_dict'] = _ensure_scope(resolvers=resolvers, **kwargs)
return _eval(expr, **kwargs)
- def _slice(self, slobj, axis=0, raise_on_error=False):
+ def _slice(self, slobj, axis=0, raise_on_error=False, typ=None):
axis = self._get_block_manager_axis(axis)
new_data = self._data.get_slice(
slobj, axis=axis, raise_on_error=raise_on_error)
diff --git a/pandas/core/index.py b/pandas/core/index.py
index 734a6ee15307d..7f136450daf6e 100644
--- a/pandas/core/index.py
+++ b/pandas/core/index.py
@@ -14,7 +14,7 @@
from pandas.util.decorators import cache_readonly, deprecate
from pandas.core.common import isnull
import pandas.core.common as com
-from pandas.core.common import _values_from_object
+from pandas.core.common import _values_from_object, is_float, is_integer
from pandas.core.config import get_option
@@ -49,10 +49,8 @@ def _shouldbe_timestamp(obj):
or tslib.is_datetime64_array(obj)
or tslib.is_timestamp_array(obj))
-
_Identity = object
-
class Index(FrozenNDArray):
"""
@@ -160,8 +158,8 @@ def __new__(cls, data, dtype=None, copy=False, name=None, fastpath=False,
subarr = subarr.copy()
elif np.isscalar(data):
- raise TypeError('Index(...) must be called with a collection '
- 'of some kind, %s was passed' % repr(data))
+ cls._scalar_data_error(data)
+
else:
# other iterable of some kind
subarr = com._asarray_tuplesafe(data, dtype=object)
@@ -170,6 +168,8 @@ def __new__(cls, data, dtype=None, copy=False, name=None, fastpath=False,
inferred = lib.infer_dtype(subarr)
if inferred == 'integer':
return Int64Index(subarr.astype('i8'), copy=copy, name=name)
+ elif inferred in ['floating','mixed-integer-float']:
+ return Float64Index(subarr, copy=copy, name=name)
elif inferred != 'string':
if (inferred.startswith('datetime') or
tslib.is_timestamp_array(subarr)):
@@ -183,6 +183,30 @@ def __new__(cls, data, dtype=None, copy=False, name=None, fastpath=False,
subarr._set_names([name])
return subarr
+ # construction helpers
+ @classmethod
+ def _scalar_data_error(cls, data):
+ raise TypeError('{0}(...) must be called with a collection '
+ 'of some kind, {1} was passed'.format(cls.__name__,repr(data)))
+
+ @classmethod
+ def _string_data_error(cls, data):
+ raise TypeError('String dtype not supported, you may need '
+ 'to explicitly cast to a numeric type')
+
+ @classmethod
+ def _coerce_to_ndarray(cls, data):
+
+ if not isinstance(data, np.ndarray):
+ if np.isscalar(data):
+ cls._scalar_data_error(data)
+
+ # other iterable of some kind
+ if not isinstance(data, (list, tuple)):
+ data = list(data)
+ data = np.asarray(data)
+ return data
+
def __array_finalize__(self, obj):
self._reset_identity()
if not isinstance(obj, type(self)):
@@ -374,12 +398,137 @@ def is_lexsorted_for_tuple(self, tup):
def is_unique(self):
return self._engine.is_unique
+ def is_integer(self):
+ return self.inferred_type in ['integer']
+
+ def is_floating(self):
+ return self.inferred_type in ['floating','mixed-integer-float']
+
def is_numeric(self):
return self.inferred_type in ['integer', 'floating']
+ def is_mixed(self):
+ return 'mixed' in self.inferred_type
+
def holds_integer(self):
return self.inferred_type in ['integer', 'mixed-integer']
+ def _convert_scalar_indexer(self, key, typ=None):
+ """ convert a scalar indexer, right now we are converting floats -> ints
+ if the index supports it """
+
+ def to_int():
+ ikey = int(key)
+ if ikey != key:
+ self._convert_indexer_error(key, 'label')
+ return ikey
+
+ if typ == 'iloc':
+ if not (is_integer(key) or is_float(key)):
+ self._convert_indexer_error(key, 'label')
+ return to_int()
+
+ if is_float(key):
+ return to_int()
+
+ return key
+
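The float-to-int coercion performed by ``_convert_scalar_indexer`` above can be sketched standalone (invented helper name; error text abbreviated):

```python
def coerce_scalar_key(key, typ=None):
    # mirrors the logic above: for iloc, only ints or int-valued
    # floats are acceptable; elsewhere an int-valued float (3.0)
    # is coerced to an int (3), and 3.5 is rejected
    def to_int():
        ikey = int(key)
        if ikey != key:
            raise TypeError("the label [%s] is not a proper indexer" % key)
        return ikey

    if typ == 'iloc':
        if not isinstance(key, (int, float)):
            raise TypeError("the label [%s] is not a proper indexer" % key)
        return to_int()

    if isinstance(key, float):
        return to_int()
    return key

coerce_scalar_key(3.0)            # coerced to the int 3
coerce_scalar_key(3, typ='iloc')  # already a valid position
```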
+ def _validate_slicer(self, key, f):
+ """ validate and raise if needed on a slice indexers according to the
+ passed in function """
+
+ if not f(key.start):
+ self._convert_indexer_error(key.start, 'slice start value')
+ if not f(key.stop):
+ self._convert_indexer_error(key.stop, 'slice stop value')
+ if not f(key.step):
+ self._convert_indexer_error(key.step, 'slice step value')
+
+ def _convert_slice_indexer_iloc(self, key):
+ """ convert a slice indexer for iloc only """
+ self._validate_slicer(key, lambda v: v is None or is_integer(v))
+ return key
+
+ def _convert_slice_indexer_getitem(self, key, is_index_slice=False):
+ """ called from the getitem slicers, determine how to treat the key
+ whether positional or not """
+ if self.is_integer() or is_index_slice:
+ return key
+ return self._convert_slice_indexer(key)
+
+ def _convert_slice_indexer(self, key, typ=None):
+ """ convert a slice indexer. disallow floats in the start/stop/step """
+
+ # validate slicers
+ def validate(v):
+ if v is None or is_integer(v):
+ return True
+
+            # disallow floats
+ elif is_float(v):
+ return False
+
+ return True
+
+ self._validate_slicer(key, validate)
+
+ # figure out if this is a positional indexer
+ start, stop, step = key.start, key.stop, key.step
+
+ def is_int(v):
+ return v is None or is_integer(v)
+
+ is_null_slice = start is None and stop is None
+ is_index_slice = is_int(start) and is_int(stop)
+ is_positional = is_index_slice and not self.is_integer()
+
+ if typ == 'iloc':
+ return self._convert_slice_indexer_iloc(key)
+ elif typ == 'getitem':
+ return self._convert_slice_indexer_getitem(key, is_index_slice=is_index_slice)
+
+ # convert the slice to an indexer here
+
+ # if we are mixed and have integers
+ try:
+ if is_positional and self.is_mixed():
+ if start is not None:
+ i = self.get_loc(start)
+ if stop is not None:
+ j = self.get_loc(stop)
+ is_positional = False
+ except KeyError:
+ if self.inferred_type == 'mixed-integer-float':
+ raise
+
+ if is_null_slice:
+ indexer = key
+ elif is_positional:
+ indexer = key
+ else:
+ try:
+ indexer = self.slice_indexer(start, stop, step)
+ except Exception:
+ if is_index_slice:
+ if self.is_integer():
+ raise
+ else:
+ indexer = key
+ else:
+ raise
+
+ return indexer
+
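The decision in ``_convert_slice_indexer`` above between treating a slice positionally or by label can be distilled into a small sketch (invented names; the mixed-index fallback and iloc/getitem branches are omitted):

```python
def classify_slice(start, stop, index_is_integer):
    # distills the flags computed above: a null slice passes through,
    # an all-int slice on a non-integer index is treated positionally,
    # everything else resolves against the labels
    def is_int(v):
        return v is None or isinstance(v, int)

    if start is None and stop is None:
        return 'null'
    if is_int(start) and is_int(stop) and not index_is_integer:
        return 'positional'
    return 'label'

classify_slice(2, 4, index_is_integer=False)  # positional on e.g. a string index
classify_slice(2, 4, index_is_integer=True)   # label-based on an int index
```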
+ def _convert_list_indexer(self, key, typ=None):
+ """ convert a list indexer. these should be locations """
+ return key
+
+ def _convert_indexer_error(self, key, msg=None):
+ if msg is None:
+ msg = 'label'
+ raise TypeError("the {0} [{1}] is not a proper indexer for this index type ({2})".format(msg,
+ key,
+ self.__class__.__name__))
def get_duplicates(self):
from collections import defaultdict
counter = defaultdict(lambda: 0)
@@ -858,6 +1007,11 @@ def get_value(self, series, key):
"""
s = _values_from_object(series)
k = _values_from_object(key)
+
+ # prevent integer truncation bug in indexing
+ if is_float(k) and not self.is_floating():
+ raise KeyError
+
try:
return self._engine.get_value(s, k)
except KeyError as e1:
@@ -1323,6 +1477,11 @@ def slice_indexer(self, start=None, end=None, step=None):
# return a slice
if np.isscalar(start_slice) and np.isscalar(end_slice):
+
+ # degenerate cases
+ if start is None and end is None:
+ return slice(None, None, step)
+
return slice(start_slice, end_slice, step)
# loc indexers
@@ -1488,22 +1647,13 @@ def __new__(cls, data, dtype=None, copy=False, name=None, fastpath=False):
if not isinstance(data, np.ndarray):
if np.isscalar(data):
- raise ValueError('Index(...) must be called with a collection '
- 'of some kind, %s was passed' % repr(data))
+ cls._scalar_data_error(data)
- if not isinstance(data, np.ndarray):
- if np.isscalar(data):
- raise ValueError('Index(...) must be called with a collection '
- 'of some kind, %s was passed' % repr(data))
-
- # other iterable of some kind
- if not isinstance(data, (list, tuple)):
- data = list(data)
- data = np.asarray(data)
+ data = cls._coerce_to_ndarray(data)
if issubclass(data.dtype.type, compat.string_types):
- raise TypeError('String dtype not supported, you may need '
- 'to explicitly cast to int')
+ cls._string_data_error(data)
+
elif issubclass(data.dtype.type, np.integer):
# don't force the upcast as we may be dealing
# with a platform int
@@ -1524,8 +1674,8 @@ def __new__(cls, data, dtype=None, copy=False, name=None, fastpath=False):
data = np.asarray(data)
if issubclass(data.dtype.type, compat.string_types):
- raise TypeError('String dtype not supported, you may need '
- 'to explicitly cast to int')
+ cls._string_data_error(data)
+
elif issubclass(data.dtype.type, np.integer):
# don't force the upcast as we may be dealing
# with a platform int
@@ -1581,6 +1731,123 @@ def _wrap_joined_index(self, joined, other):
return Int64Index(joined, name=name)
+class Float64Index(Index):
+ """
+ Immutable ndarray implementing an ordered, sliceable set. The basic object
+ storing axis labels for all pandas objects. Float64Index is a special case of `Index`
+ with purely floating point labels.
+
+ Parameters
+ ----------
+ data : array-like (1-dimensional)
+ dtype : NumPy dtype (default: object)
+ copy : bool
+ Make a copy of input ndarray
+ name : object
+ Name to be stored in the index
+
+ Note
+ ----
+ An Index instance can **only** contain hashable objects
+ """
+
+    # when this is no longer object dtype this can be changed
+ #_engine_type = _index.Float64Engine
+
+ def __new__(cls, data, dtype=None, copy=False, name=None, fastpath=False):
+
+ if fastpath:
+ subarr = data.view(cls)
+ subarr.name = name
+ return subarr
+
+ if not isinstance(data, np.ndarray):
+ if np.isscalar(data):
+ cls._scalar_data_error(data)
+
+ data = cls._coerce_to_ndarray(data)
+
+ if issubclass(data.dtype.type, compat.string_types):
+ cls._string_data_error(data)
+
+ if dtype is None:
+ dtype = np.float64
+
+ try:
+ subarr = np.array(data, dtype=dtype, copy=copy)
+ except:
+ raise TypeError('Unsafe NumPy casting, you must '
+ 'explicitly cast')
+
+ # coerce to object for storage
+ if not subarr.dtype == np.object_:
+ subarr = subarr.astype(object)
+
+ subarr = subarr.view(cls)
+ subarr.name = name
+ return subarr
+
+ @property
+ def inferred_type(self):
+ return 'floating'
+
+ def astype(self, dtype):
+ if np.dtype(dtype) != np.object_:
+ raise TypeError(
+ "Setting %s dtype to anything other than object is not supported" % self.__class__)
+ return Index(self.values,name=self.name,dtype=object)
+
+ def _convert_scalar_indexer(self, key, typ=None):
+
+ if typ == 'iloc':
+ return super(Float64Index, self)._convert_scalar_indexer(key, typ=typ)
+ return key
+
+ def _convert_slice_indexer(self, key, typ=None):
+ """ convert a slice indexer, by definition these are labels
+ unless we are iloc """
+ if typ == 'iloc':
+ return self._convert_slice_indexer_iloc(key)
+ elif typ == 'getitem':
+ pass
+
+ # allow floats here
+ self._validate_slicer(key, lambda v: v is None or is_integer(v) or is_float(v))
+
+ # translate to locations
+ return self.slice_indexer(key.start,key.stop,key.step)
+
+ def get_value(self, series, key):
+ """ we always want to get an index value, never a value """
+ if not np.isscalar(key):
+ raise InvalidIndexError
+
+ from pandas.core.indexing import _maybe_droplevels
+ from pandas.core.series import Series
+
+ k = _values_from_object(key)
+ loc = self.get_loc(k)
+ new_values = series.values[loc]
+ if np.isscalar(new_values):
+ return new_values
+
+ new_index = self[loc]
+ new_index = _maybe_droplevels(new_index, k)
+ return Series(new_values, index=new_index, name=series.name)
+
+ def equals(self, other):
+ """
+ Determines if two Index objects contain the same elements.
+ """
+ if self is other:
+ return True
+
+ try:
+ return np.array_equal(self, other)
+ except TypeError:
+ # e.g. fails in numpy 1.6 with DatetimeIndex #1681
+ return False
+
class MultiIndex(Index):
"""
@@ -1801,6 +2068,14 @@ def __unicode__(self):
def __len__(self):
return len(self.labels[0])
+ def _convert_slice_indexer(self, key, typ=None):
+ """ convert a slice indexer. disallow floats in the start/stop/step """
+
+ if typ == 'iloc':
+ return self._convert_slice_indexer_iloc(key)
+
+ return super(MultiIndex,self)._convert_slice_indexer(key, typ=typ)
+
def _get_names(self):
return FrozenList(level.name for level in self.levels)
diff --git a/pandas/core/indexing.py b/pandas/core/indexing.py
index cb738df6966da..afbeb53d857e2 100644
--- a/pandas/core/indexing.py
+++ b/pandas/core/indexing.py
@@ -6,7 +6,7 @@
from pandas.compat import range, zip
import pandas.compat as compat
import pandas.core.common as com
-from pandas.core.common import (_is_bool_indexer,
+from pandas.core.common import (_is_bool_indexer, is_integer_dtype,
ABCSeries, ABCDataFrame, ABCPanel)
import pandas.lib as lib
@@ -16,7 +16,7 @@
def get_indexers_list():
return [
- ('ix' ,_NDFrameIndexer),
+ ('ix' ,_IXIndexer ),
('iloc',_iLocIndexer ),
('loc' ,_LocIndexer ),
('at' ,_AtIndexer ),
@@ -32,6 +32,7 @@ class IndexingError(Exception):
class _NDFrameIndexer(object):
+ _valid_types = None
_exception = KeyError
def __init__(self, obj, name):
@@ -68,8 +69,8 @@ def _get_label(self, label, axis=0):
def _get_loc(self, key, axis=0):
return self.obj._ixs(key, axis=axis)
- def _slice(self, obj, axis=0, raise_on_error=False):
- return self.obj._slice(obj, axis=axis, raise_on_error=raise_on_error)
+ def _slice(self, obj, axis=0, raise_on_error=False, typ=None):
+ return self.obj._slice(obj, axis=axis, raise_on_error=raise_on_error, typ=typ)
def __setitem__(self, key, value):
# kludgetastic
@@ -92,8 +93,16 @@ def __setitem__(self, key, value):
self._setitem_with_indexer(indexer, value)
+ def _has_valid_type(self, k, axis):
+ raise NotImplementedError()
+
def _has_valid_tuple(self, key):
- pass
+ """ check the key for valid keys across my indexer """
+ for i, k in enumerate(key):
+ if i >= self.obj.ndim:
+ raise IndexingError('Too many indexers')
+ if not self._has_valid_type(k,i):
+ raise ValueError("Location based indexing can only have [%s] types" % self._valid_types)
def _convert_tuple(self, key, is_setter=False):
keyidx = []
@@ -102,6 +111,17 @@ def _convert_tuple(self, key, is_setter=False):
keyidx.append(idx)
return tuple(keyidx)
+ def _convert_scalar_indexer(self, key, axis):
+ # if we are accessing via lowered dim, use the last dim
+ ax = self.obj._get_axis(min(axis,self.ndim-1))
+ # a scalar
+ return ax._convert_scalar_indexer(key, typ=self.name)
+
+ def _convert_slice_indexer(self, key, axis):
+ # if we are accessing via lowered dim, use the last dim
+ ax = self.obj._get_axis(min(axis,self.ndim-1))
+ return ax._convert_slice_indexer(key, typ=self.name)
+
def _has_valid_setitem_indexer(self, indexer):
return True
@@ -228,7 +248,9 @@ def _setitem_with_indexer(self, indexer, value):
# if we have a partial multiindex, then need to adjust the plane indexer here
if len(labels) == 1 and isinstance(self.obj[labels[0]].index,MultiIndex):
- index = self.obj[labels[0]].index
+ item = labels[0]
+ obj = self.obj[item]
+ index = obj.index
idx = indexer[:info_axis][0]
try:
if idx in index:
@@ -238,8 +260,19 @@ def _setitem_with_indexer(self, indexer, value):
plane_indexer = tuple([idx]) + indexer[info_axis + 1:]
lplane_indexer = _length_of_indexer(plane_indexer[0],index)
+ # require that we are setting the right number of values that we are indexing
if is_list_like(value) and lplane_indexer != len(value):
- raise ValueError("cannot set using a multi-index selection indexer with a different length than the value")
+
+ if len(obj[idx]) != len(value):
+ raise ValueError("cannot set using a multi-index selection indexer with a different length than the value")
+
+ # we can directly set the series here
+ # as we select a slice indexer on the mi
+ idx = index._convert_slice_indexer(idx)
+ obj = obj.copy()
+ obj._data = obj._data.setitem(tuple([idx]),value)
+ self.obj[item] = obj
+ return
# non-mi
else:
@@ -546,7 +579,7 @@ def _convert_for_reindex(self, key, axis=0):
# asarray can be unsafe, NumPy strings are weird
keyarr = _asarray_tuplesafe(key)
- if _is_integer_dtype(keyarr) and not _is_integer_index(labels):
+ if is_integer_dtype(keyarr) and not labels.is_integer():
keyarr = com._ensure_platform_int(keyarr)
return labels.take(keyarr)
@@ -610,6 +643,8 @@ def _getitem_lowerdim(self, tup):
raise IndexingError('not applicable')
def _getitem_axis(self, key, axis=0):
+
+ self._has_valid_type(key, axis)
labels = self.obj._get_axis(axis)
if isinstance(key, slice):
return self._get_slice_axis(key, axis=axis)
@@ -626,10 +661,11 @@ def _getitem_axis(self, key, axis=0):
try:
return self._get_label(key, axis=axis)
except (KeyError, TypeError):
- if _is_integer_index(self.obj.index.levels[0]):
+ if self.obj.index.levels[0].is_integer():
raise
- if not _is_integer_index(labels):
+ # this is the fallback! (for a non-float, non-integer index)
+ if not labels.is_floating() and not labels.is_integer():
return self._get_loc(key, axis=axis)
return self._get_label(key, axis=axis)
@@ -658,7 +694,7 @@ def _reindex(keys, level=None):
# asarray can be unsafe, NumPy strings are weird
keyarr = _asarray_tuplesafe(key)
- if _is_integer_dtype(keyarr):
+ if is_integer_dtype(keyarr) and not labels.is_floating():
if labels.inferred_type != 'integer':
keyarr = np.where(keyarr < 0,
len(labels) + keyarr, keyarr)
@@ -747,7 +783,7 @@ def _convert_to_indexer(self, obj, axis=0, is_setter=False):
- No, prefer label-based indexing
"""
labels = self.obj._get_axis(axis)
- is_int_index = _is_integer_index(labels)
+ is_int_index = labels.is_integer()
if com.is_integer(obj) and not is_int_index:
@@ -765,52 +801,7 @@ def _convert_to_indexer(self, obj, axis=0, is_setter=False):
pass
if isinstance(obj, slice):
- ltype = labels.inferred_type
-
- # in case of providing all floats, use label-based indexing
- float_slice = (labels.inferred_type == 'floating'
- and _is_float_slice(obj))
-
- # floats that are within tolerance of int used as positions
- int_slice = _is_index_slice(obj)
-
- null_slice = obj.start is None and obj.stop is None
-
- # could have integers in the first level of the MultiIndex,
- # in which case we wouldn't want to do position-based slicing
- position_slice = (int_slice
- and not ltype == 'integer'
- and not isinstance(labels, MultiIndex)
- and not float_slice)
-
- start, stop = obj.start, obj.stop
-
- # last ditch effort: if we are mixed and have integers
- try:
- if position_slice and 'mixed' in ltype:
- if start is not None:
- i = labels.get_loc(start)
- if stop is not None:
- j = labels.get_loc(stop)
- position_slice = False
- except KeyError:
- if ltype == 'mixed-integer-float':
- raise
-
- if null_slice or position_slice:
- indexer = obj
- else:
- try:
- indexer = labels.slice_indexer(start, stop, obj.step)
- except Exception:
- if _is_index_slice(obj):
- if ltype == 'integer':
- raise
- indexer = obj
- else:
- raise
-
- return indexer
+ return self._convert_slice_indexer(obj, axis)
elif _is_list_like(obj):
if com._is_bool_indexer(obj):
@@ -824,7 +815,7 @@ def _convert_to_indexer(self, obj, axis=0, is_setter=False):
objarr = _asarray_tuplesafe(obj)
# If have integer labels, defer to label-based indexing
- if _is_integer_dtype(objarr) and not is_int_index:
+ if is_integer_dtype(objarr) and not is_int_index:
if labels.inferred_type != 'integer':
objarr = np.where(objarr < 0,
len(labels) + objarr, objarr)
@@ -879,74 +870,37 @@ def _get_slice_axis(self, slice_obj, axis=0):
if not _need_slice(slice_obj):
return obj
+ indexer = self._convert_slice_indexer(slice_obj, axis)
- labels = obj._get_axis(axis)
-
- ltype = labels.inferred_type
-
- # in case of providing all floats, use label-based indexing
- float_slice = (labels.inferred_type == 'floating'
- and _is_float_slice(slice_obj))
+ if isinstance(indexer, slice):
+ return self._slice(indexer, axis=axis, typ='iloc')
+ else:
+ return self.obj.take(indexer, axis=axis)
- # floats that are within tolerance of int used as positions
- int_slice = _is_index_slice(slice_obj)
+class _IXIndexer(_NDFrameIndexer):
+ """ A primarily location based indexer, with integer fallback """
- null_slice = slice_obj.start is None and slice_obj.stop is None
+ def _has_valid_type(self, key, axis):
+ ax = self.obj._get_axis(axis)
- # could have integers in the first level of the MultiIndex,
- # in which case we wouldn't want to do position-based slicing
- position_slice = (int_slice
- and not ltype == 'integer'
- and not isinstance(labels, MultiIndex)
- and not float_slice)
+ if isinstance(key, slice):
+ return True
- start, stop = slice_obj.start, slice_obj.stop
+ elif com._is_bool_indexer(key):
+ return True
- # last ditch effort: if we are mixed and have integers
- try:
- if position_slice and 'mixed' in ltype:
- if start is not None:
- i = labels.get_loc(start)
- if stop is not None:
- j = labels.get_loc(stop)
- position_slice = False
- except KeyError:
- if ltype == 'mixed-integer-float':
- raise
+ elif _is_list_like(key):
+ return True
- if null_slice or position_slice:
- indexer = slice_obj
else:
- try:
- indexer = labels.slice_indexer(start, stop, slice_obj.step)
- except Exception:
- if _is_index_slice(slice_obj):
- if ltype == 'integer':
- raise
- indexer = slice_obj
- else:
- raise
- if isinstance(indexer, slice):
- return self._slice(indexer, axis=axis)
- else:
- return self.obj.take(indexer, axis=axis)
+ self._convert_scalar_indexer(key, axis)
+
+ return True
class _LocationIndexer(_NDFrameIndexer):
- _valid_types = None
_exception = Exception
- def _has_valid_type(self, k, axis):
- raise NotImplementedError()
-
- def _has_valid_tuple(self, key):
- """ check the key for valid keys across my indexer """
- for i, k in enumerate(key):
- if i >= self.obj.ndim:
- raise ValueError('Too many indexers')
- if not self._has_valid_type(k,i):
- raise ValueError("Location based indexing can only have [%s] types" % self._valid_types)
-
def __getitem__(self, key):
if type(key) is tuple:
return self._getitem_tuple(key)
@@ -974,7 +928,7 @@ def _get_slice_axis(self, slice_obj, axis=0):
indexer = labels.slice_indexer(slice_obj.start, slice_obj.stop, slice_obj.step)
if isinstance(indexer, slice):
- return self._slice(indexer, axis=axis)
+ return self._slice(indexer, axis=axis, typ='iloc')
else:
return self.obj.take(indexer, axis=axis)
@@ -993,18 +947,28 @@ def _has_valid_type(self, key, axis):
if isinstance(key, slice):
- if key.start is not None:
- if key.start not in ax:
- raise KeyError("start bound [%s] is not the [%s]" % (key.start,self.obj._get_axis_name(axis)))
- if key.stop is not None:
- if key.stop not in ax:
- raise KeyError("stop bound [%s] is not in the [%s]" % (key.stop,self.obj._get_axis_name(axis)))
+ if ax.is_floating():
+
+ # allowing keys to be slicers with no fallback
+ pass
+
+ else:
+ if key.start is not None:
+ if key.start not in ax:
+                    raise KeyError("start bound [%s] is not in the [%s]" % (key.start,self.obj._get_axis_name(axis)))
+ if key.stop is not None:
+ if key.stop not in ax:
+ raise KeyError("stop bound [%s] is not in the [%s]" % (key.stop,self.obj._get_axis_name(axis)))
elif com._is_bool_indexer(key):
return True
elif _is_list_like(key):
+ # mi is just a passthru
+ if isinstance(key, tuple) and isinstance(ax, MultiIndex):
+ return True
+
# require all elements in the index
idx = _ensure_index(key)
if not idx.isin(ax).all():
@@ -1014,18 +978,15 @@ def _has_valid_type(self, key, axis):
else:
- # if its empty we want a KeyError here
- if not len(ax):
- raise KeyError("The [%s] axis is empty" % self.obj._get_axis_name(axis))
+ def error():
+ raise KeyError("the label [%s] is not in the [%s]" % (key,self.obj._get_axis_name(axis)))
+ key = self._convert_scalar_indexer(key, axis)
try:
if not key in ax:
- raise KeyError("the label [%s] is not in the [%s]" % (key,self.obj._get_axis_name(axis)))
- except (TypeError):
-
- # if we have a weird type of key/ax
- raise KeyError("the label [%s] is not in the [%s]" % (key,self.obj._get_axis_name(axis)))
-
+ error()
+ except:
+ error()
return True
@@ -1045,6 +1006,7 @@ def _getitem_axis(self, key, axis=0):
return self._getitem_iterable(key, axis=axis)
else:
+ self._has_valid_type(key,axis)
return self._get_label(key, axis=axis)
class _iLocIndexer(_LocationIndexer):
@@ -1092,11 +1054,12 @@ def _get_slice_axis(self, slice_obj, axis=0):
return obj
if isinstance(slice_obj, slice):
- return self._slice(slice_obj, axis=axis, raise_on_error=True)
+ return self._slice(slice_obj, axis=axis, raise_on_error=True, typ='iloc')
else:
return self.obj.take(slice_obj, axis=axis)
def _getitem_axis(self, key, axis=0):
+
if isinstance(key, slice):
self._has_valid_type(key,axis)
return self._get_slice_axis(key, axis=axis)
@@ -1108,8 +1071,13 @@ def _getitem_axis(self, key, axis=0):
# a single integer or a list of integers
else:
- if not (com.is_integer(key) or _is_list_like(key)):
- raise ValueError("Cannot index by location index with a non-integer key")
+ if _is_list_like(key):
+ pass
+ else:
+ key = self._convert_scalar_indexer(key, axis)
+
+ if not com.is_integer(key):
+ raise TypeError("Cannot index by location index with a non-integer key")
return self._get_loc(key,axis=axis)
@@ -1200,14 +1168,7 @@ def _convert_to_index_sliceable(obj, key):
""" if we are index sliceable, then return my slicer, otherwise return None """
idx = obj.index
if isinstance(key, slice):
- idx_type = idx.inferred_type
- if idx_type == 'floating':
- indexer = obj.ix._convert_to_indexer(key, axis=0)
- elif idx_type == 'integer' or _is_index_slice(key):
- indexer = key
- else:
- indexer = obj.ix._convert_to_indexer(key, axis=0)
- return indexer
+ return idx._convert_slice_indexer(key, typ='getitem')
elif isinstance(key, compat.string_types):
@@ -1237,31 +1198,7 @@ def _crit(v):
return not both_none and (_crit(obj.start) and _crit(obj.stop))
-def _is_int_slice(obj):
- def _is_valid_index(x):
- return com.is_integer(x)
-
- def _crit(v):
- return v is None or _is_valid_index(v)
-
- both_none = obj.start is None and obj.stop is None
-
- return not both_none and (_crit(obj.start) and _crit(obj.stop))
-
-
-def _is_float_slice(obj):
- def _is_valid_index(x):
- return com.is_float(x)
-
- def _crit(v):
- return v is None or _is_valid_index(v)
-
- both_none = obj.start is None and obj.stop is None
-
- return not both_none and (_crit(obj.start) and _crit(obj.stop))
-
-
-class _SeriesIndexer(_NDFrameIndexer):
+class _SeriesIndexer(_IXIndexer):
"""
Class to support fancy indexing, potentially using labels
@@ -1286,7 +1223,7 @@ def _get_label(self, key, axis=0):
def _get_loc(self, key, axis=0):
return self.obj.values[key]
- def _slice(self, indexer, axis=0):
+ def _slice(self, indexer, axis=0, typ=None):
return self.obj._get_values(indexer)
def _setitem_with_indexer(self, indexer, value):
@@ -1389,15 +1326,6 @@ def _is_null_slice(obj):
obj.stop is None and obj.step is None)
-def _is_integer_dtype(arr):
- return (issubclass(arr.dtype.type, np.integer) and
- not arr.dtype.type == np.datetime64)
-
-
-def _is_integer_index(index):
- return index.inferred_type == 'integer'
-
-
def _is_label_like(key):
# select a label or row
return not isinstance(key, slice) and not _is_list_like(key)
@@ -1438,6 +1366,9 @@ def _maybe_droplevels(index, key):
# we have dropped too much, so back out
return original_index
else:
- index = index.droplevel(0)
+ try:
+ index = index.droplevel(0)
+ except:
+ pass
return index
diff --git a/pandas/core/panel.py b/pandas/core/panel.py
index 5f90eb9fa31a7..34b65f169b904 100644
--- a/pandas/core/panel.py
+++ b/pandas/core/panel.py
@@ -579,7 +579,7 @@ def _box_item_values(self, key, values):
d = self._construct_axes_dict_for_slice(self._AXIS_ORDERS[1:])
return self._constructor_sliced(values, **d)
- def _slice(self, slobj, axis=0, raise_on_error=False):
+ def _slice(self, slobj, axis=0, raise_on_error=False, typ=None):
new_data = self._data.get_slice(slobj,
axis=axis,
raise_on_error=raise_on_error)
diff --git a/pandas/core/series.py b/pandas/core/series.py
index 942bb700a3718..bf5ec998c9963 100644
--- a/pandas/core/series.py
+++ b/pandas/core/series.py
@@ -901,7 +901,8 @@ def _ixs(self, i, axis=0):
raise
except:
if isinstance(i, slice):
- return self[i]
+ indexer = self.index._convert_slice_indexer(i,typ='iloc')
+ return self._get_values(indexer)
else:
label = self.index[i]
if isinstance(label, Index):
@@ -914,10 +915,10 @@ def _ixs(self, i, axis=0):
def _is_mixed_type(self):
return False
- def _slice(self, slobj, axis=0, raise_on_error=False):
+ def _slice(self, slobj, axis=0, raise_on_error=False, typ=None):
if raise_on_error:
_check_slice_bounds(slobj, self.values)
-
+ slobj = self.index._convert_slice_indexer(slobj,typ=typ or 'getitem')
return self._constructor(self.values[slobj], index=self.index[slobj],
name=self.name)
@@ -935,7 +936,13 @@ def __getitem__(self, key):
elif _is_bool_indexer(key):
pass
else:
+
+ # we can try to coerce the indexer (or this will raise)
+ new_key = self.index._convert_scalar_indexer(key)
+ if type(new_key) != type(key):
+ return self.__getitem__(new_key)
raise
+
except Exception:
raise
@@ -950,14 +957,7 @@ def __getitem__(self, key):
def _get_with(self, key):
# other: fancy integer or otherwise
if isinstance(key, slice):
-
- idx_type = self.index.inferred_type
- if idx_type == 'floating':
- indexer = self.ix._convert_to_indexer(key, axis=0)
- elif idx_type == 'integer' or _is_index_slice(key):
- indexer = key
- else:
- indexer = self.ix._convert_to_indexer(key, axis=0)
+ indexer = self.index._convert_slice_indexer(key,typ='getitem')
return self._get_values(indexer)
else:
if isinstance(key, tuple):
@@ -980,7 +980,7 @@ def _get_with(self, key):
key_type = lib.infer_dtype(key)
if key_type == 'integer':
- if self.index.inferred_type == 'integer':
+ if self.index.is_integer() or self.index.is_floating():
return self.reindex(key)
else:
return self._get_values(key)
@@ -1080,10 +1080,7 @@ def _set_with_engine(self, key, value):
def _set_with(self, key, value):
# other: fancy integer or otherwise
if isinstance(key, slice):
- if self.index.inferred_type == 'integer' or _is_index_slice(key):
- indexer = key
- else:
- indexer = self.ix._convert_to_indexer(key, axis=0)
+ indexer = self.index._convert_slice_indexer(key,typ='getitem')
return self._set_values(indexer, value)
else:
if isinstance(key, tuple):
@@ -2348,7 +2345,7 @@ def sort(self, axis=0, kind='quicksort', order=None, ascending=True):
raise TypeError('This Series is a view of some other array, to '
'sort in-place you must create a copy')
- self[:] = sortedSeries
+ self._data = sortedSeries._data.copy()
self.index = sortedSeries.index
def sort_index(self, ascending=True):
diff --git a/pandas/sparse/frame.py b/pandas/sparse/frame.py
index 7d2571e6c3c74..a1b630dedaaab 100644
--- a/pandas/sparse/frame.py
+++ b/pandas/sparse/frame.py
@@ -374,7 +374,7 @@ def set_value(self, index, col, value):
return dense.to_sparse(kind=self._default_kind,
fill_value=self._default_fill_value)
- def _slice(self, slobj, axis=0, raise_on_error=False):
+ def _slice(self, slobj, axis=0, raise_on_error=False, typ=None):
if axis == 0:
if raise_on_error:
_check_slice_bounds(slobj, self.index)
diff --git a/pandas/tests/test_format.py b/pandas/tests/test_format.py
index e7a52756089cc..723bf022c3f48 100644
--- a/pandas/tests/test_format.py
+++ b/pandas/tests/test_format.py
@@ -1110,10 +1110,10 @@ def test_to_string_float_index(self):
result = df.to_string()
expected = (' 0\n'
'1.5 0\n'
- '2 1\n'
- '3 2\n'
- '4 3\n'
- '5 4')
+ '2.0 1\n'
+ '3.0 2\n'
+ '4.0 3\n'
+ '5.0 4')
self.assertEqual(result, expected)
def test_to_string_ascii_error(self):
diff --git a/pandas/tests/test_frame.py b/pandas/tests/test_frame.py
index 312b5aee18752..82be82ea57dae 100644
--- a/pandas/tests/test_frame.py
+++ b/pandas/tests/test_frame.py
@@ -1254,48 +1254,62 @@ def test_getitem_setitem_float_labels(self):
assert_frame_equal(result, expected)
self.assertEqual(len(result), 2)
- # this should raise an exception
- with tm.assertRaises(KeyError):
- df.ix[1:2]
- with tm.assertRaises(KeyError):
- df.ix[1:2] = 0
+ # loc_float changes this to work properly
+ result = df.ix[1:2]
+ expected = df.iloc[0:2]
+ assert_frame_equal(result, expected)
+
+ df.ix[1:2] = 0
+ result = df[1:2]
+ self.assert_((result==0).all().all())
# #2727
index = Index([1.0, 2.5, 3.5, 4.5, 5.0])
df = DataFrame(np.random.randn(5, 5), index=index)
- # positional slicing!
+ # positional slicing only via iloc!
+ result = df.iloc[1.0:5]
+ expected = df.reindex([2.5, 3.5, 4.5, 5.0])
+ assert_frame_equal(result, expected)
+ self.assertEqual(len(result), 4)
+
+ result = df.iloc[4:5]
+ expected = df.reindex([5.0])
+ assert_frame_equal(result, expected)
+ self.assertEqual(len(result), 1)
+
+ cp = df.copy()
+ cp.iloc[1.0:5] = 0
+ self.assert_((cp.iloc[1.0:5] == 0).values.all())
+ self.assert_((cp.iloc[0:1] == df.iloc[0:1]).values.all())
+
+ cp = df.copy()
+ cp.iloc[4:5] = 0
+ self.assert_((cp.iloc[4:5] == 0).values.all())
+ self.assert_((cp.iloc[0:4] == df.iloc[0:4]).values.all())
+
+ # float slicing
result = df.ix[1.0:5]
+ expected = df
+ assert_frame_equal(result, expected)
+ self.assertEqual(len(result), 5)
+
+ result = df.ix[1.1:5]
expected = df.reindex([2.5, 3.5, 4.5, 5.0])
assert_frame_equal(result, expected)
self.assertEqual(len(result), 4)
- # positional again
- result = df.ix[4:5]
+ result = df.ix[4.51:5]
expected = df.reindex([5.0])
assert_frame_equal(result, expected)
self.assertEqual(len(result), 1)
- # label-based
result = df.ix[1.0:5.0]
expected = df.reindex([1.0, 2.5, 3.5, 4.5, 5.0])
assert_frame_equal(result, expected)
self.assertEqual(len(result), 5)
cp = df.copy()
- # positional slicing!
- cp.ix[1.0:5] = 0
- self.assert_((cp.ix[1.0:5] == 0).values.all())
- self.assert_((cp.ix[0:1] == df.ix[0:1]).values.all())
-
- cp = df.copy()
- # positional again
- cp.ix[4:5] = 0
- self.assert_((cp.ix[4:5] == 0).values.all())
- self.assert_((cp.ix[0:4] == df.ix[0:4]).values.all())
-
- cp = df.copy()
- # label-based
cp.ix[1.0:5.0] = 0
self.assert_((cp.ix[1.0:5.0] == 0).values.all())
@@ -10064,15 +10078,15 @@ def test_reindex_with_nans(self):
index=[100.0, 101.0, np.nan, 102.0, 103.0])
result = df.reindex(index=[101.0, 102.0, 103.0])
- expected = df.ix[[1, 3, 4]]
+ expected = df.iloc[[1, 3, 4]]
assert_frame_equal(result, expected)
result = df.reindex(index=[103.0])
- expected = df.ix[[4]]
+ expected = df.iloc[[4]]
assert_frame_equal(result, expected)
result = df.reindex(index=[101.0])
- expected = df.ix[[1]]
+ expected = df.iloc[[1]]
assert_frame_equal(result, expected)
def test_reindex_multi(self):
diff --git a/pandas/tests/test_index.py b/pandas/tests/test_index.py
index e3c9da3630975..857836fa698ce 100644
--- a/pandas/tests/test_index.py
+++ b/pandas/tests/test_index.py
@@ -12,7 +12,7 @@
import numpy as np
from numpy.testing import assert_array_equal
-from pandas.core.index import Index, Int64Index, MultiIndex, InvalidIndexError
+from pandas.core.index import Index, Float64Index, Int64Index, MultiIndex, InvalidIndexError
from pandas.core.frame import DataFrame
from pandas.core.series import Series
from pandas.util.testing import (assert_almost_equal, assertRaisesRegexp,
@@ -654,6 +654,88 @@ def test_join_self(self):
self.assert_(res is joined)
+class TestFloat64Index(unittest.TestCase):
+ _multiprocess_can_split_ = True
+
+ def setUp(self):
+ self.mixed = Float64Index([1.5, 2, 3, 4, 5])
+ self.float = Float64Index(np.arange(5) * 2.5)
+
+ def check_is_index(self, i):
+ self.assert_(isinstance(i, Index) and not isinstance(i, Float64Index))
+
+ def check_coerce(self, a, b, is_float_index=True):
+ self.assert_(a.equals(b))
+ if is_float_index:
+ self.assert_(isinstance(b, Float64Index))
+ else:
+ self.check_is_index(b)
+
+ def test_constructor(self):
+
+ # explicit construction
+ index = Float64Index([1,2,3,4,5])
+ self.assert_(isinstance(index, Float64Index))
+ self.assert_((index.values == np.array([1,2,3,4,5],dtype='float64')).all())
+ index = Float64Index(np.array([1,2,3,4,5]))
+ self.assert_(isinstance(index, Float64Index))
+ index = Float64Index([1.,2,3,4,5])
+ self.assert_(isinstance(index, Float64Index))
+ index = Float64Index(np.array([1.,2,3,4,5]))
+ self.assert_(isinstance(index, Float64Index))
+ self.assert_(index.dtype == object)
+
+ index = Float64Index(np.array([1.,2,3,4,5]),dtype=np.float32)
+ self.assert_(isinstance(index, Float64Index))
+ self.assert_(index.dtype == object)
+
+ index = Float64Index(np.array([1,2,3,4,5]),dtype=np.float32)
+ self.assert_(isinstance(index, Float64Index))
+ self.assert_(index.dtype == object)
+
+ # nan handling
+ result = Float64Index([np.nan, np.nan])
+ self.assert_(pd.isnull(result.values).all())
+ result = Float64Index(np.array([np.nan]))
+ self.assert_(pd.isnull(result.values).all())
+ result = Index(np.array([np.nan]))
+ self.assert_(pd.isnull(result.values).all())
+
+ def test_constructor_invalid(self):
+
+ # invalid
+ self.assertRaises(TypeError, Float64Index, 0.)
+ self.assertRaises(TypeError, Float64Index, ['a','b',0.])
+ self.assertRaises(TypeError, Float64Index, [Timestamp('20130101')])
+
+ def test_constructor_coerce(self):
+
+ self.check_coerce(self.mixed,Index([1.5, 2, 3, 4, 5]))
+ self.check_coerce(self.float,Index(np.arange(5) * 2.5))
+ self.check_coerce(self.float,Index(np.array(np.arange(5) * 2.5, dtype=object)))
+
+ def test_constructor_explicit(self):
+
+ # these don't auto convert
+ self.check_coerce(self.float,Index((np.arange(5) * 2.5), dtype=object),
+ is_float_index=False)
+ self.check_coerce(self.mixed,Index([1.5, 2, 3, 4, 5],dtype=object),
+ is_float_index=False)
+
+ def test_astype(self):
+
+ result = self.float.astype(object)
+ self.assert_(result.equals(self.float))
+ self.assert_(self.float.equals(result))
+ self.check_is_index(result)
+
+ i = self.mixed.copy()
+ i.name = 'foo'
+ result = i.astype(object)
+ self.assert_(result.equals(i))
+ self.assert_(i.equals(result))
+ self.check_is_index(result)
+
class TestInt64Index(unittest.TestCase):
_multiprocess_can_split_ = True
@@ -676,7 +758,7 @@ def test_constructor(self):
self.assert_(np.array_equal(index, expected))
# scalar raise Exception
- self.assertRaises(ValueError, Int64Index, 5)
+ self.assertRaises(TypeError, Int64Index, 5)
# copy
arr = self.index.values
diff --git a/pandas/tests/test_indexing.py b/pandas/tests/test_indexing.py
index ced4cbdc4dc36..0eab5ab834533 100644
--- a/pandas/tests/test_indexing.py
+++ b/pandas/tests/test_indexing.py
@@ -13,7 +13,7 @@
import pandas as pd
import pandas.core.common as com
from pandas.core.api import (DataFrame, Index, Series, Panel, notnull, isnull,
- MultiIndex, DatetimeIndex, Timestamp)
+ MultiIndex, DatetimeIndex, Float64Index, Timestamp)
from pandas.util.testing import (assert_almost_equal, assert_series_equal,
assert_frame_equal, assert_panel_equal)
from pandas import compat, concat
@@ -1519,6 +1519,309 @@ def test_cache_updating(self):
self.assert_("A+1" in panel.ix[0].columns)
self.assert_("A+1" in panel.ix[1].columns)
+ def test_floating_index_doc_example(self):
+
+ index = Index([1.5, 2, 3, 4.5, 5])
+ s = Series(range(5),index=index)
+ self.assert_(s[3] == 2)
+ self.assert_(s.ix[3] == 2)
+ self.assert_(s.loc[3] == 2)
+ self.assert_(s.iloc[3] == 3)
+
+ def test_floating_index(self):
+
+ # related 236
+ # scalar/slicing of a float index
+ s = Series(np.arange(5), index=np.arange(5) * 2.5)
+
+ # label based slicing
+ result1 = s[1.0:3.0]
+ result2 = s.ix[1.0:3.0]
+ result3 = s.loc[1.0:3.0]
+ assert_series_equal(result1, result2)
+ assert_series_equal(result1, result3)
+
+ # exact indexing when found
+ result1 = s[5.0]
+ result2 = s.loc[5.0]
+ result3 = s.ix[5.0]
+ self.assert_(result1 == result2)
+ self.assert_(result1 == result3)
+
+ result1 = s[5]
+ result2 = s.loc[5]
+ result3 = s.ix[5]
+ self.assert_(result1 == result2)
+ self.assert_(result1 == result3)
+
+ self.assert_(s[5.0] == s[5])
+
+ # value not found (and no fallbacking at all)
+
+ # scalar integers
+ self.assertRaises(KeyError, lambda : s.loc[4])
+ self.assertRaises(KeyError, lambda : s.ix[4])
+ self.assertRaises(KeyError, lambda : s[4])
+
+ # fancy floats/integers create the correct entry (as nan)
+ # fancy tests
+ expected = Series([2, 0], index=Float64Index([5.0, 0.0]))
+ for fancy_idx in [[5.0, 0.0], [5, 0], np.array([5.0, 0.0]), np.array([5, 0])]:
+ assert_series_equal(s[fancy_idx], expected)
+ assert_series_equal(s.loc[fancy_idx], expected)
+ assert_series_equal(s.ix[fancy_idx], expected)
+
+ # all should return the same as we are slicing 'the same'
+ result1 = s.loc[2:5]
+ result2 = s.loc[2.0:5.0]
+ result3 = s.loc[2.0:5]
+ result4 = s.loc[2.1:5]
+ assert_series_equal(result1, result2)
+ assert_series_equal(result1, result3)
+ assert_series_equal(result1, result4)
+
+ # previously this did fallback indexing
+ result1 = s[2:5]
+ result2 = s[2.0:5.0]
+ result3 = s[2.0:5]
+ result4 = s[2.1:5]
+ assert_series_equal(result1, result2)
+ assert_series_equal(result1, result3)
+ assert_series_equal(result1, result4)
+
+ result1 = s.ix[2:5]
+ result2 = s.ix[2.0:5.0]
+ result3 = s.ix[2.0:5]
+ result4 = s.ix[2.1:5]
+ assert_series_equal(result1, result2)
+ assert_series_equal(result1, result3)
+ assert_series_equal(result1, result4)
+
+ # combined test
+ result1 = s.loc[2:5]
+ result2 = s.ix[2:5]
+ result3 = s[2:5]
+
+ assert_series_equal(result1, result2)
+ assert_series_equal(result1, result3)
+
+ # list selection
+ result1 = s[[0.0,5,10]]
+ result2 = s.loc[[0.0,5,10]]
+ result3 = s.ix[[0.0,5,10]]
+ result4 = s.iloc[[0,2,4]]
+ assert_series_equal(result1, result2)
+ assert_series_equal(result1, result3)
+ assert_series_equal(result1, result4)
+
+ result1 = s[[1.6,5,10]]
+ result2 = s.loc[[1.6,5,10]]
+ result3 = s.ix[[1.6,5,10]]
+ assert_series_equal(result1, result2)
+ assert_series_equal(result1, result3)
+ assert_series_equal(result1, Series([np.nan,2,4],index=[1.6,5,10]))
+
+ result1 = s[[0,1,2]]
+ result2 = s.ix[[0,1,2]]
+ result3 = s.loc[[0,1,2]]
+ assert_series_equal(result1, result2)
+ assert_series_equal(result1, result3)
+ assert_series_equal(result1, Series([0.0,np.nan,np.nan],index=[0,1,2]))
+
+ result1 = s.loc[[2.5, 5]]
+ result2 = s.ix[[2.5, 5]]
+ assert_series_equal(result1, result2)
+ assert_series_equal(result1, Series([1,2],index=[2.5,5.0]))
+
+ result1 = s[[2.5]]
+ result2 = s.ix[[2.5]]
+ result3 = s.loc[[2.5]]
+ assert_series_equal(result1, result2)
+ assert_series_equal(result1, result3)
+ assert_series_equal(result1, Series([1],index=[2.5]))
+
+ def test_scalar_indexer(self):
+ # float indexing checked above
+
+ def check_invalid(index, loc=None, iloc=None, ix=None, getitem=None):
+
+ # related 236/4850
+ # trying to access with a float index
+ s = Series(np.arange(len(index)),index=index)
+
+ if iloc is None:
+ iloc = TypeError
+ self.assertRaises(iloc, lambda : s.iloc[3.5])
+ if loc is None:
+ loc = TypeError
+ self.assertRaises(loc, lambda : s.loc[3.5])
+ if ix is None:
+ ix = TypeError
+ self.assertRaises(ix, lambda : s.ix[3.5])
+ if getitem is None:
+ getitem = TypeError
+ self.assertRaises(getitem, lambda : s[3.5])
+
+ for index in [ tm.makeStringIndex, tm.makeUnicodeIndex, tm.makeIntIndex,
+ tm.makeDateIndex, tm.makePeriodIndex ]:
+ check_invalid(index())
+ check_invalid(Index(np.arange(5) * 2.5),loc=KeyError, ix=KeyError, getitem=KeyError)
+
+ def check_getitem(index):
+
+ s = Series(np.arange(len(index)),index=index)
+
+ # positional selection
+ result1 = s[5]
+ result2 = s[5.0]
+ result3 = s.iloc[5]
+ result4 = s.iloc[5.0]
+
+ # by value
+ self.assertRaises(KeyError, lambda : s.loc[5])
+ self.assertRaises(KeyError, lambda : s.loc[5.0])
+
+ # this is fallback, so it works
+ result5 = s.ix[5]
+ result6 = s.ix[5.0]
+ self.assert_(result1 == result2)
+ self.assert_(result1 == result3)
+ self.assert_(result1 == result4)
+ self.assert_(result1 == result5)
+ self.assert_(result1 == result6)
+
+ # all index types except float/int
+ for index in [ tm.makeStringIndex, tm.makeUnicodeIndex,
+ tm.makeDateIndex, tm.makePeriodIndex ]:
+ check_getitem(index())
+
+ # exact indexing when found on IntIndex
+ s = Series(np.arange(10),dtype='int64')
+
+ result1 = s[5.0]
+ result2 = s.loc[5.0]
+ result3 = s.ix[5.0]
+ result4 = s[5]
+ result5 = s.loc[5]
+ result6 = s.ix[5]
+ self.assert_(result1 == result2)
+ self.assert_(result1 == result3)
+ self.assert_(result1 == result4)
+ self.assert_(result1 == result5)
+ self.assert_(result1 == result6)
+
+ def test_slice_indexer(self):
+
+ def check_slicing_positional(index):
+
+ s = Series(np.arange(len(index))+10,index=index)
+
+ # these are all positional
+ result1 = s[2:5]
+ result2 = s.ix[2:5]
+ result3 = s.iloc[2:5]
+ assert_series_equal(result1, result2)
+ assert_series_equal(result1, result3)
+
+ # not in the index
+ self.assertRaises(KeyError, lambda : s.loc[2:5])
+
+ # make all float slicing fail
+ self.assertRaises(TypeError, lambda : s[2.0:5])
+ self.assertRaises(TypeError, lambda : s[2.0:5.0])
+ self.assertRaises(TypeError, lambda : s[2:5.0])
+
+ self.assertRaises(TypeError, lambda : s.ix[2.0:5])
+ self.assertRaises(TypeError, lambda : s.ix[2.0:5.0])
+ self.assertRaises(TypeError, lambda : s.ix[2:5.0])
+
+ self.assertRaises(KeyError, lambda : s.loc[2.0:5])
+ self.assertRaises(KeyError, lambda : s.loc[2.0:5.0])
+ self.assertRaises(KeyError, lambda : s.loc[2:5.0])
+
+ # these work for now
+ #self.assertRaises(TypeError, lambda : s.iloc[2.0:5])
+ #self.assertRaises(TypeError, lambda : s.iloc[2.0:5.0])
+ #self.assertRaises(TypeError, lambda : s.iloc[2:5.0])
+
+ # all index types except int, float
+ for index in [ tm.makeStringIndex, tm.makeUnicodeIndex,
+ tm.makeDateIndex, tm.makePeriodIndex ]:
+ check_slicing_positional(index())
+
+ # int
+ index = tm.makeIntIndex()
+ s = Series(np.arange(len(index))+10,index)
+
+ # this is positional
+ result1 = s[2:5]
+ result4 = s.iloc[2:5]
+ assert_series_equal(result1, result4)
+
+ # these are all value based
+ result2 = s.ix[2:5]
+ result3 = s.loc[2:5]
+ result4 = s.loc[2.0:5]
+ result5 = s.loc[2.0:5.0]
+ result6 = s.loc[2:5.0]
+ assert_series_equal(result2, result3)
+ assert_series_equal(result2, result4)
+ assert_series_equal(result2, result5)
+ assert_series_equal(result2, result6)
+
+ # make all float slicing fail
+ self.assertRaises(TypeError, lambda : s[2.0:5])
+ self.assertRaises(TypeError, lambda : s[2.0:5.0])
+ self.assertRaises(TypeError, lambda : s[2:5.0])
+
+ self.assertRaises(TypeError, lambda : s.ix[2.0:5])
+ self.assertRaises(TypeError, lambda : s.ix[2.0:5.0])
+ self.assertRaises(TypeError, lambda : s.ix[2:5.0])
+
+ # these work for now
+ #self.assertRaises(TypeError, lambda : s.iloc[2.0:5])
+ #self.assertRaises(TypeError, lambda : s.iloc[2.0:5.0])
+ #self.assertRaises(TypeError, lambda : s.iloc[2:5.0])
+
+ # float
+ index = tm.makeFloatIndex()
+ s = Series(np.arange(len(index))+10,index=index)
+
+ # these are all value based
+ result1 = s[2:5]
+ result2 = s.ix[2:5]
+ result3 = s.loc[2:5]
+ assert_series_equal(result1, result2)
+ assert_series_equal(result1, result3)
+
+ # these are all valid
+ result1a = s[2.0:5]
+ result2a = s[2.0:5.0]
+ result3a = s[2:5.0]
+ assert_series_equal(result1a, result2a)
+ assert_series_equal(result1a, result3a)
+
+ result1b = s.ix[2.0:5]
+ result2b = s.ix[2.0:5.0]
+ result3b = s.ix[2:5.0]
+ assert_series_equal(result1b, result2b)
+ assert_series_equal(result1b, result3b)
+
+ result1c = s.loc[2.0:5]
+ result2c = s.loc[2.0:5.0]
+ result3c = s.loc[2:5.0]
+ assert_series_equal(result1c, result2c)
+ assert_series_equal(result1c, result3c)
+
+ assert_series_equal(result1a, result1b)
+ assert_series_equal(result1a, result1c)
+
+ # these work for now
+ #self.assertRaises(TypeError, lambda : s.iloc[2.0:5])
+ #self.assertRaises(TypeError, lambda : s.iloc[2.0:5.0])
+ #self.assertRaises(TypeError, lambda : s.iloc[2:5.0])
+
+
if __name__ == '__main__':
import nose
nose.runmodule(argv=[__file__, '-vvs', '-x', '--pdb', '--pdb-failure'],
diff --git a/pandas/tests/test_reshape.py b/pandas/tests/test_reshape.py
index 161b740178a4d..c4e75fcb41d45 100644
--- a/pandas/tests/test_reshape.py
+++ b/pandas/tests/test_reshape.py
@@ -14,6 +14,7 @@
import numpy as np
from pandas.util.testing import assert_frame_equal
+from numpy.testing import assert_array_equal
from pandas.core.reshape import melt, convert_dummies, lreshape, get_dummies
import pandas.util.testing as tm
@@ -195,11 +196,8 @@ def test_include_na(self):
assert_frame_equal(res_na, exp_na)
res_just_na = get_dummies([nan], dummy_na=True)
- exp_just_na = DataFrame({nan: {0: 1.0}})
- # hack (NaN handling in assert_index_equal)
- exp_just_na.columns = res_just_na.columns
- assert_frame_equal(res_just_na, exp_just_na)
-
+ exp_just_na = DataFrame(Series(1.0,index=[0]),columns=[nan])
+ assert_array_equal(res_just_na.values, exp_just_na.values)
class TestConvertDummies(unittest.TestCase):
def test_convert_dummies(self):
diff --git a/pandas/tests/test_series.py b/pandas/tests/test_series.py
index 6d3b052154147..98fa5c0a56ccd 100644
--- a/pandas/tests/test_series.py
+++ b/pandas/tests/test_series.py
@@ -909,12 +909,11 @@ def test_slice_can_reorder_not_uniquely_indexed(self):
result = s[::-1] # it works!
def test_slice_float_get_set(self):
- result = self.ts[4.0:10.0]
- expected = self.ts[4:10]
- assert_series_equal(result, expected)
- self.ts[4.0:10.0] = 0
- self.assert_((self.ts[4:10] == 0).all())
+ self.assertRaises(TypeError, lambda : self.ts[4.0:10.0])
+ def f():
+ self.ts[4.0:10.0] = 0
+ self.assertRaises(TypeError, f)
self.assertRaises(TypeError, self.ts.__getitem__, slice(4.5, 10.0))
self.assertRaises(TypeError, self.ts.__setitem__, slice(4.5, 10.0), 0)
@@ -932,6 +931,7 @@ def test_slice_floats2(self):
self.assert_(len(s.ix[12.5:]) == 7)
def test_slice_float64(self):
+
values = np.arange(10., 50., 2)
index = Index(values)
@@ -940,19 +940,19 @@ def test_slice_float64(self):
s = Series(np.random.randn(20), index=index)
result = s[start:end]
- expected = s.ix[5:16]
+ expected = s.iloc[5:16]
assert_series_equal(result, expected)
- result = s.ix[start:end]
+ result = s.loc[start:end]
assert_series_equal(result, expected)
df = DataFrame(np.random.randn(20, 3), index=index)
result = df[start:end]
- expected = df.ix[5:16]
+ expected = df.iloc[5:16]
tm.assert_frame_equal(result, expected)
- result = df.ix[start:end]
+ result = df.loc[start:end]
tm.assert_frame_equal(result, expected)
def test_setitem(self):
@@ -3254,6 +3254,13 @@ def test_value_counts_nunique(self):
#self.assert_(result.index.dtype == 'timedelta64[ns]')
self.assert_(result.index.dtype == 'int64')
+ # basics.rst doc example
+ series = Series(np.random.randn(500))
+ series[20:500] = np.nan
+ series[10:20] = 5000
+ result = series.nunique()
+ self.assert_(result == 11)
+
def test_unique(self):
# 714 also, dtype=float
diff --git a/pandas/util/testing.py b/pandas/util/testing.py
index a5a96d3e03cac..b25f85c961798 100644
--- a/pandas/util/testing.py
+++ b/pandas/util/testing.py
@@ -393,23 +393,36 @@ def getCols(k):
return string.ascii_uppercase[:k]
-def makeStringIndex(k):
+# make index
+def makeStringIndex(k=10):
return Index([rands(10) for _ in range(k)])
-def makeUnicodeIndex(k):
+def makeUnicodeIndex(k=10):
return Index([randu(10) for _ in range(k)])
-def makeIntIndex(k):
+def makeIntIndex(k=10):
return Index(lrange(k))
-def makeFloatIndex(k):
+def makeFloatIndex(k=10):
values = sorted(np.random.random_sample(k)) - np.random.random_sample(1)
return Index(values * (10 ** np.random.randint(0, 9)))
+def makeDateIndex(k=10):
+ dt = datetime(2000, 1, 1)
+ dr = bdate_range(dt, periods=k)
+ return DatetimeIndex(dr)
+
+
+def makePeriodIndex(k=10):
+ dt = datetime(2000, 1, 1)
+ dr = PeriodIndex(start=dt, periods=k, freq='B')
+ return dr
+
+# make series
def makeFloatSeries():
index = makeStringIndex(N)
return Series(randn(N), index=index)
@@ -431,41 +444,6 @@ def getSeriesData():
index = makeStringIndex(N)
return dict((c, Series(randn(N), index=index)) for c in getCols(K))
-
-def makeDataFrame():
- data = getSeriesData()
- return DataFrame(data)
-
-
-def getArangeMat():
- return np.arange(N * K).reshape((N, K))
-
-
-def getMixedTypeDict():
- index = Index(['a', 'b', 'c', 'd', 'e'])
-
- data = {
- 'A': [0., 1., 2., 3., 4.],
- 'B': [0., 1., 0., 1., 0.],
- 'C': ['foo1', 'foo2', 'foo3', 'foo4', 'foo5'],
- 'D': bdate_range('1/1/2009', periods=5)
- }
-
- return index, data
-
-
-def makeDateIndex(k):
- dt = datetime(2000, 1, 1)
- dr = bdate_range(dt, periods=k)
- return DatetimeIndex(dr)
-
-
-def makePeriodIndex(k):
- dt = datetime(2000, 1, 1)
- dr = PeriodIndex(start=dt, periods=k, freq='B')
- return dr
-
-
def makeTimeSeries(nper=None):
if nper is None:
nper = N
@@ -490,6 +468,28 @@ def makeTimeDataFrame(nper=None):
def getPeriodData(nper=None):
return dict((c, makePeriodSeries(nper)) for c in getCols(K))
+# make frame
+def makeDataFrame():
+ data = getSeriesData()
+ return DataFrame(data)
+
+
+def getArangeMat():
+ return np.arange(N * K).reshape((N, K))
+
+
+def getMixedTypeDict():
+ index = Index(['a', 'b', 'c', 'd', 'e'])
+
+ data = {
+ 'A': [0., 1., 2., 3., 4.],
+ 'B': [0., 1., 0., 1., 0.],
+ 'C': ['foo1', 'foo2', 'foo3', 'foo4', 'foo5'],
+ 'D': bdate_range('1/1/2009', periods=5)
+ }
+
+ return index, data
+
def makePeriodFrame(nper=None):
data = getPeriodData(nper)
| closes #236
- CLN: refactor all scalar/slicing code from core/indexing.py to core/index.py on a per-index basis
- BUG: provide better `.loc` semantics on floating indices
- API: provide an implementation of `Float64Index`
- BUG: raise on indexing with float keys on a non-float index
provide better `loc` semantics on `Float64Index`
```
In [1]: s = Series(np.arange(5), index=np.arange(5) * 2.5)
In [9]: s.index
Out[9]: Float64Index([0.0, 2.5, 5.0, 7.5, 10.0], dtype=object)
In [2]: s
Out[2]:
0.0 0
2.5 1
5.0 2
7.5 3
10.0 4
dtype: int64
In [3]: # label based slicing
In [4]: s[1.0:3.0]
Out[4]:
2.5 1
dtype: int64
In [5]: s.ix[1.0:3.0]
Out[5]:
2.5 1
dtype: int64
In [6]: s.loc[1.0:3.0]
Out[6]:
2.5 1
dtype: int64
In [7]: # exact indexing when found
In [8]: s[5.0]
Out[8]: 2
In [9]: s.loc[5.0]
Out[9]: 2
In [10]: s.ix[5.0]
Out[10]: 2
In [11]: # non-fallback, location-based indexing raises this error (__getitem__ and ix fall back here)
In [12]: s.loc[4.0]
KeyError: 'the label [4.0] is not in the [index]'
In [13]: s[4.0] == s[4]
Out[13]: True
In [14]: s[4] == s[4]
Out[14]: True
# [],ix (e.g. s.ix[2:5]) and loc are clear about slicing on the values ONLY
In [11]: s[2:5]
Out[11]:
2.5 1
5.0 2
dtype: int64
In [12]: s.ix[2:5]
Out[12]:
2.5 1
5.0 2
dtype: int64
In [2]: s.loc[2:5]
Out[2]:
2.5 1
5.0 2
dtype: int64
In [15]: s.loc[2.0:5.0]
Out[15]:
2.5 1
5.0 2
dtype: int64
In [16]: s.loc[2.0:5]
Out[16]:
2.5 1
5.0 2
dtype: int64
In [17]: s.loc[2.1:5]
Out[17]:
2.5 1
5.0 2
dtype: int64
```
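The scalar rule above, where an integer key matches an equal float label with no positional fallback, falls out of Python equality: `5 == 5.0` and `hash(5) == hash(5.0)`, so a plain dict over the index values already behaves this way. A minimal sketch (not the actual pandas hash engine):

```python
# index label -> position, standing in for the engine behind a Float64Index
mapping = {0.0: 0, 2.5: 1, 5.0: 2, 7.5: 3, 10.0: 4}

# 5 == 5.0 and hash(5) == hash(5.0), so the int key finds the float label
assert mapping[5] == mapping[5.0] == 2

# a label that is not present raises KeyError, with no positional fallback
try:
    mapping[4]
except KeyError:
    print("the label [4] is not in the [index]")
```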
Float Slicing now raises when indexing by a non-integer slicer
```
In [1]: s = Series(np.arange(5))
In [2]: s
Out[2]:
0 0
1 1
2 2
3 3
4 4
dtype: int64
In [3]: s[3.5]
KeyError:
In [4]: s.loc[3.5]
KeyError: 'the label [3.5] is not in the [index]'
In [5]: s.ix[3.5]
KeyError: 'the label [3.5] is not in the [index]'
In [6]: s.iloc[3.5]
TypeError: the positional label [3.5] is not a proper indexer for this index type (Float64Index)
```
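The label-based slice lookup behind `s.loc[2:5]`, `s.loc[2.0:5.0]` and `s.loc[2.1:5]` all returning the same rows can be sketched with stdlib `bisect` (a simplified illustration, not the pandas implementation; it assumes a monotonically increasing index and inclusive endpoints, matching the `.loc` examples above):

```python
import bisect

def label_slice(index, start, stop):
    """Positional slice for a label-based slice on a sorted float index,
    with both endpoints inclusive (as in the .loc examples above)."""
    # leftmost position whose label is >= start (None means "from the beginning")
    left = 0 if start is None else bisect.bisect_left(index, start)
    # one past the rightmost position whose label is <= stop (None means "to the end")
    right = len(index) if stop is None else bisect.bisect_right(index, stop)
    return slice(left, right)

index = [0.0, 2.5, 5.0, 7.5, 10.0]
values = [0, 1, 2, 3, 4]

# each of these selects labels 2.5 and 5.0, mirroring s.loc[2:5],
# s.loc[2.0:5.0] and s.loc[2.1:5] above
for start, stop in [(2, 5), (2.0, 5.0), (2.1, 5)]:
    sl = label_slice(index, start, stop)
    print(index[sl], values[sl])  # [2.5, 5.0] [1, 2]
```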
| https://api.github.com/repos/pandas-dev/pandas/pulls/4850 | 2013-09-16T14:48:21Z | 2013-09-25T20:08:12Z | 2013-09-25T20:08:12Z | 2014-07-05T04:36:06Z |
DOC: isin Example for .13 release notes | diff --git a/doc/source/indexing.rst b/doc/source/indexing.rst
index bc9dcdccfc2e1..dc8c42fd11989 100644
--- a/doc/source/indexing.rst
+++ b/doc/source/indexing.rst
@@ -519,21 +519,14 @@ of the DataFrame):
df[df['A'] > 0]
-Consider the ``isin`` method of Series, which returns a boolean vector that is
-true wherever the Series elements exist in the passed list. This allows you to
-select rows where one or more columns have values you want:
+List comprehensions and ``map`` method of Series can also be used to produce
+more complex criteria:
.. ipython:: python
df2 = DataFrame({'a' : ['one', 'one', 'two', 'three', 'two', 'one', 'six'],
'b' : ['x', 'y', 'y', 'x', 'y', 'x', 'x'],
'c' : randn(7)})
- df2[df2['a'].isin(['one', 'two'])]
-
-List comprehensions and ``map`` method of Series can also be used to produce
-more complex criteria:
-
-.. ipython:: python
# only want 'two' or 'three'
criterion = df2['a'].map(lambda x: x.startswith('t'))
@@ -553,6 +546,26 @@ and :ref:`Advanced Indexing <indexing.advanced>` you may select along more than
df2.loc[criterion & (df2['b'] == 'x'),'b':'c']
+.. _indexing.basics.indexing_isin:
+
+Indexing with isin
+~~~~~~~~~~~~~~~~~~
+
+Consider the ``isin`` method of Series, which returns a boolean vector that is
+true wherever the Series elements exist in the passed list. This allows you to
+select rows where one or more columns have values you want:
+
+.. ipython:: python
+
+ s = Series(np.arange(5),index=np.arange(5)[::-1],dtype='int64')
+
+ s
+
+ s.isin([2, 4])
+
+ s[s.isin([2, 4])]
+
+
DataFrame also has an ``isin`` method. When calling ``isin``, pass a set of
values as either an array or dict. If values is an array, ``isin`` returns
a DataFrame of booleans that is the same shape as the original DataFrame, with True
@@ -585,6 +598,17 @@ You can also describe columns using integer location:
df.isin(values, iloc=True)
+Combine DataFrame's ``isin`` with the ``any()`` and ``all()`` methods to
+quickly select subsets of your data that meet a given criteria.
+To select a row where each column meets its own criterion:
+
+.. ipython:: python
+
+ values = {'ids': ['a', 'b'], 'ids2': ['a', 'c'], 'vals': [1, 3]}
+
+ row_mask = df.isin(values).all(1)
+
+ df[row_mask]
The :meth:`~pandas.DataFrame.where` Method and Masking
------------------------------------------------------
diff --git a/doc/source/v0.13.0.txt b/doc/source/v0.13.0.txt
index 1f9dc8d7dad81..090be81a3ee7c 100644
--- a/doc/source/v0.13.0.txt
+++ b/doc/source/v0.13.0.txt
@@ -516,6 +516,22 @@ Experimental
For more details see the :ref:`indexing documentation on query
<indexing.query>`.
+ - DataFrame now has an ``isin`` method that can be used to easily check whether the DataFrame's values are contained in an iterable. Use a dictionary if you'd like to check specific iterables for specific columns or rows.
+
+ .. ipython:: python
+
+ df = pd.DataFrame({'A': [1, 2, 3], 'B': ['d', 'e', 'f']})
+ df.isin({'A': [1, 2], 'B': ['e', 'f']})
+
+ The ``isin`` method plays nicely with boolean indexing. To get the rows where each condition is met:
+
+ .. ipython:: python
+
+ mask = df.isin({'A': [1, 2], 'B': ['e', 'f']})
+ df[mask.all(1)]
+
+ See the :ref:`documentation<indexing.basics.indexing_isin>` for more.
+
.. _whatsnew_0130.refactoring:
Internal Refactoring
| I also fixed a few typos in a separate commit. I can remove that commit if they're going to cause conflicts.
Also, did I get the syntax correct for this link?
> See :ref:`Boolean Indexing<indexing.boolean>` for more
| https://api.github.com/repos/pandas-dev/pandas/pulls/4848 | 2013-09-16T13:39:44Z | 2013-09-30T13:57:37Z | 2013-09-30T13:57:37Z | 2017-04-05T02:05:40Z |
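The ``isin`` selection documented in the diff above boils down to a per-element membership test producing a boolean vector, followed by a row-wise ``all()`` to combine per-column criteria. A minimal stdlib sketch of that idea — the ``isin`` helper and the dict-of-columns frame are illustrative, not the pandas implementation:

```python
def isin(values, allowed):
    # boolean vector: True wherever the element is in the allowed set
    allowed = set(allowed)
    return [v in allowed for v in values]


# columns of a small frame, keyed by name (mirrors df.isin(dict_of_values))
frame = {'ids': ['a', 'b', 'f', 'n'], 'vals': [1, 3, 4, 3]}
criteria = {'ids': ['a', 'b'], 'vals': [1, 3]}

# per-column boolean vectors, then keep a row only if every column matched
masks = {col: isin(frame[col], criteria[col]) for col in frame}
row_mask = [all(masks[col][i] for col in frame) for i in range(4)]

rows = [{col: frame[col][i] for col in frame}
        for i, keep in enumerate(row_mask) if keep]
```

Here ``row_mask`` plays the role of ``df.isin(values).all(1)`` and the final comprehension the role of ``df[row_mask]``.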
BUG: fix sort_index with one col and ascending list | diff --git a/doc/source/release.rst b/doc/source/release.rst
index e4143e3f76a25..793d52223b6f5 100644
--- a/doc/source/release.rst
+++ b/doc/source/release.rst
@@ -424,6 +424,10 @@ Bug Fixes
across different versions of matplotlib (:issue:`4789`)
- Suppressed DeprecationWarning associated with internal calls issued by repr() (:issue:`4391`)
- Fixed an issue with a duplicate index and duplicate selector with ``.loc`` (:issue:`4825`)
+ - Fixed an issue with ``DataFrame.sort_index`` where, when sorting by a
+ single column and passing a list for ``ascending``, the argument for
+ ``ascending`` was being interpreted as ``True`` (:issue:`4839`,
+ :issue:`4846`)
pandas 0.12.0
-------------
diff --git a/pandas/core/frame.py b/pandas/core/frame.py
index 75f81d20926a1..78ef806a45dcb 100644
--- a/pandas/core/frame.py
+++ b/pandas/core/frame.py
@@ -2856,7 +2856,7 @@ def sort_index(self, axis=0, by=None, ascending=True, inplace=False,
Examples
--------
- >>> result = df.sort_index(by=['A', 'B'], ascending=[1, 0])
+ >>> result = df.sort_index(by=['A', 'B'], ascending=[True, False])
Returns
-------
@@ -2875,6 +2875,9 @@ def sort_index(self, axis=0, by=None, ascending=True, inplace=False,
raise ValueError('When sorting by column, axis must be 0 (rows)')
if not isinstance(by, (tuple, list)):
by = [by]
+ if com._is_sequence(ascending) and len(by) != len(ascending):
+ raise ValueError('Length of ascending (%d) != length of by'
+ ' (%d)' % (len(ascending), len(by)))
if len(by) > 1:
keys = []
@@ -2900,6 +2903,8 @@ def trans(v):
raise ValueError('Cannot sort by duplicate column %s'
% str(by))
indexer = k.argsort(kind=kind)
+ if isinstance(ascending, (tuple, list)):
+ ascending = ascending[0]
if not ascending:
indexer = indexer[::-1]
elif isinstance(labels, MultiIndex):
diff --git a/pandas/tests/test_frame.py b/pandas/tests/test_frame.py
index 752fc4c58ff3a..6f6b3bf71c759 100644
--- a/pandas/tests/test_frame.py
+++ b/pandas/tests/test_frame.py
@@ -8796,24 +8796,37 @@ def test_sort_index(self):
expected = frame.ix[frame.index[indexer]]
assert_frame_equal(sorted_df, expected)
+ sorted_df = frame.sort(columns='A', ascending=False)
+ assert_frame_equal(sorted_df, expected)
+
+ # GH4839
+ sorted_df = frame.sort(columns=['A'], ascending=[False])
+ assert_frame_equal(sorted_df, expected)
+
# check for now
sorted_df = frame.sort(columns='A')
+ assert_frame_equal(sorted_df, expected[::-1])
expected = frame.sort_index(by='A')
assert_frame_equal(sorted_df, expected)
- sorted_df = frame.sort(columns='A', ascending=False)
- expected = frame.sort_index(by='A', ascending=False)
- assert_frame_equal(sorted_df, expected)
sorted_df = frame.sort(columns=['A', 'B'], ascending=False)
expected = frame.sort_index(by=['A', 'B'], ascending=False)
assert_frame_equal(sorted_df, expected)
+ sorted_df = frame.sort(columns=['A', 'B'])
+ assert_frame_equal(sorted_df, expected[::-1])
+
self.assertRaises(ValueError, frame.sort_index, axis=2, inplace=True)
+
msg = 'When sorting by column, axis must be 0'
with assertRaisesRegexp(ValueError, msg):
frame.sort_index(by='A', axis=1)
+ msg = r'Length of ascending \(5\) != length of by \(2\)'
+ with assertRaisesRegexp(ValueError, msg):
+ frame.sort_index(by=['A', 'B'], axis=0, ascending=[True] * 5)
+
def test_sort_index_multicolumn(self):
import random
A = np.arange(5).repeat(20)
| Fixes #4839.
Now it actually checks the first element of the `ascending` list.
| https://api.github.com/repos/pandas-dev/pandas/pulls/4846 | 2013-09-15T16:52:08Z | 2013-09-17T05:40:41Z | 2013-09-17T05:40:40Z | 2014-06-29T21:06:36Z |
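The fix in the record above does two things: it validates that ``ascending``, when given as a sequence, has the same length as ``by``, and for a single sort column it uses the first element of the list instead of the truthiness of the list itself (a non-empty list is always truthy, which was the bug). A stdlib sketch of that logic on rows-as-dicts — ``sort_rows`` is a hypothetical helper, not the pandas code path:

```python
def sort_rows(rows, by, ascending=True):
    if not isinstance(by, (tuple, list)):
        by = [by]
    if isinstance(ascending, (tuple, list)):
        # mirror the length check added in the fix
        if len(ascending) != len(by):
            raise ValueError('Length of ascending (%d) != length of by (%d)'
                             % (len(ascending), len(by)))
    else:
        ascending = [ascending] * len(by)
    # stable sort: apply keys right-to-left, flipping order per key as needed
    for key, asc in reversed(list(zip(by, ascending))):
        rows = sorted(rows, key=lambda r: r[key], reverse=not asc)
    return rows


rows = [{'A': 1, 'B': 2}, {'A': 2, 'B': 1}, {'A': 1, 'B': 1}]
# a one-element ascending list is honored, not treated as a truthy True
out = sort_rows(rows, by=['A'], ascending=[False])
assert [r['A'] for r in out] == [2, 1, 1]
```

The right-to-left pass works because Python's ``sorted`` is stable even with ``reverse=True``, so each earlier key dominates later ones.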
TST: Cleanup Excel tests to make it easier to add and test additional writers | diff --git a/pandas/io/tests/test_excel.py b/pandas/io/tests/test_excel.py
index a9822ea0b46c9..00536026994c5 100644
--- a/pandas/io/tests/test_excel.py
+++ b/pandas/io/tests/test_excel.py
@@ -1,12 +1,7 @@
# pylint: disable=E1101
-from pandas.compat import StringIO, BytesIO, PY3, u, range, map
-from datetime import datetime
-from os.path import split as psplit
-import csv
+from pandas.compat import u, range, map
import os
-import sys
-import re
import unittest
import nose
@@ -14,51 +9,36 @@
from numpy import nan
import numpy as np
-from pandas import DataFrame, Series, Index, MultiIndex, DatetimeIndex
-import pandas.io.parsers as parsers
-from pandas.io.parsers import (read_csv, read_table, read_fwf,
- TextParser, TextFileReader)
+from pandas import DataFrame, Index, MultiIndex
+from pandas.io.parsers import read_csv
from pandas.io.excel import (
ExcelFile, ExcelWriter, read_excel, _XlwtWriter, _OpenpyxlWriter,
register_writer
)
-from pandas.util.testing import (assert_almost_equal,
- assert_series_equal,
- network,
- ensure_clean)
+from pandas.util.testing import ensure_clean
import pandas.util.testing as tm
import pandas as pd
-import pandas.lib as lib
-from pandas import compat
-from pandas.lib import Timestamp
-from pandas.tseries.index import date_range
-import pandas.tseries.tools as tools
-
-from numpy.testing.decorators import slow
-
-from pandas.parser import OverflowError
-
def _skip_if_no_xlrd():
try:
import xlrd
ver = tuple(map(int, xlrd.__VERSION__.split(".")[:2]))
if ver < (0, 9):
- raise nose.SkipTest('xlrd not installed, skipping')
+ raise nose.SkipTest('xlrd < 0.9, skipping')
except ImportError:
raise nose.SkipTest('xlrd not installed, skipping')
def _skip_if_no_xlwt():
try:
- import xlwt
+ import xlwt # NOQA
except ImportError:
raise nose.SkipTest('xlwt not installed, skipping')
def _skip_if_no_openpyxl():
try:
- import openpyxl
+ import openpyxl # NOQA
except ImportError:
raise nose.SkipTest('openpyxl not installed, skipping')
@@ -78,8 +58,7 @@ def _skip_if_no_excelsuite():
_mixed_frame['foo'] = 'bar'
-class ExcelTests(unittest.TestCase):
-
+class SharedItems(object):
def setUp(self):
self.dirpath = tm.get_data_path()
self.csv1 = os.path.join(self.dirpath, 'test1.csv')
@@ -91,6 +70,13 @@ def setUp(self):
self.tsframe = _tsframe.copy()
self.mixed_frame = _mixed_frame.copy()
+ def read_csv(self, *args, **kwds):
+ kwds = kwds.copy()
+ kwds['engine'] = 'python'
+ return read_csv(*args, **kwds)
+
+
+class ExcelReaderTests(SharedItems, unittest.TestCase):
def test_parse_cols_int(self):
_skip_if_no_openpyxl()
_skip_if_no_xlrd()
@@ -226,24 +212,6 @@ def test_excel_table_sheet_by_index(self):
(self.xlsx1, self.csv1)]:
self.check_excel_table_sheet_by_index(filename, csvfile)
- def check_excel_sheet_by_name_raise(self, ext):
- import xlrd
- pth = os.path.join(self.dirpath, 'testit.{0}'.format(ext))
-
- with ensure_clean(pth) as pth:
- gt = DataFrame(np.random.randn(10, 2))
- gt.to_excel(pth)
- xl = ExcelFile(pth)
- df = xl.parse(0)
- tm.assert_frame_equal(gt, df)
-
- self.assertRaises(xlrd.XLRDError, xl.parse, '0')
-
- def test_excel_sheet_by_name_raise(self):
- _skip_if_no_xlrd()
- _skip_if_no_xlwt()
- for ext in ('xls', 'xlsx'):
- self.check_excel_sheet_by_name_raise(ext)
def test_excel_table(self):
_skip_if_no_xlrd()
@@ -276,7 +244,7 @@ def test_excel_read_buffer(self):
pth = os.path.join(self.dirpath, 'test.xlsx')
f = open(pth, 'rb')
xl = ExcelFile(f)
- df = xl.parse('Sheet1', index_col=0, parse_dates=True)
+ xl.parse('Sheet1', index_col=0, parse_dates=True)
def test_xlsx_table(self):
_skip_if_no_xlrd()
@@ -298,32 +266,37 @@ def test_xlsx_table(self):
tm.assert_frame_equal(df4, df.ix[:-1])
tm.assert_frame_equal(df4, df5)
- def test_specify_kind_xls(self):
- _skip_if_no_xlrd()
- xlsx_file = os.path.join(self.dirpath, 'test.xlsx')
- xls_file = os.path.join(self.dirpath, 'test.xls')
- # succeeds with xlrd 0.8.0, weird
- # self.assertRaises(Exception, ExcelFile, xlsx_file, kind='xls')
+class ExcelWriterBase(SharedItems):
+ # test cases to run with different extensions
+ # for each writer
+ # to add a writer test, define two things:
+ # 1. a check_skip function that skips your tests if your writer isn't
+ # installed
+ # 2. add a property ext, which is the file extension that your writer writes to
+ def setUp(self):
+ self.check_skip()
+ super(ExcelWriterBase, self).setUp()
- # ExcelFile(open(xls_file, 'rb'), kind='xls')
- # self.assertRaises(Exception, ExcelFile, open(xlsx_file, 'rb'),
- # kind='xls')
+ def test_excel_sheet_by_name_raise(self):
+ _skip_if_no_xlrd()
+ import xlrd
- def read_csv(self, *args, **kwds):
- kwds = kwds.copy()
- kwds['engine'] = 'python'
- return read_csv(*args, **kwds)
+ ext = self.ext
+ pth = os.path.join(self.dirpath, 'testit.{0}'.format(ext))
- def test_excel_roundtrip_xls(self):
- _skip_if_no_excelsuite()
- self._check_extension('xls')
+ with ensure_clean(pth) as pth:
+ gt = DataFrame(np.random.randn(10, 2))
+ gt.to_excel(pth)
+ xl = ExcelFile(pth)
+ df = xl.parse(0)
+ tm.assert_frame_equal(gt, df)
- def test_excel_roundtrip_xlsx(self):
- _skip_if_no_excelsuite()
- self._check_extension('xlsx')
+ self.assertRaises(xlrd.XLRDError, xl.parse, '0')
- def _check_extension(self, ext):
+ def test_roundtrip(self):
+ _skip_if_no_xlrd()
+ ext = self.ext
path = '__tmp_to_excel_from_excel__.' + ext
with ensure_clean(path) as path:
@@ -357,19 +330,9 @@ def _check_extension(self, ext):
recons = read_excel(path, 'test1', index_col=0, na_values=[88,88.0])
tm.assert_frame_equal(self.frame, recons)
- def test_excel_roundtrip_xls_mixed(self):
+ def test_mixed(self):
_skip_if_no_xlrd()
- _skip_if_no_xlwt()
-
- self._check_extension_mixed('xls')
-
- def test_excel_roundtrip_xlsx_mixed(self):
- _skip_if_no_openpyxl()
- _skip_if_no_xlrd()
-
- self._check_extension_mixed('xlsx')
-
- def _check_extension_mixed(self, ext):
+ ext = self.ext
path = '__tmp_to_excel_from_excel_mixed__.' + ext
with ensure_clean(path) as path:
@@ -378,18 +341,10 @@ def _check_extension_mixed(self, ext):
recons = reader.parse('test1', index_col=0)
tm.assert_frame_equal(self.mixed_frame, recons)
- def test_excel_roundtrip_xls_tsframe(self):
- _skip_if_no_xlrd()
- _skip_if_no_xlwt()
-
- self._check_extension_tsframe('xls')
- def test_excel_roundtrip_xlsx_tsframe(self):
- _skip_if_no_openpyxl()
+ def test_tsframe(self):
_skip_if_no_xlrd()
- self._check_extension_tsframe('xlsx')
-
- def _check_extension_tsframe(self, ext):
+ ext = self.ext
path = '__tmp_to_excel_from_excel_tsframe__.' + ext
df = tm.makeTimeDataFrame()[:5]
@@ -400,15 +355,9 @@ def _check_extension_tsframe(self, ext):
recons = reader.parse('test1')
tm.assert_frame_equal(df, recons)
- def test_excel_roundtrip_xls_int64(self):
- _skip_if_no_excelsuite()
- self._check_extension_int64('xls')
-
- def test_excel_roundtrip_xlsx_int64(self):
- _skip_if_no_excelsuite()
- self._check_extension_int64('xlsx')
-
- def _check_extension_int64(self, ext):
+ def test_int64(self):
+ _skip_if_no_xlrd()
+ ext = self.ext
path = '__tmp_to_excel_from_excel_int64__.' + ext
with ensure_clean(path) as path:
@@ -426,15 +375,9 @@ def _check_extension_int64(self, ext):
recons = reader.parse('test1').astype(np.int64)
tm.assert_frame_equal(frame, recons, check_dtype=False)
- def test_excel_roundtrip_xls_bool(self):
- _skip_if_no_excelsuite()
- self._check_extension_bool('xls')
-
- def test_excel_roundtrip_xlsx_bool(self):
- _skip_if_no_excelsuite()
- self._check_extension_bool('xlsx')
-
- def _check_extension_bool(self, ext):
+ def test_bool(self):
+ _skip_if_no_xlrd()
+ ext = self.ext
path = '__tmp_to_excel_from_excel_bool__.' + ext
with ensure_clean(path) as path:
@@ -452,15 +395,9 @@ def _check_extension_bool(self, ext):
recons = reader.parse('test1').astype(np.bool8)
tm.assert_frame_equal(frame, recons)
- def test_excel_roundtrip_xls_sheets(self):
- _skip_if_no_excelsuite()
- self._check_extension_sheets('xls')
-
- def test_excel_roundtrip_xlsx_sheets(self):
- _skip_if_no_excelsuite()
- self._check_extension_sheets('xlsx')
-
- def _check_extension_sheets(self, ext):
+ def test_sheets(self):
+ _skip_if_no_xlrd()
+ ext = self.ext
path = '__tmp_to_excel_from_excel_sheets__.' + ext
with ensure_clean(path) as path:
@@ -485,15 +422,9 @@ def _check_extension_sheets(self, ext):
np.testing.assert_equal('test1', reader.sheet_names[0])
np.testing.assert_equal('test2', reader.sheet_names[1])
- def test_excel_roundtrip_xls_colaliases(self):
- _skip_if_no_excelsuite()
- self._check_extension_colaliases('xls')
-
- def test_excel_roundtrip_xlsx_colaliases(self):
- _skip_if_no_excelsuite()
- self._check_extension_colaliases('xlsx')
-
- def _check_extension_colaliases(self, ext):
+ def test_colaliases(self):
+ _skip_if_no_xlrd()
+ ext = self.ext
path = '__tmp_to_excel_from_excel_aliases__.' + ext
with ensure_clean(path) as path:
@@ -513,15 +444,9 @@ def _check_extension_colaliases(self, ext):
xp.columns = col_aliases
tm.assert_frame_equal(xp, rs)
- def test_excel_roundtrip_xls_indexlabels(self):
- _skip_if_no_excelsuite()
- self._check_extension_indexlabels('xls')
-
- def test_excel_roundtrip_xlsx_indexlabels(self):
- _skip_if_no_excelsuite()
- self._check_extension_indexlabels('xlsx')
-
- def _check_extension_indexlabels(self, ext):
+ def test_roundtrip_indexlabels(self):
+ _skip_if_no_xlrd()
+ ext = self.ext
path = '__tmp_to_excel_from_excel_indexlabels__.' + ext
with ensure_clean(path) as path:
@@ -557,7 +482,7 @@ def _check_extension_indexlabels(self, ext):
self.assertEqual(frame.index.names, recons.index.names)
# test index_labels in same row as column names
- path = '%s.xls' % tm.rands(10)
+ path = '%s.%s' % (tm.rands(10), ext)
with ensure_clean(path) as path:
@@ -574,9 +499,8 @@ def _check_extension_indexlabels(self, ext):
def test_excel_roundtrip_indexname(self):
_skip_if_no_xlrd()
- _skip_if_no_xlwt()
- path = '%s.xls' % tm.rands(10)
+ path = '%s.%s' % (tm.rands(10), self.ext)
df = DataFrame(np.random.randn(10, 4))
df.index.name = 'foo'
@@ -592,10 +516,9 @@ def test_excel_roundtrip_indexname(self):
def test_excel_roundtrip_datetime(self):
_skip_if_no_xlrd()
- _skip_if_no_xlwt()
# datetime.date, not sure what to test here exactly
- path = '__tmp_excel_roundtrip_datetime__.xls'
+ path = '__tmp_excel_roundtrip_datetime__.' + self.ext
tsf = self.tsframe.copy()
with ensure_clean(path) as path:
@@ -605,86 +528,22 @@ def test_excel_roundtrip_datetime(self):
recons = reader.parse('test1')
tm.assert_frame_equal(self.tsframe, recons)
- def test_ExcelWriter_dispatch(self):
- with tm.assertRaisesRegexp(ValueError, 'No engine'):
- writer = ExcelWriter('nothing')
-
- _skip_if_no_openpyxl()
- writer = ExcelWriter('apple.xlsx')
- tm.assert_isinstance(writer, _OpenpyxlWriter)
-
- _skip_if_no_xlwt()
- writer = ExcelWriter('apple.xls')
- tm.assert_isinstance(writer, _XlwtWriter)
-
-
- def test_register_writer(self):
- # some awkward mocking to test out dispatch and such actually works
- called_save = []
- called_write_cells = []
- class DummyClass(ExcelWriter):
- called_save = False
- called_write_cells = False
- supported_extensions = ['test', 'xlsx', 'xls']
- engine = 'dummy'
-
- def save(self):
- called_save.append(True)
-
- def write_cells(self, *args, **kwargs):
- called_write_cells.append(True)
-
- def check_called(func):
- func()
- self.assert_(len(called_save) >= 1)
- self.assert_(len(called_write_cells) >= 1)
- del called_save[:]
- del called_write_cells[:]
-
- register_writer(DummyClass)
- writer = ExcelWriter('something.test')
- tm.assert_isinstance(writer, DummyClass)
- df = tm.makeCustomDataframe(1, 1)
- panel = tm.makePanel()
- func = lambda: df.to_excel('something.test')
- check_called(func)
- check_called(lambda: panel.to_excel('something.test'))
- from pandas import set_option, get_option
- val = get_option('io.excel.xlsx.writer')
- set_option('io.excel.xlsx.writer', 'dummy')
- check_called(lambda: df.to_excel('something.xlsx'))
- check_called(lambda: df.to_excel('something.xls', engine='dummy'))
- set_option('io.excel.xlsx.writer', val)
-
-
-
def test_to_excel_periodindex(self):
- _skip_if_no_excelsuite()
-
- for ext in ['xls', 'xlsx']:
- path = '__tmp_to_excel_periodindex__.' + ext
- frame = self.tsframe
- xp = frame.resample('M', kind='period')
+ _skip_if_no_xlrd()
+ path = '__tmp_to_excel_periodindex__.' + self.ext
+ frame = self.tsframe
+ xp = frame.resample('M', kind='period')
- with ensure_clean(path) as path:
- xp.to_excel(path, 'sht1')
+ with ensure_clean(path) as path:
+ xp.to_excel(path, 'sht1')
- reader = ExcelFile(path)
- rs = reader.parse('sht1', index_col=0, parse_dates=True)
- tm.assert_frame_equal(xp, rs.to_period('M'))
+ reader = ExcelFile(path)
+ rs = reader.parse('sht1', index_col=0, parse_dates=True)
+ tm.assert_frame_equal(xp, rs.to_period('M'))
def test_to_excel_multiindex(self):
_skip_if_no_xlrd()
- _skip_if_no_xlwt()
-
- self._check_excel_multiindex('xls')
-
- def test_to_excel_multiindex_xlsx(self):
- _skip_if_no_xlrd()
- _skip_if_no_openpyxl()
- self._check_excel_multiindex('xlsx')
-
- def _check_excel_multiindex(self, ext):
+ ext = self.ext
path = '__tmp_to_excel_multiindex__' + ext + '__.' + ext
frame = self.frame
@@ -708,15 +567,7 @@ def _check_excel_multiindex(self, ext):
def test_to_excel_multiindex_dates(self):
_skip_if_no_xlrd()
- _skip_if_no_xlwt()
- self._check_excel_multiindex_dates('xls')
-
- def test_to_excel_multiindex_xlsx_dates(self):
- _skip_if_no_openpyxl()
- _skip_if_no_xlrd()
- self._check_excel_multiindex_dates('xlsx')
-
- def _check_excel_multiindex_dates(self, ext):
+ ext = self.ext
path = '__tmp_to_excel_multiindex_dates__' + ext + '__.' + ext
# try multiindex with dates
@@ -742,83 +593,48 @@ def _check_excel_multiindex_dates(self, ext):
self.tsframe.index = old_index # needed if setUP becomes classmethod
def test_to_excel_float_format(self):
- _skip_if_no_excelsuite()
- for ext in ['xls', 'xlsx']:
- filename = '__tmp_to_excel_float_format__.' + ext
- df = DataFrame([[0.123456, 0.234567, 0.567567],
- [12.32112, 123123.2, 321321.2]],
- index=['A', 'B'], columns=['X', 'Y', 'Z'])
-
- with ensure_clean(filename) as filename:
- df.to_excel(filename, 'test1', float_format='%.2f')
-
- reader = ExcelFile(filename)
- rs = reader.parse('test1', index_col=None)
- xp = DataFrame([[0.12, 0.23, 0.57],
- [12.32, 123123.20, 321321.20]],
- index=['A', 'B'], columns=['X', 'Y', 'Z'])
- tm.assert_frame_equal(rs, xp)
+ _skip_if_no_xlrd()
+ ext = self.ext
+ filename = '__tmp_to_excel_float_format__.' + ext
+ df = DataFrame([[0.123456, 0.234567, 0.567567],
+ [12.32112, 123123.2, 321321.2]],
+ index=['A', 'B'], columns=['X', 'Y', 'Z'])
+
+ with ensure_clean(filename) as filename:
+ df.to_excel(filename, 'test1', float_format='%.2f')
+
+ reader = ExcelFile(filename)
+ rs = reader.parse('test1', index_col=None)
+ xp = DataFrame([[0.12, 0.23, 0.57],
+ [12.32, 123123.20, 321321.20]],
+ index=['A', 'B'], columns=['X', 'Y', 'Z'])
+ tm.assert_frame_equal(rs, xp)
def test_to_excel_unicode_filename(self):
- _skip_if_no_excelsuite()
-
- for ext in ['xls', 'xlsx']:
- filename = u('\u0192u.') + ext
-
- try:
- f = open(filename, 'wb')
- except UnicodeEncodeError:
- raise nose.SkipTest('no unicode file names on this system')
- else:
- f.close()
-
- df = DataFrame([[0.123456, 0.234567, 0.567567],
- [12.32112, 123123.2, 321321.2]],
- index=['A', 'B'], columns=['X', 'Y', 'Z'])
-
- with ensure_clean(filename) as filename:
- df.to_excel(filename, 'test1', float_format='%.2f')
-
- reader = ExcelFile(filename)
- rs = reader.parse('test1', index_col=None)
- xp = DataFrame([[0.12, 0.23, 0.57],
- [12.32, 123123.20, 321321.20]],
- index=['A', 'B'], columns=['X', 'Y', 'Z'])
- tm.assert_frame_equal(rs, xp)
-
- def test_to_excel_styleconverter(self):
- _skip_if_no_xlwt()
- _skip_if_no_openpyxl()
-
- import xlwt
- import openpyxl
-
- hstyle = {"font": {"bold": True},
- "borders": {"top": "thin",
- "right": "thin",
- "bottom": "thin",
- "left": "thin"},
- "alignment": {"horizontal": "center"}}
- xls_style = _XlwtWriter._convert_to_style(hstyle)
- self.assertTrue(xls_style.font.bold)
- self.assertEquals(xlwt.Borders.THIN, xls_style.borders.top)
- self.assertEquals(xlwt.Borders.THIN, xls_style.borders.right)
- self.assertEquals(xlwt.Borders.THIN, xls_style.borders.bottom)
- self.assertEquals(xlwt.Borders.THIN, xls_style.borders.left)
- self.assertEquals(xlwt.Alignment.HORZ_CENTER, xls_style.alignment.horz)
-
- xlsx_style = _OpenpyxlWriter._convert_to_style(hstyle)
- self.assertTrue(xlsx_style.font.bold)
- self.assertEquals(openpyxl.style.Border.BORDER_THIN,
- xlsx_style.borders.top.border_style)
- self.assertEquals(openpyxl.style.Border.BORDER_THIN,
- xlsx_style.borders.right.border_style)
- self.assertEquals(openpyxl.style.Border.BORDER_THIN,
- xlsx_style.borders.bottom.border_style)
- self.assertEquals(openpyxl.style.Border.BORDER_THIN,
- xlsx_style.borders.left.border_style)
- self.assertEquals(openpyxl.style.Alignment.HORIZONTAL_CENTER,
- xlsx_style.alignment.horizontal)
+ _skip_if_no_xlrd()
+ ext = self.ext
+ filename = u('\u0192u.') + ext
+
+ try:
+ f = open(filename, 'wb')
+ except UnicodeEncodeError:
+ raise nose.SkipTest('no unicode file names on this system')
+ else:
+ f.close()
+
+ df = DataFrame([[0.123456, 0.234567, 0.567567],
+ [12.32112, 123123.2, 321321.2]],
+ index=['A', 'B'], columns=['X', 'Y', 'Z'])
+
+ with ensure_clean(filename) as filename:
+ df.to_excel(filename, 'test1', float_format='%.2f')
+
+ reader = ExcelFile(filename)
+ rs = reader.parse('test1', index_col=None)
+ xp = DataFrame([[0.12, 0.23, 0.57],
+ [12.32, 123123.20, 321321.20]],
+ index=['A', 'B'], columns=['X', 'Y', 'Z'])
+ tm.assert_frame_equal(rs, xp)
# def test_to_excel_header_styling_xls(self):
@@ -921,14 +737,13 @@ def test_to_excel_styleconverter(self):
# self.assertTrue(ws.cell(maddr).merged)
# os.remove(filename)
def test_excel_010_hemstring(self):
- _skip_if_no_excelsuite()
-
+ _skip_if_no_xlrd()
from pandas.util.testing import makeCustomDataframe as mkdf
# ensure limited functionality in 0.10
# override of #2370 until sorted out in 0.11
def roundtrip(df, header=True, parser_hdr=0):
- path = '__tmp__test_xl_010_%s__.xls' % np.random.randint(1, 10000)
+ path = '__tmp__test_xl_010_%s__.%s' % (np.random.randint(1, 10000), self.ext)
df.to_excel(path, header=header)
with ensure_clean(path) as path:
@@ -972,12 +787,120 @@ def roundtrip(df, header=True, parser_hdr=0):
self.assertEqual(res.shape, (1, 2))
self.assertTrue(res.ix[0, 0] is not np.nan)
+
+class OpenpyxlTests(ExcelWriterBase, unittest.TestCase):
+ ext = 'xlsx'
+ check_skip = staticmethod(_skip_if_no_openpyxl)
+
+ def test_to_excel_styleconverter(self):
+ _skip_if_no_openpyxl()
+
+ import openpyxl
+
+ hstyle = {"font": {"bold": True},
+ "borders": {"top": "thin",
+ "right": "thin",
+ "bottom": "thin",
+ "left": "thin"},
+ "alignment": {"horizontal": "center"}}
+
+ xlsx_style = _OpenpyxlWriter._convert_to_style(hstyle)
+ self.assertTrue(xlsx_style.font.bold)
+ self.assertEquals(openpyxl.style.Border.BORDER_THIN,
+ xlsx_style.borders.top.border_style)
+ self.assertEquals(openpyxl.style.Border.BORDER_THIN,
+ xlsx_style.borders.right.border_style)
+ self.assertEquals(openpyxl.style.Border.BORDER_THIN,
+ xlsx_style.borders.bottom.border_style)
+ self.assertEquals(openpyxl.style.Border.BORDER_THIN,
+ xlsx_style.borders.left.border_style)
+ self.assertEquals(openpyxl.style.Alignment.HORIZONTAL_CENTER,
+ xlsx_style.alignment.horizontal)
+
+
+class XlwtTests(ExcelWriterBase, unittest.TestCase):
+ ext = 'xls'
+ check_skip = staticmethod(_skip_if_no_xlwt)
+
+ def test_to_excel_styleconverter(self):
+ _skip_if_no_xlwt()
+
+ import xlwt
+
+ hstyle = {"font": {"bold": True},
+ "borders": {"top": "thin",
+ "right": "thin",
+ "bottom": "thin",
+ "left": "thin"},
+ "alignment": {"horizontal": "center"}}
+ xls_style = _XlwtWriter._convert_to_style(hstyle)
+ self.assertTrue(xls_style.font.bold)
+ self.assertEquals(xlwt.Borders.THIN, xls_style.borders.top)
+ self.assertEquals(xlwt.Borders.THIN, xls_style.borders.right)
+ self.assertEquals(xlwt.Borders.THIN, xls_style.borders.bottom)
+ self.assertEquals(xlwt.Borders.THIN, xls_style.borders.left)
+ self.assertEquals(xlwt.Alignment.HORZ_CENTER, xls_style.alignment.horz)
+
+class ExcelWriterEngineTests(unittest.TestCase):
+ def test_ExcelWriter_dispatch(self):
+ with tm.assertRaisesRegexp(ValueError, 'No engine'):
+ writer = ExcelWriter('nothing')
+
+ _skip_if_no_openpyxl()
+ writer = ExcelWriter('apple.xlsx')
+ tm.assert_isinstance(writer, _OpenpyxlWriter)
+
+ _skip_if_no_xlwt()
+ writer = ExcelWriter('apple.xls')
+ tm.assert_isinstance(writer, _XlwtWriter)
+
+
+ def test_register_writer(self):
+ # some awkward mocking to test out dispatch and such actually works
+ called_save = []
+ called_write_cells = []
+ class DummyClass(ExcelWriter):
+ called_save = False
+ called_write_cells = False
+ supported_extensions = ['test', 'xlsx', 'xls']
+ engine = 'dummy'
+
+ def save(self):
+ called_save.append(True)
+
+ def write_cells(self, *args, **kwargs):
+ called_write_cells.append(True)
+
+ def check_called(func):
+ func()
+ self.assert_(len(called_save) >= 1)
+ self.assert_(len(called_write_cells) >= 1)
+ del called_save[:]
+ del called_write_cells[:]
+
+ register_writer(DummyClass)
+ writer = ExcelWriter('something.test')
+ tm.assert_isinstance(writer, DummyClass)
+ df = tm.makeCustomDataframe(1, 1)
+ panel = tm.makePanel()
+ func = lambda: df.to_excel('something.test')
+ check_called(func)
+ check_called(lambda: panel.to_excel('something.test'))
+ from pandas import set_option, get_option
+ val = get_option('io.excel.xlsx.writer')
+ set_option('io.excel.xlsx.writer', 'dummy')
+ check_called(lambda: df.to_excel('something.xlsx'))
+ check_called(lambda: df.to_excel('something.xls', engine='dummy'))
+ set_option('io.excel.xlsx.writer', val)
+
+
+class ExcelLegacyTests(SharedItems, unittest.TestCase):
def test_deprecated_from_parsers(self):
# since 0.12 changed the import path
import warnings
- with warnings.catch_warnings() as w:
+ with warnings.catch_warnings():
warnings.filterwarnings(action='ignore', category=FutureWarning)
_skip_if_no_xlrd()
| https://api.github.com/repos/pandas-dev/pandas/pulls/4844 | 2013-09-15T12:55:37Z | 2013-09-15T22:27:34Z | 2013-09-15T22:27:34Z | 2014-07-16T08:28:24Z | |
CLN: replace rwproperty with regular property | diff --git a/doc/source/release.rst b/doc/source/release.rst
index 62ca11b6cef0b..b3fb032493a18 100644
--- a/doc/source/release.rst
+++ b/doc/source/release.rst
@@ -287,6 +287,7 @@ See :ref:`Internal Refactoring<whatsnew_0130.refactoring>`
- Refactor of Series arithmetic with time-like objects (datetime/timedelta/time
etc.) into a separate, cleaned up wrapper class. (:issue:`4613`)
- Complex compat for ``Series`` with ``ndarray``. (:issue:`4819`)
+- Removed unnecessary ``rwproperty`` from codebase in favor of builtin property. (:issue:`4843`)
Experimental Features
~~~~~~~~~~~~~~~~~~~~~
diff --git a/pandas/core/internals.py b/pandas/core/internals.py
index e22202c65c140..c265d1590af95 100644
--- a/pandas/core/internals.py
+++ b/pandas/core/internals.py
@@ -24,7 +24,6 @@
from pandas.tslib import Timestamp
from pandas import compat
from pandas.compat import range, lrange, lmap, callable, map, zip
-from pandas.util import rwproperty
class Block(PandasObject):
@@ -1450,22 +1449,22 @@ def shape(self):
def itemsize(self):
return self.dtype.itemsize
- @rwproperty.getproperty
+ @property
def fill_value(self):
return self.values.fill_value
- @rwproperty.setproperty
+ @fill_value.setter
def fill_value(self, v):
# we may need to upcast our fill to match our dtype
if issubclass(self.dtype.type, np.floating):
v = float(v)
self.values.fill_value = v
- @rwproperty.getproperty
+ @property
def sp_values(self):
return self.values.sp_values
- @rwproperty.setproperty
+ @sp_values.setter
def sp_values(self, v):
# reset the sparse values
self.values = SparseArray(
diff --git a/pandas/core/series.py b/pandas/core/series.py
index 7d3cce8c80f72..893483f0f2636 100644
--- a/pandas/core/series.py
+++ b/pandas/core/series.py
@@ -37,7 +37,6 @@
from pandas import compat
from pandas.util.terminal import get_terminal_size
from pandas.compat import zip, lzip, u, OrderedDict
-from pandas.util import rwproperty
import pandas.core.array as pa
@@ -797,19 +796,19 @@ def __contains__(self, key):
return key in self.index
# complex
- @rwproperty.getproperty
+ @property
def real(self):
return self.values.real
- @rwproperty.setproperty
+ @real.setter
def real(self, v):
self.values.real = v
- @rwproperty.getproperty
+ @property
def imag(self):
return self.values.imag
- @rwproperty.setproperty
+ @imag.setter
def imag(self, v):
self.values.imag = v
diff --git a/pandas/sparse/series.py b/pandas/sparse/series.py
index 21a054e6fe1a3..537b88db3c1f0 100644
--- a/pandas/sparse/series.py
+++ b/pandas/sparse/series.py
@@ -21,7 +21,6 @@
import pandas.index as _index
from pandas import compat
-from pandas.util import rwproperty
from pandas.sparse.array import (make_sparse, _sparse_array_op, SparseArray)
from pandas._sparse import BlockIndex, IntIndex
@@ -213,11 +212,11 @@ def get_values(self):
def block(self):
return self._data._block
- @rwproperty.getproperty
+ @property
def fill_value(self):
return self.block.fill_value
- @rwproperty.setproperty
+ @fill_value.setter
def fill_value(self, v):
self.block.fill_value = v
diff --git a/pandas/util/rwproperty.py b/pandas/util/rwproperty.py
deleted file mode 100644
index 2d0dada68cc0e..0000000000000
--- a/pandas/util/rwproperty.py
+++ /dev/null
@@ -1,75 +0,0 @@
-# Read & write properties
-#
-# Copyright (c) 2006 by Philipp "philiKON" von Weitershausen
-# philikon@philikon.de
-#
-# Freely distributable under the terms of the Zope Public License, v2.1.
-#
-# See rwproperty.txt for detailed explanations
-#
-import sys
-
-__all__ = ['getproperty', 'setproperty', 'delproperty']
-
-class rwproperty(object):
-
- def __new__(cls, func):
- name = func.__name__
-
- # ugly, but common hack
- frame = sys._getframe(1)
- locals = frame.f_locals
-
- if name not in locals:
- return cls.createProperty(func)
-
- oldprop = locals[name]
- if isinstance(oldprop, property):
- return cls.enhanceProperty(oldprop, func)
-
- raise TypeError("read & write properties cannot be mixed with "
- "other attributes except regular property objects.")
-
- # this might not be particularly elegant, but it's easy on the eyes
-
- @staticmethod
- def createProperty(func):
- raise NotImplementedError
-
- @staticmethod
- def enhanceProperty(oldprop, func):
- raise NotImplementedError
-
-class getproperty(rwproperty):
-
- @staticmethod
- def createProperty(func):
- return property(func)
-
- @staticmethod
- def enhanceProperty(oldprop, func):
- return property(func, oldprop.fset, oldprop.fdel)
-
-class setproperty(rwproperty):
-
- @staticmethod
- def createProperty(func):
- return property(None, func)
-
- @staticmethod
- def enhanceProperty(oldprop, func):
- return property(oldprop.fget, func, oldprop.fdel)
-
-class delproperty(rwproperty):
-
- @staticmethod
- def createProperty(func):
- return property(None, None, func)
-
- @staticmethod
- def enhanceProperty(oldprop, func):
- return property(oldprop.fget, oldprop.fset, func)
-
-if __name__ == "__main__":
- import doctest
- doctest.testfile('rwproperty.txt')
| Python 2.6 added the ability to use a property-decorated function as a decorator for
setters and deleters, so pandas/util/rwproperty isn't necessary anymore.
| https://api.github.com/repos/pandas-dev/pandas/pulls/4843 | 2013-09-15T04:59:45Z | 2013-09-15T05:12:52Z | 2013-09-15T05:12:52Z | 2014-06-12T09:08:04Z |
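The PR above replaces `rwproperty` with the built-in decorator form available since Python 2.6. A minimal sketch of that pattern, using a toy stand-in class rather than the actual pandas blocks:

```python
class SparseLike(object):
    """Toy stand-in (hypothetical class) for the sparse blocks edited above."""

    def __init__(self, fill_value=0.0):
        self._fill_value = fill_value

    @property
    def fill_value(self):
        # read accessor, replaces @rwproperty.getproperty
        return self._fill_value

    @fill_value.setter
    def fill_value(self, v):
        # write accessor, replaces @rwproperty.setproperty;
        # mirrors the upcast-to-float step in the diff above
        self._fill_value = float(v)

obj = SparseLike()
obj.fill_value = 3
print(obj.fill_value)  # -> 3.0
```

The getter must be defined (as `@property`) before the `@<name>.setter` form can attach a writer, which is why each diff hunk converts the get/set pair together.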
BUG: store datetime.date objects in HDFStore as ordinals rather than timetuples to avoid timezone issues (GH2852) | diff --git a/doc/source/release.rst b/doc/source/release.rst
index ba7993bfed9bd..791fbc2c516b5 100644
--- a/doc/source/release.rst
+++ b/doc/source/release.rst
@@ -114,8 +114,10 @@ Improvements to existing features
- ``Panel.to_excel()`` now accepts keyword arguments that will be passed to
its ``DataFrame``'s ``to_excel()`` methods. (:issue:`4750`)
- allow DataFrame constructor to accept more list-like objects, e.g. list of
- ``collections.Sequence`` and ``array.Array`` objects (:issue:`3783`,:issue:`42971`)
- - DataFrame constructor now accepts a numpy masked record array (:issue:`3478`)
+ ``collections.Sequence`` and ``array.Array`` objects (:issue:`3783`,:issue:`42971`),
+ thanks @lgautier
+ - DataFrame constructor now accepts a numpy masked record array (:issue:`3478`),
+ thanks @jnothman
API Changes
~~~~~~~~~~~
@@ -168,6 +170,8 @@ API Changes
with data_columns on the same axis
- ``select_as_coordinates`` will now return an ``Int64Index`` of the resultant selection set
- support ``timedelta64[ns]`` as a serialization type (:issue:`3577`)
+ - store `datetime.date` objects as ordinals rather than timetuples to avoid timezone issues (:issue:`2852`),
+ thanks @tavistmorph and @numpand
- ``JSON``
- added ``date_unit`` parameter to specify resolution of timestamps. Options
diff --git a/doc/source/v0.13.0.txt b/doc/source/v0.13.0.txt
index d4c1eba1194ac..02548c9af7dc4 100644
--- a/doc/source/v0.13.0.txt
+++ b/doc/source/v0.13.0.txt
@@ -103,6 +103,8 @@ API changes
- add the keyword ``dropna=True`` to ``append`` to change whether ALL nan rows are not written
to the store (default is ``True``, ALL nan rows are NOT written), also settable
via the option ``io.hdf.dropna_table`` (:issue:`4625`)
+ - store `datetime.date` objects as ordinals rather than timetuples to avoid timezone issues (:issue:`2852`),
+ thanks @tavistmorph and @numpand
- Changes to how ``Index`` and ``MultiIndex`` handle metadata (``levels``,
``labels``, and ``names``) (:issue:`4039`):
diff --git a/pandas/io/pytables.py b/pandas/io/pytables.py
index 9b6a230f6a551..c8224f761ce17 100644
--- a/pandas/io/pytables.py
+++ b/pandas/io/pytables.py
@@ -1740,8 +1740,12 @@ def convert(self, values, nan_rep, encoding):
elif dtype == u('timedelta64'):
self.data = np.asarray(self.data, dtype='m8[ns]')
elif dtype == u('date'):
- self.data = np.array(
- [date.fromtimestamp(v) for v in self.data], dtype=object)
+ try:
+ self.data = np.array(
+ [date.fromordinal(v) for v in self.data], dtype=object)
+ except (ValueError):
+ self.data = np.array(
+ [date.fromtimestamp(v) for v in self.data], dtype=object)
elif dtype == u('datetime'):
self.data = np.array(
[datetime.fromtimestamp(v) for v in self.data],
@@ -3769,7 +3773,7 @@ def _convert_index(index, encoding=None):
return IndexCol(converted, 'datetime', _tables().Time64Col(),
index_name=index_name)
elif inferred_type == 'date':
- converted = np.array([time.mktime(v.timetuple()) for v in values],
+ converted = np.array([v.toordinal() for v in values],
dtype=np.int32)
return IndexCol(converted, 'date', _tables().Time32Col(),
index_name=index_name)
@@ -3809,7 +3813,12 @@ def _unconvert_index(data, kind, encoding=None):
index = np.array([datetime.fromtimestamp(v) for v in data],
dtype=object)
elif kind == u('date'):
- index = np.array([date.fromtimestamp(v) for v in data], dtype=object)
+ try:
+ index = np.array(
+ [date.fromordinal(v) for v in data], dtype=object)
+ except (ValueError):
+ index = np.array(
+ [date.fromtimestamp(v) for v in data], dtype=object)
elif kind in (u('integer'), u('float')):
index = np.array(data)
elif kind in (u('string')):
@@ -4096,10 +4105,12 @@ def stringify(value):
elif kind == u('timedelta64') or kind == u('timedelta'):
v = _coerce_scalar_to_timedelta_type(v,unit='s').item()
return TermValue(int(v), v, kind)
- elif (isinstance(v, datetime) or hasattr(v, 'timetuple')
- or kind == u('date')):
+ elif (isinstance(v, datetime) or hasattr(v, 'timetuple')):
v = time.mktime(v.timetuple())
return TermValue(v, Timestamp(v), kind)
+ elif kind == u('date'):
+ v = v.toordinal()
+ return TermValue(v, Timestamp.fromordinal(v), kind)
elif kind == u('integer'):
v = int(float(v))
return TermValue(v, v, kind)
diff --git a/pandas/io/tests/test_pytables.py b/pandas/io/tests/test_pytables.py
index 3f4ce72198215..861b4dd7567a0 100644
--- a/pandas/io/tests/test_pytables.py
+++ b/pandas/io/tests/test_pytables.py
@@ -1799,6 +1799,47 @@ def compare(a,b):
result = store.select('df')
assert_frame_equal(result,df)
+ def test_store_timezone(self):
+ # GH2852
+ # issue storing datetime.date with a timezone as it resets when read back in a new timezone
+
+ import platform
+ if platform.system() == "Windows":
+ raise nose.SkipTest("timezone setting not supported on windows")
+
+ import datetime
+ import time
+ import os
+
+ orig_tz = os.environ.get('TZ')
+
+ def setTZ(tz):
+ if tz is None:
+ try:
+ del os.environ['TZ']
+ except:
+ pass
+ else:
+ os.environ['TZ']=tz
+ time.tzset()
+
+ try:
+
+ with ensure_clean(self.path) as store:
+
+ setTZ('EST5EDT')
+ today = datetime.date(2013,9,10)
+ df = DataFrame([1,2,3], index = [today, today, today])
+ store['obj1'] = df
+
+ setTZ('CST6CDT')
+ result = store['obj1']
+
+ assert_frame_equal(result, df)
+
+ finally:
+ setTZ(orig_tz)
+
def test_append_with_timedelta(self):
if _np_version_under1p7:
raise nose.SkipTest("requires numpy >= 1.7")
| closes #2852
thanks @tavistmorph and @numpand
| https://api.github.com/repos/pandas-dev/pandas/pulls/4841 | 2013-09-14T21:08:45Z | 2013-09-14T21:22:25Z | 2013-09-14T21:22:25Z | 2014-06-26T17:16:21Z |
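The fix above switches storage from timetuples to ordinals. An ordinal is a pure calendar number, so the round-trip is independent of the process timezone, whereas the old `time.mktime(v.timetuple())` / `date.fromtimestamp(v)` path interprets the stored value in *local* time and can shift the date when written and read under different `TZ` settings (which is exactly what `test_store_timezone` exercises):

```python
from datetime import date
import time

d = date(2013, 9, 10)

# new path: ordinal round-trip, a pure calendar number with no timezone in it
ordinal = d.toordinal()
assert date.fromordinal(ordinal) == d

# old path: goes through the local-time epoch, so the stored number depends
# on the writer's timezone and the recovered date on the reader's timezone
ts = time.mktime(d.timetuple())
recovered = date.fromtimestamp(ts)  # same date only if TZ is unchanged
```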
ENH: Better read_json error when handling bad keys | diff --git a/doc/source/release.rst b/doc/source/release.rst
index f7755afe8caae..e4143e3f76a25 100644
--- a/doc/source/release.rst
+++ b/doc/source/release.rst
@@ -120,6 +120,8 @@ Improvements to existing features
thanks @jnothman
- ``__getitem__`` with ``tuple`` key (e.g., ``[:, 2]``) on ``Series``
without ``MultiIndex`` raises ``ValueError`` (:issue:`4759`, :issue:`4837`)
+ - ``read_json`` now raises a (more informative) ``ValueError`` when the dict
+ contains a bad key and ``orient='split'`` (:issue:`4730`, :issue:`4838`)
API Changes
~~~~~~~~~~~
diff --git a/pandas/io/json.py b/pandas/io/json.py
index 737ec1941b353..e3c85fae045d0 100644
--- a/pandas/io/json.py
+++ b/pandas/io/json.py
@@ -5,15 +5,14 @@
import pandas.json as _json
from pandas.tslib import iNaT
-from pandas.compat import long
+from pandas.compat import long, u
from pandas import compat, isnull
from pandas import Series, DataFrame, to_datetime
from pandas.io.common import get_filepath_or_buffer
+import pandas.core.common as com
loads = _json.loads
dumps = _json.dumps
-
-
### interface to/from ###
@@ -230,6 +229,14 @@ def __init__(self, json, orient, dtype=True, convert_axes=True,
self.keep_default_dates = keep_default_dates
self.obj = None
+ def check_keys_split(self, decoded):
+ "checks that dict has only the appropriate keys for orient='split'"
+ bad_keys = set(decoded.keys()).difference(set(self._split_keys))
+ if bad_keys:
+ bad_keys = ", ".join(bad_keys)
+ raise ValueError(u("JSON data had unexpected key(s): %s") %
+ com.pprint_thing(bad_keys))
+
def parse(self):
# try numpy
@@ -375,6 +382,8 @@ def _try_convert_dates(self):
class SeriesParser(Parser):
_default_orient = 'index'
+ _split_keys = ('name', 'index', 'data')
+
def _parse_no_numpy(self):
@@ -385,6 +394,7 @@ def _parse_no_numpy(self):
for k, v in compat.iteritems(loads(
json,
precise_float=self.precise_float)))
+ self.check_keys_split(decoded)
self.obj = Series(dtype=None, **decoded)
else:
self.obj = Series(
@@ -398,6 +408,7 @@ def _parse_numpy(self):
decoded = loads(json, dtype=None, numpy=True,
precise_float=self.precise_float)
decoded = dict((str(k), v) for k, v in compat.iteritems(decoded))
+ self.check_keys_split(decoded)
self.obj = Series(**decoded)
elif orient == "columns" or orient == "index":
self.obj = Series(*loads(json, dtype=None, numpy=True,
@@ -418,6 +429,7 @@ def _try_convert_types(self):
class FrameParser(Parser):
_default_orient = 'columns'
+ _split_keys = ('columns', 'index', 'data')
def _parse_numpy(self):
@@ -434,6 +446,7 @@ def _parse_numpy(self):
decoded = loads(json, dtype=None, numpy=True,
precise_float=self.precise_float)
decoded = dict((str(k), v) for k, v in compat.iteritems(decoded))
+ self.check_keys_split(decoded)
self.obj = DataFrame(**decoded)
elif orient == "values":
self.obj = DataFrame(loads(json, dtype=None, numpy=True,
@@ -456,6 +469,7 @@ def _parse_no_numpy(self):
for k, v in compat.iteritems(loads(
json,
precise_float=self.precise_float)))
+ self.check_keys_split(decoded)
self.obj = DataFrame(dtype=None, **decoded)
elif orient == "index":
self.obj = DataFrame(
diff --git a/pandas/io/tests/test_json/test_pandas.py b/pandas/io/tests/test_json/test_pandas.py
index 108e779129672..c32fc08dab297 100644
--- a/pandas/io/tests/test_json/test_pandas.py
+++ b/pandas/io/tests/test_json/test_pandas.py
@@ -252,8 +252,8 @@ def test_frame_from_json_bad_data(self):
json = StringIO('{"badkey":["A","B"],'
'"index":["2","3"],'
'"data":[[1.0,"1"],[2.0,"2"],[null,"3"]]}')
- self.assertRaises(TypeError, read_json, json,
- orient="split")
+ with tm.assertRaisesRegexp(ValueError, r"unexpected key\(s\): badkey"):
+ read_json(json, orient="split")
def test_frame_from_json_nones(self):
df = DataFrame([[1, 2], [4, 5, 6]])
| (in json data with orient=split)
Fixes #4730.
| https://api.github.com/repos/pandas-dev/pandas/pulls/4838 | 2013-09-14T02:58:30Z | 2013-09-17T04:40:23Z | 2013-09-17T04:40:23Z | 2014-06-22T06:43:16Z |
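The `check_keys_split` helper added above is a simple set-difference validation; a standalone sketch of the same check:

```python
def check_keys(decoded, allowed=('columns', 'index', 'data')):
    """Raise ValueError naming any keys outside the allowed set,
    mirroring the check_keys_split helper added in the diff above."""
    bad_keys = set(decoded) - set(allowed)
    if bad_keys:
        raise ValueError("JSON data had unexpected key(s): %s"
                         % ", ".join(sorted(bad_keys)))

check_keys({'columns': [], 'index': [], 'data': []})  # ok, no exception
try:
    check_keys({'badkey': [], 'index': [], 'data': []})
except ValueError as e:
    print(e)  # -> JSON data had unexpected key(s): badkey
```

Naming the offending keys in the message is what upgrades the previous opaque `TypeError` (from `DataFrame(**decoded)` choking on an unexpected keyword) into an actionable error.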
ENH: Better Exception when trying to set on Series with tuple-index. | diff --git a/doc/source/release.rst b/doc/source/release.rst
index 791fbc2c516b5..62ca11b6cef0b 100644
--- a/doc/source/release.rst
+++ b/doc/source/release.rst
@@ -118,6 +118,8 @@ Improvements to existing features
thanks @lgautier
- DataFrame constructor now accepts a numpy masked record array (:issue:`3478`),
thanks @jnothman
+ - ``__getitem__`` with ``tuple`` key (e.g., ``[:, 2]``) on ``Series``
+ without ``MultiIndex`` raises ``ValueError`` (:issue:`4759`, :issue:`4837`)
API Changes
~~~~~~~~~~~
diff --git a/pandas/core/series.py b/pandas/core/series.py
index 8d6591c3acd60..7d3cce8c80f72 100644
--- a/pandas/core/series.py
+++ b/pandas/core/series.py
@@ -1050,6 +1050,8 @@ def __setitem__(self, key, value):
return
except TypeError as e:
+ if isinstance(key, tuple) and not isinstance(self.index, MultiIndex):
+ raise ValueError("Can only tuple-index with a MultiIndex")
# python 3 type errors should be raised
if 'unorderable' in str(e): # pragma: no cover
raise IndexError(key)
diff --git a/pandas/tests/test_series.py b/pandas/tests/test_series.py
index d2d0bc39fbfc9..c52fcad3d5111 100644
--- a/pandas/tests/test_series.py
+++ b/pandas/tests/test_series.py
@@ -1025,10 +1025,10 @@ def test_setslice(self):
def test_basic_getitem_setitem_corner(self):
# invalid tuples, e.g. self.ts[:, None] vs. self.ts[:, 2]
- self.assertRaises(Exception, self.ts.__getitem__,
- (slice(None, None), 2))
- self.assertRaises(Exception, self.ts.__setitem__,
- (slice(None, None), 2), 2)
+ with tm.assertRaisesRegexp(ValueError, 'tuple-index'):
+ self.ts[:, 2]
+ with tm.assertRaisesRegexp(ValueError, 'tuple-index'):
+ self.ts[:, 2] = 2
# weird lists. [slice(0, 5)] will work but not two slices
result = self.ts[[slice(None, 5)]]
| Closes #4759
| https://api.github.com/repos/pandas-dev/pandas/pulls/4837 | 2013-09-14T02:38:04Z | 2013-09-15T05:01:34Z | 2013-09-15T05:01:34Z | 2014-06-19T05:28:50Z |
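The guard added above fires because `ser[:, 2] = 2` arrives at `__setitem__` with the key `(slice(None, None), 2)`; a standalone sketch of the check, with `index_is_multi` standing in for the `isinstance(self.index, MultiIndex)` test:

```python
def check_tuple_key(key, index_is_multi):
    """Mirror of the guard added to Series.__setitem__ in the diff above."""
    if isinstance(key, tuple) and not index_is_multi:
        raise ValueError("Can only tuple-index with a MultiIndex")

# ser[:, 2] produces this key on a flat (non-MultiIndex) Series:
key = (slice(None, None), 2)
try:
    check_tuple_key(key, index_is_multi=False)
except ValueError as e:
    print(e)  # -> Can only tuple-index with a MultiIndex
```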
ENH: DataFrame constructor now accepts a numpy masked record array (GH3478) | diff --git a/doc/source/release.rst b/doc/source/release.rst
index 1d9fec688525a..ba7993bfed9bd 100644
--- a/doc/source/release.rst
+++ b/doc/source/release.rst
@@ -115,6 +115,7 @@ Improvements to existing features
its ``DataFrame``'s ``to_excel()`` methods. (:issue:`4750`)
- allow DataFrame constructor to accept more list-like objects, e.g. list of
``collections.Sequence`` and ``array.Array`` objects (:issue:`3783`,:issue:`42971`)
+ - DataFrame constructor now accepts a numpy masked record array (:issue:`3478`)
API Changes
~~~~~~~~~~~
diff --git a/doc/source/v0.13.0.txt b/doc/source/v0.13.0.txt
index f0a23b46373e9..d4c1eba1194ac 100644
--- a/doc/source/v0.13.0.txt
+++ b/doc/source/v0.13.0.txt
@@ -266,6 +266,7 @@ Enhancements
``ind``, passed to scipy.stats.gaussian_kde() (for scipy >= 0.11.0) to set
the bandwidth, and to gkde.evaluate() to specify the indices at which it
is evaluated, respectively. See scipy docs.
+ - DataFrame constructor now accepts a numpy masked record array (:issue:`3478`)
.. _whatsnew_0130.refactoring:
diff --git a/pandas/core/frame.py b/pandas/core/frame.py
index fb08c5eaa4822..f56b6bc00cf15 100644
--- a/pandas/core/frame.py
+++ b/pandas/core/frame.py
@@ -394,14 +394,22 @@ def __init__(self, data=None, index=None, columns=None, dtype=None,
elif isinstance(data, dict):
mgr = self._init_dict(data, index, columns, dtype=dtype)
elif isinstance(data, ma.MaskedArray):
- mask = ma.getmaskarray(data)
- if mask.any():
- data, fill_value = _maybe_upcast(data, copy=True)
- data[mask] = fill_value
+
+ # masked recarray
+ if isinstance(data, ma.mrecords.MaskedRecords):
+ mgr = _masked_rec_array_to_mgr(data, index, columns, dtype, copy)
+
+ # a masked array
else:
- data = data.copy()
- mgr = self._init_ndarray(data, index, columns, dtype=dtype,
- copy=copy)
+ mask = ma.getmaskarray(data)
+ if mask.any():
+ data, fill_value = _maybe_upcast(data, copy=True)
+ data[mask] = fill_value
+ else:
+ data = data.copy()
+ mgr = self._init_ndarray(data, index, columns, dtype=dtype,
+ copy=copy)
+
elif isinstance(data, (np.ndarray, Series)):
if data.dtype.names:
data_columns = list(data.dtype.names)
@@ -1009,13 +1017,7 @@ def from_records(cls, data, index=None, exclude=None, columns=None,
arr_columns.append(k)
arrays.append(v)
- # reorder according to the columns
- if len(columns) and len(arr_columns):
- indexer = _ensure_index(
- arr_columns).get_indexer(columns)
- arr_columns = _ensure_index(
- [arr_columns[i] for i in indexer])
- arrays = [arrays[i] for i in indexer]
+ arrays, arr_columns = _reorder_arrays(arrays, arr_columns, columns)
elif isinstance(data, (np.ndarray, DataFrame)):
arrays, columns = _to_arrays(data, columns)
@@ -4817,6 +4819,52 @@ def _to_arrays(data, columns, coerce_float=False, dtype=None):
dtype=dtype)
+def _masked_rec_array_to_mgr(data, index, columns, dtype, copy):
+ """ extract from a masked rec array and create the manager """
+
+ # essentially process a record array then fill it
+ fill_value = data.fill_value
+ fdata = ma.getdata(data)
+ if index is None:
+ index = _get_names_from_index(fdata)
+ if index is None:
+ index = _default_index(len(data))
+ index = _ensure_index(index)
+
+ if columns is not None:
+ columns = _ensure_index(columns)
+ arrays, arr_columns = _to_arrays(fdata, columns)
+
+ # fill if needed
+ new_arrays = []
+ for fv, arr, col in zip(fill_value, arrays, arr_columns):
+ mask = ma.getmaskarray(data[col])
+ if mask.any():
+ arr, fv = _maybe_upcast(arr, fill_value=fv, copy=True)
+ arr[mask] = fv
+ new_arrays.append(arr)
+
+ # create the manager
+ arrays, arr_columns = _reorder_arrays(new_arrays, arr_columns, columns)
+ if columns is None:
+ columns = arr_columns
+
+ mgr = _arrays_to_mgr(arrays, arr_columns, index, columns)
+
+ if copy:
+ mgr = mgr.copy()
+ return mgr
+
+def _reorder_arrays(arrays, arr_columns, columns):
+ # reorder according to the columns
+ if columns is not None and len(columns) and arr_columns is not None and len(arr_columns):
+ indexer = _ensure_index(
+ arr_columns).get_indexer(columns)
+ arr_columns = _ensure_index(
+ [arr_columns[i] for i in indexer])
+ arrays = [arrays[i] for i in indexer]
+ return arrays, arr_columns
+
def _list_to_arrays(data, columns, coerce_float=False, dtype=None):
if len(data) > 0 and isinstance(data[0], tuple):
content = list(lib.to_object_array_tuples(data).T)
diff --git a/pandas/tests/test_frame.py b/pandas/tests/test_frame.py
index 507c2055e1b68..201212d27c4b0 100644
--- a/pandas/tests/test_frame.py
+++ b/pandas/tests/test_frame.py
@@ -9,7 +9,8 @@
import csv
import unittest
import nose
-
+import functools
+import itertools
from pandas.compat import(
map, zip, range, long, lrange, lmap, lzip,
OrderedDict, cPickle as pickle, u, StringIO
@@ -21,6 +22,7 @@
import numpy as np
import numpy.ma as ma
from numpy.testing import assert_array_equal
+import numpy.ma.mrecords as mrecords
import pandas as pan
import pandas.core.nanops as nanops
@@ -2510,6 +2512,50 @@ def test_constructor_maskedarray_nonfloat(self):
self.assertEqual(True, frame['A'][1])
self.assertEqual(False, frame['C'][2])
+ def test_constructor_mrecarray(self):
+ """
+ Ensure mrecarray produces frame identical to dict of masked arrays
+ from GH3479
+
+ """
+ assert_fr_equal = functools.partial(assert_frame_equal,
+ check_index_type=True,
+ check_column_type=True,
+ check_frame_type=True)
+ arrays = [
+ ('float', np.array([1.5, 2.0])),
+ ('int', np.array([1, 2])),
+ ('str', np.array(['abc', 'def'])),
+ ]
+ for name, arr in arrays[:]:
+ arrays.append(('masked1_' + name,
+ np.ma.masked_array(arr, mask=[False, True])))
+ arrays.append(('masked_all', np.ma.masked_all((2,))))
+ arrays.append(('masked_none',
+ np.ma.masked_array([1.0, 2.5], mask=False)))
+
+ # call assert_frame_equal for all selections of 3 arrays
+ for comb in itertools.combinations(arrays, 3):
+ names, data = zip(*comb)
+ mrecs = mrecords.fromarrays(data, names=names)
+
+ # fill the comb
+ comb = dict([ (k, v.filled()) if hasattr(v,'filled') else (k, v) for k, v in comb ])
+
+ expected = DataFrame(comb,columns=names)
+ result = DataFrame(mrecs)
+ assert_fr_equal(result,expected)
+
+ # specify columns
+ expected = DataFrame(comb,columns=names[::-1])
+ result = DataFrame(mrecs, columns=names[::-1])
+ assert_fr_equal(result,expected)
+
+ # specify index
+ expected = DataFrame(comb,columns=names,index=[1,2])
+ result = DataFrame(mrecs, index=[1,2])
+ assert_fr_equal(result,expected)
+
def test_constructor_corner(self):
df = DataFrame(index=[])
self.assertEqual(df.values.shape, (0, 0))
| closes #3478
| https://api.github.com/repos/pandas-dev/pandas/pulls/4836 | 2013-09-13T21:38:46Z | 2013-09-14T20:34:53Z | 2013-09-14T20:34:53Z | 2014-07-16T08:28:13Z |
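The `_masked_rec_array_to_mgr` helper above extracts the raw data, then fills each column where the mask is set. The core mask-and-fill step can be sketched with plain numpy (here the dtype is already float, so no upcast via `_maybe_upcast` is needed):

```python
import numpy as np
import numpy.ma as ma

# one column of a masked record array: value 2.0 is masked out
arr = ma.masked_array([1.5, 2.0, 3.0], mask=[False, True, False])

# extract the mask and the underlying data, as the helper above does
mask = ma.getmaskarray(arr)
filled = np.asarray(ma.getdata(arr), dtype=float).copy()

# fill the masked positions; pandas uses the (possibly upcast) fill value,
# here we use NaN directly since the dtype is float
filled[mask] = np.nan
print(filled)  # 1.5, nan, 3.0
```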
BUG: Fixed an issue with a duplicate index and duplicate selector with loc (GH4825) | diff --git a/doc/source/release.rst b/doc/source/release.rst
index e164584674ae5..0ed1f39d72cb5 100644
--- a/doc/source/release.rst
+++ b/doc/source/release.rst
@@ -411,6 +411,7 @@ Bug Fixes
- Fixed an issue related to ticklocs/ticklabels with log scale bar plots
across different versions of matplotlib (:issue:`4789`)
- Suppressed DeprecationWarning associated with internal calls issued by repr() (:issue:`4391`)
+ - Fixed an issue with a duplicate index and duplicate selector with ``.loc`` (:issue:`4825`)
pandas 0.12.0
-------------
diff --git a/pandas/core/indexing.py b/pandas/core/indexing.py
index 72196fcdad38d..4f5e6623e1512 100644
--- a/pandas/core/indexing.py
+++ b/pandas/core/indexing.py
@@ -701,6 +701,11 @@ def _reindex(keys, level=None):
new_labels[cur_indexer] = cur_labels
new_labels[missing_indexer] = missing_labels
+ # reindex with the specified axis
+ ndim = self.obj.ndim
+ if axis+1 > ndim:
+ raise AssertionError("invalid indexing error with non-unique index")
+
# a unique indexer
if keyarr_is_unique:
new_indexer = (Index(cur_indexer) + Index(missing_indexer)).values
@@ -708,12 +713,15 @@ def _reindex(keys, level=None):
# we have a non_unique selector, need to use the original indexer here
else:
- new_indexer = indexer
- # reindex with the specified axis
- ndim = self.obj.ndim
- if axis+1 > ndim:
- raise AssertionError("invalid indexing error with non-unique index")
+ # need to retake to have the same size as the indexer
+ rindexer = indexer.values
+ rindexer[~check] = 0
+ result = self.obj.take(rindexer, axis=axis, convert=False)
+
+ # reset the new indexer to account for the new size
+ new_indexer = np.arange(len(result))
+ new_indexer[~check] = -1
result = result._reindex_with_indexers({ axis : [ new_labels, new_indexer ] }, copy=True, allow_dups=True)
diff --git a/pandas/tests/test_indexing.py b/pandas/tests/test_indexing.py
index 4b17dd5ffd9db..0c862576b09a1 100644
--- a/pandas/tests/test_indexing.py
+++ b/pandas/tests/test_indexing.py
@@ -1436,6 +1436,49 @@ def f():
p.loc[:,:,'C'] = Series([30,32],index=p_orig.items)
assert_panel_equal(p,expected)
+ def test_series_partial_set(self):
+ # partial set with new index
+ # Regression from GH4825
+ ser = Series([0.1, 0.2], index=[1, 2])
+
+ # loc
+ expected = Series([np.nan, 0.2, np.nan], index=[3, 2, 3])
+ result = ser.loc[[3, 2, 3]]
+ assert_series_equal(result, expected)
+
+ expected = Series([np.nan, np.nan, np.nan], index=[3, 3, 3])
+ result = ser.loc[[3, 3, 3]]
+ assert_series_equal(result, expected)
+
+ expected = Series([0.2, 0.2, np.nan], index=[2, 2, 3])
+ result = ser.loc[[2, 2, 3]]
+ assert_series_equal(result, expected)
+
+ expected = Series([0.3, np.nan, np.nan], index=[3, 4, 4])
+ result = Series([0.1, 0.2, 0.3], index=[1,2,3]).loc[[3,4,4]]
+ assert_series_equal(result, expected)
+
+ expected = Series([np.nan, 0.3, 0.3], index=[5, 3, 3])
+ result = Series([0.1, 0.2, 0.3, 0.4], index=[1,2,3,4]).loc[[5,3,3]]
+ assert_series_equal(result, expected)
+
+ expected = Series([np.nan, 0.4, 0.4], index=[5, 4, 4])
+ result = Series([0.1, 0.2, 0.3, 0.4], index=[1,2,3,4]).loc[[5,4,4]]
+ assert_series_equal(result, expected)
+
+ expected = Series([0.4, np.nan, np.nan], index=[7, 2, 2])
+ result = Series([0.1, 0.2, 0.3, 0.4], index=[4,5,6,7]).loc[[7,2,2]]
+ assert_series_equal(result, expected)
+
+ expected = Series([0.4, np.nan, np.nan], index=[4, 5, 5])
+ result = Series([0.1, 0.2, 0.3, 0.4], index=[1,2,3,4]).loc[[4,5,5]]
+ assert_series_equal(result, expected)
+
+ # iloc
+ expected = Series([0.2,0.2,0.1,0.1], index=[2,2,1,1])
+ result = ser.iloc[[1,1,0,0]]
+ assert_series_equal(result, expected)
+
if __name__ == '__main__':
import nose
nose.runmodule(argv=[__file__, '-vvs', '-x', '--pdb', '--pdb-failure'],
| closes #4825
```
In [1]: ser = Series([0.1, 0.2], index=[1, 2])
In [3]: ser
Out[3]:
1 0.1
2 0.2
dtype: float64
In [4]: ser.loc[[3, 2, 3]]
Out[4]:
3 NaN
2 0.2
3 NaN
dtype: float64
In [5]: ser.loc[[3, 3, 3]]
Out[5]:
3 NaN
3 NaN
3 NaN
dtype: float64
In [6]: ser.loc[[2, 2, 3]]
Out[6]:
2 0.2
2 0.2
3 NaN
dtype: float64
In [7]: ser.iloc[[1,1,0,0]]
Out[7]:
2 0.2
2 0.2
1 0.1
1 0.1
dtype: float64
```
| https://api.github.com/repos/pandas-dev/pandas/pulls/4833 | 2013-09-13T14:33:28Z | 2013-09-16T12:56:55Z | 2013-09-16T12:56:55Z | 2014-07-15T19:42:48Z |
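The non-unique reindex fix above takes with a sanitized indexer first, then blanks out the positions that were truly missing. A numpy sketch of that two-step, using `-1` as the missing marker as pandas indexers do:

```python
import numpy as np

values = np.array([0.1, 0.2])      # data at labels [1, 2]
indexer = np.array([1, -1, 1])     # positions for selector [2, 3, 2]; -1 = miss
check = indexer != -1

# take with misses zeroed out, so take() doesn't wrap -1 to the last element
rindexer = indexer.copy()
rindexer[~check] = 0
taken = values.take(rindexer).astype(float)

# then mark the genuinely missing positions, as the new_indexer/-1 step does
taken[~check] = np.nan
print(taken)  # 0.2, nan, 0.2
```

This matches the expected output in `test_series_partial_set` above, e.g. `ser.loc[[2, 2, 3]]` yielding `[0.2, 0.2, NaN]`.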
BUG/VIS: correctly test for yaxis ticklocs across different versions of MPL | diff --git a/ci/requirements-2.7_LOCALE.txt b/ci/requirements-2.7_LOCALE.txt
index 70c398816f23c..a7e9d62e3549b 100644
--- a/ci/requirements-2.7_LOCALE.txt
+++ b/ci/requirements-2.7_LOCALE.txt
@@ -8,7 +8,7 @@ cython==0.19.1
bottleneck==0.6.0
numexpr==2.1
tables==2.3.1
-matplotlib==1.2.1
+matplotlib==1.3.0
patsy==0.1.0
html5lib==1.0b2
lxml==3.2.1
diff --git a/doc/source/release.rst b/doc/source/release.rst
index 124661021f45c..1d9fec688525a 100644
--- a/doc/source/release.rst
+++ b/doc/source/release.rst
@@ -396,6 +396,8 @@ Bug Fixes
- Fixed bug with reading compressed files in as ``bytes`` rather than ``str``
in Python 3. Simplifies bytes-producing file-handling in Python 3
(:issue:`3963`, :issue:`4785`).
+ - Fixed an issue related to ticklocs/ticklabels with log scale bar plots
+ across different versions of matplotlib (:issue:`4789`)
pandas 0.12.0
-------------
diff --git a/pandas/tests/test_graphics.py b/pandas/tests/test_graphics.py
index 1583a3c0b52d9..aa989e9d785f8 100644
--- a/pandas/tests/test_graphics.py
+++ b/pandas/tests/test_graphics.py
@@ -2,6 +2,7 @@
import os
import string
import unittest
+from distutils.version import LooseVersion
from datetime import datetime, date, timedelta
@@ -34,8 +35,9 @@ def setUpClass(cls):
try:
import matplotlib as mpl
mpl.use('Agg', warn=False)
+ cls.mpl_le_1_2_1 = str(mpl.__version__) <= LooseVersion('1.2.1')
except ImportError:
- raise nose.SkipTest
+ raise nose.SkipTest("matplotlib not installed")
def setUp(self):
self.ts = tm.makeTimeSeries()
@@ -161,6 +163,16 @@ def test_bar_linewidth(self):
for r in ax.patches:
self.assert_(r.get_linewidth() == 2)
+ @slow
+ def test_bar_log(self):
+ expected = np.array([1., 10., 100., 1000.])
+
+ if not self.mpl_le_1_2_1:
+ expected = np.hstack((.1, expected, 1e4))
+
+ ax = Series([200, 500]).plot(log=True, kind='bar')
+ assert_array_equal(ax.yaxis.get_ticklocs(), expected)
+
def test_rotation(self):
df = DataFrame(randn(5, 5))
ax = df.plot(rot=30)
@@ -342,8 +354,9 @@ def setUpClass(cls):
try:
import matplotlib as mpl
mpl.use('Agg', warn=False)
+ cls.mpl_le_1_2_1 = str(mpl.__version__) <= LooseVersion('1.2.1')
except ImportError:
- raise nose.SkipTest
+ raise nose.SkipTest("matplotlib not installed")
def tearDown(self):
import matplotlib.pyplot as plt
@@ -559,22 +572,31 @@ def test_bar_center(self):
ax.patches[0].get_x() + ax.patches[0].get_width())
@slow
- def test_bar_log(self):
+ def test_bar_log_no_subplots(self):
# GH3254, GH3298 matplotlib/matplotlib#1882, #1892
# regressions in 1.2.1
+ expected = np.array([1., 10.])
+
+ if not self.mpl_le_1_2_1:
+ expected = np.hstack((.1, expected, 100))
+ # no subplots
df = DataFrame({'A': [3] * 5, 'B': lrange(1, 6)}, index=lrange(5))
ax = df.plot(kind='bar', grid=True, log=True)
- self.assertEqual(ax.yaxis.get_ticklocs()[0], 1.0)
+ assert_array_equal(ax.yaxis.get_ticklocs(), expected)
+
+ @slow
+ def test_bar_log_subplots(self):
+ expected = np.array([1., 10., 100., 1000.])
+ if not self.mpl_le_1_2_1:
+ expected = np.hstack((.1, expected, 1e4))
- p1 = Series([200, 500]).plot(log=True, kind='bar')
- p2 = DataFrame([Series([200, 300]),
+ ax = DataFrame([Series([200, 300]),
Series([300, 500])]).plot(log=True, kind='bar',
subplots=True)
- (p1.yaxis.get_ticklocs() == np.array([0.625, 1.625]))
- (p2[0].yaxis.get_ticklocs() == np.array([1., 10., 100., 1000.])).all()
- (p2[1].yaxis.get_ticklocs() == np.array([1., 10., 100., 1000.])).all()
+ assert_array_equal(ax[0].yaxis.get_ticklocs(), expected)
+ assert_array_equal(ax[1].yaxis.get_ticklocs(), expected)
@slow
def test_boxplot(self):
diff --git a/pandas/tools/plotting.py b/pandas/tools/plotting.py
index 9f6f3b08ee508..ce75e755a313f 100644
--- a/pandas/tools/plotting.py
+++ b/pandas/tools/plotting.py
@@ -4,6 +4,7 @@
import warnings
import re
from contextlib import contextmanager
+from distutils.version import LooseVersion
import numpy as np
@@ -1452,7 +1453,13 @@ def f(ax, x, y, w, start=None, log=self.log, **kwds):
def _make_plot(self):
import matplotlib as mpl
+
+ # mpl decided to make their version string unicode across all Python
+ # versions for mpl >= 1.3 so we have to call str here for python 2
+ mpl_le_1_2_1 = str(mpl.__version__) <= LooseVersion('1.2.1')
+
colors = self._get_colors()
+ ncolors = len(colors)
rects = []
labels = []
@@ -1466,19 +1473,18 @@ def _make_plot(self):
ax = self._get_ax(i)
label = com.pprint_thing(label)
kwds = self.kwds.copy()
- kwds['color'] = colors[i % len(colors)]
+ kwds['color'] = colors[i % ncolors]
- start =0
+ start = 0
if self.log:
start = 1
if any(y < 1):
# GH3254
- start = 0 if mpl.__version__ == "1.2.1" else None
+ start = 0 if mpl_le_1_2_1 else None
if self.subplots:
rect = bar_f(ax, self.ax_pos, y, self.bar_width,
- start = start,
- **kwds)
+ start=start, **kwds)
ax.set_title(label)
elif self.stacked:
mask = y > 0
@@ -1489,8 +1495,7 @@ def _make_plot(self):
neg_prior = neg_prior + np.where(mask, 0, y)
else:
rect = bar_f(ax, self.ax_pos + i * 0.75 / K, y, 0.75 / K,
- start = start,
- label=label, **kwds)
+ start=start, label=label, **kwds)
rects.append(rect)
if self.mark_right:
labels.append(self._get_marked_label(label, i))
| closes #4789
| https://api.github.com/repos/pandas-dev/pandas/pulls/4832 | 2013-09-13T13:29:21Z | 2013-09-14T20:31:47Z | 2013-09-14T20:31:47Z | 2014-07-16T08:28:07Z |
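The version gate above coerces `mpl.__version__` to `str` before comparing against a `LooseVersion`, because mpl >= 1.3 made the version string unicode. A dependency-free sketch of the same gate (hypothetical helpers, not pandas API; assumes plain `x.y.z` strings with no rc suffixes):

```python
def version_tuple(v):
    """Parse a dotted version string like '1.2.1' into a comparable tuple."""
    return tuple(int(part) for part in v.split('.')[:3])

def mpl_le_1_2_1(version):
    # mirror of the cls.mpl_le_1_2_1 flag set in the tests above
    return version_tuple(version) <= (1, 2, 1)

print(mpl_le_1_2_1('1.2.1'))  # -> True
print(mpl_le_1_2_1('1.3.0'))  # -> False
```

Tuple comparison avoids the lexicographic pitfalls of raw string comparison (e.g. `'1.10' < '1.2'` as strings), which is also why the original code routes through `LooseVersion` rather than comparing strings directly.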
BUG: Fix copy s.t. it always copies index/columns. | diff --git a/doc/source/release.rst b/doc/source/release.rst
index 04908ee8c9e03..285cea7938f91 100644
--- a/doc/source/release.rst
+++ b/doc/source/release.rst
@@ -455,6 +455,8 @@ Bug Fixes
- Tests for fillna on empty Series (:issue:`4346`), thanks @immerrr
- Fixed a bug where ``ValueError`` wasn't correctly raised when column names
weren't strings (:issue:`4956`)
+ - Fixed ``copy()`` to shallow copy axes/indices as well and thereby keep
+ separate metadata. (:issue:`4202`, :issue:`4830`)
pandas 0.12.0
-------------
diff --git a/pandas/core/groupby.py b/pandas/core/groupby.py
index e5071bb4484a6..ce07981793f7b 100644
--- a/pandas/core/groupby.py
+++ b/pandas/core/groupby.py
@@ -1988,7 +1988,7 @@ def transform(self, func, *args, **kwargs):
# broadcasting
if isinstance(res, Series):
- if res.index is obj.index:
+ if res.index.is_(obj.index):
group.T.values[:] = res
else:
group.values[:] = res
diff --git a/pandas/core/index.py b/pandas/core/index.py
index f2a22580f16b4..734a6ee15307d 100644
--- a/pandas/core/index.py
+++ b/pandas/core/index.py
@@ -16,7 +16,6 @@
import pandas.core.common as com
from pandas.core.common import _values_from_object
from pandas.core.config import get_option
-import warnings
__all__ = ['Index']
@@ -27,6 +26,7 @@ def _indexOp(opname):
Wrapper function for index comparison operations, to avoid
code duplication.
"""
+
def wrapper(self, other):
func = getattr(self.view(np.ndarray), opname)
result = func(other)
@@ -54,6 +54,7 @@ def _shouldbe_timestamp(obj):
class Index(FrozenNDArray):
+
"""
Immutable ndarray implementing an ordered, sliceable set. The basic object
storing axis labels for all pandas objects
@@ -160,7 +161,7 @@ def __new__(cls, data, dtype=None, copy=False, name=None, fastpath=False,
elif np.isscalar(data):
raise TypeError('Index(...) must be called with a collection '
- 'of some kind, %s was passed' % repr(data))
+ 'of some kind, %s was passed' % repr(data))
else:
# other iterable of some kind
subarr = com._asarray_tuplesafe(data, dtype=object)
@@ -171,7 +172,7 @@ def __new__(cls, data, dtype=None, copy=False, name=None, fastpath=False,
return Int64Index(subarr.astype('i8'), copy=copy, name=name)
elif inferred != 'string':
if (inferred.startswith('datetime') or
- tslib.is_timestamp_array(subarr)):
+ tslib.is_timestamp_array(subarr)):
from pandas.tseries.index import DatetimeIndex
return DatetimeIndex(data, copy=copy, name=name, **kwargs)
elif inferred == 'period':
@@ -234,7 +235,7 @@ def to_series(self):
useful with map for returning an indexer based on an index
"""
import pandas as pd
- return pd.Series(self.values,index=self,name=self.name)
+ return pd.Series(self.values, index=self, name=self.name)
def astype(self, dtype):
return Index(self.values.astype(dtype), name=self.name,
@@ -279,7 +280,7 @@ def _get_names(self):
def _set_names(self, values):
if len(values) != 1:
raise ValueError('Length of new names must be 1, got %d'
- % len(values))
+ % len(values))
self.name = values[0]
names = property(fset=_set_names, fget=_get_names)
@@ -335,11 +336,11 @@ def _has_complex_internals(self):
def summary(self, name=None):
if len(self) > 0:
head = self[0]
- if hasattr(head,'format') and\
+ if hasattr(head, 'format') and\
not isinstance(head, compat.string_types):
head = head.format()
tail = self[-1]
- if hasattr(tail,'format') and\
+ if hasattr(tail, 'format') and\
not isinstance(tail, compat.string_types):
tail = tail.format()
index_summary = ', %s to %s' % (com.pprint_thing(head),
@@ -571,7 +572,7 @@ def to_native_types(self, slicer=None, **kwargs):
def _format_native_types(self, na_rep='', **kwargs):
""" actually format my specific types """
mask = isnull(self)
- values = np.array(self,dtype=object,copy=True)
+ values = np.array(self, dtype=object, copy=True)
values[mask] = na_rep
return values.tolist()
@@ -595,7 +596,7 @@ def identical(self, other):
Similar to equals, but check that other comparable attributes are also equal
"""
return self.equals(other) and all(
- ( getattr(self,c,None) == getattr(other,c,None) for c in self._comparables ))
+ (getattr(self, c, None) == getattr(other, c, None) for c in self._comparables))
def asof(self, label):
"""
@@ -886,7 +887,8 @@ def set_value(self, arr, key, value):
Fast lookup of value from 1-dimensional ndarray. Only use this if you
know what you're doing
"""
- self._engine.set_value(_values_from_object(arr), _values_from_object(key), value)
+ self._engine.set_value(
+ _values_from_object(arr), _values_from_object(key), value)
def get_level_values(self, level):
"""
@@ -1357,7 +1359,7 @@ def slice_locs(self, start=None, end=None):
# get_loc will return a boolean array for non_uniques
# if we are not monotonic
- if isinstance(start_slice,np.ndarray):
+ if isinstance(start_slice, np.ndarray):
raise KeyError("cannot peform a slice operation "
"on a non-unique non-monotonic index")
@@ -1379,7 +1381,7 @@ def slice_locs(self, start=None, end=None):
if not is_unique:
# get_loc will return a boolean array for non_uniques
- if isinstance(end_slice,np.ndarray):
+ if isinstance(end_slice, np.ndarray):
raise KeyError("cannot perform a slice operation "
"on a non-unique non-monotonic index")
@@ -1447,6 +1449,7 @@ def drop(self, labels):
class Int64Index(Index):
+
"""
Immutable ndarray implementing an ordered, sliceable set. The basic object
storing axis labels for all pandas objects. Int64Index is a special case of `Index`
@@ -1579,6 +1582,7 @@ def _wrap_joined_index(self, joined, other):
class MultiIndex(Index):
+
"""
Implements multi-level, a.k.a. hierarchical, index object for pandas
objects
@@ -1625,7 +1629,6 @@ def __new__(cls, levels=None, labels=None, sortorder=None, names=None,
if names is not None:
subarr._set_names(names)
-
if sortorder is not None:
subarr.sortorder = int(sortorder)
else:
@@ -1636,7 +1639,6 @@ def __new__(cls, levels=None, labels=None, sortorder=None, names=None,
def _get_levels(self):
return self._levels
-
def _set_levels(self, levels, copy=False):
# This is NOT part of the levels property because it should be
# externally not allowed to set levels. User beware if you change
@@ -1686,7 +1688,7 @@ def _get_labels(self):
def _set_labels(self, labels, copy=False):
if len(labels) != self.nlevels:
raise ValueError("Length of levels and labels must be the same.")
- self._labels = FrozenList(_ensure_frozen(labs,copy=copy)._shallow_copy()
+ self._labels = FrozenList(_ensure_frozen(labs, copy=copy)._shallow_copy()
for labs in labels)
def set_labels(self, labels, inplace=False):
@@ -1811,13 +1813,13 @@ def _set_names(self, values):
values = list(values)
if len(values) != self.nlevels:
raise ValueError('Length of names (%d) must be same as level '
- '(%d)' % (len(values),self.nlevels))
+ '(%d)' % (len(values), self.nlevels))
# set the name
for name, level in zip(values, self.levels):
level.rename(name, inplace=True)
-
- names = property(fset=_set_names, fget=_get_names, doc="Names of levels in MultiIndex")
+ names = property(
+ fset=_set_names, fget=_get_names, doc="Names of levels in MultiIndex")
def _format_native_types(self, **kwargs):
return self.tolist()
@@ -1845,7 +1847,7 @@ def _get_level_number(self, level):
count = self.names.count(level)
if count > 1:
raise ValueError('The name %s occurs multiple times, use a '
- 'level number' % level)
+ 'level number' % level)
level = self.names.index(level)
except ValueError:
if not isinstance(level, int):
@@ -1980,9 +1982,9 @@ def format(self, space=2, sparsify=None, adjoin=True, names=False,
formatted = lev.take(lab).format(formatter=formatter)
# we have some NA
- mask = lab==-1
+ mask = lab == -1
if mask.any():
- formatted = np.array(formatted,dtype=object)
+ formatted = np.array(formatted, dtype=object)
formatted[mask] = na_rep
formatted = formatted.tolist()
@@ -2000,7 +2002,6 @@ def format(self, space=2, sparsify=None, adjoin=True, names=False,
level.append(com.pprint_thing(name, escape_chars=('\t', '\r', '\n'))
if name is not None else '')
-
level.extend(np.array(lev, dtype=object))
result_levels.append(level)
@@ -2010,8 +2011,9 @@ def format(self, space=2, sparsify=None, adjoin=True, names=False,
if sparsify:
sentinal = ''
# GH3547
- # use value of sparsify as sentinal, unless it's an obvious "Truthey" value
- if sparsify not in [True,1]:
+ # use value of sparsify as sentinal, unless it's an obvious
+ # "Truthey" value
+ if sparsify not in [True, 1]:
sentinal = sparsify
# little bit of a kludge job for #1217
result_levels = _sparsify(result_levels,
@@ -2138,7 +2140,8 @@ def __contains__(self, key):
def __reduce__(self):
"""Necessary for making this object picklable"""
object_state = list(np.ndarray.__reduce__(self))
- subclass_state = (list(self.levels), list(self.labels), self.sortorder, list(self.names))
+ subclass_state = (list(self.levels), list(
+ self.labels), self.sortorder, list(self.names))
object_state[2] = (object_state[2], subclass_state)
return tuple(object_state)
@@ -2490,7 +2493,8 @@ def reindex(self, target, method=None, level=None, limit=None,
"with a method or limit")
return self[target], target
- raise Exception("cannot handle a non-takeable non-unique multi-index!")
+ raise Exception(
+ "cannot handle a non-takeable non-unique multi-index!")
if not isinstance(target, MultiIndex):
if indexer is None:
@@ -2685,12 +2689,13 @@ def partial_selection(key):
# here we have a completely specified key, but are using some partial string matching here
# GH4758
- can_index_exactly = any([ l.is_all_dates and not isinstance(k,compat.string_types) for k, l in zip(key, self.levels) ])
- if any([ l.is_all_dates for k, l in zip(key, self.levels) ]) and not can_index_exactly:
+ can_index_exactly = any(
+ [l.is_all_dates and not isinstance(k, compat.string_types) for k, l in zip(key, self.levels)])
+ if any([l.is_all_dates for k, l in zip(key, self.levels)]) and not can_index_exactly:
indexer = slice(*self.slice_locs(key, key))
# we have a multiple selection here
- if not indexer.stop-indexer.start == 1:
+ if not indexer.stop - indexer.start == 1:
return partial_selection(key)
key = tuple(self[indexer].tolist()[0])
@@ -2913,7 +2918,8 @@ def _assert_can_do_setop(self, other):
def astype(self, dtype):
if np.dtype(dtype) != np.object_:
- raise TypeError("Setting %s dtype to anything other than object is not supported" % self.__class__)
+ raise TypeError(
+ "Setting %s dtype to anything other than object is not supported" % self.__class__)
return self._shallow_copy()
def insert(self, loc, item):
@@ -2935,7 +2941,8 @@ def insert(self, loc, item):
if not isinstance(item, tuple):
item = (item,) + ('',) * (self.nlevels - 1)
elif len(item) != self.nlevels:
- raise ValueError('Item must have length equal to number of levels.')
+ raise ValueError(
+ 'Item must have length equal to number of levels.')
new_levels = []
new_labels = []
@@ -2990,7 +2997,7 @@ def _wrap_joined_index(self, joined, other):
# For utility purposes
-def _sparsify(label_list, start=0,sentinal=''):
+def _sparsify(label_list, start=0, sentinal=''):
pivoted = lzip(*label_list)
k = len(label_list)
@@ -3031,7 +3038,7 @@ def _ensure_index(index_like, copy=False):
if isinstance(index_like, list):
if type(index_like) != list:
index_like = list(index_like)
- # #2200 ?
+ # 2200 ?
converted, all_arrays = lib.clean_index_list(index_like)
if len(converted) > 0 and all_arrays:
@@ -3169,7 +3176,8 @@ def _get_consensus_names(indexes):
# find the non-none names, need to tupleify to make
# the set hashable, then reverse on return
- consensus_names = set([ tuple(i.names) for i in indexes if all(n is not None for n in i.names) ])
+ consensus_names = set([tuple(i.names)
+ for i in indexes if all(n is not None for n in i.names)])
if len(consensus_names) == 1:
return list(list(consensus_names)[0])
return [None] * indexes[0].nlevels
diff --git a/pandas/core/internals.py b/pandas/core/internals.py
index 3ab1bfb2c58ed..8fcb64e6d0eda 100644
--- a/pandas/core/internals.py
+++ b/pandas/core/internals.py
@@ -2334,8 +2334,12 @@ def copy(self, deep=True):
-------
copy : BlockManager
"""
- new_axes = list(self.axes)
- return self.apply('copy', axes=new_axes, deep=deep, do_integrity_check=False)
+ if deep:
+ new_axes = [ax.view() for ax in self.axes]
+ else:
+ new_axes = list(self.axes)
+ return self.apply('copy', axes=new_axes, deep=deep,
+ ref_items=new_axes[0], do_integrity_check=False)
def as_matrix(self, items=None):
if len(self.blocks) == 0:
diff --git a/pandas/sparse/panel.py b/pandas/sparse/panel.py
index 261443a95b111..ae981180022c7 100644
--- a/pandas/sparse/panel.py
+++ b/pandas/sparse/panel.py
@@ -235,19 +235,25 @@ def __setstate__(self, state):
self._minor_axis = _ensure_index(com._unpickle_array(minor))
self._frames = frames
- def copy(self):
+ def copy(self, deep=True):
"""
- Make a (shallow) copy of the sparse panel
+ Make a copy of the sparse panel
Returns
-------
copy : SparsePanel
"""
- return SparsePanel(self._frames.copy(), items=self.items,
- major_axis=self.major_axis,
- minor_axis=self.minor_axis,
- default_fill_value=self.default_fill_value,
- default_kind=self.default_kind)
+
+ d = self._construct_axes_dict()
+ if deep:
+ new_data = dict((k, v.copy(deep=True)) for k, v in compat.iteritems(self._frames))
+ d = dict((k, v.copy(deep=True)) for k, v in compat.iteritems(d))
+ else:
+ new_data = self._frames.copy()
+ d['default_fill_value']=self.default_fill_value
+ d['default_kind']=self.default_kind
+
+ return SparsePanel(new_data, **d)
def to_frame(self, filter_observations=True):
"""
diff --git a/pandas/sparse/series.py b/pandas/sparse/series.py
index 5cb29d717235d..38003f0096df2 100644
--- a/pandas/sparse/series.py
+++ b/pandas/sparse/series.py
@@ -116,7 +116,7 @@ def __init__(self, data, index=None, sparse_index=None, kind='block',
if is_sparse_array:
if isinstance(data, SparseSeries) and index is None:
- index = data.index
+ index = data.index.view()
elif index is not None:
assert(len(index) == len(data))
@@ -125,14 +125,14 @@ def __init__(self, data, index=None, sparse_index=None, kind='block',
elif isinstance(data, SparseSeries):
if index is None:
- index = data.index
+ index = data.index.view()
# extract the SingleBlockManager
data = data._data
elif isinstance(data, (Series, dict)):
if index is None:
- index = data.index
+ index = data.index.view()
data = Series(data)
data, sparse_index = make_sparse(data, kind=kind,
@@ -150,7 +150,7 @@ def __init__(self, data, index=None, sparse_index=None, kind='block',
if dtype is not None:
data = data.astype(dtype)
if index is None:
- index = data.index
+ index = data.index.view()
else:
data = data.reindex(index, copy=False)
@@ -520,7 +520,7 @@ def copy(self, deep=True):
if deep:
new_data = self._data.copy()
- return self._constructor(new_data, index=self.index,
+ return self._constructor(new_data,
sparse_index=self.sp_index,
fill_value=self.fill_value, name=self.name)
diff --git a/pandas/sparse/tests/test_sparse.py b/pandas/sparse/tests/test_sparse.py
index 45543547fd64b..a74872c8f193f 100644
--- a/pandas/sparse/tests/test_sparse.py
+++ b/pandas/sparse/tests/test_sparse.py
@@ -787,7 +787,7 @@ def test_copy(self):
cp = self.frame.copy()
tm.assert_isinstance(cp, SparseDataFrame)
assert_sp_frame_equal(cp, self.frame)
- self.assert_(cp.index is self.frame.index)
+ self.assert_(cp.index.is_(self.frame.index))
def test_constructor(self):
for col, series in compat.iteritems(self.frame):
diff --git a/pandas/tests/test_frame.py b/pandas/tests/test_frame.py
index 0bc454d6ef2bc..7b753f5d6a367 100644
--- a/pandas/tests/test_frame.py
+++ b/pandas/tests/test_frame.py
@@ -1736,6 +1736,16 @@ class SafeForSparse(object):
_multiprocess_can_split_ = True
+ def test_copy_index_name_checking(self):
+ # don't want to be able to modify the index stored elsewhere after
+ # making a copy
+ for attr in ('index', 'columns'):
+ ind = getattr(self.frame, attr)
+ ind.name = None
+ cp = self.frame.copy()
+ getattr(cp, attr).name = 'foo'
+ self.assert_(getattr(self.frame, attr).name is None)
+
def test_getitem_pop_assign_name(self):
s = self.frame['A']
self.assertEqual(s.name, 'A')
@@ -6040,16 +6050,6 @@ def test_copy(self):
copy = self.mixed_frame.copy()
self.assert_(copy._data is not self.mixed_frame._data)
- # def test_copy_index_name_checking(self):
- # # don't want to be able to modify the index stored elsewhere after
- # # making a copy
-
- # self.frame.columns.name = None
- # cp = self.frame.copy()
- # cp.columns.name = 'foo'
-
- # self.assert_(self.frame.columns.name is None)
-
def _check_method(self, method='pearson', check_minp=False):
if not check_minp:
correls = self.frame.corr(method=method)
@@ -7630,8 +7630,8 @@ def test_reindex(self):
# corner cases
- # Same index, copies values
- newFrame = self.frame.reindex(self.frame.index)
+ # Same index, copies values but not index if copy=False
+ newFrame = self.frame.reindex(self.frame.index, copy=False)
self.assert_(newFrame.index is self.frame.index)
# length zero
diff --git a/pandas/tests/test_index.py b/pandas/tests/test_index.py
index 3e7ec5c3a3c12..e3c9da3630975 100644
--- a/pandas/tests/test_index.py
+++ b/pandas/tests/test_index.py
@@ -202,7 +202,8 @@ def test_is_(self):
self.assertFalse(ind.is_(ind[:]))
self.assertFalse(ind.is_(ind.view(np.ndarray).view(Index)))
self.assertFalse(ind.is_(np.array(range(10))))
- self.assertTrue(ind.is_(ind.view().base)) # quasi-implementation dependent
+ # quasi-implementation dependent
+ self.assertTrue(ind.is_(ind.view().base))
ind2 = ind.view()
ind2.name = 'bob'
self.assertTrue(ind.is_(ind2))
@@ -441,7 +442,7 @@ def test_is_all_dates(self):
def test_summary(self):
self._check_method_works(Index.summary)
# GH3869
- ind = Index(['{other}%s',"~:{range}:0"], name='A')
+ ind = Index(['{other}%s', "~:{range}:0"], name='A')
result = ind.summary()
# shouldn't be formatted accidentally.
self.assert_('~:{range}:0' in result)
@@ -1182,8 +1183,8 @@ def test_astype(self):
assert_copy(actual.labels, expected.labels)
self.check_level_names(actual, expected.names)
- assertRaisesRegexp(TypeError, "^Setting.*dtype.*object", self.index.astype, np.dtype(int))
-
+ with assertRaisesRegexp(TypeError, "^Setting.*dtype.*object"):
+ self.index.astype(np.dtype(int))
def test_constructor_single_level(self):
single_level = MultiIndex(levels=[['foo', 'bar', 'baz', 'qux']],
@@ -1230,7 +1231,6 @@ def test_copy(self):
self.assert_multiindex_copied(i_copy, self.index)
-
def test_shallow_copy(self):
i_copy = self.index._shallow_copy()
@@ -1497,10 +1497,11 @@ def test_slice_locs_with_type_mismatch(self):
df = tm.makeCustomDataframe(5, 5)
stacked = df.stack()
idx = stacked.index
- assertRaisesRegexp(TypeError, '^Level type mismatch', idx.slice_locs, timedelta(seconds=30))
+ with assertRaisesRegexp(TypeError, '^Level type mismatch'):
+ idx.slice_locs(timedelta(seconds=30))
# TODO: Try creating a UnicodeDecodeError in exception message
- assertRaisesRegexp(TypeError, '^Level type mismatch', idx.slice_locs,
- df.index[1], (16, "a"))
+ with assertRaisesRegexp(TypeError, '^Level type mismatch'):
+ idx.slice_locs(df.index[1], (16, "a"))
def test_slice_locs_not_sorted(self):
index = MultiIndex(levels=[Index(lrange(4)),
@@ -1672,7 +1673,7 @@ def test_format_sparse_config(self):
warnings.filterwarnings('ignore',
category=FutureWarning,
module=".*format")
- # #1538
+ # GH1538
pd.set_option('display.multi_sparse', False)
result = self.index.format()
@@ -1734,11 +1735,11 @@ def test_identical(self):
mi2 = self.index.copy()
self.assert_(mi.identical(mi2))
- mi = mi.set_names(['new1','new2'])
+ mi = mi.set_names(['new1', 'new2'])
self.assert_(mi.equals(mi2))
self.assert_(not mi.identical(mi2))
- mi2 = mi2.set_names(['new1','new2'])
+ mi2 = mi2.set_names(['new1', 'new2'])
self.assert_(mi.identical(mi2))
def test_is_(self):
@@ -1877,7 +1878,7 @@ def test_diff(self):
expected.names = first.names
self.assertEqual(first.names, result.names)
assertRaisesRegexp(TypeError, "other must be a MultiIndex or a list"
- " of tuples", first.diff, [1,2,3,4,5])
+ " of tuples", first.diff, [1, 2, 3, 4, 5])
def test_from_tuples(self):
assertRaisesRegexp(TypeError, 'Cannot infer number of levels from'
diff --git a/pandas/tests/test_multilevel.py b/pandas/tests/test_multilevel.py
index d3d4368d8028e..2c8394bfde285 100644
--- a/pandas/tests/test_multilevel.py
+++ b/pandas/tests/test_multilevel.py
@@ -1195,11 +1195,11 @@ def test_count(self):
result = frame.count(level='b')
expect = self.frame.count(level=1)
- assert_frame_equal(result, expect)
+ assert_frame_equal(result, expect, check_names=False)
result = frame.count(level='a')
expect = self.frame.count(level=0)
- assert_frame_equal(result, expect)
+ assert_frame_equal(result, expect, check_names=False)
series = self.series.copy()
series.index.names = ['a', 'b']
diff --git a/pandas/tests/test_panel.py b/pandas/tests/test_panel.py
index c78823779f6d0..a61212b341fa7 100644
--- a/pandas/tests/test_panel.py
+++ b/pandas/tests/test_panel.py
@@ -61,6 +61,13 @@ class SafeForLongAndSparse(object):
def test_repr(self):
foo = repr(self.panel)
+ def test_copy_names(self):
+ for attr in ('major_axis', 'minor_axis'):
+ getattr(self.panel, attr).name = None
+ cp = self.panel.copy()
+ getattr(cp, attr).name = 'foo'
+ self.assert_(getattr(self.panel, attr).name is None)
+
def test_iter(self):
tm.equalContents(list(self.panel), self.panel.items)
diff --git a/pandas/tests/test_panel4d.py b/pandas/tests/test_panel4d.py
index 4f7e75b401216..1ce909b57402f 100644
--- a/pandas/tests/test_panel4d.py
+++ b/pandas/tests/test_panel4d.py
@@ -762,11 +762,6 @@ def test_reindex(self):
major=self.panel4d.major_axis,
minor=self.panel4d.minor_axis)
- assert(result.labels is self.panel4d.labels)
- assert(result.items is self.panel4d.items)
- assert(result.major_axis is self.panel4d.major_axis)
- assert(result.minor_axis is self.panel4d.minor_axis)
-
# don't necessarily copy
result = self.panel4d.reindex()
assert_panel4d_equal(result,self.panel4d)
diff --git a/pandas/tests/test_series.py b/pandas/tests/test_series.py
index 60dd42865818c..b2c5782d56b1f 100644
--- a/pandas/tests/test_series.py
+++ b/pandas/tests/test_series.py
@@ -64,14 +64,17 @@ def test_copy_name(self):
result = self.ts.copy()
self.assertEquals(result.name, self.ts.name)
- # def test_copy_index_name_checking(self):
- # don't want to be able to modify the index stored elsewhere after
- # making a copy
+ def test_copy_index_name_checking(self):
+ # don't want to be able to modify the index stored elsewhere after
+ # making a copy
- # self.ts.index.name = None
- # cp = self.ts.copy()
- # cp.index.name = 'foo'
- # self.assert_(self.ts.index.name is None)
+ self.ts.index.name = None
+ self.assert_(self.ts.index.name is None)
+ self.assert_(self.ts is self.ts)
+ cp = self.ts.copy()
+ cp.index.name = 'foo'
+ print(self.ts.index.name)
+ self.assert_(self.ts.index.name is None)
def test_append_preserve_name(self):
result = self.ts[:5].append(self.ts[5:])
@@ -4270,7 +4273,8 @@ def test_align_sameindex(self):
def test_reindex(self):
identity = self.series.reindex(self.series.index)
- self.assertEqual(id(self.series.index), id(identity.index))
+ self.assert_(np.may_share_memory(self.series.index, identity.index))
+ self.assert_(identity.index.is_(self.series.index))
subIndex = self.series.index[10:20]
subSeries = self.series.reindex(subIndex)
| Fixes #4202 (and possibly some related issues).
`BlockManager.copy` now copies the index/columns as well when `deep=True`, so
metadata set on the copy (e.g. an index name) no longer leaks back to the original. Plus some
tests...yay!
| https://api.github.com/repos/pandas-dev/pandas/pulls/4830 | 2013-09-13T00:56:27Z | 2013-09-24T03:43:59Z | 2013-09-24T03:43:59Z | 2014-06-13T02:11:01Z |
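The fix above is exercised by the new `test_copy_index_name_checking` tests; the guaranteed behavior can be sketched with the public API against any pandas build that includes the fix (the frame contents here are illustrative, not from the PR):

```python
import pandas as pd

# before this fix, copy() reused the same Index objects, so renaming the
# copy's index mutated the original frame's index as well (GH4202)
df = pd.DataFrame({"A": [1, 2, 3]})
df.index.name = None

cp = df.copy()           # deep copy: the axes are (shallow-)copied too
cp.index.name = "foo"    # rename only the copy's index

# the original's index metadata is untouched
assert df.index.name is None
assert cp.index.name == "foo"
```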
ENH/API: allow DataFrame constructor to better accept list-like collections (GH3783,GH4297) | diff --git a/doc/source/release.rst b/doc/source/release.rst
index c80ddd01cdf07..140c3bc836fdb 100644
--- a/doc/source/release.rst
+++ b/doc/source/release.rst
@@ -113,6 +113,8 @@ Improvements to existing features
``io.excel.xls.writer``. (:issue:`4745`, :issue:`4750`)
- ``Panel.to_excel()`` now accepts keyword arguments that will be passed to
its ``DataFrame``'s ``to_excel()`` methods. (:issue:`4750`)
+ - allow DataFrame constructor to accept more list-like objects, e.g. list of
+    ``collections.Sequence`` and ``array.array`` objects (:issue:`3783`, :issue:`4297`)
API Changes
~~~~~~~~~~~
diff --git a/pandas/core/frame.py b/pandas/core/frame.py
index bd601c5c8408e..fb08c5eaa4822 100644
--- a/pandas/core/frame.py
+++ b/pandas/core/frame.py
@@ -16,6 +16,7 @@
import sys
import collections
import warnings
+import types
from numpy import nan as NA
import numpy as np
@@ -24,7 +25,7 @@
from pandas.core.common import (isnull, notnull, PandasError, _try_sort,
_default_index, _maybe_upcast, _is_sequence,
_infer_dtype_from_scalar, _values_from_object,
- _coerce_to_dtypes, _DATELIKE_DTYPES)
+ _coerce_to_dtypes, _DATELIKE_DTYPES, is_list_like)
from pandas.core.generic import NDFrame
from pandas.core.index import Index, MultiIndex, _ensure_index
from pandas.core.indexing import (_NDFrameIndexer, _maybe_droplevels,
@@ -413,12 +414,14 @@ def __init__(self, data=None, index=None, columns=None, dtype=None,
else:
mgr = self._init_ndarray(data, index, columns, dtype=dtype,
copy=copy)
- elif isinstance(data, list):
+ elif isinstance(data, (list, types.GeneratorType)):
+ if isinstance(data, types.GeneratorType):
+ data = list(data)
if len(data) > 0:
if index is None and isinstance(data[0], Series):
index = _get_names_from_index(data)
- if isinstance(data[0], (list, tuple, collections.Mapping, Series)):
+ if is_list_like(data[0]) and getattr(data[0],'ndim',0) <= 1:
arrays, columns = _to_arrays(data, columns, dtype=dtype)
columns = _ensure_index(columns)
@@ -4545,7 +4548,7 @@ def isin(self, values, iloc=False):
else:
- if not com.is_list_like(values):
+ if not is_list_like(values):
raise TypeError("only list-like or dict-like objects are"
" allowed to be passed to DataFrame.isin(), "
"you passed a "
@@ -4705,7 +4708,7 @@ def extract_index(data):
elif isinstance(v, dict):
have_dicts = True
indexes.append(list(v.keys()))
- elif isinstance(v, (list, tuple, np.ndarray)):
+ elif is_list_like(v) and getattr(v,'ndim',0) <= 1:
have_raw_arrays = True
raw_lengths.append(len(v))
diff --git a/pandas/tests/test_frame.py b/pandas/tests/test_frame.py
index c5af0b0d4d5c8..507c2055e1b68 100644
--- a/pandas/tests/test_frame.py
+++ b/pandas/tests/test_frame.py
@@ -2606,6 +2606,57 @@ def test_constructor_list_of_lists(self):
self.assert_(com.is_integer_dtype(df['num']))
self.assert_(df['str'].dtype == np.object_)
+ def test_constructor_sequence_like(self):
+ # GH 3783
+        # collections.Sequence-like
+ import collections
+
+ class DummyContainer(collections.Sequence):
+ def __init__(self, lst):
+ self._lst = lst
+ def __getitem__(self, n):
+ return self._lst.__getitem__(n)
+        def __len__(self):
+ return self._lst.__len__()
+
+ l = [DummyContainer([1, 'a']), DummyContainer([2, 'b'])]
+ columns = ["num", "str"]
+ result = DataFrame(l, columns=columns)
+ expected = DataFrame([[1,'a'],[2,'b']],columns=columns)
+ assert_frame_equal(result, expected, check_dtype=False)
+
+ # GH 4297
+ # support Array
+ import array
+ result = DataFrame.from_items([('A', array.array('i', range(10)))])
+ expected = DataFrame({ 'A' : list(range(10)) })
+ assert_frame_equal(result, expected, check_dtype=False)
+
+ expected = DataFrame([ list(range(10)), list(range(10)) ])
+ result = DataFrame([ array.array('i', range(10)), array.array('i',range(10)) ])
+ assert_frame_equal(result, expected, check_dtype=False)
+
+ def test_constructor_iterator(self):
+
+ expected = DataFrame([ list(range(10)), list(range(10)) ])
+ result = DataFrame([ range(10), range(10) ])
+ assert_frame_equal(result, expected)
+
+ def test_constructor_generator(self):
+ #related #2305
+
+ gen1 = (i for i in range(10))
+ gen2 = (i for i in range(10))
+
+ expected = DataFrame([ list(range(10)), list(range(10)) ])
+ result = DataFrame([ gen1, gen2 ])
+ assert_frame_equal(result, expected)
+
+ gen = ([ i, 'a'] for i in range(10))
+ result = DataFrame(gen)
+ expected = DataFrame({ 0 : range(10), 1 : 'a' })
+ assert_frame_equal(result, expected)
+
def test_constructor_list_of_dicts(self):
data = [OrderedDict([['a', 1.5], ['b', 3], ['c', 4], ['d', 6]]),
OrderedDict([['a', 1.5], ['b', 3], ['d', 6]]),
diff --git a/pandas/tests/test_series.py b/pandas/tests/test_series.py
index 7a993cbcf07f4..d2d0bc39fbfc9 100644
--- a/pandas/tests/test_series.py
+++ b/pandas/tests/test_series.py
@@ -353,6 +353,12 @@ def test_constructor_series(self):
assert_series_equal(s2, s1.sort_index())
+ def test_constructor_iterator(self):
+
+ expected = Series(list(range(10)))
+ result = Series(range(10))
+ assert_series_equal(result, expected)
+
def test_constructor_generator(self):
gen = (i for i in range(10))
| closes #3783
closes #4297
related to #2305 (generators are now accepted), but they are simply converted to a list rather than building the frame incrementally
| https://api.github.com/repos/pandas-dev/pandas/pulls/4829 | 2013-09-13T00:51:20Z | 2013-09-13T22:54:42Z | 2013-09-13T22:54:41Z | 2014-06-21T08:22:50Z |
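The constructor behavior added above can be illustrated as follows (a minimal sketch assuming a pandas version with this enhancement; the column names are mine):

```python
import array
import pandas as pd

# a generator is materialized to a list first, then handled like list input
gen = ([i, "a"] for i in range(3))
df1 = pd.DataFrame(gen, columns=["num", "str"])
assert df1["num"].tolist() == [0, 1, 2]
assert (df1["str"] == "a").all()

# non-ndarray list-likes such as array.array are accepted as rows
df2 = pd.DataFrame([array.array("i", range(5)),
                    array.array("i", range(5))])
assert df2.shape == (2, 5)
```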
BUG: enhanced to_datetime with format '%Y%m%d' to handle NaT/nan better | diff --git a/doc/source/release.rst b/doc/source/release.rst
index 5376e0396799e..101ec290a58cf 100644
--- a/doc/source/release.rst
+++ b/doc/source/release.rst
@@ -105,7 +105,7 @@ Improvements to existing features
test to vbench (:issue:`4705` and :issue:`4722`)
- Add ``axis`` and ``level`` keywords to ``where``, so that the ``other`` argument
can now be an alignable pandas object.
- - ``to_datetime`` with a format of 'YYYYMMDD' now parses much faster
+ - ``to_datetime`` with a format of '%Y%m%d' now parses much faster
API Changes
~~~~~~~~~~~
diff --git a/pandas/tseries/tests/test_timeseries.py b/pandas/tseries/tests/test_timeseries.py
index c9e643e25b761..d7ca9d9b371d4 100644
--- a/pandas/tseries/tests/test_timeseries.py
+++ b/pandas/tseries/tests/test_timeseries.py
@@ -845,9 +845,19 @@ def test_to_datetime_format_YYYYMMDD(self):
assert_series_equal(result, expected)
# with NaT
+ expected = Series([Timestamp("19801222"),Timestamp("19801222")] + [Timestamp("19810105")]*5)
+ expected[2] = np.nan
s[2] = np.nan
- self.assertRaises(ValueError, to_datetime, s,format='%Y%m%d')
- self.assertRaises(ValueError, to_datetime, s.apply(str),format='%Y%m%d')
+
+ result = to_datetime(s,format='%Y%m%d')
+ assert_series_equal(result, expected)
+
+ # string with NaT
+ s = s.apply(str)
+ s[2] = 'nat'
+ result = to_datetime(s,format='%Y%m%d')
+ assert_series_equal(result, expected)
+
def test_to_datetime_format_microsecond(self):
val = '01-Apr-2011 00:00:01.978'
diff --git a/pandas/tseries/tools.py b/pandas/tseries/tools.py
index cca4850c2c1bf..dd78bea385c61 100644
--- a/pandas/tseries/tools.py
+++ b/pandas/tseries/tools.py
@@ -106,8 +106,7 @@ def _convert_listlike(arg, box):
# shortcut formatting here
if format == '%Y%m%d':
try:
- carg = arg.astype(np.int64).astype(object)
- result = lib.try_parse_year_month_day(carg/10000,carg/100 % 100, carg % 100)
+ result = _attempt_YYYYMMDD(arg)
except:
raise ValueError("cannot convert the input to '%Y%m%d' date format")
@@ -144,6 +143,43 @@ def _convert_listlike(arg, box):
class DateParseError(ValueError):
pass
+def _attempt_YYYYMMDD(arg):
+ """ try to parse the YYYYMMDD/%Y%m%d format, try to deal with NaT-like,
+    arg is passed in as an object dtype, but could really be ints/strings with nan-like values, or floats (e.g. with nan) """
+
+ def calc(carg):
+ # calculate the actual result
+ carg = carg.astype(object)
+ return lib.try_parse_year_month_day(carg/10000,carg/100 % 100, carg % 100)
+
+ def calc_with_mask(carg,mask):
+ result = np.empty(carg.shape, dtype='M8[ns]')
+ iresult = result.view('i8')
+ iresult[-mask] = tslib.iNaT
+ result[mask] = calc(carg[mask].astype(np.float64).astype(np.int64)).astype('M8[ns]')
+ return result
+
+ # try intlike / strings that are ints
+ try:
+ return calc(arg.astype(np.int64))
+ except:
+ pass
+
+ # a float with actual np.nan
+ try:
+ carg = arg.astype(np.float64)
+ return calc_with_mask(carg,com.notnull(carg))
+ except:
+ pass
+
+ # string with NaN-like
+ try:
+ mask = ~lib.ismember(arg, tslib._nat_strings)
+ return calc_with_mask(arg,mask)
+ except:
+ pass
+
+ return None
# patterns for quarters like '4Q2005', '05Q1'
qpat1full = re.compile(r'(\d)Q(\d\d\d\d)')
diff --git a/vb_suite/timeseries.py b/vb_suite/timeseries.py
index 999c3869daf62..353d7afc63cb3 100644
--- a/vb_suite/timeseries.py
+++ b/vb_suite/timeseries.py
@@ -154,16 +154,7 @@ def date_range(start=None, end=None, periods=None, freq=None):
timeseries_to_datetime_YYYYMMDD = \
Benchmark('to_datetime(strings,format="%Y%m%d")', setup,
- start_date=datetime(2013, 9, 1))
-
-setup = common_setup + """
-rng = date_range('1/1/2000', periods=10000, freq='D')
-strings = Series(rng.year*10000+rng.month*100+rng.day,dtype=np.int64).apply(str)
-"""
-
-timeseries_to_datetime_YYYYMMDD_old = \
- Benchmark('pandas.tslib.array_strptime(strings.values,"%Y%m%d")', setup,
- start_date=datetime(2013, 9, 1))
+ start_date=datetime(2012, 7, 1))
# ---- infer_freq
# infer_freq
| https://api.github.com/repos/pandas-dev/pandas/pulls/4828 | 2013-09-12T22:55:15Z | 2013-09-12T23:46:46Z | 2013-09-12T23:46:46Z | 2014-07-16T08:27:57Z | |
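The NaT handling added above via `_attempt_YYYYMMDD` shows up in the public API like this (a minimal sketch assuming a pandas build that includes the '%Y%m%d' fast path; the dates are illustrative):

```python
import numpy as np
import pandas as pd

# a float Series with a missing value: the int fast path fails, so the
# float branch masks out the NaN and fills NaT instead of raising
s = pd.Series([19801222.0, np.nan, 19810105.0])
result = pd.to_datetime(s, format="%Y%m%d")

assert result[0] == pd.Timestamp("1980-12-22")
assert pd.isna(result[1])     # NaN -> NaT rather than a ValueError
assert result[2] == pd.Timestamp("1981-01-05")
```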
PERF: much faster to_datetime performance with a format of '%Y%m%d' | diff --git a/doc/source/release.rst b/doc/source/release.rst
index 75194f6877a6e..5376e0396799e 100644
--- a/doc/source/release.rst
+++ b/doc/source/release.rst
@@ -105,6 +105,7 @@ Improvements to existing features
test to vbench (:issue:`4705` and :issue:`4722`)
- Add ``axis`` and ``level`` keywords to ``where``, so that the ``other`` argument
can now be an alignable pandas object.
+ - ``to_datetime`` with a format of 'YYYYMMDD' now parses much faster
API Changes
~~~~~~~~~~~
diff --git a/pandas/tseries/tests/test_timeseries.py b/pandas/tseries/tests/test_timeseries.py
index b5697a98de412..c9e643e25b761 100644
--- a/pandas/tseries/tests/test_timeseries.py
+++ b/pandas/tseries/tests/test_timeseries.py
@@ -834,6 +834,21 @@ def test_to_datetime_format(self):
else:
self.assert_(result.equals(expected))
+ def test_to_datetime_format_YYYYMMDD(self):
+ s = Series([19801222,19801222] + [19810105]*5)
+ expected = Series([ Timestamp(x) for x in s.apply(str) ])
+
+ result = to_datetime(s,format='%Y%m%d')
+ assert_series_equal(result, expected)
+
+ result = to_datetime(s.apply(str),format='%Y%m%d')
+ assert_series_equal(result, expected)
+
+ # with NaT
+ s[2] = np.nan
+ self.assertRaises(ValueError, to_datetime, s,format='%Y%m%d')
+ self.assertRaises(ValueError, to_datetime, s.apply(str),format='%Y%m%d')
+
def test_to_datetime_format_microsecond(self):
val = '01-Apr-2011 00:00:01.978'
format = '%d-%b-%Y %H:%M:%S.%f'
diff --git a/pandas/tseries/tools.py b/pandas/tseries/tools.py
index 3087d54396691..cca4850c2c1bf 100644
--- a/pandas/tseries/tools.py
+++ b/pandas/tseries/tools.py
@@ -101,7 +101,19 @@ def _convert_listlike(arg, box):
arg = com._ensure_object(arg)
try:
if format is not None:
- result = tslib.array_strptime(arg, format)
+ result = None
+
+ # shortcut formatting here
+ if format == '%Y%m%d':
+ try:
+ carg = arg.astype(np.int64).astype(object)
+ result = lib.try_parse_year_month_day(carg/10000,carg/100 % 100, carg % 100)
+ except:
+ raise ValueError("cannot convert the input to '%Y%m%d' date format")
+
+ # fallback
+ if result is None:
+ result = tslib.array_strptime(arg, format)
else:
result = tslib.array_to_datetime(arg, raise_=errors == 'raise',
utc=utc, dayfirst=dayfirst,
diff --git a/vb_suite/timeseries.py b/vb_suite/timeseries.py
index 4dd1dd2e96bdd..999c3869daf62 100644
--- a/vb_suite/timeseries.py
+++ b/vb_suite/timeseries.py
@@ -147,6 +147,24 @@ def date_range(start=None, end=None, periods=None, freq=None):
Benchmark('to_datetime(strings)', setup,
start_date=datetime(2012, 7, 11))
+setup = common_setup + """
+rng = date_range('1/1/2000', periods=10000, freq='D')
+strings = Series(rng.year*10000+rng.month*100+rng.day,dtype=np.int64).apply(str)
+"""
+
+timeseries_to_datetime_YYYYMMDD = \
+ Benchmark('to_datetime(strings,format="%Y%m%d")', setup,
+ start_date=datetime(2013, 9, 1))
+
+setup = common_setup + """
+rng = date_range('1/1/2000', periods=10000, freq='D')
+strings = Series(rng.year*10000+rng.month*100+rng.day,dtype=np.int64).apply(str)
+"""
+
+timeseries_to_datetime_YYYYMMDD_old = \
+ Benchmark('pandas.tslib.array_strptime(strings.values,"%Y%m%d")', setup,
+ start_date=datetime(2013, 9, 1))
+
# ---- infer_freq
# infer_freq
| ```
In [1]: rng = date_range('1/1/2000', periods=10000, freq='D')
In [2]: strings = Series(rng.year*10000+rng.month*100+rng.day,dtype=np.int64).apply(str)
In [3]: %timeit pandas.tslib.array_strptime(strings.values,"%Y%m%d")
10 loops, best of 3: 42.9 ms per loop
In [4]: %timeit pd.to_datetime(strings,format="%Y%m%d")
100 loops, best of 3: 9.21 ms per loop
In [5]: (pd.to_datetime(strings,format="%Y%m%d") == pandas.tslib.array_strptime(strings.values,"%Y%m%d")).all()
Out[5]: True
```
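The speedup comes from decomposing the YYYYMMDD integers arithmetically instead of calling `strptime` once per element. A minimal sketch of that arithmetic in plain NumPy (the helper name `yyyymmdd_to_dates` is hypothetical, not part of the pandas API):

```python
import numpy as np
from datetime import date

def yyyymmdd_to_dates(values):
    # Split each YYYYMMDD integer into year/month/day with integer
    # arithmetic -- no per-element string parsing needed.
    carg = np.asarray(values, dtype=np.int64)
    years = carg // 10000
    months = (carg // 100) % 100
    days = carg % 100
    return [date(int(y), int(m), int(d)) for y, m, d in zip(years, months, days)]

print(yyyymmdd_to_dates([19801222, 19810105]))
```

Vectorizing the decomposition is what makes the fast path roughly 4-5x quicker than `array_strptime` in the timings above.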
| https://api.github.com/repos/pandas-dev/pandas/pulls/4826 | 2013-09-12T17:07:14Z | 2013-09-12T18:36:28Z | 2013-09-12T18:36:28Z | 2014-06-26T11:55:44Z |
API: add is_beg_month/quarter/year, is_end_month/quarter/year accessors (#4565) | diff --git a/doc/source/api.rst b/doc/source/api.rst
index 7918d6930341a..aa5c58652d550 100644
--- a/doc/source/api.rst
+++ b/doc/source/api.rst
@@ -1143,6 +1143,12 @@ Time/Date Components
DatetimeIndex.tz
DatetimeIndex.freq
DatetimeIndex.freqstr
+ DatetimeIndex.is_month_start
+ DatetimeIndex.is_month_end
+ DatetimeIndex.is_quarter_start
+ DatetimeIndex.is_quarter_end
+ DatetimeIndex.is_year_start
+ DatetimeIndex.is_year_end
Selecting
~~~~~~~~~
diff --git a/doc/source/release.rst b/doc/source/release.rst
index 145100c110194..f54cc13a7d775 100644
--- a/doc/source/release.rst
+++ b/doc/source/release.rst
@@ -470,6 +470,10 @@ API Changes
- ``DataFrame.apply`` will use the ``reduce`` argument to determine whether a
``Series`` or a ``DataFrame`` should be returned when the ``DataFrame`` is
empty (:issue:`6007`).
+- Add ``is_month_start``, ``is_month_end``, ``is_quarter_start``, ``is_quarter_end``,
+ ``is_year_start``, ``is_year_end`` accessors for ``DatetimeIndex``/``Timestamp`` which return a boolean array
+ indicating whether the timestamp(s) are at the start/end of the month/quarter/year defined by the
+ frequency of the ``DatetimeIndex``/``Timestamp`` (:issue:`4565`, :issue:`6998`)
Experimental Features
~~~~~~~~~~~~~~~~~~~~~
diff --git a/doc/source/timeseries.rst b/doc/source/timeseries.rst
index e3070ff1507a2..1cae66fada587 100644
--- a/doc/source/timeseries.rst
+++ b/doc/source/timeseries.rst
@@ -408,6 +408,39 @@ regularity will result in a ``DatetimeIndex`` (but frequency is lost):
.. _timeseries.offsets:
+Time/Date Components
+~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+There are several time/date properties that one can access from ``Timestamp`` or a collection of timestamps like a ``DatetimeIndex``.
+
+.. csv-table::
+ :header: "Property", "Description"
+ :widths: 15, 65
+
+ year, "The year of the datetime"
+ month,"The month of the datetime"
+ day,"The days of the datetime"
+ hour,"The hour of the datetime"
+ minute,"The minutes of the datetime"
+ second,"The seconds of the datetime"
+ microsecond,"The microseconds of the datetime"
+ nanosecond,"The nanoseconds of the datetime"
+ date,"Returns datetime.date"
+ time,"Returns datetime.time"
+ dayofyear,"The ordinal day of year"
+ weekofyear,"The week ordinal of the year"
+ week,"The week ordinal of the year"
+ dayofweek,"The day of the week with Monday=0, Sunday=6"
+ weekday,"The day of the week with Monday=0, Sunday=6"
+ quarter,"Quarter of the date: Jan-Mar = 1, Apr-Jun = 2, etc."
+ is_month_start,"Logical indicating if first day of month (defined by frequency)"
+ is_month_end,"Logical indicating if last day of month (defined by frequency)"
+ is_quarter_start,"Logical indicating if first day of quarter (defined by frequency)"
+ is_quarter_end,"Logical indicating if last day of quarter (defined by frequency)"
+ is_year_start,"Logical indicating if first day of year (defined by frequency)"
+ is_year_end,"Logical indicating if last day of year (defined by frequency)"
+
+
DateOffset objects
------------------
diff --git a/doc/source/v0.14.0.txt b/doc/source/v0.14.0.txt
index 7962d21b85ecd..f281d40642063 100644
--- a/doc/source/v0.14.0.txt
+++ b/doc/source/v0.14.0.txt
@@ -89,6 +89,8 @@ API changes
s.year
s.index.year
+- Add ``is_month_start``, ``is_month_end``, ``is_quarter_start``, ``is_quarter_end``, ``is_year_start``, ``is_year_end`` accessors for ``DatetimeIndex``/``Timestamp`` which return a boolean array indicating whether the timestamp(s) are at the start/end of the month/quarter/year defined by the frequency of the ``DatetimeIndex``/``Timestamp`` (:issue:`4565`, :issue:`6998`)
+
- More consistent behaviour for some groupby methods:
groupby ``head`` and ``tail`` now act more like ``filter`` rather than an aggregation:
diff --git a/pandas/core/base.py b/pandas/core/base.py
index ec6a4ffbcefbb..1e9adb60f534e 100644
--- a/pandas/core/base.py
+++ b/pandas/core/base.py
@@ -336,3 +336,9 @@ def nunique(self):
dayofyear = _field_accessor('dayofyear', "The ordinal day of the year")
quarter = _field_accessor('quarter', "The quarter of the date")
qyear = _field_accessor('qyear')
+ is_month_start = _field_accessor('is_month_start', "Logical indicating if first day of month (defined by frequency)")
+ is_month_end = _field_accessor('is_month_end', "Logical indicating if last day of month (defined by frequency)")
+ is_quarter_start = _field_accessor('is_quarter_start', "Logical indicating if first day of quarter (defined by frequency)")
+ is_quarter_end = _field_accessor('is_quarter_end', "Logical indicating if last day of quarter (defined by frequency)")
+ is_year_start = _field_accessor('is_year_start', "Logical indicating if first day of year (defined by frequency)")
+ is_year_end = _field_accessor('is_year_end', "Logical indicating if last day of year (defined by frequency)")
diff --git a/pandas/tests/test_base.py b/pandas/tests/test_base.py
index 036a868fe0451..81b3d4631bfbf 100644
--- a/pandas/tests/test_base.py
+++ b/pandas/tests/test_base.py
@@ -214,7 +214,7 @@ def test_value_counts_unique_nunique(self):
# freq must be specified because repeat makes freq ambiguous
o = klass(np.repeat(values, range(1, len(o) + 1)), freq=o.freq)
else:
- o = klass(np.repeat(values, range(1, len(o) + 1)))
+ o = klass(np.repeat(values, range(1, len(o) + 1)))
expected_s = Series(range(10, 0, -1), index=values[::-1], dtype='int64')
tm.assert_series_equal(o.value_counts(), expected_s)
@@ -246,7 +246,7 @@ def test_value_counts_unique_nunique(self):
if isinstance(o, PeriodIndex):
o = klass(np.repeat(values, range(1, len(o) + 1)), freq=o.freq)
else:
- o = klass(np.repeat(values, range(1, len(o) + 1)))
+ o = klass(np.repeat(values, range(1, len(o) + 1)))
if isinstance(o, DatetimeIndex):
# DatetimeIndex: nan is casted to Nat and included
@@ -278,7 +278,7 @@ def test_value_counts_inferred(self):
s = klass(s_values)
expected = Series([4, 3, 2, 1], index=['b', 'a', 'd', 'c'])
tm.assert_series_equal(s.value_counts(), expected)
-
+
self.assert_numpy_array_equal(s.unique(), np.unique(s_values))
self.assertEquals(s.nunique(), 4)
# don't sort, have to sort after the fact as not sorting is platform-dep
@@ -410,7 +410,7 @@ def setUp(self):
def test_ops_properties(self):
self.check_ops_properties(['year','month','day','hour','minute','second','weekofyear','week','dayofweek','dayofyear','quarter'])
- self.check_ops_properties(['date','time','microsecond','nanosecond'], lambda x: isinstance(x,DatetimeIndex))
+ self.check_ops_properties(['date','time','microsecond','nanosecond', 'is_month_start', 'is_month_end', 'is_quarter_start', 'is_quarter_end', 'is_year_start', 'is_year_end'], lambda x: isinstance(x,DatetimeIndex))
class TestPeriodIndexOps(Ops):
_allowed = '_allow_period_index_ops'
diff --git a/pandas/tseries/index.py b/pandas/tseries/index.py
index 6ac21e60ea7f3..a2e01c8110261 100644
--- a/pandas/tseries/index.py
+++ b/pandas/tseries/index.py
@@ -14,7 +14,7 @@
from pandas.compat import u
from pandas.tseries.frequencies import (
infer_freq, to_offset, get_period_alias,
- Resolution, get_reso_string)
+ Resolution, get_reso_string, get_offset)
from pandas.tseries.offsets import DateOffset, generate_range, Tick, CDay
from pandas.tseries.tools import parse_time_string, normalize_date
from pandas.util.decorators import cache_readonly
@@ -28,6 +28,7 @@
import pandas.algos as _algos
import pandas.index as _index
+from pandas.tslib import isleapyear
def _utc():
import pytz
@@ -43,7 +44,14 @@ def f(self):
utc = _utc()
if self.tz is not utc:
values = self._local_timestamps()
- return tslib.get_date_field(values, field)
+ if field in ['is_month_start', 'is_month_end',
+ 'is_quarter_start', 'is_quarter_end',
+ 'is_year_start', 'is_year_end']:
+ month_kw = self.freq.kwds.get('startingMonth', self.freq.kwds.get('month', 12)) if self.freq else 12
+ freqstr = self.freqstr if self.freq else None
+ return tslib.get_start_end_field(values, field, freqstr, month_kw)
+ else:
+ return tslib.get_date_field(values, field)
f.__name__ = name
f.__doc__ = docstring
return property(f)
@@ -1439,6 +1447,12 @@ def freqstr(self):
_weekday = _dayofweek
_dayofyear = _field_accessor('dayofyear', 'doy')
_quarter = _field_accessor('quarter', 'q')
+ _is_month_start = _field_accessor('is_month_start', 'is_month_start')
+ _is_month_end = _field_accessor('is_month_end', 'is_month_end')
+ _is_quarter_start = _field_accessor('is_quarter_start', 'is_quarter_start')
+ _is_quarter_end = _field_accessor('is_quarter_end', 'is_quarter_end')
+ _is_year_start = _field_accessor('is_year_start', 'is_year_start')
+ _is_year_end = _field_accessor('is_year_end', 'is_year_end')
@property
def _time(self):
@@ -1774,6 +1788,7 @@ def to_julian_date(self):
self.nanosecond/3600.0/1e+9
)/24.0)
+
def _generate_regular_range(start, end, periods, offset):
if isinstance(offset, Tick):
stride = offset.nanos
@@ -1831,7 +1846,7 @@ def date_range(start=None, end=None, periods=None, freq='D', tz=None,
Frequency strings can have multiples, e.g. '5H'
tz : string or None
Time zone name for returning localized DatetimeIndex, for example
- Asia/Hong_Kong
+ Asia/Hong_Kong
normalize : bool, default False
Normalize start/end dates to midnight before generating date range
name : str, default None
diff --git a/pandas/tseries/tests/test_timeseries.py b/pandas/tseries/tests/test_timeseries.py
index fc3ee993771d3..319eaee6d14df 100644
--- a/pandas/tseries/tests/test_timeseries.py
+++ b/pandas/tseries/tests/test_timeseries.py
@@ -1473,7 +1473,7 @@ def test_timestamp_fields(self):
# extra fields from DatetimeIndex like quarter and week
idx = tm.makeDateIndex(100)
- fields = ['dayofweek', 'dayofyear', 'week', 'weekofyear', 'quarter']
+ fields = ['dayofweek', 'dayofyear', 'week', 'weekofyear', 'quarter', 'is_month_start', 'is_month_end', 'is_quarter_start', 'is_quarter_end', 'is_year_start', 'is_year_end']
for f in fields:
expected = getattr(idx, f)[-1]
result = getattr(Timestamp(idx[-1]), f)
@@ -2192,7 +2192,7 @@ def test_join_with_period_index(self):
class TestDatetime64(tm.TestCase):
"""
- Also test supoprt for datetime64[ns] in Series / DataFrame
+ Also test support for datetime64[ns] in Series / DataFrame
"""
def setUp(self):
@@ -2202,37 +2202,115 @@ def setUp(self):
def test_datetimeindex_accessors(self):
dti = DatetimeIndex(
- freq='Q-JAN', start=datetime(1997, 12, 31), periods=100)
+ freq='D', start=datetime(1998, 1, 1), periods=365)
self.assertEquals(dti.year[0], 1998)
self.assertEquals(dti.month[0], 1)
- self.assertEquals(dti.day[0], 31)
+ self.assertEquals(dti.day[0], 1)
self.assertEquals(dti.hour[0], 0)
self.assertEquals(dti.minute[0], 0)
self.assertEquals(dti.second[0], 0)
self.assertEquals(dti.microsecond[0], 0)
- self.assertEquals(dti.dayofweek[0], 5)
+ self.assertEquals(dti.dayofweek[0], 3)
- self.assertEquals(dti.dayofyear[0], 31)
- self.assertEquals(dti.dayofyear[1], 120)
+ self.assertEquals(dti.dayofyear[0], 1)
+ self.assertEquals(dti.dayofyear[120], 121)
- self.assertEquals(dti.weekofyear[0], 5)
- self.assertEquals(dti.weekofyear[1], 18)
+ self.assertEquals(dti.weekofyear[0], 1)
+ self.assertEquals(dti.weekofyear[120], 18)
self.assertEquals(dti.quarter[0], 1)
- self.assertEquals(dti.quarter[1], 2)
-
- self.assertEquals(len(dti.year), 100)
- self.assertEquals(len(dti.month), 100)
- self.assertEquals(len(dti.day), 100)
- self.assertEquals(len(dti.hour), 100)
- self.assertEquals(len(dti.minute), 100)
- self.assertEquals(len(dti.second), 100)
- self.assertEquals(len(dti.microsecond), 100)
- self.assertEquals(len(dti.dayofweek), 100)
- self.assertEquals(len(dti.dayofyear), 100)
- self.assertEquals(len(dti.weekofyear), 100)
- self.assertEquals(len(dti.quarter), 100)
+ self.assertEquals(dti.quarter[120], 2)
+
+ self.assertEquals(dti.is_month_start[0], True)
+ self.assertEquals(dti.is_month_start[1], False)
+ self.assertEquals(dti.is_month_start[31], True)
+ self.assertEquals(dti.is_quarter_start[0], True)
+ self.assertEquals(dti.is_quarter_start[90], True)
+ self.assertEquals(dti.is_year_start[0], True)
+ self.assertEquals(dti.is_year_start[364], False)
+ self.assertEquals(dti.is_month_end[0], False)
+ self.assertEquals(dti.is_month_end[30], True)
+ self.assertEquals(dti.is_month_end[31], False)
+ self.assertEquals(dti.is_month_end[364], True)
+ self.assertEquals(dti.is_quarter_end[0], False)
+ self.assertEquals(dti.is_quarter_end[30], False)
+ self.assertEquals(dti.is_quarter_end[89], True)
+ self.assertEquals(dti.is_quarter_end[364], True)
+ self.assertEquals(dti.is_year_end[0], False)
+ self.assertEquals(dti.is_year_end[364], True)
+
+ self.assertEquals(len(dti.year), 365)
+ self.assertEquals(len(dti.month), 365)
+ self.assertEquals(len(dti.day), 365)
+ self.assertEquals(len(dti.hour), 365)
+ self.assertEquals(len(dti.minute), 365)
+ self.assertEquals(len(dti.second), 365)
+ self.assertEquals(len(dti.microsecond), 365)
+ self.assertEquals(len(dti.dayofweek), 365)
+ self.assertEquals(len(dti.dayofyear), 365)
+ self.assertEquals(len(dti.weekofyear), 365)
+ self.assertEquals(len(dti.quarter), 365)
+ self.assertEquals(len(dti.is_month_start), 365)
+ self.assertEquals(len(dti.is_month_end), 365)
+ self.assertEquals(len(dti.is_quarter_start), 365)
+ self.assertEquals(len(dti.is_quarter_end), 365)
+ self.assertEquals(len(dti.is_year_start), 365)
+ self.assertEquals(len(dti.is_year_end), 365)
+
+ dti = DatetimeIndex(
+ freq='BQ-FEB', start=datetime(1998, 1, 1), periods=4)
+
+ self.assertEquals(sum(dti.is_quarter_start), 0)
+ self.assertEquals(sum(dti.is_quarter_end), 4)
+ self.assertEquals(sum(dti.is_year_start), 0)
+ self.assertEquals(sum(dti.is_year_end), 1)
+
+ # Ensure is_start/end accessors throw ValueError for CustomBusinessDay, CBD requires np >= 1.7
+ if not _np_version_under1p7:
+ bday_egypt = offsets.CustomBusinessDay(weekmask='Sun Mon Tue Wed Thu')
+ dti = date_range(datetime(2013, 4, 30), periods=5, freq=bday_egypt)
+ self.assertRaises(ValueError, lambda: dti.is_month_start)
+
+ dti = DatetimeIndex(['2000-01-01', '2000-01-02', '2000-01-03'])
+
+ self.assertEquals(dti.is_month_start[0], 1)
+
+ tests = [
+ (Timestamp('2013-06-01', offset='M').is_month_start, 1),
+ (Timestamp('2013-06-01', offset='BM').is_month_start, 0),
+ (Timestamp('2013-06-03', offset='M').is_month_start, 0),
+ (Timestamp('2013-06-03', offset='BM').is_month_start, 1),
+ (Timestamp('2013-02-28', offset='Q-FEB').is_month_end, 1),
+ (Timestamp('2013-02-28', offset='Q-FEB').is_quarter_end, 1),
+ (Timestamp('2013-02-28', offset='Q-FEB').is_year_end, 1),
+ (Timestamp('2013-03-01', offset='Q-FEB').is_month_start, 1),
+ (Timestamp('2013-03-01', offset='Q-FEB').is_quarter_start, 1),
+ (Timestamp('2013-03-01', offset='Q-FEB').is_year_start, 1),
+ (Timestamp('2013-03-31', offset='QS-FEB').is_month_end, 1),
+ (Timestamp('2013-03-31', offset='QS-FEB').is_quarter_end, 0),
+ (Timestamp('2013-03-31', offset='QS-FEB').is_year_end, 0),
+ (Timestamp('2013-02-01', offset='QS-FEB').is_month_start, 1),
+ (Timestamp('2013-02-01', offset='QS-FEB').is_quarter_start, 1),
+ (Timestamp('2013-02-01', offset='QS-FEB').is_year_start, 1),
+ (Timestamp('2013-06-30', offset='BQ').is_month_end, 0),
+ (Timestamp('2013-06-30', offset='BQ').is_quarter_end, 0),
+ (Timestamp('2013-06-30', offset='BQ').is_year_end, 0),
+ (Timestamp('2013-06-28', offset='BQ').is_month_end, 1),
+ (Timestamp('2013-06-28', offset='BQ').is_quarter_end, 1),
+ (Timestamp('2013-06-28', offset='BQ').is_year_end, 0),
+ (Timestamp('2013-06-30', offset='BQS-APR').is_month_end, 0),
+ (Timestamp('2013-06-30', offset='BQS-APR').is_quarter_end, 0),
+ (Timestamp('2013-06-30', offset='BQS-APR').is_year_end, 0),
+ (Timestamp('2013-06-28', offset='BQS-APR').is_month_end, 1),
+ (Timestamp('2013-06-28', offset='BQS-APR').is_quarter_end, 1),
+ (Timestamp('2013-03-29', offset='BQS-APR').is_year_end, 1),
+ (Timestamp('2013-11-01', offset='AS-NOV').is_year_start, 1),
+ (Timestamp('2013-10-31', offset='AS-NOV').is_year_end, 1)]
+
+ for ts, value in tests:
+ self.assertEquals(ts, value)
+
def test_nanosecond_field(self):
dti = DatetimeIndex(np.arange(10))
diff --git a/pandas/tslib.pyx b/pandas/tslib.pyx
index e76a2d0cb6cf1..6d99d38049e5a 100644
--- a/pandas/tslib.pyx
+++ b/pandas/tslib.pyx
@@ -1,7 +1,7 @@
# cython: profile=False
cimport numpy as np
-from numpy cimport (int32_t, int64_t, import_array, ndarray,
+from numpy cimport (int8_t, int32_t, int64_t, import_array, ndarray,
NPY_INT64, NPY_DATETIME, NPY_TIMEDELTA)
import numpy as np
@@ -303,6 +303,30 @@ class Timestamp(_Timestamp):
def asm8(self):
return np.int64(self.value).view('M8[ns]')
+ @property
+ def is_month_start(self):
+ return self._get_start_end_field('is_month_start')
+
+ @property
+ def is_month_end(self):
+ return self._get_start_end_field('is_month_end')
+
+ @property
+ def is_quarter_start(self):
+ return self._get_start_end_field('is_quarter_start')
+
+ @property
+ def is_quarter_end(self):
+ return self._get_start_end_field('is_quarter_end')
+
+ @property
+ def is_year_start(self):
+ return self._get_start_end_field('is_year_start')
+
+ @property
+ def is_year_end(self):
+ return self._get_start_end_field('is_year_end')
+
def tz_localize(self, tz):
"""
Convert naive Timestamp to local time zone
@@ -725,6 +749,12 @@ cdef class _Timestamp(datetime):
out = get_date_field(np.array([self.value], dtype=np.int64), field)
return out[0]
+ cpdef _get_start_end_field(self, field):
+ month_kw = self.freq.kwds.get('startingMonth', self.freq.kwds.get('month', 12)) if self.freq else 12
+ freqstr = self.freqstr if self.freq else None
+ out = get_start_end_field(np.array([self.value], dtype=np.int64), field, freqstr, month_kw)
+ return out[0]
+
cdef PyTypeObject* ts_type = <PyTypeObject*> Timestamp
@@ -2298,6 +2328,225 @@ def get_date_field(ndarray[int64_t] dtindex, object field):
raise ValueError("Field %s not supported" % field)
+@cython.wraparound(False)
+def get_start_end_field(ndarray[int64_t] dtindex, object field, object freqstr=None, int month_kw=12):
+ '''
+ Given an int64-based datetime index return array of indicators
+ of whether timestamps are at the start/end of the month/quarter/year
+ (defined by frequency).
+ '''
+ cdef:
+ _TSObject ts
+ Py_ssize_t i
+ int count = 0
+ bint is_business = 0
+ int end_month = 12
+ int start_month = 1
+ ndarray[int8_t] out
+ ndarray[int32_t, ndim=2] _month_offset
+ bint isleap
+ pandas_datetimestruct dts
+ int mo_off, dom, doy, dow, ldom
+
+ _month_offset = np.array(
+ [[ 0, 31, 59, 90, 120, 151, 181, 212, 243, 273, 304, 334, 365 ],
+ [ 0, 31, 60, 91, 121, 152, 182, 213, 244, 274, 305, 335, 366 ]],
+ dtype=np.int32 )
+
+ count = len(dtindex)
+ out = np.zeros(count, dtype='int8')
+
+ if freqstr:
+ if freqstr == 'C':
+ raise ValueError("Custom business days is not supported by %s" % field)
+ is_business = freqstr[0] == 'B'
+
+ # YearBegin(), BYearBegin() use month = starting month of year
+ # QuarterBegin(), BQuarterBegin() use startingMonth = starting month of year
+ # other offsets use month, startingMonth as ending month of year.
+
+ if (freqstr[0:2] in ['MS', 'QS', 'AS']) or (freqstr[1:3] in ['MS', 'QS', 'AS']):
+ end_month = 12 if month_kw == 1 else month_kw - 1
+ start_month = month_kw
+ else:
+ end_month = month_kw
+ start_month = (end_month % 12) + 1
+ else:
+ end_month = 12
+ start_month = 1
+
+ if field == 'is_month_start':
+ if is_business:
+ for i in range(count):
+ if dtindex[i] == NPY_NAT: out[i] = -1; continue
+
+ pandas_datetime_to_datetimestruct(dtindex[i], PANDAS_FR_ns, &dts)
+ ts = convert_to_tsobject(dtindex[i], None, None)
+ dom = dts.day
+ dow = ts_dayofweek(ts)
+
+ if (dom == 1 and dow < 5) or (dom <= 3 and dow == 0):
+ out[i] = 1
+ return out.view(bool)
+ else:
+ for i in range(count):
+ if dtindex[i] == NPY_NAT: out[i] = -1; continue
+
+ pandas_datetime_to_datetimestruct(dtindex[i], PANDAS_FR_ns, &dts)
+ dom = dts.day
+
+ if dom == 1:
+ out[i] = 1
+ return out.view(bool)
+
+ elif field == 'is_month_end':
+ if is_business:
+ for i in range(count):
+ if dtindex[i] == NPY_NAT: out[i] = -1; continue
+
+ pandas_datetime_to_datetimestruct(dtindex[i], PANDAS_FR_ns, &dts)
+ ts = convert_to_tsobject(dtindex[i], None, None)
+ isleap = is_leapyear(dts.year)
+ mo_off = _month_offset[isleap, dts.month - 1]
+ dom = dts.day
+ doy = mo_off + dom
+ ldom = _month_offset[isleap, dts.month]
+ dow = ts_dayofweek(ts)
+
+ if (ldom == doy and dow < 5) or (dow == 4 and (ldom - doy <= 2)):
+ out[i] = 1
+ return out.view(bool)
+ else:
+ for i in range(count):
+ if dtindex[i] == NPY_NAT: out[i] = -1; continue
+
+ pandas_datetime_to_datetimestruct(dtindex[i], PANDAS_FR_ns, &dts)
+ isleap = is_leapyear(dts.year)
+ mo_off = _month_offset[isleap, dts.month - 1]
+ dom = dts.day
+ doy = mo_off + dom
+ ldom = _month_offset[isleap, dts.month]
+
+ if ldom == doy:
+ out[i] = 1
+ return out.view(bool)
+
+ elif field == 'is_quarter_start':
+ if is_business:
+ for i in range(count):
+ if dtindex[i] == NPY_NAT: out[i] = -1; continue
+
+ pandas_datetime_to_datetimestruct(dtindex[i], PANDAS_FR_ns, &dts)
+ ts = convert_to_tsobject(dtindex[i], None, None)
+ dom = dts.day
+ dow = ts_dayofweek(ts)
+
+ if ((dts.month - start_month) % 3 == 0) and ((dom == 1 and dow < 5) or (dom <= 3 and dow == 0)):
+ out[i] = 1
+ return out.view(bool)
+ else:
+ for i in range(count):
+ if dtindex[i] == NPY_NAT: out[i] = -1; continue
+
+ pandas_datetime_to_datetimestruct(dtindex[i], PANDAS_FR_ns, &dts)
+ dom = dts.day
+
+ if ((dts.month - start_month) % 3 == 0) and dom == 1:
+ out[i] = 1
+ return out.view(bool)
+
+ elif field == 'is_quarter_end':
+ if is_business:
+ for i in range(count):
+ if dtindex[i] == NPY_NAT: out[i] = -1; continue
+
+ pandas_datetime_to_datetimestruct(dtindex[i], PANDAS_FR_ns, &dts)
+ ts = convert_to_tsobject(dtindex[i], None, None)
+ isleap = is_leapyear(dts.year)
+ mo_off = _month_offset[isleap, dts.month - 1]
+ dom = dts.day
+ doy = mo_off + dom
+ ldom = _month_offset[isleap, dts.month]
+ dow = ts_dayofweek(ts)
+
+ if ((dts.month - end_month) % 3 == 0) and ((ldom == doy and dow < 5) or (dow == 4 and (ldom - doy <= 2))):
+ out[i] = 1
+ return out.view(bool)
+ else:
+ for i in range(count):
+ if dtindex[i] == NPY_NAT: out[i] = -1; continue
+
+ pandas_datetime_to_datetimestruct(dtindex[i], PANDAS_FR_ns, &dts)
+ isleap = is_leapyear(dts.year)
+ mo_off = _month_offset[isleap, dts.month - 1]
+ dom = dts.day
+ doy = mo_off + dom
+ ldom = _month_offset[isleap, dts.month]
+
+ if ((dts.month - end_month) % 3 == 0) and (ldom == doy):
+ out[i] = 1
+ return out.view(bool)
+
+ elif field == 'is_year_start':
+ if is_business:
+ for i in range(count):
+ if dtindex[i] == NPY_NAT: out[i] = -1; continue
+
+ pandas_datetime_to_datetimestruct(dtindex[i], PANDAS_FR_ns, &dts)
+ ts = convert_to_tsobject(dtindex[i], None, None)
+ dom = dts.day
+ dow = ts_dayofweek(ts)
+
+ if (dts.month == start_month) and ((dom == 1 and dow < 5) or (dom <= 3 and dow == 0)):
+ out[i] = 1
+ return out.view(bool)
+ else:
+ for i in range(count):
+ if dtindex[i] == NPY_NAT: out[i] = -1; continue
+
+ pandas_datetime_to_datetimestruct(dtindex[i], PANDAS_FR_ns, &dts)
+ dom = dts.day
+
+ if (dts.month == start_month) and dom == 1:
+ out[i] = 1
+ return out.view(bool)
+
+ elif field == 'is_year_end':
+ if is_business:
+ for i in range(count):
+ if dtindex[i] == NPY_NAT: out[i] = -1; continue
+
+ pandas_datetime_to_datetimestruct(dtindex[i], PANDAS_FR_ns, &dts)
+ ts = convert_to_tsobject(dtindex[i], None, None)
+ isleap = is_leapyear(dts.year)
+ dom = dts.day
+ mo_off = _month_offset[isleap, dts.month - 1]
+ doy = mo_off + dom
+ dow = ts_dayofweek(ts)
+ ldom = _month_offset[isleap, dts.month]
+
+ if (dts.month == end_month) and ((ldom == doy and dow < 5) or (dow == 4 and (ldom - doy <= 2))):
+ out[i] = 1
+ return out.view(bool)
+ else:
+ for i in range(count):
+ if dtindex[i] == NPY_NAT: out[i] = -1; continue
+
+ pandas_datetime_to_datetimestruct(dtindex[i], PANDAS_FR_ns, &dts)
+ ts = convert_to_tsobject(dtindex[i], None, None)
+ isleap = is_leapyear(dts.year)
+ mo_off = _month_offset[isleap, dts.month - 1]
+ dom = dts.day
+ doy = mo_off + dom
+ ldom = _month_offset[isleap, dts.month]
+
+ if (dts.month == end_month) and (ldom == doy):
+ out[i] = 1
+ return out.view(bool)
+
+ raise ValueError("Field %s not supported" % field)
+
+
cdef inline int m8_weekday(int64_t val):
ts = convert_to_tsobject(val, None, None)
return ts_dayofweek(ts)
diff --git a/vb_suite/timeseries.py b/vb_suite/timeseries.py
index 06ef99442b574..a3d4d4c7d40a5 100644
--- a/vb_suite/timeseries.py
+++ b/vb_suite/timeseries.py
@@ -303,3 +303,14 @@ def date_range(start=None, end=None, periods=None, freq=None):
timeseries_custom_bmonthend_incr_n = \
Benchmark("date + 10 * cme",setup)
+
+#----------------------------------------------------------------------
+# month/quarter/year start/end accessors
+
+setup = common_setup + """
+N = 10000
+rng = date_range('1/1/1', periods=N, freq='B')
+"""
+
+timeseries_is_month_start = Benchmark('rng.is_month_start', setup,
+ start_date=datetime(2014, 4, 1))
| closes #4565
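For intuition, the business-frequency "month start" rule used in the Cython loops can be sketched in plain Python (assuming Monday=0 weekday numbering, matching the `dow` values in the diff; the helper name is illustrative only):

```python
from datetime import date

def is_business_month_start(d: date) -> bool:
    # A date is the first business day of its month if it is the 1st and
    # falls on a weekday, or it is the 2nd/3rd and falls on a Monday
    # (meaning the 1st fell on a weekend).
    dom, dow = d.day, d.weekday()
    return (dom == 1 and dow < 5) or (dom <= 3 and dow == 0)

print(is_business_month_start(date(2013, 6, 3)))  # Mon Jun 3; Jun 1, 2013 was a Saturday
print(is_business_month_start(date(2013, 6, 1)))  # Saturday itself
```

This mirrors the test expectations above, where `Timestamp('2013-06-03', offset='BM').is_month_start` is truthy and `Timestamp('2013-06-01', offset='BM').is_month_start` is not.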
| https://api.github.com/repos/pandas-dev/pandas/pulls/4823 | 2013-09-12T07:47:33Z | 2014-04-29T23:41:57Z | 2014-04-29T23:41:57Z | 2014-06-19T15:50:34Z |
ENH/CLN: support enhanced timedelta64 operations/conversions | diff --git a/doc/source/io.rst b/doc/source/io.rst
index 1d3980e216587..da611c0375789 100644
--- a/doc/source/io.rst
+++ b/doc/source/io.rst
@@ -2009,6 +2009,26 @@ space. These are in terms of the total number of rows in a table.
Term('minor_axis', '=', ['A','B']) ],
start=0, stop=10)
+**Using timedelta64[ns]**
+
+.. versionadded:: 0.13
+
+Beginning in 0.13.0, you can store and query using the ``timedelta64[ns]`` type. Terms can be
+specified in the format: ``<float>(<unit>)``, where float may be signed (and fractional), and unit can be
+``D,s,ms,us,ns`` for the timedelta. Here's an example:
+
+.. warning::
+
+ This requires ``numpy >= 1.7``
+
+.. ipython:: python
+
+ from datetime import timedelta
+ dftd = DataFrame(dict(A = Timestamp('20130101'), B = [ Timestamp('20130101') + timedelta(days=i,seconds=10) for i in range(10) ]))
+ dftd['C'] = dftd['A']-dftd['B']
+ dftd
+ store.append('dftd',dftd,data_columns=True)
+ store.select('dftd',Term("C","<","-3.5D"))
Indexing
~~~~~~~~
diff --git a/doc/source/release.rst b/doc/source/release.rst
index 087d2880511d2..75194f6877a6e 100644
--- a/doc/source/release.rst
+++ b/doc/source/release.rst
@@ -156,6 +156,7 @@ API Changes
- a column multi-index will be recreated properly (:issue:`4710`); raise on trying to use a multi-index
with data_columns on the same axis
- ``select_as_coordinates`` will now return an ``Int64Index`` of the resultant selection set
+ - support ``timedelta64[ns]`` as a serialization type (:issue:`3577`)
- ``JSON``
- added ``date_unit`` parameter to specify resolution of timestamps. Options
@@ -190,6 +191,8 @@ API Changes
- provide automatic dtype conversions on _reduce operations (:issue:`3371`)
- exclude non-numerics if mixed types with datelike in _reduce operations (:issue:`3371`)
- default for ``tupleize_cols`` is now ``False`` for both ``to_csv`` and ``read_csv``. Fair warning in 0.12 (:issue:`3604`)
+ - moved timedeltas support to pandas.tseries.timedeltas.py; add timedeltas string parsing,
+ add top-level ``to_timedelta`` function
Internal Refactoring
~~~~~~~~~~~~~~~~~~~~
diff --git a/doc/source/timeseries.rst b/doc/source/timeseries.rst
index 11f4ac9f487c2..5dbf1ce77bad8 100644
--- a/doc/source/timeseries.rst
+++ b/doc/source/timeseries.rst
@@ -1211,6 +1211,26 @@ Time Deltas & Conversions
.. versionadded:: 0.13
+**string/integer conversion**
+
+Using the top-level ``to_timedelta``, you can convert a scalar or array from the standard
+timedelta format (produced by ``to_csv``) into a timedelta type (``np.timedelta64`` in ``nanoseconds``).
+It can also construct Series.
+
+.. warning::
+
+ This requires ``numpy >= 1.7``
+
+.. ipython:: python
+
+ to_timedelta('1 days 06:05:01.00003')
+ to_timedelta('15.5us')
+ to_timedelta(['1 days 06:05:01.00003','15.5us','nan'])
+ to_timedelta(np.arange(5),unit='s')
+ to_timedelta(np.arange(5),unit='d')
+
+**frequency conversion**
+
Timedeltas can be converted to other 'frequencies' by dividing by another timedelta.
These operations yield ``float64`` dtyped Series.
diff --git a/doc/source/v0.13.0.txt b/doc/source/v0.13.0.txt
index caf218747bdfb..f0a23b46373e9 100644
--- a/doc/source/v0.13.0.txt
+++ b/doc/source/v0.13.0.txt
@@ -80,7 +80,7 @@ API changes
See :ref:`here<io.hdf5-selecting_coordinates>` for an example.
- allow a passed locations array or mask as a ``where`` condition (:issue:`4467`).
See :ref:`here<io.hdf5-where_mask>` for an example.
-
+ - support ``timedelta64[ns]`` as a serialization type (:issue:`3577`)
- the ``format`` keyword now replaces the ``table`` keyword; allowed values are ``fixed(f)`` or ``table(t)``
the same defaults as prior < 0.13.0 remain, e.g. ``put`` implies 'fixed` or 'f' (Fixed) format
   and ``append`` implies 'table' or 't' (Table) format
@@ -208,6 +208,21 @@ Enhancements
- ``timedelta64[ns]`` operations
+ - Using the new top-level ``to_timedelta``, you can convert a scalar or array from the standard
+ timedelta format (produced by ``to_csv``) into a timedelta type (``np.timedelta64`` in ``nanoseconds``).
+
+ .. warning::
+
+ This requires ``numpy >= 1.7``
+
+ .. ipython:: python
+
+ to_timedelta('1 days 06:05:01.00003')
+ to_timedelta('15.5us')
+ to_timedelta(['1 days 06:05:01.00003','15.5us','nan'])
+ to_timedelta(np.arange(5),unit='s')
+ to_timedelta(np.arange(5),unit='d')
+
- A Series of dtype ``timedelta64[ns]`` can now be divided by another
``timedelta64[ns]`` object to yield a ``float64`` dtyped Series. This
is frequency conversion. See :ref:`here<timeseries.timedeltas_convert>` for the docs.
diff --git a/pandas/__init__.py b/pandas/__init__.py
index a0edb397c28c1..03681d3fa5a3f 100644
--- a/pandas/__init__.py
+++ b/pandas/__init__.py
@@ -18,6 +18,19 @@
from datetime import datetime
import numpy as np
+# XXX: HACK for NumPy 1.5.1 to suppress warnings
+try:
+ np.seterr(all='ignore')
+ # np.set_printoptions(suppress=True)
+except Exception: # pragma: no cover
+ pass
+
+# numpy versioning
+from distutils.version import LooseVersion
+_np_version = np.version.short_version
+_np_version_under1p6 = LooseVersion(_np_version) < '1.6'
+_np_version_under1p7 = LooseVersion(_np_version) < '1.7'
+
from pandas.version import version as __version__
from pandas.info import __doc__
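
The flags added above gate features on the installed numpy version. pandas itself uses ``distutils.version.LooseVersion``; for plain dotted numeric versions, a tuple comparison captures the same idea (the helper names below are hypothetical):

```python
# Version gating as in pandas/__init__.py, sketched without distutils:
# parse the dotted version once, then compare lexicographically.
def _version_tuple(s):
    return tuple(int(part) for part in s.split("."))

def is_under(version, threshold):
    # True when ``version`` sorts strictly before ``threshold``
    return _version_tuple(version) < _version_tuple(threshold)

is_under("1.6.2", "1.7")  # numpy 1.6.2 predates to_timedelta support
```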
diff --git a/pandas/core/common.py b/pandas/core/common.py
index ba7c6cc511933..b58bd92a4fd1f 100644
--- a/pandas/core/common.py
+++ b/pandas/core/common.py
@@ -11,7 +11,6 @@
import pandas.algos as algos
import pandas.lib as lib
import pandas.tslib as tslib
-from distutils.version import LooseVersion
from pandas import compat
from pandas.compat import StringIO, BytesIO, range, long, u, zip, map
from datetime import timedelta
@@ -19,15 +18,6 @@
from pandas.core.config import get_option
from pandas.core import array as pa
-
-# XXX: HACK for NumPy 1.5.1 to suppress warnings
-try:
- np.seterr(all='ignore')
- # np.set_printoptions(suppress=True)
-except Exception: # pragma: no cover
- pass
-
-
class PandasError(Exception):
pass
@@ -35,11 +25,6 @@ class PandasError(Exception):
class AmbiguousIndexError(PandasError, KeyError):
pass
-# versioning
-_np_version = np.version.short_version
-_np_version_under1p6 = LooseVersion(_np_version) < '1.6'
-_np_version_under1p7 = LooseVersion(_np_version) < '1.7'
-
_POSSIBLY_CAST_DTYPES = set([np.dtype(t)
for t in ['M8[ns]', 'm8[ns]', 'O', 'int8', 'uint8', 'int16', 'uint16', 'int32', 'uint32', 'int64', 'uint64']])
@@ -704,34 +689,13 @@ def diff(arr, n, axis=0):
return out_arr
-
-def _coerce_scalar_to_timedelta_type(r):
- # kludgy here until we have a timedelta scalar
- # handle the numpy < 1.7 case
-
- if is_integer(r):
- r = timedelta(microseconds=r/1000)
-
- if _np_version_under1p7:
- if not isinstance(r, timedelta):
- raise AssertionError("Invalid type for timedelta scalar: %s" % type(r))
- if compat.PY3:
- # convert to microseconds in timedelta64
- r = np.timedelta64(int(r.total_seconds()*1e9 + r.microseconds*1000))
- else:
- return r
-
- if isinstance(r, timedelta):
- r = np.timedelta64(r)
- elif not isinstance(r, np.timedelta64):
- raise AssertionError("Invalid type for timedelta scalar: %s" % type(r))
- return r.astype('timedelta64[ns]')
-
def _coerce_to_dtypes(result, dtypes):
""" given a dtypes and a result set, coerce the result elements to the dtypes """
if len(result) != len(dtypes):
raise AssertionError("_coerce_to_dtypes requires equal len arrays")
+ from pandas.tseries.timedeltas import _coerce_scalar_to_timedelta_type
+
def conv(r,dtype):
try:
if isnull(r):
@@ -1324,68 +1288,6 @@ def _possibly_convert_platform(values):
return values
-
-def _possibly_cast_to_timedelta(value, coerce=True):
- """ try to cast to timedelta64, if already a timedeltalike, then make
- sure that we are [ns] (as numpy 1.6.2 is very buggy in this regards,
- don't force the conversion unless coerce is True
-
- if coerce='compat' force a compatibilty coercerion (to timedeltas) if needeed
- """
-
- # coercion compatability
- if coerce == 'compat' and _np_version_under1p7:
-
- def convert(td, dtype):
-
- # we have an array with a non-object dtype
- if hasattr(td,'item'):
- td = td.astype(np.int64).item()
- if td == tslib.iNaT:
- return td
- if dtype == 'm8[us]':
- td *= 1000
- return td
-
- if td == tslib.compat_NaT:
- return tslib.iNaT
-
- # convert td value to a nanosecond value
- d = td.days
- s = td.seconds
- us = td.microseconds
-
- if dtype == 'object' or dtype == 'm8[ns]':
- td = 1000*us + (s + d * 24 * 3600) * 10 ** 9
- else:
- raise ValueError("invalid conversion of dtype in np < 1.7 [%s]" % dtype)
-
- return td
-
- # < 1.7 coercion
- if not is_list_like(value):
- value = np.array([ value ])
-
- dtype = value.dtype
- return np.array([ convert(v,dtype) for v in value ], dtype='m8[ns]')
-
- # deal with numpy not being able to handle certain timedelta operations
- if isinstance(value, (ABCSeries, np.ndarray)) and value.dtype.kind == 'm':
- if value.dtype != 'timedelta64[ns]':
- value = value.astype('timedelta64[ns]')
- return value
-
- # we don't have a timedelta, but we want to try to convert to one (but
- # don't force it)
- if coerce:
- new_value = tslib.array_to_timedelta64(
- _values_from_object(value).astype(object), coerce=False)
- if new_value.dtype == 'i8':
- value = np.array(new_value, dtype='timedelta64[ns]')
-
- return value
-
-
def _possibly_cast_to_datetime(value, dtype, coerce=False):
""" try to cast the array/value to a datetimelike dtype, converting float nan to iNaT """
@@ -1423,6 +1325,7 @@ def _possibly_cast_to_datetime(value, dtype, coerce=False):
from pandas.tseries.tools import to_datetime
value = to_datetime(value, coerce=coerce).values
elif is_timedelta64:
+ from pandas.tseries.timedeltas import _possibly_cast_to_timedelta
value = _possibly_cast_to_timedelta(value)
except:
pass
@@ -1448,6 +1351,7 @@ def _possibly_cast_to_datetime(value, dtype, coerce=False):
except:
pass
elif inferred_type in ['timedelta', 'timedelta64']:
+ from pandas.tseries.timedeltas import _possibly_cast_to_timedelta
value = _possibly_cast_to_timedelta(value)
return value
diff --git a/pandas/core/generic.py b/pandas/core/generic.py
index 212e2bad563b6..b9ffe788d183d 100644
--- a/pandas/core/generic.py
+++ b/pandas/core/generic.py
@@ -13,7 +13,7 @@
from pandas.tseries.index import DatetimeIndex
from pandas.core.internals import BlockManager
import pandas.core.common as com
-from pandas import compat
+from pandas import compat, _np_version_under1p7
from pandas.compat import map, zip, lrange
from pandas.core.common import (isnull, notnull, is_list_like,
_values_from_object,
@@ -1908,7 +1908,7 @@ def abs(self):
obj = np.abs(self)
# suprimo numpy 1.6 hacking
- if com._np_version_under1p7:
+ if _np_version_under1p7:
if self.ndim == 1:
if obj.dtype == 'm8[us]':
obj = obj.astype('m8[ns]')
diff --git a/pandas/core/series.py b/pandas/core/series.py
index 4516fcfbaee8e..8d6591c3acd60 100644
--- a/pandas/core/series.py
+++ b/pandas/core/series.py
@@ -19,6 +19,7 @@
_asarray_tuplesafe, is_integer_dtype,
_NS_DTYPE, _TD_DTYPE,
_infer_dtype_from_scalar, is_list_like, _values_from_object,
+ _possibly_cast_to_datetime, _possibly_castable, _possibly_convert_platform,
ABCSparseArray)
from pandas.core.index import (Index, MultiIndex, InvalidIndexError,
_ensure_index, _handle_legacy_indexes)
@@ -32,6 +33,7 @@
from pandas.tseries.index import DatetimeIndex
from pandas.tseries.period import PeriodIndex, Period
from pandas.tseries.offsets import DateOffset
+from pandas.tseries.timedeltas import _possibly_cast_to_timedelta
from pandas import compat
from pandas.util.terminal import get_terminal_size
from pandas.compat import zip, lzip, u, OrderedDict
@@ -142,7 +144,7 @@ def _convert_to_array(self, values, name=None):
values = values.to_series()
elif inferred_type in ('timedelta', 'timedelta64'):
# have a timedelta, convert to to ns here
- values = com._possibly_cast_to_timedelta(values, coerce=coerce)
+ values = _possibly_cast_to_timedelta(values, coerce=coerce)
elif inferred_type == 'integer':
# py3 compat where dtype is 'm' but is an integer
if values.dtype.kind == 'm':
@@ -160,7 +162,7 @@ def _convert_to_array(self, values, name=None):
raise TypeError("cannot use a non-absolute DateOffset in "
"datetime/timedelta operations [{0}]".format(
','.join([ com.pprint_thing(v) for v in values[mask] ])))
- values = com._possibly_cast_to_timedelta(os, coerce=coerce)
+ values = _possibly_cast_to_timedelta(os, coerce=coerce)
else:
raise TypeError("incompatible type [{0}] for a datetime/timedelta operation".format(pa.array(values).dtype))
@@ -3215,11 +3217,11 @@ def _try_cast(arr, take_fast_path):
# perf shortcut as this is the most common case
if take_fast_path:
- if com._possibly_castable(arr) and not copy and dtype is None:
+ if _possibly_castable(arr) and not copy and dtype is None:
return arr
try:
- arr = com._possibly_cast_to_datetime(arr, dtype)
+ arr = _possibly_cast_to_datetime(arr, dtype)
subarr = pa.array(arr, dtype=dtype, copy=copy)
except (ValueError, TypeError):
if dtype is not None and raise_cast_failure:
@@ -3266,9 +3268,9 @@ def _try_cast(arr, take_fast_path):
subarr = lib.maybe_convert_objects(subarr)
else:
- subarr = com._possibly_convert_platform(data)
+ subarr = _possibly_convert_platform(data)
- subarr = com._possibly_cast_to_datetime(subarr, dtype)
+ subarr = _possibly_cast_to_datetime(subarr, dtype)
else:
subarr = _try_cast(data, False)
@@ -3285,7 +3287,7 @@ def _try_cast(arr, take_fast_path):
dtype, value = _infer_dtype_from_scalar(value)
else:
# need to possibly convert the value here
- value = com._possibly_cast_to_datetime(value, dtype)
+ value = _possibly_cast_to_datetime(value, dtype)
subarr = pa.empty(len(index), dtype=dtype)
subarr.fill(value)
diff --git a/pandas/io/pytables.py b/pandas/io/pytables.py
index 6759e07ed7935..9b6a230f6a551 100644
--- a/pandas/io/pytables.py
+++ b/pandas/io/pytables.py
@@ -17,7 +17,7 @@
import numpy as np
import pandas
from pandas import (Series, TimeSeries, DataFrame, Panel, Panel4D, Index,
- MultiIndex, Int64Index, Timestamp)
+ MultiIndex, Int64Index, Timestamp, _np_version_under1p7)
from pandas.sparse.api import SparseSeries, SparseDataFrame, SparsePanel
from pandas.sparse.array import BlockIndex, IntIndex
from pandas.tseries.api import PeriodIndex, DatetimeIndex
@@ -29,6 +29,7 @@
from pandas.core.internals import BlockManager, make_block
from pandas.core.reshape import block2d_to_blocknd, factor_indexer
from pandas.core.index import _ensure_index
+from pandas.tseries.timedeltas import _coerce_scalar_to_timedelta_type
import pandas.core.common as com
from pandas.tools.merge import concat
from pandas import compat
@@ -1527,6 +1528,8 @@ def set_kind(self):
self.kind = 'integer'
elif dtype.startswith(u('date')):
self.kind = 'datetime'
+ elif dtype.startswith(u('timedelta')):
+ self.kind = 'timedelta'
elif dtype.startswith(u('bool')):
self.kind = 'bool'
else:
@@ -1547,6 +1550,11 @@ def set_atom(self, block, existing_col, min_itemsize, nan_rep, info, encoding=No
if inferred_type == 'datetime64':
self.set_atom_datetime64(block)
+ elif dtype == 'timedelta64[ns]':
+ if _np_version_under1p7:
+ raise TypeError(
+                    "timedelta64 is not supported under numpy < 1.7")
+ self.set_atom_timedelta64(block)
elif inferred_type == 'date':
raise TypeError(
"[date] is not implemented as a table column")
@@ -1667,6 +1675,16 @@ def set_atom_datetime64(self, block, values=None):
values = block.values.view('i8')
self.set_data(values, 'datetime64')
+ def get_atom_timedelta64(self, block):
+ return _tables().Int64Col(shape=block.shape[0])
+
+ def set_atom_timedelta64(self, block, values=None):
+ self.kind = 'timedelta64'
+ self.typ = self.get_atom_timedelta64(block)
+ if values is None:
+ values = block.values.view('i8')
+ self.set_data(values, 'timedelta64')
+
@property
def shape(self):
return getattr(self.data, 'shape', None)
@@ -1719,6 +1737,8 @@ def convert(self, values, nan_rep, encoding):
else:
self.data = np.asarray(self.data, dtype='M8[ns]')
+ elif dtype == u('timedelta64'):
+ self.data = np.asarray(self.data, dtype='m8[ns]')
elif dtype == u('date'):
self.data = np.array(
[date.fromtimestamp(v) for v in self.data], dtype=object)
@@ -1767,6 +1787,9 @@ def get_atom_data(self, block):
def get_atom_datetime64(self, block):
return _tables().Int64Col()
+ def get_atom_timedelta64(self, block):
+ return _tables().Int64Col()
+
class GenericDataIndexableCol(DataIndexableCol):
@@ -2007,6 +2030,11 @@ def read_array(self, key):
if dtype == u('datetime64'):
ret = np.array(ret, dtype='M8[ns]')
+ elif dtype == u('timedelta64'):
+ if _np_version_under1p7:
+ raise TypeError(
+                    "timedelta64 is not supported under numpy < 1.7")
+ ret = np.array(ret, dtype='m8[ns]')
if transposed:
return ret.T
@@ -2214,6 +2242,9 @@ def write_array(self, key, value, items=None):
elif value.dtype.type == np.datetime64:
self._handle.createArray(self.group, key, value.view('i8'))
getattr(self.group, key)._v_attrs.value_type = 'datetime64'
+ elif value.dtype.type == np.timedelta64:
+ self._handle.createArray(self.group, key, value.view('i8'))
+ getattr(self.group, key)._v_attrs.value_type = 'timedelta64'
else:
if empty_array:
self.write_array_empty(key, value)
@@ -4000,7 +4031,9 @@ def eval(self):
""" set the numexpr expression for this term """
if not self.is_valid:
- raise ValueError("query term is not valid [%s]" % str(self))
+ raise ValueError("query term is not valid [{0}]\n"
+                             " all query terms must include a reference to\n"
+ " either an axis (e.g. index or column), or a data_columns\n".format(str(self)))
# convert values if we are in the table
if self.is_in_table:
@@ -4060,6 +4093,9 @@ def stringify(value):
if v.tz is not None:
v = v.tz_convert('UTC')
return TermValue(v, v.value, kind)
+ elif kind == u('timedelta64') or kind == u('timedelta'):
+ v = _coerce_scalar_to_timedelta_type(v,unit='s').item()
+ return TermValue(int(v), v, kind)
elif (isinstance(v, datetime) or hasattr(v, 'timetuple')
or kind == u('date')):
v = time.mktime(v.timetuple())
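
The ``Term`` handling above coerces a timedelta query value to the stored int64-nanosecond representation before comparison. A rough standalone sketch of that coercion (the helper name is hypothetical; HDFStore's actual path goes through ``_coerce_scalar_to_timedelta_type``):

```python
import numpy as np
from datetime import timedelta

def coerce_timedelta_query_value(value):
    # HDFStore serializes timedelta64 columns as int64 nanoseconds, so a
    # query value must be brought to the same representation.
    if isinstance(value, timedelta):
        td = np.timedelta64(value)
    else:
        td = np.timedelta64(int(value), "s")  # assume plain numbers mean seconds
    return int(td.astype("m8[ns]").astype("i8"))

coerce_timedelta_query_value(timedelta(days=-3))
```

This is the shape of comparison exercised by ``Term("C", "<", '-3D')`` in the new test below.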
diff --git a/pandas/io/tests/test_pytables.py b/pandas/io/tests/test_pytables.py
index 7e5c3f9fff061..3f4ce72198215 100644
--- a/pandas/io/tests/test_pytables.py
+++ b/pandas/io/tests/test_pytables.py
@@ -22,7 +22,8 @@
assert_frame_equal,
assert_series_equal)
from pandas import concat, Timestamp
-from pandas import compat
+from pandas import compat, _np_version_under1p7
+from pandas.core import common as com
from numpy.testing.decorators import slow
@@ -1732,7 +1733,7 @@ def test_unimplemented_dtypes_table_columns(self):
# this fails because we have a date in the object block......
self.assertRaises(TypeError, store.append, 'df_unimplemented', df)
- def test_table_append_with_timezones(self):
+ def test_append_with_timezones(self):
from datetime import timedelta
@@ -1798,6 +1799,51 @@ def compare(a,b):
result = store.select('df')
assert_frame_equal(result,df)
+ def test_append_with_timedelta(self):
+ if _np_version_under1p7:
+ raise nose.SkipTest("requires numpy >= 1.7")
+
+ # GH 3577
+ # append timedelta
+
+ from datetime import timedelta
+ df = DataFrame(dict(A = Timestamp('20130101'), B = [ Timestamp('20130101') + timedelta(days=i,seconds=10) for i in range(10) ]))
+ df['C'] = df['A']-df['B']
+ df.ix[3:5,'C'] = np.nan
+
+ with ensure_clean(self.path) as store:
+
+ # table
+ _maybe_remove(store, 'df')
+ store.append('df',df,data_columns=True)
+ result = store.select('df')
+ assert_frame_equal(result,df)
+
+ result = store.select('df',Term("C<100000"))
+ assert_frame_equal(result,df)
+
+ result = store.select('df',Term("C","<",-3*86400))
+ assert_frame_equal(result,df.iloc[3:])
+
+ result = store.select('df',Term("C","<",'-3D'))
+ assert_frame_equal(result,df.iloc[3:])
+
+ # a bit hacky here as we don't really deal with the NaT properly
+
+ result = store.select('df',Term("C","<",'-500000s'))
+ result = result.dropna(subset=['C'])
+ assert_frame_equal(result,df.iloc[6:])
+
+ result = store.select('df',Term("C","<",'-3.5D'))
+ result = result.iloc[1:]
+ assert_frame_equal(result,df.iloc[4:])
+
+ # fixed
+ _maybe_remove(store, 'df2')
+ store.put('df2',df)
+ result = store.select('df2')
+ assert_frame_equal(result,df)
+
def test_remove(self):
with ensure_clean(self.path) as store:
diff --git a/pandas/tests/test_frame.py b/pandas/tests/test_frame.py
index 723810a19d140..c5af0b0d4d5c8 100644
--- a/pandas/tests/test_frame.py
+++ b/pandas/tests/test_frame.py
@@ -3248,9 +3248,10 @@ def test_operators_timedelta64(self):
mixed['F'] = Timestamp('20130101')
# results in an object array
+ from pandas.tseries.timedeltas import _coerce_scalar_to_timedelta_type
result = mixed.min()
- expected = Series([com._coerce_scalar_to_timedelta_type(timedelta(seconds=5*60+5)),
- com._coerce_scalar_to_timedelta_type(timedelta(days=-1)),
+ expected = Series([_coerce_scalar_to_timedelta_type(timedelta(seconds=5*60+5)),
+ _coerce_scalar_to_timedelta_type(timedelta(days=-1)),
'foo',
1,
1.0,
diff --git a/pandas/tests/test_series.py b/pandas/tests/test_series.py
index 1f008354756bc..7a993cbcf07f4 100644
--- a/pandas/tests/test_series.py
+++ b/pandas/tests/test_series.py
@@ -14,7 +14,7 @@
import pandas as pd
from pandas import (Index, Series, DataFrame, isnull, notnull,
- bdate_range, date_range)
+ bdate_range, date_range, _np_version_under1p7)
from pandas.core.index import MultiIndex
from pandas.tseries.index import Timestamp, DatetimeIndex
import pandas.core.config as cf
@@ -2188,7 +2188,7 @@ def test_timedeltas_with_DateOffset(self):
[Timestamp('20130101 9:06:00.005'), Timestamp('20130101 9:07:00.005')])
assert_series_equal(result, expected)
- if not com._np_version_under1p7:
+ if not _np_version_under1p7:
# operate with np.timedelta64 correctly
result = s + np.timedelta64(1, 's')
@@ -2292,7 +2292,7 @@ def test_timedelta64_operations_with_integers(self):
self.assertRaises(TypeError, sop, s2.values)
def test_timedelta64_conversions(self):
- if com._np_version_under1p7:
+ if _np_version_under1p7:
raise nose.SkipTest("cannot use 2 argument form of timedelta64 conversions with numpy < 1.7")
startdate = Series(date_range('2013-01-01', '2013-01-03'))
@@ -2317,7 +2317,7 @@ def test_timedelta64_equal_timedelta_supported_ops(self):
'm': 60 * 1000000, 's': 1000000, 'us': 1}
def timedelta64(*args):
- if com._np_version_under1p7:
+ if _np_version_under1p7:
coeffs = np.array(args)
terms = np.array([npy16_mappings[interval]
for interval in intervals])
@@ -2426,7 +2426,7 @@ def test_timedelta64_functions(self):
assert_series_equal(result, expected)
def test_timedelta_fillna(self):
- if com._np_version_under1p7:
+ if _np_version_under1p7:
raise nose.SkipTest("timedelta broken in np 1.6.1")
#GH 3371
@@ -2498,12 +2498,12 @@ def test_datetime64_fillna(self):
assert_series_equal(result,expected)
def test_sub_of_datetime_from_TimeSeries(self):
- from pandas.core import common as com
+ from pandas.tseries.timedeltas import _possibly_cast_to_timedelta
from datetime import datetime
a = Timestamp(datetime(1993, 0o1, 0o7, 13, 30, 00))
b = datetime(1993, 6, 22, 13, 30)
a = Series([a])
- result = com._possibly_cast_to_timedelta(np.abs(a - b))
+ result = _possibly_cast_to_timedelta(np.abs(a - b))
self.assert_(result.dtype == 'timedelta64[ns]')
def test_datetime64_with_index(self):
diff --git a/pandas/tseries/api.py b/pandas/tseries/api.py
index ead5a17c4fab1..c2cc3723802fc 100644
--- a/pandas/tseries/api.py
+++ b/pandas/tseries/api.py
@@ -7,5 +7,6 @@
from pandas.tseries.frequencies import infer_freq
from pandas.tseries.period import Period, PeriodIndex, period_range, pnow
from pandas.tseries.resample import TimeGrouper
+from pandas.tseries.timedeltas import to_timedelta
from pandas.lib import NaT
import pandas.tseries.offsets as offsets
diff --git a/pandas/tseries/index.py b/pandas/tseries/index.py
index e91cad62e7dce..1572ca481d8a4 100644
--- a/pandas/tseries/index.py
+++ b/pandas/tseries/index.py
@@ -7,8 +7,7 @@
import numpy as np
from pandas.core.common import (isnull, _NS_DTYPE, _INT64_DTYPE,
- is_list_like,_possibly_cast_to_timedelta,
- _values_from_object, _maybe_box)
+ is_list_like,_values_from_object, _maybe_box)
from pandas.core.index import Index, Int64Index
import pandas.compat as compat
from pandas.compat import u
diff --git a/pandas/tseries/tests/test_timedeltas.py b/pandas/tseries/tests/test_timedeltas.py
new file mode 100644
index 0000000000000..551507039112b
--- /dev/null
+++ b/pandas/tseries/tests/test_timedeltas.py
@@ -0,0 +1,168 @@
+# pylint: disable-msg=E1101,W0612
+
+from datetime import datetime, timedelta
+import nose
+import unittest
+
+import numpy as np
+import pandas as pd
+
+from pandas import (Index, Series, DataFrame, isnull, notnull,
+ bdate_range, date_range, _np_version_under1p7)
+import pandas.core.common as com
+from pandas.compat import StringIO, lrange, range, zip, u, OrderedDict, long
+from pandas import compat, to_timedelta, tslib
+from pandas.tseries.timedeltas import _coerce_scalar_to_timedelta_type as ct
+from pandas.util.testing import (assert_series_equal,
+ assert_frame_equal,
+ assert_almost_equal,
+ ensure_clean)
+import pandas.util.testing as tm
+
+def _skip_if_numpy_not_friendly():
+ # not friendly for < 1.7
+ if _np_version_under1p7:
+ raise nose.SkipTest("numpy < 1.7")
+
+class TestTimedeltas(unittest.TestCase):
+ _multiprocess_can_split_ = True
+
+ def setUp(self):
+ pass
+
+ def test_numeric_conversions(self):
+ _skip_if_numpy_not_friendly()
+
+ self.assert_(ct(0) == np.timedelta64(0,'ns'))
+ self.assert_(ct(10) == np.timedelta64(10,'ns'))
+ self.assert_(ct(10,unit='ns') == np.timedelta64(10,'ns').astype('m8[ns]'))
+
+ self.assert_(ct(10,unit='us') == np.timedelta64(10,'us').astype('m8[ns]'))
+ self.assert_(ct(10,unit='ms') == np.timedelta64(10,'ms').astype('m8[ns]'))
+ self.assert_(ct(10,unit='s') == np.timedelta64(10,'s').astype('m8[ns]'))
+ self.assert_(ct(10,unit='d') == np.timedelta64(10,'D').astype('m8[ns]'))
+
+ def test_timedelta_conversions(self):
+ _skip_if_numpy_not_friendly()
+
+ self.assert_(ct(timedelta(seconds=1)) == np.timedelta64(1,'s').astype('m8[ns]'))
+ self.assert_(ct(timedelta(microseconds=1)) == np.timedelta64(1,'us').astype('m8[ns]'))
+ self.assert_(ct(timedelta(days=1)) == np.timedelta64(1,'D').astype('m8[ns]'))
+
+ def test_short_format_converters(self):
+ _skip_if_numpy_not_friendly()
+
+ def conv(v):
+ return v.astype('m8[ns]')
+
+ self.assert_(ct('10') == np.timedelta64(10,'ns'))
+ self.assert_(ct('10ns') == np.timedelta64(10,'ns'))
+ self.assert_(ct('100') == np.timedelta64(100,'ns'))
+ self.assert_(ct('100ns') == np.timedelta64(100,'ns'))
+
+ self.assert_(ct('1000') == np.timedelta64(1000,'ns'))
+ self.assert_(ct('1000ns') == np.timedelta64(1000,'ns'))
+ self.assert_(ct('1000NS') == np.timedelta64(1000,'ns'))
+
+ self.assert_(ct('10us') == np.timedelta64(10000,'ns'))
+ self.assert_(ct('100us') == np.timedelta64(100000,'ns'))
+ self.assert_(ct('1000us') == np.timedelta64(1000000,'ns'))
+ self.assert_(ct('1000Us') == np.timedelta64(1000000,'ns'))
+ self.assert_(ct('1000uS') == np.timedelta64(1000000,'ns'))
+
+ self.assert_(ct('1ms') == np.timedelta64(1000000,'ns'))
+ self.assert_(ct('10ms') == np.timedelta64(10000000,'ns'))
+ self.assert_(ct('100ms') == np.timedelta64(100000000,'ns'))
+ self.assert_(ct('1000ms') == np.timedelta64(1000000000,'ns'))
+
+ self.assert_(ct('-1s') == -np.timedelta64(1000000000,'ns'))
+ self.assert_(ct('1s') == np.timedelta64(1000000000,'ns'))
+ self.assert_(ct('10s') == np.timedelta64(10000000000,'ns'))
+ self.assert_(ct('100s') == np.timedelta64(100000000000,'ns'))
+ self.assert_(ct('1000s') == np.timedelta64(1000000000000,'ns'))
+
+ self.assert_(ct('1d') == conv(np.timedelta64(1,'D')))
+ self.assert_(ct('-1d') == -conv(np.timedelta64(1,'D')))
+ self.assert_(ct('1D') == conv(np.timedelta64(1,'D')))
+ self.assert_(ct('10D') == conv(np.timedelta64(10,'D')))
+ self.assert_(ct('100D') == conv(np.timedelta64(100,'D')))
+ self.assert_(ct('1000D') == conv(np.timedelta64(1000,'D')))
+ self.assert_(ct('10000D') == conv(np.timedelta64(10000,'D')))
+
+ # space
+ self.assert_(ct(' 10000D ') == conv(np.timedelta64(10000,'D')))
+ self.assert_(ct(' - 10000D ') == -conv(np.timedelta64(10000,'D')))
+
+ # invalid
+ self.assertRaises(ValueError, ct, '1foo')
+ self.assertRaises(ValueError, ct, 'foo')
+
+ def test_full_format_converters(self):
+ _skip_if_numpy_not_friendly()
+
+ def conv(v):
+ return v.astype('m8[ns]')
+ d1 = np.timedelta64(1,'D')
+
+ self.assert_(ct('1days') == conv(d1))
+ self.assert_(ct('1days,') == conv(d1))
+ self.assert_(ct('- 1days,') == -conv(d1))
+
+ self.assert_(ct('00:00:01') == conv(np.timedelta64(1,'s')))
+ self.assert_(ct('06:00:01') == conv(np.timedelta64(6*3600+1,'s')))
+ self.assert_(ct('06:00:01.0') == conv(np.timedelta64(6*3600+1,'s')))
+ self.assert_(ct('06:00:01.01') == conv(np.timedelta64(1000*(6*3600+1)+10,'ms')))
+
+ self.assert_(ct('- 1days, 00:00:01') == -conv(d1+np.timedelta64(1,'s')))
+ self.assert_(ct('1days, 06:00:01') == conv(d1+np.timedelta64(6*3600+1,'s')))
+ self.assert_(ct('1days, 06:00:01.01') == conv(d1+np.timedelta64(1000*(6*3600+1)+10,'ms')))
+
+ # invalid
+ self.assertRaises(ValueError, ct, '- 1days, 00')
+
+ def test_nat_converters(self):
+ _skip_if_numpy_not_friendly()
+
+ self.assert_(to_timedelta('nat') == tslib.iNaT)
+ self.assert_(to_timedelta('nan') == tslib.iNaT)
+
+ def test_to_timedelta(self):
+ _skip_if_numpy_not_friendly()
+
+ def conv(v):
+ return v.astype('m8[ns]')
+ d1 = np.timedelta64(1,'D')
+
+ self.assert_(to_timedelta('1 days 06:05:01.00003') == conv(d1+np.timedelta64(6*3600+5*60+1,'s')+np.timedelta64(30,'us')))
+ self.assert_(to_timedelta('15.5us') == conv(np.timedelta64(15500,'ns')))
+
+ # empty string
+ result = to_timedelta('')
+ self.assert_(result == tslib.iNaT)
+
+ result = to_timedelta(['', ''])
+ self.assert_(isnull(result).all())
+
+ # pass thru
+ result = to_timedelta(np.array([np.timedelta64(1,'s')]))
+ expected = np.array([np.timedelta64(1,'s')])
+ tm.assert_almost_equal(result,expected)
+
+ # ints
+ result = np.timedelta64(0,'ns')
+ expected = to_timedelta(0)
+ self.assert_(result == expected)
+
+ # Series
+ expected = Series([timedelta(days=1), timedelta(days=1, seconds=1)])
+ result = to_timedelta(Series(['1d','1days 00:00:01']))
+ tm.assert_series_equal(result, expected)
+
+ # with units
+ result = Series([ np.timedelta64(0,'ns'), np.timedelta64(10,'s').astype('m8[ns]') ],dtype='m8[ns]')
+ expected = to_timedelta([0,10],unit='s')
+ tm.assert_series_equal(result, expected)
+
+if __name__ == '__main__':
+ nose.runmodule(argv=[__file__, '-vvs', '-x', '--pdb', '--pdb-failure'],
+ exit=False)
diff --git a/pandas/tseries/timedeltas.py b/pandas/tseries/timedeltas.py
new file mode 100644
index 0000000000000..4d8633546e017
--- /dev/null
+++ b/pandas/tseries/timedeltas.py
@@ -0,0 +1,226 @@
+"""
+timedelta support tools
+"""
+
+import re
+from datetime import timedelta
+
+import numpy as np
+import pandas.tslib as tslib
+from pandas import compat, _np_version_under1p7
+from pandas.core.common import (ABCSeries, is_integer, is_timedelta64_dtype,
+ _values_from_object, is_list_like)
+
+repr_timedelta = tslib.repr_timedelta64
+repr_timedelta64 = tslib.repr_timedelta64
+
+def to_timedelta(arg, box=True, unit='ns'):
+ """
+ Convert argument to timedelta
+
+ Parameters
+ ----------
+ arg : string, timedelta, array of strings (with possible NAs)
+    box : boolean, default True
+        If True, return a Series of the results; if False, return an ndarray of values
+    unit : unit of the arg (D, s, ms, us, ns), used when arg is an integer/float number
+
+ Returns
+ -------
+ ret : timedelta64/arrays of timedelta64 if parsing succeeded
+ """
+ if _np_version_under1p7:
+        raise ValueError("to_timedelta is not supported for numpy < 1.7")
+
+ def _convert_listlike(arg, box):
+
+ if isinstance(arg, (list,tuple)):
+ arg = np.array(arg, dtype='O')
+
+ if is_timedelta64_dtype(arg):
+ if box:
+ from pandas import Series
+ return Series(arg,dtype='m8[ns]')
+ return arg
+
+ value = np.array([ _coerce_scalar_to_timedelta_type(r, unit=unit) for r in arg ])
+ if box:
+ from pandas import Series
+ value = Series(value,dtype='m8[ns]')
+ return value
+
+ if arg is None:
+ return arg
+ elif isinstance(arg, ABCSeries):
+ from pandas import Series
+ values = _convert_listlike(arg.values, box=False)
+ return Series(values, index=arg.index, name=arg.name, dtype='m8[ns]')
+ elif is_list_like(arg):
+ return _convert_listlike(arg, box=box)
+
+ return _convert_listlike([ arg ], box=False)[0]
+
+_short_search = re.compile(
+    r"^\s*(?P<neg>-?)\s*(?P<value>\d*\.?\d*)\s*(?P<unit>d|s|ms|us|ns)?\s*$", re.IGNORECASE)
+_full_search = re.compile(
+    r"^\s*(?P<neg>-?)\s*(?P<days>\d+)?\s*(days|d)?,?\s*(?P<time>\d{2}:\d{2}:\d{2})?(?P<frac>\.\d+)?\s*$", re.IGNORECASE)
+_nat_search = re.compile(
+    r"^\s*(nat|nan)\s*$", re.IGNORECASE)
+_whitespace = re.compile(r'^\s*$')
+
+def _coerce_scalar_to_timedelta_type(r, unit='ns'):
+ # kludgy here until we have a timedelta scalar
+ # handle the numpy < 1.7 case
+
+ def conv(v):
+ if _np_version_under1p7:
+ return timedelta(microseconds=v/1000.0)
+ return np.timedelta64(v)
+
+ if isinstance(r, compat.string_types):
+ converter = _get_string_converter(r, unit=unit)
+ r = converter()
+ r = conv(r)
+ elif r == tslib.iNaT:
+ return r
+ elif isinstance(r, np.timedelta64):
+ r = r.astype("m8[{0}]".format(unit.lower()))
+ elif is_integer(r):
+ r = tslib.cast_from_unit(r, unit)
+ r = conv(r)
+
+ if _np_version_under1p7:
+ if not isinstance(r, timedelta):
+ raise AssertionError("Invalid type for timedelta scalar: %s" % type(r))
+ if compat.PY3:
+ # convert to microseconds in timedelta64
+ r = np.timedelta64(int(r.total_seconds()*1e9 + r.microseconds*1000))
+ else:
+ return r
+
+ if isinstance(r, timedelta):
+ r = np.timedelta64(r)
+ elif not isinstance(r, np.timedelta64):
+ raise AssertionError("Invalid type for timedelta scalar: %s" % type(r))
+ return r.astype('timedelta64[ns]')
+
+def _get_string_converter(r, unit='ns'):
+ """ return a string converter for r to process the timedelta format """
+
+ # treat as a nan
+ if _whitespace.search(r):
+ def convert(r=None, unit=None):
+ return tslib.iNaT
+ return convert
+
+ m = _short_search.search(r)
+ if m:
+ def convert(r=None, unit=unit, m=m):
+ if r is not None:
+ m = _short_search.search(r)
+
+ gd = m.groupdict()
+
+ r = float(gd['value'])
+ u = gd.get('unit')
+ if u is not None:
+ unit = u.lower()
+ if gd['neg']:
+ r *= -1
+ return tslib.cast_from_unit(r, unit)
+ return convert
+
+ m = _full_search.search(r)
+ if m:
+ def convert(r=None, unit=None, m=m):
+ if r is not None:
+ m = _full_search.search(r)
+
+ gd = m.groupdict()
+
+ # convert to seconds
+ value = float(gd['days'] or 0) * 86400
+
+ time = gd['time']
+ if time:
+ (hh,mm,ss) = time.split(':')
+ value += float(hh)*3600 + float(mm)*60 + float(ss)
+
+ frac = gd['frac']
+ if frac:
+ value += float(frac)
+
+ if gd['neg']:
+ value *= -1
+ return tslib.cast_from_unit(value, 's')
+ return convert
+
+ m = _nat_search.search(r)
+ if m:
+ def convert(r=None, unit=None, m=m):
+ return tslib.iNaT
+ return convert
+
+ # no converter
+ raise ValueError("cannot create timedelta string converter")
+
+def _possibly_cast_to_timedelta(value, coerce=True):
+    """ try to cast to timedelta64; if already timedelta-like, then make
+    sure that we are [ns] (as numpy 1.6.2 is very buggy in this regard);
+    don't force the conversion unless coerce is True
+
+    if coerce='compat', force a compatibility coercion (to timedeltas) if needed
+ """
+
+    # coercion compatibility
+ if coerce == 'compat' and _np_version_under1p7:
+
+ def convert(td, dtype):
+
+ # we have an array with a non-object dtype
+ if hasattr(td,'item'):
+ td = td.astype(np.int64).item()
+ if td == tslib.iNaT:
+ return td
+ if dtype == 'm8[us]':
+ td *= 1000
+ return td
+
+ if td == tslib.compat_NaT:
+ return tslib.iNaT
+
+ # convert td value to a nanosecond value
+ d = td.days
+ s = td.seconds
+ us = td.microseconds
+
+ if dtype == 'object' or dtype == 'm8[ns]':
+ td = 1000*us + (s + d * 24 * 3600) * 10 ** 9
+ else:
+ raise ValueError("invalid conversion of dtype in np < 1.7 [%s]" % dtype)
+
+ return td
+
+ # < 1.7 coercion
+ if not is_list_like(value):
+ value = np.array([ value ])
+
+ dtype = value.dtype
+ return np.array([ convert(v,dtype) for v in value ], dtype='m8[ns]')
+
+ # deal with numpy not being able to handle certain timedelta operations
+ if isinstance(value, (ABCSeries, np.ndarray)) and value.dtype.kind == 'm':
+ if value.dtype != 'timedelta64[ns]':
+ value = value.astype('timedelta64[ns]')
+ return value
+
+ # we don't have a timedelta, but we want to try to convert to one (but
+ # don't force it)
+ if coerce:
+ new_value = tslib.array_to_timedelta64(
+ _values_from_object(value).astype(object), coerce=False)
+ if new_value.dtype == 'i8':
+ value = np.array(new_value, dtype='timedelta64[ns]')
+
+ return value
+
diff --git a/pandas/tslib.pyx b/pandas/tslib.pyx
index 983d3385e8f85..fd97512b0528b 100644
--- a/pandas/tslib.pyx
+++ b/pandas/tslib.pyx
@@ -665,14 +665,14 @@ cdef convert_to_tsobject(object ts, object tz, object unit):
if ts == NPY_NAT:
obj.value = NPY_NAT
else:
- ts = ts * cast_from_unit(unit,None)
+ ts = ts * cast_from_unit(None,unit)
obj.value = ts
pandas_datetime_to_datetimestruct(ts, PANDAS_FR_ns, &obj.dts)
elif util.is_float_object(ts):
if ts != ts or ts == NPY_NAT:
obj.value = NPY_NAT
else:
- ts = cast_from_unit(unit,ts)
+ ts = cast_from_unit(ts,unit)
obj.value = ts
pandas_datetime_to_datetimestruct(ts, PANDAS_FR_ns, &obj.dts)
elif util.is_string_object(ts):
@@ -852,7 +852,7 @@ def array_to_datetime(ndarray[object] values, raise_=False, dayfirst=False,
pandas_datetimestruct dts
bint utc_convert = bool(utc)
_TSObject _ts
- int64_t m = cast_from_unit(unit,None)
+ int64_t m = cast_from_unit(None,unit)
try:
result = np.empty(n, dtype='M8[ns]')
@@ -892,7 +892,7 @@ def array_to_datetime(ndarray[object] values, raise_=False, dayfirst=False,
if val != val or val == iNaT:
iresult[i] = iNaT
else:
- iresult[i] = cast_from_unit(unit,val)
+ iresult[i] = cast_from_unit(val,unit)
else:
try:
if len(val) == 0:
@@ -1276,10 +1276,10 @@ cdef inline _get_datetime64_nanos(object val):
else:
return ival
-cdef inline int64_t cast_from_unit(object unit, object ts) except -1:
+cpdef inline int64_t cast_from_unit(object ts, object unit) except -1:
""" return a casting of the unit represented to nanoseconds
round the fractional part of a float to our precision, p """
- if unit == 'D':
+ if unit == 'D' or unit == 'd':
m = 1000000000L * 86400
p = 6
elif unit == 's':
@@ -1303,7 +1303,9 @@ cdef inline int64_t cast_from_unit(object unit, object ts) except -1:
# to avoid precision issues from float -> int
base = <int64_t> ts
frac = ts-base
- return <int64_t> (base*m) + <int64_t> (round(frac,p)*m)
+ if p:
+ frac = round(frac,p)
+ return <int64_t> (base*m) + <int64_t> (frac*m)
def cast_to_nanoseconds(ndarray arr):
cdef:
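The `cast_from_unit` hunk above splits a float timestamp into its integer and fractional parts before scaling, rounding only the fraction, to avoid precision loss in the float-to-int conversion. A standalone Python sketch of that logic (the unit table and precision values here are assumptions based on the visible hunk, not the full Cython source):

```python
# Hypothetical sketch of the precision-preserving cast in cast_from_unit:
# scale a (possibly fractional) value in a given unit to integer nanoseconds,
# rounding the fractional part to the unit's precision `p` first.
def cast_from_unit(ts, unit):
    # m: nanoseconds per unit; p: decimal digits kept in the fractional part
    units = {
        'd': (1_000_000_000 * 86400, 6),
        's': (1_000_000_000, 9),
        'ms': (1_000_000, 6),
        'us': (1_000, 3),
        'ns': (1, 0),
    }
    m, p = units[unit.lower()]
    base = int(ts)          # integer part, scaled exactly
    frac = ts - base        # fractional part, rounded to precision p
    if p:
        frac = round(frac, p)
    return base * m + int(frac * m)

print(cast_from_unit(1.5, 's'))   # 1500000000
```

Splitting out `base` first means the large integer part never passes through a float multiplication, which is where the original single-expression version lost precision.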
| closes #3577
related #3009
- ENH: add top-level `to_timedelta` to convert string/integer to timedeltas
- TST: add pandas/tseries/tests/test_timedeltas.py
- API: add full timedelta parsing and conversion to np.timedelta64[ns]
- CLN: refactored locations of timedeltas to core/tseries/timedeltas (from a series of functions in core/common)
- ENH: support timedelta64[ns] as a serialization type in HDFStore for query and append (GH3577)
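The "full format" converter in the diff above parses strings like `"1 days 00:00:10.5"` into seconds via named regex groups. A rough standalone sketch of that parsing (the regex here is an assumption standing in for the actual `_full_search` pattern, which is not shown in this hunk):

```python
import re

# Rough sketch of the "full format" timedelta string parsing used by the
# converter above: "[-][D days[,]] hh:mm:ss[.frac]". The pattern is an
# illustrative assumption, not the real _full_search regex.
_FULL = re.compile(
    r"^\s*(?P<neg>-)?\s*(?:(?P<days>\d+)\s*days?,?\s*)?"
    r"(?P<time>\d{1,2}:\d{2}:\d{2})?(?P<frac>\.\d+)?\s*$"
)

def parse_timedelta_seconds(s):
    gd = _FULL.match(s).groupdict()
    # convert to seconds, mirroring the convert() closure above
    value = float(gd['days'] or 0) * 86400
    if gd['time']:
        hh, mm, ss = gd['time'].split(':')
        value += float(hh) * 3600 + float(mm) * 60 + float(ss)
    if gd['frac']:
        value += float(gd['frac'])
    if gd['neg']:
        value *= -1
    return value

print(parse_timedelta_seconds('1 days 00:00:10.5'))  # 86410.5
```

The real converter then hands the seconds value to `tslib.cast_from_unit(value, 's')` to get nanoseconds.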
| https://api.github.com/repos/pandas-dev/pandas/pulls/4822 | 2013-09-12T00:05:21Z | 2013-09-12T16:05:09Z | 2013-09-12T16:05:09Z | 2014-07-02T22:36:46Z |
ENH: Allow abs to work with PandasObjects | diff --git a/pandas/core/generic.py b/pandas/core/generic.py
index 8eb6c858c0b29..212e2bad563b6 100644
--- a/pandas/core/generic.py
+++ b/pandas/core/generic.py
@@ -560,6 +560,9 @@ def __nonzero__(self):
__bool__ = __nonzero__
+ def __abs__(self):
+ return self.abs()
+
#----------------------------------------------------------------------
# Array Interface
diff --git a/pandas/tests/test_frame.py b/pandas/tests/test_frame.py
index f9756858b5d85..723810a19d140 100644
--- a/pandas/tests/test_frame.py
+++ b/pandas/tests/test_frame.py
@@ -3234,9 +3234,11 @@ def test_operators_timedelta64(self):
# abs
result = diffs.abs()
+ result2 = abs(diffs)
expected = DataFrame(dict(A = df['A']-df['C'],
B = df['B']-df['A']))
assert_frame_equal(result,expected)
+ assert_frame_equal(result2, expected)
# mixed frame
mixed = diffs.copy()
diff --git a/pandas/tests/test_panel.py b/pandas/tests/test_panel.py
index 938025c450258..fc86a78ea684b 100644
--- a/pandas/tests/test_panel.py
+++ b/pandas/tests/test_panel.py
@@ -355,18 +355,24 @@ def test_get_value(self):
def test_abs(self):
result = self.panel.abs()
+ result2 = abs(self.panel)
expected = np.abs(self.panel)
self.assert_panel_equal(result, expected)
+ self.assert_panel_equal(result2, expected)
df = self.panel['ItemA']
result = df.abs()
+ result2 = abs(df)
expected = np.abs(df)
assert_frame_equal(result, expected)
+ assert_frame_equal(result2, expected)
s = df['A']
result = s.abs()
+ result2 = abs(s)
expected = np.abs(s)
assert_series_equal(result, expected)
+ assert_series_equal(result2, expected)
class CheckIndexing(object):
| Add `__abs__` method so that you can use the top-level `abs()` with
PandasObjects. I note there aren't that many existing test cases for
abs. At some point we might want to think about that.
Also related #4819
cc @thisch
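The whole change is a three-line hook: Python's builtin `abs()` looks for `__abs__`, which simply delegates to the existing `.abs()` method. The same pattern in miniature, on a toy container rather than the actual pandas classes:

```python
# Illustrative only: the delegation pattern this PR adds to NDFrame,
# shown on a toy container instead of a real pandas object.
class Wrapper:
    def __init__(self, values):
        self.values = list(values)

    def abs(self):
        # the "real" method, analogous to NDFrame.abs()
        return Wrapper(abs(v) for v in self.values)

    def __abs__(self):
        # hook the abs() builtin into the existing method
        return self.abs()

w = abs(Wrapper([-1, 2, -3]))
print(w.values)  # [1, 2, 3]
```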
| https://api.github.com/repos/pandas-dev/pandas/pulls/4821 | 2013-09-11T23:38:26Z | 2013-09-11T23:49:00Z | 2013-09-11T23:49:00Z | 2014-06-25T20:28:51Z |
API: Complex compat for Series with ndarray. (GH4819) | diff --git a/doc/source/cookbook.rst b/doc/source/cookbook.rst
index 0b0023f533705..5dcb6c20be69d 100644
--- a/doc/source/cookbook.rst
+++ b/doc/source/cookbook.rst
@@ -116,7 +116,7 @@ panelnd
The :ref:`panelnd<dsintro.panelnd>` docs.
`Construct a 5D panelnd
-http://stackoverflow.com/questions/18748598/why-my-panelnd-factory-throwing-a-keyerror`__
+<http://stackoverflow.com/questions/18748598/why-my-panelnd-factory-throwing-a-keyerror>`__
.. _cookbook.missing_data:
diff --git a/doc/source/release.rst b/doc/source/release.rst
index fd724af9917d2..087d2880511d2 100644
--- a/doc/source/release.rst
+++ b/doc/source/release.rst
@@ -265,6 +265,7 @@ See :ref:`Internal Refactoring<whatsnew_0130.refactoring>`
- Refactor of ``_get_numeric_data/_get_bool_data`` to core/generic.py, allowing Series/Panel functionaility
- Refactor of Series arithmetic with time-like objects (datetime/timedelta/time
etc.) into a separate, cleaned up wrapper class. (:issue:`4613`)
+- Complex compat for ``Series`` with ``ndarray``. (:issue:`4819`)
Experimental Features
~~~~~~~~~~~~~~~~~~~~~
diff --git a/pandas/core/series.py b/pandas/core/series.py
index 32f6765b5d84d..577e0a8b57930 100644
--- a/pandas/core/series.py
+++ b/pandas/core/series.py
@@ -35,6 +35,7 @@
from pandas import compat
from pandas.util.terminal import get_terminal_size
from pandas.compat import zip, lzip, u, OrderedDict
+from pandas.util import rwproperty
import pandas.core.array as pa
@@ -793,6 +794,23 @@ def __array_wrap__(self, result):
def __contains__(self, key):
return key in self.index
+ # complex
+ @rwproperty.getproperty
+ def real(self):
+ return self.values.real
+
+ @rwproperty.setproperty
+ def real(self, v):
+ self.values.real = v
+
+ @rwproperty.getproperty
+ def imag(self):
+ return self.values.imag
+
+ @rwproperty.setproperty
+ def imag(self, v):
+ self.values.imag = v
+
# coercion
__float__ = _coerce_method(float)
__long__ = _coerce_method(int)
diff --git a/pandas/tests/test_series.py b/pandas/tests/test_series.py
index 7f8fa1019261f..1f008354756bc 100644
--- a/pandas/tests/test_series.py
+++ b/pandas/tests/test_series.py
@@ -2814,6 +2814,19 @@ def f(x):
expected = Series(1,index=range(10),dtype='float64')
#assert_series_equal(result,expected)
+ def test_complex(self):
+
+ # GH4819
+ # complex access for ndarray compat
+ a = np.arange(5)
+ b = Series(a + 4j*a)
+ tm.assert_almost_equal(a,b.real)
+ tm.assert_almost_equal(4*a,b.imag)
+
+ b.real = np.arange(5)+5
+ tm.assert_almost_equal(a+5,b.real)
+ tm.assert_almost_equal(4*a,b.imag)
+
def test_underlying_data_conversion(self):
# GH 4080
| closes #4819
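The `rwproperty`-based accessors in the diff just forward `.real`/`.imag` reads and writes to the underlying ndarray. A sketch of the same forwarding with a plain `property` (assuming numpy is available; `SeriesLike` is a stand-in, not pandas code):

```python
import numpy as np

# Sketch of the real/imag forwarding the PR adds, using a plain property
# instead of rwproperty; `values` stands in for Series.values.
class SeriesLike:
    def __init__(self, values):
        self.values = np.asarray(values)

    @property
    def real(self):
        return self.values.real

    @real.setter
    def real(self, v):
        self.values.real = v

    @property
    def imag(self):
        return self.values.imag

    @imag.setter
    def imag(self, v):
        self.values.imag = v

a = np.arange(5)
s = SeriesLike(a + 4j * a)
print(s.real)     # the real parts: 0..4
s.real = a + 5    # in-place write-through to the underlying array
print(s.imag)     # imag parts unchanged: 0, 4, 8, 12, 16
```

Because the setters mutate `self.values` directly, assignment writes through to the backing array, matching the ndarray semantics the issue asked for.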
| https://api.github.com/repos/pandas-dev/pandas/pulls/4820 | 2013-09-11T21:06:26Z | 2013-09-11T21:23:29Z | 2013-09-11T21:23:29Z | 2014-06-23T07:11:10Z |
TST: tests for GH4812 (already fixed in master) | diff --git a/pandas/tseries/tests/test_resample.py b/pandas/tseries/tests/test_resample.py
index 1c6c4eae8d279..6ad69a466ba03 100644
--- a/pandas/tseries/tests/test_resample.py
+++ b/pandas/tseries/tests/test_resample.py
@@ -284,6 +284,17 @@ def test_resample_ohlc_dataframe(self):
# dupe columns fail atm
# df.columns = ['PRICE', 'PRICE']
+ def test_resample_dup_index(self):
+
+ # GH 4812
+ # dup columns with resample raising
+ df = DataFrame(np.random.randn(4,12),index=[2000,2000,2000,2000],columns=[ Period(year=2000,month=i+1,freq='M') for i in range(12) ])
+ df.iloc[3,:] = np.nan
+ result = df.resample('Q',axis=1)
+ expected = df.groupby(lambda x: int((x.month-1)/3),axis=1).mean()
+ expected.columns = [ Period(year=2000,quarter=i+1,freq='Q') for i in range(4) ]
+ assert_frame_equal(result, expected)
+
def test_resample_reresample(self):
dti = DatetimeIndex(
start=datetime(2005, 1, 1), end=datetime(2005, 1, 10),
| closes #4812
was already fixed!
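The test's expected result buckets the twelve monthly columns into quarters with the groupby key `int((x.month - 1) / 3)`. That mapping in isolation (0-based bucket; the quarter number is bucket + 1):

```python
# The month -> quarter-bucket mapping used as the test's groupby key.
def quarter_bucket(month):
    return (month - 1) // 3

print([quarter_bucket(m) for m in range(1, 13)])
# [0, 0, 0, 1, 1, 1, 2, 2, 2, 3, 3, 3]
```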
| https://api.github.com/repos/pandas-dev/pandas/pulls/4813 | 2013-09-11T17:19:26Z | 2013-09-13T00:08:05Z | 2013-09-13T00:08:05Z | 2014-06-19T21:26:34Z |
CLN: removed need for Coordinates in HDFStore tables / added doc section | diff --git a/doc/source/io.rst b/doc/source/io.rst
index 3a284062a2ec9..1d3980e216587 100644
--- a/doc/source/io.rst
+++ b/doc/source/io.rst
@@ -2118,6 +2118,22 @@ These do not currently accept the ``where`` selector (coming soon)
store.select_column('df_dc', 'index')
store.select_column('df_dc', 'string')
+.. _io.hdf5-selecting_coordinates:
+
+**Selecting coordinates**
+
+Sometimes you want to get the coordinates (a.k.a., the index locations) of your query. This returns an
+``Int64Index`` of the resulting locations. These coordinates can also be passed to subsequent
+``where`` operations.
+
+.. ipython:: python
+
+ df_coord = DataFrame(np.random.randn(1000,2),index=date_range('20000101',periods=1000))
+ store.append('df_coord',df_coord)
+ c = store.select_as_coordinates('df_coord','index>20020101')
+ c.summary()
+ store.select('df_coord',where=c)
+
.. _io.hdf5-where_mask:
**Selecting using a where mask**
diff --git a/doc/source/release.rst b/doc/source/release.rst
index a23aa2fcebc12..ce9c8d6319c46 100644
--- a/doc/source/release.rst
+++ b/doc/source/release.rst
@@ -155,6 +155,7 @@ API Changes
the ``Storer`` format has been renamed to ``Fixed``
- a column multi-index will be recreated properly (:issue:`4710`); raise on trying to use a multi-index
with data_columns on the same axis
+ - ``select_as_coordinates`` will now return an ``Int64Index`` of the resultant selection set
- ``JSON``
- added ``date_unit`` parameter to specify resolution of timestamps. Options
diff --git a/doc/source/v0.13.0.txt b/doc/source/v0.13.0.txt
index 3431d82219856..caf218747bdfb 100644
--- a/doc/source/v0.13.0.txt
+++ b/doc/source/v0.13.0.txt
@@ -76,6 +76,8 @@ API changes
duplicate rows from a table (:issue:`4367`)
- removed the ``warn`` argument from ``open``. Instead a ``PossibleDataLossError`` exception will
be raised if you try to use ``mode='w'`` with an OPEN file handle (:issue:`4367`)
+ - ``select_as_coordinates`` will now return an ``Int64Index`` of the resultant selection set
+ See :ref:`here<io.hdf5-selecting_coordinates>` for an example.
- allow a passed locations array or mask as a ``where`` condition (:issue:`4467`).
See :ref:`here<io.hdf5-where_mask>` for an example.
diff --git a/pandas/io/pytables.py b/pandas/io/pytables.py
index d445ce8b797b5..6759e07ed7935 100644
--- a/pandas/io/pytables.py
+++ b/pandas/io/pytables.py
@@ -565,7 +565,7 @@ def func(_start, _stop):
def select_as_coordinates(self, key, where=None, start=None, stop=None, **kwargs):
"""
- return the selection as a Coordinates.
+ return the selection as an Index
Parameters
----------
@@ -3071,7 +3071,7 @@ def read_coordinates(self, where=None, start=None, stop=None, **kwargs):
# create the selection
self.selection = Selection(
self, where=where, start=start, stop=stop, **kwargs)
- return Coordinates(self.selection.select_coords(), group=self.group, where=where)
+ return Index(self.selection.select_coords())
def read_column(self, column, where=None, **kwargs):
""" return a single column from the table, generally only indexables are interesting """
@@ -4106,28 +4106,6 @@ def tostring(self, encoding):
return self.converted
-class Coordinates(object):
-
- """ holds a returned coordinates list, useful to select the same rows from different tables
-
- coordinates : holds the array of coordinates
- group : the source group
- where : the source where
- """
-
- def __init__(self, values, group, where, **kwargs):
- self.values = values
- self.group = group
- self.where = where
-
- def __len__(self):
- return len(self.values)
-
- def __getitem__(self, key):
- """ return a new coordinates object, sliced by the key """
- return Coordinates(self.values[key], self.group, self.where)
-
-
class Selection(object):
"""
@@ -4151,11 +4129,7 @@ def __init__(self, table, where=None, start=None, stop=None, **kwargs):
self.terms = None
self.coordinates = None
- # a coordinate
- if isinstance(where, Coordinates):
- self.coordinates = where.values
-
- elif com.is_list_like(where):
+ if com.is_list_like(where):
# see if we have a passed coordinate like
try:
diff --git a/pandas/io/tests/test_pytables.py b/pandas/io/tests/test_pytables.py
index 48a2150758a3f..7e5c3f9fff061 100644
--- a/pandas/io/tests/test_pytables.py
+++ b/pandas/io/tests/test_pytables.py
@@ -2879,6 +2879,7 @@ def test_coordinates(self):
result = store.select('df', where=c)
expected = df.ix[3:4, :]
tm.assert_frame_equal(result, expected)
+ self.assert_(isinstance(c, Index))
# multiple tables
_maybe_remove(store, 'df1')
| CLN: removed the need for Coordinates; instead return an Index of the coordinates
DOC: added section on select_as_coordinates
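Returning a plain `Int64Index` of positions instead of a `Coordinates` wrapper means a where-style selection reduces to integer locations that can be reused against other tables. The reduction itself is just a mask-to-positions step, e.g. via numpy (a sketch of the idea, not the pytables code path):

```python
import numpy as np

# Sketch: how a where-mask reduces to integer coordinates (row locations),
# analogous to what select_as_coordinates now returns as an Int64Index.
mask = np.array([False, False, False, True, True, False])
coords = np.nonzero(mask)[0]
print(coords)  # [3 4]

# the same positions can then be used to re-select rows elsewhere
data = np.arange(6) * 10
print(data[coords])  # [30 40]
```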
| https://api.github.com/repos/pandas-dev/pandas/pulls/4809 | 2013-09-11T00:14:48Z | 2013-09-11T00:25:20Z | 2013-09-11T00:25:20Z | 2014-07-16T08:27:38Z |
BUG: Fixed an issue with a duplicate index and assignment with a dtype change (GH4686) | diff --git a/doc/source/release.rst b/doc/source/release.rst
index a23aa2fcebc12..121ff505aa23c 100644
--- a/doc/source/release.rst
+++ b/doc/source/release.rst
@@ -376,6 +376,7 @@ Bug Fixes
- Fix bugs in indexing in a Series with a duplicate index (:issue:`4548`, :issue:`4550`)
- Fixed bug with reading compressed files with ``read_fwf`` in Python 3.
(:issue:`3963`)
+ - Fixed an issue with a duplicate index and assignment with a dtype change (:issue:`4686`)
pandas 0.12.0
-------------
diff --git a/pandas/core/internals.py b/pandas/core/internals.py
index 91be4f42c17e4..e22202c65c140 100644
--- a/pandas/core/internals.py
+++ b/pandas/core/internals.py
@@ -1797,11 +1797,13 @@ def _reset_ref_locs(self):
def _rebuild_ref_locs(self):
""" take _ref_locs and set the individual block ref_locs, skipping Nones
no effect on a unique index """
- if self._ref_locs is not None:
+ if getattr(self,'_ref_locs',None) is not None:
item_count = 0
for v in self._ref_locs:
if v is not None:
block, item_loc = v
+ if block._ref_locs is None:
+ block.reset_ref_locs()
block._ref_locs[item_loc] = item_count
item_count += 1
@@ -2595,11 +2597,11 @@ def _set_item(item, arr):
self.delete(item)
loc = _possibly_convert_to_indexer(loc)
- for i, (l, arr) in enumerate(zip(loc, value)):
+ for i, (l, k, arr) in enumerate(zip(loc, subset, value)):
# insert the item
self.insert(
- l, item, arr[None, :], allow_duplicates=True)
+ l, k, arr[None, :], allow_duplicates=True)
# reset the _ref_locs on indiviual blocks
# rebuild ref_locs
diff --git a/pandas/tests/test_indexing.py b/pandas/tests/test_indexing.py
index 18ee89fbc5c66..4b17dd5ffd9db 100644
--- a/pandas/tests/test_indexing.py
+++ b/pandas/tests/test_indexing.py
@@ -1292,6 +1292,24 @@ def test_astype_assignment_with_iloc(self):
result = df.get_dtype_counts().sort_index()
expected = Series({ 'int64' : 4, 'float64' : 1, 'object' : 2 }).sort_index()
+ def test_astype_assignment_with_dups(self):
+
+ # GH 4686
+ # assignment with dups that has a dtype change
+ df = DataFrame(
+ np.arange(3).reshape((1,3)),
+ columns=pd.MultiIndex.from_tuples(
+ [('A', '1'), ('B', '1'), ('A', '2')]
+ ),
+ dtype=object
+ )
+ index = df.index.copy()
+
+ df['A'] = df['A'].astype(np.float64)
+ result = df.get_dtype_counts().sort_index()
+ expected = Series({ 'float64' : 2, 'object' : 1 }).sort_index()
+ self.assert_(df.index.equals(index))
+
def test_dups_loc(self):
# GH4726
| closes #4686
| https://api.github.com/repos/pandas-dev/pandas/pulls/4806 | 2013-09-10T19:26:27Z | 2013-09-10T19:54:07Z | 2013-09-10T19:54:07Z | 2014-07-03T22:52:42Z |
CLN: move align out of series consolidating into core/generic.py | diff --git a/pandas/core/generic.py b/pandas/core/generic.py
index d5c265dcf93a0..d6daa467752a9 100644
--- a/pandas/core/generic.py
+++ b/pandas/core/generic.py
@@ -2202,7 +2202,7 @@ def last(self, offset):
return self.ix[start:]
def align(self, other, join='outer', axis=None, level=None, copy=True,
- fill_value=np.nan, method=None, limit=None, fill_axis=0):
+ fill_value=None, method=None, limit=None, fill_axis=0):
"""
Align two object on their axes with the
specified join method for each axis Index
@@ -2288,35 +2288,51 @@ def _align_series(self, other, join='outer', axis=None, level=None,
fill_axis=0):
from pandas import DataFrame
- fdata = self._data
- if axis == 0:
- join_index = self.index
- lidx, ridx = None, None
- if not self.index.equals(other.index):
- join_index, lidx, ridx = self.index.join(other.index, how=join,
- return_indexers=True)
+ # series/series compat
+ if isinstance(self, ABCSeries) and isinstance(other, ABCSeries):
+ if axis:
+ raise ValueError('cannot align series to a series other than axis 0')
- if lidx is not None:
- fdata = fdata.reindex_indexer(join_index, lidx, axis=1)
- elif axis == 1:
- join_index = self.columns
- lidx, ridx = None, None
- if not self.columns.equals(other.index):
- join_index, lidx, ridx = \
- self.columns.join(other.index, how=join,
- return_indexers=True)
+ join_index, lidx, ridx = self.index.join(other.index, how=join,
+ level=level,
+ return_indexers=True)
+
+ left_result = self._reindex_indexer(join_index, lidx, copy)
+ right_result = other._reindex_indexer(join_index, ridx, copy)
- if lidx is not None:
- fdata = fdata.reindex_indexer(join_index, lidx, axis=0)
else:
- raise ValueError('Must specify axis=0 or 1')
- if copy and fdata is self._data:
- fdata = fdata.copy()
+ # one has > 1 ndim
+ fdata = self._data
+ if axis == 0:
+ join_index = self.index
+ lidx, ridx = None, None
+ if not self.index.equals(other.index):
+ join_index, lidx, ridx = self.index.join(other.index, how=join,
+ return_indexers=True)
+
+ if lidx is not None:
+ fdata = fdata.reindex_indexer(join_index, lidx, axis=1)
+ elif axis == 1:
+ join_index = self.columns
+ lidx, ridx = None, None
+ if not self.columns.equals(other.index):
+ join_index, lidx, ridx = \
+ self.columns.join(other.index, how=join,
+ return_indexers=True)
+
+ if lidx is not None:
+ fdata = fdata.reindex_indexer(join_index, lidx, axis=0)
+ else:
+ raise ValueError('Must specify axis=0 or 1')
+
+ if copy and fdata is self._data:
+ fdata = fdata.copy()
- left_result = DataFrame(fdata)
- right_result = other if ridx is None else other.reindex(join_index)
+ left_result = DataFrame(fdata)
+ right_result = other if ridx is None else other.reindex(join_index)
+ # fill
fill_na = notnull(fill_value) or (method is not None)
if fill_na:
return (left_result.fillna(fill_value, method=method, limit=limit,
diff --git a/pandas/core/series.py b/pandas/core/series.py
index 3f93f210ad4cf..32f6765b5d84d 100644
--- a/pandas/core/series.py
+++ b/pandas/core/series.py
@@ -2667,45 +2667,6 @@ def apply(self, func, convert_dtype=True, args=(), **kwds):
else:
return self._constructor(mapped, index=self.index, name=self.name)
- def align(self, other, join='outer', axis=None, level=None, copy=True,
- fill_value=None, method=None, limit=None):
- """
- Align two Series object with the specified join method
-
- Parameters
- ----------
- other : Series
- join : {'outer', 'inner', 'left', 'right'}, default 'outer'
- axis : None, alignment axis (is 0 for Series)
- level : int or name
- Broadcast across a level, matching Index values on the
- passed MultiIndex level
- copy : boolean, default True
- Always return new objects. If copy=False and no reindexing is
- required, the same object will be returned (for better performance)
- fill_value : object, default None
- method : str, default 'pad'
- limit : int, default None
- fill_value, method, inplace, limit are passed to fillna
-
- Returns
- -------
- (left, right) : (Series, Series)
- Aligned Series
- """
- join_index, lidx, ridx = self.index.join(other.index, how=join,
- level=level,
- return_indexers=True)
-
- left = self._reindex_indexer(join_index, lidx, copy)
- right = other._reindex_indexer(join_index, ridx, copy)
- fill_na = (fill_value is not None) or (method is not None)
- if fill_na:
- return (left.fillna(fill_value, method=method, limit=limit),
- right.fillna(fill_value, method=method, limit=limit))
- else:
- return left, right
-
def _reindex_indexer(self, new_index, indexer, copy):
if indexer is None:
if copy:
diff --git a/pandas/sparse/frame.py b/pandas/sparse/frame.py
index d8f6d531a6983..7d2571e6c3c74 100644
--- a/pandas/sparse/frame.py
+++ b/pandas/sparse/frame.py
@@ -573,11 +573,14 @@ def _reindex_columns(self, columns, copy, level, fill_value, limit=None,
return SparseDataFrame(sdict, index=self.index, columns=columns,
default_fill_value=self._default_fill_value)
- def _reindex_with_indexers(self, reindexers, method=None, fill_value=np.nan, limit=None, copy=False):
+ def _reindex_with_indexers(self, reindexers, method=None, fill_value=None, limit=None, copy=False):
if method is not None or limit is not None:
raise NotImplementedError("cannot reindex with a method or limit with sparse")
+ if fill_value is None:
+ fill_value = np.nan
+
index, row_indexer = reindexers.get(0, (None, None))
columns, col_indexer = reindexers.get(1, (None, None))
| https://api.github.com/repos/pandas-dev/pandas/pulls/4800 | 2013-09-10T13:58:15Z | 2013-09-10T14:29:35Z | 2013-09-10T14:29:35Z | 2014-07-16T08:27:34Z | |
CLN: move clip to core/generic (adds to Panel as well), related to (GH2747) | diff --git a/doc/source/api.rst b/doc/source/api.rst
index 9cf10d3f0780d..538965d0be7ad 100644
--- a/doc/source/api.rst
+++ b/doc/source/api.rst
@@ -708,6 +708,9 @@ Computations / Descriptive Stats
:toctree: generated/
Panel.abs
+ Panel.clip
+ Panel.clip_lower
+ Panel.clip_upper
Panel.count
Panel.cummax
Panel.cummin
diff --git a/doc/source/release.rst b/doc/source/release.rst
index 4f02fdbbfe97a..a23aa2fcebc12 100644
--- a/doc/source/release.rst
+++ b/doc/source/release.rst
@@ -260,6 +260,7 @@ See :ref:`Internal Refactoring<whatsnew_0130.refactoring>`
- Refactor ``rename`` methods to core/generic.py; fixes ``Series.rename`` for (:issue:`4605`), and adds ``rename``
with the same signature for ``Panel``
- Series (for index) / Panel (for items) now as attribute access to its elements (:issue:`1903`)
+- Refactor ``clip`` methods to core/generic.py (:issue:`4798`)
- Refactor of ``_get_numeric_data/_get_bool_data`` to core/generic.py, allowing Series/Panel functionaility
- Refactor of Series arithmetic with time-like objects (datetime/timedelta/time
etc.) into a separate, cleaned up wrapper class. (:issue:`4613`)
diff --git a/doc/source/v0.13.0.txt b/doc/source/v0.13.0.txt
index a38ff2fa6d457..3431d82219856 100644
--- a/doc/source/v0.13.0.txt
+++ b/doc/source/v0.13.0.txt
@@ -353,8 +353,9 @@ and behaviors. Series formerly subclassed directly from ``ndarray``. (:issue:`40
- Refactor ``Series.reindex`` to core/generic.py (:issue:`4604`, :issue:`4618`), allow ``method=`` in reindexing
on a Series to work
- ``Series.copy`` no longer accepts the ``order`` parameter and is now consistent with ``NDFrame`` copy
-- Refactor ``rename`` methods to core/generic.py; fixes ``Series.rename`` for (:issue`4605`), and adds ``rename``
+- Refactor ``rename`` methods to core/generic.py; fixes ``Series.rename`` for (:issue:`4605`), and adds ``rename``
with the same signature for ``Panel``
+- Refactor ``clip`` methods to core/generic.py (:issue:`4798`)
- Refactor of ``_get_numeric_data/_get_bool_data`` to core/generic.py, allowing Series/Panel functionaility
- ``Series`` (for index) / ``Panel`` (for items) now allow attribute access to its elements (:issue:`1903`)
diff --git a/pandas/core/frame.py b/pandas/core/frame.py
index 71d7f826781df..2b0e18c0c5524 100644
--- a/pandas/core/frame.py
+++ b/pandas/core/frame.py
@@ -4396,47 +4396,6 @@ def f(arr):
data = self._get_numeric_data() if numeric_only else self
return data.apply(f, axis=axis)
-
- def clip(self, lower=None, upper=None):
- """
- Trim values at input threshold(s)
-
- Parameters
- ----------
- lower : float, default None
- upper : float, default None
-
- Returns
- -------
- clipped : DataFrame
- """
-
- # GH 2747 (arguments were reversed)
- if lower is not None and upper is not None:
- lower, upper = min(lower, upper), max(lower, upper)
-
- return self.apply(lambda x: x.clip(lower=lower, upper=upper))
-
- def clip_upper(self, threshold):
- """
- Trim values above threshold
-
- Returns
- -------
- clipped : DataFrame
- """
- return self.apply(lambda x: x.clip_upper(threshold))
-
- def clip_lower(self, threshold):
- """
- Trim values below threshold
-
- Returns
- -------
- clipped : DataFrame
- """
- return self.apply(lambda x: x.clip_lower(threshold))
-
def rank(self, axis=0, numeric_only=None, method='average',
na_option='keep', ascending=True):
"""
diff --git a/pandas/core/generic.py b/pandas/core/generic.py
index 2919790300bc3..d5c265dcf93a0 100644
--- a/pandas/core/generic.py
+++ b/pandas/core/generic.py
@@ -1920,6 +1920,68 @@ def f(x):
return obj
+ def clip(self, lower=None, upper=None, out=None):
+ """
+ Trim values at input threshold(s)
+
+ Parameters
+ ----------
+ lower : float, default None
+ upper : float, default None
+
+ Returns
+ -------
+ clipped : Series
+ """
+ if out is not None: # pragma: no cover
+ raise Exception('out argument is not supported yet')
+
+ # GH 2747 (arguments were reversed)
+ if lower is not None and upper is not None:
+ lower, upper = min(lower, upper), max(lower, upper)
+
+ result = self
+ if lower is not None:
+ result = result.clip_lower(lower)
+ if upper is not None:
+ result = result.clip_upper(upper)
+
+ return result
+
+ def clip_upper(self, threshold):
+ """
+ Return copy of input with values above given value truncated
+
+ See also
+ --------
+ clip
+
+ Returns
+ -------
+ clipped : same type as input
+ """
+ if isnull(threshold):
+ raise ValueError("Cannot use an NA value as a clip threshold")
+
+ return self.where((self <= threshold) | isnull(self), threshold)
+
+ def clip_lower(self, threshold):
+ """
+ Return copy of the input with values below given value truncated
+
+ See also
+ --------
+ clip
+
+ Returns
+ -------
+ clipped : same type as input
+ """
+ if isnull(threshold):
+ raise ValueError("Cannot use an NA value as a clip threshold")
+
+ return self.where((self >= threshold) | isnull(self), threshold)
+
def groupby(self, by=None, axis=0, level=None, as_index=True, sort=True,
group_keys=True, squeeze=False):
"""
diff --git a/pandas/core/series.py b/pandas/core/series.py
index ef8c630a7bde8..3f93f210ad4cf 100644
--- a/pandas/core/series.py
+++ b/pandas/core/series.py
@@ -2102,64 +2102,6 @@ def autocorr(self):
"""
return self.corr(self.shift(1))
- def clip(self, lower=None, upper=None, out=None):
- """
- Trim values at input threshold(s)
-
- Parameters
- ----------
- lower : float, default None
- upper : float, default None
-
- Returns
- -------
- clipped : Series
- """
- if out is not None: # pragma: no cover
- raise Exception('out argument is not supported yet')
-
- result = self
- if lower is not None:
- result = result.clip_lower(lower)
- if upper is not None:
- result = result.clip_upper(upper)
-
- return result
-
- def clip_upper(self, threshold):
- """
- Return copy of series with values above given value truncated
-
- See also
- --------
- clip
-
- Returns
- -------
- clipped : Series
- """
- if isnull(threshold):
- raise ValueError("Cannot use an NA value as a clip threshold")
-
- return self.where((self <= threshold) | isnull(self), threshold)
-
- def clip_lower(self, threshold):
- """
- Return copy of series with values below given value truncated
-
- See also
- --------
- clip
-
- Returns
- -------
- clipped : Series
- """
- if isnull(threshold):
- raise ValueError("Cannot use an NA value as a clip threshold")
-
- return self.where((self >= threshold) | isnull(self), threshold)
-
def dot(self, other):
"""
Matrix multiplication with DataFrame or inner-product with Series objects
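The moved `clip` composes `clip_lower` and `clip_upper`, each implemented as a where-mask that leaves NaNs untouched, and swaps reversed bounds first (the GH 2747 fix). A hedged pure-Python sketch of that composition, on plain lists rather than pandas objects:

```python
import math

# Sketch of the refactored clip: compose clip_lower/clip_upper,
# swap reversed bounds (GH 2747), and leave NaNs untouched.
def clip_lower(xs, threshold):
    return [x if (math.isnan(x) or x >= threshold) else threshold for x in xs]

def clip_upper(xs, threshold):
    return [x if (math.isnan(x) or x <= threshold) else threshold for x in xs]

def clip(xs, lower=None, upper=None):
    if lower is not None and upper is not None:
        # GH 2747: callers sometimes pass the bounds reversed
        lower, upper = min(lower, upper), max(lower, upper)
    if lower is not None:
        xs = clip_lower(xs, lower)
    if upper is not None:
        xs = clip_upper(xs, upper)
    return xs

print(clip([-2.0, 0.5, 3.0], lower=1.0, upper=-1.0))  # bounds swapped -> [-1.0, 0.5, 1.0]
```

In the real methods the list comprehension is a vectorized `self.where((self >= threshold) | isnull(self), threshold)`, which is why moving them to `generic.py` makes them work for Series, DataFrame, and Panel alike.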
| https://api.github.com/repos/pandas-dev/pandas/pulls/4798 | 2013-09-10T13:33:57Z | 2013-09-10T14:05:33Z | 2013-09-10T14:05:33Z | 2014-06-17T05:06:46Z | |
CLN: default for tupleize_cols is now False for both to_csv and read_csv. Fair warning in 0.12 (GH3604) | diff --git a/doc/source/io.rst b/doc/source/io.rst
index 67cbe35144461..3a284062a2ec9 100644
--- a/doc/source/io.rst
+++ b/doc/source/io.rst
@@ -153,7 +153,7 @@ They can take a number of arguments:
time and lower memory usage.
- ``mangle_dupe_cols``: boolean, default True, then duplicate columns will be specified
as 'X.0'...'X.N', rather than 'X'...'X'
- - ``tupleize_cols``: boolean, default True, if False, convert a list of tuples
+ - ``tupleize_cols``: boolean, default False, if False, convert a list of tuples
to a multi-index of columns, otherwise, leave the column index as a list of tuples
.. ipython:: python
@@ -860,19 +860,16 @@ Reading columns with a ``MultiIndex``
By specifying list of row locations for the ``header`` argument, you
can read in a ``MultiIndex`` for the columns. Specifying non-consecutive
-rows will skip the interveaning rows.
+rows will skip the intervening rows. In order to have the pre-0.13 behavior
+of tupleizing columns, specify ``tupleize_cols=True``.
.. ipython:: python
from pandas.util.testing import makeCustomDataframe as mkdf
df = mkdf(5,3,r_idx_nlevels=2,c_idx_nlevels=4)
- df.to_csv('mi.csv',tupleize_cols=False)
+ df.to_csv('mi.csv')
print open('mi.csv').read()
- pd.read_csv('mi.csv',header=[0,1,2,3],index_col=[0,1],tupleize_cols=False)
-
-Note: The default behavior in 0.12 remains unchanged (``tupleize_cols=True``) from prior versions,
-but starting with 0.13, the default *to* write and read multi-index columns will be in the new
-format (``tupleize_cols=False``)
+ pd.read_csv('mi.csv',header=[0,1,2,3],index_col=[0,1])
Note: If an ``index_col`` is not specified (e.g. you don't have an index, or wrote it
with ``df.to_csv(..., index=False``), then any ``names`` on the columns index will be *lost*.
@@ -966,7 +963,7 @@ function takes a number of arguments. Only the first is required.
- ``sep`` : Field delimiter for the output file (default ",")
- ``encoding``: a string representing the encoding to use if the contents are
non-ascii, for python versions prior to 3
- - ``tupleize_cols``: boolean, default True, if False, write as a list of tuples,
+ - ``tupleize_cols``: boolean, default False, if False, write as a list of tuples,
otherwise write in an expanded line format suitable for ``read_csv``
Writing a formatted string
diff --git a/doc/source/release.rst b/doc/source/release.rst
index de7aa675380b7..4f02fdbbfe97a 100644
--- a/doc/source/release.rst
+++ b/doc/source/release.rst
@@ -188,6 +188,7 @@ API Changes
a list can be passed to ``to_replace`` (:issue:`4743`).
- provide automatic dtype conversions on _reduce operations (:issue:`3371`)
- exclude non-numerics if mixed types with datelike in _reduce operations (:issue:`3371`)
+ - default for ``tupleize_cols`` is now ``False`` for both ``to_csv`` and ``read_csv``. Fair warning in 0.12 (:issue:`3604`)
Internal Refactoring
~~~~~~~~~~~~~~~~~~~~
diff --git a/pandas/core/format.py b/pandas/core/format.py
index 6b4dc979d5279..92fcfaa5f2f9c 100644
--- a/pandas/core/format.py
+++ b/pandas/core/format.py
@@ -787,7 +787,7 @@ def __init__(self, obj, path_or_buf, sep=",", na_rep='', float_format=None,
cols=None, header=True, index=True, index_label=None,
mode='w', nanRep=None, encoding=None, quoting=None,
line_terminator='\n', chunksize=None, engine=None,
- tupleize_cols=True, quotechar='"'):
+ tupleize_cols=False, quotechar='"'):
self.engine = engine # remove for 0.13
self.obj = obj
diff --git a/pandas/core/frame.py b/pandas/core/frame.py
index 52d3a15d8d184..71d7f826781df 100644
--- a/pandas/core/frame.py
+++ b/pandas/core/frame.py
@@ -1191,7 +1191,7 @@ def from_csv(cls, path, header=0, sep=',', index_col=0,
is used. Different default from read_table
parse_dates : boolean, default True
Parse dates. Different default from read_table
- tupleize_cols : boolean, default True
+ tupleize_cols : boolean, default False
write multi_index columns as a list of tuples (if True)
or new (expanded format) if False)
@@ -1208,7 +1208,7 @@ def from_csv(cls, path, header=0, sep=',', index_col=0,
from pandas.io.parsers import read_table
return read_table(path, header=header, sep=sep,
parse_dates=parse_dates, index_col=index_col,
- encoding=encoding, tupleize_cols=False)
+ encoding=encoding, tupleize_cols=tupleize_cols)
def to_sparse(self, fill_value=None, kind='block'):
"""
@@ -1291,7 +1291,7 @@ def to_csv(self, path_or_buf, sep=",", na_rep='', float_format=None,
cols=None, header=True, index=True, index_label=None,
mode='w', nanRep=None, encoding=None, quoting=None,
line_terminator='\n', chunksize=None,
- tupleize_cols=True, **kwds):
+ tupleize_cols=False, **kwds):
r"""Write DataFrame to a comma-separated values (csv) file
Parameters
@@ -1331,7 +1331,7 @@ def to_csv(self, path_or_buf, sep=",", na_rep='', float_format=None,
defaults to csv.QUOTE_MINIMAL
chunksize : int or None
rows to write at a time
- tupleize_cols : boolean, default True
+ tupleize_cols : boolean, default False
write multi_index columns as a list of tuples (if True)
or new (expanded format) if False)
"""
diff --git a/pandas/io/parsers.py b/pandas/io/parsers.py
index e1b09eb76415f..06940e3bb2b4c 100644
--- a/pandas/io/parsers.py
+++ b/pandas/io/parsers.py
@@ -247,7 +247,7 @@ def _read(filepath_or_buffer, kwds):
'squeeze': False,
'compression': None,
'mangle_dupe_cols': True,
- 'tupleize_cols':True,
+ 'tupleize_cols':False,
}
@@ -336,7 +336,7 @@ def parser_f(filepath_or_buffer,
encoding=None,
squeeze=False,
mangle_dupe_cols=True,
- tupleize_cols=True,
+ tupleize_cols=False,
):
# Alias sep -> delimiter.
@@ -656,7 +656,7 @@ def __init__(self, kwds):
self.na_fvalues = kwds.get('na_fvalues')
self.true_values = kwds.get('true_values')
self.false_values = kwds.get('false_values')
- self.tupleize_cols = kwds.get('tupleize_cols',True)
+ self.tupleize_cols = kwds.get('tupleize_cols',False)
self._date_conv = _make_date_converter(date_parser=self.date_parser,
dayfirst=self.dayfirst)
diff --git a/pandas/parser.pyx b/pandas/parser.pyx
index 8b90e76fa4bf3..b97929023adb6 100644
--- a/pandas/parser.pyx
+++ b/pandas/parser.pyx
@@ -310,7 +310,7 @@ cdef class TextReader:
skip_footer=0,
verbose=False,
mangle_dupe_cols=True,
- tupleize_cols=True):
+ tupleize_cols=False):
self.parser = parser_new()
self.parser.chunksize = tokenize_chunksize
| closes #3604
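As a rough illustration of the two header formats this default toggles between — a stdlib-only sketch, not pandas' actual writer, and the column tuples here are made up:

```python
import csv
import io

# Hypothetical MultiIndex-style columns, two levels each.
columns = [("a", "one"), ("a", "two"), ("b", "one")]

# Old default (tupleize_cols=True): one header row of stringified tuples.
buf = io.StringIO()
csv.writer(buf).writerow([str(c) for c in columns])
tupleized = buf.getvalue()

# New default (tupleize_cols=False): one header row per level
# (the "expanded" format suitable for read_csv with header=[0, 1]).
buf = io.StringIO()
writer = csv.writer(buf)
for level in zip(*columns):
    writer.writerow(level)
expanded = buf.getvalue()

print(tupleized)
print(expanded)
```

The expanded form is what the new default writes and reads back; the tupleized form is only produced when ``tupleize_cols=True`` is passed explicitly.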
| https://api.github.com/repos/pandas-dev/pandas/pulls/4797 | 2013-09-10T12:09:34Z | 2013-09-10T13:07:33Z | 2013-09-10T13:07:33Z | 2014-07-08T08:55:34Z |
BUG: pickle failing on FrozenList, when using MultiIndex (GH4788) | diff --git a/pandas/core/base.py b/pandas/core/base.py
index a57af06f24cc9..a2f7f04053b9f 100644
--- a/pandas/core/base.py
+++ b/pandas/core/base.py
@@ -108,6 +108,9 @@ def __mul__(self, other):
__imul__ = __mul__
+ def __reduce__(self):
+ return self.__class__, (list(self),)
+
def __hash__(self):
return hash(tuple(self))
diff --git a/pandas/core/index.py b/pandas/core/index.py
index 2b5f761026924..b561d7637c0c3 100644
--- a/pandas/core/index.py
+++ b/pandas/core/index.py
@@ -2109,7 +2109,7 @@ def __contains__(self, key):
def __reduce__(self):
"""Necessary for making this object picklable"""
object_state = list(np.ndarray.__reduce__(self))
- subclass_state = (self.levels, self.labels, self.sortorder, self.names)
+ subclass_state = (list(self.levels), list(self.labels), self.sortorder, list(self.names))
object_state[2] = (object_state[2], subclass_state)
return tuple(object_state)
diff --git a/pandas/io/api.py b/pandas/io/api.py
index 2c8f8d1c893e2..94deb51ab4b18 100644
--- a/pandas/io/api.py
+++ b/pandas/io/api.py
@@ -10,4 +10,4 @@
from pandas.io.html import read_html
from pandas.io.sql import read_sql
from pandas.io.stata import read_stata
-from pandas.io.pickle import read_pickle
+from pandas.io.pickle import read_pickle, to_pickle
diff --git a/pandas/io/pickle.py b/pandas/io/pickle.py
index af1b333312309..97633873e7b40 100644
--- a/pandas/io/pickle.py
+++ b/pandas/io/pickle.py
@@ -39,8 +39,7 @@ def try_read(path, encoding=None):
# the param
try:
with open(path,'rb') as fh:
- with open(path,'rb') as fh:
- return pc.load(fh, encoding=encoding, compat=False)
+ return pc.load(fh, encoding=encoding, compat=False)
except:
with open(path,'rb') as fh:
return pc.load(fh, encoding=encoding, compat=True)
diff --git a/pandas/io/tests/data/legacy_pickle/0.11.0/0.11.0_x86_64_linux_3.3.0.pickle b/pandas/io/tests/data/legacy_pickle/0.11.0/0.11.0_x86_64_linux_3.3.0.pickle
index 6b471d55b1642..e057576b6894b 100644
Binary files a/pandas/io/tests/data/legacy_pickle/0.11.0/0.11.0_x86_64_linux_3.3.0.pickle and b/pandas/io/tests/data/legacy_pickle/0.11.0/0.11.0_x86_64_linux_3.3.0.pickle differ
diff --git a/pandas/io/tests/data/legacy_pickle/0.12.0/0.12.0_AMD64_windows_2.7.3.pickle b/pandas/io/tests/data/legacy_pickle/0.12.0/0.12.0_AMD64_windows_2.7.3.pickle
new file mode 100644
index 0000000000000..1001c0f470122
Binary files /dev/null and b/pandas/io/tests/data/legacy_pickle/0.12.0/0.12.0_AMD64_windows_2.7.3.pickle differ
diff --git a/pandas/io/tests/data/legacy_pickle/0.12.0/0.12.0_i686_linux_2.7.3.pickle b/pandas/io/tests/data/legacy_pickle/0.12.0/0.12.0_i686_linux_2.7.3.pickle
deleted file mode 100644
index 17061f6b7dc0f..0000000000000
Binary files a/pandas/io/tests/data/legacy_pickle/0.12.0/0.12.0_i686_linux_2.7.3.pickle and /dev/null differ
diff --git a/pandas/io/tests/data/legacy_pickle/0.12.0/0.12.0_x86_64_linux_2.7.3.pickle b/pandas/io/tests/data/legacy_pickle/0.12.0/0.12.0_x86_64_linux_2.7.3.pickle
index 470d3e89c433d..3049e94791581 100644
Binary files a/pandas/io/tests/data/legacy_pickle/0.12.0/0.12.0_x86_64_linux_2.7.3.pickle and b/pandas/io/tests/data/legacy_pickle/0.12.0/0.12.0_x86_64_linux_2.7.3.pickle differ
diff --git a/pandas/io/tests/data/legacy_pickle/0.12.0/x86_64_linux_2.7.3.pickle b/pandas/io/tests/data/legacy_pickle/0.12.0/x86_64_linux_2.7.3.pickle
deleted file mode 100644
index e8c1e52078f7c..0000000000000
Binary files a/pandas/io/tests/data/legacy_pickle/0.12.0/x86_64_linux_2.7.3.pickle and /dev/null differ
diff --git a/pandas/io/tests/data/legacy_pickle/0.13.0/0.12.0-300-g6ffed43_x86_64_linux_2.7.3.pickle b/pandas/io/tests/data/legacy_pickle/0.13.0/0.12.0-300-g6ffed43_x86_64_linux_2.7.3.pickle
deleted file mode 100644
index 93e1f3e6c9607..0000000000000
Binary files a/pandas/io/tests/data/legacy_pickle/0.13.0/0.12.0-300-g6ffed43_x86_64_linux_2.7.3.pickle and /dev/null differ
diff --git a/pandas/io/tests/generate_legacy_pickles.py b/pandas/io/tests/generate_legacy_pickles.py
index f54a67b7f76cf..05e5d68379b09 100644
--- a/pandas/io/tests/generate_legacy_pickles.py
+++ b/pandas/io/tests/generate_legacy_pickles.py
@@ -77,16 +77,23 @@ def create_data():
index = dict(int = Index(np.arange(10)),
date = date_range('20130101',periods=10))
- mi = dict(reg = MultiIndex.from_tuples(list(zip([['bar', 'bar', 'baz', 'baz', 'foo', 'foo', 'qux', 'qux'],
+ mi = dict(reg2 = MultiIndex.from_tuples(tuple(zip(*[['bar', 'bar', 'baz', 'baz', 'foo', 'foo', 'qux', 'qux'],
['one', 'two', 'one', 'two', 'one', 'two', 'one', 'two']])),
names=['first', 'second']))
series = dict(float = Series(data['A']),
int = Series(data['B']),
mixed = Series(data['E']),
- ts = TimeSeries(np.arange(10).astype(np.int64),index=date_range('20130101',periods=10)))
+ ts = TimeSeries(np.arange(10).astype(np.int64),index=date_range('20130101',periods=10)),
+ mi = Series(np.arange(5).astype(np.float64),index=MultiIndex.from_tuples(tuple(zip(*[[1,1,2,2,2],
+ [3,4,3,4,5]])),
+ names=['one','two'])))
frame = dict(float = DataFrame(dict(A = series['float'], B = series['float'] + 1)),
int = DataFrame(dict(A = series['int'] , B = series['int'] + 1)),
- mixed = DataFrame(dict([ (k,data[k]) for k in ['A','B','C','D']])))
+ mixed = DataFrame(dict([ (k,data[k]) for k in ['A','B','C','D']])),
+ mi = DataFrame(dict(A = np.arange(5).astype(np.float64), B = np.arange(5).astype(np.int64)),
+ index=MultiIndex.from_tuples(tuple(zip(*[['bar','bar','baz','baz','baz'],
+ ['one','two','one','two','three']])),
+ names=['first','second'])))
panel = dict(float = Panel(dict(ItemA = frame['float'], ItemB = frame['float']+1)))
diff --git a/pandas/io/tests/test_pickle.py b/pandas/io/tests/test_pickle.py
index 92231d2ef094f..167eed95fd5a6 100644
--- a/pandas/io/tests/test_pickle.py
+++ b/pandas/io/tests/test_pickle.py
@@ -15,15 +15,34 @@
from pandas import Index
from pandas.sparse.tests import test_sparse
from pandas import compat
+from pandas.compat import u
from pandas.util.misc import is_little_endian
import pandas
+def _read_pickle(vf, encoding=None, compat=False):
+ from pandas.compat import pickle_compat as pc
+ with open(vf,'rb') as fh:
+ pc.load(fh, encoding=encoding, compat=compat)
+
class TestPickle(unittest.TestCase):
_multiprocess_can_split_ = True
def setUp(self):
from pandas.io.tests.generate_legacy_pickles import create_data
self.data = create_data()
+ self.path = u('__%s__.pickle' % tm.rands(10))
+
+ def compare_element(self, typ, result, expected):
+ if isinstance(expected,Index):
+ self.assert_(expected.equals(result))
+ return
+
+ if typ.startswith('sp_'):
+ comparator = getattr(test_sparse,"assert_%s_equal" % typ)
+ comparator(result,expected,exact_indices=False)
+ else:
+ comparator = getattr(tm,"assert_%s_equal" % typ)
+ comparator(result,expected)
def compare(self, vf):
@@ -36,19 +55,12 @@ def compare(self, vf):
for typ, dv in data.items():
for dt, result in dv.items():
-
- expected = self.data[typ][dt]
-
- if isinstance(expected,Index):
- self.assert_(expected.equals(result))
+ try:
+ expected = self.data[typ][dt]
+ except (KeyError):
continue
- if typ.startswith('sp_'):
- comparator = getattr(test_sparse,"assert_%s_equal" % typ)
- comparator(result,expected,exact_indices=False)
- else:
- comparator = getattr(tm,"assert_%s_equal" % typ)
- comparator(result,expected)
+ self.compare_element(typ, result, expected)
def read_pickles(self, version):
if not is_little_endian():
@@ -68,8 +80,18 @@ def test_read_pickles_0_11_0(self):
def test_read_pickles_0_12_0(self):
self.read_pickles('0.12.0')
- def test_read_pickles_0_13_0(self):
- self.read_pickles('0.13.0')
+ def test_round_trip_current(self):
+
+ for typ, dv in self.data.items():
+
+ for dt, expected in dv.items():
+
+ with tm.ensure_clean(self.path) as path:
+
+ pd.to_pickle(expected,path)
+
+ result = pd.read_pickle(path)
+ self.compare_element(typ, result, expected)
if __name__ == '__main__':
import nose
diff --git a/setup.py b/setup.py
index f04b39f864ecf..b7df339daf75a 100755
--- a/setup.py
+++ b/setup.py
@@ -527,7 +527,6 @@ def pxd(name):
'tests/data/legacy_pickle/0.10.1/*.pickle',
'tests/data/legacy_pickle/0.11.0/*.pickle',
'tests/data/legacy_pickle/0.12.0/*.pickle',
- 'tests/data/legacy_pickle/0.13.0/*.pickle',
'tests/data/*.csv',
'tests/data/*.dta',
'tests/data/*.txt',
| closes #4788
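The ``__reduce__`` fix above can be sketched in isolation — a minimal, hypothetical stand-in class that mirrors only the behavior relevant here (an immutable list subclass):

```python
import pickle

class FrozenList(list):
    """Minimal stand-in for an immutable list subclass (hypothetical,
    mirroring only the behavior relevant to this fix)."""

    def append(self, item):
        # Mutation is disabled; without __reduce__, pickle's default
        # protocol rebuilds list subclasses through append()/extend(),
        # which would raise here.
        raise TypeError("FrozenList is immutable")

    def __reduce__(self):
        # Reconstruct from the class and a plain-list payload instead,
        # bypassing the mutating methods entirely.
        return self.__class__, (list(self),)

fl = FrozenList(["first", "second"])
restored = pickle.loads(pickle.dumps(fl))
print(type(restored).__name__, list(restored))
```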
| https://api.github.com/repos/pandas-dev/pandas/pulls/4791 | 2013-09-09T23:04:09Z | 2013-09-10T13:05:46Z | 2013-09-10T13:05:46Z | 2014-06-26T18:22:27Z |
BUG: Fix read_fwf with compressed files. | diff --git a/doc/source/release.rst b/doc/source/release.rst
index f32ea44ed6242..53c50100072f9 100644
--- a/doc/source/release.rst
+++ b/doc/source/release.rst
@@ -369,6 +369,8 @@ Bug Fixes
- Bug in ``iloc`` with a slice index failing (:issue:`4771`)
- Incorrect error message with no colspecs or width in ``read_fwf``. (:issue:`4774`)
- Fix bugs in indexing in a Series with a duplicate index (:issue:`4548`, :issue:`4550`)
+ - Fixed bug with reading compressed files with ``read_fwf`` in Python 3.
+ (:issue:`3963`)
pandas 0.12.0
-------------
diff --git a/pandas/io/parsers.py b/pandas/io/parsers.py
index f05b0a676cde4..e1b09eb76415f 100644
--- a/pandas/io/parsers.py
+++ b/pandas/io/parsers.py
@@ -14,6 +14,7 @@
from pandas.core.frame import DataFrame
import datetime
import pandas.core.common as com
+from pandas.core.config import get_option
from pandas import compat
from pandas.io.date_converters import generic_parser
from pandas.io.common import get_filepath_or_buffer
@@ -1921,11 +1922,14 @@ class FixedWidthReader(object):
"""
A reader of fixed-width lines.
"""
- def __init__(self, f, colspecs, filler, thousands=None):
+ def __init__(self, f, colspecs, filler, thousands=None, encoding=None):
self.f = f
self.colspecs = colspecs
self.filler = filler # Empty characters between fields.
self.thousands = thousands
+ if encoding is None:
+ encoding = get_option('display.encoding')
+ self.encoding = encoding
if not ( isinstance(colspecs, (tuple, list))):
raise AssertionError()
@@ -1937,11 +1941,20 @@ def __init__(self, f, colspecs, filler, thousands=None):
isinstance(colspec[1], int) ):
raise AssertionError()
- def next(self):
- line = next(self.f)
- # Note: 'colspecs' is a sequence of half-open intervals.
- return [line[fromm:to].strip(self.filler or ' ')
- for (fromm, to) in self.colspecs]
+ if compat.PY3:
+ def next(self):
+ line = next(self.f)
+ if isinstance(line, bytes):
+ line = line.decode(self.encoding)
+ # Note: 'colspecs' is a sequence of half-open intervals.
+ return [line[fromm:to].strip(self.filler or ' ')
+ for (fromm, to) in self.colspecs]
+ else:
+ def next(self):
+ line = next(self.f)
+ # Note: 'colspecs' is a sequence of half-open intervals.
+ return [line[fromm:to].strip(self.filler or ' ')
+ for (fromm, to) in self.colspecs]
# Iterator protocol in Python 3 uses __next__()
__next__ = next
@@ -1959,7 +1972,8 @@ def __init__(self, f, **kwds):
PythonParser.__init__(self, f, **kwds)
def _make_reader(self, f):
- self.data = FixedWidthReader(f, self.colspecs, self.delimiter)
+ self.data = FixedWidthReader(f, self.colspecs, self.delimiter,
+ encoding=self.encoding)
##### deprecations in 0.12 #####
diff --git a/pandas/io/tests/test_parsers.py b/pandas/io/tests/test_parsers.py
index 9d751de6645ce..f872ddd793935 100644
--- a/pandas/io/tests/test_parsers.py
+++ b/pandas/io/tests/test_parsers.py
@@ -2028,6 +2028,31 @@ def test_fwf_regression(self):
res = df.loc[:,c]
self.assert_(len(res))
+ def test_fwf_compression(self):
+ try:
+ import gzip
+ import bz2
+ except ImportError:
+ raise nose.SkipTest("Need gzip and bz2 to run this test")
+
+ data = """1111111111
+ 2222222222
+ 3333333333""".strip()
+ widths = [5, 5]
+ names = ['one', 'two']
+ expected = read_fwf(StringIO(data), widths=widths, names=names)
+ if compat.PY3:
+ data = bytes(data, encoding='utf-8')
+ for comp_name, compresser in [('gzip', gzip.GzipFile),
+ ('bz2', bz2.BZ2File)]:
+ with tm.ensure_clean() as path:
+ tmp = compresser(path, mode='wb')
+ tmp.write(data)
+ tmp.close()
+ result = read_fwf(path, widths=widths, names=names,
+ compression=comp_name)
+ tm.assert_frame_equal(result, expected)
+
def test_verbose_import(self):
text = """a,b,c,d
one,1,2,3
| Fixes #3963.
| `gzip` and `bz2` both now return `bytes` rather than `str` in Python 3, so
we need to check for bytes and decode as necessary.
Replaces #4783.
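The decode-if-bytes pattern can be sketched like so — stdlib only; the fixed-width slicing here is illustrative, not pandas' actual reader:

```python
import gzip
import io

raw = b"1111111111\n2222222222\n3333333333\n"
compressed = io.BytesIO(gzip.compress(raw))

# Half-open column intervals, as in read_fwf's colspecs.
colspecs = [(0, 5), (5, 10)]
rows = []
with gzip.GzipFile(fileobj=compressed) as fh:
    for line in fh:
        # gzip/bz2 yield bytes under Python 3, so decode before slicing.
        if isinstance(line, bytes):
            line = line.decode("utf-8")
        rows.append([line[start:stop].strip() for start, stop in colspecs])

print(rows)
```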
| https://api.github.com/repos/pandas-dev/pandas/pulls/4784 | 2013-09-09T04:46:13Z | 2013-09-09T12:27:12Z | 2013-09-09T12:27:12Z | 2014-06-19T08:29:24Z |
BUG: Fix input bytes conversion in Py3 to return str | diff --git a/doc/source/release.rst b/doc/source/release.rst
index 140c3bc836fdb..124661021f45c 100644
--- a/doc/source/release.rst
+++ b/doc/source/release.rst
@@ -393,6 +393,9 @@ Bug Fixes
- Fixed bug with reading compressed files with ``read_fwf`` in Python 3.
(:issue:`3963`)
- Fixed an issue with a duplicate index and assignment with a dtype change (:issue:`4686`)
+ - Fixed bug with reading compressed files in as ``bytes`` rather than ``str``
+ in Python 3. Simplifies bytes-producing file-handling in Python 3
+ (:issue:`3963`, :issue:`4785`).
pandas 0.12.0
-------------
diff --git a/pandas/compat/__init__.py b/pandas/compat/__init__.py
index 1b5939eb98417..12c929cd59820 100644
--- a/pandas/compat/__init__.py
+++ b/pandas/compat/__init__.py
@@ -36,6 +36,7 @@
import types
PY3 = (sys.version_info[0] >= 3)
+PY3_2 = sys.version_info[:2] == (3, 2)
try:
import __builtin__ as builtins
diff --git a/pandas/core/common.py b/pandas/core/common.py
index b58bd92a4fd1f..34aaa08b57171 100644
--- a/pandas/core/common.py
+++ b/pandas/core/common.py
@@ -5,6 +5,7 @@
import re
import codecs
import csv
+import sys
from numpy.lib.format import read_array, write_array
import numpy as np
@@ -1858,27 +1859,42 @@ def next(self):
def _get_handle(path, mode, encoding=None, compression=None):
+ """Gets file handle for given path and mode.
+ NOTE: Under Python 3.2, getting a compressed file handle means reading in the entire file,
+ decompressing it and decoding it to ``str`` all at once and then wrapping it in a StringIO.
+ """
if compression is not None:
- if encoding is not None:
- raise ValueError('encoding + compression not yet supported')
+ if encoding is not None and not compat.PY3:
+ msg = 'encoding + compression not yet supported in Python 2'
+ raise ValueError(msg)
if compression == 'gzip':
import gzip
- return gzip.GzipFile(path, 'rb')
+ f = gzip.GzipFile(path, 'rb')
elif compression == 'bz2':
import bz2
- return bz2.BZ2File(path, 'rb')
+
+ f = bz2.BZ2File(path, 'rb')
else:
raise ValueError('Unrecognized compression type: %s' %
compression)
-
- if compat.PY3: # pragma: no cover
- if encoding:
- f = open(path, mode, encoding=encoding)
- else:
- f = open(path, mode, errors='replace')
+ if compat.PY3_2:
+ # gzip and bz2 don't work with TextIOWrapper in 3.2
+ encoding = encoding or get_option('display.encoding')
+ f = StringIO(f.read().decode(encoding))
+ elif compat.PY3:
+ from io import TextIOWrapper
+ f = TextIOWrapper(f, encoding=encoding)
+ return f
else:
- f = open(path, mode)
+ if compat.PY3:
+ if encoding:
+ f = open(path, mode, encoding=encoding)
+ else:
+ f = open(path, mode, errors='replace')
+ else:
+ f = open(path, mode)
+
return f
if compat.PY3: # pragma: no cover
diff --git a/pandas/io/parsers.py b/pandas/io/parsers.py
index 06940e3bb2b4c..5554bef4acf98 100644
--- a/pandas/io/parsers.py
+++ b/pandas/io/parsers.py
@@ -1175,13 +1175,36 @@ def count_empty_vals(vals):
return sum([1 for v in vals if v == '' or v is None])
-def _wrap_compressed(f, compression):
+def _wrap_compressed(f, compression, encoding=None):
+ """wraps compressed fileobject in a decompressing fileobject
+ NOTE: For all files in Python 3.2 and for bzip'd files under all Python
+ versions, this means reading in the entire file and then re-wrapping it in
+ StringIO.
+ """
compression = compression.lower()
+ encoding = encoding or get_option('display.encoding')
if compression == 'gzip':
import gzip
- return gzip.GzipFile(fileobj=f)
+
+ f = gzip.GzipFile(fileobj=f)
+ if compat.PY3_2:
+ # 3.2's gzip doesn't support read1
+ f = StringIO(f.read().decode(encoding))
+ elif compat.PY3:
+ from io import TextIOWrapper
+
+ f = TextIOWrapper(f)
+ return f
elif compression == 'bz2':
- raise ValueError('Python cannot read bz2 data from file handle')
+ import bz2
+
+ # bz2 module can't take file objects, so have to run through decompress
+ # manually
+ data = bz2.decompress(f.read())
+ if compat.PY3:
+ data = data.decode(encoding)
+ f = StringIO(data)
+ return f
else:
raise ValueError('do not recognize compression method %s'
% compression)
@@ -1235,7 +1258,12 @@ def __init__(self, f, **kwds):
f = com._get_handle(f, 'r', encoding=self.encoding,
compression=self.compression)
elif self.compression:
- f = _wrap_compressed(f, self.compression)
+ f = _wrap_compressed(f, self.compression, self.encoding)
+ # in Python 3, convert BytesIO or fileobjects passed with an encoding
+ elif compat.PY3 and isinstance(f, compat.BytesIO):
+ from io import TextIOWrapper
+
+ f = TextIOWrapper(f, encoding=self.encoding)
if hasattr(f, 'readline'):
self._make_reader(f)
@@ -1321,14 +1349,9 @@ class MyDialect(csv.Dialect):
def _read():
line = next(f)
pat = re.compile(sep)
- if (compat.PY3 and isinstance(line, bytes)):
- yield pat.split(line.decode('utf-8').strip())
- for line in f:
- yield pat.split(line.decode('utf-8').strip())
- else:
+ yield pat.split(line.strip())
+ for line in f:
yield pat.split(line.strip())
- for line in f:
- yield pat.split(line.strip())
reader = _read()
self.data = reader
diff --git a/pandas/io/tests/test_parsers.py b/pandas/io/tests/test_parsers.py
index f872ddd793935..fb2b3fdd33bf1 100644
--- a/pandas/io/tests/test_parsers.py
+++ b/pandas/io/tests/test_parsers.py
@@ -1,3 +1,4 @@
+# -*- coding: utf-8 -*-
# pylint: disable=E1101
from datetime import datetime
@@ -2043,8 +2044,8 @@ def test_fwf_compression(self):
expected = read_fwf(StringIO(data), widths=widths, names=names)
if compat.PY3:
data = bytes(data, encoding='utf-8')
- for comp_name, compresser in [('gzip', gzip.GzipFile),
- ('bz2', bz2.BZ2File)]:
+ comps = [('gzip', gzip.GzipFile), ('bz2', bz2.BZ2File)]
+ for comp_name, compresser in comps:
with tm.ensure_clean() as path:
tmp = compresser(path, mode='wb')
tmp.write(data)
@@ -2053,6 +2054,18 @@ def test_fwf_compression(self):
compression=comp_name)
tm.assert_frame_equal(result, expected)
+ def test_BytesIO_input(self):
+ if not compat.PY3:
+ raise nose.SkipTest("Bytes-related test - only needs to work on Python 3")
+ result = pd.read_fwf(BytesIO("שלום\nשלום".encode('utf8')), widths=[2,2])
+ expected = pd.DataFrame([["של", "ום"]], columns=["של", "ום"])
+ tm.assert_frame_equal(result, expected)
+ data = BytesIO("שלום::1234\n562::123".encode('cp1255'))
+ result = pd.read_table(data, sep="::", engine='python',
+ encoding='cp1255')
+ expected = pd.DataFrame([[562, 123]], columns=["שלום","1234"])
+ tm.assert_frame_equal(result, expected)
+
def test_verbose_import(self):
text = """a,b,c,d
one,1,2,3
| Fixes #3963, #4785
Fixed bug with reading compressed files in as `bytes` rather than `str` in Python 3
(`gzip` and `bz2` both now return `bytes` there), as well as the lack of conversion of `BytesIO`. Now, `_get_handle` and `_wrap_compressed` both wrap in an `io.TextIOWrapper`, so that the parsers work internally only with `str` in Python 3. In Python 3.2, the entire file has to be read in first (because `gzip` and `bz2` files both lack a `read1()` method in 3.2).
Also adds support for passing file objects with compression == 'bz2'.
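A minimal sketch of the wrapping strategy described above — a hypothetical helper, not the pandas internals:

```python
import bz2
import gzip
import io

def wrap_compressed(f, compression, encoding="utf-8"):
    """Wrap a binary compressed stream so callers iterate over str lines."""
    if compression == "gzip":
        # Stream-decompress, then decode lazily via TextIOWrapper.
        return io.TextIOWrapper(gzip.GzipFile(fileobj=f), encoding=encoding)
    elif compression == "bz2":
        # Mirror the approach above for bz2: decompress the whole payload
        # in memory and re-wrap the decoded text in StringIO.
        return io.StringIO(bz2.decompress(f.read()).decode(encoding))
    raise ValueError("do not recognize compression method %s" % compression)

data = "a,b\n1,2\n".encode("utf-8")
gz_text = wrap_compressed(io.BytesIO(gzip.compress(data)), "gzip").read()
bz_text = wrap_compressed(io.BytesIO(bz2.compress(data)), "bz2").read()
print(gz_text == bz_text)
```

Either way the caller gets a text-mode stream, so the parser code never has to branch on `bytes` vs `str`.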
| https://api.github.com/repos/pandas-dev/pandas/pulls/4783 | 2013-09-09T03:45:05Z | 2013-09-14T01:59:59Z | 2013-09-14T01:59:59Z | 2014-06-19T08:30:04Z |
ENH: Add axis and level keywords to where, so that the other argument can now be an alignable pandas object. | diff --git a/doc/source/indexing.rst b/doc/source/indexing.rst
index e3a069960ab6b..d2fd11ee43615 100644
--- a/doc/source/indexing.rst
+++ b/doc/source/indexing.rst
@@ -625,6 +625,18 @@ This can be done intuitively like so:
df2[df2 < 0] = 0
df2
+By default, ``where`` returns a modified copy of the data. There is an
+optional parameter ``inplace`` so that the original data can be modified
+without creating a copy:
+
+.. ipython:: python
+
+ df_orig = df.copy()
+ df_orig.where(df > 0, -df, inplace=True);
+ df_orig
+
+**alignment**
+
Furthermore, ``where`` aligns the input boolean condition (ndarray or DataFrame),
such that partial selection with setting is possible. This is analagous to
partial setting via ``.ix`` (but on the contents rather than the axis labels)
@@ -635,24 +647,30 @@ partial setting via ``.ix`` (but on the contents rather than the axis labels)
df2[ df2[1:4] > 0 ] = 3
df2
-By default, ``where`` returns a modified copy of the data. There is an
-optional parameter ``inplace`` so that the original data can be modified
-without creating a copy:
+.. versionadded:: 0.13
+
+Where can also accept ``axis`` and ``level`` parameters to align the input when
+performing the ``where``.
.. ipython:: python
- df_orig = df.copy()
+ df2 = df.copy()
+ df2.where(df2>0,df2['A'],axis='index')
- df_orig.where(df > 0, -df, inplace=True);
+This is equivalent to (but faster than) the following.
- df_orig
+.. ipython:: python
+
+ df2 = df.copy()
+ df2.apply(lambda x, y: x.where(x>0,y), y=df2['A'])
+
+**mask**
``mask`` is the inverse boolean operation of ``where``.
.. ipython:: python
s.mask(s >= 0)
-
df.mask(df >= 0)
Take Methods
diff --git a/doc/source/missing_data.rst b/doc/source/missing_data.rst
index 0c8efb4e905ec..6b63032a6c659 100644
--- a/doc/source/missing_data.rst
+++ b/doc/source/missing_data.rst
@@ -205,6 +205,33 @@ To remind you, these are the available filling methods:
With time series data, using pad/ffill is extremely common so that the "last
known value" is available at every time point.
+Filling with a PandasObject
+~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+.. versionadded:: 0.12
+
+You can also fill using a direct assignment with an alignable object. The
+use case of this is to fill a DataFrame with the mean of that column.
+
+.. ipython:: python
+
+ df = DataFrame(np.random.randn(10,3))
+ df.iloc[3:5,0] = np.nan
+ df.iloc[4:6,1] = np.nan
+ df.iloc[5:8,2] = np.nan
+ df
+
+ df.fillna(df.mean())
+
+.. versionadded:: 0.13
+
+Same result as above, but aligning the 'fill' value, which is
+a Series in this case.
+
+.. ipython:: python
+
+ df.where(pd.notnull(df),df.mean(),axis='columns')
+
.. _missing_data.dropna:
Dropping axis labels with missing data: dropna
diff --git a/doc/source/release.rst b/doc/source/release.rst
index f32ea44ed6242..70c520b6831bc 100644
--- a/doc/source/release.rst
+++ b/doc/source/release.rst
@@ -102,6 +102,8 @@ Improvements to existing features
tests/test_frame, tests/test_multilevel (:issue:`4732`).
- Performance improvement of timesesies plotting with PeriodIndex and added
test to vbench (:issue:`4705` and :issue:`4722`)
+ - Add ``axis`` and ``level`` keywords to ``where``, so that the ``other`` argument
+ can now be an alignable pandas object.
API Changes
~~~~~~~~~~~
diff --git a/pandas/core/generic.py b/pandas/core/generic.py
index f4c5eb808689c..2919790300bc3 100644
--- a/pandas/core/generic.py
+++ b/pandas/core/generic.py
@@ -2173,6 +2173,8 @@ def align(self, other, join='outer', axis=None, level=None, copy=True,
from pandas import DataFrame, Series
method = com._clean_fill_method(method)
+ if axis is not None:
+ axis = self._get_axis_number(axis)
if isinstance(other, DataFrame):
return self._align_frame(other, join=join, axis=axis, level=level,
copy=copy, fill_value=fill_value,
@@ -2262,7 +2264,8 @@ def _align_series(self, other, join='outer', axis=None, level=None,
else:
return left_result, right_result
- def where(self, cond, other=np.nan, inplace=False, try_cast=False, raise_on_error=True):
+ def where(self, cond, other=np.nan, inplace=False, axis=None, level=None,
+ try_cast=False, raise_on_error=True):
"""
Return an object of same shape as self and whose corresponding
entries are from self where cond is True and otherwise are from other.
@@ -2273,6 +2276,8 @@ def where(self, cond, other=np.nan, inplace=False, try_cast=False, raise_on_erro
other : scalar or DataFrame
inplace : boolean, default False
Whether to perform the operation in place on the data
+ axis : alignment axis if needed, default None
+ level : alignment level if needed, default None
try_cast : boolean, default False
try to cast the result back to the input type (if possible),
raise_on_error : boolean, default True
@@ -2306,15 +2311,17 @@ def where(self, cond, other=np.nan, inplace=False, try_cast=False, raise_on_erro
# align with me
if other.ndim <= self.ndim:
- _, other = self.align(other, join='left', fill_value=np.nan)
+ _, other = self.align(other, join='left',
+ axis=axis, level=level,
+ fill_value=np.nan)
# if we are NOT aligned, raise as we cannot where index
- if not all([ other._get_axis(i).equals(ax) for i, ax in enumerate(self.axes) ]):
+ if axis is None and not all([ other._get_axis(i).equals(ax) for i, ax in enumerate(self.axes) ]):
raise InvalidIndexError
# slice me out of the other
else:
- raise NotImplemented
+ raise NotImplementedError("cannot align with a higher dimensional PandasObject")
elif is_list_like(other):
@@ -2386,11 +2393,11 @@ def where(self, cond, other=np.nan, inplace=False, try_cast=False, raise_on_erro
if inplace:
# we may have different type blocks come out of putmask, so
# reconstruct the block manager
- self._data = self._data.putmask(cond, other, inplace=True)
+ self._data = self._data.putmask(cond, other, align=axis is None, inplace=True)
else:
new_data = self._data.where(
- other, cond, raise_on_error=raise_on_error, try_cast=try_cast)
+ other, cond, align=axis is None, raise_on_error=raise_on_error, try_cast=try_cast)
return self._constructor(new_data)
diff --git a/pandas/core/internals.py b/pandas/core/internals.py
index 1716980813cea..91be4f42c17e4 100644
--- a/pandas/core/internals.py
+++ b/pandas/core/internals.py
@@ -593,22 +593,40 @@ def setitem(self, indexer, value):
return [ self ]
- def putmask(self, mask, new, inplace=False):
+ def putmask(self, mask, new, align=True, inplace=False):
""" putmask the data to the block; it is possible that we may create a new dtype of block
- return the resulting block(s) """
+ return the resulting block(s)
+
+ Parameters
+ ----------
+ mask : the condition to respect
+ new : a ndarray/object
+ align : boolean, perform alignment on other/cond, default is True
+ inplace : perform inplace modification, default is False
+
+ Returns
+ -------
+ a new block(s), the result of the putmask
+ """
new_values = self.values if inplace else self.values.copy()
# may need to align the new
if hasattr(new, 'reindex_axis'):
- axis = getattr(new, '_info_axis_number', 0)
- new = new.reindex_axis(self.items, axis=axis, copy=False).values.T
+ if align:
+ axis = getattr(new, '_info_axis_number', 0)
+ new = new.reindex_axis(self.items, axis=axis, copy=False).values.T
+ else:
+ new = new.values.T
# may need to align the mask
if hasattr(mask, 'reindex_axis'):
- axis = getattr(mask, '_info_axis_number', 0)
- mask = mask.reindex_axis(
- self.items, axis=axis, copy=False).values.T
+ if align:
+ axis = getattr(mask, '_info_axis_number', 0)
+ mask = mask.reindex_axis(
+ self.items, axis=axis, copy=False).values.T
+ else:
+ mask = mask.values.T
# if we are passed a scalar None, convert it here
if not is_list_like(new) and isnull(new):
@@ -616,6 +634,11 @@ def putmask(self, mask, new, inplace=False):
if self._can_hold_element(new):
new = self._try_cast(new)
+
+ # pseudo-broadcast
+ if isinstance(new,np.ndarray) and new.ndim == self.ndim-1:
+ new = np.repeat(new,self.shape[-1]).reshape(self.shape)
+
np.putmask(new_values, mask, new)
# maybe upcast me
@@ -842,7 +865,7 @@ def handle_error():
return [make_block(result, self.items, self.ref_items, ndim=self.ndim, fastpath=True)]
- def where(self, other, cond, raise_on_error=True, try_cast=False):
+ def where(self, other, cond, align=True, raise_on_error=True, try_cast=False):
"""
evaluate the block; return result block(s) from the result
@@ -850,6 +873,7 @@ def where(self, other, cond, raise_on_error=True, try_cast=False):
----------
other : a ndarray/object
cond : the condition to respect
+ align : boolean, perform alignment on other/cond
raise_on_error : if True, raise when I can't perform the function, False by default (and just return
the data that we had coming in)
@@ -862,21 +886,30 @@ def where(self, other, cond, raise_on_error=True, try_cast=False):
# see if we can align other
if hasattr(other, 'reindex_axis'):
- axis = getattr(other, '_info_axis_number', 0)
- other = other.reindex_axis(self.items, axis=axis, copy=True).values
+ if align:
+ axis = getattr(other, '_info_axis_number', 0)
+ other = other.reindex_axis(self.items, axis=axis, copy=True).values
+ else:
+ other = other.values
# make sure that we can broadcast
is_transposed = False
if hasattr(other, 'ndim') and hasattr(values, 'ndim'):
if values.ndim != other.ndim or values.shape == other.shape[::-1]:
- values = values.T
- is_transposed = True
+
+            # pseudo-broadcast (it's a 2d vs 1d, say, and where needs it in a specific direction)
+ if other.ndim >= 1 and values.ndim-1 == other.ndim and values.shape[0] != other.shape[0]:
+ other = _block_shape(other).T
+ else:
+ values = values.T
+ is_transposed = True
# see if we can align cond
if not hasattr(cond, 'shape'):
raise ValueError(
"where must have a condition that is ndarray like")
- if hasattr(cond, 'reindex_axis'):
+
+ if align and hasattr(cond, 'reindex_axis'):
axis = getattr(cond, '_info_axis_number', 0)
cond = cond.reindex_axis(self.items, axis=axis, copy=True).values
else:
diff --git a/pandas/core/series.py b/pandas/core/series.py
index 4f67fb1afdd5f..ef8c630a7bde8 100644
--- a/pandas/core/series.py
+++ b/pandas/core/series.py
@@ -2725,7 +2725,7 @@ def apply(self, func, convert_dtype=True, args=(), **kwds):
else:
return self._constructor(mapped, index=self.index, name=self.name)
- def align(self, other, join='outer', level=None, copy=True,
+ def align(self, other, join='outer', axis=None, level=None, copy=True,
fill_value=None, method=None, limit=None):
"""
Align two Series object with the specified join method
@@ -2734,6 +2734,7 @@ def align(self, other, join='outer', level=None, copy=True,
----------
other : Series
join : {'outer', 'inner', 'left', 'right'}, default 'outer'
+ axis : None, alignment axis (is 0 for Series)
level : int or name
Broadcast across a level, matching Index values on the
passed MultiIndex level
diff --git a/pandas/tests/test_frame.py b/pandas/tests/test_frame.py
index cefe15952d329..f9756858b5d85 100644
--- a/pandas/tests/test_frame.py
+++ b/pandas/tests/test_frame.py
@@ -7931,6 +7931,35 @@ def test_where_none(self):
expected = DataFrame({'series': Series([0,1,2,3,4,5,6,7,np.nan,np.nan]) })
assert_frame_equal(df, expected)
+ def test_where_align(self):
+
+ def create():
+ df = DataFrame(np.random.randn(10,3))
+ df.iloc[3:5,0] = np.nan
+ df.iloc[4:6,1] = np.nan
+ df.iloc[5:8,2] = np.nan
+ return df
+
+ # series
+ df = create()
+ expected = df.fillna(df.mean())
+ result = df.where(pd.notnull(df),df.mean(),axis='columns')
+ assert_frame_equal(result, expected)
+
+ df.where(pd.notnull(df),df.mean(),inplace=True,axis='columns')
+ assert_frame_equal(df, expected)
+
+ df = create().fillna(0)
+ expected = df.apply(lambda x, y: x.where(x>0,y), y=df[0])
+ result = df.where(df>0,df[0],axis='index')
+ assert_frame_equal(result, expected)
+
+ # frame
+ df = create()
+ expected = df.fillna(1)
+ result = df.where(pd.notnull(df),DataFrame(1,index=df.index,columns=df.columns))
+ assert_frame_equal(result, expected)
+
def test_mask(self):
df = DataFrame(np.random.randn(5, 3))
cond = df > 0
| Traditionally, a fillna that fills with the column means is an apply operation:
```
In [1]: df = DataFrame(np.random.randn(10,3))
In [2]: df.iloc[3:5,0] = np.nan
In [3]: df.iloc[4:6,1] = np.nan
In [4]: df.iloc[5:8,2] = np.nan
In [5]: df
Out[5]:
0 1 2
0 0.096030 0.197451 1.645981
1 -0.443437 0.359204 -0.382563
2 0.613981 1.418754 -0.589935
3 0.000000 0.449953 -0.308414
4 0.000000 0.000000 -0.471054
5 -2.350309 0.000000 0.000000
6 -0.218522 0.498207 0.000000
7 0.478238 0.399154 0.000000
8 0.895854 0.230992 0.025799
9 0.085675 2.189373 -0.946990
```
The following currently fails in 0.12, as `where` is finicky about how it broadcasts:
```
In [4]: df.where(df>0,df[0],axis='index')
ValueError: other must be the same shape as self when an ndarray
```
Adding `axis` and `level` arguments to `where` (which uses `align` under the hood) now enables the `other` object to be a Series/DataFrame (as well as a scalar) without a whole bunch of manual alignment/broadcasting. This should also be quite a bit faster.
```
In [6]: df.where(df>0,df[0],axis='index')
Out[6]:
0 1 2
0 0.096030 0.197451 1.645981
1 -0.443437 0.359204 -0.443437
2 0.613981 1.418754 0.613981
3 0.000000 0.449953 0.000000
4 0.000000 0.000000 0.000000
5 -2.350309 -2.350309 -2.350309
6 -0.218522 0.498207 -0.218522
7 0.478238 0.399154 0.478238
8 0.895854 0.230992 0.025799
9 0.085675 2.189373 0.085675
```
This works in 0.12.
```
In [7]: df.apply(lambda x, y: x.where(x>0,y), y=df[0])
Out[7]:
0 1 2
0 0.096030 0.197451 1.645981
1 -0.443437 0.359204 -0.443437
2 0.613981 1.418754 0.613981
3 0.000000 0.449953 0.000000
4 0.000000 0.000000 0.000000
5 -2.350309 -2.350309 -2.350309
6 -0.218522 0.498207 -0.218522
7 0.478238 0.399154 0.478238
8 0.895854 0.230992 0.025799
9 0.085675 2.189373 0.085675
```
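Against a released pandas (0.13+) the new keyword can be sketched like this; the small frame and its values below are illustrative, not taken from the PR's test suite:

```python
import pandas as pd

# a small frame with some negative entries; column labels are 0 and 1
df = pd.DataFrame({0: [1.0, -2.0, 3.0],
                   1: [-4.0, 5.0, -6.0]})

# axis='index' broadcasts the Series df[0] down the rows, so every
# negative entry is replaced by column 0's value from the same row
result = df.where(df > 0, df[0], axis='index')

# the column-by-column apply it replaces gives the same answer
expected = df.apply(lambda col: col.where(col > 0, df[0]))
```

With `axis='index'` each element of the `other` Series lines up with a row label; `axis='columns'` would instead line it up with the column labels.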
| https://api.github.com/repos/pandas-dev/pandas/pulls/4781 | 2013-09-09T03:05:55Z | 2013-09-10T11:34:51Z | 2013-09-10T11:34:51Z | 2014-06-12T07:40:42Z |
TST/BUG: duplicate indexing ops with a Series using where and inplace add are buggy (GH4550/GH4548)
index cac49a53e8fc5..f32ea44ed6242 100644
--- a/doc/source/release.rst
+++ b/doc/source/release.rst
@@ -368,6 +368,7 @@ Bug Fixes
- Bug in concatenation with duplicate columns across dtypes not merging with axis=0 (:issue:`4771`)
- Bug in ``iloc`` with a slice index failing (:issue:`4771`)
- Incorrect error message with no colspecs or width in ``read_fwf``. (:issue:`4774`)
+ - Fix bugs in indexing in a Series with a duplicate index (:issue:`4548`, :issue:`4550`)
pandas 0.12.0
-------------
diff --git a/pandas/core/generic.py b/pandas/core/generic.py
index 58e1fbc4f177d..f4c5eb808689c 100644
--- a/pandas/core/generic.py
+++ b/pandas/core/generic.py
@@ -7,7 +7,7 @@
import pandas as pd
from pandas.core.base import PandasObject
-from pandas.core.index import Index, MultiIndex, _ensure_index
+from pandas.core.index import Index, MultiIndex, _ensure_index, InvalidIndexError
import pandas.core.indexing as indexing
from pandas.core.indexing import _maybe_convert_indices
from pandas.tseries.index import DatetimeIndex
@@ -2308,6 +2308,10 @@ def where(self, cond, other=np.nan, inplace=False, try_cast=False, raise_on_erro
_, other = self.align(other, join='left', fill_value=np.nan)
+ # if we are NOT aligned, raise as we cannot where index
+ if not all([ other._get_axis(i).equals(ax) for i, ax in enumerate(self.axes) ]):
+ raise InvalidIndexError
+
# slice me out of the other
else:
raise NotImplemented
diff --git a/pandas/core/series.py b/pandas/core/series.py
index 5579e60ceb90e..4f67fb1afdd5f 100644
--- a/pandas/core/series.py
+++ b/pandas/core/series.py
@@ -1037,9 +1037,13 @@ def __setitem__(self, key, value):
if _is_bool_indexer(key):
key = _check_bool_indexer(self.index, key)
- self.where(~key, value, inplace=True)
- else:
- self._set_with(key, value)
+ try:
+ self.where(~key, value, inplace=True)
+ return
+ except (InvalidIndexError):
+ pass
+
+ self._set_with(key, value)
def _set_with_engine(self, key, value):
values = self.values
diff --git a/pandas/tests/test_series.py b/pandas/tests/test_series.py
index b0911ed10be20..7f8fa1019261f 100644
--- a/pandas/tests/test_series.py
+++ b/pandas/tests/test_series.py
@@ -1373,6 +1373,26 @@ def test_where_inplace(self):
rs.where(cond, -s, inplace=True)
assert_series_equal(rs, s.where(cond, -s))
+ def test_where_dups(self):
+ # GH 4550
+ # where crashes with dups in index
+ s1 = Series(list(range(3)))
+ s2 = Series(list(range(3)))
+ comb = pd.concat([s1,s2])
+ result = comb.where(comb < 2)
+ expected = Series([0,1,np.nan,0,1,np.nan],index=[0,1,2,0,1,2])
+ assert_series_equal(result, expected)
+
+ # GH 4548
+ # inplace updating not working with dups
+ comb[comb<1] = 5
+ expected = Series([5,1,2,5,1,2],index=[0,1,2,0,1,2])
+ assert_series_equal(comb, expected)
+
+ comb[comb<2] += 10
+ expected = Series([5,11,2,5,11,2],index=[0,1,2,0,1,2])
+ assert_series_equal(comb, expected)
+
def test_mask(self):
s = Series(np.random.randn(5))
cond = s > 0
| closes #4550, #4548
```
In [1]: s = pd.concat([Series(list(range(3))),Series(list(range(3)))])
In [2]: s
Out[2]:
0 0
1 1
2 2
0 0
1 1
2 2
dtype: int64
In [3]: s.where(s<2)
Out[3]:
0 0
1 1
2 NaN
0 0
1 1
2 NaN
dtype: float64
In [4]: s[s<1] = 5
In [5]: s
Out[5]:
0 5
1 1
2 2
0 5
1 1
2 2
dtype: int64
In [6]: s[s<2] += 10
In [7]: s
Out[7]:
0 5
1 11
2 2
0 5
1 11
2 2
dtype: int64
```
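Both fixed operations can be reproduced directly against a released pandas; the concatenated Series below matches the one in the new test:

```python
import pandas as pd

# index contains the duplicate labels 0, 1, 2, 0, 1, 2
s = pd.concat([pd.Series(range(3)), pd.Series(range(3))])

# GH 4550: where() on a duplicated index no longer crashes
masked = s.where(s < 2)

# GH 4548: boolean setitem (which routes through where(inplace=True))
# now updates in place as well
s[s < 1] = 5
s[s < 2] += 10
```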
| https://api.github.com/repos/pandas-dev/pandas/pulls/4779 | 2013-09-09T01:51:08Z | 2013-09-09T02:03:15Z | 2013-09-09T02:03:14Z | 2014-06-15T21:16:42Z |
BLD: windows builds failing in pandas.json (#4764) | diff --git a/pandas/src/ujson/lib/ultrajsonenc.c b/pandas/src/ujson/lib/ultrajsonenc.c
index 4106ed6b73fcf..15d92d42f6753 100644
--- a/pandas/src/ujson/lib/ultrajsonenc.c
+++ b/pandas/src/ujson/lib/ultrajsonenc.c
@@ -549,7 +549,7 @@ int Buffer_AppendDoubleUnchecked(JSOBJ obj, JSONObjectEncoder *enc, double value
{
precision_str[0] = '%';
precision_str[1] = '.';
-#ifdef _WIN32
+#if defined(_WIN32) && defined(_MSC_VER)
sprintf_s(precision_str+2, sizeof(precision_str)-2, "%ug", enc->doublePrecision);
enc->offset += sprintf_s(str, enc->end - enc->offset, precision_str, neg ? -value : value);
#else
| https://api.github.com/repos/pandas-dev/pandas/pulls/4778 | 2013-09-09T00:59:50Z | 2013-09-18T02:36:50Z | 2013-09-18T02:36:50Z | 2014-07-16T08:27:15Z | |
DOC: correction of example in unstack docstring | diff --git a/pandas/core/frame.py b/pandas/core/frame.py
index a3eb3ea54c784..9f30c3e7f5255 100644
--- a/pandas/core/frame.py
+++ b/pandas/core/frame.py
@@ -3280,28 +3280,38 @@ def unstack(self, level=-1):
Parameters
----------
- level : int, string, or list of these, default last level
+ level : int, string, or list of these, default -1 (last level)
Level(s) of index to unstack, can pass level name
+ See also
+ --------
+ DataFrame.pivot : Pivot a table based on column values.
+ DataFrame.stack : Pivot a level of the column labels (inverse operation
+ from `unstack`).
+
Examples
--------
+ >>> index = pd.MultiIndex.from_tuples([('one', 'a'), ('one', 'b'),
+ ... ('two', 'a'), ('two', 'b')])
+ >>> s = pd.Series(np.arange(1.0, 5.0), index=index)
>>> s
- one a 1.
- one b 2.
- two a 3.
- two b 4.
+ one a 1
+ b 2
+ two a 3
+ b 4
+ dtype: float64
>>> s.unstack(level=-1)
a b
- one 1. 2.
- two 3. 4.
+ one 1 2
+ two 3 4
- >>> df = s.unstack(level=0)
- >>> df
+ >>> s.unstack(level=0)
one two
- a 1. 2.
- b 3. 4.
+ a 1 3
+ b 2 4
+ >>> df = s.unstack(level=0)
>>> df.unstack()
one a 1.
b 3.
| There is an error in the example of the `unstack` docstring (http://pandas.pydata.org/pandas-docs/dev/generated/pandas.DataFrame.unstack.html):
```
>>> df = s.unstack(level=0)
>>> df
one two
a 1. 2.
b 3. 4
```
should be
```
one two
a 1. 3.
b 2. 4
```
So this PR fixes that, but at the same time I did some other adjustments. Mainly:
- added a "see also"
- added in the example the code to create the series, so one can run the example him/herself
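For reference, a runnable version of the corrected example (the values match the fixed docstring):

```python
import numpy as np
import pandas as pd

index = pd.MultiIndex.from_tuples([('one', 'a'), ('one', 'b'),
                                   ('two', 'a'), ('two', 'b')])
s = pd.Series(np.arange(1.0, 5.0), index=index)

# unstacking the last level pivots 'a'/'b' into the columns
by_inner = s.unstack(level=-1)

# unstacking level 0 pivots 'one'/'two' into the columns: column
# 'one' holds [1, 2] and column 'two' holds [3, 4]
by_outer = s.unstack(level=0)
```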
| https://api.github.com/repos/pandas-dev/pandas/pulls/4776 | 2013-09-08T16:56:25Z | 2013-09-20T22:18:14Z | 2013-09-20T22:18:14Z | 2014-07-16T08:27:13Z |
DOC: some stylistic improvements to docstring rendering in documentation | diff --git a/doc/source/themes/nature_with_gtoc/static/nature.css_t b/doc/source/themes/nature_with_gtoc/static/nature.css_t
index 2e0bed922c1e6..61b0e2cce5e5a 100644
--- a/doc/source/themes/nature_with_gtoc/static/nature.css_t
+++ b/doc/source/themes/nature_with_gtoc/static/nature.css_t
@@ -178,6 +178,10 @@ div.body h4 { font-size: 110%; background-color: #D8DEE3; }
div.body h5 { font-size: 100%; background-color: #D8DEE3; }
div.body h6 { font-size: 100%; background-color: #D8DEE3; }
+p.rubric {
+ border-bottom: 1px solid rgb(201, 201, 201);
+}
+
a.headerlink {
color: #c60f0f;
font-size: 0.8em;
@@ -231,10 +235,10 @@ p.admonition-title:after {
pre {
padding: 10px;
- background-color: White;
+ background-color: rgb(250,250,250);
color: #222;
line-height: 1.2em;
- border: 1px solid #C6C9CB;
+ border: 1px solid rgb(201,201,201);
font-size: 1.1em;
margin: 1.5em 0 1.5em 0;
-webkit-box-shadow: 1px 1px 1px #d8d8d8;
@@ -258,3 +262,49 @@ div.viewcode-block:target {
border-top: 1px solid #ac9;
border-bottom: 1px solid #ac9;
}
+
+
+/**
+ * Styling for field lists
+ */
+
+ /* grey highlighting of 'parameter' and 'returns' field */
+table.field-list {
+ border-collapse: separate;
+ border-spacing: 10px;
+ margin-left: 1px;
+ /* border-left: 5px solid rgb(238, 238, 238) !important; */
+}
+
+table.field-list th.field-name {
+ /* display: inline-block; */
+ padding: 1px 8px 1px 5px;
+ white-space: nowrap;
+ background-color: rgb(238, 238, 238);
+}
+
+/* italic font for parameter types */
+table.field-list td.field-body > p {
+ font-style: italic;
+}
+
+table.field-list td.field-body > p > strong {
+ font-style: normal;
+}
+
+/* reduced space around parameter description */
+td.field-body blockquote {
+ border-left: none;
+ margin: 0em 0em 0.3em;
+ padding-left: 30px;
+}
+
+
+/**
+ * See also
+ */
+
+div.seealso dd {
+ margin-top: 0;
+ margin-bottom: 0;
+}
| This is a PR with some stylistic improvements to the docstring rendering in the documentation. The motivation was that I found the reference (docstring) documentation not always clearly organised.
Things I changed (mainly copied from the new numpy/scipy docs):
- reduced space around parameter description
- italic font for parameter types
- ensure that the colon after "Parameters" is on the same line, with a grey background on the parameter field name to highlight it
- "See also" box: put link and description on same line
- light background color in code examples
- line under Notes and Examples section headers
You can see the result here: http://jorisvandenbossche.github.io/example-pandas-docs/html-docstring-rendering/generated/pandas.DataFrame.apply.html#pandas.DataFrame.apply
and compare with the original: http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.apply.html#pandas.DataFrame.apply
Each of the points is of course debatable and I can change some of them back. The grey background of the code cells (this also applies to the tutorial docs) and the grey highlighting of the "parameter" and "returns" field (this can also be the full block as in e.g. http://docs.scipy.org/doc/numpy/reference/generated/numpy.reshape.html#numpy.reshape) are maybe the most debatable.
Style is of course always subjective :-) So what do you think?
| https://api.github.com/repos/pandas-dev/pandas/pulls/4775 | 2013-09-08T16:16:49Z | 2013-09-20T22:21:44Z | 2013-09-20T22:21:44Z | 2014-07-16T08:27:12Z |
BUG: read_fwf: incorrect error message with no colspecs or widths | diff --git a/doc/source/release.rst b/doc/source/release.rst
index 96527d0161687..66ac9d813f056 100644
--- a/doc/source/release.rst
+++ b/doc/source/release.rst
@@ -335,6 +335,7 @@ See :ref:`Internal Refactoring<whatsnew_0130.refactoring>`
- Bug in setting with ``loc/ix`` a single indexer with a multi-index axis and a numpy array, related to (:issue:`3777`)
- Bug in concatenation with duplicate columns across dtypes not merging with axis=0 (:issue:`4771`)
- Bug in ``iloc`` with a slice index failing (:issue:`4771`)
+ - Incorrect error message with no colspecs or width in ``read_fwf``. (:issue:`4774`)
pandas 0.12
===========
diff --git a/pandas/io/parsers.py b/pandas/io/parsers.py
index 8cf7eaa1b19e3..f05b0a676cde4 100644
--- a/pandas/io/parsers.py
+++ b/pandas/io/parsers.py
@@ -414,7 +414,9 @@ def parser_f(filepath_or_buffer,
@Appender(_read_fwf_doc)
def read_fwf(filepath_or_buffer, colspecs=None, widths=None, **kwds):
# Check input arguments.
- if bool(colspecs is None) == bool(widths is None):
+ if colspecs is None and widths is None:
+ raise ValueError("Must specify either colspecs or widths")
+ elif colspecs is not None and widths is not None:
raise ValueError("You must specify only one of 'widths' and "
"'colspecs'")
diff --git a/pandas/io/tests/test_parsers.py b/pandas/io/tests/test_parsers.py
index 6668cfd73a6b7..9d751de6645ce 100644
--- a/pandas/io/tests/test_parsers.py
+++ b/pandas/io/tests/test_parsers.py
@@ -1995,8 +1995,11 @@ def test_fwf(self):
StringIO(data3), colspecs=colspecs, delimiter='~', header=None)
tm.assert_frame_equal(df, expected)
- self.assertRaises(ValueError, read_fwf, StringIO(data3),
- colspecs=colspecs, widths=[6, 10, 10, 7])
+ with tm.assertRaisesRegexp(ValueError, "must specify only one of"):
+ read_fwf(StringIO(data3), colspecs=colspecs, widths=[6, 10, 10, 7])
+
+ with tm.assertRaisesRegexp(ValueError, "Must specify either"):
+ read_fwf(StringIO(data3))
def test_fwf_regression(self):
# GH 3594
| https://api.github.com/repos/pandas-dev/pandas/pulls/4774 | 2013-09-08T01:57:55Z | 2013-09-08T03:14:03Z | 2013-09-08T03:14:03Z | 2014-06-12T14:15:28Z | |
TST: add dups on both index tests for HDFStore | diff --git a/pandas/io/pytables.py b/pandas/io/pytables.py
index 0a9e6855f094a..d445ce8b797b5 100644
--- a/pandas/io/pytables.py
+++ b/pandas/io/pytables.py
@@ -667,7 +667,7 @@ def func(_start, _stop):
axis = list(set([t.non_index_axes[0][0] for t in tbls]))[0]
# concat and return
- return concat(objs, axis=axis, verify_integrity=True).consolidate()
+ return concat(objs, axis=axis, verify_integrity=False).consolidate()
if iterator or chunksize is not None:
return TableIterator(self, func, nrows=nrows, start=start, stop=stop, chunksize=chunksize, auto_close=auto_close)
diff --git a/pandas/io/tests/test_pytables.py b/pandas/io/tests/test_pytables.py
index e9f4cf7d0f96f..48a2150758a3f 100644
--- a/pandas/io/tests/test_pytables.py
+++ b/pandas/io/tests/test_pytables.py
@@ -2342,6 +2342,16 @@ def test_select_with_dups(self):
result = store.select('df',columns=['B','A'])
assert_frame_equal(result,expected,by_blocks=True)
+ # duplicates on both index and columns
+ with ensure_clean(self.path) as store:
+ store.append('df',df)
+ store.append('df',df)
+
+ expected = df.loc[:,['B','A']]
+ expected = concat([expected, expected])
+ result = store.select('df',columns=['B','A'])
+ assert_frame_equal(result,expected,by_blocks=True)
+
def test_wide_table_dups(self):
wp = tm.makePanel()
with ensure_clean(self.path) as store:
| https://api.github.com/repos/pandas-dev/pandas/pulls/4773 | 2013-09-07T21:02:33Z | 2013-09-07T21:18:52Z | 2013-09-07T21:18:52Z | 2014-07-16T08:27:09Z | |
BUG: Bug in concatenation with duplicate columns across dtypes not merging with axis=0 (GH4771) | diff --git a/doc/source/release.rst b/doc/source/release.rst
index e12e6c91d46d0..930f100fd86dc 100644
--- a/doc/source/release.rst
+++ b/doc/source/release.rst
@@ -331,6 +331,8 @@ See :ref:`Internal Refactoring<whatsnew_0130.refactoring>`
- Bug in multi-indexing with a partial string selection as one part of a MultIndex (:issue:`4758`)
- Bug with reindexing on the index with a non-unique index will now raise ``ValueError`` (:issue:`4746`)
- Bug in setting with ``loc/ix`` a single indexer with a multi-index axis and a numpy array, related to (:issue:`3777`)
+ - Bug in concatenation with duplicate columns across dtypes not merging with axis=0 (:issue:`4771`)
+ - Bug in ``iloc`` with a slice index failing (:issue:`4771`)
pandas 0.12
===========
diff --git a/pandas/core/internals.py b/pandas/core/internals.py
index 57db36b252e3c..e27430b06c45c 100644
--- a/pandas/core/internals.py
+++ b/pandas/core/internals.py
@@ -2174,7 +2174,7 @@ def get_slice(self, slobj, axis=0, raise_on_error=False):
placement=blk._ref_locs)
new_blocks = [newb]
else:
- return self.reindex_items(new_items)
+ return self.reindex_items(new_items, indexer=np.arange(len(self.items))[slobj])
else:
new_blocks = self._slice_blocks(slobj, axis)
diff --git a/pandas/tests/test_indexing.py b/pandas/tests/test_indexing.py
index d6088c2d72525..18ee89fbc5c66 100644
--- a/pandas/tests/test_indexing.py
+++ b/pandas/tests/test_indexing.py
@@ -16,7 +16,7 @@
MultiIndex, DatetimeIndex, Timestamp)
from pandas.util.testing import (assert_almost_equal, assert_series_equal,
assert_frame_equal, assert_panel_equal)
-from pandas import compat
+from pandas import compat, concat
import pandas.util.testing as tm
import pandas.lib as lib
@@ -359,6 +359,29 @@ def test_iloc_getitem_slice(self):
self.check_result('slice', 'iloc', slice(1,3), 'ix', { 0 : [2,4], 1: [3,6], 2: [4,8] }, typs = ['ints'])
self.check_result('slice', 'iloc', slice(1,3), 'indexer', slice(1,3), typs = ['labels','mixed','ts','floats','empty'], fails = IndexError)
+ def test_iloc_getitem_slice_dups(self):
+
+ df1 = DataFrame(np.random.randn(10,4),columns=['A','A','B','B'])
+ df2 = DataFrame(np.random.randint(0,10,size=20).reshape(10,2),columns=['A','C'])
+
+ # axis=1
+ df = concat([df1,df2],axis=1)
+ assert_frame_equal(df.iloc[:,:4],df1)
+ assert_frame_equal(df.iloc[:,4:],df2)
+
+ df = concat([df2,df1],axis=1)
+ assert_frame_equal(df.iloc[:,:2],df2)
+ assert_frame_equal(df.iloc[:,2:],df1)
+
+ assert_frame_equal(df.iloc[:,0:3],concat([df2,df1.iloc[:,[0]]],axis=1))
+
+ # axis=0
+ df = concat([df,df],axis=0)
+ assert_frame_equal(df.iloc[0:10,:2],df2)
+ assert_frame_equal(df.iloc[0:10,2:],df1)
+ assert_frame_equal(df.iloc[10:,:2],df2)
+ assert_frame_equal(df.iloc[10:,2:],df1)
+
def test_iloc_getitem_out_of_bounds(self):
# out-of-bounds slice
diff --git a/pandas/tools/merge.py b/pandas/tools/merge.py
index 765dbc07b464f..d7fedecdb0ef2 100644
--- a/pandas/tools/merge.py
+++ b/pandas/tools/merge.py
@@ -649,6 +649,7 @@ def __init__(self, data_list, join_index, indexers, axis=1, copy=True):
for data, indexer in zip(data_list, indexers):
if not data.is_consolidated():
data = data.consolidate()
+ data._set_ref_locs()
self.units.append(_JoinUnit(data.blocks, indexer))
self.join_index = join_index
@@ -682,7 +683,6 @@ def get_result(self):
blockmaps = self._prepare_blocks()
kinds = _get_merge_block_kinds(blockmaps)
- result_is_unique = self.result_axes[0].is_unique
result_blocks = []
# maybe want to enable flexible copying <-- what did I mean?
@@ -692,23 +692,28 @@ def get_result(self):
if klass in mapping:
klass_blocks.extend((unit, b) for b in mapping[klass])
res_blk = self._get_merged_block(klass_blocks)
-
- # if we have a unique result index, need to clear the _ref_locs
- # a non-unique is set as we are creating
- if result_is_unique:
- res_blk.set_ref_locs(None)
-
result_blocks.append(res_blk)
return BlockManager(result_blocks, self.result_axes)
def _get_merged_block(self, to_merge):
if len(to_merge) > 1:
+
+ # placement set here
return self._merge_blocks(to_merge)
else:
unit, block = to_merge[0]
- return unit.reindex_block(block, self.axis,
- self.result_items, copy=self.copy)
+ blk = unit.reindex_block(block, self.axis,
+ self.result_items, copy=self.copy)
+
+ # set placement / invalidate on a unique result
+ if self.result_items.is_unique and blk._ref_locs is not None:
+ if not self.copy:
+ blk = blk.copy()
+ blk.set_ref_locs(None)
+
+ return blk
+
def _merge_blocks(self, merge_chunks):
"""
@@ -736,7 +741,18 @@ def _merge_blocks(self, merge_chunks):
# does not sort
new_block_items = _concat_indexes([b.items for _, b in merge_chunks])
- return make_block(out, new_block_items, self.result_items)
+
+ # need to set placement if we have a non-unique result
+ # calculate by the existing placement plus the offset in the result set
+ placement = None
+ if not self.result_items.is_unique:
+ nchunks = len(merge_chunks)
+ offsets = np.array([0] + [ len(self.result_items) / nchunks ] * (nchunks-1)).cumsum()
+ placement = []
+ for (unit, blk), offset in zip(merge_chunks,offsets):
+ placement.extend(blk.ref_locs+offset)
+
+ return make_block(out, new_block_items, self.result_items, placement=placement)
class _JoinUnit(object):
@@ -992,6 +1008,7 @@ def _prepare_blocks(self):
blockmaps = []
for data in reindexed_data:
data = data.consolidate()
+ data._set_ref_locs()
blockmaps.append(data.get_block_map(typ='dict'))
return blockmaps, reindexed_data
@@ -1063,7 +1080,10 @@ def _concat_blocks(self, blocks):
# or maybe would require performance test)
raise PandasError('dtypes are not consistent throughout '
'DataFrames')
- return make_block(concat_values, blocks[0].items, self.new_axes[0])
+ return make_block(concat_values,
+ blocks[0].items,
+ self.new_axes[0],
+ placement=blocks[0]._ref_locs)
else:
offsets = np.r_[0, np.cumsum([len(x._data.axes[0]) for
diff --git a/pandas/tools/tests/test_merge.py b/pandas/tools/tests/test_merge.py
index 5cfe22781f362..f7eb3c125db61 100644
--- a/pandas/tools/tests/test_merge.py
+++ b/pandas/tools/tests/test_merge.py
@@ -1396,6 +1396,54 @@ def test_crossed_dtypes_weird_corner(self):
[df, df2], keys=['one', 'two'], names=['first', 'second'])
self.assertEqual(result.index.names, ('first', 'second'))
+ def test_dups_index(self):
+ # GH 4771
+
+ # single dtypes
+ df = DataFrame(np.random.randint(0,10,size=40).reshape(10,4),columns=['A','A','C','C'])
+
+ result = concat([df,df],axis=1)
+ assert_frame_equal(result.iloc[:,:4],df)
+ assert_frame_equal(result.iloc[:,4:],df)
+
+ result = concat([df,df],axis=0)
+ assert_frame_equal(result.iloc[:10],df)
+ assert_frame_equal(result.iloc[10:],df)
+
+ # multi dtypes
+ df = concat([DataFrame(np.random.randn(10,4),columns=['A','A','B','B']),
+ DataFrame(np.random.randint(0,10,size=20).reshape(10,2),columns=['A','C'])],
+ axis=1)
+
+ result = concat([df,df],axis=1)
+ assert_frame_equal(result.iloc[:,:6],df)
+ assert_frame_equal(result.iloc[:,6:],df)
+
+ result = concat([df,df],axis=0)
+ assert_frame_equal(result.iloc[:10],df)
+ assert_frame_equal(result.iloc[10:],df)
+
+ # append
+ result = df.iloc[0:8,:].append(df.iloc[8:])
+ assert_frame_equal(result, df)
+
+ result = df.iloc[0:8,:].append(df.iloc[8:9]).append(df.iloc[9:10])
+ assert_frame_equal(result, df)
+
+ expected = concat([df,df],axis=0)
+ result = df.append(df)
+ assert_frame_equal(result, expected)
+
+ def test_join_dups(self):
+ df = concat([DataFrame(np.random.randn(10,4),columns=['A','A','B','B']),
+ DataFrame(np.random.randint(0,10,size=20).reshape(10,2),columns=['A','C'])],
+ axis=1)
+
+ expected = concat([df,df],axis=1)
+ result = df.join(df,rsuffix='_2')
+ result.columns = expected.columns
+ assert_frame_equal(result, expected)
+
def test_handle_empty_objects(self):
df = DataFrame(np.random.randn(10, 4), columns=list('abcd'))
| closes #4771
TST/BUG: Bug in iloc with a slice index failing (GH4771)
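The round trip the new tests cover can be sketched as follows (the column layout mirrors `test_dups_index`; the values themselves are random):

```python
import numpy as np
import pandas as pd

# mixed dtypes with duplicate column labels
df = pd.concat([pd.DataFrame(np.random.randn(10, 4),
                             columns=['A', 'A', 'B', 'B']),
                pd.DataFrame(np.random.randint(0, 10, size=(10, 2)),
                             columns=['A', 'C'])], axis=1)

# concatenating the frame on itself along axis=0 and slicing it
# back with iloc should round-trip exactly
stacked = pd.concat([df, df], axis=0)
top, bottom = stacked.iloc[:10], stacked.iloc[10:]
```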
| https://api.github.com/repos/pandas-dev/pandas/pulls/4772 | 2013-09-07T19:16:11Z | 2013-09-07T20:57:14Z | 2013-09-07T20:57:14Z | 2014-06-13T14:32:04Z |
REF/BUG/ENH/API: refactor read_html to use TextParser | diff --git a/doc/source/release.rst b/doc/source/release.rst
index 4f4681b112664..78236bbf821dd 100644
--- a/doc/source/release.rst
+++ b/doc/source/release.rst
@@ -167,6 +167,8 @@ Improvements to existing features
- Improve support for converting R datasets to pandas objects (more
informative index for timeseries and numeric, support for factors, dist, and
high-dimensional arrays).
+ - :func:`~pandas.read_html` now supports the ``parse_dates``,
+ ``tupleize_cols`` and ``thousands`` parameters (:issue:`4770`).
API Changes
~~~~~~~~~~~
@@ -373,6 +375,8 @@ See :ref:`Internal Refactoring<whatsnew_0130.refactoring>`
``core/generic.py`` (:issue:`4435`).
- Refactor cum objects to core/generic.py (:issue:`4435`), note that these have a more numpy-like
function signature.
+ - :func:`~pandas.read_html` now uses ``TextParser`` to parse HTML data from
+ bs4/lxml (:issue:`4770`).
.. _release.bug_fixes-0.13.0:
@@ -538,6 +542,15 @@ Bug Fixes
- Make sure series-series boolean comparions are label based (:issue:`4947`)
- Bug in multi-level indexing with a Timestamp partial indexer (:issue:`4294`)
- Tests/fix for multi-index construction of an all-nan frame (:isue:`4078`)
+ - Fixed a bug where :func:`~pandas.read_html` wasn't correctly inferring
+ values of tables with commas (:issue:`5029`)
+ - Fixed a bug where :func:`~pandas.read_html` wasn't providing a stable
+ ordering of returned tables (:issue:`4770`, :issue:`5029`).
+ - Fixed a bug where :func:`~pandas.read_html` was incorrectly parsing when
+ passed ``index_col=0`` (:issue:`5066`).
+ - Fixed a bug where :func:`~pandas.read_html` was incorrectly inferring the
+ type of headers (:issue:`5048`).
+
pandas 0.12.0
-------------
diff --git a/pandas/io/html.py b/pandas/io/html.py
index df94e0ffa2e79..96bedbf390af6 100644
--- a/pandas/io/html.py
+++ b/pandas/io/html.py
@@ -7,15 +7,18 @@
import re
import numbers
import collections
+import warnings
from distutils.version import LooseVersion
import numpy as np
-from pandas import DataFrame, MultiIndex, isnull
from pandas.io.common import _is_url, urlopen, parse_url
-from pandas.compat import range, lrange, lmap, u, map
-from pandas import compat
+from pandas.io.parsers import TextParser
+from pandas.compat import (lrange, lmap, u, string_types, iteritems, text_type,
+ raise_with_traceback)
+from pandas.core import common as com
+from pandas import Series
try:
@@ -45,7 +48,7 @@
#############
# READ HTML #
#############
-_RE_WHITESPACE = re.compile(r'([\r\n]+|\s{2,})')
+_RE_WHITESPACE = re.compile(r'[\r\n]+|\s{2,}')
def _remove_whitespace(s, regex=_RE_WHITESPACE):
@@ -67,7 +70,7 @@ def _remove_whitespace(s, regex=_RE_WHITESPACE):
return regex.sub(' ', s.strip())
-def _get_skiprows_iter(skiprows):
+def _get_skiprows(skiprows):
"""Get an iterator given an integer, slice or container.
Parameters
@@ -80,11 +83,6 @@ def _get_skiprows_iter(skiprows):
TypeError
* If `skiprows` is not a slice, integer, or Container
- Raises
- ------
- TypeError
- * If `skiprows` is not a slice, integer, or Container
-
Returns
-------
it : iterable
@@ -92,13 +90,12 @@ def _get_skiprows_iter(skiprows):
"""
if isinstance(skiprows, slice):
return lrange(skiprows.start or 0, skiprows.stop, skiprows.step or 1)
- elif isinstance(skiprows, numbers.Integral):
- return lrange(skiprows)
- elif isinstance(skiprows, collections.Container):
+ elif isinstance(skiprows, numbers.Integral) or com.is_list_like(skiprows):
return skiprows
- else:
- raise TypeError('{0} is not a valid type for skipping'
- ' rows'.format(type(skiprows)))
+ elif skiprows is None:
+ return 0
+ raise TypeError('%r is not a valid type for skipping rows' %
+ type(skiprows).__name__)
def _read(io):
@@ -120,11 +117,10 @@ def _read(io):
elif os.path.isfile(io):
with open(io) as f:
raw_text = f.read()
- elif isinstance(io, compat.string_types):
+ elif isinstance(io, string_types):
raw_text = io
else:
- raise TypeError("Cannot read object of type "
- "'{0.__class__.__name__!r}'".format(io))
+ raise TypeError("Cannot read object of type %r" % type(io).__name__)
return raw_text
@@ -194,12 +190,6 @@ def _parse_raw_data(self, rows):
A callable that takes a row node as input and returns a list of the
column node in that row. This must be defined by subclasses.
- Raises
- ------
- AssertionError
- * If `text_getter` is not callable
- * If `column_finder` is not callable
-
Returns
-------
data : list of list of strings
@@ -254,7 +244,7 @@ def _parse_tables(self, doc, match, attrs):
Raises
------
- AssertionError
+ ValueError
* If `match` does not match any text in the document.
Returns
@@ -406,25 +396,28 @@ def _parse_tfoot(self, table):
def _parse_tables(self, doc, match, attrs):
element_name = self._strainer.name
tables = doc.find_all(element_name, attrs=attrs)
+
if not tables:
- # known sporadically working release
- raise AssertionError('No tables found')
+ raise ValueError('No tables found')
- mts = [table.find(text=match) for table in tables]
- matched_tables = [mt for mt in mts if mt is not None]
- tables = list(set(mt.find_parent(element_name)
- for mt in matched_tables))
+ result = []
+ unique_tables = set()
- if not tables:
- raise AssertionError("No tables found matching "
- "'{0}'".format(match.pattern))
- return tables
+ for table in tables:
+ if (table not in unique_tables and
+ table.find(text=match) is not None):
+ result.append(table)
+ unique_tables.add(table)
+
+ if not result:
+ raise ValueError("No tables found matching pattern %r" %
+ match.pattern)
+ return result
def _setup_build_doc(self):
raw_text = _read(self.io)
if not raw_text:
- raise AssertionError('No text parsed from document: '
- '{0}'.format(self.io))
+ raise ValueError('No text parsed from document: %s' % self.io)
return raw_text
def _build_doc(self):
@@ -432,7 +425,7 @@ def _build_doc(self):
return BeautifulSoup(self._setup_build_doc(), features='html5lib')
-def _build_node_xpath_expr(attrs):
+def _build_xpath_expr(attrs):
"""Build an xpath expression to simulate bs4's ability to pass in kwargs to
search for attributes when using the lxml parser.
@@ -450,8 +443,8 @@ def _build_node_xpath_expr(attrs):
if 'class_' in attrs:
attrs['class'] = attrs.pop('class_')
- s = (u("@{k}='{v}'").format(k=k, v=v) for k, v in compat.iteritems(attrs))
- return u('[{0}]').format(' and '.join(s))
+ s = [u("@%s=%r") % (k, v) for k, v in iteritems(attrs)]
+ return u('[%s]') % ' and '.join(s)
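A minimal sketch of the renamed `_build_xpath_expr` helper (unicode wrappers dropped, so it is not the exact patched code): it rewrites bs4's `class_` keyword back to the real `class` attribute, then joins `@attr='value'` predicates with `and`.

```python
def build_xpath_expr(attrs):
    # bs4 uses class_ to dodge the Python keyword; XPath wants plain "class"
    if 'class_' in attrs:
        attrs['class'] = attrs.pop('class_')
    # build one @attr='value' predicate per entry and join with "and"
    s = ["@%s=%r" % (k, v) for k, v in attrs.items()]
    return '[%s]' % ' and '.join(s)
```

`%r` on a string value supplies the quoting, so `{'id': 'table1'}` becomes `[@id='table1']`, ready to append to a table-matching XPath query.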
_re_namespace = {'re': 'http://exslt.org/regular-expressions'}
@@ -491,23 +484,20 @@ def _parse_tr(self, table):
def _parse_tables(self, doc, match, kwargs):
pattern = match.pattern
- # check all descendants for the given pattern
- check_all_expr = u('//*')
- if pattern:
- check_all_expr += u("[re:test(text(), '{0}')]").format(pattern)
-
- # go up the tree until we find a table
- check_table_expr = '/ancestor::table'
- xpath_expr = check_all_expr + check_table_expr
+ # 1. check all descendants for the given pattern and only search tables
+ # 2. go up the tree until we find a table
+ query = '//table//*[re:test(text(), %r)]/ancestor::table'
+ xpath_expr = u(query) % pattern
# if any table attributes were given build an xpath expression to
# search for them
if kwargs:
- xpath_expr += _build_node_xpath_expr(kwargs)
+ xpath_expr += _build_xpath_expr(kwargs)
+
tables = doc.xpath(xpath_expr, namespaces=_re_namespace)
+
if not tables:
- raise AssertionError("No tables found matching regex "
- "'{0}'".format(pattern))
+ raise ValueError("No tables found matching regex %r" % pattern)
return tables
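For illustration, here is the query string the lxml branch composes (the pattern `'Passengers'` is a made-up example): descendants of `<table>` whose text matches the regex, then back up to the enclosing table via `ancestor::table`, using the EXSLT `re:test` extension registered in `_re_namespace`.

```python
pattern = 'Passengers'  # hypothetical example pattern; any regex text works
query = '//table//*[re:test(text(), %r)]/ancestor::table'
xpath_expr = query % pattern
# yields an XPath query restricted to tables containing matching text
```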
def _build_doc(self):
@@ -528,6 +518,7 @@ def _build_doc(self):
"""
from lxml.html import parse, fromstring, HTMLParser
from lxml.etree import XMLSyntaxError
+
parser = HTMLParser(recover=False)
try:
@@ -552,8 +543,8 @@ def _build_doc(self):
scheme = parse_url(self.io).scheme
if scheme not in _valid_schemes:
# lxml can't parse it
- msg = ('{0} is not a valid url scheme, valid schemes are '
- '{1}').format(scheme, _valid_schemes)
+ msg = ('%r is not a valid url scheme, valid schemes are '
+ '%s') % (scheme, _valid_schemes)
raise ValueError(msg)
else:
# something else happened: maybe a faulty connection
@@ -583,101 +574,38 @@ def _parse_raw_tfoot(self, table):
table.xpath(expr)]
-def _data_to_frame(data, header, index_col, infer_types, skiprows):
- """Parse a BeautifulSoup table into a DataFrame.
+def _expand_elements(body):
+ lens = Series(lmap(len, body))
+ lens_max = lens.max()
+ not_max = lens[lens != lens_max]
- Parameters
- ----------
- data : tuple of lists
- The raw data to be placed into a DataFrame. This is a list of lists of
- strings or unicode. If it helps, it can be thought of as a matrix of
- strings instead.
-
- header : int or None
- An integer indicating the row to use for the column header or None
- indicating no header will be used.
+ for ind, length in iteritems(not_max):
+ body[ind] += [np.nan] * (lens_max - length)
- index_col : int or None
- An integer indicating the column to use for the index or None
- indicating no column will be used.
- infer_types : bool
- Whether to convert numbers and dates.
+def _data_to_frame(data, header, index_col, skiprows, infer_types,
+ parse_dates, tupleize_cols, thousands):
+ head, body, _ = data # _ is footer which is rarely used: ignore for now
- skiprows : collections.Container or int or slice
- Iterable used to skip rows.
+ if head:
+ body = [head] + body
- Returns
- -------
- df : DataFrame
- A DataFrame containing the data from `data`
-
- Raises
- ------
- ValueError
- * If `skiprows` is not found in the rows of the parsed DataFrame.
+ if header is None: # special case when a table has <th> elements
+ header = 0
- Raises
- ------
- ValueError
- * If `skiprows` is not found in the rows of the parsed DataFrame.
-
- See Also
- --------
- read_html
-
- Notes
- -----
- The `data` parameter is guaranteed not to be a list of empty lists.
- """
- thead, tbody, tfoot = data
- columns = thead or None
- df = DataFrame(tbody, columns=columns)
+ # fill out elements of body that are "ragged"
+ _expand_elements(body)
- if skiprows is not None:
- it = _get_skiprows_iter(skiprows)
+ tp = TextParser(body, header=header, index_col=index_col,
+ skiprows=_get_skiprows(skiprows),
+ parse_dates=parse_dates, tupleize_cols=tupleize_cols,
+ thousands=thousands)
+ df = tp.read()
- try:
- df = df.drop(it)
- except ValueError:
- raise ValueError('Labels {0} not found when trying to skip'
- ' rows'.format(it))
-
- # convert to numbers/dates where possible
- # must be sequential since dates trump numbers if both args are given
- if infer_types:
- df = df.convert_objects(convert_numeric=True)
+ if infer_types: # TODO: rm this code so infer_types has no effect in 0.14
df = df.convert_objects(convert_dates='coerce')
-
- if header is not None:
- header_rows = df.iloc[header]
-
- if header_rows.ndim == 2:
- names = header_rows.index
- df.columns = MultiIndex.from_arrays(header_rows.values,
- names=names)
- else:
- df.columns = header_rows
-
- df = df.drop(df.index[header])
-
- if index_col is not None:
- cols = df.columns[index_col]
-
- try:
- cols = cols.tolist()
- except AttributeError:
- pass
-
- # drop by default
- df.set_index(cols, inplace=True)
- if df.index.nlevels == 1:
- if isnull(df.index.name) or not df.index.name:
- df.index.name = None
- else:
- names = [name or None for name in df.index.names]
- df.index = MultiIndex.from_tuples(df.index.values, names=names)
-
+ else:
+ df = df.applymap(text_type)
return df
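The "fill out ragged elements" step that `_data_to_frame` now delegates to `_expand_elements` can be sketched without pandas (the `Series`-based length bookkeeping in the patch is replaced here by a plain `max`): pad every short row with NaN so the body is rectangular before handing it to `TextParser`.

```python
def expand_elements(body):
    # pad short rows with NaN so the parsed table is rectangular
    max_len = max(len(row) for row in body)
    for row in body:
        row.extend([float('nan')] * (max_len - len(row)))

rows = [['a', 'b', 'c'], ['d']]
expand_elements(rows)
# rows[1] is now ['d', nan, nan]
```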
@@ -701,15 +629,15 @@ def _parser_dispatch(flavor):
Raises
------
- AssertionError
+ ValueError
* If `flavor` is not a valid backend.
ImportError
* If you do not have the requested `flavor`
"""
valid_parsers = list(_valid_parsers.keys())
if flavor not in valid_parsers:
- raise AssertionError('"{0!r}" is not a valid flavor, valid flavors are'
- ' {1}'.format(flavor, valid_parsers))
+ raise ValueError('%r is not a valid flavor, valid flavors are %s' %
+ (flavor, valid_parsers))
if flavor in ('bs4', 'html5lib'):
if not _HAS_HTML5LIB:
@@ -717,46 +645,54 @@ def _parser_dispatch(flavor):
if not _HAS_BS4:
raise ImportError("bs4 not found please install it")
if bs4.__version__ == LooseVersion('4.2.0'):
- raise AssertionError("You're using a version"
- " of BeautifulSoup4 (4.2.0) that has been"
- " known to cause problems on certain"
- " operating systems such as Debian. "
- "Please install a version of"
- " BeautifulSoup4 != 4.2.0, both earlier"
- " and later releases will work.")
+ raise ValueError("You're using a version"
+ " of BeautifulSoup4 (4.2.0) that has been"
+ " known to cause problems on certain"
+ " operating systems such as Debian. "
+ "Please install a version of"
+ " BeautifulSoup4 != 4.2.0, both earlier"
+ " and later releases will work.")
else:
if not _HAS_LXML:
raise ImportError("lxml not found please install it")
return _valid_parsers[flavor]
-def _validate_parser_flavor(flavor):
+def _print_as_set(s):
+ return '{%s}' % ', '.join([com.pprint_thing(el) for el in s])
+
+
+def _validate_flavor(flavor):
if flavor is None:
- flavor = ['lxml', 'bs4']
- elif isinstance(flavor, compat.string_types):
- flavor = [flavor]
+ flavor = 'lxml', 'bs4'
+ elif isinstance(flavor, string_types):
+ flavor = flavor,
elif isinstance(flavor, collections.Iterable):
- if not all(isinstance(flav, compat.string_types) for flav in flavor):
- raise TypeError('{0} is not an iterable of strings'.format(flavor))
+ if not all(isinstance(flav, string_types) for flav in flavor):
+ raise TypeError('Object of type %r is not an iterable of strings' %
+ type(flavor).__name__)
else:
- raise TypeError('{0} is not a valid "flavor"'.format(flavor))
-
- flavor = list(flavor)
- valid_flavors = list(_valid_parsers.keys())
-
- if not set(flavor) & set(valid_flavors):
- raise ValueError('{0} is not a valid set of flavors, valid flavors are'
- ' {1}'.format(flavor, valid_flavors))
+ fmt = '{0!r}' if isinstance(flavor, string_types) else '{0}'
+ fmt += ' is not a valid flavor'
+ raise ValueError(fmt.format(flavor))
+
+ flavor = tuple(flavor)
+ valid_flavors = set(_valid_parsers)
+ flavor_set = set(flavor)
+
+ if not flavor_set & valid_flavors:
+ raise ValueError('%s is not a valid set of flavors, valid flavors are '
+ '%s' % (_print_as_set(flavor_set),
+ _print_as_set(valid_flavors)))
return flavor
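The `_validate_flavor` rewrite above normalizes three input shapes into one tuple. A simplified sketch (collapsing the patch's separate `TypeError`/`ValueError` branches into one check, and using `str` in place of `compat.string_types`):

```python
def validate_flavor(flavor, valid=('lxml', 'bs4', 'html5lib')):
    # normalize None / single string / iterable into a tuple of strings
    if flavor is None:
        flavor = 'lxml', 'bs4'          # default: try lxml, fall back to bs4
    elif isinstance(flavor, str):
        flavor = (flavor,)
    else:
        flavor = tuple(flavor)
    # at least one requested flavor must be a known parser backend
    if not set(flavor) & set(valid):
        raise ValueError('%s is not a valid set of flavors' % (flavor,))
    return flavor
```

Returning a tuple rather than a list matches the patch's change from `flavor = ['lxml', 'bs4']` to `flavor = 'lxml', 'bs4'`, keeping the value immutable through the retry loop in `_parse`.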
-def _parse(flavor, io, match, header, index_col, skiprows, infer_types, attrs):
- # bonus: re.compile is idempotent under function iteration so you can pass
- # a compiled regex to it and it will return itself
- flavor = _validate_parser_flavor(flavor)
- compiled_match = re.compile(match)
+def _parse(flavor, io, match, header, index_col, skiprows, infer_types,
+ parse_dates, tupleize_cols, thousands, attrs):
+ flavor = _validate_flavor(flavor)
+ compiled_match = re.compile(match) # you can pass a compiled regex here
- # ugly hack because python 3 DELETES the exception variable!
+ # hack around python 3 deleting the exception variable
retained = None
for flav in flavor:
parser = _parser_dispatch(flav)
@@ -769,25 +705,26 @@ def _parse(flavor, io, match, header, index_col, skiprows, infer_types, attrs):
else:
break
else:
- raise retained
+ raise_with_traceback(retained)
- return [_data_to_frame(table, header, index_col, infer_types, skiprows)
+ return [_data_to_frame(table, header, index_col, skiprows, infer_types,
+ parse_dates, tupleize_cols, thousands)
for table in tables]
def read_html(io, match='.+', flavor=None, header=None, index_col=None,
- skiprows=None, infer_types=True, attrs=None):
- r"""Read an HTML table into a DataFrame.
+ skiprows=None, infer_types=None, attrs=None, parse_dates=False,
+ tupleize_cols=False, thousands=','):
+ r"""Read HTML tables into a ``list`` of ``DataFrame`` objects.
Parameters
----------
io : str or file-like
- A string or file like object that can be either a url, a file-like
- object, or a raw string containing HTML. Note that lxml only accepts
- the http, ftp and file url protocols. If you have a URI that starts
- with ``'https'`` you might removing the ``'s'``.
+ A URL, a file-like object, or a raw string containing HTML. Note that
+ lxml only accepts the http, ftp and file url protocols. If you have a
+ URL that starts with ``'https'`` you might try removing the ``'s'``.
- match : str or regex, optional, default '.+'
+ match : str or compiled regular expression, optional
The set of tables containing text matching this regex or string will be
returned. Unless the HTML is extremely simple you will probably need to
pass a non-empty string here. Defaults to '.+' (match any non-empty
@@ -795,44 +732,30 @@ def read_html(io, match='.+', flavor=None, header=None, index_col=None,
This value is converted to a regular expression so that there is
consistent behavior between Beautiful Soup and lxml.
- flavor : str, container of strings, default ``None``
- The parsing engine to use under the hood. 'bs4' and 'html5lib' are
- synonymous with each other, they are both there for backwards
- compatibility. The default of ``None`` tries to use ``lxml`` to parse
- and if that fails it falls back on ``bs4`` + ``html5lib``.
+ flavor : str or None, container of strings
+ The parsing engine to use. 'bs4' and 'html5lib' are synonymous with
+ each other, they are both there for backwards compatibility. The
+ default of ``None`` tries to use ``lxml`` to parse and if that fails it
+ falls back on ``bs4`` + ``html5lib``.
- header : int or array-like or None, optional, default ``None``
- The row (or rows for a MultiIndex) to use to make the columns headers.
- Note that this row will be removed from the data.
+ header : int or list-like or None, optional
+ The row (or list of rows for a :class:`~pandas.MultiIndex`) to use to
+ make the columns headers.
- index_col : int or array-like or None, optional, default ``None``
- The column to use to make the index. Note that this column will be
- removed from the data.
+ index_col : int or list-like or None, optional
+ The column (or list of columns) to use to create the index.
- skiprows : int or collections.Container or slice or None, optional, default ``None``
- If an integer is given then skip this many rows after parsing the
- column header. If a sequence of integers is given skip those specific
- rows (0-based). Note that
+ skiprows : int or list-like or slice or None, optional
+ 0-based. Number of rows to skip after parsing the column header. If a
+ sequence of integers or a slice is given, will skip the rows indexed by
+ that sequence. Note that a single element sequence means 'skip the nth
+ row' whereas an integer means 'skip n rows'.
- .. code-block:: python
-
- skiprows == 0
-
- yields the same result as
-
- .. code-block:: python
+ infer_types : bool, optional
+ This option is deprecated in 0.13, and will have no effect in 0.14. It
+ defaults to ``True``.
- skiprows is None
-
- If `skiprows` is a positive integer, say :math:`n`, then
- it is treated as "skip :math:`n` rows", *not* as "skip the
- :math:`n^\textrm{th}` row".
-
- infer_types : bool, optional, default ``True``
- Whether to convert numeric types and date-appearing strings to numbers
- and dates, respectively.
-
- attrs : dict or None, optional, default ``None``
+ attrs : dict or None, optional
This is a dictionary of attributes that you can pass to use to identify
the table in the HTML. These are not checked for validity before being
passed to lxml or Beautiful Soup. However, these attributes must be
@@ -858,33 +781,38 @@ def read_html(io, match='.+', flavor=None, header=None, index_col=None,
<http://www.w3.org/TR/html-markup/table.html>`__. It contains the
latest information on table attributes for the modern web.
+ parse_dates : bool, optional
+ See :func:`~pandas.read_csv` for details.
+
+ tupleize_cols : bool, optional
+ If ``False`` try to parse multiple header rows into a
+ :class:`~pandas.MultiIndex`, otherwise return raw tuples. Defaults to
+ ``False``.
+
+ thousands : str, optional
+ Separator to use to parse thousands. Defaults to ``','``.
+
Returns
-------
dfs : list of DataFrames
- A list of DataFrames, each of which is the parsed data from each of the
- tables on the page.
Notes
-----
- Before using this function you should probably read the :ref:`gotchas about
- the parser libraries that this function uses <html-gotchas>`.
+ Before using this function you should read the :ref:`gotchas about the
+ HTML parsing libraries <html-gotchas>`.
- There's as little cleaning of the data as possible due to the heterogeneity
- and general disorder of HTML on the web.
+ Expect to do some cleanup after you call this function. For example, you
+ might need to manually assign column names if the column names are
+ converted to NaN when you pass the `header=0` argument. We try to assume as
+ little as possible about the structure of the table and push the
+ idiosyncrasies of the HTML contained in the table to the user.
- Expect some cleanup after you call this function. For example,
- you might need to pass `infer_types=False` and perform manual conversion if
- the column names are converted to NaN when you pass the `header=0`
- argument. We try to assume as little as possible about the structure of the
- table and push the idiosyncrasies of the HTML contained in the table to
- you, the user.
+ This function searches for ``<table>`` elements and only for ``<tr>``
+ and ``<th>`` rows and ``<td>`` elements within each ``<tr>`` or ``<th>``
+ element in the table. ``<td>`` stands for "table data".
- This function only searches for <table> elements and only for <tr> and <th>
- rows and <td> elements within those rows. This could be extended by
- subclassing one of the parser classes contained in :mod:`pandas.io.html`.
-
- Similar to :func:`read_csv` the `header` argument is applied **after**
- `skiprows` is applied.
+ Similar to :func:`~pandas.read_csv` the `header` argument is applied
+ **after** `skiprows` is applied.
This function will *always* return a list of :class:`DataFrame` *or*
it will fail, e.g., it will *not* return an empty list.
@@ -892,12 +820,21 @@ def read_html(io, match='.+', flavor=None, header=None, index_col=None,
Examples
--------
See the :ref:`read_html documentation in the IO section of the docs
- <io.read_html>` for many examples of reading HTML.
+ <io.read_html>` for some examples of reading in HTML tables.
+
+ See Also
+ --------
+ pandas.read_csv
"""
+ if infer_types is not None:
+ warnings.warn("infer_types will have no effect in 0.14", FutureWarning)
+ else:
+ infer_types = True # TODO: remove in 0.14
+
# Type check here. We don't want to parse only to fail because of an
# invalid value of an integer skiprows.
if isinstance(skiprows, numbers.Integral) and skiprows < 0:
- raise AssertionError('cannot skip rows starting from the end of the '
- 'data (you passed a negative value)')
+ raise ValueError('cannot skip rows starting from the end of the '
+ 'data (you passed a negative value)')
return _parse(flavor, io, match, header, index_col, skiprows, infer_types,
- attrs)
+ parse_dates, tupleize_cols, thousands, attrs)
diff --git a/pandas/io/parsers.py b/pandas/io/parsers.py
index 3ef3cbf856fef..8a2f249f6af06 100644
--- a/pandas/io/parsers.py
+++ b/pandas/io/parsers.py
@@ -606,16 +606,10 @@ def _failover_to_python(self):
raise NotImplementedError
def read(self, nrows=None):
- suppressed_warnings = False
if nrows is not None:
if self.options.get('skip_footer'):
raise ValueError('skip_footer not supported for iteration')
- # # XXX hack
- # if isinstance(self._engine, CParserWrapper):
- # suppressed_warnings = True
- # self._engine.set_error_bad_lines(False)
-
ret = self._engine.read(nrows)
if self.options.get('as_recarray'):
@@ -710,7 +704,6 @@ def _should_parse_dates(self, i):
else:
return (j in self.parse_dates) or (name in self.parse_dates)
-
def _extract_multi_indexer_columns(self, header, index_names, col_names, passed_names=False):
""" extract and return the names, index_names, col_names
header is a list-of-lists returned from the parsers """
@@ -728,12 +721,10 @@ def _extract_multi_indexer_columns(self, header, index_names, col_names, passed_
ic = [ ic ]
sic = set(ic)
- orig_header = list(header)
-
# clean the index_names
index_names = header.pop(-1)
- (index_names, names,
- index_col) = _clean_index_names(index_names, self.index_col)
+ index_names, names, index_col = _clean_index_names(index_names,
+ self.index_col)
# extract the columns
field_count = len(header[0])
@@ -766,7 +757,7 @@ def _maybe_make_multi_index_columns(self, columns, col_names=None):
return columns
def _make_index(self, data, alldata, columns, indexnamerow=False):
- if not _is_index_col(self.index_col) or len(self.index_col) == 0:
+ if not _is_index_col(self.index_col) or not self.index_col:
index = None
elif not self._has_complex_date_col:
@@ -1430,7 +1421,7 @@ def read(self, rows=None):
self._first_chunk = False
columns = list(self.orig_names)
- if len(content) == 0: # pragma: no cover
+ if not len(content): # pragma: no cover
# DataFrame with the right metadata, even though it's length 0
return _get_empty_meta(self.orig_names,
self.index_col,
@@ -1468,8 +1459,8 @@ def _convert_data(self, data):
col = self.orig_names[col]
clean_conv[col] = f
- return self._convert_to_ndarrays(data, self.na_values, self.na_fvalues, self.verbose,
- clean_conv)
+ return self._convert_to_ndarrays(data, self.na_values, self.na_fvalues,
+ self.verbose, clean_conv)
def _infer_columns(self):
names = self.names
@@ -1478,16 +1469,15 @@ def _infer_columns(self):
header = self.header
# we have a mi columns, so read and extra line
- if isinstance(header,(list,tuple,np.ndarray)):
+ if isinstance(header, (list, tuple, np.ndarray)):
have_mi_columns = True
- header = list(header) + [header[-1]+1]
+ header = list(header) + [header[-1] + 1]
else:
have_mi_columns = False
- header = [ header ]
+ header = [header]
columns = []
for level, hr in enumerate(header):
-
if len(self.buf) > 0:
line = self.buf[0]
else:
@@ -1521,10 +1511,11 @@ def _infer_columns(self):
if names is not None:
if len(names) != len(columns[0]):
- raise Exception('Number of passed names did not match '
- 'number of header fields in the file')
+ raise ValueError('Number of passed names did not match '
+ 'number of header fields in the file')
if len(columns) > 1:
- raise Exception('Cannot pass names with multi-index columns')
+ raise TypeError('Cannot pass names with multi-index '
+ 'columns')
columns = [ names ]
else:
diff --git a/pandas/io/tests/data/macau.html b/pandas/io/tests/data/macau.html
new file mode 100644
index 0000000000000..be62b3221518d
--- /dev/null
+++ b/pandas/io/tests/data/macau.html
@@ -0,0 +1,3691 @@
+<!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN" "http://www.w3.org/TR/html4/loose.dtd">
+<!-- saved from url=(0037)http://www.camacau.com/statistic_list -->
+<html><head><meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
+
+
+<link rel="stylesheet" type="text/css" href="./macau_files/style.css" media="screen">
+<script type="text/javascript" src="./macau_files/jquery.js"></script>
+
+
+
+
+
+<script type="text/javascript">
+
+function slideSwitch() {
+
+ var $active = $('#banner1 a.active');
+
+ var totalTmp=document.getElementById("bannerTotal").innerHTML;
+
+ var randomTmp=Math.floor(Math.random()*totalTmp+1);
+
+ var $next = $('#image'+randomTmp).length?$('#image'+randomTmp):$('#banner1 a:first');
+
+ if($next.attr("id")==$active.attr("id")){
+
+ $next = $active.next().length ? $active.next():$('#banner1 a:first');
+ }
+
+ $active.removeClass("active");
+
+ $next.addClass("active").show();
+
+ $active.hide();
+
+}
+
+jQuery(function() {
+
+ var totalTmp=document.getElementById("bannerTotal").innerHTML;
+ if(totalTmp>1){
+ setInterval( "slideSwitch()", 5000 );
+ }
+
+});
+
+</script>
+<script type="text/javascript">
+function close_notice(){
+jQuery("#tbNotice").hide();
+}
+</script>
+
+<title>Traffic Statistics - Passengers</title>
+
+<!-- GOOGLE STATISTICS
+<script type="text/javascript">
+
+ var _gaq = _gaq || [];
+ _gaq.push(['_setAccount', 'UA-24989877-2']);
+ _gaq.push(['_trackPageview']);
+
+ (function() {
+ var ga = document.createElement('script'); ga.type = 'text/javascript'; ga.async = true;
+ ga.src = ('https:' == document.location.protocol ? 'https://ssl' : 'http://www') + '.google-analytics.com/ga.js';
+ var s = document.getElementsByTagName('script')[0]; s.parentNode.insertBefore(ga, s);
+ })();
+
+</script>
+-->
+<style type="text/css"></style><style type="text/css"></style><script id="fireplug-jssdk" src="./macau_files/all.js"></script><style type="text/css">.fireplug-credit-widget-overlay{z-index:9999999999999999999;background-color:rgba(91,91,91,0.6)}.fireplug-credit-widget-overlay div,.fireplug-credit-widget-overlay span,.fireplug-credit-widget-overlay applet,.fireplug-credit-widget-overlay object,.fireplug-credit-widget-overlay iframe,.fireplug-credit-widget-overlay h1,.fireplug-credit-widget-overlay h2,.fireplug-credit-widget-overlay h3,.fireplug-credit-widget-overlay h4,.fireplug-credit-widget-overlay h5,.fireplug-credit-widget-overlay h6,.fireplug-credit-widget-overlay p,.fireplug-credit-widget-overlay blockquote,.fireplug-credit-widget-overlay pre,.fireplug-credit-widget-overlay a,.fireplug-credit-widget-overlay abbr,.fireplug-credit-widget-overlay acronym,.fireplug-credit-widget-overlay address,.fireplug-credit-widget-overlay big,.fireplug-credit-widget-overlay cite,.fireplug-credit-widget-overlay code,.fireplug-credit-widget-overlay del,.fireplug-credit-widget-overlay dfn,.fireplug-credit-widget-overlay em,.fireplug-credit-widget-overlay img,.fireplug-credit-widget-overlay ins,.fireplug-credit-widget-overlay kbd,.fireplug-credit-widget-overlay q,.fireplug-credit-widget-overlay s,.fireplug-credit-widget-overlay samp,.fireplug-credit-widget-overlay small,.fireplug-credit-widget-overlay strike,.fireplug-credit-widget-overlay strong,.fireplug-credit-widget-overlay sub,.fireplug-credit-widget-overlay sup,.fireplug-credit-widget-overlay tt,.fireplug-credit-widget-overlay var,.fireplug-credit-widget-overlay b,.fireplug-credit-widget-overlay u,.fireplug-credit-widget-overlay i,.fireplug-credit-widget-overlay center,.fireplug-credit-widget-overlay dl,.fireplug-credit-widget-overlay dt,.fireplug-credit-widget-overlay dd,.fireplug-credit-widget-overlay ol,.fireplug-credit-widget-overlay ul,.fireplug-credit-widget-overlay li,.fireplug-credit-widget-overlay 
fieldset,.fireplug-credit-widget-overlay form,.fireplug-credit-widget-overlay label,.fireplug-credit-widget-overlay legend,.fireplug-credit-widget-overlay table,.fireplug-credit-widget-overlay caption,.fireplug-credit-widget-overlay tbody,.fireplug-credit-widget-overlay tfoot,.fireplug-credit-widget-overlay thead,.fireplug-credit-widget-overlay tr,.fireplug-credit-widget-overlay th,.fireplug-credit-widget-overlay td,.fireplug-credit-widget-overlay article,.fireplug-credit-widget-overlay aside,.fireplug-credit-widget-overlay canvas,.fireplug-credit-widget-overlay details,.fireplug-credit-widget-overlay embed,.fireplug-credit-widget-overlay figure,.fireplug-credit-widget-overlay figcaption,.fireplug-credit-widget-overlay footer,.fireplug-credit-widget-overlay header,.fireplug-credit-widget-overlay hgroup,.fireplug-credit-widget-overlay menu,.fireplug-credit-widget-overlay nav,.fireplug-credit-widget-overlay output,.fireplug-credit-widget-overlay ruby,.fireplug-credit-widget-overlay section,.fireplug-credit-widget-overlay summary,.fireplug-credit-widget-overlay time,.fireplug-credit-widget-overlay mark,.fireplug-credit-widget-overlay audio,.fireplug-credit-widget-overlay video{margin:0;padding:0;border:0;font:inherit;font-size:100%;vertical-align:baseline}.fireplug-credit-widget-overlay table{border-collapse:collapse;border-spacing:0}.fireplug-credit-widget-overlay caption,.fireplug-credit-widget-overlay th,.fireplug-credit-widget-overlay td{text-align:left;font-weight:normal;vertical-align:middle}.fireplug-credit-widget-overlay q,.fireplug-credit-widget-overlay blockquote{quotes:none}.fireplug-credit-widget-overlay q:before,.fireplug-credit-widget-overlay q:after,.fireplug-credit-widget-overlay blockquote:before,.fireplug-credit-widget-overlay blockquote:after{content:"";content:none}.fireplug-credit-widget-overlay a img{border:none}.fireplug-credit-widget-overlay .fireplug-credit-widget-overlay-item{z-index:9999999999999999999;-webkit-box-shadow:#333 0px 0px 
10px;-moz-box-shadow:#333 0px 0px 10px;box-shadow:#333 0px 0px 10px}.fireplug-credit-widget-overlay-body{height:100% !important;overflow:hidden !important}.fp-getcredit iframe{border:none;overflow:hidden;height:20px;width:145px}
+</style></head>
+<body>
+<div id="full">
+<div id="container">
+
+
+<div id="top">
+ <div id="lang">
+
+ <a href="http://www.camacau.com/changeLang?lang=zh_TW&url=/statistic_list">繁體中文</a> |
+ <a href="http://www.camacau.com/changeLang?lang=zh_CN&url=/statistic_list">簡體中文</a>
+ <!--<a href="changeLang?lang=pt_PT&url=/statistic_list" >Portuguese</a>
+ -->
+ </div>
+</div>
+
+<div id="header">
+ <div id="sitelogo"><a href="http://www.camacau.com/index" style="color : #FFF;"><img src="./macau_files/cam h04.jpg"></a></div>
+ <div id="navcontainer">
+ <div id="menu">
+ <div id="search">
+ <form id="searchForm" name="searchForm" action="http://www.camacau.com/search" method="POST">
+ <input id="keyword" name="keyword" type="text">
+ <a href="javascript:document.searchForm.submit();">Search</a> |
+ <a href="mailto:mkd@macau-airport.com">Contact Us</a> |
+ <a href="http://www.camacau.com/sitemap">SiteMap</a> |
+
+ <a href="http://www.camacau.com/rssBuilder.action"><img src="./macau_files/rssIcon.png" alt="RSS">RSS</a>
+ </form></div>
+ </div>
+</div>
+</div>
+<div id="menu2">
+ <div>
+
+
+ <object classid="clsid:D27CDB6E-AE6D-11cf-96B8-444553540000" width="95" height="20" id="FlashID4" title="Main Page">
+ <param name="movie" value="flash/button_index_EN.swf">
+ <param name="quality" value="high">
+ <param name="scale" value="exactfit">
+ <param name="wmode" value="opaque">
+ <param name="swfversion" value="6.0.65.0">
+ <!-- 此 param 標籤會提示使用 Flash Player 6.0 r65 和更新版本的使用者下載最新版本的 Flash Player。如果您不想讓使用者看到這項提示,請將其刪除。 -->
+ <param name="expressinstall" value="flash/expressInstall.swf">
+ <!-- 下一個物件標籤僅供非 IE 瀏覽器使用。因此,請使用 IECC 將其自 IE 隱藏。 -->
+ <!--[if !IE]>-->
+ <object type="application/x-shockwave-flash" data="http://www.camacau.com/flash/button_index_EN.swf" width="92" height="20">
+ <!--<![endif]-->
+ <param name="quality" value="high">
+ <param name="scale" value="exactfit">
+ <param name="wmode" value="opaque">
+ <param name="swfversion" value="6.0.65.0">
+ <param name="expressinstall" value="flash/expressInstall.swf">
+ <!-- 瀏覽器會為使用 Flash Player 6.0 和更早版本的使用者顯示下列替代內容。 -->
+ <div>
+ <h4>這個頁面上的內容需要較新版本的 Adobe Flash Player。</h4>
+ <p><a href="http://www.adobe.com/go/getflashplayer"><img src="./macau_files/get_flash_player.gif" alt="取得 Adobe Flash Player" width="112" height="33"></a></p>
+ </div>
+ <!--[if !IE]>-->
+ </object>
+ <!--<![endif]-->
+ </object>
+
+ <object classid="clsid:D27CDB6E-AE6D-11cf-96B8-444553540000" width="95" height="20" id="FlashID4" title="Our Business">
+ <param name="movie" value="flash/button_our business_EN.swf">
+ <param name="quality" value="high">
+ <param name="scale" value="exactfit">
+ <param name="wmode" value="opaque">
+ <param name="swfversion" value="6.0.65.0">
+ <!-- 此 param 標籤會提示使用 Flash Player 6.0 r65 和更新版本的使用者下載最新版本的 Flash Player。如果您不想讓使用者看到這項提示,請將其刪除。 -->
+ <param name="expressinstall" value="flash/expressInstall.swf">
+ <!-- 下一個物件標籤僅供非 IE 瀏覽器使用。因此,請使用 IECC 將其自 IE 隱藏。 -->
+ <!--[if !IE]>-->
+ <object type="application/x-shockwave-flash" data="http://www.camacau.com/flash/button_our%20business_EN.swf" width="92" height="20">
+ <!--<![endif]-->
+ <param name="quality" value="high">
+ <param name="scale" value="exactfit">
+ <param name="wmode" value="opaque">
+ <param name="swfversion" value="6.0.65.0">
+ <param name="expressinstall" value="flash/expressInstall.swf">
+ <!-- 瀏覽器會為使用 Flash Player 6.0 和更早版本的使用者顯示下列替代內容。 -->
+ <div>
+ <h4>這個頁面上的內容需要較新版本的 Adobe Flash Player。</h4>
+ <p><a href="http://www.adobe.com/go/getflashplayer"><img src="./macau_files/get_flash_player.gif" alt="取得 Adobe Flash Player" width="112" height="33"></a></p>
+ </div>
+ <!--[if !IE]>-->
+ </object>
+ <!--<![endif]-->
+ </object>
+
+ <object id="FlashID" classid="clsid:D27CDB6E-AE6D-11cf-96B8-444553540000" width="95" height="20" title="About Us">
+ <param name="movie" value="flash/button_about us_EN.swf">
+ <param name="quality" value="high">
+ <param name="scale" value="exactfit">
+ <param name="wmode" value="opaque">
+ <param name="swfversion" value="6.0.65.0">
+ <!-- 此 param 標籤會提示使用 Flash Player 6.0 r65 和更新版本的使用者下載最新版本的 Flash Player。如果您不想讓使用者看到這項提示,請將其刪除。 -->
+ <param name="expressinstall" value="flash/expressInstall.swf">
+ <!-- 下一個物件標籤僅供非 IE 瀏覽器使用。因此,請使用 IECC 將其自 IE 隱藏。 -->
+ <!--[if !IE]>-->
+ <object type="application/x-shockwave-flash" data="http://www.camacau.com/flash/button_about%20us_EN.swf" width="92" height="20">
+ <!--<![endif]-->
+ <param name="quality" value="high">
+ <param name="scale" value="exactfit">
+ <param name="wmode" value="opaque">
+ <param name="swfversion" value="6.0.65.0">
+ <param name="expressinstall" value="flash/expressInstall.swf">
+ <!-- The browser shows the following alternative content to users with Flash Player 6.0 and earlier. -->
+ <div>
+ <h4>The content on this page requires a newer version of Adobe Flash Player.</h4>
+ <p><a href="http://www.adobe.com/go/getflashplayer"><img src="./macau_files/get_flash_player.gif" alt="Get Adobe Flash Player" width="112" height="33"></a></p>
+ </div>
+ <!--[if !IE]>-->
+ </object>
+ <!--<![endif]-->
+ </object>
+
+ <object id="FlashID3" classid="clsid:D27CDB6E-AE6D-11cf-96B8-444553540000" width="95" height="20" title="Media Centre">
+ <param name="movie" value="flash/button_media centre_EN.swf">
+ <param name="quality" value="high">
+ <param name="scale" value="exactfit">
+ <param name="wmode" value="opaque">
+ <param name="swfversion" value="6.0.65.0">
+ <!-- This param tag prompts users of Flash Player 6.0 r65 and above to download the latest version of Flash Player. Delete it if you do not want users to see this prompt. -->
+ <param name="expressinstall" value="flash/expressInstall.swf">
+ <!-- The following object tag is for non-IE browsers only, so use IECC to hide it from IE. -->
+ <!--[if !IE]>-->
+ <object type="application/x-shockwave-flash" data="http://www.camacau.com/flash/button_media%20centre_EN.swf" width="92" height="20">
+ <!--<![endif]-->
+ <param name="quality" value="high">
+ <param name="wmode" value="opaque">
+ <param name="scale" value="exactfit">
+ <param name="swfversion" value="6.0.65.0">
+ <param name="expressinstall" value="flash/expressInstall.swf">
+ <!-- The browser shows the following alternative content to users with Flash Player 6.0 and earlier. -->
+ <div>
+ <h4>The content on this page requires a newer version of Adobe Flash Player.</h4>
+ <p><a href="http://www.adobe.com/go/getflashplayer"><img src="./macau_files/get_flash_player.gif" alt="Get Adobe Flash Player" width="112" height="33"></a></p>
+ </div>
+ <!--[if !IE]>-->
+ </object>
+ <!--<![endif]-->
+ </object>
+
+ <object classid="clsid:D27CDB6E-AE6D-11cf-96B8-444553540000" width="95" height="20" id="FlashID5" title="Related Links">
+ <param name="movie" value="flash/button_related links_EN.swf">
+ <param name="quality" value="high">
+ <param name="scale" value="exactfit">
+ <param name="wmode" value="opaque">
+ <param name="swfversion" value="6.0.65.0">
+ <!-- This param tag prompts users of Flash Player 6.0 r65 and above to download the latest version of Flash Player. Delete it if you do not want users to see this prompt. -->
+ <param name="expressinstall" value="flash/expressInstall.swf">
+ <!-- The following object tag is for non-IE browsers only, so use IECC to hide it from IE. -->
+ <!--[if !IE]>-->
+ <object type="application/x-shockwave-flash" data="http://www.camacau.com/flash/button_related%20links_EN.swf" width="92" height="20">
+ <!--<![endif]-->
+ <param name="quality" value="high">
+ <param name="scale" value="exactfit">
+ <param name="wmode" value="opaque">
+ <param name="swfversion" value="6.0.65.0">
+ <param name="expressinstall" value="flash/expressInstall.swf">
+ <!-- The browser shows the following alternative content to users with Flash Player 6.0 and earlier. -->
+ <div>
+ <h4>The content on this page requires a newer version of Adobe Flash Player.</h4>
+ <p><a href="http://www.adobe.com/go/getflashplayer"><img src="./macau_files/get_flash_player.gif" alt="Get Adobe Flash Player" width="112" height="33"></a></p>
+ </div>
+ <!--[if !IE]>-->
+ </object>
+ <!--<![endif]-->
+ </object>
+
+ <object id="FlashID2" classid="clsid:D27CDB6E-AE6D-11cf-96B8-444553540000" width="95" height="20" title="Interactive">
+ <param name="movie" value="flash/button_interactive_EN.swf">
+ <param name="quality" value="high">
+ <param name="scale" value="exactfit">
+ <param name="wmode" value="opaque">
+ <param name="swfversion" value="6.0.65.0">
+ <!-- This param tag prompts users of Flash Player 6.0 r65 and above to download the latest version of Flash Player. Delete it if you do not want users to see this prompt. -->
+ <param name="expressinstall" value="flash/expressInstall.swf">
+ <!-- The following object tag is for non-IE browsers only, so use IECC to hide it from IE. -->
+ <!--[if !IE]>-->
+ <object type="application/x-shockwave-flash" data="http://www.camacau.com/flash/button_interactive_EN.swf" width="92" height="20">
+ <!--<![endif]-->
+ <param name="quality" value="high">
+ <param name="scale" value="exactfit">
+ <param name="wmode" value="opaque">
+ <param name="swfversion" value="6.0.65.0">
+ <param name="expressinstall" value="flash/expressInstall.swf">
+ <!-- The browser shows the following alternative content to users with Flash Player 6.0 and earlier. -->
+ <div>
+ <h4>The content on this page requires a newer version of Adobe Flash Player.</h4>
+ <p><a href="http://www.adobe.com/go/getflashplayer"><img src="./macau_files/get_flash_player.gif" alt="Get Adobe Flash Player" width="112" height="33"></a></p>
+ </div>
+ <!--[if !IE]>-->
+ </object>
+ <!--<![endif]-->
+ </object>
+
+ </div>
+ </div>
+
+
+
+
+
+
+
+<style>
+#slider ul li
+{
+height: 90px;
+list-style:none;
+width:95%;
+font-size:11pt;
+text-indent:2em;
+text-align:justify;
+text-justify:inter-ideograph;
+color:#663300;
+}
+
+
+#slider
+{
+margin: auto;
+overflow: hidden;
+/* Non Core */
+background: #f6f7f8;
+box-shadow: 4px 4px 15px #aaa;
+-o-box-shadow: 4px 4px 15px #aaa;
+-icab-box-shadow: 4px 4px 15px #aaa;
+-khtml-box-shadow: 4px 4px 15px #aaa;
+-moz-box-shadow: 4px 4px 15px #aaa;
+-webkit-box-shadow: 4px 4px 15px #aaa;
+border: 4px solid #bcc5cb;
+
+border-width: 1px 2px 2px 1px;
+
+-o-border-radius: 10px;
+-icab-border-radius: 10px;
+-khtml-border-radius: 10px;
+-moz-border-radius: 10px;
+-webkit-border-radius: 10px;
+border-radius: 10px;
+
+}
+
+#close_tbNotice img
+{
+width:20px;
+height:20px;
+float:right;
+cursor:pointer;
+}
+</style>
+
+<div id="banner">
+
+ <table id="tbNotice" style="display:none;width:800px;z-index:999;position:absolute;left:20%;" align="center">
+ <tbody><tr height="40px"><td></td></tr>
+ <tr><td>
+
+ <div id="slider">
+ <div id="close_tbNotice"><img src="./macau_files/delete.png" onclick="close_notice()"></div>
+ <ul>
+ <li>
+
+
+
+
+ </li>
+ </ul>
+
+ </div>
+ <div id="show_notice" style="display:none;">
+
+ </div>
+
+ </td>
+
+ </tr>
+ <tr><td align="right"></td></tr>
+ </tbody></table>
+
+
+ <div class="gradient">
+
+ </div>
+ <div class="banner1" id="banner1">
+
+
+
+
+ <a href="http://www.macau-airport.com/" target="_blank" style="display: none;" id="image1" class="">
+ <img src="./macau_files/41.jpeg" alt="Slideshow Image 1">
+ </a>
+
+
+
+
+
+ <a href="http://www.macau-airport.com/" target="_blank" style="display: none;" id="image2" class="">
+ <img src="./macau_files/45.jpeg" alt="Slideshow Image 2">
+ </a>
+
+
+
+
+
+ <a href="http://www.macau-airport.com/" target="_blank" style="display: none;" id="image3" class="">
+ <img src="./macau_files/46.jpeg" alt="Slideshow Image 3">
+ </a>
+
+
+
+
+
+ <a href="http://www.macau-airport.com/" target="_blank" style="display: inline;" id="image4" class="active">
+ <img src="./macau_files/47.jpeg" alt="Slideshow Image 4">
+ </a>
+
+
+
+
+
+ <a href="http://www.macau-airport.com/" target="_blank" style="display: none;" id="image5" class="">
+ <img src="./macau_files/48.jpeg" alt="Slideshow Image 5">
+ </a>
+
+
+
+
+
+ <a href="http://www.macau-airport.com/" target="_blank" style="display: none;" id="image6" class="">
+ <img src="./macau_files/49.jpeg" alt="Slideshow Image 6">
+ </a>
+
+
+
+
+
+ <a href="http://www.4cpscac.com/" target="_blank" style="display: none;" id="image7" class="">
+ <img src="./macau_files/50.jpg" alt="Slideshow Image 7">
+ </a>
+
+
+
+
+ </div>
+ <div id="bannerTotal" style="display:none;">7</div>
+</div>
+
+<div id="content">
+ <div id="leftnav">
+
+ <div id="navmenu">
+
+
+
+
+
+<link href="./macau_files/ddaccordion.css" rel="stylesheet" type="text/css">
+<script type="text/javascript" src="./macau_files/ddaccordion.js"></script>
+
+
+
+<script type="text/javascript">
+ ddaccordion.init({
+ headerclass: "leftmenu_silverheader", //Shared CSS class name of headers group
+ contentclass: "leftmenu_submenu", //Shared CSS class name of contents group
+ revealtype: "clickgo", //Reveal content when user clicks or onmouseover the header? Valid value: "click", "clickgo", or "mouseover"
+ mouseoverdelay: 100, //if revealtype="mouseover", set delay in milliseconds before header expands onMouseover
+ collapseprev: true, //Collapse previous content (so only one open at any time)? true/false
+ defaultexpanded: [0], //index of content(s) open by default [index1, index2, etc] [] denotes no content
+ onemustopen: false, //Specify whether at least one header should be open always (so never all headers closed)
+ animatedefault: true, //Should contents open by default be animated into view?
+ persiststate: true, //persist state of opened contents within browser session?
+ toggleclass: ["", "selected"], //Two CSS classes to be applied to the header when it's collapsed and expanded, respectively ["class1", "class2"]
+ togglehtml: ["", "", ""], //Additional HTML added to the header when it's collapsed and expanded, respectively ["position", "html1", "html2"] (see docs)
+ animatespeed: "normal", //speed of animation: integer in milliseconds (ie: 200), or keywords "fast", "normal", or "slow"
+ oninit:function(headers, expandedindices){ //custom code to run when headers have initialized
+ //do nothing
+ },
+ onopenclose:function(header, index, state, isuseractivated){ //custom code to run whenever a header is opened or closed
+ //do nothing
+
+ }
+});
+</script><style type="text/css">
+.leftmenu_submenu{display: none}
+a.hiddenajaxlink{display: none}
+</style>
+
+
+ <table>
+ <tbody><tr><td><img width="20" height="15" src="./macau_files/double.gif"></td><td><a href="http://www.camacau.com/geographic_information">MIA Geographical Information</a></td></tr>
+ <tr><td><img width="20" height="15" src="./macau_files/double.gif"></td><td><a href="http://www.camacau.com/airport_services">Scope of Service</a></td></tr>
+ <tr><td><img width="20" height="15" src="./macau_files/double.gif"></td><td><a href="http://www.camacau.com/services_agreement">Air Services Agreement</a></td></tr>
+ <tr><td><img width="20" height="15" src="./macau_files/double.gif"></td><td><a href="http://www.camacau.com/airport_charges" class="leftmenu_silverheader selected" headerindex="0h"><span>Airport Charges</span></a></td></tr>
+ <tr><td colspan="2" style="padding-top:0px;padding-bottom:0px;padding-right:0px;">
+ <table class="leftmenu_submenu" contentindex="0c" style="display: block;">
+ <tbody><tr><td> </td><td><table class="submenu"><tbody><tr><td><img width="20" height="15" src="./macau_files/sub_icon.gif"></td><td><a href="http://www.camacau.com/airport_charges1">Passenger Service Fees</a></td></tr></tbody></table></td></tr>
+ <tr><td> </td><td><table class="submenu"><tbody><tr><td><img width="20" height="15" src="./macau_files/sub_icon.gif"></td><td><a href="http://www.camacau.com/airport_charges2">Aircraft Parking fees</a></td></tr></tbody></table></td></tr>
+ <tr><td> </td><td><table class="submenu"><tbody><tr><td><img width="20" height="15" src="./macau_files/sub_icon.gif"></td><td><a href="http://www.camacau.com/airport_charges3">Airport Security Fee</a></td></tr></tbody></table></td></tr>
+ <tr><td> </td><td><table class="submenu"><tbody><tr><td><img width="20" height="15" src="./macau_files/sub_icon.gif"></td><td><a href="http://www.camacau.com/airport_charges4">Utilization fees</a></td></tr></tbody></table></td></tr>
+ <tr><td> </td><td><table class="submenu"><tbody><tr><td><img width="20" height="15" src="./macau_files/sub_icon.gif"></td><td><a href="http://www.camacau.com/airport_charges5">Refuelling Charge</a></td></tr></tbody></table></td></tr>
+ <tr><td> </td><td><table class="submenu"><tbody><tr><td><img width="20" height="15" src="./macau_files/sub_icon.gif"></td><td><a href="http://www.camacau.com/calculation">Calculation of Landing fee Rate</a></td></tr></tbody></table></td></tr>
+ </tbody></table>
+ </td>
+ </tr>
+ <tr><td><img width="20" height="15" src="./macau_files/double.gif"></td><td><a href="http://www.camacau.com/application_facilities">Application of Credit Facilities</a></td></tr>
+ <tr><td><img width="20" height="15" src="./macau_files/double.gif"></td><td><a href="javascript:void(0)" class="leftmenu_silverheader " headerindex="1h"><span>Passenger Flight Incentive Program</span></a></td></tr>
+ <tr><td colspan="2" style="padding-top:0px;padding-bottom:0px;padding-right:0px;">
+ <table class="leftmenu_submenu" contentindex="1c" style="display: none;">
+ <tbody><tr><td> </td><td><table class="submenu"><tbody><tr><td><img width="20" height="15" src="./macau_files/sub_icon.gif"></td><td><a href="http://www.camacau.com/incentive_program1">Incentive policy for new routes and additional flights</a></td></tr></tbody></table></td></tr>
+ <tr><td> </td><td><table class="submenu"><tbody><tr><td><img width="20" height="15" src="./macau_files/sub_icon.gif"></td><td><a href="http://www.camacau.com/incentive_program1_1">Passenger flights</a></td></tr></tbody></table></td></tr>
+ <tr><td> </td><td><table class="submenu"><tbody><tr><td><img width="20" height="15" src="./macau_files/sub_icon.gif"></td><td><a href="http://www.camacau.com/incentive_program1_2">Charter flights</a></td></tr></tbody></table></td></tr>
+ <tr><td> </td><td><table class="submenu"><tbody><tr><td><img width="20" height="15" src="./macau_files/sub_icon.gif"></td><td><a href="http://www.camacau.com/docs/MIA_Route_Development_IncentiveApp_Form.pdf" target="_blank">Route Development Incentive Application Form</a></td></tr></tbody></table></td></tr>
+ <tr><td> </td><td><table class="submenu"><tbody><tr><td><img width="20" height="15" src="./macau_files/sub_icon.gif"></td><td><a href="http://www.camacau.com/online_application">Online Application</a></td></tr></tbody></table></td></tr>
+ </tbody></table>
+ </td>
+ </tr>
+
+ <tr><td><img width="20" height="15" src="./macau_files/double.gif"></td><td><a href="http://www.camacau.com/slot_application">Slot Application</a></td></tr>
+
+ <tr><td><img width="20" height="15" src="./macau_files/double.gif"></td><td><a href="http://www.camacau.com/freighter_forwards">Macau Freight Forwarders</a></td></tr>
+ <tr><td><img width="20" height="15" src="./macau_files/double.gif"></td><td><a href="http://www.camacau.com/ctplatform">Cargo Tracking Platform</a></td></tr>
+ <tr><td><img width="20" height="15" src="./macau_files/double.gif"></td><td><a href="http://www.camacau.com/for_rent">For Rent</a></td></tr>
+ <tr><td><img width="20" height="15" src="./macau_files/double.gif"></td><td><a href="http://www.camacau.com/capacity">Airport Capacity</a></td></tr>
+
+ <tr><td><img width="20" height="15" src="./macau_files/double.gif"></td><td style="color: #606060;text-decoration: none;"><a href="javascript:void(0)" class="leftmenu_silverheader " headerindex="2h">Airport Characteristics &amp; Traffic Statistics</a></td></tr>
+ <tr><td colspan="2" style="padding-top:0px;padding-bottom:0px;padding-right:0px;">
+ <table class="leftmenu_submenu" contentindex="2c" style="display: none;">
+ <tbody><tr><td> </td><td><table class="submenu"><tbody><tr><td><img width="20" height="15" src="./macau_files/sub_icon.gif"></td><td><a href="./macau_files/macau.html">Traffic Statistics - Passengers</a></td></tr></tbody></table></td></tr>
+ <tr><td> </td><td><table class="submenu"><tbody><tr><td><img width="20" height="15" src="./macau_files/sub_icon.gif"></td><td><a href="http://www.camacau.com/statistics_cargo">Traffic Statistics - Cargo</a></td></tr></tbody></table></td></tr>
+ </tbody></table>
+ </td>
+ </tr>
+ <tr><td><img width="20" height="15" src="./macau_files/double.gif"></td><td><a href="http://www.camacau.com/operational_routes">Operational Routes</a></td></tr>
+
+
+ </tbody></table>
+
+
+ </div>
+ </div>
+
+<div id="under">
+ <div id="contextTitle">
+ <h2 class="con">Traffic Statistics - Passengers</h2>
+
+ </div>
+ <div class="contextTitleAfter"></div>
+ <div>
+
+
+ <div id="context">
+ <!--/*begin context*/-->
+ <div class="Container">
+ <div id="Scroller-1">
+ <div class="Scroller-Container">
+ <div id="statisticspassengers" style="width:550px;">
+
+
+ <span id="title">Traffic Statistics</span>
+
+
+
+
+
+ <br><br><br>
+ <span id="title">Passengers Figure (2008-2013)</span><br><br>
+ <table class="style1">
+ <tbody>
+ <tr height="17">
+ <th align="right"> </th>
+
+ <th align="center">2013</th>
+
+ <th align="center">2012</th>
+
+ <th align="center">2011</th>
+
+ <th align="center">2010</th>
+
+ <th align="center">2009</th>
+
+ <th align="center">2008</th>
+
+ </tr>
+ <tr height="17">
+ <th align="right">January</th>
+
+ <td align="center">
+
+ 374,917
+ </td>
+
+ <td align="center">
+
+ 362,379
+ </td>
+
+ <td align="center">
+
+ 301,503
+ </td>
+
+ <td align="center">
+
+ 358,902
+ </td>
+
+ <td align="center">
+
+ 342,323
+ </td>
+
+ <td align="center">
+
+ 420,574
+ </td>
+
+ </tr>
+ <tr height="17">
+ <th align="right">February</th>
+
+ <td align="center">
+
+ 393,152
+ </td>
+
+ <td align="center">
+
+ 312,405
+ </td>
+
+ <td align="center">
+
+ 301,259
+ </td>
+
+ <td align="center">
+
+ 351,654
+ </td>
+
+ <td align="center">
+
+ 297,755
+ </td>
+
+ <td align="center">
+
+ 442,809
+ </td>
+
+ </tr>
+ <tr height="17">
+ <th align="right">March</th>
+
+ <td align="center">
+
+ 408,755
+ </td>
+
+ <td align="center">
+
+ 334,000
+ </td>
+
+ <td align="center">
+
+ 318,908
+ </td>
+
+ <td align="center">
+
+ 360,365
+ </td>
+
+ <td align="center">
+
+ 387,879
+ </td>
+
+ <td align="center">
+
+ 468,540
+ </td>
+
+ </tr>
+ <tr height="17">
+ <th align="right">April</th>
+
+ <td align="center">
+
+ 408,860
+ </td>
+
+ <td align="center">
+
+ 358,198
+ </td>
+
+ <td align="center">
+
+ 339,060
+ </td>
+
+ <td align="center">
+
+ 352,976
+ </td>
+
+ <td align="center">
+
+ 400,553
+ </td>
+
+ <td align="center">
+
+ 492,930
+ </td>
+
+ </tr>
+ <tr height="17">
+ <th align="right">May</th>
+
+ <td align="center">
+
+ 374,397
+ </td>
+
+ <td align="center">
+
+ 329,218
+ </td>
+
+ <td align="center">
+
+ 321,060
+ </td>
+
+ <td align="center">
+
+ 330,407
+ </td>
+
+ <td align="center">
+
+ 335,967
+ </td>
+
+ <td align="center">
+
+ 465,045
+ </td>
+
+ </tr>
+ <tr height="17">
+ <th align="right">June</th>
+
+ <td align="center">
+
+ 401,995
+ </td>
+
+ <td align="center">
+
+ 356,679
+ </td>
+
+ <td align="center">
+
+ 343,006
+ </td>
+
+ <td align="center">
+
+ 326,724
+ </td>
+
+ <td align="center">
+
+ 296,748
+ </td>
+
+ <td align="center">
+
+ 426,764
+ </td>
+
+ </tr>
+ <tr height="17">
+ <th align="right">July</th>
+
+ <td align="center">
+
+
+ </td>
+
+ <td align="center">
+
+ 423,081
+ </td>
+
+ <td align="center">
+
+ 378,993
+ </td>
+
+ <td align="center">
+
+ 356,580
+ </td>
+
+ <td align="center">
+
+ 351,110
+ </td>
+
+ <td align="center">
+
+ 439,425
+ </td>
+
+ </tr>
+ <tr height="17">
+ <th align="right">August</th>
+
+ <td align="center">
+
+
+ </td>
+
+ <td align="center">
+
+ 453,391
+ </td>
+
+ <td align="center">
+
+ 395,883
+ </td>
+
+ <td align="center">
+
+ 364,011
+ </td>
+
+ <td align="center">
+
+ 404,076
+ </td>
+
+ <td align="center">
+
+ 425,814
+ </td>
+
+ </tr>
+ <tr height="17">
+ <th align="right">September</th>
+
+ <td align="center">
+
+
+ </td>
+
+ <td align="center">
+
+ 384,887
+ </td>
+
+ <td align="center">
+
+ 325,124
+ </td>
+
+ <td align="center">
+
+ 308,940
+ </td>
+
+ <td align="center">
+
+ 317,226
+ </td>
+
+ <td align="center">
+
+ 379,898
+ </td>
+
+ </tr>
+ <tr height="17">
+ <th align="right">October</th>
+
+ <td align="center">
+
+
+ </td>
+
+ <td align="center">
+
+ 383,889
+ </td>
+
+ <td align="center">
+
+ 333,102
+ </td>
+
+ <td align="center">
+
+ 317,040
+ </td>
+
+ <td align="center">
+
+ 355,935
+ </td>
+
+ <td align="center">
+
+ 415,339
+ </td>
+
+ </tr>
+ <tr height="17">
+ <th align="right">November</th>
+
+ <td align="center">
+
+
+ </td>
+
+ <td align="center">
+
+ 379,065
+ </td>
+
+ <td align="center">
+
+ 327,803
+ </td>
+
+ <td align="center">
+
+ 303,186
+ </td>
+
+ <td align="center">
+
+ 372,104
+ </td>
+
+ <td align="center">
+
+ 366,411
+ </td>
+
+ </tr>
+ <tr height="17">
+ <th align="right">December</th>
+
+ <td align="center">
+
+
+ </td>
+
+ <td align="center">
+
+ 413,873
+ </td>
+
+ <td align="center">
+
+ 359,313
+ </td>
+
+ <td align="center">
+
+ 348,051
+ </td>
+
+ <td align="center">
+
+ 388,573
+ </td>
+
+ <td align="center">
+
+ 354,253
+ </td>
+
+ </tr>
+ <tr height="17">
+ <th align="right">Total</th>
+
+ <td align="center">
+
+ 2,362,076
+ </td>
+
+ <td align="center">
+
+ 4,491,065
+ </td>
+
+ <td align="center">
+
+ 4,045,014
+ </td>
+
+ <td align="center">
+
+ 4,078,836
+ </td>
+
+ <td align="center">
+
+ 4,250,249
+ </td>
+
+ <td align="center">
+
+ 5,097,802
+ </td>
+
+ </tr>
+ </tbody>
+ </table>
+
+ <br><br><br>
+ <span id="title">Passengers Figure (2002-2007)</span><br><br>
+ <table class="style1">
+ <tbody>
+ <tr height="17">
+ <th align="right"> </th>
+
+ <th align="center">2007</th>
+
+ <th align="center">2006</th>
+
+ <th align="center">2005</th>
+
+ <th align="center">2004</th>
+
+ <th align="center">2003</th>
+
+ <th align="center">2002</th>
+
+ </tr>
+ <tr height="17">
+ <th align="right">January</th>
+
+ <td align="center">
+
+ 381,887
+ </td>
+
+ <td align="center">
+
+ 323,282
+ </td>
+
+ <td align="center">
+
+ 289,701
+ </td>
+
+ <td align="center">
+
+ 288,507
+ </td>
+
+ <td align="center">
+
+ 290,140
+ </td>
+
+ <td align="center">
+
+ 268,783
+ </td>
+
+ </tr>
+ <tr height="17">
+ <th align="right">February</th>
+
+ <td align="center">
+
+ 426,014
+ </td>
+
+ <td align="center">
+
+ 360,820
+ </td>
+
+ <td align="center">
+
+ 348,723
+ </td>
+
+ <td align="center">
+
+ 207,710
+ </td>
+
+ <td align="center">
+
+ 323,264
+ </td>
+
+ <td align="center">
+
+ 323,654
+ </td>
+
+ </tr>
+ <tr height="17">
+ <th align="right">March</th>
+
+ <td align="center">
+
+ 443,805
+ </td>
+
+ <td align="center">
+
+ 389,125
+ </td>
+
+ <td align="center">
+
+ 321,953
+ </td>
+
+ <td align="center">
+
+ 273,910
+ </td>
+
+ <td align="center">
+
+ 295,052
+ </td>
+
+ <td align="center">
+
+ 360,668
+ </td>
+
+ </tr>
+ <tr height="17">
+ <th align="right">April</th>
+
+ <td align="center">
+
+ 500,917
+ </td>
+
+ <td align="center">
+
+ 431,550
+ </td>
+
+ <td align="center">
+
+ 367,976
+ </td>
+
+ <td align="center">
+
+ 324,931
+ </td>
+
+ <td align="center">
+
+ 144,082
+ </td>
+
+ <td align="center">
+
+ 380,648
+ </td>
+
+ </tr>
+ <tr height="17">
+ <th align="right">May</th>
+
+ <td align="center">
+
+ 468,637
+ </td>
+
+ <td align="center">
+
+ 399,743
+ </td>
+
+ <td align="center">
+
+ 359,298
+ </td>
+
+ <td align="center">
+
+ 250,601
+ </td>
+
+ <td align="center">
+
+ 47,333
+ </td>
+
+ <td align="center">
+
+ 359,547
+ </td>
+
+ </tr>
+ <tr height="17">
+ <th align="right">June</th>
+
+ <td align="center">
+
+ 463,676
+ </td>
+
+ <td align="center">
+
+ 393,713
+ </td>
+
+ <td align="center">
+
+ 360,147
+ </td>
+
+ <td align="center">
+
+ 296,000
+ </td>
+
+ <td align="center">
+
+ 94,294
+ </td>
+
+ <td align="center">
+
+ 326,508
+ </td>
+
+ </tr>
+ <tr height="17">
+ <th align="right">July</th>
+
+ <td align="center">
+
+ 490,404
+ </td>
+
+ <td align="center">
+
+ 465,497
+ </td>
+
+ <td align="center">
+
+ 413,131
+ </td>
+
+ <td align="center">
+
+ 365,454
+ </td>
+
+ <td align="center">
+
+ 272,784
+ </td>
+
+ <td align="center">
+
+ 388,061
+ </td>
+
+ </tr>
+ <tr height="17">
+ <th align="right">August</th>
+
+ <td align="center">
+
+ 490,830
+ </td>
+
+ <td align="center">
+
+ 478,474
+ </td>
+
+ <td align="center">
+
+ 409,281
+ </td>
+
+ <td align="center">
+
+ 372,802
+ </td>
+
+ <td align="center">
+
+ 333,840
+ </td>
+
+ <td align="center">
+
+ 384,719
+ </td>
+
+ </tr>
+ <tr height="17">
+ <th align="right">September</th>
+
+ <td align="center">
+
+ 446,594
+ </td>
+
+ <td align="center">
+
+ 412,444
+ </td>
+
+ <td align="center">
+
+ 354,751
+ </td>
+
+ <td align="center">
+
+ 321,456
+ </td>
+
+ <td align="center">
+
+ 295,447
+ </td>
+
+ <td align="center">
+
+ 334,029
+ </td>
+
+ </tr>
+ <tr height="17">
+ <th align="right">October</th>
+
+ <td align="center">
+
+ 465,757
+ </td>
+
+ <td align="center">
+
+ 461,215
+ </td>
+
+ <td align="center">
+
+ 390,435
+ </td>
+
+ <td align="center">
+
+ 358,362
+ </td>
+
+ <td align="center">
+
+ 291,193
+ </td>
+
+ <td align="center">
+
+ 372,706
+ </td>
+
+ </tr>
+ <tr height="17">
+ <th align="right">November</th>
+
+ <td align="center">
+
+ 455,132
+ </td>
+
+ <td align="center">
+
+ 425,116
+ </td>
+
+ <td align="center">
+
+ 323,347
+ </td>
+
+ <td align="center">
+
+ 327,593
+ </td>
+
+ <td align="center">
+
+ 268,282
+ </td>
+
+ <td align="center">
+
+ 350,324
+ </td>
+
+ </tr>
+ <tr height="17">
+ <th align="right">December</th>
+
+ <td align="center">
+
+ 465,225
+ </td>
+
+ <td align="center">
+
+ 435,114
+ </td>
+
+ <td align="center">
+
+ 308,999
+ </td>
+
+ <td align="center">
+
+ 326,933
+ </td>
+
+ <td align="center">
+
+ 249,855
+ </td>
+
+ <td align="center">
+
+ 322,056
+ </td>
+
+ </tr>
+ <tr height="17">
+ <th align="right">Total</th>
+
+ <td align="center">
+
+ 5,498,878
+ </td>
+
+ <td align="center">
+
+ 4,976,093
+ </td>
+
+ <td align="center">
+
+ 4,247,742
+ </td>
+
+ <td align="center">
+
+ 3,714,259
+ </td>
+
+ <td align="center">
+
+ 2,905,566
+ </td>
+
+ <td align="center">
+
+ 4,171,703
+ </td>
+
+ </tr>
+ </tbody>
+ </table>
+
+ <br><br><br>
+ <span id="title">Passengers Figure (1996-2001)</span><br><br>
+ <table class="style1">
+ <tbody>
+ <tr height="17">
+ <th align="right"> </th>
+
+ <th align="center">2001</th>
+
+ <th align="center">2000</th>
+
+ <th align="center">1999</th>
+
+ <th align="center">1998</th>
+
+ <th align="center">1997</th>
+
+ <th align="center">1996</th>
+
+ </tr>
+ <tr height="17">
+ <th align="right">January</th>
+
+ <td align="center">
+
+ 265,603
+ </td>
+
+ <td align="center">
+
+ 184,381
+ </td>
+
+ <td align="center">
+
+ 161,264
+ </td>
+
+ <td align="center">
+
+ 161,432
+ </td>
+
+ <td align="center">
+
+ 117,984
+ </td>
+
+ <td align="center">
+
+
+ </td>
+
+ </tr>
+ <tr height="17">
+ <th align="right">February</th>
+
+ <td align="center">
+
+ 249,259
+ </td>
+
+ <td align="center">
+
+ 264,066
+ </td>
+
+ <td align="center">
+
+ 209,569
+ </td>
+
+ <td align="center">
+
+ 168,777
+ </td>
+
+ <td align="center">
+
+ 150,772
+ </td>
+
+ <td align="center">
+
+
+ </td>
+
+ </tr>
+ <tr height="17">
+ <th align="right">March</th>
+
+ <td align="center">
+
+ 312,319
+ </td>
+
+ <td align="center">
+
+ 226,483
+ </td>
+
+ <td align="center">
+
+ 186,965
+ </td>
+
+ <td align="center">
+
+ 172,060
+ </td>
+
+ <td align="center">
+
+ 149,795
+ </td>
+
+ <td align="center">
+
+
+ </td>
+
+ </tr>
+ <tr height="17">
+ <th align="right">April</th>
+
+ <td align="center">
+
+ 351,793
+ </td>
+
+ <td align="center">
+
+ 296,541
+ </td>
+
+ <td align="center">
+
+ 237,449
+ </td>
+
+ <td align="center">
+
+ 180,241
+ </td>
+
+ <td align="center">
+
+ 179,049
+ </td>
+
+ <td align="center">
+
+
+ </td>
+
+ </tr>
+ <tr height="17">
+ <th align="right">May</th>
+
+ <td align="center">
+
+ 338,692
+ </td>
+
+ <td align="center">
+
+ 288,949
+ </td>
+
+ <td align="center">
+
+ 230,691
+ </td>
+
+ <td align="center">
+
+ 172,391
+ </td>
+
+ <td align="center">
+
+ 189,925
+ </td>
+
+ <td align="center">
+
+
+ </td>
+
+ </tr>
+ <tr height="17">
+ <th align="right">June</th>
+
+ <td align="center">
+
+ 332,630
+ </td>
+
+ <td align="center">
+
+ 271,181
+ </td>
+
+ <td align="center">
+
+ 231,328
+ </td>
+
+ <td align="center">
+
+ 157,519
+ </td>
+
+ <td align="center">
+
+ 175,402
+ </td>
+
+ <td align="center">
+
+
+ </td>
+
+ </tr>
+ <tr height="17">
+ <th align="right">July</th>
+
+ <td align="center">
+
+ 344,658
+ </td>
+
+ <td align="center">
+
+ 304,276
+ </td>
+
+ <td align="center">
+
+ 243,534
+ </td>
+
+ <td align="center">
+
+ 205,595
+ </td>
+
+ <td align="center">
+
+ 173,103
+ </td>
+
+ <td align="center">
+
+
+ </td>
+
+ </tr>
+ <tr height="17">
+ <th align="right">August</th>
+
+ <td align="center">
+
+ 360,899
+ </td>
+
+ <td align="center">
+
+ 300,418
+ </td>
+
+ <td align="center">
+
+ 257,616
+ </td>
+
+ <td align="center">
+
+ 241,140
+ </td>
+
+ <td align="center">
+
+ 178,118
+ </td>
+
+ <td align="center">
+
+
+ </td>
+
+ </tr>
+ <tr height="17">
+ <th align="right">September</th>
+
+ <td align="center">
+
+ 291,817
+ </td>
+
+ <td align="center">
+
+ 280,803
+ </td>
+
+ <td align="center">
+
+ 210,885
+ </td>
+
+ <td align="center">
+
+ 183,954
+ </td>
+
+ <td align="center">
+
+ 163,385
+ </td>
+
+ <td align="center">
+
+
+ </td>
+
+ </tr>
+ <tr height="17">
+ <th align="right">October</th>
+
+ <td align="center">
+
+ 327,232
+ </td>
+
+ <td align="center">
+
+ 298,873
+ </td>
+
+ <td align="center">
+
+ 231,251
+ </td>
+
+ <td align="center">
+
+ 205,726
+ </td>
+
+ <td align="center">
+
+ 176,879
+ </td>
+
+ <td align="center">
+
+
+ </td>
+
+ </tr>
+ <tr height="17">
+ <th align="right">November</th>
+
+ <td align="center">
+
+ 315,538
+ </td>
+
+ <td align="center">
+
+ 265,528
+ </td>
+
+ <td align="center">
+
+ 228,637
+ </td>
+
+ <td align="center">
+
+ 181,677
+ </td>
+
+ <td align="center">
+
+ 146,804
+ </td>
+
+ <td align="center">
+
+
+ </td>
+
+ </tr>
+ <tr height="17">
+ <th align="right">December</th>
+
+ <td align="center">
+
+ 314,866
+ </td>
+
+ <td align="center">
+
+ 257,929
+ </td>
+
+ <td align="center">
+
+ 210,922
+ </td>
+
+ <td align="center">
+
+ 183,975
+ </td>
+
+ <td align="center">
+
+ 151,362
+ </td>
+
+ <td align="center">
+
+
+ </td>
+
+ </tr>
+ <tr height="17">
+ <th align="right">Total</th>
+
+ <td align="center">
+
+ 3,805,306
+ </td>
+
+ <td align="center">
+
+ 3,239,428
+ </td>
+
+ <td align="center">
+
+ 2,640,111
+ </td>
+
+ <td align="center">
+
+ 2,214,487
+ </td>
+
+ <td align="center">
+
+ 1,952,578
+ </td>
+
+ <td align="center">
+
+ 0
+ </td>
+
+ </tr>
+ </tbody>
+ </table>
+
+ <br><br><br>
+ <span id="title">Passengers Figure (1995-1995)</span><br><br>
+ <table class="style1">
+ <tbody>
+ <tr height="17">
+ <th align="right"> </th>
+
+ <th align="center">1995</th>
+
+ </tr>
+ <tr height="17">
+ <th align="right">January</th>
+
+ <td align="center">
+
+
+ </td>
+
+ </tr>
+ <tr height="17">
+ <th align="right">February</th>
+
+ <td align="center">
+
+
+ </td>
+
+ </tr>
+ <tr height="17">
+ <th align="right">March</th>
+
+ <td align="center">
+
+
+ </td>
+
+ </tr>
+ <tr height="17">
+ <th align="right">April</th>
+
+ <td align="center">
+
+
+ </td>
+
+ </tr>
+ <tr height="17">
+ <th align="right">May</th>
+
+ <td align="center">
+
+
+ </td>
+
+ </tr>
+ <tr height="17">
+ <th align="right">June</th>
+
+ <td align="center">
+
+
+ </td>
+
+ </tr>
+ <tr height="17">
+ <th align="right">July</th>
+
+ <td align="center">
+
+
+ </td>
+
+ </tr>
+ <tr height="17">
+ <th align="right">August</th>
+
+ <td align="center">
+
+
+ </td>
+
+ </tr>
+ <tr height="17">
+ <th align="right">September</th>
+
+ <td align="center">
+
+
+ </td>
+
+ </tr>
+ <tr height="17">
+ <th align="right">October</th>
+
+ <td align="center">
+
+
+ </td>
+
+ </tr>
+ <tr height="17">
+ <th align="right">November</th>
+
+ <td align="center">
+
+ 6,601
+ </td>
+
+ </tr>
+ <tr height="17">
+ <th align="right">December</th>
+
+ <td align="center">
+
+ 37,041
+ </td>
+
+ </tr>
+ <tr height="17">
+ <th align="right">Total</th>
+
+ <td align="center">
+
+ 43,642
+ </td>
+
+ </tr>
+ </tbody>
+ </table>
+
+
+ <br><br><br>
+ <div align="right"><img src="./macau_files/pass_stat.jpg" alt="passenger statistic picture" width="565" height="318"></div>
+ <br><br><br>
+
+
+ <!--statistics-movement -->
+
+ <br><br><br>
+ <span id="title">Movement Statistics(2008-2013) </span><br><br>
+ <table class="style1">
+ <tbody>
+ <tr height="17">
+ <th align="right"> </th>
+
+ <th align="center">2013</th>
+
+ <th align="center">2012</th>
+
+ <th align="center">2011</th>
+
+ <th align="center">2010</th>
+
+ <th align="center">2009</th>
+
+ <th align="center">2008</th>
+
+ </tr>
+ <tr height="17">
+ <th align="right">January</th>
+
+ <td align="center">
+
+ 3,925
+ </td>
+
+ <td align="center">
+
+ 3,463
+ </td>
+
+ <td align="center">
+
+ 3,289
+ </td>
+
+ <td align="center">
+
+ 3,184
+ </td>
+
+ <td align="center">
+
+ 3,488
+ </td>
+
+ <td align="center">
+
+ 4,568
+ </td>
+
+ </tr>
+ <tr height="17">
+ <th align="right">February</th>
+
+ <td align="center">
+
+ 3,632
+ </td>
+
+ <td align="center">
+
+ 2,983
+ </td>
+
+ <td align="center">
+
+ 2,902
+ </td>
+
+ <td align="center">
+
+ 3,053
+ </td>
+
+ <td align="center">
+
+ 3,347
+ </td>
+
+ <td align="center">
+
+ 4,527
+ </td>
+
+ </tr>
+ <tr height="17">
+ <th align="right">March</th>
+
+ <td align="center">
+
+ 3,909
+ </td>
+
+ <td align="center">
+
+ 3,166
+ </td>
+
+ <td align="center">
+
+ 3,217
+ </td>
+
+ <td align="center">
+
+ 3,175
+ </td>
+
+ <td align="center">
+
+ 3,636
+ </td>
+
+ <td align="center">
+
+ 4,594
+ </td>
+
+ </tr>
+ <tr height="17">
+ <th align="right">April</th>
+
+ <td align="center">
+
+ 3,903
+ </td>
+
+ <td align="center">
+
+ 3,258
+ </td>
+
+ <td align="center">
+
+ 3,146
+ </td>
+
+ <td align="center">
+
+ 3,023
+ </td>
+
+ <td align="center">
+
+ 3,709
+ </td>
+
+ <td align="center">
+
+ 4,574
+ </td>
+
+ </tr>
+ <tr height="17">
+ <th align="right">May</th>
+
+ <td align="center">
+
+ 4,075
+ </td>
+
+ <td align="center">
+
+ 3,234
+ </td>
+
+ <td align="center">
+
+ 3,266
+ </td>
+
+ <td align="center">
+
+ 3,033
+ </td>
+
+ <td align="center">
+
+ 3,603
+ </td>
+
+ <td align="center">
+
+ 4,511
+ </td>
+
+ </tr>
+ <tr height="17">
+ <th align="right">June</th>
+
+ <td align="center">
+
+ 4,038
+ </td>
+
+ <td align="center">
+
+ 3,272
+ </td>
+
+ <td align="center">
+
+ 3,316
+ </td>
+
+ <td align="center">
+
+ 2,909
+ </td>
+
+ <td align="center">
+
+ 3,057
+ </td>
+
+ <td align="center">
+
+ 4,081
+ </td>
+
+ </tr>
+ <tr height="17">
+ <th align="right">July</th>
+
+ <td align="center">
+
+
+ </td>
+
+ <td align="center">
+
+ 3,661
+ </td>
+
+ <td align="center">
+
+ 3,359
+ </td>
+
+ <td align="center">
+
+ 3,062
+ </td>
+
+ <td align="center">
+
+ 3,354
+ </td>
+
+ <td align="center">
+
+ 4,215
+ </td>
+
+ </tr>
+ <tr height="17">
+ <th align="right">August</th>
+
+ <td align="center">
+
+
+ </td>
+
+ <td align="center">
+
+ 3,942
+ </td>
+
+ <td align="center">
+
+ 3,417
+ </td>
+
+ <td align="center">
+
+ 3,077
+ </td>
+
+ <td align="center">
+
+ 3,395
+ </td>
+
+ <td align="center">
+
+ 4,139
+ </td>
+
+ </tr>
+ <tr height="17">
+ <th align="right">September</th>
+
+ <td align="center">
+
+
+ </td>
+
+ <td align="center">
+
+ 3,703
+ </td>
+
+ <td align="center">
+
+ 3,169
+ </td>
+
+ <td align="center">
+
+ 3,095
+ </td>
+
+ <td align="center">
+
+ 3,100
+ </td>
+
+ <td align="center">
+
+ 3,752
+ </td>
+
+ </tr>
+ <tr height="17">
+ <th align="right">October</th>
+
+ <td align="center">
+
+
+ </td>
+
+ <td align="center">
+
+ 3,727
+ </td>
+
+ <td align="center">
+
+ 3,469
+ </td>
+
+ <td align="center">
+
+ 3,179
+ </td>
+
+ <td align="center">
+
+ 3,375
+ </td>
+
+ <td align="center">
+
+ 3,874
+ </td>
+
+ </tr>
+ <tr height="17">
+ <th align="right">November</th>
+
+ <td align="center">
+
+
+ </td>
+
+ <td align="center">
+
+ 3,722
+ </td>
+
+ <td align="center">
+
+ 3,145
+ </td>
+
+ <td align="center">
+
+ 3,159
+ </td>
+
+ <td align="center">
+
+ 3,213
+ </td>
+
+ <td align="center">
+
+ 3,567
+ </td>
+
+ </tr>
+ <tr height="17">
+ <th align="right">December</th>
+
+ <td align="center">
+
+
+ </td>
+
+ <td align="center">
+
+ 3,866
+ </td>
+
+ <td align="center">
+
+ 3,251
+ </td>
+
+ <td align="center">
+
+ 3,199
+ </td>
+
+ <td align="center">
+
+ 3,324
+ </td>
+
+ <td align="center">
+
+ 3,362
+ </td>
+
+ </tr>
+ <tr height="17">
+ <th align="right">Total</th>
+
+ <td align="center">
+
+ 23,482
+ </td>
+
+ <td align="center">
+
+ 41,997
+ </td>
+
+ <td align="center">
+
+ 38,946
+ </td>
+
+ <td align="center">
+
+ 37,148
+ </td>
+
+ <td align="center">
+
+ 40,601
+ </td>
+
+ <td align="center">
+
+ 49,764
+ </td>
+
+ </tr>
+ </tbody>
+ </table>
+
+ <br><br><br>
+ <span id="title">Movement Statistics(2002-2007) </span><br><br>
+ <table class="style1">
+ <tbody>
+ <tr height="17">
+ <th align="right"> </th>
+
+ <th align="center">2007</th>
+
+ <th align="center">2006</th>
+
+ <th align="center">2005</th>
+
+ <th align="center">2004</th>
+
+ <th align="center">2003</th>
+
+ <th align="center">2002</th>
+
+ </tr>
+ <tr height="17">
+ <th align="right">January</th>
+
+ <td align="center">
+
+ 4,384
+ </td>
+
+ <td align="center">
+
+ 3,933
+ </td>
+
+ <td align="center">
+
+ 3,528
+ </td>
+
+ <td align="center">
+
+ 3,051
+ </td>
+
+ <td align="center">
+
+ 3,257
+ </td>
+
+ <td align="center">
+
+ 2,711
+ </td>
+
+ </tr>
+ <tr height="17">
+ <th align="right">February</th>
+
+ <td align="center">
+
+ 4,131
+ </td>
+
+ <td align="center">
+
+ 3,667
+ </td>
+
+ <td align="center">
+
+ 3,331
+ </td>
+
+ <td align="center">
+
+ 2,372
+ </td>
+
+ <td align="center">
+
+ 3,003
+ </td>
+
+ <td align="center">
+
+ 2,747
+ </td>
+
+ </tr>
+ <tr height="17">
+ <th align="right">March</th>
+
+ <td align="center">
+
+ 4,349
+ </td>
+
+ <td align="center">
+
+ 4,345
+ </td>
+
+ <td align="center">
+
+ 3,549
+ </td>
+
+ <td align="center">
+
+ 3,049
+ </td>
+
+ <td align="center">
+
+ 3,109
+ </td>
+
+ <td align="center">
+
+ 2,985
+ </td>
+
+ </tr>
+ <tr height="17">
+ <th align="right">April</th>
+
+ <td align="center">
+
+ 4,460
+ </td>
+
+ <td align="center">
+
+ 4,490
+ </td>
+
+ <td align="center">
+
+ 3,832
+ </td>
+
+ <td align="center">
+
+ 3,359
+ </td>
+
+ <td align="center">
+
+ 2,033
+ </td>
+
+ <td align="center">
+
+ 2,928
+ </td>
+
+ </tr>
+ <tr height="17">
+ <th align="right">May</th>
+
+ <td align="center">
+
+ 4,629
+ </td>
+
+ <td align="center">
+
+ 4,245
+ </td>
+
+ <td align="center">
+
+ 3,663
+ </td>
+
+ <td align="center">
+
+ 3,251
+ </td>
+
+ <td align="center">
+
+ 1,229
+ </td>
+
+ <td align="center">
+
+ 3,109
+ </td>
+
+ </tr>
+ <tr height="17">
+ <th align="right">June</th>
+
+ <td align="center">
+
+ 4,365
+ </td>
+
+ <td align="center">
+
+ 4,124
+ </td>
+
+ <td align="center">
+
+ 3,752
+ </td>
+
+ <td align="center">
+
+ 3,414
+ </td>
+
+ <td align="center">
+
+ 1,217
+ </td>
+
+ <td align="center">
+
+ 3,049
+ </td>
+
+ </tr>
+ <tr height="17">
+ <th align="right">July</th>
+
+ <td align="center">
+
+ 4,612
+ </td>
+
+ <td align="center">
+
+ 4,386
+ </td>
+
+ <td align="center">
+
+ 3,876
+ </td>
+
+ <td align="center">
+
+ 3,664
+ </td>
+
+ <td align="center">
+
+ 2,423
+ </td>
+
+ <td align="center">
+
+ 3,078
+ </td>
+
+ </tr>
+ <tr height="17">
+ <th align="right">August</th>
+
+ <td align="center">
+
+ 4,446
+ </td>
+
+ <td align="center">
+
+ 4,373
+ </td>
+
+ <td align="center">
+
+ 3,987
+ </td>
+
+ <td align="center">
+
+ 3,631
+ </td>
+
+ <td align="center">
+
+ 3,040
+ </td>
+
+ <td align="center">
+
+ 3,166
+ </td>
+
+ </tr>
+ <tr height="17">
+ <th align="right">September</th>
+
+ <td align="center">
+
+ 4,414
+ </td>
+
+ <td align="center">
+
+ 4,311
+ </td>
+
+ <td align="center">
+
+ 3,782
+ </td>
+
+ <td align="center">
+
+ 3,514
+ </td>
+
+ <td align="center">
+
+ 2,809
+ </td>
+
+ <td align="center">
+
+ 3,239
+ </td>
+
+ </tr>
+ <tr height="17">
+ <th align="right">October</th>
+
+ <td align="center">
+
+ 4,445
+ </td>
+
+ <td align="center">
+
+ 4,455
+ </td>
+
+ <td align="center">
+
+ 3,898
+ </td>
+
+ <td align="center">
+
+ 3,744
+ </td>
+
+ <td align="center">
+
+ 3,052
+ </td>
+
+ <td align="center">
+
+ 3,562
+ </td>
+
+ </tr>
+ <tr height="17">
+ <th align="right">November</th>
+
+ <td align="center">
+
+ 4,563
+ </td>
+
+ <td align="center">
+
+ 4,285
+ </td>
+
+ <td align="center">
+
+ 3,951
+ </td>
+
+ <td align="center">
+
+ 3,694
+ </td>
+
+ <td align="center">
+
+ 3,125
+ </td>
+
+ <td align="center">
+
+ 3,546
+ </td>
+
+ </tr>
+ <tr height="17">
+ <th align="right">December</th>
+
+ <td align="center">
+
+ 4,588
+ </td>
+
+ <td align="center">
+
+ 4,435
+ </td>
+
+ <td align="center">
+
+ 3,855
+ </td>
+
+ <td align="center">
+
+ 3,763
+ </td>
+
+ <td align="center">
+
+ 2,996
+ </td>
+
+ <td align="center">
+
+ 3,444
+ </td>
+
+ </tr>
+ <tr height="17">
+ <th align="right">Total</th>
+
+ <td align="center">
+
+ 53,386
+ </td>
+
+ <td align="center">
+
+ 51,049
+ </td>
+
+ <td align="center">
+
+ 45,004
+ </td>
+
+ <td align="center">
+
+ 40,506
+ </td>
+
+ <td align="center">
+
+ 31,293
+ </td>
+
+ <td align="center">
+
+ 37,564
+ </td>
+
+ </tr>
+ </tbody>
+ </table>
+
+ <br><br><br>
+ <span id="title">Movement Statistics(1996-2001) </span><br><br>
+ <table class="style1">
+ <tbody>
+ <tr height="17">
+ <th align="right"> </th>
+
+ <th align="center">2001</th>
+
+ <th align="center">2000</th>
+
+ <th align="center">1999</th>
+
+ <th align="center">1998</th>
+
+ <th align="center">1997</th>
+
+ <th align="center">1996</th>
+
+ </tr>
+ <tr height="17">
+ <th align="right">January</th>
+
+ <td align="center">
+
+ 2,694
+ </td>
+
+ <td align="center">
+
+ 2,201
+ </td>
+
+ <td align="center">
+
+ 1,835
+ </td>
+
+ <td align="center">
+
+ 2,177
+ </td>
+
+ <td align="center">
+
+ 1,353
+ </td>
+
+ <td align="center">
+
+ 744
+ </td>
+
+ </tr>
+ <tr height="17">
+ <th align="right">February</th>
+
+ <td align="center">
+
+ 2,364
+ </td>
+
+ <td align="center">
+
+ 2,357
+ </td>
+
+ <td align="center">
+
+ 1,826
+ </td>
+
+ <td align="center">
+
+ 1,740
+ </td>
+
+ <td align="center">
+
+ 1,339
+ </td>
+
+ <td align="center">
+
+ 692
+ </td>
+
+ </tr>
+ <tr height="17">
+ <th align="right">March</th>
+
+ <td align="center">
+
+ 2,543
+ </td>
+
+ <td align="center">
+
+ 2,206
+ </td>
+
+ <td align="center">
+
+ 1,895
+ </td>
+
+ <td align="center">
+
+ 1,911
+ </td>
+
+ <td align="center">
+
+ 1,533
+ </td>
+
+ <td align="center">
+
+ 872
+ </td>
+
+ </tr>
+ <tr height="17">
+ <th align="right">April</th>
+
+ <td align="center">
+
+ 2,531
+ </td>
+
+ <td align="center">
+
+ 2,311
+ </td>
+
+ <td align="center">
+
+ 2,076
+ </td>
+
+ <td align="center">
+
+ 1,886
+ </td>
+
+ <td align="center">
+
+ 1,587
+ </td>
+
+ <td align="center">
+
+ 1,026
+ </td>
+
+ </tr>
+ <tr height="17">
+ <th align="right">May</th>
+
+ <td align="center">
+
+ 2,579
+ </td>
+
+ <td align="center">
+
+ 2,383
+ </td>
+
+ <td align="center">
+
+ 1,914
+ </td>
+
+ <td align="center">
+
+ 2,102
+ </td>
+
+ <td align="center">
+
+ 1,720
+ </td>
+
+ <td align="center">
+
+ 1,115
+ </td>
+
+ </tr>
+ <tr height="17">
+ <th align="right">June</th>
+
+ <td align="center">
+
+ 2,681
+ </td>
+
+ <td align="center">
+
+ 2,370
+ </td>
+
+ <td align="center">
+
+ 1,890
+ </td>
+
+ <td align="center">
+
+ 2,038
+ </td>
+
+ <td align="center">
+
+ 1,716
+ </td>
+
+ <td align="center">
+
+ 1,037
+ </td>
+
+ </tr>
+ <tr height="17">
+ <th align="right">July</th>
+
+ <td align="center">
+
+ 2,903
+ </td>
+
+ <td align="center">
+
+ 2,609
+ </td>
+
+ <td align="center">
+
+ 1,916
+ </td>
+
+ <td align="center">
+
+ 2,078
+ </td>
+
+ <td align="center">
+
+ 1,693
+ </td>
+
+ <td align="center">
+
+ 1,209
+ </td>
+
+ </tr>
+ <tr height="17">
+ <th align="right">August</th>
+
+ <td align="center">
+
+ 3,037
+ </td>
+
+ <td align="center">
+
+ 2,487
+ </td>
+
+ <td align="center">
+
+ 1,968
+ </td>
+
+ <td align="center">
+
+ 2,061
+ </td>
+
+ <td align="center">
+
+ 1,676
+ </td>
+
+ <td align="center">
+
+ 1,241
+ </td>
+
+ </tr>
+ <tr height="17">
+ <th align="right">September</th>
+
+ <td align="center">
+
+ 2,767
+ </td>
+
+ <td align="center">
+
+ 2,329
+ </td>
+
+ <td align="center">
+
+ 1,955
+ </td>
+
+ <td align="center">
+
+ 1,970
+ </td>
+
+ <td align="center">
+
+ 1,681
+ </td>
+
+ <td align="center">
+
+ 1,263
+ </td>
+
+ </tr>
+ <tr height="17">
+ <th align="right">October</th>
+
+ <td align="center">
+
+ 2,922
+ </td>
+
+ <td align="center">
+
+ 2,417
+ </td>
+
+ <td align="center">
+
+ 2,267
+ </td>
+
+ <td align="center">
+
+ 1,969
+ </td>
+
+ <td align="center">
+
+ 1,809
+ </td>
+
+ <td align="center">
+
+ 1,368
+ </td>
+
+ </tr>
+ <tr height="17">
+ <th align="right">November</th>
+
+ <td align="center">
+
+ 2,670
+ </td>
+
+ <td align="center">
+
+ 2,273
+ </td>
+
+ <td align="center">
+
+ 2,132
+ </td>
+
+ <td align="center">
+
+ 2,102
+ </td>
+
+ <td align="center">
+
+ 1,786
+ </td>
+
+ <td align="center">
+
+ 1,433
+ </td>
+
+ </tr>
+ <tr height="17">
+ <th align="right">December</th>
+
+ <td align="center">
+
+ 2,815
+ </td>
+
+ <td align="center">
+
+ 2,749
+ </td>
+
+ <td align="center">
+
+ 2,187
+ </td>
+
+ <td align="center">
+
+ 1,981
+ </td>
+
+ <td align="center">
+
+ 1,944
+ </td>
+
+ <td align="center">
+
+ 1,386
+ </td>
+
+ </tr>
+ <tr height="17">
+ <th align="right">Total</th>
+
+ <td align="center">
+
+ 32,506
+ </td>
+
+ <td align="center">
+
+ 28,692
+ </td>
+
+ <td align="center">
+
+ 23,861
+ </td>
+
+ <td align="center">
+
+ 24,015
+ </td>
+
+ <td align="center">
+
+ 19,837
+ </td>
+
+ <td align="center">
+
+ 13,386
+ </td>
+
+ </tr>
+ </tbody>
+ </table>
+
+ <br><br><br>
+ <span id="title">Movement Statistics(1995-1995) </span><br><br>
+ <table class="style1">
+ <tbody>
+ <tr height="17">
+ <th align="right"> </th>
+
+ <th align="center">1995</th>
+
+ </tr>
+ <tr height="17">
+ <th align="right">January</th>
+
+ <td align="center">
+
+
+ </td>
+
+ </tr>
+ <tr height="17">
+ <th align="right">February</th>
+
+ <td align="center">
+
+
+ </td>
+
+ </tr>
+ <tr height="17">
+ <th align="right">March</th>
+
+ <td align="center">
+
+
+ </td>
+
+ </tr>
+ <tr height="17">
+ <th align="right">April</th>
+
+ <td align="center">
+
+
+ </td>
+
+ </tr>
+ <tr height="17">
+ <th align="right">May</th>
+
+ <td align="center">
+
+
+ </td>
+
+ </tr>
+ <tr height="17">
+ <th align="right">June</th>
+
+ <td align="center">
+
+
+ </td>
+
+ </tr>
+ <tr height="17">
+ <th align="right">July</th>
+
+ <td align="center">
+
+
+ </td>
+
+ </tr>
+ <tr height="17">
+ <th align="right">August</th>
+
+ <td align="center">
+
+
+ </td>
+
+ </tr>
+ <tr height="17">
+ <th align="right">September</th>
+
+ <td align="center">
+
+
+ </td>
+
+ </tr>
+ <tr height="17">
+ <th align="right">October</th>
+
+ <td align="center">
+
+
+ </td>
+
+ </tr>
+ <tr height="17">
+ <th align="right">November</th>
+
+ <td align="center">
+
+ 126
+ </td>
+
+ </tr>
+ <tr height="17">
+ <th align="right">December</th>
+
+ <td align="center">
+
+ 536
+ </td>
+
+ </tr>
+ <tr height="17">
+ <th align="right">Total</th>
+
+ <td align="center">
+
+ 662
+ </td>
+
+ </tr>
+ </tbody>
+ </table>
+
+
+ <br><br><br>
+ <div align="right"><img src="./macau_files/mov_stat.jpg" alt="passenger statistic picture" width="565" height="318"></div>
+
+
+ </div>
+
+ </div>
+ </div>
+ </div>
+
+
+ <!--/*end context*/-->
+ </div>
+ </div>
+
+ <div id="buttombar"><img height="100" src="./macau_files/buttombar.gif"></div>
+ <div id="logo">
+
+
+
+ <div>
+
+ <a href="http://www.macau-airport.com/envirop/zh/default.php" style="display: inline;"><img height="80" src="./macau_files/38.jpg"></a>
+
+ </div>
+
+
+ <div>
+
+ <a href="http://www.macau-airport.com/envirop/en/default.php" style="display: inline;"><img height="80" src="./macau_files/36.jpg"></a>
+
+ </div>
+
+</div>
+</div>
+
+
+
+</div>
+
+
+<div id="footer">
+<hr>
+ <div id="footer-left">
+ <a href="http://www.camacau.com/index">Main Page</a> |
+ <a href="http://www.camacau.com/geographic_information">Our Business</a> |
+ <a href="http://www.camacau.com/about_us">About Us</a> |
+ <a href="http://www.camacau.com/pressReleases_list">Media Centre</a> |
+ <a href="http://www.camacau.com/rlinks2">Related Links</a> |
+ <a href="http://www.camacau.com/download_list">Interactive</a>
+ </div>
+ <div id="footer-right">Macau International Airport Co. Ltd. | Copyright 2013 | All rights reserved</div>
+</div>
+</div>
+</div>
+
+<div id="___fireplug_chrome_extension___" style="display: none;"></div><iframe id="rdbIndicator" width="100%" height="270" border="0" src="./macau_files/indicator.html" style="display: none; border: 0; position: fixed; left: 0; top: 0; z-index: 2147483647"></iframe><link rel="stylesheet" type="text/css" media="screen" href="chrome-extension://fcdjadjbdihbaodagojiomdljhjhjfho/css/atd.css"></body></html>
\ No newline at end of file
diff --git a/pandas/io/tests/data/nyse_wsj.html b/pandas/io/tests/data/nyse_wsj.html
new file mode 100644
index 0000000000000..aa3d470a5fbc6
--- /dev/null
+++ b/pandas/io/tests/data/nyse_wsj.html
@@ -0,0 +1,1207 @@
+<table border="0" cellpadding="0" cellspacing="0" class="autocompleteContainer">
+ <tbody>
+ <tr>
+ <td>
+ <div class="symbolCompleteContainer">
+ <div><input autocomplete="off" maxlength="80" name="KEYWORDS" type="text" value=""/></div>
+ </div>
+ <div class="hat_button">
+ <span class="hat_button_text">SEARCH</span>
+ </div>
+ <div style="clear: both;"><div class="subSymbolCompleteResults"></div></div>
+ </td>
+ </tr>
+ </tbody>
+</table>
+<table bgcolor="" border="0" cellpadding="0" cellspacing="0" width="100%"><tbody><tr>
+ <td height="0"><img alt="" border="0" height="0" src="null/img/b.gif" width="1"/></td>
+</tr></tbody></table>
+<table border="0" cellpadding="0" cellspacing="0" class="mdcTable" width="100%">
+ <tbody><tr>
+ <td class="colhead" style="text-align:left"> </td>
+ <td class="colhead" style="text-align:left">Issue<span class="textb10gray" style="margin-left: 8px;">(Roll over for charts and headlines)</span>
+ </td>
+ <td class="colhead">Volume</td>
+ <td class="colhead">Price</td>
+ <td class="colhead" style="width:35px;">Chg</td>
+ <td class="colhead">% Chg</td>
+ </tr>
+ <tr>
+ <td class="num">1</td>
+ <td class="text" style="max-width:307px">
+ <a class="linkb" href="/public/quotes/main.html?symbol=JCP" onmouseout="com.dowjones.rolloverQuotes.hidelater();" onmouseover="com.dowjones.rolloverQuotes.show(this,'JCP')">J.C. Penney (JCP)
+ </a>
+ </td>
+ <td align="right" class="num" style="font-weight:bold;">250,697,455</td>
+ <td class="nnum">$9.05</td>
+ <td class="nnum">-1.37</td>
+ <td class="nnum" style="border-right:0px">-13.15</td>
+ </tr>
+ <tr>
+ <td class="num">2</td>
+ <td class="text" style="max-width:307px">
+ <a class="linkb" href="/public/quotes/main.html?symbol=BAC" onmouseout="com.dowjones.rolloverQuotes.hidelater();" onmouseover="com.dowjones.rolloverQuotes.show(this,'BAC')">Bank of America (BAC)
+ </a>
+ </td>
+ <td align="right" class="num" style="font-weight:bold;">77,162,103</td>
+ <td class="nnum">13.90</td>
+ <td class="nnum">-0.18</td>
+ <td class="nnum" style="border-right:0px">-1.28</td>
+ </tr>
+ <tr>
+ <td class="num">3</td>
+ <td class="text" style="max-width:307px">
+ <a class="linkb" href="/public/quotes/main.html?symbol=RAD" onmouseout="com.dowjones.rolloverQuotes.hidelater();" onmouseover="com.dowjones.rolloverQuotes.show(this,'RAD')">Rite Aid (RAD)
+ </a>
+ </td>
+ <td align="right" class="num" style="font-weight:bold;">52,140,382</td>
+ <td class="nnum">4.70</td>
+ <td class="nnum">-0.08</td>
+ <td class="nnum" style="border-right:0px">-1.67</td>
+ </tr>
+ <tr>
+ <td class="num">4</td>
+ <td class="text" style="max-width:307px">
+ <a class="linkb" href="/public/quotes/main.html?symbol=F" onmouseout="com.dowjones.rolloverQuotes.hidelater();" onmouseover="com.dowjones.rolloverQuotes.show(this,'F')">Ford Motor (F)
+ </a>
+ </td>
+ <td align="right" class="num" style="font-weight:bold;">33,745,287</td>
+ <td class="nnum">17.05</td>
+ <td class="nnum">-0.22</td>
+ <td class="nnum" style="border-right:0px">-1.27</td>
+ </tr>
+ <tr>
+ <td class="num">5</td>
+ <td class="text" style="max-width:307px">
+ <a class="linkb" href="/public/quotes/main.html?symbol=PFE" onmouseout="com.dowjones.rolloverQuotes.hidelater();" onmouseover="com.dowjones.rolloverQuotes.show(this,'PFE')">Pfizer (PFE)
+ </a>
+ </td>
+ <td align="right" class="num" style="font-weight:bold;">27,801,853</td>
+ <td class="pnum">28.88</td>
+ <td class="pnum">0.36</td>
+ <td class="pnum" style="border-right:0px">1.26</td>
+ </tr>
+ <tr>
+ <td class="num">6</td>
+ <td class="text" style="max-width:307px">
+ <a class="linkb" href="/public/quotes/main.html?symbol=HTZ" onmouseout="com.dowjones.rolloverQuotes.hidelater();" onmouseover="com.dowjones.rolloverQuotes.show(this,'HTZ')">Hertz Global Hldgs (HTZ)
+ </a>
+ </td>
+ <td align="right" class="num" style="font-weight:bold;">25,821,264</td>
+ <td class="pnum">22.32</td>
+ <td class="pnum">0.69</td>
+ <td class="pnum" style="border-right:0px">3.19</td>
+ </tr>
+ <tr>
+ <td class="num">7</td>
+ <td class="text" style="max-width:307px">
+ <a class="linkb" href="/public/quotes/main.html?symbol=GE" onmouseout="com.dowjones.rolloverQuotes.hidelater();" onmouseover="com.dowjones.rolloverQuotes.show(this,'GE')">General Electric (GE)
+ </a>
+ </td>
+ <td align="right" class="num" style="font-weight:bold;">25,142,064</td>
+ <td class="nnum">24.05</td>
+ <td class="nnum">-0.20</td>
+ <td class="nnum" style="border-right:0px">-0.82</td>
+ </tr>
+ <tr>
+ <td class="num">8</td>
+ <td class="text" style="max-width:307px">
+ <a class="linkb" href="/public/quotes/main.html?symbol=ELN" onmouseout="com.dowjones.rolloverQuotes.hidelater();" onmouseover="com.dowjones.rolloverQuotes.show(this,'ELN')">Elan ADS (ELN)
+ </a>
+ </td>
+ <td align="right" class="num" style="font-weight:bold;">24,725,209</td>
+ <td class="pnum">15.59</td>
+ <td class="pnum">0.08</td>
+ <td class="pnum" style="border-right:0px">0.52</td>
+ </tr>
+ <tr>
+ <td class="num">9</td>
+ <td class="text" style="max-width:307px">
+ <a class="linkb" href="/public/quotes/main.html?symbol=JPM" onmouseout="com.dowjones.rolloverQuotes.hidelater();" onmouseover="com.dowjones.rolloverQuotes.show(this,'JPM')">JPMorgan Chase (JPM)
+ </a>
+ </td>
+ <td align="right" class="num" style="font-weight:bold;">22,402,756</td>
+ <td class="pnum">52.24</td>
+ <td class="pnum">0.35</td>
+ <td class="pnum" style="border-right:0px">0.67</td>
+ </tr>
+ <tr>
+ <td class="num">10</td>
+ <td class="text" style="max-width:307px">
+ <a class="linkb" href="/public/quotes/main.html?symbol=RF" onmouseout="com.dowjones.rolloverQuotes.hidelater();" onmouseover="com.dowjones.rolloverQuotes.show(this,'RF')">Regions Financial (RF)
+ </a>
+ </td>
+ <td align="right" class="num" style="font-weight:bold;">20,790,532</td>
+ <td class="pnum">9.30</td>
+ <td class="pnum">0.12</td>
+ <td class="pnum" style="border-right:0px">1.31</td>
+ </tr>
+ <tr>
+ <td class="num">11</td>
+ <td class="text" style="max-width:307px">
+ <a class="linkb" href="/public/quotes/main.html?symbol=VMEM" onmouseout="com.dowjones.rolloverQuotes.hidelater();" onmouseover="com.dowjones.rolloverQuotes.show(this,'VMEM')">Violin Memory (VMEM)
+ </a>
+ </td>
+ <td align="right" class="num" style="font-weight:bold;">20,669,846</td>
+ <td class="nnum">7.02</td>
+ <td class="nnum">-1.98</td>
+ <td class="nnum" style="border-right:0px">-22.00</td>
+ </tr>
+ <tr>
+ <td class="num">12</td>
+ <td class="text" style="max-width:307px">
+ <a class="linkb" href="/public/quotes/main.html?symbol=C" onmouseout="com.dowjones.rolloverQuotes.hidelater();" onmouseover="com.dowjones.rolloverQuotes.show(this,'C')">Citigroup (C)
+ </a>
+ </td>
+ <td align="right" class="num" style="font-weight:bold;">19,979,932</td>
+ <td class="nnum">48.89</td>
+ <td class="nnum">-0.04</td>
+ <td class="nnum" style="border-right:0px">-0.08</td>
+ </tr>
+ <tr>
+ <td class="num">13</td>
+ <td class="text" style="max-width:307px">
+ <a class="linkb" href="/public/quotes/main.html?symbol=NOK" onmouseout="com.dowjones.rolloverQuotes.hidelater();" onmouseover="com.dowjones.rolloverQuotes.show(this,'NOK')">Nokia ADS (NOK)
+ </a>
+ </td>
+ <td align="right" class="num" style="font-weight:bold;">19,585,075</td>
+ <td class="pnum">6.66</td>
+ <td class="pnum">0.02</td>
+ <td class="pnum" style="border-right:0px">0.30</td>
+ </tr>
+ <tr>
+ <td class="num">14</td>
+ <td class="text" style="max-width:307px">
+ <a class="linkb" href="/public/quotes/main.html?symbol=WFC" onmouseout="com.dowjones.rolloverQuotes.hidelater();" onmouseover="com.dowjones.rolloverQuotes.show(this,'WFC')">Wells Fargo (WFC)
+ </a>
+ </td>
+ <td align="right" class="num" style="font-weight:bold;">19,478,590</td>
+ <td class="nnum">41.59</td>
+ <td class="nnum">-0.02</td>
+ <td class="nnum" style="border-right:0px">-0.05</td>
+ </tr>
+ <tr>
+ <td class="num">15</td>
+ <td class="text" style="max-width:307px">
+ <a class="linkb" href="/public/quotes/main.html?symbol=VALE" onmouseout="com.dowjones.rolloverQuotes.hidelater();" onmouseover="com.dowjones.rolloverQuotes.show(this,'VALE')">Vale ADS (VALE)
+ </a>
+ </td>
+ <td align="right" class="num" style="font-weight:bold;">18,781,987</td>
+ <td class="nnum">15.60</td>
+ <td class="nnum">-0.52</td>
+ <td class="nnum" style="border-right:0px">-3.23</td>
+ </tr>
+ <tr>
+ <td class="num">16</td>
+ <td class="text" style="max-width:307px">
+ <a class="linkb" href="/public/quotes/main.html?symbol=DAL" onmouseout="com.dowjones.rolloverQuotes.hidelater();" onmouseover="com.dowjones.rolloverQuotes.show(this,'DAL')">Delta Air Lines (DAL)
+ </a>
+ </td>
+ <td align="right" class="num" style="font-weight:bold;">16,013,956</td>
+ <td class="nnum">23.57</td>
+ <td class="nnum">-0.44</td>
+ <td class="nnum" style="border-right:0px">-1.83</td>
+ </tr>
+ <tr>
+ <td class="num">17</td>
+ <td class="text" style="max-width:307px">
+ <a class="linkb" href="/public/quotes/main.html?symbol=EMC" onmouseout="com.dowjones.rolloverQuotes.hidelater();" onmouseover="com.dowjones.rolloverQuotes.show(this,'EMC')">EMC (EMC)
+ </a>
+ </td>
+ <td align="right" class="num" style="font-weight:bold;">15,771,252</td>
+ <td class="nnum">26.07</td>
+ <td class="nnum">-0.11</td>
+ <td class="nnum" style="border-right:0px">-0.42</td>
+ </tr>
+ <tr>
+ <td class="num">18</td>
+ <td class="text" style="max-width:307px">
+ <a class="linkb" href="/public/quotes/main.html?symbol=NKE" onmouseout="com.dowjones.rolloverQuotes.hidelater();" onmouseover="com.dowjones.rolloverQuotes.show(this,'NKE')">Nike Cl B (NKE)
+ </a>
+ </td>
+ <td align="right" class="num" style="font-weight:bold;">15,514,717</td>
+ <td class="pnum">73.64</td>
+ <td class="pnum">3.30</td>
+ <td class="pnum" style="border-right:0px">4.69</td>
+ </tr>
+ <tr>
+ <td class="num">19</td>
+ <td class="text" style="max-width:307px">
+ <a class="linkb" href="/public/quotes/main.html?symbol=AA" onmouseout="com.dowjones.rolloverQuotes.hidelater();" onmouseover="com.dowjones.rolloverQuotes.show(this,'AA')">Alcoa (AA)
+ </a>
+ </td>
+ <td align="right" class="num" style="font-weight:bold;">14,061,073</td>
+ <td class="nnum">8.20</td>
+ <td class="nnum">-0.07</td>
+ <td class="nnum" style="border-right:0px">-0.85</td>
+ </tr>
+ <tr>
+ <td class="num">20</td>
+ <td class="text" style="max-width:307px">
+ <a class="linkb" href="/public/quotes/main.html?symbol=GM" onmouseout="com.dowjones.rolloverQuotes.hidelater();" onmouseover="com.dowjones.rolloverQuotes.show(this,'GM')">General Motors (GM)
+ </a>
+ </td>
+ <td align="right" class="num" style="font-weight:bold;">13,984,004</td>
+ <td class="nnum">36.37</td>
+ <td class="nnum">-0.58</td>
+ <td class="nnum" style="border-right:0px">-1.57</td>
+ </tr>
+ <tr>
+ <td class="num">21</td>
+ <td class="text" style="max-width:307px">
+ <a class="linkb" href="/public/quotes/main.html?symbol=ORCL" onmouseout="com.dowjones.rolloverQuotes.hidelater();" onmouseover="com.dowjones.rolloverQuotes.show(this,'ORCL')">Oracle (ORCL)
+ </a>
+ </td>
+ <td align="right" class="num" style="font-weight:bold;">13,856,671</td>
+ <td class="nnum">33.78</td>
+ <td class="nnum">-0.03</td>
+ <td class="nnum" style="border-right:0px">-0.09</td>
+ </tr>
+ <tr>
+ <td class="num">22</td>
+ <td class="text" style="max-width:307px">
+ <a class="linkb" href="/public/quotes/main.html?symbol=T" onmouseout="com.dowjones.rolloverQuotes.hidelater();" onmouseover="com.dowjones.rolloverQuotes.show(this,'T')">AT&T (T)
+ </a>
+ </td>
+ <td align="right" class="num" style="font-weight:bold;">13,736,948</td>
+ <td class="nnum">33.98</td>
+ <td class="nnum">-0.25</td>
+ <td class="nnum" style="border-right:0px">-0.73</td>
+ </tr>
+ <tr>
+ <td class="num">23</td>
+ <td class="text" style="max-width:307px">
+ <a class="linkb" href="/public/quotes/main.html?symbol=TSL" onmouseout="com.dowjones.rolloverQuotes.hidelater();" onmouseover="com.dowjones.rolloverQuotes.show(this,'TSL')">Trina Solar ADS (TSL)
+ </a>
+ </td>
+ <td align="right" class="num" style="font-weight:bold;">13,284,202</td>
+ <td class="pnum">14.83</td>
+ <td class="pnum">1.99</td>
+ <td class="pnum" style="border-right:0px">15.50</td>
+ </tr>
+ <tr>
+ <td class="num">24</td>
+ <td class="text" style="max-width:307px">
+ <a class="linkb" href="/public/quotes/main.html?symbol=YGE" onmouseout="com.dowjones.rolloverQuotes.hidelater();" onmouseover="com.dowjones.rolloverQuotes.show(this,'YGE')">Yingli Green Energy Holding ADS (YGE)
+ </a>
+ </td>
+ <td align="right" class="num" style="font-weight:bold;">12,978,378</td>
+ <td class="pnum">6.73</td>
+ <td class="pnum">0.63</td>
+ <td class="pnum" style="border-right:0px">10.33</td>
+ </tr>
+ <tr>
+ <td class="num">25</td>
+ <td class="text" style="max-width:307px">
+ <a class="linkb" href="/public/quotes/main.html?symbol=PBR" onmouseout="com.dowjones.rolloverQuotes.hidelater();" onmouseover="com.dowjones.rolloverQuotes.show(this,'PBR')">Petroleo Brasileiro ADS (PBR)
+ </a>
+ </td>
+ <td align="right" class="num" style="font-weight:bold;">12,833,660</td>
+ <td class="nnum">15.40</td>
+ <td class="nnum">-0.21</td>
+ <td class="nnum" style="border-right:0px">-1.35</td>
+ </tr>
+ <tr>
+ <td class="num">26</td>
+ <td class="text" style="max-width:307px">
+ <a class="linkb" href="/public/quotes/main.html?symbol=UAL" onmouseout="com.dowjones.rolloverQuotes.hidelater();" onmouseover="com.dowjones.rolloverQuotes.show(this,'UAL')">United Continental Holdings (UAL)
+ </a>
+ </td>
+ <td align="right" class="num" style="font-weight:bold;">12,603,225</td>
+ <td class="nnum">30.91</td>
+ <td class="nnum">-3.16</td>
+ <td class="nnum" style="border-right:0px">-9.28</td>
+ </tr>
+ <tr>
+ <td class="num">27</td>
+ <td class="text" style="max-width:307px">
+ <a class="linkb" href="/public/quotes/main.html?symbol=KO" onmouseout="com.dowjones.rolloverQuotes.hidelater();" onmouseover="com.dowjones.rolloverQuotes.show(this,'KO')">Coca-Cola (KO)
+ </a>
+ </td>
+ <td align="right" class="num" style="font-weight:bold;">12,343,452</td>
+ <td class="nnum">38.40</td>
+ <td class="nnum">-0.34</td>
+ <td class="nnum" style="border-right:0px">-0.88</td>
+ </tr>
+ <tr>
+ <td class="num">28</td>
+ <td class="text" style="max-width:307px">
+ <a class="linkb" href="/public/quotes/main.html?symbol=ACI" onmouseout="com.dowjones.rolloverQuotes.hidelater();" onmouseover="com.dowjones.rolloverQuotes.show(this,'ACI')">Arch Coal (ACI)
+ </a>
+ </td>
+ <td align="right" class="num" style="font-weight:bold;">12,261,138</td>
+ <td class="nnum">4.25</td>
+ <td class="nnum">-0.28</td>
+ <td class="nnum" style="border-right:0px">-6.18</td>
+ </tr>
+ <tr>
+ <td class="num">29</td>
+ <td class="text" style="max-width:307px">
+ <a class="linkb" href="/public/quotes/main.html?symbol=MS" onmouseout="com.dowjones.rolloverQuotes.hidelater();" onmouseover="com.dowjones.rolloverQuotes.show(this,'MS')">Morgan Stanley (MS)
+ </a>
+ </td>
+ <td align="right" class="num" style="font-weight:bold;">11,956,345</td>
+ <td class="nnum">27.08</td>
+ <td class="nnum">-0.07</td>
+ <td class="nnum" style="border-right:0px">-0.26</td>
+ </tr>
+ <tr>
+ <td class="num">30</td>
+ <td class="text" style="max-width:307px">
+ <a class="linkb" href="/public/quotes/main.html?symbol=P" onmouseout="com.dowjones.rolloverQuotes.hidelater();" onmouseover="com.dowjones.rolloverQuotes.show(this,'P')">Pandora Media (P)
+ </a>
+ </td>
+ <td align="right" class="num" style="font-weight:bold;">11,829,963</td>
+ <td class="pnum">25.52</td>
+ <td class="pnum">0.13</td>
+ <td class="pnum" style="border-right:0px">0.51</td>
+ </tr>
+ <tr>
+ <td class="num">31</td>
+ <td class="text" style="max-width:307px">
+ <a class="linkb" href="/public/quotes/main.html?symbol=ABX" onmouseout="com.dowjones.rolloverQuotes.hidelater();" onmouseover="com.dowjones.rolloverQuotes.show(this,'ABX')">Barrick Gold (ABX)
+ </a>
+ </td>
+ <td align="right" class="num" style="font-weight:bold;">11,775,585</td>
+ <td class="num">18.53</td>
+ <td class="num">0.00</td>
+ <td class="num" style="border-right:0px">0.00</td>
+ </tr>
+ <tr>
+ <td class="num">32</td>
+ <td class="text" style="max-width:307px">
+ <a class="linkb" href="/public/quotes/main.html?symbol=ABT" onmouseout="com.dowjones.rolloverQuotes.hidelater();" onmouseover="com.dowjones.rolloverQuotes.show(this,'ABT')">Abbott Laboratories (ABT)
+ </a>
+ </td>
+ <td align="right" class="num" style="font-weight:bold;">11,755,718</td>
+ <td class="nnum">33.14</td>
+ <td class="nnum">-0.52</td>
+ <td class="nnum" style="border-right:0px">-1.54</td>
+ </tr>
+ <tr>
+ <td class="num">33</td>
+ <td class="text" style="max-width:307px">
+ <a class="linkb" href="/public/quotes/main.html?symbol=BSBR" onmouseout="com.dowjones.rolloverQuotes.hidelater();" onmouseover="com.dowjones.rolloverQuotes.show(this,'BSBR')">Banco Santander Brasil ADS (BSBR)
+ </a>
+ </td>
+ <td align="right" class="num" style="font-weight:bold;">11,587,310</td>
+ <td class="pnum">7.01</td>
+ <td class="pnum">0.46</td>
+ <td class="pnum" style="border-right:0px">7.02</td>
+ </tr>
+ <tr>
+ <td class="num">34</td>
+ <td class="text" style="max-width:307px">
+ <a class="linkb" href="/public/quotes/main.html?symbol=AMD" onmouseout="com.dowjones.rolloverQuotes.hidelater();" onmouseover="com.dowjones.rolloverQuotes.show(this,'AMD')">Advanced Micro Devices (AMD)
+ </a>
+ </td>
+ <td align="right" class="num" style="font-weight:bold;">11,337,609</td>
+ <td class="nnum">3.86</td>
+ <td class="nnum">-0.03</td>
+ <td class="nnum" style="border-right:0px">-0.77</td>
+ </tr>
+ <tr>
+ <td class="num">35</td>
+ <td class="text" style="max-width:307px">
+ <a class="linkb" href="/public/quotes/main.html?symbol=NLY" onmouseout="com.dowjones.rolloverQuotes.hidelater();" onmouseover="com.dowjones.rolloverQuotes.show(this,'NLY')">Annaly Capital Management (NLY)
+ </a>
+ </td>
+ <td align="right" class="num" style="font-weight:bold;">11,004,440</td>
+ <td class="nnum">11.63</td>
+ <td class="nnum">-0.07</td>
+ <td class="nnum" style="border-right:0px">-0.60</td>
+ </tr>
+ <tr>
+ <td class="num">36</td>
+ <td class="text" style="max-width:307px">
+ <a class="linkb" href="/public/quotes/main.html?symbol=ANR" onmouseout="com.dowjones.rolloverQuotes.hidelater();" onmouseover="com.dowjones.rolloverQuotes.show(this,'ANR')">Alpha Natural Resources (ANR)
+ </a>
+ </td>
+ <td align="right" class="num" style="font-weight:bold;">10,941,074</td>
+ <td class="nnum">6.08</td>
+ <td class="nnum">-0.19</td>
+ <td class="nnum" style="border-right:0px">-3.03</td>
+ </tr>
+ <tr>
+ <td class="num">37</td>
+ <td class="text" style="max-width:307px">
+ <a class="linkb" href="/public/quotes/main.html?symbol=XOM" onmouseout="com.dowjones.rolloverQuotes.hidelater();" onmouseover="com.dowjones.rolloverQuotes.show(this,'XOM')">Exxon Mobil (XOM)
+ </a>
+ </td>
+ <td align="right" class="num" style="font-weight:bold;">10,668,115</td>
+ <td class="nnum">86.90</td>
+ <td class="nnum">-0.17</td>
+ <td class="nnum" style="border-right:0px">-0.20</td>
+ </tr>
+ <tr>
+ <td class="num">38</td>
+ <td class="text" style="max-width:307px">
+ <a class="linkb" href="/public/quotes/main.html?symbol=ITUB" onmouseout="com.dowjones.rolloverQuotes.hidelater();" onmouseover="com.dowjones.rolloverQuotes.show(this,'ITUB')">Itau Unibanco Holding ADS (ITUB)
+ </a>
+ </td>
+ <td align="right" class="num" style="font-weight:bold;">10,638,803</td>
+ <td class="pnum">14.30</td>
+ <td class="pnum">0.23</td>
+ <td class="pnum" style="border-right:0px">1.63</td>
+ </tr>
+ <tr>
+ <td class="num">39</td>
+ <td class="text" style="max-width:307px">
+ <a class="linkb" href="/public/quotes/main.html?symbol=MRK" onmouseout="com.dowjones.rolloverQuotes.hidelater();" onmouseover="com.dowjones.rolloverQuotes.show(this,'MRK')">Merck&Co (MRK)
+ </a>
+ </td>
+ <td align="right" class="num" style="font-weight:bold;">10,388,152</td>
+ <td class="pnum">47.79</td>
+ <td class="pnum">0.11</td>
+ <td class="pnum" style="border-right:0px">0.23</td>
+ </tr>
+ <tr>
+ <td class="num">40</td>
+ <td class="text" style="max-width:307px">
+ <a class="linkb" href="/public/quotes/main.html?symbol=ALU" onmouseout="com.dowjones.rolloverQuotes.hidelater();" onmouseover="com.dowjones.rolloverQuotes.show(this,'ALU')">Alcatel-Lucent ADS (ALU)
+ </a>
+ </td>
+ <td align="right" class="num" style="font-weight:bold;">10,181,833</td>
+ <td class="pnum">3.65</td>
+ <td class="pnum">0.01</td>
+ <td class="pnum" style="border-right:0px">0.27</td>
+ </tr>
+ <tr>
+ <td class="num">41</td>
+ <td class="text" style="max-width:307px">
+ <a class="linkb" href="/public/quotes/main.html?symbol=VZ" onmouseout="com.dowjones.rolloverQuotes.hidelater();" onmouseover="com.dowjones.rolloverQuotes.show(this,'VZ')">Verizon Communications (VZ)
+ </a>
+ </td>
+ <td align="right" class="num" style="font-weight:bold;">10,139,321</td>
+ <td class="nnum">47.00</td>
+ <td class="nnum">-0.67</td>
+ <td class="nnum" style="border-right:0px">-1.41</td>
+ </tr>
+ <tr>
+ <td class="num">42</td>
+ <td class="text" style="max-width:307px">
+ <a class="linkb" href="/public/quotes/main.html?symbol=MHR" onmouseout="com.dowjones.rolloverQuotes.hidelater();" onmouseover="com.dowjones.rolloverQuotes.show(this,'MHR')">Magnum Hunter Resources (MHR)
+ </a>
+ </td>
+ <td align="right" class="num" style="font-weight:bold;">10,004,303</td>
+ <td class="pnum">6.33</td>
+ <td class="pnum">0.46</td>
+ <td class="pnum" style="border-right:0px">7.84</td>
+ </tr>
+ <tr>
+ <td class="num">43</td>
+ <td class="text" style="max-width:307px">
+ <a class="linkb" href="/public/quotes/main.html?symbol=HPQ" onmouseout="com.dowjones.rolloverQuotes.hidelater();" onmouseover="com.dowjones.rolloverQuotes.show(this,'HPQ')">Hewlett-Packard (HPQ)
+ </a>
+ </td>
+ <td align="right" class="num" style="font-weight:bold;">9,948,935</td>
+ <td class="nnum">21.17</td>
+ <td class="nnum">-0.13</td>
+ <td class="nnum" style="border-right:0px">-0.61</td>
+ </tr>
+ <tr>
+ <td class="num">44</td>
+ <td class="text" style="max-width:307px">
+ <a class="linkb" href="/public/quotes/main.html?symbol=PHM" onmouseout="com.dowjones.rolloverQuotes.hidelater();" onmouseover="com.dowjones.rolloverQuotes.show(this,'PHM')">PulteGroup (PHM)
+ </a>
+ </td>
+ <td align="right" class="num" style="font-weight:bold;">9,899,141</td>
+ <td class="nnum">16.57</td>
+ <td class="nnum">-0.41</td>
+ <td class="nnum" style="border-right:0px">-2.41</td>
+ </tr>
+ <tr>
+ <td class="num">45</td>
+ <td class="text" style="max-width:307px">
+ <a class="linkb" href="/public/quotes/main.html?symbol=SOL" onmouseout="com.dowjones.rolloverQuotes.hidelater();" onmouseover="com.dowjones.rolloverQuotes.show(this,'SOL')">ReneSola ADS (SOL)
+ </a>
+ </td>
+ <td align="right" class="num" style="font-weight:bold;">9,667,438</td>
+ <td class="pnum">4.84</td>
+ <td class="pnum">0.39</td>
+ <td class="pnum" style="border-right:0px">8.76</td>
+ </tr>
+ <tr>
+ <td class="num">46</td>
+ <td class="text" style="max-width:307px">
+ <a class="linkb" href="/public/quotes/main.html?symbol=GLW" onmouseout="com.dowjones.rolloverQuotes.hidelater();" onmouseover="com.dowjones.rolloverQuotes.show(this,'GLW')">Corning (GLW)
+ </a>
+ </td>
+ <td align="right" class="num" style="font-weight:bold;">9,547,265</td>
+ <td class="nnum">14.73</td>
+ <td class="nnum">-0.21</td>
+ <td class="nnum" style="border-right:0px">-1.41</td>
+ </tr>
+ <tr>
+ <td class="num">47</td>
+ <td class="text" style="max-width:307px">
+ <a class="linkb" href="/public/quotes/main.html?symbol=COLE" onmouseout="com.dowjones.rolloverQuotes.hidelater();" onmouseover="com.dowjones.rolloverQuotes.show(this,'COLE')">Cole Real Estate Investments (COLE)
+ </a>
+ </td>
+ <td align="right" class="num" style="font-weight:bold;">9,544,021</td>
+ <td class="pnum">12.21</td>
+ <td class="pnum">0.01</td>
+ <td class="pnum" style="border-right:0px">0.08</td>
+ </tr>
+ <tr>
+ <td class="num">48</td>
+ <td class="text" style="max-width:307px">
+ <a class="linkb" href="/public/quotes/main.html?symbol=DOW" onmouseout="com.dowjones.rolloverQuotes.hidelater();" onmouseover="com.dowjones.rolloverQuotes.show(this,'DOW')">Dow Chemical (DOW)
+ </a>
+ </td>
+ <td align="right" class="num" style="font-weight:bold;">9,150,479</td>
+ <td class="nnum">39.02</td>
+ <td class="nnum">-0.97</td>
+ <td class="nnum" style="border-right:0px">-2.43</td>
+ </tr>
+ <tr>
+ <td class="num">49</td>
+ <td class="text" style="max-width:307px">
+ <a class="linkb" href="/public/quotes/main.html?symbol=IGT" onmouseout="com.dowjones.rolloverQuotes.hidelater();" onmouseover="com.dowjones.rolloverQuotes.show(this,'IGT')">International Game Technology (IGT)
+ </a>
+ </td>
+ <td align="right" class="num" style="font-weight:bold;">9,129,123</td>
+ <td class="nnum">19.23</td>
+ <td class="nnum">-1.44</td>
+ <td class="nnum" style="border-right:0px">-6.97</td>
+ </tr>
+ <tr>
+ <td class="num">50</td>
+ <td class="text" style="max-width:307px">
+ <a class="linkb" href="/public/quotes/main.html?symbol=ACN" onmouseout="com.dowjones.rolloverQuotes.hidelater();" onmouseover="com.dowjones.rolloverQuotes.show(this,'ACN')">Accenture Cl A (ACN)
+ </a>
+ </td>
+ <td align="right" class="num" style="font-weight:bold;">8,773,260</td>
+ <td class="nnum">74.09</td>
+ <td class="nnum">-1.78</td>
+ <td class="nnum" style="border-right:0px">-2.35</td>
+ </tr>
+ <tr>
+ <td class="num">51</td>
+ <td class="text" style="max-width:307px">
+ <a class="linkb" href="/public/quotes/main.html?symbol=KEY" onmouseout="com.dowjones.rolloverQuotes.hidelater();" onmouseover="com.dowjones.rolloverQuotes.show(this,'KEY')">KeyCorp (KEY)
+ </a>
+ </td>
+ <td align="right" class="num" style="font-weight:bold;">8,599,333</td>
+ <td class="pnum">11.36</td>
+ <td class="pnum">0.02</td>
+ <td class="pnum" style="border-right:0px">0.18</td>
+ </tr>
+ <tr>
+ <td class="num">52</td>
+ <td class="text" style="max-width:307px">
+ <a class="linkb" href="/public/quotes/main.html?symbol=BMY" onmouseout="com.dowjones.rolloverQuotes.hidelater();" onmouseover="com.dowjones.rolloverQuotes.show(this,'BMY')">Bristol-Myers Squibb (BMY)
+ </a>
+ </td>
+ <td align="right" class="num" style="font-weight:bold;">8,440,709</td>
+ <td class="nnum">46.20</td>
+ <td class="nnum">-0.73</td>
+ <td class="nnum" style="border-right:0px">-1.56</td>
+ </tr>
+ <tr>
+ <td class="num">53</td>
+ <td class="text" style="max-width:307px">
+ <a class="linkb" href="/public/quotes/main.html?symbol=SID" onmouseout="com.dowjones.rolloverQuotes.hidelater();" onmouseover="com.dowjones.rolloverQuotes.show(this,'SID')">Companhia Siderurgica Nacional ADS (SID)
+ </a>
+ </td>
+ <td align="right" class="num" style="font-weight:bold;">8,437,636</td>
+ <td class="nnum">4.36</td>
+ <td class="nnum">-0.05</td>
+ <td class="nnum" style="border-right:0px">-1.13</td>
+ </tr>
+ <tr>
+ <td class="num">54</td>
+ <td class="text" style="max-width:307px">
+ <a class="linkb" href="/public/quotes/main.html?symbol=HRB" onmouseout="com.dowjones.rolloverQuotes.hidelater();" onmouseover="com.dowjones.rolloverQuotes.show(this,'HRB')">H&R Block (HRB)
+ </a>
+ </td>
+ <td align="right" class="num" style="font-weight:bold;">8,240,984</td>
+ <td class="pnum">26.36</td>
+ <td class="pnum">0.31</td>
+ <td class="pnum" style="border-right:0px">1.19</td>
+ </tr>
+ <tr>
+ <td class="num">55</td>
+ <td class="text" style="max-width:307px">
+ <a class="linkb" href="/public/quotes/main.html?symbol=MTG" onmouseout="com.dowjones.rolloverQuotes.hidelater();" onmouseover="com.dowjones.rolloverQuotes.show(this,'MTG')">MGIC Investment (MTG)
+ </a>
+ </td>
+ <td align="right" class="num" style="font-weight:bold;">8,135,037</td>
+ <td class="nnum">7.26</td>
+ <td class="nnum">-0.10</td>
+ <td class="nnum" style="border-right:0px">-1.36</td>
+ </tr>
+ <tr>
+ <td class="num">56</td>
+ <td class="text" style="max-width:307px">
+ <a class="linkb" href="/public/quotes/main.html?symbol=RNG" onmouseout="com.dowjones.rolloverQuotes.hidelater();" onmouseover="com.dowjones.rolloverQuotes.show(this,'RNG')">RingCentral Cl A (RNG)
+ </a>
+ </td>
+ <td align="right" class="num" style="font-weight:bold;">8,117,469</td>
+ <td class="pnum">18.20</td>
+ <td class="pnum">5.20</td>
+ <td class="pnum" style="border-right:0px">40.00</td>
+ </tr>
+ <tr>
+ <td class="num">57</td>
+ <td class="text" style="max-width:307px">
+ <a class="linkb" href="/public/quotes/main.html?symbol=X" onmouseout="com.dowjones.rolloverQuotes.hidelater();" onmouseover="com.dowjones.rolloverQuotes.show(this,'X')">United States Steel (X)
+ </a>
+ </td>
+ <td align="right" class="num" style="font-weight:bold;">8,107,899</td>
+ <td class="nnum">20.44</td>
+ <td class="nnum">-0.66</td>
+ <td class="nnum" style="border-right:0px">-3.13</td>
+ </tr>
+ <tr>
+ <td class="num">58</td>
+ <td class="text" style="max-width:307px">
+ <a class="linkb" href="/public/quotes/main.html?symbol=CLF" onmouseout="com.dowjones.rolloverQuotes.hidelater();" onmouseover="com.dowjones.rolloverQuotes.show(this,'CLF')">Cliffs Natural Resources (CLF)
+ </a>
+ </td>
+ <td align="right" class="num" style="font-weight:bold;">8,041,572</td>
+ <td class="nnum">21.00</td>
+ <td class="nnum">-0.83</td>
+ <td class="nnum" style="border-right:0px">-3.80</td>
+ </tr>
+ <tr>
+ <td class="num">59</td>
+ <td class="text" style="max-width:307px">
+ <a class="linkb" href="/public/quotes/main.html?symbol=NEM" onmouseout="com.dowjones.rolloverQuotes.hidelater();" onmouseover="com.dowjones.rolloverQuotes.show(this,'NEM')">Newmont Mining (NEM)
+ </a>
+ </td>
+ <td align="right" class="num" style="font-weight:bold;">8,014,250</td>
+ <td class="nnum">27.98</td>
+ <td class="nnum">-0.19</td>
+ <td class="nnum" style="border-right:0px">-0.67</td>
+ </tr>
+ <tr>
+ <td class="num">60</td>
+ <td class="text" style="max-width:307px">
+ <a class="linkb" href="/public/quotes/main.html?symbol=MO" onmouseout="com.dowjones.rolloverQuotes.hidelater();" onmouseover="com.dowjones.rolloverQuotes.show(this,'MO')">Altria Group (MO)
+ </a>
+ </td>
+ <td align="right" class="num" style="font-weight:bold;">7,786,048</td>
+ <td class="nnum">34.71</td>
+ <td class="nnum">-0.29</td>
+ <td class="nnum" style="border-right:0px">-0.83</td>
+ </tr>
+ <tr>
+ <td class="num">61</td>
+ <td class="text" style="max-width:307px">
+ <a class="linkb" href="/public/quotes/main.html?symbol=SD" onmouseout="com.dowjones.rolloverQuotes.hidelater();" onmouseover="com.dowjones.rolloverQuotes.show(this,'SD')">SandRidge Energy (SD)
+ </a>
+ </td>
+ <td align="right" class="num" style="font-weight:bold;">7,782,745</td>
+ <td class="nnum">5.93</td>
+ <td class="nnum">-0.06</td>
+ <td class="nnum" style="border-right:0px">-1.00</td>
+ </tr>
+ <tr>
+ <td class="num">62</td>
+ <td class="text" style="max-width:307px">
+ <a class="linkb" href="/public/quotes/main.html?symbol=MCP" onmouseout="com.dowjones.rolloverQuotes.hidelater();" onmouseover="com.dowjones.rolloverQuotes.show(this,'MCP')">Molycorp (MCP)
+ </a>
+ </td>
+ <td align="right" class="num" style="font-weight:bold;">7,735,831</td>
+ <td class="nnum">6.73</td>
+ <td class="nnum">-0.45</td>
+ <td class="nnum" style="border-right:0px">-6.27</td>
+ </tr>
+ <tr>
+ <td class="num">63</td>
+ <td class="text" style="max-width:307px">
+ <a class="linkb" href="/public/quotes/main.html?symbol=HAL" onmouseout="com.dowjones.rolloverQuotes.hidelater();" onmouseover="com.dowjones.rolloverQuotes.show(this,'HAL')">Halliburton (HAL)
+ </a>
+ </td>
+ <td align="right" class="num" style="font-weight:bold;">7,728,735</td>
+ <td class="nnum">48.39</td>
+ <td class="nnum">-0.32</td>
+ <td class="nnum" style="border-right:0px">-0.66</td>
+ </tr>
+ <tr>
+ <td class="num">64</td>
+ <td class="text" style="max-width:307px">
+ <a class="linkb" href="/public/quotes/main.html?symbol=TSM" onmouseout="com.dowjones.rolloverQuotes.hidelater();" onmouseover="com.dowjones.rolloverQuotes.show(this,'TSM')">Taiwan Semiconductor Manufacturing ADS (TSM)
+ </a>
+ </td>
+ <td align="right" class="num" style="font-weight:bold;">7,661,397</td>
+ <td class="nnum">17.07</td>
+ <td class="nnum">-0.25</td>
+ <td class="nnum" style="border-right:0px">-1.44</td>
+ </tr>
+ <tr>
+ <td class="num">65</td>
+ <td class="text" style="max-width:307px">
+ <a class="linkb" href="/public/quotes/main.html?symbol=FCX" onmouseout="com.dowjones.rolloverQuotes.hidelater();" onmouseover="com.dowjones.rolloverQuotes.show(this,'FCX')">Freeport-McMoRan Copper&Gold (FCX)
+ </a>
+ </td>
+ <td align="right" class="num" style="font-weight:bold;">7,622,803</td>
+ <td class="nnum">33.42</td>
+ <td class="nnum">-0.45</td>
+ <td class="nnum" style="border-right:0px">-1.33</td>
+ </tr>
+ <tr>
+ <td class="num">66</td>
+ <td class="text" style="max-width:307px">
+ <a class="linkb" href="/public/quotes/main.html?symbol=KOG" onmouseout="com.dowjones.rolloverQuotes.hidelater();" onmouseover="com.dowjones.rolloverQuotes.show(this,'KOG')">Kodiak Oil&Gas (KOG)
+ </a>
+ </td>
+ <td align="right" class="num" style="font-weight:bold;">7,543,806</td>
+ <td class="pnum">11.94</td>
+ <td class="pnum">0.16</td>
+ <td class="pnum" style="border-right:0px">1.36</td>
+ </tr>
+ <tr>
+ <td class="num">67</td>
+ <td class="text" style="max-width:307px">
+ <a class="linkb" href="/public/quotes/main.html?symbol=XRX" onmouseout="com.dowjones.rolloverQuotes.hidelater();" onmouseover="com.dowjones.rolloverQuotes.show(this,'XRX')">Xerox (XRX)
+ </a>
+ </td>
+ <td align="right" class="num" style="font-weight:bold;">7,440,689</td>
+ <td class="nnum">10.37</td>
+ <td class="nnum">-0.01</td>
+ <td class="nnum" style="border-right:0px">-0.10</td>
+ </tr>
+ <tr>
+ <td class="num">68</td>
+ <td class="text" style="max-width:307px">
+ <a class="linkb" href="/public/quotes/main.html?symbol=S" onmouseout="com.dowjones.rolloverQuotes.hidelater();" onmouseover="com.dowjones.rolloverQuotes.show(this,'S')">Sprint (S)
+ </a>
+ </td>
+ <td align="right" class="num" style="font-weight:bold;">7,291,351</td>
+ <td class="nnum">6.16</td>
+ <td class="nnum">-0.14</td>
+ <td class="nnum" style="border-right:0px">-2.22</td>
+ </tr>
+ <tr>
+ <td class="num">69</td>
+ <td class="text" style="max-width:307px">
+ <a class="linkb" href="/public/quotes/main.html?symbol=TWO" onmouseout="com.dowjones.rolloverQuotes.hidelater();" onmouseover="com.dowjones.rolloverQuotes.show(this,'TWO')">Two Harbors Investment (TWO)
+ </a>
+ </td>
+ <td align="right" class="num" style="font-weight:bold;">7,153,803</td>
+ <td class="pnum">9.79</td>
+ <td class="pnum">0.05</td>
+ <td class="pnum" style="border-right:0px">0.51</td>
+ </tr>
+ <tr>
+ <td class="num">70</td>
+ <td class="text" style="max-width:307px">
+ <a class="linkb" href="/public/quotes/main.html?symbol=WLT" onmouseout="com.dowjones.rolloverQuotes.hidelater();" onmouseover="com.dowjones.rolloverQuotes.show(this,'WLT')">Walter Energy (WLT)
+ </a>
+ </td>
+ <td align="right" class="num" style="font-weight:bold;">7,152,192</td>
+ <td class="nnum">14.19</td>
+ <td class="nnum">-0.36</td>
+ <td class="nnum" style="border-right:0px">-2.47</td>
+ </tr>
+ <tr>
+ <td class="num">71</td>
+ <td class="text" style="max-width:307px">
+ <a class="linkb" href="/public/quotes/main.html?symbol=IP" onmouseout="com.dowjones.rolloverQuotes.hidelater();" onmouseover="com.dowjones.rolloverQuotes.show(this,'IP')">International Paper (IP)
+ </a>
+ </td>
+ <td align="right" class="num" style="font-weight:bold;">7,123,722</td>
+ <td class="nnum">45.44</td>
+ <td class="nnum">-1.85</td>
+ <td class="nnum" style="border-right:0px">-3.91</td>
+ </tr>
+ <tr>
+ <td class="num">72</td>
+ <td class="text" style="max-width:307px">
+ <a class="linkb" href="/public/quotes/main.html?symbol=PPL" onmouseout="com.dowjones.rolloverQuotes.hidelater();" onmouseover="com.dowjones.rolloverQuotes.show(this,'PPL')">PPL (PPL)
+ </a>
+ </td>
+ <td align="right" class="num" style="font-weight:bold;">7,026,292</td>
+ <td class="nnum">30.34</td>
+ <td class="nnum">-0.13</td>
+ <td class="nnum" style="border-right:0px">-0.43</td>
+ </tr>
+ <tr>
+ <td class="num">73</td>
+ <td class="text" style="max-width:307px">
+ <a class="linkb" href="/public/quotes/main.html?symbol=GG" onmouseout="com.dowjones.rolloverQuotes.hidelater();" onmouseover="com.dowjones.rolloverQuotes.show(this,'GG')">Goldcorp (GG)
+ </a>
+ </td>
+ <td align="right" class="num" style="font-weight:bold;">6,857,447</td>
+ <td class="pnum">25.76</td>
+ <td class="pnum">0.08</td>
+ <td class="pnum" style="border-right:0px">0.31</td>
+ </tr>
+ <tr>
+ <td class="num">74</td>
+ <td class="text" style="max-width:307px">
+ <a class="linkb" href="/public/quotes/main.html?symbol=TWX" onmouseout="com.dowjones.rolloverQuotes.hidelater();" onmouseover="com.dowjones.rolloverQuotes.show(this,'TWX')">Time Warner (TWX)
+ </a>
+ </td>
+ <td align="right" class="num" style="font-weight:bold;">6,807,237</td>
+ <td class="pnum">66.20</td>
+ <td class="pnum">1.33</td>
+ <td class="pnum" style="border-right:0px">2.05</td>
+ </tr>
+ <tr>
+ <td class="num">75</td>
+ <td class="text" style="max-width:307px">
+ <a class="linkb" href="/public/quotes/main.html?symbol=SNV" onmouseout="com.dowjones.rolloverQuotes.hidelater();" onmouseover="com.dowjones.rolloverQuotes.show(this,'SNV')">Synovus Financial (SNV)
+ </a>
+ </td>
+ <td align="right" class="num" style="font-weight:bold;">6,764,805</td>
+ <td class="pnum">3.29</td>
+ <td class="pnum">0.02</td>
+ <td class="pnum" style="border-right:0px">0.61</td>
+ </tr>
+ <tr>
+ <td class="num">76</td>
+ <td class="text" style="max-width:307px">
+ <a class="linkb" href="/public/quotes/main.html?symbol=AKS" onmouseout="com.dowjones.rolloverQuotes.hidelater();" onmouseover="com.dowjones.rolloverQuotes.show(this,'AKS')">AK Steel Holding (AKS)
+ </a>
+ </td>
+ <td align="right" class="num" style="font-weight:bold;">6,662,599</td>
+ <td class="nnum">3.83</td>
+ <td class="nnum">-0.11</td>
+ <td class="nnum" style="border-right:0px">-2.79</td>
+ </tr>
+ <tr>
+ <td class="num">77</td>
+ <td class="text" style="max-width:307px">
+ <a class="linkb" href="/public/quotes/main.html?symbol=BSX" onmouseout="com.dowjones.rolloverQuotes.hidelater();" onmouseover="com.dowjones.rolloverQuotes.show(this,'BSX')">Boston Scientific (BSX)
+ </a>
+ </td>
+ <td align="right" class="num" style="font-weight:bold;">6,629,084</td>
+ <td class="nnum">11.52</td>
+ <td class="nnum">-0.15</td>
+ <td class="nnum" style="border-right:0px">-1.29</td>
+ </tr>
+ <tr>
+ <td class="num">78</td>
+ <td class="text" style="max-width:307px">
+ <a class="linkb" href="/public/quotes/main.html?symbol=EGO" onmouseout="com.dowjones.rolloverQuotes.hidelater();" onmouseover="com.dowjones.rolloverQuotes.show(this,'EGO')">Eldorado Gold (EGO)
+ </a>
+ </td>
+ <td align="right" class="num" style="font-weight:bold;">6,596,902</td>
+ <td class="nnum">6.65</td>
+ <td class="nnum">-0.03</td>
+ <td class="nnum" style="border-right:0px">-0.45</td>
+ </tr>
+ <tr>
+ <td class="num">79</td>
+ <td class="text" style="max-width:307px">
+ <a class="linkb" href="/public/quotes/main.html?symbol=NR" onmouseout="com.dowjones.rolloverQuotes.hidelater();" onmouseover="com.dowjones.rolloverQuotes.show(this,'NR')">Newpark Resources (NR)
+ </a>
+ </td>
+ <td align="right" class="num" style="font-weight:bold;">6,552,453</td>
+ <td class="pnum">12.56</td>
+ <td class="pnum">0.09</td>
+ <td class="pnum" style="border-right:0px">0.72</td>
+ </tr>
+ <tr>
+ <td class="num">80</td>
+ <td class="text" style="max-width:307px">
+ <a class="linkb" href="/public/quotes/main.html?symbol=ABBV" onmouseout="com.dowjones.rolloverQuotes.hidelater();" onmouseover="com.dowjones.rolloverQuotes.show(this,'ABBV')">AbbVie (ABBV)
+ </a>
+ </td>
+ <td align="right" class="num" style="font-weight:bold;">6,525,524</td>
+ <td class="nnum">44.33</td>
+ <td class="nnum">-0.67</td>
+ <td class="nnum" style="border-right:0px">-1.49</td>
+ </tr>
+ <tr>
+ <td class="num">81</td>
+ <td class="text" style="max-width:307px">
+ <a class="linkb" href="/public/quotes/main.html?symbol=MBI" onmouseout="com.dowjones.rolloverQuotes.hidelater();" onmouseover="com.dowjones.rolloverQuotes.show(this,'MBI')">MBIA (MBI)
+ </a>
+ </td>
+ <td align="right" class="num" style="font-weight:bold;">6,416,587</td>
+ <td class="nnum">10.38</td>
+ <td class="nnum">-0.43</td>
+ <td class="nnum" style="border-right:0px">-3.98</td>
+ </tr>
+ <tr>
+ <td class="num">82</td>
+ <td class="text" style="max-width:307px">
+ <a class="linkb" href="/public/quotes/main.html?symbol=SAI" onmouseout="com.dowjones.rolloverQuotes.hidelater();" onmouseover="com.dowjones.rolloverQuotes.show(this,'SAI')">SAIC (SAI)
+ </a>
+ </td>
+ <td align="right" class="num" style="font-weight:bold;">6,404,587</td>
+ <td class="pnum">16.03</td>
+ <td class="pnum">0.13</td>
+ <td class="pnum" style="border-right:0px">0.82</td>
+ </tr>
+ <tr>
+ <td class="num">83</td>
+ <td class="text" style="max-width:307px">
+ <a class="linkb" href="/public/quotes/main.html?symbol=PG" onmouseout="com.dowjones.rolloverQuotes.hidelater();" onmouseover="com.dowjones.rolloverQuotes.show(this,'PG')">Procter&Gamble (PG)
+ </a>
+ </td>
+ <td align="right" class="num" style="font-weight:bold;">6,389,143</td>
+ <td class="nnum">77.21</td>
+ <td class="nnum">-0.84</td>
+ <td class="nnum" style="border-right:0px">-1.08</td>
+ </tr>
+ <tr>
+ <td class="num">84</td>
+ <td class="text" style="max-width:307px">
+ <a class="linkb" href="/public/quotes/main.html?symbol=IAG" onmouseout="com.dowjones.rolloverQuotes.hidelater();" onmouseover="com.dowjones.rolloverQuotes.show(this,'IAG')">IAMGOLD (IAG)
+ </a>
+ </td>
+ <td align="right" class="num" style="font-weight:bold;">6,293,001</td>
+ <td class="nnum">4.77</td>
+ <td class="nnum">-0.06</td>
+ <td class="nnum" style="border-right:0px">-1.24</td>
+ </tr>
+ <tr>
+ <td class="num">85</td>
+ <td class="text" style="max-width:307px">
+ <a class="linkb" href="/public/quotes/main.html?symbol=SWY" onmouseout="com.dowjones.rolloverQuotes.hidelater();" onmouseover="com.dowjones.rolloverQuotes.show(this,'SWY')">Safeway (SWY)
+ </a>
+ </td>
+ <td align="right" class="num" style="font-weight:bold;">6,268,184</td>
+ <td class="nnum">32.25</td>
+ <td class="nnum">-0.29</td>
+ <td class="nnum" style="border-right:0px">-0.89</td>
+ </tr>
+ <tr>
+ <td class="num">86</td>
+ <td class="text" style="max-width:307px">
+ <a class="linkb" href="/public/quotes/main.html?symbol=KGC" onmouseout="com.dowjones.rolloverQuotes.hidelater();" onmouseover="com.dowjones.rolloverQuotes.show(this,'KGC')">Kinross Gold (KGC)
+ </a>
+ </td>
+ <td align="right" class="num" style="font-weight:bold;">6,112,658</td>
+ <td class="nnum">4.99</td>
+ <td class="nnum">-0.03</td>
+ <td class="nnum" style="border-right:0px">-0.60</td>
+ </tr>
+ <tr>
+ <td class="num">87</td>
+ <td class="text" style="max-width:307px">
+ <a class="linkb" href="/public/quotes/main.html?symbol=MGM" onmouseout="com.dowjones.rolloverQuotes.hidelater();" onmouseover="com.dowjones.rolloverQuotes.show(this,'MGM')">MGM Resorts International (MGM)
+ </a>
+ </td>
+ <td align="right" class="num" style="font-weight:bold;">5,986,143</td>
+ <td class="nnum">20.22</td>
+ <td class="nnum">-0.05</td>
+ <td class="nnum" style="border-right:0px">-0.25</td>
+ </tr>
+ <tr>
+ <td class="num">88</td>
+ <td class="text" style="max-width:307px">
+ <a class="linkb" href="/public/quotes/main.html?symbol=CX" onmouseout="com.dowjones.rolloverQuotes.hidelater();" onmouseover="com.dowjones.rolloverQuotes.show(this,'CX')">Cemex ADS (CX)
+ </a>
+ </td>
+ <td align="right" class="num" style="font-weight:bold;">5,907,040</td>
+ <td class="nnum">11.27</td>
+ <td class="nnum">-0.06</td>
+ <td class="nnum" style="border-right:0px">-0.53</td>
+ </tr>
+ <tr>
+ <td class="num">89</td>
+ <td class="text" style="max-width:307px">
+ <a class="linkb" href="/public/quotes/main.html?symbol=AIG" onmouseout="com.dowjones.rolloverQuotes.hidelater();" onmouseover="com.dowjones.rolloverQuotes.show(this,'AIG')">American International Group (AIG)
+ </a>
+ </td>
+ <td align="right" class="num" style="font-weight:bold;">5,900,133</td>
+ <td class="nnum">49.15</td>
+ <td class="nnum">-0.30</td>
+ <td class="nnum" style="border-right:0px">-0.61</td>
+ </tr>
+ <tr>
+ <td class="num">90</td>
+ <td class="text" style="max-width:307px">
+ <a class="linkb" href="/public/quotes/main.html?symbol=CHK" onmouseout="com.dowjones.rolloverQuotes.hidelater();" onmouseover="com.dowjones.rolloverQuotes.show(this,'CHK')">Chesapeake Energy (CHK)
+ </a>
+ </td>
+ <td align="right" class="num" style="font-weight:bold;">5,848,016</td>
+ <td class="nnum">26.21</td>
+ <td class="nnum">-0.20</td>
+ <td class="nnum" style="border-right:0px">-0.76</td>
+ </tr>
+ <tr>
+ <td class="num">91</td>
+ <td class="text" style="max-width:307px">
+ <a class="linkb" href="/public/quotes/main.html?symbol=RSH" onmouseout="com.dowjones.rolloverQuotes.hidelater();" onmouseover="com.dowjones.rolloverQuotes.show(this,'RSH')">RadioShack (RSH)
+ </a>
+ </td>
+ <td align="right" class="num" style="font-weight:bold;">5,837,833</td>
+ <td class="nnum">3.44</td>
+ <td class="nnum">-0.43</td>
+ <td class="nnum" style="border-right:0px">-11.11</td>
+ </tr>
+ <tr>
+ <td class="num">92</td>
+ <td class="text" style="max-width:307px">
+ <a class="linkb" href="/public/quotes/main.html?symbol=USB" onmouseout="com.dowjones.rolloverQuotes.hidelater();" onmouseover="com.dowjones.rolloverQuotes.show(this,'USB')">U.S. Bancorp (USB)
+ </a>
+ </td>
+ <td align="right" class="num" style="font-weight:bold;">5,814,373</td>
+ <td class="nnum">36.50</td>
+ <td class="nnum">-0.04</td>
+ <td class="nnum" style="border-right:0px">-0.11</td>
+ </tr>
+ <tr>
+ <td class="num">93</td>
+ <td class="text" style="max-width:307px">
+ <a class="linkb" href="/public/quotes/main.html?symbol=LLY" onmouseout="com.dowjones.rolloverQuotes.hidelater();" onmouseover="com.dowjones.rolloverQuotes.show(this,'LLY')">Eli Lilly (LLY)
+ </a>
+ </td>
+ <td align="right" class="num" style="font-weight:bold;">5,776,991</td>
+ <td class="nnum">50.50</td>
+ <td class="nnum">-0.54</td>
+ <td class="nnum" style="border-right:0px">-1.06</td>
+ </tr>
+ <tr>
+ <td class="num">94</td>
+ <td class="text" style="max-width:307px">
+ <a class="linkb" href="/public/quotes/main.html?symbol=MET" onmouseout="com.dowjones.rolloverQuotes.hidelater();" onmouseover="com.dowjones.rolloverQuotes.show(this,'MET')">MetLife (MET)
+ </a>
+ </td>
+ <td align="right" class="num" style="font-weight:bold;">5,774,996</td>
+ <td class="nnum">47.21</td>
+ <td class="nnum">-0.37</td>
+ <td class="nnum" style="border-right:0px">-0.78</td>
+ </tr>
+ <tr>
+ <td class="num">95</td>
+ <td class="text" style="max-width:307px">
+ <a class="linkb" href="/public/quotes/main.html?symbol=AUY" onmouseout="com.dowjones.rolloverQuotes.hidelater();" onmouseover="com.dowjones.rolloverQuotes.show(this,'AUY')">Yamana Gold (AUY)
+ </a>
+ </td>
+ <td align="right" class="num" style="font-weight:bold;">5,742,426</td>
+ <td class="pnum">10.37</td>
+ <td class="pnum">0.03</td>
+ <td class="pnum" style="border-right:0px">0.29</td>
+ </tr>
+ <tr>
+ <td class="num">96</td>
+ <td class="text" style="max-width:307px">
+ <a class="linkb" href="/public/quotes/main.html?symbol=CBS" onmouseout="com.dowjones.rolloverQuotes.hidelater();" onmouseover="com.dowjones.rolloverQuotes.show(this,'CBS')">CBS Cl B (CBS)
+ </a>
+ </td>
+ <td align="right" class="num" style="font-weight:bold;">5,718,858</td>
+ <td class="nnum">55.50</td>
+ <td class="nnum">-0.06</td>
+ <td class="nnum" style="border-right:0px">-0.11</td>
+ </tr>
+ <tr>
+ <td class="num">97</td>
+ <td class="text" style="max-width:307px">
+ <a class="linkb" href="/public/quotes/main.html?symbol=CSX" onmouseout="com.dowjones.rolloverQuotes.hidelater();" onmouseover="com.dowjones.rolloverQuotes.show(this,'CSX')">CSX (CSX)
+ </a>
+ </td>
+ <td align="right" class="num" style="font-weight:bold;">5,710,066</td>
+ <td class="nnum">25.85</td>
+ <td class="nnum">-0.13</td>
+ <td class="nnum" style="border-right:0px">-0.50</td>
+ </tr>
+ <tr>
+ <td class="num">98</td>
+ <td class="text" style="max-width:307px">
+ <a class="linkb" href="/public/quotes/main.html?symbol=CCL" onmouseout="com.dowjones.rolloverQuotes.hidelater();" onmouseover="com.dowjones.rolloverQuotes.show(this,'CCL')">Carnival (CCL)
+ </a>
+ </td>
+ <td align="right" class="num" style="font-weight:bold;">5,661,325</td>
+ <td class="nnum">32.88</td>
+ <td class="nnum">-0.05</td>
+ <td class="nnum" style="border-right:0px">-0.15</td>
+ </tr>
+ <tr>
+ <td class="num">99</td>
+ <td class="text" style="max-width:307px">
+ <a class="linkb" href="/public/quotes/main.html?symbol=MOS" onmouseout="com.dowjones.rolloverQuotes.hidelater();" onmouseover="com.dowjones.rolloverQuotes.show(this,'MOS')">Mosaic (MOS)
+ </a>
+ </td>
+ <td align="right" class="num" style="font-weight:bold;">5,595,592</td>
+ <td class="nnum">43.43</td>
+ <td class="nnum">-0.76</td>
+ <td class="nnum" style="border-right:0px">-1.72</td>
+ </tr>
+ <tr>
+ <td class="num">100</td>
+ <td class="text" style="max-width:307px">
+ <a class="linkb" href="/public/quotes/main.html?symbol=WAG" onmouseout="com.dowjones.rolloverQuotes.hidelater();" onmouseover="com.dowjones.rolloverQuotes.show(this,'WAG')">Walgreen (WAG)
+ </a>
+ </td>
+ <td align="right" class="num" style="font-weight:bold;">5,568,310</td>
+ <td class="nnum">54.51</td>
+ <td class="nnum">-0.22</td>
+ <td class="nnum" style="border-right:0px">-0.40</td>
+ </tr>
+</tbody></table>
+<table bgcolor="" border="0" cellpadding="0" cellspacing="0" width="100%">
+ <tbody><tr><td height="20px"><img alt="" border="0" height="20px" src="/img/b.gif" width="1"/></td></tr>
+</tbody></table>
+<table align="center" bgcolor="#ffffff" border="0" cellpadding="0" cellspacing="0" style="border:1px solid #cfc7b7;margin-bottom:5px;" width="575px">
+ <tbody><tr>
+ <td bgcolor="#e9e7e0" class="b12" colspan="3" style="padding:3px 0px 3px 0px;"><span class="p10" style="color:#000; float:right">An Advertising Feature </span> PARTNER CENTER</td>
+ </tr>
+
+ <tr>
+ <td align="center" class="p10" style="padding:10px 0px 5px 0px;border-right:1px solid #cfc7b7;" valign="top">
+
+
+
+ <script type="text/javascript">
+<!--
+ var tempHTML = '';
+ var adURL = 'http://ad.doubleclick.net/adi/'+((GetCookie('etsFlag'))?'ets.wsj.com':'brokerbuttons.wsj.com')+'/markets_front;!category=;msrc=' + msrc + ';' + segQS + ';' + mc + ';tile=1;sz=170x67;ord=26093260932609326093;';
+ if ( isSafari ) {
+ tempHTML += '<iframe id="mdc_tradingcenter1" src="'+adURL+'" width="170" height="67" marginwidth="0" marginheight="0" hspace="0" vspace="0" frameborder="0" scrolling="no" bordercolor="#000000" style="width:170">';
+ } else {
+ tempHTML += '<iframe id="mdc_tradingcenter1" src="/static_html_files/blank.htm" width="170" height="67" marginwidth="0" marginheight="0" hspace="0" vspace="0" frameborder="0" scrolling="no" bordercolor="#000000" style="width:170px;">';
+ ListOfIframes.mdc_tradingcenter1= adURL;
+ }
+ tempHTML += '<a href="http://ad.doubleclick.net/jump/'+((GetCookie('etsFlag'))?'ets.wsj.com':'brokerbuttons.wsj.com')+'/markets_front;!category=;msrc=' + msrc + ';' + segQS + ';' + mc + ';tile=1;sz=170x67;ord=26093260932609326093;" target="_new">';
+ tempHTML += '<img src="http://ad.doubleclick.net/ad/'+((GetCookie('etsFlag'))?'ets.wsj.com':'brokerbuttons.wsj.com')+'/markets_front;!category=;msrc=' + msrc + ';' + segQS + ';' + mc + ';tile=1;sz=170x67;ord=26093260932609326093;" border="0" width="170" height="67" vspace="0" alt="Advertisement" /></a><br /></iframe>';
+ document.write(tempHTML);
+ // -->
+ </script>
+ </td>
+
+ <td align="center" class="p10" style="padding:10px 0px 5px 0px;border-right:1px solid #cfc7b7;" valign="top">
+
+
+
+ <script type="text/javascript">
+<!--
+ var tempHTML = '';
+ var adURL = 'http://ad.doubleclick.net/adi/'+((GetCookie('etsFlag'))?'ets.wsj.com':'brokerbuttons.wsj.com')+'/markets_front;!category=;msrc=' + msrc + ';' + segQS + ';' + mc + ';tile=2;sz=170x67;ord=26093260932609326093;';
+ if ( isSafari ) {
+ tempHTML += '<iframe id="mdc_tradingcenter2" src="'+adURL+'" width="170" height="67" marginwidth="0" marginheight="0" hspace="0" vspace="0" frameborder="0" scrolling="no" bordercolor="#000000" style="width:170">';
+ } else {
+ tempHTML += '<iframe id="mdc_tradingcenter2" src="/static_html_files/blank.htm" width="170" height="67" marginwidth="0" marginheight="0" hspace="0" vspace="0" frameborder="0" scrolling="no" bordercolor="#000000" style="width:170px;">';
+ ListOfIframes.mdc_tradingcenter2= adURL;
+ }
+ tempHTML += '<a href="http://ad.doubleclick.net/jump/'+((GetCookie('etsFlag'))?'ets.wsj.com':'brokerbuttons.wsj.com')+'/markets_front;!category=;msrc=' + msrc + ';' + segQS + ';' + mc + ';tile=2;sz=170x67;ord=26093260932609326093;" target="_new">';
+ tempHTML += '<img src="http://ad.doubleclick.net/ad/'+((GetCookie('etsFlag'))?'ets.wsj.com':'brokerbuttons.wsj.com')+'/markets_front;!category=;msrc=' + msrc + ';' + segQS + ';' + mc + ';tile=2;sz=170x67;ord=26093260932609326093;" border="0" width="170" height="67" vspace="0" alt="Advertisement" /></a><br /></iframe>';
+ document.write(tempHTML);
+ // -->
+ </script>
+ </td>
+
+ <td align="center" class="p10" style="padding:10px 0px 5px 0px;" valign="top">
+
+
+
+ <script type="text/javascript">
+<!--
+ var tempHTML = '';
+ var adURL = 'http://ad.doubleclick.net/adi/'+((GetCookie('etsFlag'))?'ets.wsj.com':'brokerbuttons.wsj.com')+'/markets_front;!category=;msrc=' + msrc + ';' + segQS + ';' + mc + ';tile=3;sz=170x67;ord=26093260932609326093;';
+ if ( isSafari ) {
+ tempHTML += '<iframe id="mdc_tradingcenter3" src="'+adURL+'" width="170" height="67" marginwidth="0" marginheight="0" hspace="0" vspace="0" frameborder="0" scrolling="no" bordercolor="#000000" style="width:170">';
+ } else {
+ tempHTML += '<iframe id="mdc_tradingcenter3" src="/static_html_files/blank.htm" width="170" height="67" marginwidth="0" marginheight="0" hspace="0" vspace="0" frameborder="0" scrolling="no" bordercolor="#000000" style="width:170px;">';
+ ListOfIframes.mdc_tradingcenter3= adURL;
+ }
+ tempHTML += '<a href="http://ad.doubleclick.net/jump/'+((GetCookie('etsFlag'))?'ets.wsj.com':'brokerbuttons.wsj.com')+'/markets_front;!category=;msrc=' + msrc + ';' + segQS + ';' + mc + ';tile=3;sz=170x67;ord=26093260932609326093;" target="_new">';
+ tempHTML += '<img src="http://ad.doubleclick.net/ad/'+((GetCookie('etsFlag'))?'ets.wsj.com':'brokerbuttons.wsj.com')+'/markets_front;!category=;msrc=' + msrc + ';' + segQS + ';' + mc + ';tile=3;sz=170x67;ord=26093260932609326093;" border="0" width="170" height="67" vspace="0" alt="Advertisement" /></a><br /></iframe>';
+ document.write(tempHTML);
+ // -->
+ </script>
+ </td>
+
+ </tr>
+
+</tbody></table>
+<table bgcolor="" border="0" cellpadding="0" cellspacing="0" width="100%">
+ <tbody><tr><td height="20px"><img alt="" border="0" height="20px" src="/img/b.gif" width="1"/></td></tr>
+</tbody></table>
diff --git a/pandas/io/tests/data/valid_markup.html b/pandas/io/tests/data/valid_markup.html
index 5db90da3baec4..0130e9ed9d5f3 100644
--- a/pandas/io/tests/data/valid_markup.html
+++ b/pandas/io/tests/data/valid_markup.html
@@ -35,35 +35,26 @@
<td>7</td>
<td>0</td>
</tr>
- <tr>
- <th>4</th>
- <td>4</td>
- <td>3</td>
- </tr>
- <tr>
- <th>5</th>
- <td>5</td>
- <td>4</td>
- </tr>
- <tr>
- <th>6</th>
- <td>4</td>
- <td>5</td>
- </tr>
- <tr>
- <th>7</th>
- <td>1</td>
- <td>4</td>
+ </tbody>
+ </table>
+ <table border="1" class="dataframe">
+ <thead>
+ <tr style="text-align: right;">
+ <th></th>
+ <th>a</th>
+ <th>b</th>
</tr>
+ </thead>
+ <tbody>
<tr>
- <th>8</th>
+ <th>0</th>
<td>6</td>
<td>7</td>
</tr>
<tr>
- <th>9</th>
- <td>8</td>
- <td>5</td>
+ <th>1</th>
+ <td>4</td>
+ <td>0</td>
</tr>
</tbody>
</table>
diff --git a/pandas/io/tests/test_html.py b/pandas/io/tests/test_html.py
index 44e4b5cfda7b6..9b0fb1cacfb65 100644
--- a/pandas/io/tests/test_html.py
+++ b/pandas/io/tests/test_html.py
@@ -1,33 +1,31 @@
from __future__ import print_function
+
import os
import re
-from unittest import TestCase
import warnings
+import unittest
+
+try:
+ from importlib import import_module
+except ImportError:
+ import_module = __import__
+
from distutils.version import LooseVersion
-from pandas.io.common import URLError
import nose
-from nose.tools import assert_raises
import numpy as np
from numpy.random import rand
from numpy.testing.decorators import slow
-from pandas.compat import map, zip, StringIO
-import pandas.compat as compat
-
-try:
- from importlib import import_module
-except ImportError:
- import_module = __import__
+from pandas import (DataFrame, MultiIndex, read_csv, Timestamp, Index,
+ date_range, Series)
+from pandas.compat import map, zip, StringIO, string_types
+from pandas.io.common import URLError, urlopen
from pandas.io.html import read_html
-from pandas.io.common import urlopen
-
-from pandas import DataFrame, MultiIndex, read_csv, Timestamp
-from pandas.util.testing import (assert_frame_equal, network,
- get_data_path)
-from pandas.util.testing import makeCustomDataframe as mkdf
+import pandas.util.testing as tm
+from pandas.util.testing import makeCustomDataframe as mkdf, network
def _have_module(module_name):
@@ -40,11 +38,11 @@ def _have_module(module_name):
def _skip_if_no(module_name):
if not _have_module(module_name):
- raise nose.SkipTest("{0} not found".format(module_name))
+ raise nose.SkipTest("{0!r} not found".format(module_name))
def _skip_if_none_of(module_names):
- if isinstance(module_names, compat.string_types):
+ if isinstance(module_names, string_types):
_skip_if_no(module_names)
if module_names == 'bs4':
import bs4
@@ -54,17 +52,14 @@ def _skip_if_none_of(module_names):
not_found = [module_name for module_name in module_names if not
_have_module(module_name)]
if set(not_found) & set(module_names):
- raise nose.SkipTest("{0} not found".format(not_found))
+ raise nose.SkipTest("{0!r} not found".format(not_found))
if 'bs4' in module_names:
import bs4
if bs4.__version__ == LooseVersion('4.2.0'):
raise nose.SkipTest("Bad version of bs4: 4.2.0")
-DATA_PATH = get_data_path()
-
-def isframe(x):
- return isinstance(x, DataFrame)
+DATA_PATH = tm.get_data_path()
def assert_framelist_equal(list1, list2, *args, **kwargs):
@@ -72,10 +67,12 @@ def assert_framelist_equal(list1, list2, *args, **kwargs):
'len(list1) == {0}, '
'len(list2) == {1}'.format(len(list1),
len(list2)))
- assert all(map(lambda x, y: isframe(x) and isframe(y), list1, list2)), \
- 'not all list elements are DataFrames'
+ msg = 'not all list elements are DataFrames'
+ both_frames = all(map(lambda x, y: isinstance(x, DataFrame) and
+ isinstance(y, DataFrame), list1, list2))
+ assert both_frames, msg
for frame_i, frame_j in zip(list1, list2):
- assert_frame_equal(frame_i, frame_j, *args, **kwargs)
+ tm.assert_frame_equal(frame_i, frame_j, *args, **kwargs)
assert not frame_i.empty, 'frames are both empty'
@@ -83,13 +80,13 @@ def test_bs4_version_fails():
_skip_if_none_of(('bs4', 'html5lib'))
import bs4
if bs4.__version__ == LooseVersion('4.2.0'):
- assert_raises(AssertionError, read_html, os.path.join(DATA_PATH,
- "spam.html"),
- flavor='bs4')
+ tm.assert_raises(AssertionError, read_html, os.path.join(DATA_PATH,
+ "spam.html"),
+ flavor='bs4')
-class TestReadHtmlBase(TestCase):
- def run_read_html(self, *args, **kwargs):
+class TestReadHtml(unittest.TestCase):
+ def read_html(self, *args, **kwargs):
kwargs['flavor'] = kwargs.get('flavor', self.flavor)
return read_html(*args, **kwargs)
@@ -112,18 +109,16 @@ def test_to_html_compat(self):
df = mkdf(4, 3, data_gen_f=lambda *args: rand(), c_idx_names=False,
r_idx_names=False).applymap('{0:.3f}'.format).astype(float)
out = df.to_html()
- res = self.run_read_html(out, attrs={'class': 'dataframe'},
+ res = self.read_html(out, attrs={'class': 'dataframe'},
index_col=0)[0]
- print(df.dtypes)
- print(res.dtypes)
- assert_frame_equal(res, df)
+ tm.assert_frame_equal(res, df)
@network
def test_banklist_url(self):
url = 'http://www.fdic.gov/bank/individual/failed/banklist.html'
- df1 = self.run_read_html(url, 'First Federal Bank of Florida',
+ df1 = self.read_html(url, 'First Federal Bank of Florida',
attrs={"id": 'table'})
- df2 = self.run_read_html(url, 'Metcalf Bank', attrs={'id': 'table'})
+ df2 = self.read_html(url, 'Metcalf Bank', attrs={'id': 'table'})
assert_framelist_equal(df1, df2)
@@ -131,133 +126,148 @@ def test_banklist_url(self):
def test_spam_url(self):
url = ('http://ndb.nal.usda.gov/ndb/foods/show/1732?fg=&man=&'
'lfacet=&format=&count=&max=25&offset=&sort=&qlookup=spam')
- df1 = self.run_read_html(url, '.*Water.*')
- df2 = self.run_read_html(url, 'Unit')
+ df1 = self.read_html(url, '.*Water.*')
+ df2 = self.read_html(url, 'Unit')
assert_framelist_equal(df1, df2)
@slow
def test_banklist(self):
- df1 = self.run_read_html(self.banklist_data, '.*Florida.*',
+ df1 = self.read_html(self.banklist_data, '.*Florida.*',
attrs={'id': 'table'})
- df2 = self.run_read_html(self.banklist_data, 'Metcalf Bank',
+ df2 = self.read_html(self.banklist_data, 'Metcalf Bank',
attrs={'id': 'table'})
assert_framelist_equal(df1, df2)
- def test_spam(self):
- df1 = self.run_read_html(self.spam_data, '.*Water.*',
- infer_types=False)
- df2 = self.run_read_html(self.spam_data, 'Unit', infer_types=False)
+ def test_spam_no_types(self):
+ with tm.assert_produces_warning(FutureWarning):
+ df1 = self.read_html(self.spam_data, '.*Water.*',
+ infer_types=False)
+ with tm.assert_produces_warning(FutureWarning):
+ df2 = self.read_html(self.spam_data, 'Unit', infer_types=False)
assert_framelist_equal(df1, df2)
- print(df1[0])
+
+ self.assertEqual(df1[0].ix[0, 0], 'Proximates')
+ self.assertEqual(df1[0].columns[0], 'Nutrient')
+
+ def test_spam_with_types(self):
+ df1 = self.read_html(self.spam_data, '.*Water.*')
+ df2 = self.read_html(self.spam_data, 'Unit')
+ assert_framelist_equal(df1, df2)
self.assertEqual(df1[0].ix[0, 0], 'Proximates')
self.assertEqual(df1[0].columns[0], 'Nutrient')
def test_spam_no_match(self):
- dfs = self.run_read_html(self.spam_data)
+ dfs = self.read_html(self.spam_data)
for df in dfs:
- self.assert_(isinstance(df, DataFrame))
+ tm.assert_isinstance(df, DataFrame)
def test_banklist_no_match(self):
- dfs = self.run_read_html(self.banklist_data, attrs={'id': 'table'})
+ dfs = self.read_html(self.banklist_data, attrs={'id': 'table'})
for df in dfs:
- self.assert_(isinstance(df, DataFrame))
+ tm.assert_isinstance(df, DataFrame)
def test_spam_header(self):
- df = self.run_read_html(self.spam_data, '.*Water.*', header=0)
- df = self.run_read_html(self.spam_data, '.*Water.*', header=1)[0]
- self.assertEqual(df.columns[0], 'Water')
+ df = self.read_html(self.spam_data, '.*Water.*', header=1)[0]
+ self.assertEqual(df.columns[0], 'Proximates')
self.assertFalse(df.empty)
def test_skiprows_int(self):
- df1 = self.run_read_html(self.spam_data, '.*Water.*', skiprows=1)
- df2 = self.run_read_html(self.spam_data, 'Unit', skiprows=1)
+ df1 = self.read_html(self.spam_data, '.*Water.*', skiprows=1)
+ df2 = self.read_html(self.spam_data, 'Unit', skiprows=1)
assert_framelist_equal(df1, df2)
def test_skiprows_xrange(self):
- df1 = [self.run_read_html(self.spam_data, '.*Water.*').pop()[2:]]
- df2 = self.run_read_html(self.spam_data, 'Unit', skiprows=range(2))
-
- assert_framelist_equal(df1, df2)
+ df1 = self.read_html(self.spam_data, '.*Water.*',
+ skiprows=range(2))[0]
+ df2 = self.read_html(self.spam_data, 'Unit', skiprows=range(2))[0]
+ tm.assert_frame_equal(df1, df2)
def test_skiprows_list(self):
- df1 = self.run_read_html(self.spam_data, '.*Water.*', skiprows=[1, 2])
- df2 = self.run_read_html(self.spam_data, 'Unit', skiprows=[2, 1])
+ df1 = self.read_html(self.spam_data, '.*Water.*', skiprows=[1, 2])
+ df2 = self.read_html(self.spam_data, 'Unit', skiprows=[2, 1])
assert_framelist_equal(df1, df2)
def test_skiprows_set(self):
- df1 = self.run_read_html(self.spam_data, '.*Water.*',
+ df1 = self.read_html(self.spam_data, '.*Water.*',
skiprows=set([1, 2]))
- df2 = self.run_read_html(self.spam_data, 'Unit', skiprows=set([2, 1]))
+ df2 = self.read_html(self.spam_data, 'Unit', skiprows=set([2, 1]))
assert_framelist_equal(df1, df2)
def test_skiprows_slice(self):
- df1 = self.run_read_html(self.spam_data, '.*Water.*', skiprows=1)
- df2 = self.run_read_html(self.spam_data, 'Unit', skiprows=1)
+ df1 = self.read_html(self.spam_data, '.*Water.*', skiprows=1)
+ df2 = self.read_html(self.spam_data, 'Unit', skiprows=1)
assert_framelist_equal(df1, df2)
def test_skiprows_slice_short(self):
- df1 = self.run_read_html(self.spam_data, '.*Water.*',
+ df1 = self.read_html(self.spam_data, '.*Water.*',
skiprows=slice(2))
- df2 = self.run_read_html(self.spam_data, 'Unit', skiprows=slice(2))
+ df2 = self.read_html(self.spam_data, 'Unit', skiprows=slice(2))
assert_framelist_equal(df1, df2)
def test_skiprows_slice_long(self):
- df1 = self.run_read_html(self.spam_data, '.*Water.*',
+ df1 = self.read_html(self.spam_data, '.*Water.*',
skiprows=slice(2, 5))
- df2 = self.run_read_html(self.spam_data, 'Unit',
+ df2 = self.read_html(self.spam_data, 'Unit',
skiprows=slice(4, 1, -1))
assert_framelist_equal(df1, df2)
def test_skiprows_ndarray(self):
- df1 = self.run_read_html(self.spam_data, '.*Water.*',
+ df1 = self.read_html(self.spam_data, '.*Water.*',
skiprows=np.arange(2))
- df2 = self.run_read_html(self.spam_data, 'Unit', skiprows=np.arange(2))
+ df2 = self.read_html(self.spam_data, 'Unit', skiprows=np.arange(2))
assert_framelist_equal(df1, df2)
def test_skiprows_invalid(self):
- self.assertRaises(ValueError, self.run_read_html, self.spam_data,
- '.*Water.*', skiprows='asdf')
+ with tm.assertRaisesRegexp(TypeError,
+ 'is not a valid type for skipping rows'):
+ self.read_html(self.spam_data, '.*Water.*', skiprows='asdf')
def test_index(self):
- df1 = self.run_read_html(self.spam_data, '.*Water.*', index_col=0)
- df2 = self.run_read_html(self.spam_data, 'Unit', index_col=0)
+ df1 = self.read_html(self.spam_data, '.*Water.*', index_col=0)
+ df2 = self.read_html(self.spam_data, 'Unit', index_col=0)
assert_framelist_equal(df1, df2)
def test_header_and_index_no_types(self):
- df1 = self.run_read_html(self.spam_data, '.*Water.*', header=1,
- index_col=0, infer_types=False)
- df2 = self.run_read_html(self.spam_data, 'Unit', header=1, index_col=0,
- infer_types=False)
+ with tm.assert_produces_warning(FutureWarning):
+ df1 = self.read_html(self.spam_data, '.*Water.*', header=1,
+ index_col=0, infer_types=False)
+ with tm.assert_produces_warning(FutureWarning):
+ df2 = self.read_html(self.spam_data, 'Unit', header=1,
+ index_col=0, infer_types=False)
assert_framelist_equal(df1, df2)
def test_header_and_index_with_types(self):
- df1 = self.run_read_html(self.spam_data, '.*Water.*', header=1,
+ df1 = self.read_html(self.spam_data, '.*Water.*', header=1,
index_col=0)
- df2 = self.run_read_html(self.spam_data, 'Unit', header=1, index_col=0)
+ df2 = self.read_html(self.spam_data, 'Unit', header=1, index_col=0)
assert_framelist_equal(df1, df2)
def test_infer_types(self):
- df1 = self.run_read_html(self.spam_data, '.*Water.*', index_col=0,
- infer_types=False)
- df2 = self.run_read_html(self.spam_data, 'Unit', index_col=0,
- infer_types=False)
+ with tm.assert_produces_warning(FutureWarning):
+ df1 = self.read_html(self.spam_data, '.*Water.*', index_col=0,
+ infer_types=False)
+ with tm.assert_produces_warning(FutureWarning):
+ df2 = self.read_html(self.spam_data, 'Unit', index_col=0,
+ infer_types=False)
assert_framelist_equal(df1, df2)
- df2 = self.run_read_html(self.spam_data, 'Unit', index_col=0,
- infer_types=True)
+ with tm.assert_produces_warning(FutureWarning):
+ df2 = self.read_html(self.spam_data, 'Unit', index_col=0,
+ infer_types=True)
- self.assertRaises(AssertionError, assert_framelist_equal, df1, df2)
+ with tm.assertRaises(AssertionError):
+ assert_framelist_equal(df1, df2)
def test_string_io(self):
with open(self.spam_data) as f:
@@ -266,129 +276,197 @@ def test_string_io(self):
with open(self.spam_data) as f:
data2 = StringIO(f.read())
- df1 = self.run_read_html(data1, '.*Water.*', infer_types=False)
- df2 = self.run_read_html(data2, 'Unit', infer_types=False)
+ df1 = self.read_html(data1, '.*Water.*')
+ df2 = self.read_html(data2, 'Unit')
assert_framelist_equal(df1, df2)
def test_string(self):
with open(self.spam_data) as f:
data = f.read()
- df1 = self.run_read_html(data, '.*Water.*', infer_types=False)
- df2 = self.run_read_html(data, 'Unit', infer_types=False)
+ df1 = self.read_html(data, '.*Water.*')
+ df2 = self.read_html(data, 'Unit')
assert_framelist_equal(df1, df2)
def test_file_like(self):
with open(self.spam_data) as f:
- df1 = self.run_read_html(f, '.*Water.*', infer_types=False)
+ df1 = self.read_html(f, '.*Water.*')
with open(self.spam_data) as f:
- df2 = self.run_read_html(f, 'Unit', infer_types=False)
+ df2 = self.read_html(f, 'Unit')
assert_framelist_equal(df1, df2)
@network
def test_bad_url_protocol(self):
- self.assertRaises(URLError, self.run_read_html,
- 'git://github.com', '.*Water.*')
+ with tm.assertRaises(URLError):
+ self.read_html('git://github.com', match='.*Water.*')
@network
def test_invalid_url(self):
- self.assertRaises(URLError, self.run_read_html,
- 'http://www.a23950sdfa908sd.com')
+ with tm.assertRaises(URLError):
+ self.read_html('http://www.a23950sdfa908sd.com', match='.*Water.*')
@slow
def test_file_url(self):
url = self.banklist_data
- dfs = self.run_read_html('file://' + url, 'First',
- attrs={'id': 'table'})
- self.assert_(isinstance(dfs, list))
+ dfs = self.read_html('file://' + url, 'First', attrs={'id': 'table'})
+ tm.assert_isinstance(dfs, list)
for df in dfs:
- self.assert_(isinstance(df, DataFrame))
+ tm.assert_isinstance(df, DataFrame)
@slow
def test_invalid_table_attrs(self):
url = self.banklist_data
- self.assertRaises(AssertionError, self.run_read_html, url,
- 'First Federal Bank of Florida',
- attrs={'id': 'tasdfable'})
+ with tm.assertRaisesRegexp(ValueError, 'No tables found'):
+ self.read_html(url, 'First Federal Bank of Florida',
+ attrs={'id': 'tasdfable'})
def _bank_data(self, *args, **kwargs):
- return self.run_read_html(self.banklist_data, 'Metcalf',
- attrs={'id': 'table'}, *args, **kwargs)
+ return self.read_html(self.banklist_data, 'Metcalf',
+ attrs={'id': 'table'}, *args, **kwargs)
@slow
def test_multiindex_header(self):
df = self._bank_data(header=[0, 1])[0]
- self.assert_(isinstance(df.columns, MultiIndex))
+ tm.assert_isinstance(df.columns, MultiIndex)
@slow
def test_multiindex_index(self):
df = self._bank_data(index_col=[0, 1])[0]
- self.assert_(isinstance(df.index, MultiIndex))
+ tm.assert_isinstance(df.index, MultiIndex)
@slow
def test_multiindex_header_index(self):
df = self._bank_data(header=[0, 1], index_col=[0, 1])[0]
- self.assert_(isinstance(df.columns, MultiIndex))
- self.assert_(isinstance(df.index, MultiIndex))
+ tm.assert_isinstance(df.columns, MultiIndex)
+ tm.assert_isinstance(df.index, MultiIndex)
+
+ @slow
+ def test_multiindex_header_skiprows_tuples(self):
+ df = self._bank_data(header=[0, 1], skiprows=1, tupleize_cols=True)[0]
+ tm.assert_isinstance(df.columns, Index)
@slow
def test_multiindex_header_skiprows(self):
df = self._bank_data(header=[0, 1], skiprows=1)[0]
- self.assert_(isinstance(df.columns, MultiIndex))
+ tm.assert_isinstance(df.columns, MultiIndex)
@slow
def test_multiindex_header_index_skiprows(self):
df = self._bank_data(header=[0, 1], index_col=[0, 1], skiprows=1)[0]
- self.assert_(isinstance(df.index, MultiIndex))
+ tm.assert_isinstance(df.index, MultiIndex)
+ tm.assert_isinstance(df.columns, MultiIndex)
@slow
def test_regex_idempotency(self):
url = self.banklist_data
- dfs = self.run_read_html('file://' + url,
+ dfs = self.read_html('file://' + url,
match=re.compile(re.compile('Florida')),
attrs={'id': 'table'})
- self.assert_(isinstance(dfs, list))
+ tm.assert_isinstance(dfs, list)
for df in dfs:
- self.assert_(isinstance(df, DataFrame))
-
- def test_negative_skiprows_spam(self):
- url = self.spam_data
- self.assertRaises(AssertionError, self.run_read_html, url, 'Water',
- skiprows=-1)
+ tm.assert_isinstance(df, DataFrame)
- def test_negative_skiprows_banklist(self):
- url = self.banklist_data
- self.assertRaises(AssertionError, self.run_read_html, url, 'Florida',
- skiprows=-1)
+ def test_negative_skiprows(self):
+ with tm.assertRaisesRegexp(ValueError,
+ '\(you passed a negative value\)'):
+ self.read_html(self.spam_data, 'Water', skiprows=-1)
@network
def test_multiple_matches(self):
url = 'http://code.google.com/p/pythonxy/wiki/StandardPlugins'
- dfs = self.run_read_html(url, match='Python',
+ dfs = self.read_html(url, match='Python',
attrs={'class': 'wikitable'})
self.assert_(len(dfs) > 1)
@network
def test_pythonxy_plugins_table(self):
url = 'http://code.google.com/p/pythonxy/wiki/StandardPlugins'
- dfs = self.run_read_html(url, match='Python',
+ dfs = self.read_html(url, match='Python',
attrs={'class': 'wikitable'})
zz = [df.iloc[0, 0] for df in dfs]
self.assertEqual(sorted(zz), sorted(['Python', 'SciTE']))
+ @slow
+ def test_thousands_macau_stats(self):
+ all_non_nan_table_index = -2
+ macau_data = os.path.join(DATA_PATH, 'macau.html')
+ dfs = self.read_html(macau_data, index_col=0,
+ attrs={'class': 'style1'})
+ df = dfs[all_non_nan_table_index]
+
+ self.assertFalse(any(s.isnull().any() for _, s in df.iteritems()))
+
+ @slow
+ def test_thousands_macau_index_col(self):
+ all_non_nan_table_index = -2
+ macau_data = os.path.join(DATA_PATH, 'macau.html')
+ dfs = self.read_html(macau_data, index_col=0, header=0)
+ df = dfs[all_non_nan_table_index]
+
+ self.assertFalse(any(s.isnull().any() for _, s in df.iteritems()))
+
+ def test_countries_municipalities(self):
+ # GH5048
+ data1 = StringIO('''<table>
+ <thead>
+ <tr>
+ <th>Country</th>
+ <th>Municipality</th>
+ <th>Year</th>
+ </tr>
+ </thead>
+ <tbody>
+ <tr>
+ <td>Ukraine</td>
+ <th>Odessa</th>
+ <td>1944</td>
+ </tr>
+ </tbody>
+ </table>''')
+ data2 = StringIO('''
+ <table>
+ <tbody>
+ <tr>
+ <th>Country</th>
+ <th>Municipality</th>
+ <th>Year</th>
+ </tr>
+ <tr>
+ <td>Ukraine</td>
+ <th>Odessa</th>
+ <td>1944</td>
+ </tr>
+ </tbody>
+ </table>''')
+ res1 = self.read_html(data1)
+ res2 = self.read_html(data2, header=0)
+ assert_framelist_equal(res1, res2)
+
+ def test_nyse_wsj_commas_table(self):
+ data = os.path.join(DATA_PATH, 'nyse_wsj.html')
+ df = self.read_html(data, index_col=0, header=0,
+ attrs={'class': 'mdcTable'})[0]
+
+ columns = Index(['Issue(Roll over for charts and headlines)',
+ 'Volume', 'Price', 'Chg', '% Chg'])
+ nrows = 100
+ self.assertEqual(df.shape[0], nrows)
+ self.assertTrue(df.columns.equals(columns))
+
@slow
def test_banklist_header(self):
from pandas.io.html import _remove_whitespace
+
def try_remove_ws(x):
try:
return _remove_whitespace(x)
except AttributeError:
return x
- df = self.run_read_html(self.banklist_data, 'Metcalf',
+ df = self.read_html(self.banklist_data, 'Metcalf',
attrs={'id': 'table'})[0]
ground_truth = read_csv(os.path.join(DATA_PATH, 'banklist.csv'),
converters={'Updated Date': Timestamp,
@@ -412,8 +490,8 @@ def try_remove_ws(x):
dfnew = df.applymap(try_remove_ws).replace(old, new)
gtnew = ground_truth.applymap(try_remove_ws)
converted = dfnew.convert_objects(convert_numeric=True)
- assert_frame_equal(converted.convert_objects(convert_dates='coerce'),
- gtnew)
+ tm.assert_frame_equal(converted.convert_objects(convert_dates='coerce'),
+ gtnew)
@slow
def test_gold_canyon(self):
@@ -422,13 +500,93 @@ def test_gold_canyon(self):
raw_text = f.read()
self.assert_(gc in raw_text)
- df = self.run_read_html(self.banklist_data, 'Gold Canyon',
- attrs={'id': 'table'}, infer_types=False)[0]
+ df = self.read_html(self.banklist_data, 'Gold Canyon',
+ attrs={'id': 'table'})[0]
self.assert_(gc in df.to_string())
+ def test_different_number_of_rows(self):
+ expected = """<table border="1" class="dataframe">
+ <thead>
+ <tr style="text-align: right;">
+ <th></th>
+ <th>C_l0_g0</th>
+ <th>C_l0_g1</th>
+ <th>C_l0_g2</th>
+ <th>C_l0_g3</th>
+ <th>C_l0_g4</th>
+ </tr>
+ </thead>
+ <tbody>
+ <tr>
+ <th>R_l0_g0</th>
+ <td> 0.763</td>
+ <td> 0.233</td>
+ <td> nan</td>
+ <td> nan</td>
+ <td> nan</td>
+ </tr>
+ <tr>
+ <th>R_l0_g1</th>
+ <td> 0.244</td>
+ <td> 0.285</td>
+ <td> 0.392</td>
+ <td> 0.137</td>
+ <td> 0.222</td>
+ </tr>
+ </tbody>
+ </table>"""
+ out = """<table border="1" class="dataframe">
+ <thead>
+ <tr style="text-align: right;">
+ <th></th>
+ <th>C_l0_g0</th>
+ <th>C_l0_g1</th>
+ <th>C_l0_g2</th>
+ <th>C_l0_g3</th>
+ <th>C_l0_g4</th>
+ </tr>
+ </thead>
+ <tbody>
+ <tr>
+ <th>R_l0_g0</th>
+ <td> 0.763</td>
+ <td> 0.233</td>
+ </tr>
+ <tr>
+ <th>R_l0_g1</th>
+ <td> 0.244</td>
+ <td> 0.285</td>
+ <td> 0.392</td>
+ <td> 0.137</td>
+ <td> 0.222</td>
+ </tr>
+ </tbody>
+ </table>"""
+ expected = self.read_html(expected, index_col=0)[0]
+ res = self.read_html(out, index_col=0)[0]
+ tm.assert_frame_equal(expected, res)
+
+ def test_parse_dates_list(self):
+ df = DataFrame({'date': date_range('1/1/2001', periods=10)})
+ expected = df.to_html()
+ res = read_html(expected, parse_dates=[0], index_col=0)
+ tm.assert_frame_equal(df, res[0])
+
+ def test_parse_dates_combine(self):
+ raw_dates = Series(date_range('1/1/2001', periods=10))
+ df = DataFrame({'date': raw_dates.map(lambda x: str(x.date())),
+ 'time': raw_dates.map(lambda x: str(x.time()))})
+ res = read_html(df.to_html(), parse_dates={'datetime': [1, 2]},
+ index_col=1)
+ newdf = DataFrame({'datetime': raw_dates})
+ tm.assert_frame_equal(newdf, res[0])
+
+
+class TestReadHtmlLxml(unittest.TestCase):
+ def setUp(self):
+ self.try_skip()
-class TestReadHtmlLxml(TestCase):
- def run_read_html(self, *args, **kwargs):
+ def read_html(self, *args, **kwargs):
self.flavor = ['lxml']
self.try_skip()
kwargs['flavor'] = kwargs.get('flavor', self.flavor)
@@ -437,31 +595,28 @@ def run_read_html(self, *args, **kwargs):
def try_skip(self):
_skip_if_no('lxml')
- def test_spam_data_fail(self):
+ def test_data_fail(self):
from lxml.etree import XMLSyntaxError
spam_data = os.path.join(DATA_PATH, 'spam.html')
- self.assertRaises(XMLSyntaxError, self.run_read_html, spam_data,
- flavor=['lxml'])
-
- def test_banklist_data_fail(self):
- from lxml.etree import XMLSyntaxError
banklist_data = os.path.join(DATA_PATH, 'banklist.html')
- self.assertRaises(XMLSyntaxError, self.run_read_html, banklist_data, flavor=['lxml'])
+
+ with tm.assertRaises(XMLSyntaxError):
+ self.read_html(spam_data, flavor=['lxml'])
+
+ with tm.assertRaises(XMLSyntaxError):
+ self.read_html(banklist_data, flavor=['lxml'])
def test_works_on_valid_markup(self):
filename = os.path.join(DATA_PATH, 'valid_markup.html')
- dfs = self.run_read_html(filename, index_col=0, flavor=['lxml'])
- self.assert_(isinstance(dfs, list))
- self.assert_(isinstance(dfs[0], DataFrame))
-
- def setUp(self):
- self.try_skip()
+ dfs = self.read_html(filename, index_col=0, flavor=['lxml'])
+ tm.assert_isinstance(dfs, list)
+ tm.assert_isinstance(dfs[0], DataFrame)
@slow
def test_fallback_success(self):
_skip_if_none_of(('bs4', 'html5lib'))
banklist_data = os.path.join(DATA_PATH, 'banklist.html')
- self.run_read_html(banklist_data, '.*Water.*', flavor=['lxml',
+ self.read_html(banklist_data, '.*Water.*', flavor=['lxml',
'html5lib'])
@@ -505,3 +660,11 @@ def test_lxml_finds_tables():
def test_lxml_finds_tbody():
filepath = os.path.join(DATA_PATH, "spam.html")
assert get_lxml_elements(filepath, 'tbody')
+
+
+def test_same_ordering():
+ _skip_if_none_of(['bs4', 'lxml', 'html5lib'])
+ filename = os.path.join(DATA_PATH, 'valid_markup.html')
+ dfs_lxml = read_html(filename, index_col=0, flavor=['lxml'])
+ dfs_bs4 = read_html(filename, index_col=0, flavor=['bs4'])
+ assert_framelist_equal(dfs_lxml, dfs_bs4)
| closes #4697 (refactor issue) (REF/ENH)
closes #4700 (header inconsistency issue) (API)
closes #5029 (comma issue, added this data set, ordering issue) (BUG)
closes #5048 (header type conversion issue) (BUG)
closes #5066 (index_col issue) (BUG)
- [x] figure out `skiprows`, `header`, and `index_col` interaction (a somewhat longstanding `MultiIndex` sorting issue, I just took the long way to get there :))
- ~~spam url not working anymore~~ (the US gov "shutdown" is responsible for this; the test correctly skips)
- ~~table ordering doc blurb/HTML gotchas~~ (was an actual "bug", now fixed in this PR)
- ~~add tests for rows with a different length~~ (this is already done by the existing tests)
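
The `parse_dates={'datetime': [1, 2]}` test added in this PR combines a date column and a time column into a single timestamp column. A stdlib-only sketch of that combining step (the function name and the fixed format are illustrative, not pandas internals):

```python
from datetime import datetime

def combine_date_time(date_s, time_s):
    # Join the two string columns and parse the result as one timestamp --
    # the effect that parse_dates={'datetime': [1, 2]} asks read_html for.
    return datetime.strptime(date_s + ' ' + time_s, '%Y-%m-%d %H:%M:%S')
```

read_html itself delegates this to the shared parser machinery; the sketch only shows the combine semantics exercised by `test_parse_dates_combine`.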
| https://api.github.com/repos/pandas-dev/pandas/pulls/4770 | 2013-09-07T04:14:16Z | 2013-10-03T02:26:05Z | 2013-10-03T02:26:05Z | 2015-08-23T12:59:28Z |
TST: more robust testing for HDFStore dups | diff --git a/pandas/io/pytables.py b/pandas/io/pytables.py
index bcf2345913f1e..0a9e6855f094a 100644
--- a/pandas/io/pytables.py
+++ b/pandas/io/pytables.py
@@ -667,7 +667,7 @@ def func(_start, _stop):
axis = list(set([t.non_index_axes[0][0] for t in tbls]))[0]
# concat and return
- return concat(objs, axis=axis, verify_integrity=True)
+ return concat(objs, axis=axis, verify_integrity=True).consolidate()
if iterator or chunksize is not None:
return TableIterator(self, func, nrows=nrows, start=start, stop=stop, chunksize=chunksize, auto_close=auto_close)
@@ -2910,9 +2910,7 @@ def create_axes(self, axes, obj, validate=True, nan_rep=None, data_columns=None,
# reindex by our non_index_axes & compute data_columns
for a in self.non_index_axes:
- labels = _ensure_index(a[1])
- if not labels.equals(obj._get_axis(a[0])):
- obj = obj.reindex_axis(labels, axis=a[0])
+ obj = _reindex_axis(obj, a[0], a[1])
# figure out data_columns and get out blocks
block_obj = self.get_object(obj).consolidate()
@@ -3000,11 +2998,7 @@ def process_axes(self, obj, columns=None):
# reorder by any non_index_axes & limit to the select columns
for axis, labels in self.non_index_axes:
- if columns is not None:
- labels = Index(labels) & Index(columns)
- labels = _ensure_index(labels)
- if not labels.equals(obj._get_axis(axis)):
- obj = obj.reindex_axis(labels, axis=axis)
+ obj = _reindex_axis(obj, axis, labels, columns)
# apply the selection filters (but keep in the same order)
if self.selection.filter:
@@ -3219,7 +3213,7 @@ def read(self, where=None, columns=None, **kwargs):
if len(objs) == 1:
wp = objs[0]
else:
- wp = concat(objs, axis=0, verify_integrity=False)
+ wp = concat(objs, axis=0, verify_integrity=False).consolidate()
# apply the selection filters & axis orderings
wp = self.process_axes(wp, columns=columns)
@@ -3510,7 +3504,7 @@ def read(self, where=None, columns=None, **kwargs):
if len(frames) == 1:
df = frames[0]
else:
- df = concat(frames, axis=1, verify_integrity=False)
+ df = concat(frames, axis=1, verify_integrity=False).consolidate()
# apply the selection filters & axis orderings
df = self.process_axes(df, columns=columns)
@@ -3683,6 +3677,26 @@ class AppendableNDimTable(AppendablePanelTable):
obj_type = Panel4D
+def _reindex_axis(obj, axis, labels, other=None):
+ ax = obj._get_axis(axis)
+ labels = _ensure_index(labels)
+
+ # try not to reindex even if other is provided
+ # if it equals our current index
+ if other is not None:
+ other = _ensure_index(other)
+ if (other is None or labels.equals(other)) and labels.equals(ax):
+ return obj
+
+ labels = _ensure_index(labels.unique())
+ if other is not None:
+ labels = labels & _ensure_index(other.unique())
+ if not labels.equals(ax):
+ slicer = [ slice(None, None) ] * obj.ndim
+ slicer[axis] = labels
+ obj = obj.loc[tuple(slicer)]
+ return obj
+
def _get_info(info, name):
""" get/create the info for this name """
try:
diff --git a/pandas/io/tests/test_pytables.py b/pandas/io/tests/test_pytables.py
index 2ef4a9287a664..e9f4cf7d0f96f 100644
--- a/pandas/io/tests/test_pytables.py
+++ b/pandas/io/tests/test_pytables.py
@@ -2298,15 +2298,24 @@ def test_wide_table(self):
def test_select_with_dups(self):
-
# single dtypes
df = DataFrame(np.random.randn(10,4),columns=['A','A','B','B'])
df.index = date_range('20130101 9:30',periods=10,freq='T')
with ensure_clean(self.path) as store:
store.append('df',df)
+
result = store.select('df')
- assert_frame_equal(result,df)
+ expected = df
+ assert_frame_equal(result,expected,by_blocks=True)
+
+ result = store.select('df',columns=df.columns)
+ expected = df
+ assert_frame_equal(result,expected,by_blocks=True)
+
+ result = store.select('df',columns=['A'])
+ expected = df.loc[:,['A']]
+ assert_frame_equal(result,expected)
 # dups across dtypes
df = concat([DataFrame(np.random.randn(10,4),columns=['A','A','B','B']),
@@ -2316,8 +2325,22 @@ def test_select_with_dups(self):
with ensure_clean(self.path) as store:
store.append('df',df)
+
result = store.select('df')
- assert_frame_equal(result,df)
+ expected = df
+ assert_frame_equal(result,expected,by_blocks=True)
+
+ result = store.select('df',columns=df.columns)
+ expected = df
+ assert_frame_equal(result,expected,by_blocks=True)
+
+ expected = df.loc[:,['A']]
+ result = store.select('df',columns=['A'])
+ assert_frame_equal(result,expected,by_blocks=True)
+
+ expected = df.loc[:,['B','A']]
+ result = store.select('df',columns=['B','A'])
+ assert_frame_equal(result,expected,by_blocks=True)
def test_wide_table_dups(self):
wp = tm.makePanel()
diff --git a/pandas/util/testing.py b/pandas/util/testing.py
index c652c2da3214c..abc13fb2ad9ee 100644
--- a/pandas/util/testing.py
+++ b/pandas/util/testing.py
@@ -258,27 +258,41 @@ def assert_frame_equal(left, right, check_dtype=True,
check_column_type=False,
check_frame_type=False,
check_less_precise=False,
- check_names=True):
+ check_names=True,
+ by_blocks=False):
if check_frame_type:
assert_isinstance(left, type(right))
assert_isinstance(left, DataFrame)
assert_isinstance(right, DataFrame)
if check_less_precise:
- assert_almost_equal(left.columns, right.columns)
+ if not by_blocks:
+ assert_almost_equal(left.columns, right.columns)
assert_almost_equal(left.index, right.index)
else:
- assert_index_equal(left.columns, right.columns)
+ if not by_blocks:
+ assert_index_equal(left.columns, right.columns)
assert_index_equal(left.index, right.index)
- for i, col in enumerate(left.columns):
- assert col in right
- lcol = left.icol(i)
- rcol = right.icol(i)
- assert_series_equal(lcol, rcol,
- check_dtype=check_dtype,
- check_index_type=check_index_type,
- check_less_precise=check_less_precise)
+ # compare by blocks
+ if by_blocks:
+ rblocks = right.blocks
+ lblocks = left.blocks
+ for dtype in list(set(list(lblocks.keys()) + list(rblocks.keys()))):
+ assert dtype in lblocks
+ assert dtype in rblocks
+ assert_frame_equal(lblocks[dtype],rblocks[dtype],check_dtype=check_dtype)
+
+ # compare by columns
+ else:
+ for i, col in enumerate(left.columns):
+ assert col in right
+ lcol = left.icol(i)
+ rcol = right.icol(i)
+ assert_series_equal(lcol, rcol,
+ check_dtype=check_dtype,
+ check_index_type=check_index_type,
+ check_less_precise=check_less_precise)
if check_index_type:
assert_isinstance(left.index, type(right.index))
| https://api.github.com/repos/pandas-dev/pandas/pulls/4769 | 2013-09-07T02:25:23Z | 2013-09-07T02:35:28Z | 2013-09-07T02:35:28Z | 2014-07-16T08:26:59Z | |
BUG: reading from a store with duplicate columns across dtypes would raise (GH4767) | diff --git a/doc/source/release.rst b/doc/source/release.rst
index 9a34cdbdfb5a8..01c2b39fbb97b 100644
--- a/doc/source/release.rst
+++ b/doc/source/release.rst
@@ -237,6 +237,7 @@ See :ref:`Internal Refactoring<whatsnew_0130.refactoring>`
- ``read_hdf`` was not respecting as passed ``mode`` (:issue:`4504`)
- appending a 0-len table will work correctly (:issue:`4273`)
- ``to_hdf`` was raising when passing both arguments ``append`` and ``table`` (:issue:`4584`)
+ - reading from a store with duplicate columns across dtypes would raise (:issue:`4767`)
- Fixed bug in tslib.tz_convert(vals, tz1, tz2): it could raise IndexError exception while
trying to access trans[pos + 1] (:issue:`4496`)
- The ``by`` argument now works correctly with the ``layout`` argument
diff --git a/pandas/io/pytables.py b/pandas/io/pytables.py
index 5ab63d016c3b8..bcf2345913f1e 100644
--- a/pandas/io/pytables.py
+++ b/pandas/io/pytables.py
@@ -3219,7 +3219,7 @@ def read(self, where=None, columns=None, **kwargs):
if len(objs) == 1:
wp = objs[0]
else:
- wp = concat(objs, axis=0, verify_integrity=True)
+ wp = concat(objs, axis=0, verify_integrity=False)
# apply the selection filters & axis orderings
wp = self.process_axes(wp, columns=columns)
@@ -3510,7 +3510,7 @@ def read(self, where=None, columns=None, **kwargs):
if len(frames) == 1:
df = frames[0]
else:
- df = concat(frames, axis=1, verify_integrity=True)
+ df = concat(frames, axis=1, verify_integrity=False)
# apply the selection filters & axis orderings
df = self.process_axes(df, columns=columns)
diff --git a/pandas/io/tests/test_pytables.py b/pandas/io/tests/test_pytables.py
index 6941452075f4b..2ef4a9287a664 100644
--- a/pandas/io/tests/test_pytables.py
+++ b/pandas/io/tests/test_pytables.py
@@ -2296,6 +2296,29 @@ def test_wide_table(self):
wp = tm.makePanel()
self._check_roundtrip_table(wp, assert_panel_equal)
+ def test_select_with_dups(self):
+
+
+ # single dtypes
+ df = DataFrame(np.random.randn(10,4),columns=['A','A','B','B'])
+ df.index = date_range('20130101 9:30',periods=10,freq='T')
+
+ with ensure_clean(self.path) as store:
+ store.append('df',df)
+ result = store.select('df')
+ assert_frame_equal(result,df)
+
+ # dups across dtypes
+ df = concat([DataFrame(np.random.randn(10,4),columns=['A','A','B','B']),
+ DataFrame(np.random.randint(0,10,size=20).reshape(10,2),columns=['A','C'])],
+ axis=1)
+ df.index = date_range('20130101 9:30',periods=10,freq='T')
+
+ with ensure_clean(self.path) as store:
+ store.append('df',df)
+ result = store.select('df')
+ assert_frame_equal(result,df)
+
def test_wide_table_dups(self):
wp = tm.makePanel()
with ensure_clean(self.path) as store:
| closes #4767
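
The one-line fix flips `verify_integrity` to `False` when HDFStore reassembles the per-dtype frames with `concat`. A toy, pandas-free model of the label check that was raising (names are illustrative; the real check lives inside `pandas.concat`):

```python
from collections import Counter

def concat_labels(parts, verify_integrity=True):
    # Toy model of the axis-label integrity check behind GH4767: a frame
    # with duplicate columns spread across dtype blocks produces duplicate
    # labels when the blocks are glued back together, so the strict check
    # raised on a perfectly valid stored frame.
    out = [label for part in parts for label in part]
    if verify_integrity:
        dups = sorted(label for label, n in Counter(out).items() if n > 1)
        if dups:
            raise ValueError("Indexes have overlapping values: %s" % dups)
    return out
```

With `verify_integrity=False` (the fix), `['A', 'A', 'B', 'B']` plus `['A', 'C']` concatenates without complaint, matching the duplicate-column test added above.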
| https://api.github.com/repos/pandas-dev/pandas/pulls/4768 | 2013-09-07T00:10:23Z | 2013-09-07T00:20:06Z | 2013-09-07T00:20:06Z | 2014-06-12T07:59:14Z |
BUG: Bug in setting with loc/ix a single indexer on a multi-index axis and a listlike (related to GH3777) | diff --git a/doc/source/release.rst b/doc/source/release.rst
index 9a34cdbdfb5a8..1e0c980ca752d 100644
--- a/doc/source/release.rst
+++ b/doc/source/release.rst
@@ -327,6 +327,7 @@ See :ref:`Internal Refactoring<whatsnew_0130.refactoring>`
- Bug with Series indexing not raising an error when the right-hand-side has an incorrect length (:issue:`2702`)
 - Bug in multi-indexing with a partial string selection as one part of a MultiIndex (:issue:`4758`)
- Bug with reindexing on the index with a non-unique index will now raise ``ValueError`` (:issue:`4746`)
+ - Bug in setting with ``loc/ix`` a single indexer with a multi-index axis and a numpy array (related to :issue:`3777`)
pandas 0.12
===========
diff --git a/pandas/core/indexing.py b/pandas/core/indexing.py
index 19eeecfeb2bde..72196fcdad38d 100644
--- a/pandas/core/indexing.py
+++ b/pandas/core/indexing.py
@@ -163,6 +163,10 @@ def _setitem_with_indexer(self, indexer, value):
labels = _safe_append_to_index(index, key)
self.obj._data = self.obj.reindex_axis(labels,i)._data
+ if isinstance(labels,MultiIndex):
+ self.obj.sortlevel(inplace=True)
+ labels = self.obj._get_axis(i)
+
nindexer.append(labels.get_loc(key))
else:
@@ -198,33 +202,77 @@ def _setitem_with_indexer(self, indexer, value):
elif self.ndim >= 3:
return self.obj.__setitem__(indexer,value)
+ # set
+ info_axis = self.obj._info_axis_number
+ item_labels = self.obj._get_axis(info_axis)
+
+ # if we have a complicated setup, take the split path
+ if isinstance(indexer, tuple) and any([ isinstance(ax,MultiIndex) for ax in self.obj.axes ]):
+ take_split_path = True
+
# align and set the values
if take_split_path:
+
if not isinstance(indexer, tuple):
indexer = self._tuplify(indexer)
if isinstance(value, ABCSeries):
value = self._align_series(indexer, value)
- info_axis = self.obj._info_axis_number
info_idx = indexer[info_axis]
-
if com.is_integer(info_idx):
info_idx = [info_idx]
+ labels = item_labels[info_idx]
+
+ # if we have a partial multiindex, then we need to adjust the plane indexer here
+ if len(labels) == 1 and isinstance(self.obj[labels[0]].index,MultiIndex):
+ index = self.obj[labels[0]].index
+ idx = indexer[:info_axis][0]
+ try:
+ if idx in index:
+ idx = index.get_loc(idx)
+ except:
+ pass
+ plane_indexer = tuple([idx]) + indexer[info_axis + 1:]
+ lplane_indexer = _length_of_indexer(plane_indexer[0],index)
- plane_indexer = indexer[:info_axis] + indexer[info_axis + 1:]
- item_labels = self.obj._get_axis(info_axis)
+ if is_list_like(value) and lplane_indexer != len(value):
+ raise ValueError("cannot set using a multi-index selection indexer with a different length than the value")
+
+ # non-mi
+ else:
+ plane_indexer = indexer[:info_axis] + indexer[info_axis + 1:]
+ if info_axis > 0:
+ plane_axis = self.obj.axes[:info_axis][0]
+ lplane_indexer = _length_of_indexer(plane_indexer[0],plane_axis)
+ else:
+ lplane_indexer = 0
def setter(item, v):
s = self.obj[item]
- pi = plane_indexer[0] if len(plane_indexer) == 1 else plane_indexer
+ pi = plane_indexer[0] if lplane_indexer == 1 else plane_indexer
# set the item, possibly having a dtype change
s = s.copy()
s._data = s._data.setitem(pi,v)
self.obj[item] = s
- labels = item_labels[info_idx]
+ def can_do_equal_len():
+ """ return True if we have an equal len settable """
+ if not len(labels) == 1:
+ return False
+
+ l = len(value)
+ item = labels[0]
+ index = self.obj[item].index
+
+ # equal len list/ndarray
+ if len(index) == l:
+ return True
+ elif lplane_indexer == l:
+ return True
+
+ return False
if _is_list_like(value):
@@ -251,8 +299,7 @@ def setter(item, v):
setter(item, value[:,i])
# we have an equal len list/ndarray
- elif len(labels) == 1 and (
- len(self.obj[labels[0]]) == len(value) or len(plane_indexer[0]) == len(value)):
+ elif can_do_equal_len():
setter(labels[0], value)
# per label values
@@ -1104,6 +1151,31 @@ def _convert_key(self, key):
# 32-bit floating point machine epsilon
_eps = np.finfo('f4').eps
+def _length_of_indexer(indexer,target=None):
+ """ return the length of a single non-tuple indexer which could be a slice """
+ if target is not None and isinstance(indexer, slice):
+ l = len(target)
+ start = indexer.start
+ stop = indexer.stop
+ step = indexer.step
+ if start is None:
+ start = 0
+ elif start < 0:
+ start += l
+ if stop is None or stop > l:
+ stop = l
+ elif stop < 0:
+ stop += l
+ if step is None:
+ step = 1
+ elif step < 0:
+ step = abs(step)
+ return (stop-start) / step
+ elif isinstance(indexer, (ABCSeries, np.ndarray, list)):
+ return len(indexer)
+ elif not is_list_like(indexer):
+ return 1
+ raise AssertionError("cannot find the length of the indexer")
def _convert_to_index_sliceable(obj, key):
""" if we are index sliceable, then return my slicer, otherwise return None """
diff --git a/pandas/core/internals.py b/pandas/core/internals.py
index 91fdc712fb9b8..57db36b252e3c 100644
--- a/pandas/core/internals.py
+++ b/pandas/core/internals.py
@@ -12,7 +12,8 @@
is_list_like, _infer_dtype_from_scalar)
from pandas.core.index import (Index, MultiIndex, _ensure_index,
_handle_legacy_indexes)
-from pandas.core.indexing import _check_slice_bounds, _maybe_convert_indices
+from pandas.core.indexing import (_check_slice_bounds, _maybe_convert_indices,
+ _length_of_indexer)
import pandas.core.common as com
from pandas.sparse.array import _maybe_to_sparse, SparseArray
import pandas.lib as lib
@@ -563,22 +564,7 @@ def setitem(self, indexer, value):
elif isinstance(indexer, slice):
if is_list_like(value) and l:
- start = indexer.start
- stop = indexer.stop
- step = indexer.step
- if start is None:
- start = 0
- elif start < 0:
- start += l
- if stop is None or stop > l:
- stop = len(values)
- elif stop < 0:
- stop += l
- if step is None:
- step = 1
- elif step < 0:
- step = abs(step)
- if (stop-start) / step != len(value):
+ if len(value) != _length_of_indexer(indexer, values):
raise ValueError("cannot set using a slice indexer with a different length than the value")
try:
diff --git a/pandas/tests/test_indexing.py b/pandas/tests/test_indexing.py
index 66193248ffb7d..d6088c2d72525 100644
--- a/pandas/tests/test_indexing.py
+++ b/pandas/tests/test_indexing.py
@@ -917,6 +917,60 @@ def f():
#result = wp.loc[['Item1', 'Item2'], :, ['A', 'B']]
#tm.assert_panel_equal(result,expected)
+ def test_multiindex_assignment(self):
+
+ # GH3777 part 2
+
+ # mixed dtype
+ df = DataFrame(np.random.randint(5,10,size=9).reshape(3, 3),
+ columns=list('abc'),
+ index=[[4,4,8],[8,10,12]])
+ df['d'] = np.nan
+ arr = np.array([0.,1.])
+
+ df.ix[4,'d'] = arr
+ assert_series_equal(df.ix[4,'d'],Series(arr,index=[8,10],name='d'))
+
+ # single dtype
+ df = DataFrame(np.random.randint(5,10,size=9).reshape(3, 3),
+ columns=list('abc'),
+ index=[[4,4,8],[8,10,12]])
+
+ df.ix[4,'c'] = arr
+ assert_series_equal(df.ix[4,'c'],Series(arr,index=[8,10],name='c',dtype='int64'))
+
+ # scalar ok
+ df.ix[4,'c'] = 10
+ assert_series_equal(df.ix[4,'c'],Series(10,index=[8,10],name='c',dtype='int64'))
+
+ # invalid assignments
+ def f():
+ df.ix[4,'c'] = [0,1,2,3]
+ self.assertRaises(ValueError, f)
+
+ def f():
+ df.ix[4,'c'] = [0]
+ self.assertRaises(ValueError, f)
+
+ # groupby example
+ NUM_ROWS = 100
+ NUM_COLS = 10
+ col_names = ['A'+num for num in map(str,np.arange(NUM_COLS).tolist())]
+ index_cols = col_names[:5]
+ df = DataFrame(np.random.randint(5, size=(NUM_ROWS,NUM_COLS)), dtype=np.int64, columns=col_names)
+ df = df.set_index(index_cols).sort_index()
+ grp = df.groupby(level=index_cols[:4])
+ df['new_col'] = np.nan
+
+ f_index = np.arange(5)
+ def f(name,df2):
+ return Series(np.arange(df2.shape[0]),name=df2.index.values[0]).reindex(f_index)
+ new_df = pd.concat([ f(name,df2) for name, df2 in grp ],axis=1).T
+
+ for name, df2 in grp:
+ new_vals = np.arange(df2.shape[0])
+ df.ix[name, 'new_col'] = new_vals
+
def test_multi_assign(self):
# GH 3626, an assignement of a sub-df to a df
| related to #3777
This shows enlarging (and setting inplace)
```
In [8]: df = DataFrame(np.random.randint(5,10,size=9).reshape(3, 3),
...: ...: columns=list('abc'),
...: ...: index=[[4,4,8],[8,10,12]])
In [9]:
In [9]: df
Out[9]:
a b c
4 8 8 8 8
10 9 7 6
8 12 8 7 9
In [10]: df.loc[4,'d'] = [0,1.]
In [11]: df
Out[11]:
a b c d
4 8 8 8 8 0
10 9 7 6 1
8 12 8 7 9 NaN
In [12]: df.loc[4,'d'] = [3,4]
In [13]: df
Out[13]:
a b c d
4 8 8 8 8 3
10 9 7 6 4
8 12 8 7 9 NaN
```
Invalid assignments
```
In [10]: df.loc[4,'d'] = [3]
ValueError: cannot set using a multi-index selection indexer with a different length than the value
In [11]: df.loc[4,'d'] = [3,4,5]
ValueError: cannot set using a multi-index selection indexer with a different length than the value
```
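
The new `_length_of_indexer` helper in this PR normalizes a slice against the target axis before counting the selected positions. A stdlib-only sketch of the slice branch, using Python's own `slice.indices` (the helper name here is illustrative, not the pandas function):

```python
def length_of_slice(indexer, target_len):
    # slice.indices() normalizes start/stop/step against the target length
    # (handling None and negative values), after which the number of
    # selected positions can be counted exactly.
    start, stop, step = indexer.indices(target_len)
    return len(range(start, stop, step))
```

This count is what gets compared against `len(value)`: `df.loc[4,'d'] = [3]` fails above because the indexer selects two rows while the value has length one.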
| https://api.github.com/repos/pandas-dev/pandas/pulls/4766 | 2013-09-06T23:00:49Z | 2013-09-07T00:10:52Z | 2013-09-07T00:10:52Z | 2014-06-18T19:30:24Z |
API: raise a TypeError when isin is passed a string | diff --git a/doc/source/api.rst b/doc/source/api.rst
index e964ce569532a..9cf10d3f0780d 100644
--- a/doc/source/api.rst
+++ b/doc/source/api.rst
@@ -451,6 +451,7 @@ Indexing, iteration
DataFrame.pop
DataFrame.tail
DataFrame.xs
+ DataFrame.isin
Binary operator functions
~~~~~~~~~~~~~~~~~~~~~~~~~
diff --git a/doc/source/release.rst b/doc/source/release.rst
index 00aba51eac37e..80cd935bc67e9 100644
--- a/doc/source/release.rst
+++ b/doc/source/release.rst
@@ -148,6 +148,9 @@ pandas 0.13
behavior.
- ``DataFrame.update()`` no longer raises a ``DataConflictError``, it now
will raise a ``ValueError`` instead (if necessary) (:issue:`4732`)
+ - ``Series.isin()`` and ``DataFrame.isin()`` now raise a ``TypeError`` when
+ passed a string (:issue:`4763`). Pass a ``list`` of one element (containing
+ the string) instead.
**Internal Refactoring**
diff --git a/pandas/core/frame.py b/pandas/core/frame.py
index 0cd9f7f3f5330..8c6e7697f8ea1 100644
--- a/pandas/core/frame.py
+++ b/pandas/core/frame.py
@@ -4609,6 +4609,11 @@ def isin(self, values, iloc=False):
else:
+ if not com.is_list_like(values):
+ raise TypeError("only list-like or dict-like objects are"
+ " allowed to be passed to DataFrame.isin(), "
+ "you passed a "
+ "{0!r}".format(type(values).__name__))
return DataFrame(lib.ismember(self.values.ravel(),
set(values)).reshape(self.shape),
self.index,
diff --git a/pandas/core/series.py b/pandas/core/series.py
index 1160f85751aee..5579e60ceb90e 100644
--- a/pandas/core/series.py
+++ b/pandas/core/series.py
@@ -2805,17 +2805,47 @@ def take(self, indices, axis=0, convert=True):
def isin(self, values):
"""
- Return boolean vector showing whether each element in the Series is
- exactly contained in the passed sequence of values
+ Return a boolean :class:`~pandas.Series` showing whether each element in
+ the :class:`~pandas.Series` is exactly contained in the passed sequence of
+ ``values``.
Parameters
----------
- values : sequence
+ values : list-like
+ The sequence of values to test. Passing in a single string will
+ raise a ``TypeError``:
+
+ .. code-block:: python
+
+ from pandas import Series
+ s = Series(list('abc'))
+ s.isin('a')
+
+ Instead, turn a single string into a ``list`` of one element:
+
+ .. code-block:: python
+
+ from pandas import Series
+ s = Series(list('abc'))
+ s.isin(['a'])
Returns
-------
- isin : Series (boolean dtype)
+ isin : Series (bool dtype)
+
+ Raises
+ ------
+ TypeError
+ * If ``values`` is a string
+
+ See Also
+ --------
+ pandas.DataFrame.isin
"""
+ if not com.is_list_like(values):
+ raise TypeError("only list-like objects are allowed to be passed"
+ " to Series.isin(), you passed a "
+ "{0!r}".format(type(values).__name__))
value_set = set(values)
result = lib.ismember(_values_from_object(self), value_set)
return self._constructor(result, self.index, name=self.name)
diff --git a/pandas/tests/test_frame.py b/pandas/tests/test_frame.py
index c39634281ebb7..b4ec36ac5f29e 100644
--- a/pandas/tests/test_frame.py
+++ b/pandas/tests/test_frame.py
@@ -10915,6 +10915,16 @@ def test_isin_dict(self):
expected.iloc[0, 0] = True
assert_frame_equal(result, expected)
+ def test_isin_with_string_scalar(self):
+ #GH4763
+ df = DataFrame({'vals': [1, 2, 3, 4], 'ids': ['a', 'b', 'f', 'n'],
+ 'ids2': ['a', 'n', 'c', 'n']},
+ index=['foo', 'bar', 'baz', 'qux'])
+ with tm.assertRaises(TypeError):
+ df.isin('a')
+
+ with tm.assertRaises(TypeError):
+ df.isin('aaa')
if __name__ == '__main__':
# unittest.main()
diff --git a/pandas/tests/test_series.py b/pandas/tests/test_series.py
index 514245e82ac28..556973acdcb95 100644
--- a/pandas/tests/test_series.py
+++ b/pandas/tests/test_series.py
@@ -4433,6 +4433,16 @@ def test_isin(self):
expected = Series([True, False, True, False, False, False, True, True])
assert_series_equal(result, expected)
+ def test_isin_with_string_scalar(self):
+ #GH4763
+ s = Series(['A', 'B', 'C', 'a', 'B', 'B', 'A', 'C'])
+ with tm.assertRaises(TypeError):
+ s.isin('a')
+
+ with tm.assertRaises(TypeError):
+ s = Series(['aaa', 'b', 'c'])
+ s.isin('aaa')
+
def test_fillna_int(self):
s = Series(np.random.randint(-100, 100, 50))
s.fillna(method='ffill', inplace=True)
| closes #4763
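The guard this PR adds is `com.is_list_like`; the point is that a plain string, although iterable, is rejected as a scalar so `s.isin('a')` raises rather than silently matching characters. A minimal sketch of that behavior (this is not the pandas implementation):

```python
def is_list_like(obj):
    # Iterable containers qualify, but a string -- despite being iterable --
    # is treated as a scalar and rejected, which is why isin('a') now
    # raises TypeError while isin(['a']) works.
    return hasattr(obj, '__iter__') and not isinstance(obj, str)
```

Callers wrap single strings in a one-element list, exactly as the new docstring advises.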
| https://api.github.com/repos/pandas-dev/pandas/pulls/4765 | 2013-09-06T19:08:12Z | 2013-09-06T20:07:10Z | 2013-09-06T20:07:10Z | 2014-06-22T20:42:31Z |
Gotachas -> Gotchas | diff --git a/doc/source/10min.rst b/doc/source/10min.rst
index 96f9fd912b664..58c5b54968614 100644
--- a/doc/source/10min.rst
+++ b/doc/source/10min.rst
@@ -723,4 +723,4 @@ If you are trying an operation and you see an exception like:
See :ref:`Comparisons<basics.compare>` for an explanation and what to do.
-See :ref:`Gotachas<gotchas>` as well.
+See :ref:`Gotchas<gotchas>` as well.
| Just a quick typo fix.
| https://api.github.com/repos/pandas-dev/pandas/pulls/4762 | 2013-09-06T04:46:59Z | 2013-09-06T04:50:46Z | 2013-09-06T04:50:46Z | 2014-07-16T08:26:51Z |
BUG: in multi-indexing with a partial string selection (GH4758) | diff --git a/doc/source/release.rst b/doc/source/release.rst
index 00aba51eac37e..adea4601b5a4c 100644
--- a/doc/source/release.rst
+++ b/doc/source/release.rst
@@ -322,6 +322,7 @@ See :ref:`Internal Refactoring<whatsnew_0130.refactoring>`
 - Bug in using ``iloc/loc`` with a cross-sectional and duplicate indices (:issue:`4726`)
- Bug with using ``QUOTE_NONE`` with ``to_csv`` causing ``Exception``. (:issue:`4328`)
- Bug with Series indexing not raising an error when the right-hand-side has an incorrect length (:issue:`2702`)
+ - Bug in multi-indexing with a partial string selection as one part of a MultiIndex (:issue:`4758`)
pandas 0.12
===========
diff --git a/pandas/core/index.py b/pandas/core/index.py
index 57a913acf6355..2b5f761026924 100644
--- a/pandas/core/index.py
+++ b/pandas/core/index.py
@@ -2596,10 +2596,15 @@ def _maybe_drop_levels(indexer, levels, drop_level):
if not drop_level:
return self[indexer]
 # kludge around
- new_index = self[indexer]
+ orig_index = new_index = self[indexer]
levels = [self._get_level_number(i) for i in levels]
for i in sorted(levels, reverse=True):
- new_index = new_index.droplevel(i)
+ try:
+ new_index = new_index.droplevel(i)
+ except:
+
+ # no dropping here
+ return orig_index
return new_index
if isinstance(level, (tuple, list)):
@@ -2635,20 +2640,37 @@ def _maybe_drop_levels(indexer, levels, drop_level):
pass
if not any(isinstance(k, slice) for k in key):
- if len(key) == self.nlevels:
- if self.is_unique:
- return self._engine.get_loc(_values_from_object(key)), None
- else:
- indexer = slice(*self.slice_locs(key, key))
- return indexer, self[indexer]
- else:
- # partial selection
+
+ # partial selection
+ def partial_selection(key):
indexer = slice(*self.slice_locs(key, key))
if indexer.start == indexer.stop:
raise KeyError(key)
ilevels = [i for i in range(len(key))
if key[i] != slice(None, None)]
return indexer, _maybe_drop_levels(indexer, ilevels, drop_level)
+
+ if len(key) == self.nlevels:
+
+ if self.is_unique:
+
+ # here we have a completely specified key, but are using some partial string matching here
+ # GH4758
+ can_index_exactly = any([ l.is_all_dates and not isinstance(k,compat.string_types) for k, l in zip(key, self.levels) ])
+ if any([ l.is_all_dates for k, l in zip(key, self.levels) ]) and not can_index_exactly:
+ indexer = slice(*self.slice_locs(key, key))
+
+ # we have a multiple selection here
+ if not indexer.stop-indexer.start == 1:
+ return partial_selection(key)
+
+ key = tuple(self[indexer].tolist()[0])
+
+ return self._engine.get_loc(_values_from_object(key)), None
+ else:
+ return partial_selection(key)
+ else:
+ return partial_selection(key)
else:
indexer = None
for i, k in enumerate(key):
diff --git a/pandas/core/indexing.py b/pandas/core/indexing.py
index 636a5e88817ee..9ecdf1930604f 100644
--- a/pandas/core/indexing.py
+++ b/pandas/core/indexing.py
@@ -952,9 +952,15 @@ def _has_valid_type(self, key, axis):
if not len(ax):
raise KeyError("The [%s] axis is empty" % self.obj._get_axis_name(axis))
- if not key in ax:
+ try:
+ if not key in ax:
+ raise KeyError("the label [%s] is not in the [%s]" % (key,self.obj._get_axis_name(axis)))
+ except (TypeError):
+
+ # if we have a weird type of key/ax
raise KeyError("the label [%s] is not in the [%s]" % (key,self.obj._get_axis_name(axis)))
+
return True
def _getitem_axis(self, key, axis=0):
diff --git a/pandas/tests/test_multilevel.py b/pandas/tests/test_multilevel.py
index 21462780e2ffd..d3d4368d8028e 100644
--- a/pandas/tests/test_multilevel.py
+++ b/pandas/tests/test_multilevel.py
@@ -1842,9 +1842,9 @@ def test_duplicate_mi(self):
columns=list('ABCD'))
df = df.set_index(['A','B'])
df = df.sortlevel(0)
- result = df.loc[('foo','bar')]
expected = DataFrame([['foo','bar',1.0,1],['foo','bar',2.0,2],['foo','bar',5.0,5]],
columns=list('ABCD')).set_index(['A','B'])
+ result = df.loc[('foo','bar')]
assert_frame_equal(result,expected)
def test_multiindex_set_index(self):
diff --git a/pandas/tools/tests/test_pivot.py b/pandas/tools/tests/test_pivot.py
index 28b52d073c813..6c18b6582c4cc 100644
--- a/pandas/tools/tests/test_pivot.py
+++ b/pandas/tools/tests/test_pivot.py
@@ -175,6 +175,7 @@ def _check_output(res, col, rows=['A', 'B'], cols=['C']):
exp = self.data.groupby(rows)[col].mean()
tm.assert_series_equal(cmarg, exp)
+ res.sortlevel(inplace=True)
rmarg = res.xs(('All', ''))[:-1]
exp = self.data.groupby(cols)[col].mean()
tm.assert_series_equal(rmarg, exp)
diff --git a/pandas/tseries/tests/test_timeseries.py b/pandas/tseries/tests/test_timeseries.py
index 5bed7777cf439..b5697a98de412 100644
--- a/pandas/tseries/tests/test_timeseries.py
+++ b/pandas/tseries/tests/test_timeseries.py
@@ -1970,495 +1970,6 @@ def test_join_self(self):
joined = index.join(index, how=kind)
self.assert_(index is joined)
-# infortunately, too much has changed to handle these legacy pickles
-# class TestLegacySupport(unittest.TestCase):
-class LegacySupport(object):
-
- _multiprocess_can_split_ = True
-
- @classmethod
- def setUpClass(cls):
- if compat.PY3:
- raise nose.SkipTest
-
- pth, _ = os.path.split(os.path.abspath(__file__))
- filepath = os.path.join(pth, 'data', 'frame.pickle')
-
- with open(filepath, 'rb') as f:
- cls.frame = pickle.load(f)
-
- filepath = os.path.join(pth, 'data', 'series.pickle')
- with open(filepath, 'rb') as f:
- cls.series = pickle.load(f)
-
- def test_pass_offset_warn(self):
- buf = StringIO()
-
- sys.stderr = buf
- DatetimeIndex(start='1/1/2000', periods=10, offset='H')
- sys.stderr = sys.__stderr__
-
- def test_unpickle_legacy_frame(self):
- dtindex = DatetimeIndex(start='1/3/2005', end='1/14/2005',
- freq=BDay(1))
-
- unpickled = self.frame
-
- self.assertEquals(type(unpickled.index), DatetimeIndex)
- self.assertEquals(len(unpickled), 10)
- self.assert_((unpickled.columns == Int64Index(np.arange(5))).all())
- self.assert_((unpickled.index == dtindex).all())
- self.assertEquals(unpickled.index.offset, BDay(1, normalize=True))
-
- def test_unpickle_legacy_series(self):
- from pandas.core.datetools import BDay
-
- unpickled = self.series
-
- dtindex = DatetimeIndex(start='1/3/2005', end='1/14/2005',
- freq=BDay(1))
-
- self.assertEquals(type(unpickled.index), DatetimeIndex)
- self.assertEquals(len(unpickled), 10)
- self.assert_((unpickled.index == dtindex).all())
- self.assertEquals(unpickled.index.offset, BDay(1, normalize=True))
-
- def test_unpickle_legacy_len0_daterange(self):
- pth, _ = os.path.split(os.path.abspath(__file__))
- filepath = os.path.join(pth, 'data', 'series_daterange0.pickle')
-
- result = pd.read_pickle(filepath)
-
- ex_index = DatetimeIndex([], freq='B')
-
- self.assert_(result.index.equals(ex_index))
- tm.assert_isinstance(result.index.freq, offsets.BDay)
- self.assert_(len(result) == 0)
-
- def test_arithmetic_interaction(self):
- index = self.frame.index
- obj_index = index.asobject
-
- dseries = Series(rand(len(index)), index=index)
- oseries = Series(dseries.values, index=obj_index)
-
- result = dseries + oseries
- expected = dseries * 2
- tm.assert_isinstance(result.index, DatetimeIndex)
- assert_series_equal(result, expected)
-
- result = dseries + oseries[:5]
- expected = dseries + dseries[:5]
- tm.assert_isinstance(result.index, DatetimeIndex)
- assert_series_equal(result, expected)
-
- def test_join_interaction(self):
- index = self.frame.index
- obj_index = index.asobject
-
- def _check_join(left, right, how='inner'):
- ra, rb, rc = left.join(right, how=how, return_indexers=True)
- ea, eb, ec = left.join(DatetimeIndex(right), how=how,
- return_indexers=True)
-
- tm.assert_isinstance(ra, DatetimeIndex)
- self.assert_(ra.equals(ea))
-
- assert_almost_equal(rb, eb)
- assert_almost_equal(rc, ec)
-
- _check_join(index[:15], obj_index[5:], how='inner')
- _check_join(index[:15], obj_index[5:], how='outer')
- _check_join(index[:15], obj_index[5:], how='right')
- _check_join(index[:15], obj_index[5:], how='left')
-
- def test_join_nonunique(self):
- idx1 = to_datetime(['2012-11-06 16:00:11.477563',
- '2012-11-06 16:00:11.477563'])
- idx2 = to_datetime(['2012-11-06 15:11:09.006507',
- '2012-11-06 15:11:09.006507'])
- rs = idx1.join(idx2, how='outer')
- self.assert_(rs.is_monotonic)
-
- def test_unpickle_daterange(self):
- pth, _ = os.path.split(os.path.abspath(__file__))
- filepath = os.path.join(pth, 'data', 'daterange_073.pickle')
-
- rng = read_pickle(filepath)
- tm.assert_isinstance(rng[0], datetime)
- tm.assert_isinstance(rng.offset, offsets.BDay)
- self.assert_(rng.values.dtype == object)
-
- def test_setops(self):
- index = self.frame.index
- obj_index = index.asobject
-
- result = index[:5].union(obj_index[5:])
- expected = index
- tm.assert_isinstance(result, DatetimeIndex)
- self.assert_(result.equals(expected))
-
- result = index[:10].intersection(obj_index[5:])
- expected = index[5:10]
- tm.assert_isinstance(result, DatetimeIndex)
- self.assert_(result.equals(expected))
-
- result = index[:10] - obj_index[5:]
- expected = index[:5]
- tm.assert_isinstance(result, DatetimeIndex)
- self.assert_(result.equals(expected))
-
- def test_index_conversion(self):
- index = self.frame.index
- obj_index = index.asobject
-
- conv = DatetimeIndex(obj_index)
- self.assert_(conv.equals(index))
-
- self.assertRaises(ValueError, DatetimeIndex, ['a', 'b', 'c', 'd'])
-
- def test_tolist(self):
- rng = date_range('1/1/2000', periods=10)
-
- result = rng.tolist()
- tm.assert_isinstance(result[0], Timestamp)
-
- def test_object_convert_fail(self):
- idx = DatetimeIndex([NaT])
- self.assertRaises(ValueError, idx.astype, 'O')
-
- def test_setops_conversion_fail(self):
- index = self.frame.index
-
- right = Index(['a', 'b', 'c', 'd'])
-
- result = index.union(right)
- expected = Index(np.concatenate([index.asobject, right]))
- self.assert_(result.equals(expected))
-
- result = index.intersection(right)
- expected = Index([])
- self.assert_(result.equals(expected))
-
- def test_legacy_time_rules(self):
- rules = [('WEEKDAY', 'B'),
- ('EOM', 'BM'),
- ('W@MON', 'W-MON'), ('W@TUE', 'W-TUE'), ('W@WED', 'W-WED'),
- ('W@THU', 'W-THU'), ('W@FRI', 'W-FRI'),
- ('Q@JAN', 'BQ-JAN'), ('Q@FEB', 'BQ-FEB'), ('Q@MAR', 'BQ-MAR'),
- ('A@JAN', 'BA-JAN'), ('A@FEB', 'BA-FEB'), ('A@MAR', 'BA-MAR'),
- ('A@APR', 'BA-APR'), ('A@MAY', 'BA-MAY'), ('A@JUN', 'BA-JUN'),
- ('A@JUL', 'BA-JUL'), ('A@AUG', 'BA-AUG'), ('A@SEP', 'BA-SEP'),
- ('A@OCT', 'BA-OCT'), ('A@NOV', 'BA-NOV'), ('A@DEC', 'BA-DEC'),
- ('WOM@1FRI', 'WOM-1FRI'), ('WOM@2FRI', 'WOM-2FRI'),
- ('WOM@3FRI', 'WOM-3FRI'), ('WOM@4FRI', 'WOM-4FRI')]
-
- start, end = '1/1/2000', '1/1/2010'
-
- for old_freq, new_freq in rules:
- old_rng = date_range(start, end, freq=old_freq)
- new_rng = date_range(start, end, freq=new_freq)
- self.assert_(old_rng.equals(new_rng))
-
- # test get_legacy_offset_name
- offset = datetools.get_offset(new_freq)
- old_name = datetools.get_legacy_offset_name(offset)
- self.assertEquals(old_name, old_freq)
-
- def test_ms_vs_MS(self):
- left = datetools.get_offset('ms')
- right = datetools.get_offset('MS')
- self.assert_(left == datetools.Milli())
- self.assert_(right == datetools.MonthBegin())
-
- def test_rule_aliases(self):
- rule = datetools.to_offset('10us')
- self.assert_(rule == datetools.Micro(10))
-
- def test_slice_year(self):
- dti = DatetimeIndex(freq='B', start=datetime(2005, 1, 1), periods=500)
-
- s = Series(np.arange(len(dti)), index=dti)
- result = s['2005']
- expected = s[s.index.year == 2005]
- assert_series_equal(result, expected)
-
- df = DataFrame(np.random.rand(len(dti), 5), index=dti)
- result = df.ix['2005']
- expected = df[df.index.year == 2005]
- assert_frame_equal(result, expected)
-
- rng = date_range('1/1/2000', '1/1/2010')
-
- result = rng.get_loc('2009')
- expected = slice(3288, 3653)
- self.assert_(result == expected)
-
- def test_slice_quarter(self):
- dti = DatetimeIndex(freq='D', start=datetime(2000, 6, 1), periods=500)
-
- s = Series(np.arange(len(dti)), index=dti)
- self.assertEquals(len(s['2001Q1']), 90)
-
- df = DataFrame(np.random.rand(len(dti), 5), index=dti)
- self.assertEquals(len(df.ix['1Q01']), 90)
-
- def test_slice_month(self):
- dti = DatetimeIndex(freq='D', start=datetime(2005, 1, 1), periods=500)
- s = Series(np.arange(len(dti)), index=dti)
- self.assertEquals(len(s['2005-11']), 30)
-
- df = DataFrame(np.random.rand(len(dti), 5), index=dti)
- self.assertEquals(len(df.ix['2005-11']), 30)
-
- assert_series_equal(s['2005-11'], s['11-2005'])
-
- def test_partial_slice(self):
- rng = DatetimeIndex(freq='D', start=datetime(2005, 1, 1), periods=500)
- s = Series(np.arange(len(rng)), index=rng)
-
- result = s['2005-05':'2006-02']
- expected = s['20050501':'20060228']
- assert_series_equal(result, expected)
-
- result = s['2005-05':]
- expected = s['20050501':]
- assert_series_equal(result, expected)
-
- result = s[:'2006-02']
- expected = s[:'20060228']
- assert_series_equal(result, expected)
-
- result = s['2005-1-1']
- self.assert_(result == s.irow(0))
-
- self.assertRaises(Exception, s.__getitem__, '2004-12-31')
-
- def test_partial_slice_daily(self):
- rng = DatetimeIndex(freq='H', start=datetime(2005, 1, 31), periods=500)
- s = Series(np.arange(len(rng)), index=rng)
-
- result = s['2005-1-31']
- assert_series_equal(result, s.ix[:24])
-
- self.assertRaises(Exception, s.__getitem__, '2004-12-31 00')
-
- def test_partial_slice_hourly(self):
- rng = DatetimeIndex(freq='T', start=datetime(2005, 1, 1, 20, 0, 0),
- periods=500)
- s = Series(np.arange(len(rng)), index=rng)
-
- result = s['2005-1-1']
- assert_series_equal(result, s.ix[:60 * 4])
-
- result = s['2005-1-1 20']
- assert_series_equal(result, s.ix[:60])
-
- self.assert_(s['2005-1-1 20:00'] == s.ix[0])
- self.assertRaises(Exception, s.__getitem__, '2004-12-31 00:15')
-
- def test_partial_slice_minutely(self):
- rng = DatetimeIndex(freq='S', start=datetime(2005, 1, 1, 23, 59, 0),
- periods=500)
- s = Series(np.arange(len(rng)), index=rng)
-
- result = s['2005-1-1 23:59']
- assert_series_equal(result, s.ix[:60])
-
- result = s['2005-1-1']
- assert_series_equal(result, s.ix[:60])
-
- self.assert_(s['2005-1-1 23:59:00'] == s.ix[0])
- self.assertRaises(Exception, s.__getitem__, '2004-12-31 00:00:00')
-
- def test_date_range_normalize(self):
- snap = datetime.today()
- n = 50
-
- rng = date_range(snap, periods=n, normalize=False, freq='2D')
-
- offset = timedelta(2)
- values = np.array([snap + i * offset for i in range(n)],
- dtype='M8[ns]')
-
- self.assert_(np.array_equal(rng, values))
-
- rng = date_range(
- '1/1/2000 08:15', periods=n, normalize=False, freq='B')
- the_time = time(8, 15)
- for val in rng:
- self.assert_(val.time() == the_time)
-
- def test_timedelta(self):
- # this is valid too
- index = date_range('1/1/2000', periods=50, freq='B')
- shifted = index + timedelta(1)
- back = shifted + timedelta(-1)
- self.assert_(tm.equalContents(index, back))
- self.assertEqual(shifted.freq, index.freq)
- self.assertEqual(shifted.freq, back.freq)
-
- result = index - timedelta(1)
- expected = index + timedelta(-1)
- self.assert_(result.equals(expected))
-
- # GH4134, buggy with timedeltas
- rng = date_range('2013', '2014')
- s = Series(rng)
- result1 = rng - pd.offsets.Hour(1)
- result2 = DatetimeIndex(s - np.timedelta64(100000000))
- result3 = rng - np.timedelta64(100000000)
- result4 = DatetimeIndex(s - pd.offsets.Hour(1))
- self.assert_(result1.equals(result4))
- self.assert_(result2.equals(result3))
-
- def test_shift(self):
- ts = Series(np.random.randn(5),
- index=date_range('1/1/2000', periods=5, freq='H'))
-
- result = ts.shift(1, freq='5T')
- exp_index = ts.index.shift(1, freq='5T')
- self.assert_(result.index.equals(exp_index))
-
- # GH #1063, multiple of same base
- result = ts.shift(1, freq='4H')
- exp_index = ts.index + datetools.Hour(4)
- self.assert_(result.index.equals(exp_index))
-
- idx = DatetimeIndex(['2000-01-01', '2000-01-02', '2000-01-04'])
- self.assertRaises(ValueError, idx.shift, 1)
-
- def test_setops_preserve_freq(self):
- rng = date_range('1/1/2000', '1/1/2002')
-
- result = rng[:50].union(rng[50:100])
- self.assert_(result.freq == rng.freq)
-
- result = rng[:50].union(rng[30:100])
- self.assert_(result.freq == rng.freq)
-
- result = rng[:50].union(rng[60:100])
- self.assert_(result.freq is None)
-
- result = rng[:50].intersection(rng[25:75])
- self.assert_(result.freqstr == 'D')
-
- nofreq = DatetimeIndex(list(rng[25:75]))
- result = rng[:50].union(nofreq)
- self.assert_(result.freq == rng.freq)
-
- result = rng[:50].intersection(nofreq)
- self.assert_(result.freq == rng.freq)
-
- def test_min_max(self):
- rng = date_range('1/1/2000', '12/31/2000')
- rng2 = rng.take(np.random.permutation(len(rng)))
-
- the_min = rng2.min()
- the_max = rng2.max()
- tm.assert_isinstance(the_min, Timestamp)
- tm.assert_isinstance(the_max, Timestamp)
- self.assertEqual(the_min, rng[0])
- self.assertEqual(the_max, rng[-1])
-
- self.assertEqual(rng.min(), rng[0])
- self.assertEqual(rng.max(), rng[-1])
-
- def test_min_max_series(self):
- rng = date_range('1/1/2000', periods=10, freq='4h')
- lvls = ['A', 'A', 'A', 'B', 'B', 'B', 'C', 'C', 'C', 'C']
- df = DataFrame({'TS': rng, 'V': np.random.randn(len(rng)),
- 'L': lvls})
-
- result = df.TS.max()
- exp = Timestamp(df.TS.iget(-1))
- self.assertTrue(isinstance(result, Timestamp))
- self.assertEqual(result, exp)
-
- result = df.TS.min()
- exp = Timestamp(df.TS.iget(0))
- self.assertTrue(isinstance(result, Timestamp))
- self.assertEqual(result, exp)
-
- def test_from_M8_structured(self):
- dates = [(datetime(2012, 9, 9, 0, 0),
- datetime(2012, 9, 8, 15, 10))]
- arr = np.array(dates,
- dtype=[('Date', 'M8[us]'), ('Forecasting', 'M8[us]')])
- df = DataFrame(arr)
-
- self.assertEqual(df['Date'][0], dates[0][0])
- self.assertEqual(df['Forecasting'][0], dates[0][1])
-
- s = Series(arr['Date'])
- self.assertTrue(s[0], Timestamp)
- self.assertEqual(s[0], dates[0][0])
-
- s = Series.from_array(arr['Date'], Index([0]))
- self.assertEqual(s[0], dates[0][0])
-
- def test_get_level_values_box(self):
- from pandas import MultiIndex
-
- dates = date_range('1/1/2000', periods=4)
- levels = [dates, [0, 1]]
- labels = [[0, 0, 1, 1, 2, 2, 3, 3],
- [0, 1, 0, 1, 0, 1, 0, 1]]
-
- index = MultiIndex(levels=levels, labels=labels)
-
- self.assertTrue(isinstance(index.get_level_values(0)[0], Timestamp))
-
- def test_frame_apply_dont_convert_datetime64(self):
- from pandas.tseries.offsets import BDay
- df = DataFrame({'x1': [datetime(1996, 1, 1)]})
-
- df = df.applymap(lambda x: x + BDay())
- df = df.applymap(lambda x: x + BDay())
-
- self.assertTrue(df.x1.dtype == 'M8[ns]')
-
-
-class TestLegacyCompat(unittest.TestCase):
-
- def setUp(self):
- # suppress deprecation warnings
- sys.stderr = StringIO()
-
- def test_inferTimeRule(self):
- from pandas.tseries.frequencies import inferTimeRule
-
- index1 = [datetime(2010, 1, 29, 0, 0),
- datetime(2010, 2, 26, 0, 0),
- datetime(2010, 3, 31, 0, 0)]
-
- index2 = [datetime(2010, 3, 26, 0, 0),
- datetime(2010, 3, 29, 0, 0),
- datetime(2010, 3, 30, 0, 0)]
-
- index3 = [datetime(2010, 3, 26, 0, 0),
- datetime(2010, 3, 27, 0, 0),
- datetime(2010, 3, 29, 0, 0)]
-
- # LEGACY
- assert inferTimeRule(index1) == 'EOM'
- assert inferTimeRule(index2) == 'WEEKDAY'
-
- self.assertRaises(Exception, inferTimeRule, index1[:2])
- self.assertRaises(Exception, inferTimeRule, index3)
-
- def test_time_rule(self):
- result = DateRange('1/1/2000', '1/30/2000', time_rule='WEEKDAY')
- result2 = DateRange('1/1/2000', '1/30/2000', timeRule='WEEKDAY')
- expected = date_range('1/1/2000', '1/30/2000', freq='B')
-
- self.assert_(result.equals(expected))
- self.assert_(result2.equals(expected))
-
- def tearDown(self):
- sys.stderr = sys.__stderr__
-
-
class TestDatetime64(unittest.TestCase):
"""
Also test supoprt for datetime64[ns] in Series / DataFrame
@@ -2956,6 +2467,273 @@ def test_hash_equivalent(self):
stamp = Timestamp(datetime(2011, 1, 1))
self.assertEquals(d[stamp], 5)
+class TestSlicing(unittest.TestCase):
+
+ def test_slice_year(self):
+ dti = DatetimeIndex(freq='B', start=datetime(2005, 1, 1), periods=500)
+
+ s = Series(np.arange(len(dti)), index=dti)
+ result = s['2005']
+ expected = s[s.index.year == 2005]
+ assert_series_equal(result, expected)
+
+ df = DataFrame(np.random.rand(len(dti), 5), index=dti)
+ result = df.ix['2005']
+ expected = df[df.index.year == 2005]
+ assert_frame_equal(result, expected)
+
+ rng = date_range('1/1/2000', '1/1/2010')
+
+ result = rng.get_loc('2009')
+ expected = slice(3288, 3653)
+ self.assert_(result == expected)
+
+ def test_slice_quarter(self):
+ dti = DatetimeIndex(freq='D', start=datetime(2000, 6, 1), periods=500)
+
+ s = Series(np.arange(len(dti)), index=dti)
+ self.assertEquals(len(s['2001Q1']), 90)
+
+ df = DataFrame(np.random.rand(len(dti), 5), index=dti)
+ self.assertEquals(len(df.ix['1Q01']), 90)
+
+ def test_slice_month(self):
+ dti = DatetimeIndex(freq='D', start=datetime(2005, 1, 1), periods=500)
+ s = Series(np.arange(len(dti)), index=dti)
+ self.assertEquals(len(s['2005-11']), 30)
+
+ df = DataFrame(np.random.rand(len(dti), 5), index=dti)
+ self.assertEquals(len(df.ix['2005-11']), 30)
+
+ assert_series_equal(s['2005-11'], s['11-2005'])
+
+ def test_partial_slice(self):
+ rng = DatetimeIndex(freq='D', start=datetime(2005, 1, 1), periods=500)
+ s = Series(np.arange(len(rng)), index=rng)
+
+ result = s['2005-05':'2006-02']
+ expected = s['20050501':'20060228']
+ assert_series_equal(result, expected)
+
+ result = s['2005-05':]
+ expected = s['20050501':]
+ assert_series_equal(result, expected)
+
+ result = s[:'2006-02']
+ expected = s[:'20060228']
+ assert_series_equal(result, expected)
+
+ result = s['2005-1-1']
+ self.assert_(result == s.irow(0))
+
+ self.assertRaises(Exception, s.__getitem__, '2004-12-31')
+
+ def test_partial_slice_daily(self):
+ rng = DatetimeIndex(freq='H', start=datetime(2005, 1, 31), periods=500)
+ s = Series(np.arange(len(rng)), index=rng)
+
+ result = s['2005-1-31']
+ assert_series_equal(result, s.ix[:24])
+
+ self.assertRaises(Exception, s.__getitem__, '2004-12-31 00')
+
+ def test_partial_slice_hourly(self):
+ rng = DatetimeIndex(freq='T', start=datetime(2005, 1, 1, 20, 0, 0),
+ periods=500)
+ s = Series(np.arange(len(rng)), index=rng)
+
+ result = s['2005-1-1']
+ assert_series_equal(result, s.ix[:60 * 4])
+
+ result = s['2005-1-1 20']
+ assert_series_equal(result, s.ix[:60])
+
+ self.assert_(s['2005-1-1 20:00'] == s.ix[0])
+ self.assertRaises(Exception, s.__getitem__, '2004-12-31 00:15')
+
+ def test_partial_slice_minutely(self):
+ rng = DatetimeIndex(freq='S', start=datetime(2005, 1, 1, 23, 59, 0),
+ periods=500)
+ s = Series(np.arange(len(rng)), index=rng)
+
+ result = s['2005-1-1 23:59']
+ assert_series_equal(result, s.ix[:60])
+
+ result = s['2005-1-1']
+ assert_series_equal(result, s.ix[:60])
+
+ self.assert_(s[Timestamp('2005-1-1 23:59:00')] == s.ix[0])
+ self.assertRaises(Exception, s.__getitem__, '2004-12-31 00:00:00')
+
+ def test_partial_slicing_with_multiindex(self):
+
+ # GH 4758
+ # partial string indexing with a multi-index buggy
+ df = DataFrame({'ACCOUNT':["ACCT1", "ACCT1", "ACCT1", "ACCT2"],
+ 'TICKER':["ABC", "MNP", "XYZ", "XYZ"],
+ 'val':[1,2,3,4]},
+ index=date_range("2013-06-19 09:30:00", periods=4, freq='5T'))
+ df_multi = df.set_index(['ACCOUNT', 'TICKER'], append=True)
+
+ expected = DataFrame([[1]],index=Index(['ABC'],name='TICKER'),columns=['val'])
+ result = df_multi.loc[('2013-06-19 09:30:00', 'ACCT1')]
+ assert_frame_equal(result, expected)
+
+ expected = df_multi.loc[(pd.Timestamp('2013-06-19 09:30:00', tz=None), 'ACCT1', 'ABC')]
+ result = df_multi.loc[('2013-06-19 09:30:00', 'ACCT1', 'ABC')]
+ assert_series_equal(result, expected)
+
+ # this is a KeyError as we don't do partial string selection on multi-levels
+ def f():
+ df_multi.loc[('2013-06-19', 'ACCT1', 'ABC')]
+ self.assertRaises(KeyError, f)
+
+ def test_date_range_normalize(self):
+ snap = datetime.today()
+ n = 50
+
+ rng = date_range(snap, periods=n, normalize=False, freq='2D')
+
+ offset = timedelta(2)
+ values = np.array([snap + i * offset for i in range(n)],
+ dtype='M8[ns]')
+
+ self.assert_(np.array_equal(rng, values))
+
+ rng = date_range(
+ '1/1/2000 08:15', periods=n, normalize=False, freq='B')
+ the_time = time(8, 15)
+ for val in rng:
+ self.assert_(val.time() == the_time)
+
+ def test_timedelta(self):
+ # this is valid too
+ index = date_range('1/1/2000', periods=50, freq='B')
+ shifted = index + timedelta(1)
+ back = shifted + timedelta(-1)
+ self.assert_(tm.equalContents(index, back))
+ self.assertEqual(shifted.freq, index.freq)
+ self.assertEqual(shifted.freq, back.freq)
+
+ result = index - timedelta(1)
+ expected = index + timedelta(-1)
+ self.assert_(result.equals(expected))
+
+ # GH4134, buggy with timedeltas
+ rng = date_range('2013', '2014')
+ s = Series(rng)
+ result1 = rng - pd.offsets.Hour(1)
+ result2 = DatetimeIndex(s - np.timedelta64(100000000))
+ result3 = rng - np.timedelta64(100000000)
+ result4 = DatetimeIndex(s - pd.offsets.Hour(1))
+ self.assert_(result1.equals(result4))
+ self.assert_(result2.equals(result3))
+
+ def test_shift(self):
+ ts = Series(np.random.randn(5),
+ index=date_range('1/1/2000', periods=5, freq='H'))
+
+ result = ts.shift(1, freq='5T')
+ exp_index = ts.index.shift(1, freq='5T')
+ self.assert_(result.index.equals(exp_index))
+
+ # GH #1063, multiple of same base
+ result = ts.shift(1, freq='4H')
+ exp_index = ts.index + datetools.Hour(4)
+ self.assert_(result.index.equals(exp_index))
+
+ idx = DatetimeIndex(['2000-01-01', '2000-01-02', '2000-01-04'])
+ self.assertRaises(ValueError, idx.shift, 1)
+
+ def test_setops_preserve_freq(self):
+ rng = date_range('1/1/2000', '1/1/2002')
+
+ result = rng[:50].union(rng[50:100])
+ self.assert_(result.freq == rng.freq)
+
+ result = rng[:50].union(rng[30:100])
+ self.assert_(result.freq == rng.freq)
+
+ result = rng[:50].union(rng[60:100])
+ self.assert_(result.freq is None)
+
+ result = rng[:50].intersection(rng[25:75])
+ self.assert_(result.freqstr == 'D')
+
+ nofreq = DatetimeIndex(list(rng[25:75]))
+ result = rng[:50].union(nofreq)
+ self.assert_(result.freq == rng.freq)
+
+ result = rng[:50].intersection(nofreq)
+ self.assert_(result.freq == rng.freq)
+
+ def test_min_max(self):
+ rng = date_range('1/1/2000', '12/31/2000')
+ rng2 = rng.take(np.random.permutation(len(rng)))
+
+ the_min = rng2.min()
+ the_max = rng2.max()
+ tm.assert_isinstance(the_min, Timestamp)
+ tm.assert_isinstance(the_max, Timestamp)
+ self.assertEqual(the_min, rng[0])
+ self.assertEqual(the_max, rng[-1])
+
+ self.assertEqual(rng.min(), rng[0])
+ self.assertEqual(rng.max(), rng[-1])
+
+ def test_min_max_series(self):
+ rng = date_range('1/1/2000', periods=10, freq='4h')
+ lvls = ['A', 'A', 'A', 'B', 'B', 'B', 'C', 'C', 'C', 'C']
+ df = DataFrame({'TS': rng, 'V': np.random.randn(len(rng)),
+ 'L': lvls})
+
+ result = df.TS.max()
+ exp = Timestamp(df.TS.iget(-1))
+ self.assertTrue(isinstance(result, Timestamp))
+ self.assertEqual(result, exp)
+
+ result = df.TS.min()
+ exp = Timestamp(df.TS.iget(0))
+ self.assertTrue(isinstance(result, Timestamp))
+ self.assertEqual(result, exp)
+
+ def test_from_M8_structured(self):
+ dates = [(datetime(2012, 9, 9, 0, 0),
+ datetime(2012, 9, 8, 15, 10))]
+ arr = np.array(dates,
+ dtype=[('Date', 'M8[us]'), ('Forecasting', 'M8[us]')])
+ df = DataFrame(arr)
+
+ self.assertEqual(df['Date'][0], dates[0][0])
+ self.assertEqual(df['Forecasting'][0], dates[0][1])
+
+ s = Series(arr['Date'])
+ self.assertTrue(s[0], Timestamp)
+ self.assertEqual(s[0], dates[0][0])
+
+ s = Series.from_array(arr['Date'], Index([0]))
+ self.assertEqual(s[0], dates[0][0])
+
+ def test_get_level_values_box(self):
+ from pandas import MultiIndex
+
+ dates = date_range('1/1/2000', periods=4)
+ levels = [dates, [0, 1]]
+ labels = [[0, 0, 1, 1, 2, 2, 3, 3],
+ [0, 1, 0, 1, 0, 1, 0, 1]]
+
+ index = MultiIndex(levels=levels, labels=labels)
+
+ self.assertTrue(isinstance(index.get_level_values(0)[0], Timestamp))
+
+ def test_frame_apply_dont_convert_datetime64(self):
+ from pandas.tseries.offsets import BDay
+ df = DataFrame({'x1': [datetime(1996, 1, 1)]})
+
+ df = df.applymap(lambda x: x + BDay())
+ df = df.applymap(lambda x: x + BDay())
+
+ self.assertTrue(df.x1.dtype == 'M8[ns]')
if __name__ == '__main__':
nose.runmodule(argv=[__file__, '-vvs', '-x', '--pdb', '--pdb-failure'],
diff --git a/pandas/tseries/tests/test_timeseries_legacy.py b/pandas/tseries/tests/test_timeseries_legacy.py
new file mode 100644
index 0000000000000..babf60758f751
--- /dev/null
+++ b/pandas/tseries/tests/test_timeseries_legacy.py
@@ -0,0 +1,300 @@
+# pylint: disable-msg=E1101,W0612
+from datetime import datetime, time, timedelta
+import sys
+import os
+import unittest
+
+import nose
+
+import numpy as np
+randn = np.random.randn
+
+from pandas import (Index, Series, TimeSeries, DataFrame,
+ isnull, date_range, Timestamp, DatetimeIndex,
+ Int64Index, to_datetime, bdate_range)
+
+from pandas.core.daterange import DateRange
+import pandas.core.datetools as datetools
+import pandas.tseries.offsets as offsets
+import pandas.tseries.frequencies as fmod
+import pandas as pd
+
+from pandas.util.testing import assert_series_equal, assert_almost_equal
+import pandas.util.testing as tm
+
+from pandas.tslib import NaT, iNaT
+import pandas.lib as lib
+import pandas.tslib as tslib
+
+import pandas.index as _index
+
+from pandas.compat import(
+ range, long, StringIO, lrange, lmap, map, zip, cPickle as pickle, product
+)
+from pandas import read_pickle
+import pandas.core.datetools as dt
+from numpy.random import rand
+from numpy.testing import assert_array_equal
+from pandas.util.testing import assert_frame_equal
+import pandas.compat as compat
+from pandas.core.datetools import BDay
+import pandas.core.common as com
+from pandas import concat
+
+from numpy.testing.decorators import slow
+
+
+def _skip_if_no_pytz():
+ try:
+ import pytz
+ except ImportError:
+ raise nose.SkipTest
+
+# infortunately, too much has changed to handle these legacy pickles
+# class TestLegacySupport(unittest.TestCase):
+class LegacySupport(object):
+
+ _multiprocess_can_split_ = True
+
+ @classmethod
+ def setUpClass(cls):
+ if compat.PY3:
+ raise nose.SkipTest
+
+ pth, _ = os.path.split(os.path.abspath(__file__))
+ filepath = os.path.join(pth, 'data', 'frame.pickle')
+
+ with open(filepath, 'rb') as f:
+ cls.frame = pickle.load(f)
+
+ filepath = os.path.join(pth, 'data', 'series.pickle')
+ with open(filepath, 'rb') as f:
+ cls.series = pickle.load(f)
+
+ def test_pass_offset_warn(self):
+ buf = StringIO()
+
+ sys.stderr = buf
+ DatetimeIndex(start='1/1/2000', periods=10, offset='H')
+ sys.stderr = sys.__stderr__
+
+ def test_unpickle_legacy_frame(self):
+ dtindex = DatetimeIndex(start='1/3/2005', end='1/14/2005',
+ freq=BDay(1))
+
+ unpickled = self.frame
+
+ self.assertEquals(type(unpickled.index), DatetimeIndex)
+ self.assertEquals(len(unpickled), 10)
+ self.assert_((unpickled.columns == Int64Index(np.arange(5))).all())
+ self.assert_((unpickled.index == dtindex).all())
+ self.assertEquals(unpickled.index.offset, BDay(1, normalize=True))
+
+ def test_unpickle_legacy_series(self):
+ from pandas.core.datetools import BDay
+
+ unpickled = self.series
+
+ dtindex = DatetimeIndex(start='1/3/2005', end='1/14/2005',
+ freq=BDay(1))
+
+ self.assertEquals(type(unpickled.index), DatetimeIndex)
+ self.assertEquals(len(unpickled), 10)
+ self.assert_((unpickled.index == dtindex).all())
+ self.assertEquals(unpickled.index.offset, BDay(1, normalize=True))
+
+ def test_unpickle_legacy_len0_daterange(self):
+ pth, _ = os.path.split(os.path.abspath(__file__))
+ filepath = os.path.join(pth, 'data', 'series_daterange0.pickle')
+
+ result = pd.read_pickle(filepath)
+
+ ex_index = DatetimeIndex([], freq='B')
+
+ self.assert_(result.index.equals(ex_index))
+ tm.assert_isinstance(result.index.freq, offsets.BDay)
+ self.assert_(len(result) == 0)
+
+ def test_arithmetic_interaction(self):
+ index = self.frame.index
+ obj_index = index.asobject
+
+ dseries = Series(rand(len(index)), index=index)
+ oseries = Series(dseries.values, index=obj_index)
+
+ result = dseries + oseries
+ expected = dseries * 2
+ tm.assert_isinstance(result.index, DatetimeIndex)
+ assert_series_equal(result, expected)
+
+ result = dseries + oseries[:5]
+ expected = dseries + dseries[:5]
+ tm.assert_isinstance(result.index, DatetimeIndex)
+ assert_series_equal(result, expected)
+
+ def test_join_interaction(self):
+ index = self.frame.index
+ obj_index = index.asobject
+
+ def _check_join(left, right, how='inner'):
+ ra, rb, rc = left.join(right, how=how, return_indexers=True)
+ ea, eb, ec = left.join(DatetimeIndex(right), how=how,
+ return_indexers=True)
+
+ tm.assert_isinstance(ra, DatetimeIndex)
+ self.assert_(ra.equals(ea))
+
+ assert_almost_equal(rb, eb)
+ assert_almost_equal(rc, ec)
+
+ _check_join(index[:15], obj_index[5:], how='inner')
+ _check_join(index[:15], obj_index[5:], how='outer')
+ _check_join(index[:15], obj_index[5:], how='right')
+ _check_join(index[:15], obj_index[5:], how='left')
+
+ def test_join_nonunique(self):
+ idx1 = to_datetime(['2012-11-06 16:00:11.477563',
+ '2012-11-06 16:00:11.477563'])
+ idx2 = to_datetime(['2012-11-06 15:11:09.006507',
+ '2012-11-06 15:11:09.006507'])
+ rs = idx1.join(idx2, how='outer')
+ self.assert_(rs.is_monotonic)
+
+ def test_unpickle_daterange(self):
+ pth, _ = os.path.split(os.path.abspath(__file__))
+ filepath = os.path.join(pth, 'data', 'daterange_073.pickle')
+
+ rng = read_pickle(filepath)
+ tm.assert_isinstance(rng[0], datetime)
+ tm.assert_isinstance(rng.offset, offsets.BDay)
+ self.assert_(rng.values.dtype == object)
+
+ def test_setops(self):
+ index = self.frame.index
+ obj_index = index.asobject
+
+ result = index[:5].union(obj_index[5:])
+ expected = index
+ tm.assert_isinstance(result, DatetimeIndex)
+ self.assert_(result.equals(expected))
+
+ result = index[:10].intersection(obj_index[5:])
+ expected = index[5:10]
+ tm.assert_isinstance(result, DatetimeIndex)
+ self.assert_(result.equals(expected))
+
+ result = index[:10] - obj_index[5:]
+ expected = index[:5]
+ tm.assert_isinstance(result, DatetimeIndex)
+ self.assert_(result.equals(expected))
+
+ def test_index_conversion(self):
+ index = self.frame.index
+ obj_index = index.asobject
+
+ conv = DatetimeIndex(obj_index)
+ self.assert_(conv.equals(index))
+
+ self.assertRaises(ValueError, DatetimeIndex, ['a', 'b', 'c', 'd'])
+
+ def test_tolist(self):
+ rng = date_range('1/1/2000', periods=10)
+
+ result = rng.tolist()
+ tm.assert_isinstance(result[0], Timestamp)
+
+ def test_object_convert_fail(self):
+ idx = DatetimeIndex([NaT])
+ self.assertRaises(ValueError, idx.astype, 'O')
+
+ def test_setops_conversion_fail(self):
+ index = self.frame.index
+
+ right = Index(['a', 'b', 'c', 'd'])
+
+ result = index.union(right)
+ expected = Index(np.concatenate([index.asobject, right]))
+ self.assert_(result.equals(expected))
+
+ result = index.intersection(right)
+ expected = Index([])
+ self.assert_(result.equals(expected))
+
+ def test_legacy_time_rules(self):
+ rules = [('WEEKDAY', 'B'),
+ ('EOM', 'BM'),
+ ('W@MON', 'W-MON'), ('W@TUE', 'W-TUE'), ('W@WED', 'W-WED'),
+ ('W@THU', 'W-THU'), ('W@FRI', 'W-FRI'),
+ ('Q@JAN', 'BQ-JAN'), ('Q@FEB', 'BQ-FEB'), ('Q@MAR', 'BQ-MAR'),
+ ('A@JAN', 'BA-JAN'), ('A@FEB', 'BA-FEB'), ('A@MAR', 'BA-MAR'),
+ ('A@APR', 'BA-APR'), ('A@MAY', 'BA-MAY'), ('A@JUN', 'BA-JUN'),
+ ('A@JUL', 'BA-JUL'), ('A@AUG', 'BA-AUG'), ('A@SEP', 'BA-SEP'),
+ ('A@OCT', 'BA-OCT'), ('A@NOV', 'BA-NOV'), ('A@DEC', 'BA-DEC'),
+ ('WOM@1FRI', 'WOM-1FRI'), ('WOM@2FRI', 'WOM-2FRI'),
+ ('WOM@3FRI', 'WOM-3FRI'), ('WOM@4FRI', 'WOM-4FRI')]
+
+ start, end = '1/1/2000', '1/1/2010'
+
+ for old_freq, new_freq in rules:
+ old_rng = date_range(start, end, freq=old_freq)
+ new_rng = date_range(start, end, freq=new_freq)
+ self.assert_(old_rng.equals(new_rng))
+
+ # test get_legacy_offset_name
+ offset = datetools.get_offset(new_freq)
+ old_name = datetools.get_legacy_offset_name(offset)
+ self.assertEquals(old_name, old_freq)
+
+ def test_ms_vs_MS(self):
+ left = datetools.get_offset('ms')
+ right = datetools.get_offset('MS')
+ self.assert_(left == datetools.Milli())
+ self.assert_(right == datetools.MonthBegin())
+
+ def test_rule_aliases(self):
+ rule = datetools.to_offset('10us')
+ self.assert_(rule == datetools.Micro(10))
+
+class TestLegacyCompat(unittest.TestCase):
+
+ def setUp(self):
+ # suppress deprecation warnings
+ sys.stderr = StringIO()
+
+ def test_inferTimeRule(self):
+ from pandas.tseries.frequencies import inferTimeRule
+
+ index1 = [datetime(2010, 1, 29, 0, 0),
+ datetime(2010, 2, 26, 0, 0),
+ datetime(2010, 3, 31, 0, 0)]
+
+ index2 = [datetime(2010, 3, 26, 0, 0),
+ datetime(2010, 3, 29, 0, 0),
+ datetime(2010, 3, 30, 0, 0)]
+
+ index3 = [datetime(2010, 3, 26, 0, 0),
+ datetime(2010, 3, 27, 0, 0),
+ datetime(2010, 3, 29, 0, 0)]
+
+ # LEGACY
+ assert inferTimeRule(index1) == 'EOM'
+ assert inferTimeRule(index2) == 'WEEKDAY'
+
+ self.assertRaises(Exception, inferTimeRule, index1[:2])
+ self.assertRaises(Exception, inferTimeRule, index3)
+
+ def test_time_rule(self):
+ result = DateRange('1/1/2000', '1/30/2000', time_rule='WEEKDAY')
+ result2 = DateRange('1/1/2000', '1/30/2000', timeRule='WEEKDAY')
+ expected = date_range('1/1/2000', '1/30/2000', freq='B')
+
+ self.assert_(result.equals(expected))
+ self.assert_(result2.equals(expected))
+
+ def tearDown(self):
+ sys.stderr = sys.__stderr__
+
+
+if __name__ == '__main__':
+ nose.runmodule(argv=[__file__, '-vvs', '-x', '--pdb', '--pdb-failure'],
+ exit=False)
| closes #4758
```
In [2]: df = DataFrame({'ACCOUNT':["ACCT1", "ACCT1", "ACCT1", "ACCT2"],
...: 'TICKER':["ABC", "MNP", "XYZ", "XYZ"],
...: 'val':[1,2,3,4]},
...: index=date_range("2013-06-19 09:30:00", periods=4, freq='5T'))
In [3]: df_multi = df.set_index(['ACCOUNT', 'TICKER'], append=True)
In [4]: df_multi.loc[(pd.Timestamp('2013-06-19 09:30:00', tz=None), 'ACCT1', 'ABC')]
Out[4]:
val 1
Name: (2013-06-19 09:30:00, ACCT1, ABC), dtype: int64
In [5]: df_multi.loc[('2013-06-19 09:30:00', 'ACCT1', 'ABC')]
Out[5]:
val 1
Name: (2013-06-19 09:30:00, ACCT1, ABC), dtype: int64
```
Partial string selection on a single level of a MultiIndex is quite difficult to support, so it raises a ``KeyError`` for now
```
In [6]: df_multi.loc[('2013-06-19', 'ACCT1', 'ABC')]
KeyError: 'the label [ACCT1] is not in the [columns]'
```
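For reference, the successful selections above can be reproduced with a short self-contained script. This is a sketch, assuming pandas is installed and behaves as described in this PR (full-resolution timestamp strings and explicit ``Timestamp`` objects both match a datetime level inside a MultiIndex tuple):

```python
import pandas as pd

# Build the same frame as in the examples above: a DatetimeIndex with
# two extra index levels appended.
df = pd.DataFrame(
    {"ACCOUNT": ["ACCT1", "ACCT1", "ACCT1", "ACCT2"],
     "TICKER": ["ABC", "MNP", "XYZ", "XYZ"],
     "val": [1, 2, 3, 4]},
    index=pd.date_range("2013-06-19 09:30:00", periods=4, freq="5min"),
)
df_multi = df.set_index(["ACCOUNT", "TICKER"], append=True)

# A full-resolution timestamp string and an explicit Timestamp
# select the same row, as shown in Out[4]/Out[5] above.
res_str = df_multi.loc[("2013-06-19 09:30:00", "ACCT1", "ABC")]
res_ts = df_multi.loc[(pd.Timestamp("2013-06-19 09:30:00"), "ACCT1", "ABC")]
```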
| https://api.github.com/repos/pandas-dev/pandas/pulls/4761 | 2013-09-06T03:08:12Z | 2013-09-06T08:16:56Z | 2013-09-06T08:16:56Z | 2014-07-02T05:01:36Z |
[ArrowStringArray] PERF: small perf gain for object fallback | diff --git a/pandas/core/arrays/string_arrow.py b/pandas/core/arrays/string_arrow.py
index a1278a129c40f..1e699e3a769b2 100644
--- a/pandas/core/arrays/string_arrow.py
+++ b/pandas/core/arrays/string_arrow.py
@@ -792,7 +792,8 @@ def _str_map(self, f, na_value=None, dtype: Dtype | None = None):
result = lib.map_infer_mask(
arr, f, mask.view("uint8"), convert=False, na_value=na_value
)
- return self._from_sequence(result)
+ result = pa.array(result, mask=mask, type=pa.string(), from_pandas=True)
+ return type(self)(result)
else:
# This is when the result type is object. We reach this when
# -> We know the result type is truly object (e.g. .encode returns bytes
| ```
before after ratio
[a43c42c3] [a5dd50ea]
<master> <small-perf>
- 26.8±0.05ms 26.1±0.1ms 0.97 strings.Methods.time_get('arrow_string')
- 21.2±0.1ms 20.3±0.1ms 0.96 strings.Methods.time_replace('arrow_string')
- 18.1±0.1ms 17.3±0.1ms 0.96 strings.Methods.time_lstrip('arrow_string')
- 18.5±0.1ms 17.7±0.2ms 0.95 strings.Methods.time_strip('arrow_string')
- 18.3±0.1ms 17.4±0.09ms 0.95 strings.Methods.time_rstrip('arrow_string')
- 28.0±0.3ms 26.6±0.07ms 0.95 strings.Methods.time_center('arrow_string')
- 21.2±0.2ms 20.1±0.1ms 0.95 strings.Methods.time_normalize('arrow_string')
- 28.1±0.2ms 26.5±0.2ms 0.94 strings.Methods.time_pad('arrow_string')
- 13.6±0.1ms 11.5±0.1ms 0.85 strings.Methods.time_isdigit('str')
- 13.6±0.2ms 11.3±0.04ms 0.83 strings.Methods.time_isnumeric('str')
SOME BENCHMARKS HAVE CHANGED SIGNIFICANTLY.
PERFORMANCE INCREASED.
``` | https://api.github.com/repos/pandas-dev/pandas/pulls/41338 | 2021-05-05T17:56:45Z | 2021-05-06T01:50:39Z | 2021-05-06T01:50:39Z | 2021-05-06T08:16:28Z |
CI: Upgrade "actions/(checkout|cache)" to version 2 | diff --git a/.github/workflows/ci.yml b/.github/workflows/ci.yml
index ca0c75f9de94f..a5a802c678e20 100644
--- a/.github/workflows/ci.yml
+++ b/.github/workflows/ci.yml
@@ -22,7 +22,9 @@ jobs:
steps:
- name: Checkout
- uses: actions/checkout@v1
+ uses: actions/checkout@v2
+ with:
+ fetch-depth: 0
- name: Looking for unwanted patterns
run: ci/code_checks.sh patterns
@@ -94,7 +96,9 @@ jobs:
steps:
- name: Checkout
- uses: actions/checkout@v1
+ uses: actions/checkout@v2
+ with:
+ fetch-depth: 0
- name: Set up pandas
uses: ./.github/actions/setup
@@ -147,7 +151,9 @@ jobs:
steps:
- name: Checkout
- uses: actions/checkout@v1
+ uses: actions/checkout@v2
+ with:
+ fetch-depth: 0
- name: Set up pandas
uses: ./.github/actions/setup
diff --git a/.github/workflows/database.yml b/.github/workflows/database.yml
index a5aef7825c770..e119a26550e1c 100644
--- a/.github/workflows/database.yml
+++ b/.github/workflows/database.yml
@@ -56,10 +56,12 @@ jobs:
steps:
- name: Checkout
- uses: actions/checkout@v1
+ uses: actions/checkout@v2
+ with:
+ fetch-depth: 0
- name: Cache conda
- uses: actions/cache@v1
+ uses: actions/cache@v2
env:
CACHE_NUMBER: 0
with:
diff --git a/.github/workflows/posix.yml b/.github/workflows/posix.yml
index 34e6c2c9d94ce..3a4d3c106f851 100644
--- a/.github/workflows/posix.yml
+++ b/.github/workflows/posix.yml
@@ -44,10 +44,12 @@ jobs:
steps:
- name: Checkout
- uses: actions/checkout@v1
+ uses: actions/checkout@v2
+ with:
+ fetch-depth: 0
- name: Cache conda
- uses: actions/cache@v1
+ uses: actions/cache@v2
env:
CACHE_NUMBER: 0
with:
| According to the change log of both "checkout" and "cache" this should increase performance.
---
Fell free to close this PR if this was already discussed, I haven't saw any discussion about this topic. | https://api.github.com/repos/pandas-dev/pandas/pulls/41336 | 2021-05-05T17:21:32Z | 2021-05-24T06:02:23Z | 2021-05-24T06:02:23Z | 2021-08-26T13:52:29Z |
ENH: Add dropna argument to DataFrame.value_counts() | diff --git a/doc/source/whatsnew/v1.3.0.rst b/doc/source/whatsnew/v1.3.0.rst
index 320912ec38890..cddaaa295af01 100644
--- a/doc/source/whatsnew/v1.3.0.rst
+++ b/doc/source/whatsnew/v1.3.0.rst
@@ -224,6 +224,7 @@ Other enhancements
- :meth:`.GroupBy.any` and :meth:`.GroupBy.all` return a ``BooleanDtype`` for columns with nullable data types (:issue:`33449`)
- Constructing a :class:`DataFrame` or :class:`Series` with the ``data`` argument being a Python iterable that is *not* a NumPy ``ndarray`` consisting of NumPy scalars will now result in a dtype with a precision the maximum of the NumPy scalars; this was already the case when ``data`` is a NumPy ``ndarray`` (:issue:`40908`)
- Add keyword ``sort`` to :func:`pivot_table` to allow non-sorting of the result (:issue:`39143`)
+- Add keyword ``dropna`` to :meth:`DataFrame.value_counts` to allow counting rows that include ``NA`` values (:issue:`41325`)
-
.. ---------------------------------------------------------------------------
diff --git a/pandas/core/frame.py b/pandas/core/frame.py
index 50837e1b3ed50..f3902b0a9d288 100644
--- a/pandas/core/frame.py
+++ b/pandas/core/frame.py
@@ -6380,6 +6380,7 @@ def value_counts(
normalize: bool = False,
sort: bool = True,
ascending: bool = False,
+ dropna: bool = True,
):
"""
Return a Series containing counts of unique rows in the DataFrame.
@@ -6396,6 +6397,10 @@ def value_counts(
Sort by frequencies.
ascending : bool, default False
Sort in ascending order.
+ dropna : bool, default True
+ Don't include counts of rows that contain NA values.
+
+ .. versionadded:: 1.3.0
Returns
-------
@@ -6451,11 +6456,36 @@ def value_counts(
2 2 0.25
6 0 0.25
dtype: float64
+
+ With `dropna` set to `False` we can also count rows with NA values.
+
+ >>> df = pd.DataFrame({'first_name': ['John', 'Anne', 'John', 'Beth'],
+ ... 'middle_name': ['Smith', pd.NA, pd.NA, 'Louise']})
+ >>> df
+ first_name middle_name
+ 0 John Smith
+ 1 Anne <NA>
+ 2 John <NA>
+ 3 Beth Louise
+
+ >>> df.value_counts()
+ first_name middle_name
+ Beth Louise 1
+ John Smith 1
+ dtype: int64
+
+ >>> df.value_counts(dropna=False)
+ first_name middle_name
+ Anne NaN 1
+ Beth Louise 1
+ John Smith 1
+ NaN 1
+ dtype: int64
"""
if subset is None:
subset = self.columns.tolist()
- counts = self.groupby(subset).grouper.size()
+ counts = self.groupby(subset, dropna=dropna).grouper.size()
if sort:
counts = counts.sort_values(ascending=ascending)
diff --git a/pandas/tests/frame/methods/test_value_counts.py b/pandas/tests/frame/methods/test_value_counts.py
index 23f9ebdb4479d..6e8528845ea6b 100644
--- a/pandas/tests/frame/methods/test_value_counts.py
+++ b/pandas/tests/frame/methods/test_value_counts.py
@@ -100,3 +100,47 @@ def test_data_frame_value_counts_empty_normalize():
expected = pd.Series([], dtype=np.float64)
tm.assert_series_equal(result, expected)
+
+
+def test_data_frame_value_counts_dropna_true(nulls_fixture):
+ # GH 41334
+ df = pd.DataFrame(
+ {
+ "first_name": ["John", "Anne", "John", "Beth"],
+ "middle_name": ["Smith", nulls_fixture, nulls_fixture, "Louise"],
+ },
+ )
+ result = df.value_counts()
+ expected = pd.Series(
+ data=[1, 1],
+ index=pd.MultiIndex.from_arrays(
+ [("Beth", "John"), ("Louise", "Smith")], names=["first_name", "middle_name"]
+ ),
+ )
+
+ tm.assert_series_equal(result, expected)
+
+
+def test_data_frame_value_counts_dropna_false(nulls_fixture):
+ # GH 41334
+ df = pd.DataFrame(
+ {
+ "first_name": ["John", "Anne", "John", "Beth"],
+ "middle_name": ["Smith", nulls_fixture, nulls_fixture, "Louise"],
+ },
+ )
+
+ result = df.value_counts(dropna=False)
+ expected = pd.Series(
+ data=[1, 1, 1, 1],
+ index=pd.MultiIndex(
+ levels=[
+ pd.Index(["Anne", "Beth", "John"]),
+ pd.Index(["Louise", "Smith", nulls_fixture]),
+ ],
+ codes=[[0, 1, 2, 2], [2, 0, 1, 2]],
+ names=["first_name", "middle_name"],
+ ),
+ )
+
+ tm.assert_series_equal(result, expected)
| - [x] closes #41325
- [x] tests added / passed
- [x] Ensure all linting tests pass, see [here](https://pandas.pydata.org/pandas-docs/dev/development/contributing.html#code-standards) for how to run them
- [x] whatsnew entry
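The new keyword is just forwarded to ``groupby``, as the one-line change to ``DataFrame.value_counts`` in the diff shows (``self.groupby(subset, dropna=dropna).grouper.size()``). A minimal sketch of the equivalent counts via a plain ``groupby(...).size()`` call:

```python
import pandas as pd

# Same data as the docstring example added in this PR
df = pd.DataFrame(
    {"first_name": ["John", "Anne", "John", "Beth"],
     "middle_name": ["Smith", None, None, "Louise"]}
)

# dropna=False keeps the rows whose group key contains NA,
# so all four distinct (first_name, middle_name) pairs are counted
counts = df.groupby(["first_name", "middle_name"], dropna=False).size()
print(len(counts))  # 4
```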
| https://api.github.com/repos/pandas-dev/pandas/pulls/41334 | 2021-05-05T16:18:56Z | 2021-05-10T14:39:30Z | 2021-05-10T14:39:30Z | 2022-09-08T08:15:31Z |
[ArrowStringArray] PERF: use pa.compute.match_substring_regex for str.fullmatch if available | diff --git a/asv_bench/benchmarks/strings.py b/asv_bench/benchmarks/strings.py
index 0f68d1043b49d..2560d6726249e 100644
--- a/asv_bench/benchmarks/strings.py
+++ b/asv_bench/benchmarks/strings.py
@@ -83,6 +83,9 @@ def time_find(self, dtype):
def time_rfind(self, dtype):
self.s.str.rfind("[A-Z]+")
+ def time_fullmatch(self, dtype):
+ self.s.str.fullmatch("A")
+
def time_get(self, dtype):
self.s.str.get(0)
diff --git a/doc/redirects.csv b/doc/redirects.csv
index de69d0168835d..9b8a5a73dedff 100644
--- a/doc/redirects.csv
+++ b/doc/redirects.csv
@@ -1197,6 +1197,7 @@ generated/pandas.Series.str.extractall,../reference/api/pandas.Series.str.extrac
generated/pandas.Series.str.extract,../reference/api/pandas.Series.str.extract
generated/pandas.Series.str.findall,../reference/api/pandas.Series.str.findall
generated/pandas.Series.str.find,../reference/api/pandas.Series.str.find
+generated/pandas.Series.str.fullmatch,../reference/api/pandas.Series.str.fullmatch
generated/pandas.Series.str.get_dummies,../reference/api/pandas.Series.str.get_dummies
generated/pandas.Series.str.get,../reference/api/pandas.Series.str.get
generated/pandas.Series.str,../reference/api/pandas.Series.str
diff --git a/doc/source/reference/series.rst b/doc/source/reference/series.rst
index cc2937695e80f..3ff3b2bb53fda 100644
--- a/doc/source/reference/series.rst
+++ b/doc/source/reference/series.rst
@@ -415,6 +415,7 @@ strings and apply several methods to it. These can be accessed like
Series.str.extractall
Series.str.find
Series.str.findall
+ Series.str.fullmatch
Series.str.get
Series.str.index
Series.str.join
diff --git a/pandas/core/arrays/string_arrow.py b/pandas/core/arrays/string_arrow.py
index 252bc215869ac..d5ee28eb7017e 100644
--- a/pandas/core/arrays/string_arrow.py
+++ b/pandas/core/arrays/string_arrow.py
@@ -844,6 +844,14 @@ def _str_match(
pat = "^" + pat
return self._str_contains(pat, case, flags, na, regex=True)
+ def _str_fullmatch(self, pat, case: bool = True, flags: int = 0, na: Scalar = None):
+ if pa_version_under4p0:
+ return super()._str_fullmatch(pat, case, flags, na)
+
+ if not pat.endswith("$") or pat.endswith("//$"):
+ pat = pat + "$"
+ return self._str_match(pat, case, flags, na)
+
def _str_isalnum(self):
result = pc.utf8_is_alnum(self._data)
return BooleanDtype().__from_arrow__(result)
diff --git a/pandas/core/strings/accessor.py b/pandas/core/strings/accessor.py
index 5606380908f38..696b06f174e28 100644
--- a/pandas/core/strings/accessor.py
+++ b/pandas/core/strings/accessor.py
@@ -1170,7 +1170,7 @@ def match(self, pat, case=True, flags=0, na=None):
Returns
-------
- Series/array of boolean values
+ Series/Index/array of boolean values
See Also
--------
@@ -1197,14 +1197,14 @@ def fullmatch(self, pat, case=True, flags=0, na=None):
If True, case sensitive.
flags : int, default 0 (no flags)
Regex module flags, e.g. re.IGNORECASE.
- na : scalar, optional.
+ na : scalar, optional
Fill value for missing values. The default depends on dtype of the
array. For object-dtype, ``numpy.nan`` is used. For ``StringDtype``,
``pandas.NA`` is used.
Returns
-------
- Series/array of boolean values
+ Series/Index/array of boolean values
See Also
--------
diff --git a/pandas/tests/strings/test_find_replace.py b/pandas/tests/strings/test_find_replace.py
index 843b0ba55e691..0815d23f2b493 100644
--- a/pandas/tests/strings/test_find_replace.py
+++ b/pandas/tests/strings/test_find_replace.py
@@ -494,10 +494,32 @@ def test_fullmatch(any_string_dtype):
expected = Series([True, False, np.nan, False], dtype=expected_dtype)
tm.assert_series_equal(result, expected)
+
+def test_fullmatch_na_kwarg(any_string_dtype):
+ ser = Series(
+ ["fooBAD__barBAD", "BAD_BADleroybrown", np.nan, "foo"], dtype=any_string_dtype
+ )
+ result = ser.str.fullmatch(".*BAD[_]+.*BAD", na=False)
+ expected_dtype = np.bool_ if any_string_dtype == "object" else "boolean"
+ expected = Series([True, False, False, False], dtype=expected_dtype)
+ tm.assert_series_equal(result, expected)
+
+
+def test_fullmatch_case_kwarg(any_string_dtype):
ser = Series(["ab", "AB", "abc", "ABC"], dtype=any_string_dtype)
- result = ser.str.fullmatch("ab", case=False)
expected_dtype = np.bool_ if any_string_dtype == "object" else "boolean"
+
+ expected = Series([True, False, False, False], dtype=expected_dtype)
+
+ result = ser.str.fullmatch("ab", case=True)
+ tm.assert_series_equal(result, expected)
+
expected = Series([True, True, False, False], dtype=expected_dtype)
+
+ result = ser.str.fullmatch("ab", case=False)
+ tm.assert_series_equal(result, expected)
+
+ result = ser.str.fullmatch("ab", flags=re.IGNORECASE)
tm.assert_series_equal(result, expected)
| https://api.github.com/repos/pandas-dev/pandas/pulls/41332 | 2021-05-05T15:43:30Z | 2021-05-17T20:52:07Z | 2021-05-17T20:52:07Z | 2021-05-18T08:41:20Z | |
REF: dont special-case ngroups==0 | diff --git a/pandas/_libs/reduction.pyx b/pandas/_libs/reduction.pyx
index 6b5cedf8a5243..c28db9b669a4b 100644
--- a/pandas/_libs/reduction.pyx
+++ b/pandas/_libs/reduction.pyx
@@ -105,10 +105,11 @@ cdef class SeriesBinGrouper(_BaseGrouper):
Py_ssize_t nresults, ngroups
cdef public:
+ ndarray bins # ndarray[int64_t]
ndarray arr, index, dummy_arr, dummy_index
- object values, f, bins, typ, ityp, name, idtype
+ object values, f, typ, ityp, name, idtype
- def __init__(self, object series, object f, object bins):
+ def __init__(self, object series, object f, ndarray[int64_t] bins):
assert len(bins) > 0 # otherwise we get IndexError in get_result
@@ -133,6 +134,8 @@ cdef class SeriesBinGrouper(_BaseGrouper):
if len(bins) > 0 and bins[-1] == len(series):
self.ngroups = len(bins)
else:
+ # TODO: not reached except in test_series_bin_grouper directly
+ # constructing SeriesBinGrouper; can we rule this case out?
self.ngroups = len(bins) + 1
def get_result(self):
diff --git a/pandas/core/groupby/groupby.py b/pandas/core/groupby/groupby.py
index c5ef18c51a533..7a5054fa2a1a5 100644
--- a/pandas/core/groupby/groupby.py
+++ b/pandas/core/groupby/groupby.py
@@ -1331,18 +1331,7 @@ def _agg_py_fallback(
# reductions; see GH#28949
ser = df.iloc[:, 0]
- # Create SeriesGroupBy with observed=True so that it does
- # not try to add missing categories if grouping over multiple
- # Categoricals. This will done by later self._reindex_output()
- # Doing it here creates an error. See GH#34951
- sgb = get_groupby(ser, self.grouper, observed=True)
- # For SeriesGroupBy we could just use self instead of sgb
-
- if self.ngroups > 0:
- res_values = self.grouper.agg_series(ser, alt)
- else:
- # equiv: res_values = self._python_agg_general(alt)
- res_values = sgb._python_apply_general(alt, ser)._values
+ res_values = self.grouper.agg_series(ser, alt)
if isinstance(values, Categorical):
# Because we only get here with known dtype-preserving
diff --git a/pandas/core/groupby/ops.py b/pandas/core/groupby/ops.py
index d1a46c1c36439..2f4b126c119ef 100644
--- a/pandas/core/groupby/ops.py
+++ b/pandas/core/groupby/ops.py
@@ -969,8 +969,8 @@ def _cython_operation(
@final
def agg_series(self, obj: Series, func: F) -> ArrayLike:
- # Caller is responsible for checking ngroups != 0
- assert self.ngroups != 0
+ # test_groupby_empty_with_category gets here with self.ngroups == 0
+ # and len(obj) > 0
cast_back = True
if len(obj) == 0:
@@ -1007,7 +1007,6 @@ def _aggregate_series_fast(self, obj: Series, func: F) -> np.ndarray:
# - obj.index is not a MultiIndex
# - obj is backed by an ndarray, not ExtensionArray
# - len(obj) > 0
- # - ngroups != 0
func = com.is_builtin_func(func)
ids, _, ngroups = self.group_info
diff --git a/pandas/tests/groupby/test_bin_groupby.py b/pandas/tests/groupby/test_bin_groupby.py
index aa126ae801f1e..92e5e709a9b2e 100644
--- a/pandas/tests/groupby/test_bin_groupby.py
+++ b/pandas/tests/groupby/test_bin_groupby.py
@@ -56,7 +56,7 @@ def test_series_grouper_requires_nonempty_raises():
def test_series_bin_grouper():
obj = Series(np.random.randn(10))
- bins = np.array([3, 6])
+ bins = np.array([3, 6], dtype=np.int64)
grouper = libreduction.SeriesBinGrouper(obj, np.mean, bins)
result, counts = grouper.get_result()
| - [ ] closes #xxxx
- [ ] tests added / passed
- [ ] Ensure all linting tests pass, see [here](https://pandas.pydata.org/pandas-docs/dev/development/contributing.html#code-standards) for how to run them
- [ ] whatsnew entry
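A sketch of the case the new comment in ``agg_series`` describes (``self.ngroups == 0`` with ``len(obj) > 0``, reached via ``test_groupby_empty_with_category``): every group key is NA and, with ``observed=True``, no category survives as a group. The exact construction below is an illustrative assumption, not taken from the test itself:

```python
import pandas as pd

# Non-empty values, but all group keys are NA -> zero groups
df = pd.DataFrame(
    {"A": pd.Categorical([None, None], categories=["a"]), "B": [1.0, 2.0]}
)
out = df.groupby("A", observed=True)["B"].sum()
print(len(out))  # 0
```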
| https://api.github.com/repos/pandas-dev/pandas/pulls/41331 | 2021-05-05T15:42:27Z | 2021-05-06T23:22:16Z | 2021-05-06T23:22:16Z | 2021-05-06T23:23:31Z |
CLN: remove unnecessary return from agg_series | diff --git a/pandas/core/groupby/groupby.py b/pandas/core/groupby/groupby.py
index b7254ffecb2bc..c5ef18c51a533 100644
--- a/pandas/core/groupby/groupby.py
+++ b/pandas/core/groupby/groupby.py
@@ -1269,7 +1269,7 @@ def _python_agg_general(self, func, *args, **kwargs):
try:
# if this function is invalid for this dtype, we will ignore it.
- result, counts = self.grouper.agg_series(obj, f)
+ result = self.grouper.agg_series(obj, f)
except TypeError:
continue
@@ -1339,7 +1339,7 @@ def _agg_py_fallback(
# For SeriesGroupBy we could just use self instead of sgb
if self.ngroups > 0:
- res_values, _ = self.grouper.agg_series(ser, alt)
+ res_values = self.grouper.agg_series(ser, alt)
else:
# equiv: res_values = self._python_agg_general(alt)
res_values = sgb._python_apply_general(alt, ser)._values
diff --git a/pandas/core/groupby/ops.py b/pandas/core/groupby/ops.py
index 60d79718cd85f..a6f2a537375a4 100644
--- a/pandas/core/groupby/ops.py
+++ b/pandas/core/groupby/ops.py
@@ -969,28 +969,28 @@ def _cython_operation(
)
@final
- def agg_series(self, obj: Series, func: F) -> tuple[ArrayLike, np.ndarray]:
+ def agg_series(self, obj: Series, func: F) -> ArrayLike:
# Caller is responsible for checking ngroups != 0
assert self.ngroups != 0
cast_back = True
if len(obj) == 0:
# SeriesGrouper would raise if we were to call _aggregate_series_fast
- result, counts = self._aggregate_series_pure_python(obj, func)
+ result = self._aggregate_series_pure_python(obj, func)
elif is_extension_array_dtype(obj.dtype):
# _aggregate_series_fast would raise TypeError when
# calling libreduction.Slider
# In the datetime64tz case it would incorrectly cast to tz-naive
# TODO: can we get a performant workaround for EAs backed by ndarray?
- result, counts = self._aggregate_series_pure_python(obj, func)
+ result = self._aggregate_series_pure_python(obj, func)
elif obj.index._has_complex_internals:
# Preempt TypeError in _aggregate_series_fast
- result, counts = self._aggregate_series_pure_python(obj, func)
+ result = self._aggregate_series_pure_python(obj, func)
else:
- result, counts = self._aggregate_series_fast(obj, func)
+ result = self._aggregate_series_fast(obj, func)
cast_back = False
npvalues = lib.maybe_convert_objects(result, try_float=False)
@@ -999,11 +999,11 @@ def agg_series(self, obj: Series, func: F) -> tuple[ArrayLike, np.ndarray]:
out = maybe_cast_pointwise_result(npvalues, obj.dtype, numeric_only=True)
else:
out = npvalues
- return out, counts
+ return out
+
+ def _aggregate_series_fast(self, obj: Series, func: F) -> np.ndarray:
+ # -> np.ndarray[object]
- def _aggregate_series_fast(
- self, obj: Series, func: F
- ) -> tuple[ArrayLike, np.ndarray]:
# At this point we have already checked that
# - obj.index is not a MultiIndex
# - obj is backed by an ndarray, not ExtensionArray
@@ -1018,11 +1018,12 @@ def _aggregate_series_fast(
obj = obj.take(indexer)
ids = ids.take(indexer)
sgrouper = libreduction.SeriesGrouper(obj, func, ids, ngroups)
- result, counts = sgrouper.get_result()
- return result, counts
+ result, _ = sgrouper.get_result()
+ return result
@final
- def _aggregate_series_pure_python(self, obj: Series, func: F):
+ def _aggregate_series_pure_python(self, obj: Series, func: F) -> np.ndarray:
+ # -> np.ndarray[object]
ids, _, ngroups = self.group_info
counts = np.zeros(ngroups, dtype=int)
@@ -1047,7 +1048,7 @@ def _aggregate_series_pure_python(self, obj: Series, func: F):
counts[i] = group.shape[0]
result[i] = res
- return result, counts
+ return result
class BinGrouper(BaseGrouper):
@@ -1205,16 +1206,17 @@ def groupings(self) -> list[grouper.Grouping]:
ping = grouper.Grouping(lev, lev, in_axis=False, level=None, name=lev.name)
return [ping]
- def _aggregate_series_fast(
- self, obj: Series, func: F
- ) -> tuple[ArrayLike, np.ndarray]:
+ def _aggregate_series_fast(self, obj: Series, func: F) -> np.ndarray:
+ # -> np.ndarray[object]
+
# At this point we have already checked that
# - obj.index is not a MultiIndex
# - obj is backed by an ndarray, not ExtensionArray
# - ngroups != 0
# - len(self.bins) > 0
sbg = libreduction.SeriesBinGrouper(obj, func, self.bins)
- return sbg.get_result()
+ result, _ = sbg.get_result()
+ return result
def _is_indexed_like(obj, axes, axis: int) -> bool:
| - [ ] closes #xxxx
- [ ] tests added / passed
- [ ] Ensure all linting tests pass, see [here](https://pandas.pydata.org/pandas-docs/dev/development/contributing.html#code-standards) for how to run them
- [ ] whatsnew entry
| https://api.github.com/repos/pandas-dev/pandas/pulls/41330 | 2021-05-05T15:36:40Z | 2021-05-06T01:40:08Z | 2021-05-06T01:40:08Z | 2021-05-06T02:01:42Z |
Revert "CI: pin py310-dev version to alpha 7" | diff --git a/.github/workflows/python-dev.yml b/.github/workflows/python-dev.yml
index 221501ae028f3..2643dc5ec656e 100644
--- a/.github/workflows/python-dev.yml
+++ b/.github/workflows/python-dev.yml
@@ -22,7 +22,7 @@ jobs:
- name: Set up Python Dev Version
uses: actions/setup-python@v2
with:
- python-version: '3.10.0-alpha.7'
+ python-version: '3.10-dev'
- name: Install dependencies
run: |
| Reverts pandas-dev/pandas#41315, closes #41313 | https://api.github.com/repos/pandas-dev/pandas/pulls/41328 | 2021-05-05T12:01:54Z | 2021-05-05T12:58:42Z | 2021-05-05T12:58:42Z | 2021-05-05T12:58:45Z |
[ArrowStringArray] CLN: remove hasattr checks | diff --git a/pandas/compat/__init__.py b/pandas/compat/__init__.py
index af2dfe796f82d..8d64bf8852946 100644
--- a/pandas/compat/__init__.py
+++ b/pandas/compat/__init__.py
@@ -20,6 +20,12 @@
np_version_under1p19,
np_version_under1p20,
)
+from pandas.compat.pyarrow import (
+ pa_version_under1p0,
+ pa_version_under2p0,
+ pa_version_under3p0,
+ pa_version_under4p0,
+)
PY38 = sys.version_info >= (3, 8)
PY39 = sys.version_info >= (3, 9)
@@ -136,4 +142,8 @@ def get_lzma_file(lzma):
"np_version_under1p18",
"np_version_under1p19",
"np_version_under1p20",
+ "pa_version_under1p0",
+ "pa_version_under2p0",
+ "pa_version_under3p0",
+ "pa_version_under4p0",
]
diff --git a/pandas/compat/pyarrow.py b/pandas/compat/pyarrow.py
new file mode 100644
index 0000000000000..e9ca9b99d4380
--- /dev/null
+++ b/pandas/compat/pyarrow.py
@@ -0,0 +1,18 @@
+""" support pyarrow compatibility across versions """
+
+from distutils.version import LooseVersion
+
+try:
+ import pyarrow as pa
+
+ _pa_version = pa.__version__
+ _palv = LooseVersion(_pa_version)
+ pa_version_under1p0 = _palv < LooseVersion("1.0.0")
+ pa_version_under2p0 = _palv < LooseVersion("2.0.0")
+ pa_version_under3p0 = _palv < LooseVersion("3.0.0")
+ pa_version_under4p0 = _palv < LooseVersion("4.0.0")
+except ImportError:
+ pa_version_under1p0 = True
+ pa_version_under2p0 = True
+ pa_version_under3p0 = True
+ pa_version_under4p0 = True
diff --git a/pandas/core/arrays/string_arrow.py b/pandas/core/arrays/string_arrow.py
index de987b8d34f08..a1278a129c40f 100644
--- a/pandas/core/arrays/string_arrow.py
+++ b/pandas/core/arrays/string_arrow.py
@@ -22,6 +22,11 @@
Scalar,
type_t,
)
+from pandas.compat import (
+ pa_version_under2p0,
+ pa_version_under3p0,
+ pa_version_under4p0,
+)
from pandas.util._decorators import doc
from pandas.util._validators import validate_fillna_kwargs
@@ -667,9 +672,7 @@ def take(
return type(self)(self._data.take(indices_array))
def isin(self, values):
-
- # pyarrow.compute.is_in added in pyarrow 2.0.0
- if not hasattr(pc, "is_in"):
+ if pa_version_under2p0:
return super().isin(values)
value_set = [
@@ -684,7 +687,7 @@ def isin(self, values):
return np.zeros(len(self), dtype=bool)
kwargs = {}
- if LooseVersion(pa.__version__) < "3.0.0":
+ if pa_version_under3p0:
# in pyarrow 2.0.0 skip_null is ignored but is a required keyword and raises
# with unexpected keyword argument in pyarrow 3.0.0+
kwargs["skip_null"] = True
@@ -802,11 +805,10 @@ def _str_contains(self, pat, case=True, flags=0, na=np.nan, regex: bool = True):
return super()._str_contains(pat, case, flags, na, regex)
if regex:
- # match_substring_regex added in pyarrow 4.0.0
- if hasattr(pc, "match_substring_regex") and case:
- result = pc.match_substring_regex(self._data, pat)
- else:
+ if pa_version_under4p0 or case is False:
return super()._str_contains(pat, case, flags, na, regex)
+ else:
+ result = pc.match_substring_regex(self._data, pat)
else:
if case:
result = pc.match_substring(self._data, pat)
@@ -818,27 +820,25 @@ def _str_contains(self, pat, case=True, flags=0, na=np.nan, regex: bool = True):
return result
def _str_startswith(self, pat, na=None):
- # match_substring_regex added in pyarrow 4.0.0
- if hasattr(pc, "match_substring_regex"):
- result = pc.match_substring_regex(self._data, "^" + re.escape(pat))
- result = BooleanDtype().__from_arrow__(result)
- if not isna(na):
- result[isna(result)] = bool(na)
- return result
- else:
+ if pa_version_under4p0:
return super()._str_startswith(pat, na)
+ result = pc.match_substring_regex(self._data, "^" + re.escape(pat))
+ result = BooleanDtype().__from_arrow__(result)
+ if not isna(na):
+ result[isna(result)] = bool(na)
+ return result
+
def _str_endswith(self, pat, na=None):
- # match_substring_regex added in pyarrow 4.0.0
- if hasattr(pc, "match_substring_regex"):
- result = pc.match_substring_regex(self._data, re.escape(pat) + "$")
- result = BooleanDtype().__from_arrow__(result)
- if not isna(na):
- result[isna(result)] = bool(na)
- return result
- else:
+ if pa_version_under4p0:
return super()._str_endswith(pat, na)
+ result = pc.match_substring_regex(self._data, re.escape(pat) + "$")
+ result = BooleanDtype().__from_arrow__(result)
+ if not isna(na):
+ result[isna(result)] = bool(na)
+ return result
+
def _str_match(
self, pat: str, case: bool = True, flags: int = 0, na: Scalar = None
):
@@ -871,13 +871,12 @@ def _str_isnumeric(self):
return BooleanDtype().__from_arrow__(result)
def _str_isspace(self):
- # utf8_is_space added in pyarrow 2.0.0
- if hasattr(pc, "utf8_is_space"):
- result = pc.utf8_is_space(self._data)
- return BooleanDtype().__from_arrow__(result)
- else:
+ if pa_version_under2p0:
return super()._str_isspace()
+ result = pc.utf8_is_space(self._data)
+ return BooleanDtype().__from_arrow__(result)
+
def _str_istitle(self):
result = pc.utf8_is_title(self._data)
return BooleanDtype().__from_arrow__(result)
@@ -887,13 +886,12 @@ def _str_isupper(self):
return BooleanDtype().__from_arrow__(result)
def _str_len(self):
- # utf8_length added in pyarrow 4.0.0
- if hasattr(pc, "utf8_length"):
- result = pc.utf8_length(self._data)
- return Int64Dtype().__from_arrow__(result)
- else:
+ if pa_version_under4p0:
return super()._str_len()
+ result = pc.utf8_length(self._data)
+ return Int64Dtype().__from_arrow__(result)
+
def _str_lower(self):
return type(self)(pc.utf8_lower(self._data))
@@ -901,34 +899,31 @@ def _str_upper(self):
return type(self)(pc.utf8_upper(self._data))
def _str_strip(self, to_strip=None):
+ if pa_version_under4p0:
+ return super()._str_strip(to_strip)
+
if to_strip is None:
- # utf8_trim_whitespace added in pyarrow 4.0.0
- if hasattr(pc, "utf8_trim_whitespace"):
- return type(self)(pc.utf8_trim_whitespace(self._data))
+ result = pc.utf8_trim_whitespace(self._data)
else:
- # utf8_trim added in pyarrow 4.0.0
- if hasattr(pc, "utf8_trim"):
- return type(self)(pc.utf8_trim(self._data, characters=to_strip))
- return super()._str_strip(to_strip)
+ result = pc.utf8_trim(self._data, characters=to_strip)
+ return type(self)(result)
def _str_lstrip(self, to_strip=None):
+ if pa_version_under4p0:
+ return super()._str_lstrip(to_strip)
+
if to_strip is None:
- # utf8_ltrim_whitespace added in pyarrow 4.0.0
- if hasattr(pc, "utf8_ltrim_whitespace"):
- return type(self)(pc.utf8_ltrim_whitespace(self._data))
+ result = pc.utf8_ltrim_whitespace(self._data)
else:
- # utf8_ltrim added in pyarrow 4.0.0
- if hasattr(pc, "utf8_ltrim"):
- return type(self)(pc.utf8_ltrim(self._data, characters=to_strip))
- return super()._str_lstrip(to_strip)
+ result = pc.utf8_ltrim(self._data, characters=to_strip)
+ return type(self)(result)
def _str_rstrip(self, to_strip=None):
+ if pa_version_under4p0:
+ return super()._str_rstrip(to_strip)
+
if to_strip is None:
- # utf8_rtrim_whitespace added in pyarrow 4.0.0
- if hasattr(pc, "utf8_rtrim_whitespace"):
- return type(self)(pc.utf8_rtrim_whitespace(self._data))
+ result = pc.utf8_rtrim_whitespace(self._data)
else:
- # utf8_rtrim added in pyarrow 4.0.0
- if hasattr(pc, "utf8_rtrim"):
- return type(self)(pc.utf8_rtrim(self._data, characters=to_strip))
- return super()._str_rstrip(to_strip)
+ result = pc.utf8_rtrim(self._data, characters=to_strip)
+ return type(self)(result)
| xref https://github.com/pandas-dev/pandas/pull/41281#discussion_r625926407 | https://api.github.com/repos/pandas-dev/pandas/pulls/41327 | 2021-05-05T11:51:06Z | 2021-05-05T15:23:44Z | 2021-05-05T15:23:44Z | 2021-05-05T15:24:04Z |
[ArrowStringArray] PERF: use pa.compute.match_substring_regex for str.match if available | diff --git a/pandas/core/arrays/string_arrow.py b/pandas/core/arrays/string_arrow.py
index 44298401d02cb..e48de531db86c 100644
--- a/pandas/core/arrays/string_arrow.py
+++ b/pandas/core/arrays/string_arrow.py
@@ -19,6 +19,7 @@
Dtype,
NpDtype,
PositionalIndexer,
+ Scalar,
type_t,
)
from pandas.util._decorators import doc
@@ -808,6 +809,13 @@ def _str_endswith(self, pat, na=None):
else:
return super()._str_endswith(pat, na)
+ def _str_match(
+ self, pat: str, case: bool = True, flags: int = 0, na: Scalar = None
+ ):
+ if not pat.startswith("^"):
+ pat = "^" + pat
+ return self._str_contains(pat, case, flags, na, regex=True)
+
def _str_isalnum(self):
result = pc.utf8_is_alnum(self._data)
return BooleanDtype().__from_arrow__(result)
diff --git a/pandas/core/strings/base.py b/pandas/core/strings/base.py
index b8033668aa18f..a77f8861a7c02 100644
--- a/pandas/core/strings/base.py
+++ b/pandas/core/strings/base.py
@@ -61,11 +61,7 @@ def _str_repeat(self, repeats):
@abc.abstractmethod
def _str_match(
- self,
- pat: Union[str, Pattern],
- case: bool = True,
- flags: int = 0,
- na: Scalar = np.nan,
+ self, pat: str, case: bool = True, flags: int = 0, na: Scalar = np.nan
):
pass
diff --git a/pandas/core/strings/object_array.py b/pandas/core/strings/object_array.py
index 0d8db3d3778a3..869eabc76b555 100644
--- a/pandas/core/strings/object_array.py
+++ b/pandas/core/strings/object_array.py
@@ -186,11 +186,7 @@ def rep(x, r):
return result
def _str_match(
- self,
- pat: Union[str, Pattern],
- case: bool = True,
- flags: int = 0,
- na: Scalar = None,
+ self, pat: str, case: bool = True, flags: int = 0, na: Scalar = None
):
if not case:
flags |= re.IGNORECASE
diff --git a/pandas/tests/strings/test_find_replace.py b/pandas/tests/strings/test_find_replace.py
index 404a7aaf3c9a9..06a7c6d56a61d 100644
--- a/pandas/tests/strings/test_find_replace.py
+++ b/pandas/tests/strings/test_find_replace.py
@@ -409,19 +409,39 @@ def test_replace_literal(any_string_dtype):
values.str.replace(compiled_pat, "", regex=False)
-def test_match():
+def test_match(any_string_dtype):
# New match behavior introduced in 0.13
- values = Series(["fooBAD__barBAD", np.nan, "foo"])
+ expected_dtype = "object" if any_string_dtype == "object" else "boolean"
+
+ values = Series(["fooBAD__barBAD", np.nan, "foo"], dtype=any_string_dtype)
result = values.str.match(".*(BAD[_]+).*(BAD)")
- exp = Series([True, np.nan, False])
- tm.assert_series_equal(result, exp)
+ expected = Series([True, np.nan, False], dtype=expected_dtype)
+ tm.assert_series_equal(result, expected)
- values = Series(["fooBAD__barBAD", "BAD_BADleroybrown", np.nan, "foo"])
+ values = Series(
+ ["fooBAD__barBAD", "BAD_BADleroybrown", np.nan, "foo"], dtype=any_string_dtype
+ )
result = values.str.match(".*BAD[_]+.*BAD")
- exp = Series([True, True, np.nan, False])
- tm.assert_series_equal(result, exp)
+ expected = Series([True, True, np.nan, False], dtype=expected_dtype)
+ tm.assert_series_equal(result, expected)
- # mixed
+ result = values.str.match("BAD[_]+.*BAD")
+ expected = Series([False, True, np.nan, False], dtype=expected_dtype)
+ tm.assert_series_equal(result, expected)
+
+ values = Series(
+ ["fooBAD__barBAD", "^BAD_BADleroybrown", np.nan, "foo"], dtype=any_string_dtype
+ )
+ result = values.str.match("^BAD[_]+.*BAD")
+ expected = Series([False, False, np.nan, False], dtype=expected_dtype)
+ tm.assert_series_equal(result, expected)
+
+ result = values.str.match("\\^BAD[_]+.*BAD")
+ expected = Series([False, True, np.nan, False], dtype=expected_dtype)
+ tm.assert_series_equal(result, expected)
+
+
+def test_match_mixed_object():
mixed = Series(
[
"aBAD_BAD",
@@ -435,22 +455,34 @@ def test_match():
2.0,
]
)
- rs = Series(mixed).str.match(".*(BAD[_]+).*(BAD)")
- xp = Series([True, np.nan, True, np.nan, np.nan, False, np.nan, np.nan, np.nan])
- assert isinstance(rs, Series)
- tm.assert_series_equal(rs, xp)
+ result = Series(mixed).str.match(".*(BAD[_]+).*(BAD)")
+ expected = Series(
+ [True, np.nan, True, np.nan, np.nan, False, np.nan, np.nan, np.nan]
+ )
+ assert isinstance(result, Series)
+ tm.assert_series_equal(result, expected)
+
- # na GH #6609
- res = Series(["a", 0, np.nan]).str.match("a", na=False)
- exp = Series([True, False, False])
- tm.assert_series_equal(exp, res)
- res = Series(["a", 0, np.nan]).str.match("a")
- exp = Series([True, np.nan, np.nan])
- tm.assert_series_equal(exp, res)
+def test_match_na_kwarg(any_string_dtype):
+ # GH #6609
+ s = Series(["a", "b", np.nan], dtype=any_string_dtype)
- values = Series(["ab", "AB", "abc", "ABC"])
+ result = s.str.match("a", na=False)
+ expected_dtype = np.bool_ if any_string_dtype == "object" else "boolean"
+ expected = Series([True, False, False], dtype=expected_dtype)
+ tm.assert_series_equal(result, expected)
+
+ result = s.str.match("a")
+ expected_dtype = "object" if any_string_dtype == "object" else "boolean"
+ expected = Series([True, False, np.nan], dtype=expected_dtype)
+ tm.assert_series_equal(result, expected)
+
+
+def test_match_case_kwarg(any_string_dtype):
+ values = Series(["ab", "AB", "abc", "ABC"], dtype=any_string_dtype)
result = values.str.match("ab", case=False)
- expected = Series([True, True, True, True])
+ expected_dtype = np.bool_ if any_string_dtype == "object" else "boolean"
+ expected = Series([True, True, True, True], dtype=expected_dtype)
tm.assert_series_equal(result, expected)
|
```
[ 50.00%] ··· strings.Methods.time_match ok
[ 50.00%] ··· ============== ==========
dtype
-------------- ----------
str 28.3±0ms
string 22.5±0ms
arrow_string 2.46±0ms
============== ==========
``` | https://api.github.com/repos/pandas-dev/pandas/pulls/41326 | 2021-05-05T10:43:24Z | 2021-05-05T12:31:31Z | 2021-05-05T12:31:31Z | 2021-05-05T12:45:09Z |
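The `str.match` change in this PR emulates `re.match` semantics by prepending `"^"` to the pattern and reusing a search-based "contains" kernel. A minimal stdlib sketch of that equivalence (the helper name here is illustrative, not pandas API):

```python
import re

def match_via_anchored_search(pat: str, value: str) -> bool:
    """Emulate re.match by anchoring the pattern and using re.search,
    mirroring how the Arrow backend reuses its contains kernel."""
    if not pat.startswith("^"):
        pat = "^" + pat
    return re.search(pat, value) is not None

# Same inputs as the updated test_match case in the diff.
values = ["fooBAD__barBAD", "BAD_BADleroybrown", "foo"]
pat = "BAD[_]+.*BAD"
anchored = [match_via_anchored_search(pat, v) for v in values]
direct = [re.match(pat, v) is not None for v in values]
assert anchored == direct == [False, True, False]
```

An already-anchored pattern such as `"^BAD[_]+.*BAD"` passes through unchanged, which is why the diff also adds a test with an escaped `"\\^"` literal.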
BUG: groupby(axis=0).rank(axis=1) | diff --git a/doc/source/whatsnew/v1.3.0.rst b/doc/source/whatsnew/v1.3.0.rst
index 320912ec38890..7520b14127c28 100644
--- a/doc/source/whatsnew/v1.3.0.rst
+++ b/doc/source/whatsnew/v1.3.0.rst
@@ -885,6 +885,7 @@ Groupby/resample/rolling
- Bug in :meth:`DataFrame.rolling` returning sum not zero for all ``NaN`` window with ``min_periods=0`` if calculation is not numerical stable (:issue:`41053`)
- Bug in :meth:`SeriesGroupBy.agg` failing to retain ordered :class:`CategoricalDtype` on order-preserving aggregations (:issue:`41147`)
- Bug in :meth:`DataFrameGroupBy.min` and :meth:`DataFrameGroupBy.max` with multiple object-dtype columns and ``numeric_only=False`` incorrectly raising ``ValueError`` (:issue:41111`)
+- Bug in :meth:`DataFrameGroupBy.rank` with the GroupBy object's ``axis=0`` and the ``rank`` method's keyword ``axis=1`` (:issue:`41320`)
Reshaping
^^^^^^^^^
diff --git a/pandas/core/groupby/groupby.py b/pandas/core/groupby/groupby.py
index 7a8b41fbdf141..9bc9895f3798f 100644
--- a/pandas/core/groupby/groupby.py
+++ b/pandas/core/groupby/groupby.py
@@ -2662,14 +2662,23 @@ def rank(
if na_option not in {"keep", "top", "bottom"}:
msg = "na_option must be one of 'keep', 'top', or 'bottom'"
raise ValueError(msg)
+
+ kwargs = {
+ "ties_method": method,
+ "ascending": ascending,
+ "na_option": na_option,
+ "pct": pct,
+ }
+ if axis != 0:
+ # DataFrame uses different keyword name
+ kwargs["method"] = kwargs.pop("ties_method")
+ return self.apply(lambda x: x.rank(axis=axis, numeric_only=False, **kwargs))
+
return self._cython_transform(
"rank",
numeric_only=False,
- ties_method=method,
- ascending=ascending,
- na_option=na_option,
- pct=pct,
axis=axis,
+ **kwargs,
)
@final
diff --git a/pandas/core/groupby/ops.py b/pandas/core/groupby/ops.py
index ffd6209cb83fb..ae46d1b024cc2 100644
--- a/pandas/core/groupby/ops.py
+++ b/pandas/core/groupby/ops.py
@@ -44,6 +44,7 @@
ensure_float64,
ensure_int64,
ensure_platform_int,
+ is_1d_only_ea_obj,
is_bool_dtype,
is_categorical_dtype,
is_complex_dtype,
@@ -600,9 +601,11 @@ def cython_operation(
if values.ndim > 2:
raise NotImplementedError("number of dimensions is currently limited to 2")
elif values.ndim == 2:
+ assert axis == 1, axis
+ elif not is_1d_only_ea_obj(values):
# Note: it is *not* the case that axis is always 0 for 1-dim values,
# as we can have 1D ExtensionArrays that we need to treat as 2D
- assert axis == 1, axis
+ assert axis == 0
dtype = values.dtype
is_numeric = is_numeric_dtype(dtype)
diff --git a/pandas/tests/groupby/test_rank.py b/pandas/tests/groupby/test_rank.py
index e07c5f404a02a..20edf03c5b96c 100644
--- a/pandas/tests/groupby/test_rank.py
+++ b/pandas/tests/groupby/test_rank.py
@@ -600,3 +600,18 @@ def test_rank_multiindex():
)
tm.assert_frame_equal(result, expected)
+
+
+def test_groupby_axis0_rank_axis1():
+ # GH#41320
+ df = DataFrame(
+ {0: [1, 3, 5, 7], 1: [2, 4, 6, 8], 2: [1.5, 3.5, 5.5, 7.5]},
+ index=["a", "a", "b", "b"],
+ )
+ gb = df.groupby(level=0, axis=0)
+
+ res = gb.rank(axis=1)
+
+ # This should match what we get when "manually" operating group-by-group
+ expected = concat([df.loc["a"].rank(axis=1), df.loc["b"].rank(axis=1)], axis=0)
+ tm.assert_frame_equal(res, expected)
| - [x] closes #41320
- [x] tests added / passed
- [x] Ensure all linting tests pass, see [here](https://pandas.pydata.org/pandas-docs/dev/development/contributing.html#code-standards) for how to run them
- [x] whatsnew entry
| https://api.github.com/repos/pandas-dev/pandas/pulls/41324 | 2021-05-05T03:46:20Z | 2021-05-05T14:52:58Z | 2021-05-05T14:52:58Z | 2021-05-05T15:01:02Z |
ENH: loosen XLS signature | diff --git a/doc/source/whatsnew/v1.3.0.rst b/doc/source/whatsnew/v1.3.0.rst
index 84f9dae8a0850..196a5b5cd136e 100644
--- a/doc/source/whatsnew/v1.3.0.rst
+++ b/doc/source/whatsnew/v1.3.0.rst
@@ -197,7 +197,7 @@ Other enhancements
- Improved integer type mapping from pandas to SQLAlchemy when using :meth:`DataFrame.to_sql` (:issue:`35076`)
- :func:`to_numeric` now supports downcasting of nullable ``ExtensionDtype`` objects (:issue:`33013`)
- Add support for dict-like names in :class:`MultiIndex.set_names` and :class:`MultiIndex.rename` (:issue:`20421`)
-- :func:`pandas.read_excel` can now auto detect .xlsb files (:issue:`35416`)
+- :func:`pandas.read_excel` can now auto detect .xlsb files and older .xls files (:issue:`35416`, :issue:`41225`)
- :class:`pandas.ExcelWriter` now accepts an ``if_sheet_exists`` parameter to control the behaviour of append mode when writing to existing sheets (:issue:`40230`)
- :meth:`.Rolling.sum`, :meth:`.Expanding.sum`, :meth:`.Rolling.mean`, :meth:`.Expanding.mean`, :meth:`.ExponentialMovingWindow.mean`, :meth:`.Rolling.median`, :meth:`.Expanding.median`, :meth:`.Rolling.max`, :meth:`.Expanding.max`, :meth:`.Rolling.min`, and :meth:`.Expanding.min` now support ``Numba`` execution with the ``engine`` keyword (:issue:`38895`, :issue:`41267`)
- :meth:`DataFrame.apply` can now accept NumPy unary operators as strings, e.g. ``df.apply("sqrt")``, which was already the case for :meth:`Series.apply` (:issue:`39116`)
@@ -843,6 +843,8 @@ I/O
- Bug in :func:`read_csv` and :func:`read_excel` not respecting dtype for duplicated column name when ``mangle_dupe_cols`` is set to ``True`` (:issue:`35211`)
- Bug in :func:`read_csv` and :func:`read_table` misinterpreting arguments when ``sys.setprofile`` had been previously called (:issue:`41069`)
- Bug in the conversion from pyarrow to pandas (e.g. for reading Parquet) with nullable dtypes and a pyarrow array whose data buffer size is not a multiple of dtype size (:issue:`40896`)
+- Bug in :func:`read_excel` would raise an error when pandas could not determine the file type, even when user specified the ``engine`` argument (:issue:`41225`)
+-
Period
^^^^^^
diff --git a/pandas/io/excel/_base.py b/pandas/io/excel/_base.py
index 4b81b69976c62..ae9fba25cf002 100644
--- a/pandas/io/excel/_base.py
+++ b/pandas/io/excel/_base.py
@@ -1014,16 +1014,21 @@ def close(self):
return content
-XLS_SIGNATURE = b"\xD0\xCF\x11\xE0\xA1\xB1\x1A\xE1"
+XLS_SIGNATURES = (
+ b"\x09\x00\x04\x00\x07\x00\x10\x00", # BIFF2
+ b"\x09\x02\x06\x00\x00\x00\x10\x00", # BIFF3
+ b"\x09\x04\x06\x00\x00\x00\x10\x00", # BIFF4
+ b"\xD0\xCF\x11\xE0\xA1\xB1\x1A\xE1", # Compound File Binary
+)
ZIP_SIGNATURE = b"PK\x03\x04"
-PEEK_SIZE = max(len(XLS_SIGNATURE), len(ZIP_SIGNATURE))
+PEEK_SIZE = max(map(len, XLS_SIGNATURES + (ZIP_SIGNATURE,)))
@doc(storage_options=_shared_docs["storage_options"])
def inspect_excel_format(
content_or_path: FilePathOrBuffer,
storage_options: StorageOptions = None,
-) -> str:
+) -> str | None:
"""
Inspect the path or content of an excel file and get its format.
@@ -1037,8 +1042,8 @@ def inspect_excel_format(
Returns
-------
- str
- Format of file.
+ str or None
+ Format of file if it can be determined.
Raises
------
@@ -1063,10 +1068,10 @@ def inspect_excel_format(
peek = buf
stream.seek(0)
- if peek.startswith(XLS_SIGNATURE):
+ if any(peek.startswith(sig) for sig in XLS_SIGNATURES):
return "xls"
elif not peek.startswith(ZIP_SIGNATURE):
- raise ValueError("File is not a recognized excel file")
+ return None
# ZipFile typing is overly-strict
# https://github.com/python/typeshed/issues/4212
@@ -1174,8 +1179,12 @@ def __init__(
ext = inspect_excel_format(
content_or_path=path_or_buffer, storage_options=storage_options
)
+ if ext is None:
+ raise ValueError(
+ "Excel file format cannot be determined, you must specify "
+ "an engine manually."
+ )
- # ext will always be valid, otherwise inspect_excel_format would raise
engine = config.get_option(f"io.excel.{ext}.reader", silent=True)
if engine == "auto":
engine = get_default_engine(ext, mode="reader")
@@ -1190,12 +1199,13 @@ def __init__(
path_or_buffer, storage_options=storage_options
)
- if ext != "xls" and xlrd_version >= Version("2"):
+ # Pass through if ext is None, otherwise check if ext valid for xlrd
+ if ext and ext != "xls" and xlrd_version >= Version("2"):
raise ValueError(
f"Your version of xlrd is {xlrd_version}. In xlrd >= 2.0, "
f"only the xls format is supported. Install openpyxl instead."
)
- elif ext != "xls":
+ elif ext and ext != "xls":
caller = inspect.stack()[1]
if (
caller.filename.endswith(
diff --git a/pandas/tests/io/excel/test_readers.py b/pandas/tests/io/excel/test_readers.py
index a46cb70097bd8..aec638a0d8612 100644
--- a/pandas/tests/io/excel/test_readers.py
+++ b/pandas/tests/io/excel/test_readers.py
@@ -727,9 +727,20 @@ def test_missing_file_raises(self, read_ext):
def test_corrupt_bytes_raises(self, read_ext, engine):
bad_stream = b"foo"
- if engine is None or engine == "xlrd":
+ if engine is None:
error = ValueError
- msg = "File is not a recognized excel file"
+ msg = (
+ "Excel file format cannot be determined, you must "
+ "specify an engine manually."
+ )
+ elif engine == "xlrd":
+ from xlrd import XLRDError
+
+ error = XLRDError
+ msg = (
+ "Unsupported format, or corrupt file: Expected BOF "
+ "record; found b'foo'"
+ )
else:
error = BadZipFile
msg = "File is not a zip file"
diff --git a/pandas/tests/io/excel/test_xlrd.py b/pandas/tests/io/excel/test_xlrd.py
index bf0a0de442ae1..2bb9ba2a397be 100644
--- a/pandas/tests/io/excel/test_xlrd.py
+++ b/pandas/tests/io/excel/test_xlrd.py
@@ -1,3 +1,5 @@
+import io
+
import pytest
from pandas.compat._optional import import_optional_dependency
@@ -8,6 +10,7 @@
from pandas.util.version import Version
from pandas.io.excel import ExcelFile
+from pandas.io.excel._base import inspect_excel_format
xlrd = pytest.importorskip("xlrd")
xlwt = pytest.importorskip("xlwt")
@@ -78,3 +81,18 @@ def test_read_excel_warning_with_xlsx_file(datapath):
else:
with tm.assert_produces_warning(None):
pd.read_excel(path, "Sheet1", engine=None)
+
+
+@pytest.mark.parametrize(
+ "file_header",
+ [
+ b"\x09\x00\x04\x00\x07\x00\x10\x00",
+ b"\x09\x02\x06\x00\x00\x00\x10\x00",
+ b"\x09\x04\x06\x00\x00\x00\x10\x00",
+ b"\xd0\xcf\x11\xe0\xa1\xb1\x1a\xe1",
+ ],
+)
+def test_read_old_xls_files(file_header):
+ # GH 41226
+ f = io.BytesIO(file_header)
+ assert inspect_excel_format(f) == "xls"
| - [x] closes #41225
- [x] tests added / passed
- [x] Ensure all linting tests pass, see [here](https://pandas.pydata.org/pandas-docs/dev/development/contributing.html#code-standards) for how to run them
- [x] whatsnew entry
Added the ability to check for multiple XLS signatures, based on the test files available at the [Spreadsheet Project](https://www.openoffice.org/sc/testdocs/). Also defers raising an error when the engine is specified but the file signature does not match any of the values in `SIGNATURES`: this allows a user to specify and use an engine even if the passed file's signature is unrecognized.
This is my first PR for this project, so please let me know if more is expected (writing tests, writing a whatsnew entry, etc.). I have run Flake8 on the code and successfully opened BIFF2 through BIFF8 files with this method. Thanks!
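The signature-sniffing logic can be sketched as a small self-contained function (the name `sniff_excel_format` is hypothetical — the real pandas helper is `inspect_excel_format` — and the signature bytes are copied from the diff):

```python
from __future__ import annotations

import io

# BIFF2, BIFF3, BIFF4, and Compound File Binary headers, as added in the PR.
XLS_SIGNATURES = (
    b"\x09\x00\x04\x00\x07\x00\x10\x00",  # BIFF2
    b"\x09\x02\x06\x00\x00\x00\x10\x00",  # BIFF3
    b"\x09\x04\x06\x00\x00\x00\x10\x00",  # BIFF4
    b"\xD0\xCF\x11\xE0\xA1\xB1\x1A\xE1",  # Compound File Binary (BIFF5/8)
)
ZIP_SIGNATURE = b"PK\x03\x04"
PEEK_SIZE = max(map(len, XLS_SIGNATURES + (ZIP_SIGNATURE,)))

def sniff_excel_format(stream: io.BytesIO) -> str | None:
    """Return 'xls', 'zip', or None when the header is unrecognized."""
    peek = stream.read(PEEK_SIZE)
    stream.seek(0)
    if any(peek.startswith(sig) for sig in XLS_SIGNATURES):
        return "xls"
    if peek.startswith(ZIP_SIGNATURE):
        return "zip"
    # Deferred decision: the caller may still honor a user-specified engine.
    return None

assert sniff_excel_format(io.BytesIO(b"\x09\x02\x06\x00\x00\x00\x10\x00")) == "xls"
assert sniff_excel_format(io.BytesIO(b"foo")) is None
```

Returning `None` instead of raising is the behavioral core of the PR: `ExcelFile.__init__` only raises when no engine was given.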
| https://api.github.com/repos/pandas-dev/pandas/pulls/41321 | 2021-05-04T21:04:13Z | 2021-05-21T01:44:13Z | 2021-05-21T01:44:13Z | 2021-05-21T01:47:17Z |
REF: remove no-op casting | diff --git a/pandas/core/groupby/groupby.py b/pandas/core/groupby/groupby.py
index f2041951b9e49..a2556ad54af02 100644
--- a/pandas/core/groupby/groupby.py
+++ b/pandas/core/groupby/groupby.py
@@ -59,9 +59,7 @@ class providing the base-class of operations.
doc,
)
-from pandas.core.dtypes.cast import maybe_downcast_numeric
from pandas.core.dtypes.common import (
- ensure_float,
is_bool_dtype,
is_datetime64_dtype,
is_integer_dtype,
@@ -1271,19 +1269,7 @@ def _python_agg_general(self, func, *args, **kwargs):
except TypeError:
continue
- assert result is not None
key = base.OutputKey(label=name, position=idx)
-
- if self.grouper._filter_empty_groups:
- mask = counts.ravel() > 0
-
- # since we are masking, make sure that we have a float object
- values = result
- if is_numeric_dtype(values.dtype):
- values = ensure_float(values)
-
- result = maybe_downcast_numeric(values[mask], result.dtype)
-
output[key] = result
if not output:
@@ -3035,9 +3021,7 @@ def _reindex_output(
Object (potentially) re-indexed to include all possible groups.
"""
groupings = self.grouper.groupings
- if groupings is None:
- return output
- elif len(groupings) == 1:
+ if len(groupings) == 1:
return output
# if we only care about the observed values
diff --git a/pandas/core/groupby/ops.py b/pandas/core/groupby/ops.py
index 0727cad941d49..715a3008dc058 100644
--- a/pandas/core/groupby/ops.py
+++ b/pandas/core/groupby/ops.py
@@ -56,7 +56,6 @@
needs_i8_conversion,
)
from pandas.core.dtypes.dtypes import ExtensionDtype
-from pandas.core.dtypes.generic import ABCCategoricalIndex
from pandas.core.dtypes.missing import (
isna,
maybe_fill,
@@ -89,6 +88,7 @@
grouper,
)
from pandas.core.indexes.api import (
+ CategoricalIndex,
Index,
MultiIndex,
ensure_index,
@@ -676,7 +676,6 @@ def __init__(
):
assert isinstance(axis, Index), axis
- self._filter_empty_groups = self.compressed = len(groupings) != 1
self.axis = axis
self._groupings: list[grouper.Grouping] = list(groupings)
self.sort = sort
@@ -822,9 +821,7 @@ def apply(self, f: F, data: FrameOrSeries, axis: int = 0):
@cache_readonly
def indices(self):
""" dict {group name -> group indices} """
- if len(self.groupings) == 1 and isinstance(
- self.result_index, ABCCategoricalIndex
- ):
+ if len(self.groupings) == 1 and isinstance(self.result_index, CategoricalIndex):
# This shows unused categories in indices GH#38642
return self.groupings[0].indices
codes_list = [ping.codes for ping in self.groupings]
@@ -913,7 +910,7 @@ def reconstructed_codes(self) -> list[np.ndarray]:
@cache_readonly
def result_index(self) -> Index:
- if not self.compressed and len(self.groupings) == 1:
+ if len(self.groupings) == 1:
return self.groupings[0].result_index.rename(self.names[0])
codes = self.reconstructed_codes
@@ -924,7 +921,9 @@ def result_index(self) -> Index:
@final
def get_group_levels(self) -> list[Index]:
- if not self.compressed and len(self.groupings) == 1:
+ # Note: only called from _insert_inaxis_grouper_inplace, which
+ # is only called for BaseGrouper, never for BinGrouper
+ if len(self.groupings) == 1:
return [self.groupings[0].result_index]
name_list = []
@@ -1091,7 +1090,6 @@ def __init__(
):
self.bins = ensure_int64(bins)
self.binlabels = ensure_index(binlabels)
- self._filter_empty_groups = False
self.mutated = mutated
self.indexer = indexer
@@ -1201,10 +1199,9 @@ def names(self) -> list[Hashable]:
@property
def groupings(self) -> list[grouper.Grouping]:
- return [
- grouper.Grouping(lvl, lvl, in_axis=False, level=None, name=name)
- for lvl, name in zip(self.levels, self.names)
- ]
+ lev = self.binlabels
+ ping = grouper.Grouping(lev, lev, in_axis=False, level=None, name=lev.name)
+ return [ping]
def _aggregate_series_fast(
self, obj: Series, func: F
| In _python_agg_general:
```
if self.grouper._filter_empty_groups:
mask = counts.ravel() > 0
```
It isn't entirely trivial, but we can show that we will always have `mask.all()`, as a result of which the remainder of this chunk of code is a no-op. This PR removes it.
As a consequence, we can remove `_filter_empty_groups` and `compressed`.
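The key invariant behind `mask.all()` — every group formed from the observed keys contains at least one row, so every per-group count is strictly positive — can be illustrated without pandas (a pure-Python sketch of the simple, non-categorical case, not the actual groupby machinery):

```python
from collections import Counter

# Rows keyed by group label; group keys only exist because some row carries them.
rows = [("a", 1), ("a", 3), ("b", 5), ("b", 7), ("c", 9)]
counts = Counter(key for key, _ in rows)

# Every count is > 0 by construction, so the `counts.ravel() > 0` mask
# in _python_agg_general can never filter anything out.
mask = [n > 0 for n in counts.values()]
assert all(mask)
```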
| https://api.github.com/repos/pandas-dev/pandas/pulls/41317 | 2021-05-04T19:10:25Z | 2021-05-05T12:44:33Z | 2021-05-05T12:44:33Z | 2021-05-05T14:50:22Z |
CI: pin py310-dev version to alpha 7 | diff --git a/.github/workflows/python-dev.yml b/.github/workflows/python-dev.yml
index 2643dc5ec656e..221501ae028f3 100644
--- a/.github/workflows/python-dev.yml
+++ b/.github/workflows/python-dev.yml
@@ -22,7 +22,7 @@ jobs:
- name: Set up Python Dev Version
uses: actions/setup-python@v2
with:
- python-version: '3.10-dev'
+ python-version: '3.10.0-alpha.7'
- name: Install dependencies
run: |
| xref #41313
Not sure if the syntax is correct. Let's see. | https://api.github.com/repos/pandas-dev/pandas/pulls/41315 | 2021-05-04T17:10:11Z | 2021-05-04T19:27:24Z | 2021-05-04T19:27:24Z | 2022-11-18T02:21:51Z |
[ArrowStringDtype] Make it already a StringDtype subclass | diff --git a/pandas/core/arrays/base.py b/pandas/core/arrays/base.py
index bd01191719143..2cb30c53b6832 100644
--- a/pandas/core/arrays/base.py
+++ b/pandas/core/arrays/base.py
@@ -530,7 +530,6 @@ def astype(self, dtype, copy=True):
NumPy ndarray with 'dtype' for its dtype.
"""
from pandas.core.arrays.string_ import StringDtype
- from pandas.core.arrays.string_arrow import ArrowStringDtype
dtype = pandas_dtype(dtype)
if is_dtype_equal(dtype, self.dtype):
@@ -540,9 +539,8 @@ def astype(self, dtype, copy=True):
return self.copy()
# FIXME: Really hard-code here?
- if isinstance(
- dtype, (ArrowStringDtype, StringDtype)
- ): # allow conversion to StringArrays
+ if isinstance(dtype, StringDtype):
+ # allow conversion to StringArrays
return dtype.construct_array_type()._from_sequence(self, copy=False)
return np.array(self, dtype=dtype, copy=copy)
diff --git a/pandas/core/arrays/interval.py b/pandas/core/arrays/interval.py
index 95c95d98bc968..a99bf245a6073 100644
--- a/pandas/core/arrays/interval.py
+++ b/pandas/core/arrays/interval.py
@@ -829,7 +829,6 @@ def astype(self, dtype, copy: bool = True):
"""
from pandas import Index
from pandas.core.arrays.string_ import StringDtype
- from pandas.core.arrays.string_arrow import ArrowStringDtype
if dtype is not None:
dtype = pandas_dtype(dtype)
@@ -852,7 +851,7 @@ def astype(self, dtype, copy: bool = True):
return self._shallow_copy(new_left, new_right)
elif is_categorical_dtype(dtype):
return Categorical(np.asarray(self), dtype=dtype)
- elif isinstance(dtype, (StringDtype, ArrowStringDtype)):
+ elif isinstance(dtype, StringDtype):
return dtype.construct_array_type()._from_sequence(self, copy=False)
# TODO: This try/except will be repeated.
diff --git a/pandas/core/arrays/string_arrow.py b/pandas/core/arrays/string_arrow.py
index 44298401d02cb..6f23457c04dd4 100644
--- a/pandas/core/arrays/string_arrow.py
+++ b/pandas/core/arrays/string_arrow.py
@@ -24,7 +24,6 @@
from pandas.util._decorators import doc
from pandas.util._validators import validate_fillna_kwargs
-from pandas.core.dtypes.base import ExtensionDtype
from pandas.core.dtypes.common import (
is_array_like,
is_bool_dtype,
@@ -42,6 +41,7 @@
from pandas.core.arrays.base import ExtensionArray
from pandas.core.arrays.boolean import BooleanDtype
from pandas.core.arrays.integer import Int64Dtype
+from pandas.core.arrays.string_ import StringDtype
from pandas.core.indexers import (
check_array_indexer,
validate_indices,
@@ -74,7 +74,7 @@
@register_extension_dtype
-class ArrowStringDtype(ExtensionDtype):
+class ArrowStringDtype(StringDtype):
"""
Extension dtype for string data in a ``pyarrow.ChunkedArray``.
@@ -110,7 +110,7 @@ def type(self) -> type[str]:
return str
@classmethod
- def construct_array_type(cls) -> type_t[ArrowStringArray]:
+ def construct_array_type(cls) -> type_t[ArrowStringArray]: # type: ignore[override]
"""
Return the array type associated with this dtype.
@@ -126,7 +126,9 @@ def __hash__(self) -> int:
def __repr__(self) -> str:
return "ArrowStringDtype"
- def __from_arrow__(self, array: pa.Array | pa.ChunkedArray) -> ArrowStringArray:
+ def __from_arrow__( # type: ignore[override]
+ self, array: pa.Array | pa.ChunkedArray
+ ) -> ArrowStringArray:
"""
Construct StringArray from pyarrow Array/ChunkedArray.
"""
diff --git a/pandas/core/strings/accessor.py b/pandas/core/strings/accessor.py
index f7fa32076ec86..f8df05a7022d1 100644
--- a/pandas/core/strings/accessor.py
+++ b/pandas/core/strings/accessor.py
@@ -155,11 +155,10 @@ class StringMethods(NoNewAttributesMixin):
def __init__(self, data):
from pandas.core.arrays.string_ import StringDtype
- from pandas.core.arrays.string_arrow import ArrowStringDtype
self._inferred_dtype = self._validate(data)
self._is_categorical = is_categorical_dtype(data.dtype)
- self._is_string = isinstance(data.dtype, (StringDtype, ArrowStringDtype))
+ self._is_string = isinstance(data.dtype, StringDtype)
self._data = data
self._index = self._name = None
@@ -3028,9 +3027,8 @@ def _result_dtype(arr):
# ideally we just pass `dtype=arr.dtype` unconditionally, but this fails
# when the list of values is empty.
from pandas.core.arrays.string_ import StringDtype
- from pandas.core.arrays.string_arrow import ArrowStringDtype
- if isinstance(arr.dtype, (StringDtype, ArrowStringDtype)):
+ if isinstance(arr.dtype, StringDtype):
return arr.dtype.name
else:
return object
diff --git a/pandas/tests/extension/json/array.py b/pandas/tests/extension/json/array.py
index ffe2769730f34..2eef828288e59 100644
--- a/pandas/tests/extension/json/array.py
+++ b/pandas/tests/extension/json/array.py
@@ -40,7 +40,6 @@
ExtensionDtype,
)
from pandas.api.types import is_bool_dtype
-from pandas.core.arrays.string_arrow import ArrowStringDtype
class JSONDtype(ExtensionDtype):
@@ -196,7 +195,7 @@ def astype(self, dtype, copy=True):
if copy:
return self.copy()
return self
- elif isinstance(dtype, (StringDtype, ArrowStringDtype)):
+ elif isinstance(dtype, StringDtype):
value = self.astype(str) # numpy doesn'y like nested dicts
return dtype.construct_array_type()._from_sequence(value, copy=False)
| @simonjayhawkins this will be redundant after https://github.com/pandas-dev/pandas/pull/39908, but in the meantime, ArrowStringDtype being a subclass would help with writing robust code related to `StringDtype` in downstream packages (for those testing against pandas master)
CI: skip tests when only files in doc/web changes (github actions) | diff --git a/.github/workflows/database.yml b/.github/workflows/database.yml
index b15889351386a..292598dfcab73 100644
--- a/.github/workflows/database.yml
+++ b/.github/workflows/database.yml
@@ -7,6 +7,8 @@ on:
branches:
- master
- 1.2.x
+ paths-ignore:
+ - "doc/**"
env:
PYTEST_WORKERS: "auto"
diff --git a/.github/workflows/posix.yml b/.github/workflows/posix.yml
index 3a4d3c106f851..cb7d3fb5cabcf 100644
--- a/.github/workflows/posix.yml
+++ b/.github/workflows/posix.yml
@@ -7,6 +7,8 @@ on:
branches:
- master
- 1.2.x
+ paths-ignore:
+ - "doc/**"
env:
PYTEST_WORKERS: "auto"
diff --git a/.github/workflows/python-dev.yml b/.github/workflows/python-dev.yml
index 2643dc5ec656e..38b1aa9ae7047 100644
--- a/.github/workflows/python-dev.yml
+++ b/.github/workflows/python-dev.yml
@@ -7,6 +7,8 @@ on:
pull_request:
branches:
- master
+ paths-ignore:
+ - "doc/**"
jobs:
build:
diff --git a/azure-pipelines.yml b/azure-pipelines.yml
index 56da4e87f2709..956feaef5f83e 100644
--- a/azure-pipelines.yml
+++ b/azure-pipelines.yml
@@ -1,7 +1,12 @@
# Adapted from https://github.com/numba/numba/blob/master/azure-pipelines.yml
trigger:
-- master
-- 1.2.x
+ branches:
+ include:
+ - master
+ - 1.2.x
+ paths:
+ exclude:
+ - 'doc/*'
pr:
- master
| - [x] closes #41101
- [x] tests added / passed
- [x] Ensure all linting tests pass, see [here](https://pandas.pydata.org/pandas-docs/dev/development/contributing.html#code-standards) for how to run them
| https://api.github.com/repos/pandas-dev/pandas/pulls/41310 | 2021-05-04T16:04:48Z | 2021-06-12T09:43:18Z | 2021-06-12T09:43:18Z | 2021-08-26T13:44:50Z |
REF: share GroupBy.transform | diff --git a/pandas/core/groupby/generic.py b/pandas/core/groupby/generic.py
index 89623d260af71..f5274f0998e7c 100644
--- a/pandas/core/groupby/generic.py
+++ b/pandas/core/groupby/generic.py
@@ -526,35 +526,9 @@ def _aggregate_named(self, func, *args, **kwargs):
@Substitution(klass="Series")
@Appender(_transform_template)
def transform(self, func, *args, engine=None, engine_kwargs=None, **kwargs):
-
- if maybe_use_numba(engine):
- with group_selection_context(self):
- data = self._selected_obj
- result = self._transform_with_numba(
- data.to_frame(), func, *args, engine_kwargs=engine_kwargs, **kwargs
- )
- return self.obj._constructor(
- result.ravel(), index=data.index, name=data.name
- )
-
- func = com.get_cython_func(func) or func
-
- if not isinstance(func, str):
- return self._transform_general(func, *args, **kwargs)
-
- elif func not in base.transform_kernel_allowlist:
- msg = f"'{func}' is not a valid function name for transform(name)"
- raise ValueError(msg)
- elif func in base.cythonized_kernels or func in base.transformation_kernels:
- # cythonized transform or canned "agg+broadcast"
- return getattr(self, func)(*args, **kwargs)
- # If func is a reduction, we need to broadcast the
- # result to the whole group. Compute func result
- # and deal with possible broadcasting below.
- # Temporarily set observed for dealing with categoricals.
- with com.temp_setattr(self, "observed", True):
- result = getattr(self, func)(*args, **kwargs)
- return self._wrap_transform_fast_result(result)
+ return self._transform(
+ func, *args, engine=engine, engine_kwargs=engine_kwargs, **kwargs
+ )
def _transform_general(self, func: Callable, *args, **kwargs) -> Series:
"""
@@ -586,6 +560,9 @@ def _transform_general(self, func: Callable, *args, **kwargs) -> Series:
result.name = self._selected_obj.name
return result
+ def _can_use_transform_fast(self, result) -> bool:
+ return True
+
def _wrap_transform_fast_result(self, result: Series) -> Series:
"""
fast version of transform, only applicable to
@@ -1334,43 +1311,14 @@ def _transform_general(self, func, *args, **kwargs):
@Substitution(klass="DataFrame")
@Appender(_transform_template)
def transform(self, func, *args, engine=None, engine_kwargs=None, **kwargs):
+ return self._transform(
+ func, *args, engine=engine, engine_kwargs=engine_kwargs, **kwargs
+ )
- if maybe_use_numba(engine):
- with group_selection_context(self):
- data = self._selected_obj
- result = self._transform_with_numba(
- data, func, *args, engine_kwargs=engine_kwargs, **kwargs
- )
- return self.obj._constructor(result, index=data.index, columns=data.columns)
-
- # optimized transforms
- func = com.get_cython_func(func) or func
-
- if not isinstance(func, str):
- return self._transform_general(func, *args, **kwargs)
-
- elif func not in base.transform_kernel_allowlist:
- msg = f"'{func}' is not a valid function name for transform(name)"
- raise ValueError(msg)
- elif func in base.cythonized_kernels or func in base.transformation_kernels:
- # cythonized transformation or canned "reduction+broadcast"
- return getattr(self, func)(*args, **kwargs)
- # GH 30918
- # Use _transform_fast only when we know func is an aggregation
- if func in base.reduction_kernels:
- # If func is a reduction, we need to broadcast the
- # result to the whole group. Compute func result
- # and deal with possible broadcasting below.
- # Temporarily set observed for dealing with categoricals.
- with com.temp_setattr(self, "observed", True):
- result = getattr(self, func)(*args, **kwargs)
-
- if isinstance(result, DataFrame) and result.columns.equals(
- self._obj_with_exclusions.columns
- ):
- return self._wrap_transform_fast_result(result)
-
- return self._transform_general(func, *args, **kwargs)
+ def _can_use_transform_fast(self, result) -> bool:
+ return isinstance(result, DataFrame) and result.columns.equals(
+ self._obj_with_exclusions.columns
+ )
def _wrap_transform_fast_result(self, result: DataFrame) -> DataFrame:
"""
diff --git a/pandas/core/groupby/groupby.py b/pandas/core/groupby/groupby.py
index f2041951b9e49..7a8b41fbdf141 100644
--- a/pandas/core/groupby/groupby.py
+++ b/pandas/core/groupby/groupby.py
@@ -29,6 +29,7 @@ class providing the base-class of operations.
Sequence,
TypeVar,
Union,
+ cast,
)
import numpy as np
@@ -104,7 +105,10 @@ class providing the base-class of operations.
from pandas.core.internals.blocks import ensure_block_shape
from pandas.core.series import Series
from pandas.core.sorting import get_group_index_sorter
-from pandas.core.util.numba_ import NUMBA_FUNC_CACHE
+from pandas.core.util.numba_ import (
+ NUMBA_FUNC_CACHE,
+ maybe_use_numba,
+)
if TYPE_CHECKING:
from typing import Literal
@@ -1398,8 +1402,55 @@ def _cython_transform(
return self._wrap_transformed_output(output)
- def transform(self, func, *args, **kwargs):
- raise AbstractMethodError(self)
+ @final
+ def _transform(self, func, *args, engine=None, engine_kwargs=None, **kwargs):
+
+ if maybe_use_numba(engine):
+ # TODO: tests with self._selected_obj.ndim == 1 on DataFrameGroupBy
+ with group_selection_context(self):
+ data = self._selected_obj
+ df = data if data.ndim == 2 else data.to_frame()
+ result = self._transform_with_numba(
+ df, func, *args, engine_kwargs=engine_kwargs, **kwargs
+ )
+ if self.obj.ndim == 2:
+ return cast(DataFrame, self.obj)._constructor(
+ result, index=data.index, columns=data.columns
+ )
+ else:
+ return cast(Series, self.obj)._constructor(
+ result.ravel(), index=data.index, name=data.name
+ )
+
+ # optimized transforms
+ func = com.get_cython_func(func) or func
+
+ if not isinstance(func, str):
+ return self._transform_general(func, *args, **kwargs)
+
+ elif func not in base.transform_kernel_allowlist:
+ msg = f"'{func}' is not a valid function name for transform(name)"
+ raise ValueError(msg)
+ elif func in base.cythonized_kernels or func in base.transformation_kernels:
+ # cythonized transform or canned "agg+broadcast"
+ return getattr(self, func)(*args, **kwargs)
+
+ else:
+ # i.e. func in base.reduction_kernels
+
+ # GH#30918 Use _transform_fast only when we know func is an aggregation
+ # If func is a reduction, we need to broadcast the
+ # result to the whole group. Compute func result
+ # and deal with possible broadcasting below.
+ # Temporarily set observed for dealing with categoricals.
+ with com.temp_setattr(self, "observed", True):
+ result = getattr(self, func)(*args, **kwargs)
+
+ if self._can_use_transform_fast(result):
+ return self._wrap_transform_fast_result(result)
+
+ # only reached for DataFrameGroupBy
+ return self._transform_general(func, *args, **kwargs)
# -----------------------------------------------------------------
# Utilities
| - [ ] closes #xxxx
- [ ] tests added / passed
- [ ] Ensure all linting tests pass, see [here](https://pandas.pydata.org/pandas-docs/dev/development/contributing.html#code-standards) for how to run them
- [ ] whatsnew entry
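The consolidated `_transform` in the diff above dispatches on whether `func` is a string kernel before falling back to the general path. A rough, pure-Python sketch of that dispatch flow (the kernel sets here are illustrative stand-ins for `base.transform_kernel_allowlist` and friends, not pandas' real lists):

```python
# Illustrative sketch of the string-kernel dispatch consolidated into
# GroupBy._transform.  The kernel sets below are examples, not pandas'
# actual allowlists.
TRANSFORMATION_KERNELS = {"cumsum", "shift", "rank"}
REDUCTION_KERNELS = {"mean", "sum", "max"}
TRANSFORM_KERNEL_ALLOWLIST = TRANSFORMATION_KERNELS | REDUCTION_KERNELS


def dispatch_transform(func):
    """Return which code path a transform request would take."""
    if not isinstance(func, str):
        # user-defined callables always take the general (slow) path
        return "general"
    if func not in TRANSFORM_KERNEL_ALLOWLIST:
        raise ValueError(f"'{func}' is not a valid function name for transform(name)")
    if func in TRANSFORMATION_KERNELS:
        # cythonized transform or canned "agg+broadcast"
        return "cython"
    # reduction kernels: compute the per-group result, then try the fast
    # broadcast path, falling back to the general path if columns differ
    return "fast-or-general"
```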
| https://api.github.com/repos/pandas-dev/pandas/pulls/41308 | 2021-05-04T14:55:12Z | 2021-05-04T21:36:53Z | 2021-05-04T21:36:53Z | 2021-05-04T21:55:23Z |
Deprecate inplace in Categorical.set_categories. | diff --git a/doc/source/user_guide/categorical.rst b/doc/source/user_guide/categorical.rst
index fba41f73ba819..f65638cd78a2b 100644
--- a/doc/source/user_guide/categorical.rst
+++ b/doc/source/user_guide/categorical.rst
@@ -954,6 +954,7 @@ categorical (categories and ordering). So if you read back the CSV file you have
relevant columns back to ``category`` and assign the right categories and categories ordering.
.. ipython:: python
+ :okwarning:
import io
diff --git a/doc/source/whatsnew/v1.3.0.rst b/doc/source/whatsnew/v1.3.0.rst
index 61bea198e42db..320912ec38890 100644
--- a/doc/source/whatsnew/v1.3.0.rst
+++ b/doc/source/whatsnew/v1.3.0.rst
@@ -639,7 +639,7 @@ Deprecations
- Deprecated using :func:`merge` or :func:`join` on a different number of levels (:issue:`34862`)
- Deprecated the use of ``**kwargs`` in :class:`.ExcelWriter`; use the keyword argument ``engine_kwargs`` instead (:issue:`40430`)
- Deprecated the ``level`` keyword for :class:`DataFrame` and :class:`Series` aggregations; use groupby instead (:issue:`39983`)
-- The ``inplace`` parameter of :meth:`Categorical.remove_categories`, :meth:`Categorical.add_categories`, :meth:`Categorical.reorder_categories`, :meth:`Categorical.rename_categories` is deprecated and will be removed in a future version (:issue:`37643`)
+- The ``inplace`` parameter of :meth:`Categorical.remove_categories`, :meth:`Categorical.add_categories`, :meth:`Categorical.reorder_categories`, :meth:`Categorical.rename_categories`, :meth:`Categorical.set_categories` is deprecated and will be removed in a future version (:issue:`37643`)
- Deprecated :func:`merge` producing duplicated columns through the ``suffixes`` keyword and already existing columns (:issue:`22818`)
.. ---------------------------------------------------------------------------
diff --git a/pandas/core/arrays/categorical.py b/pandas/core/arrays/categorical.py
index 7cddfef3d4292..7b653bf84a466 100644
--- a/pandas/core/arrays/categorical.py
+++ b/pandas/core/arrays/categorical.py
@@ -882,7 +882,9 @@ def as_unordered(self, inplace=False):
inplace = validate_bool_kwarg(inplace, "inplace")
return self.set_ordered(False, inplace=inplace)
- def set_categories(self, new_categories, ordered=None, rename=False, inplace=False):
+ def set_categories(
+ self, new_categories, ordered=None, rename=False, inplace=no_default
+ ):
"""
Set the categories to the specified new_categories.
@@ -916,6 +918,8 @@ def set_categories(self, new_categories, ordered=None, rename=False, inplace=Fal
Whether or not to reorder the categories in-place or return a copy
of this categorical with reordered categories.
+ .. deprecated:: 1.3.0
+
Returns
-------
Categorical with reordered categories or None if inplace.
@@ -933,6 +937,18 @@ def set_categories(self, new_categories, ordered=None, rename=False, inplace=Fal
remove_categories : Remove the specified categories.
remove_unused_categories : Remove categories which are not used.
"""
+ if inplace is not no_default:
+ warn(
+ "The `inplace` parameter in pandas.Categorical."
+ "set_categories is deprecated and will be removed in "
+ "a future version. Removing unused categories will always "
+ "return a new Categorical object.",
+ FutureWarning,
+ stacklevel=2,
+ )
+ else:
+ inplace = False
+
inplace = validate_bool_kwarg(inplace, "inplace")
if ordered is None:
ordered = self.dtype.ordered
@@ -1101,7 +1117,10 @@ def reorder_categories(self, new_categories, ordered=None, inplace=no_default):
raise ValueError(
"items in new_categories are not the same as in old categories"
)
- return self.set_categories(new_categories, ordered=ordered, inplace=inplace)
+
+ with catch_warnings():
+ simplefilter("ignore")
+ return self.set_categories(new_categories, ordered=ordered, inplace=inplace)
def add_categories(self, new_categories, inplace=no_default):
"""
@@ -1231,9 +1250,11 @@ def remove_categories(self, removals, inplace=no_default):
if len(not_included) != 0:
raise ValueError(f"removals must all be in old categories: {not_included}")
- return self.set_categories(
- new_categories, ordered=self.ordered, rename=False, inplace=inplace
- )
+ with catch_warnings():
+ simplefilter("ignore")
+ return self.set_categories(
+ new_categories, ordered=self.ordered, rename=False, inplace=inplace
+ )
def remove_unused_categories(self, inplace=no_default):
"""
diff --git a/pandas/tests/arrays/categorical/test_analytics.py b/pandas/tests/arrays/categorical/test_analytics.py
index 7bb86987456f1..c0287df1694e9 100644
--- a/pandas/tests/arrays/categorical/test_analytics.py
+++ b/pandas/tests/arrays/categorical/test_analytics.py
@@ -314,7 +314,9 @@ def test_validate_inplace_raises(self, value):
cat.as_unordered(inplace=value)
with pytest.raises(ValueError, match=msg):
- cat.set_categories(["X", "Y", "Z"], rename=True, inplace=value)
+ with tm.assert_produces_warning(FutureWarning):
+ # issue #37643 inplace kwarg deprecated
+ cat.set_categories(["X", "Y", "Z"], rename=True, inplace=value)
with pytest.raises(ValueError, match=msg):
with tm.assert_produces_warning(FutureWarning):
diff --git a/pandas/tests/arrays/categorical/test_api.py b/pandas/tests/arrays/categorical/test_api.py
index 10e29dc82c050..a063491cd08fa 100644
--- a/pandas/tests/arrays/categorical/test_api.py
+++ b/pandas/tests/arrays/categorical/test_api.py
@@ -229,7 +229,10 @@ def test_set_categories(self):
exp_categories = Index(["c", "b", "a"])
exp_values = np.array(["a", "b", "c", "a"], dtype=np.object_)
- res = cat.set_categories(["c", "b", "a"], inplace=True)
+ with tm.assert_produces_warning(FutureWarning):
+ # issue #37643 inplace kwarg deprecated
+ res = cat.set_categories(["c", "b", "a"], inplace=True)
+
tm.assert_index_equal(cat.categories, exp_categories)
tm.assert_numpy_array_equal(cat.__array__(), exp_values)
assert res is None
@@ -439,7 +442,11 @@ def test_describe(self):
# check unused categories
cat = self.factor.copy()
- cat.set_categories(["a", "b", "c", "d"], inplace=True)
+
+ with tm.assert_produces_warning(FutureWarning):
+ # issue #37643 inplace kwarg deprecated
+ cat.set_categories(["a", "b", "c", "d"], inplace=True)
+
desc = cat.describe()
exp_index = CategoricalIndex(
@@ -475,7 +482,11 @@ def test_describe(self):
def test_set_categories_inplace(self):
cat = self.factor.copy()
- cat.set_categories(["a", "b", "c", "d"], inplace=True)
+
+ with tm.assert_produces_warning(FutureWarning):
+ # issue #37643 inplace kwarg deprecated
+ cat.set_categories(["a", "b", "c", "d"], inplace=True)
+
tm.assert_index_equal(cat.categories, Index(["a", "b", "c", "d"]))
diff --git a/pandas/tests/frame/indexing/test_setitem.py b/pandas/tests/frame/indexing/test_setitem.py
index 4004e595c832f..204c7648ac2f7 100644
--- a/pandas/tests/frame/indexing/test_setitem.py
+++ b/pandas/tests/frame/indexing/test_setitem.py
@@ -887,9 +887,11 @@ def test_setitem_mask_categorical(self):
df = DataFrame({"cats": catsf, "values": valuesf}, index=idxf)
exp_fancy = exp_multi_row.copy()
- return_value = exp_fancy["cats"].cat.set_categories(
- ["a", "b", "c"], inplace=True
- )
+ with tm.assert_produces_warning(FutureWarning, check_stacklevel=False):
+ # issue #37643 inplace kwarg deprecated
+ return_value = exp_fancy["cats"].cat.set_categories(
+ ["a", "b", "c"], inplace=True
+ )
assert return_value is None
mask = df["cats"] == "c"
diff --git a/pandas/tests/series/accessors/test_cat_accessor.py b/pandas/tests/series/accessors/test_cat_accessor.py
index 8a4c4d56e264d..7aea45755f940 100644
--- a/pandas/tests/series/accessors/test_cat_accessor.py
+++ b/pandas/tests/series/accessors/test_cat_accessor.py
@@ -48,7 +48,11 @@ def test_cat_accessor(self):
assert not ser.cat.ordered, False
exp = Categorical(["a", "b", np.nan, "a"], categories=["b", "a"])
- return_value = ser.cat.set_categories(["b", "a"], inplace=True)
+
+ with tm.assert_produces_warning(FutureWarning, check_stacklevel=False):
+ # issue #37643 inplace kwarg deprecated
+ return_value = ser.cat.set_categories(["b", "a"], inplace=True)
+
assert return_value is None
tm.assert_categorical_equal(ser.values, exp)
| - [x] xref #37643
- [x] tests added / passed
- [x] passes `black pandas`
- [x] passes `git diff upstream/master -u -- "*.py" | flake8 --diff`
- [x] whatsnew entry
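The diff above uses a `no_default` sentinel so the `FutureWarning` fires only when a caller passes `inplace` explicitly. A minimal stand-alone sketch of that idiom (with a local sentinel standing in for pandas' `lib.no_default`; the function body is a placeholder):

```python
import warnings

# Local stand-in for pandas._libs.lib.no_default
no_default = object()


def set_categories(new_categories, inplace=no_default):
    """Sketch: warn only when the deprecated keyword is passed explicitly."""
    if inplace is not no_default:
        warnings.warn(
            "The `inplace` parameter is deprecated and will be removed "
            "in a future version.",
            FutureWarning,
            stacklevel=2,
        )
    else:
        # callers who never mention `inplace` get the new default silently
        inplace = False
    return list(new_categories), inplace
```

Internal callers such as `reorder_categories` then wrap the forwarded call in `catch_warnings()`/`simplefilter("ignore")`, as in the diff, so the user sees only one warning rather than one per delegation level.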
| https://api.github.com/repos/pandas-dev/pandas/pulls/41307 | 2021-05-04T14:33:56Z | 2021-05-04T19:40:36Z | 2021-05-04T19:40:36Z | 2021-05-04T19:40:41Z |
[ArrowStringArray] CLN: assorted cleanup | diff --git a/pandas/core/arrays/string_arrow.py b/pandas/core/arrays/string_arrow.py
index 72a2ab8a1b80a..501484a98ded3 100644
--- a/pandas/core/arrays/string_arrow.py
+++ b/pandas/core/arrays/string_arrow.py
@@ -8,7 +8,6 @@
Sequence,
cast,
)
-import warnings
import numpy as np
@@ -766,20 +765,13 @@ def _str_map(self, f, na_value=None, dtype: Dtype | None = None):
# -> We don't know the result type. E.g. `.get` can return anything.
return lib.map_infer_mask(arr, f, mask.view("uint8"))
- def _str_contains(self, pat, case=True, flags=0, na=np.nan, regex=True):
+ def _str_contains(self, pat, case=True, flags=0, na=np.nan, regex: bool = True):
if flags:
return super()._str_contains(pat, case, flags, na, regex)
if regex:
# match_substring_regex added in pyarrow 4.0.0
if hasattr(pc, "match_substring_regex") and case:
- if re.compile(pat).groups:
- warnings.warn(
- "This pattern has match groups. To actually get the "
- "groups, use str.extract.",
- UserWarning,
- stacklevel=3,
- )
result = pc.match_substring_regex(self._data, pat)
else:
return super()._str_contains(pat, case, flags, na, regex)
@@ -816,48 +808,31 @@ def _str_endswith(self, pat, na=None):
return super()._str_endswith(pat, na)
def _str_isalnum(self):
- if hasattr(pc, "utf8_is_alnum"):
- result = pc.utf8_is_alnum(self._data)
- return BooleanDtype().__from_arrow__(result)
- else:
- return super()._str_isalnum()
+ result = pc.utf8_is_alnum(self._data)
+ return BooleanDtype().__from_arrow__(result)
def _str_isalpha(self):
- if hasattr(pc, "utf8_is_alpha"):
- result = pc.utf8_is_alpha(self._data)
- return BooleanDtype().__from_arrow__(result)
- else:
- return super()._str_isalpha()
+ result = pc.utf8_is_alpha(self._data)
+ return BooleanDtype().__from_arrow__(result)
def _str_isdecimal(self):
- if hasattr(pc, "utf8_is_decimal"):
- result = pc.utf8_is_decimal(self._data)
- return BooleanDtype().__from_arrow__(result)
- else:
- return super()._str_isdecimal()
+ result = pc.utf8_is_decimal(self._data)
+ return BooleanDtype().__from_arrow__(result)
def _str_isdigit(self):
- if hasattr(pc, "utf8_is_digit"):
- result = pc.utf8_is_digit(self._data)
- return BooleanDtype().__from_arrow__(result)
- else:
- return super()._str_isdigit()
+ result = pc.utf8_is_digit(self._data)
+ return BooleanDtype().__from_arrow__(result)
def _str_islower(self):
- if hasattr(pc, "utf8_is_lower"):
- result = pc.utf8_is_lower(self._data)
- return BooleanDtype().__from_arrow__(result)
- else:
- return super()._str_islower()
+ result = pc.utf8_is_lower(self._data)
+ return BooleanDtype().__from_arrow__(result)
def _str_isnumeric(self):
- if hasattr(pc, "utf8_is_numeric"):
- result = pc.utf8_is_numeric(self._data)
- return BooleanDtype().__from_arrow__(result)
- else:
- return super()._str_isnumeric()
+ result = pc.utf8_is_numeric(self._data)
+ return BooleanDtype().__from_arrow__(result)
def _str_isspace(self):
+ # utf8_is_space added in pyarrow 2.0.0
if hasattr(pc, "utf8_is_space"):
result = pc.utf8_is_space(self._data)
return BooleanDtype().__from_arrow__(result)
@@ -865,18 +840,12 @@ def _str_isspace(self):
return super()._str_isspace()
def _str_istitle(self):
- if hasattr(pc, "utf8_is_title"):
- result = pc.utf8_is_title(self._data)
- return BooleanDtype().__from_arrow__(result)
- else:
- return super()._str_istitle()
+ result = pc.utf8_is_title(self._data)
+ return BooleanDtype().__from_arrow__(result)
def _str_isupper(self):
- if hasattr(pc, "utf8_is_upper"):
- result = pc.utf8_is_upper(self._data)
- return BooleanDtype().__from_arrow__(result)
- else:
- return super()._str_isupper()
+ result = pc.utf8_is_upper(self._data)
+ return BooleanDtype().__from_arrow__(result)
def _str_lower(self):
return type(self)(pc.utf8_lower(self._data))
@@ -886,27 +855,33 @@ def _str_upper(self):
def _str_strip(self, to_strip=None):
if to_strip is None:
+ # utf8_trim_whitespace added in pyarrow 4.0.0
if hasattr(pc, "utf8_trim_whitespace"):
return type(self)(pc.utf8_trim_whitespace(self._data))
else:
+ # utf8_trim added in pyarrow 4.0.0
if hasattr(pc, "utf8_trim"):
return type(self)(pc.utf8_trim(self._data, characters=to_strip))
return super()._str_strip(to_strip)
def _str_lstrip(self, to_strip=None):
if to_strip is None:
+ # utf8_ltrim_whitespace added in pyarrow 4.0.0
if hasattr(pc, "utf8_ltrim_whitespace"):
return type(self)(pc.utf8_ltrim_whitespace(self._data))
else:
+ # utf8_ltrim added in pyarrow 4.0.0
if hasattr(pc, "utf8_ltrim"):
return type(self)(pc.utf8_ltrim(self._data, characters=to_strip))
return super()._str_lstrip(to_strip)
def _str_rstrip(self, to_strip=None):
if to_strip is None:
+ # utf8_rtrim_whitespace added in pyarrow 4.0.0
if hasattr(pc, "utf8_rtrim_whitespace"):
return type(self)(pc.utf8_rtrim_whitespace(self._data))
else:
+ # utf8_rtrim added in pyarrow 4.0.0
if hasattr(pc, "utf8_rtrim"):
return type(self)(pc.utf8_rtrim(self._data, characters=to_strip))
return super()._str_rstrip(to_strip)
diff --git a/pandas/core/strings/accessor.py b/pandas/core/strings/accessor.py
index 85a58d3d99795..8f971eb33f1dc 100644
--- a/pandas/core/strings/accessor.py
+++ b/pandas/core/strings/accessor.py
@@ -195,8 +195,6 @@ def _validate(data):
-------
dtype : inferred dtype of data
"""
- from pandas import StringDtype
-
if isinstance(data, ABCMultiIndex):
raise AttributeError(
"Can only use .str accessor with Index, not MultiIndex"
@@ -208,10 +206,6 @@ def _validate(data):
values = getattr(data, "values", data) # Series / Index
values = getattr(values, "categories", values) # categorical / normal
- # explicitly allow StringDtype
- if isinstance(values.dtype, StringDtype):
- return "string"
-
inferred_dtype = lib.infer_dtype(values, skipna=True)
if inferred_dtype not in allowed_types:
@@ -1132,6 +1126,14 @@ def contains(self, pat, case=True, flags=0, na=None, regex=True):
4 False
dtype: bool
"""
+ if regex and re.compile(pat).groups:
+ warnings.warn(
+ "This pattern has match groups. To actually get the "
+ "groups, use str.extract.",
+ UserWarning,
+ stacklevel=3,
+ )
+
result = self._data.array._str_contains(pat, case, flags, na, regex)
return self._wrap_result(result, fill_value=na, returns_string=False)
diff --git a/pandas/core/strings/object_array.py b/pandas/core/strings/object_array.py
index b794690ccc5af..a47a6d49a4ba1 100644
--- a/pandas/core/strings/object_array.py
+++ b/pandas/core/strings/object_array.py
@@ -7,7 +7,6 @@
Union,
)
import unicodedata
-import warnings
import numpy as np
@@ -115,22 +114,14 @@ def _str_pad(self, width, side="left", fillchar=" "):
raise ValueError("Invalid side")
return self._str_map(f)
- def _str_contains(self, pat, case=True, flags=0, na=np.nan, regex=True):
+ def _str_contains(self, pat, case=True, flags=0, na=np.nan, regex: bool = True):
if regex:
if not case:
flags |= re.IGNORECASE
- regex = re.compile(pat, flags=flags)
+ pat = re.compile(pat, flags=flags)
- if regex.groups > 0:
- warnings.warn(
- "This pattern has match groups. To actually get the "
- "groups, use str.extract.",
- UserWarning,
- stacklevel=3,
- )
-
- f = lambda x: regex.search(x) is not None
+ f = lambda x: pat.search(x) is not None
else:
if case:
f = lambda x: pat in x
diff --git a/pandas/tests/strings/conftest.py b/pandas/tests/strings/conftest.py
index 4fedbee91f649..17703d970e29e 100644
--- a/pandas/tests/strings/conftest.py
+++ b/pandas/tests/strings/conftest.py
@@ -1,6 +1,8 @@
import numpy as np
import pytest
+import pandas.util._test_decorators as td
+
from pandas import Series
from pandas.core import strings as strings
@@ -173,3 +175,24 @@ def any_allowed_skipna_inferred_dtype(request):
# correctness of inference tested in tests/dtypes/test_inference.py
return inferred_dtype, values
+
+
+@pytest.fixture(
+ params=[
+ "object",
+ "string",
+ pytest.param(
+ "arrow_string", marks=td.skip_if_no("pyarrow", min_version="1.0.0")
+ ),
+ ]
+)
+def any_string_dtype(request):
+ """
+ Parametrized fixture for string dtypes.
+ * 'object'
+ * 'string'
+ * 'arrow_string'
+ """
+ from pandas.core.arrays.string_arrow import ArrowStringDtype # noqa: F401
+
+ return request.param
diff --git a/pandas/tests/strings/test_find_replace.py b/pandas/tests/strings/test_find_replace.py
index 0c54042d983ad..b547af6b4cedd 100644
--- a/pandas/tests/strings/test_find_replace.py
+++ b/pandas/tests/strings/test_find_replace.py
@@ -4,8 +4,6 @@
import numpy as np
import pytest
-import pandas.util._test_decorators as td
-
import pandas as pd
from pandas import (
Index,
@@ -14,27 +12,6 @@
)
-@pytest.fixture(
- params=[
- "object",
- "string",
- pytest.param(
- "arrow_string", marks=td.skip_if_no("pyarrow", min_version="1.0.0")
- ),
- ]
-)
-def any_string_dtype(request):
- """
- Parametrized fixture for string dtypes.
- * 'object'
- * 'string'
- * 'arrow_string'
- """
- from pandas.core.arrays.string_arrow import ArrowStringDtype # noqa: F401
-
- return request.param
-
-
def test_contains(any_string_dtype):
values = np.array(
["foo", np.nan, "fooommm__foo", "mmm_", "foommm[_]+bar"], dtype=np.object_
@@ -751,6 +728,7 @@ def test_flags_kwarg(any_string_dtype):
result = data.str.count(pat, flags=re.IGNORECASE)
assert result[0] == 1
- with tm.assert_produces_warning(UserWarning):
+ msg = "This pattern has match groups"
+ with tm.assert_produces_warning(UserWarning, match=msg):
result = data.str.contains(pat, flags=re.IGNORECASE)
assert result[0]
diff --git a/pandas/tests/strings/test_strings.py b/pandas/tests/strings/test_strings.py
index f218d5333b415..38e714a7bc5c7 100644
--- a/pandas/tests/strings/test_strings.py
+++ b/pandas/tests/strings/test_strings.py
@@ -6,8 +6,6 @@
import numpy as np
import pytest
-import pandas.util._test_decorators as td
-
from pandas import (
DataFrame,
Index,
@@ -19,27 +17,6 @@
import pandas._testing as tm
-@pytest.fixture(
- params=[
- "object",
- "string",
- pytest.param(
- "arrow_string", marks=td.skip_if_no("pyarrow", min_version="1.0.0")
- ),
- ]
-)
-def any_string_dtype(request):
- """
- Parametrized fixture for string dtypes.
- * 'object'
- * 'string'
- * 'arrow_string'
- """
- from pandas.core.arrays.string_arrow import ArrowStringDtype # noqa: F401
-
- return request.param
-
-
def assert_series_or_index_equal(left, right):
if isinstance(left, Series):
tm.assert_series_equal(left, right)
| separate commits for cleanups
remove `hasattr` checks for str methods available in pyarrow 1.0.0
add comments to indicate the pyarrow version a method was added in
de-duplicate fixture
de-duplicate the warning from str.contains for patterns with capture groups
remove special casing StringDtype | https://api.github.com/repos/pandas-dev/pandas/pulls/41306 | 2021-05-04T12:27:42Z | 2021-05-04T16:37:46Z | 2021-05-04T16:37:46Z | 2021-05-04T17:58:37Z |
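The warning de-duplicated in the PR above hinges on `re.compile(pat).groups`, which reports the number of capture groups in a compiled pattern. A small stand-alone sketch of the check that was hoisted into the accessor (the function name is illustrative, not pandas' API):

```python
import re
import warnings


def check_match_groups(pat, regex=True):
    """Warn, as str.contains does, if a regex pattern has capture groups."""
    # Pattern.groups counts capturing groups only; (?:...) contributes zero.
    if regex and re.compile(pat).groups:
        warnings.warn(
            "This pattern has match groups. To actually get the "
            "groups, use str.extract.",
            UserWarning,
            stacklevel=3,
        )
        return True
    return False
```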
ASV: add benchmarks to test formatting func in Styler. | diff --git a/asv_bench/benchmarks/io/style.py b/asv_bench/benchmarks/io/style.py
index e4369d67ca67e..a01610a69278b 100644
--- a/asv_bench/benchmarks/io/style.py
+++ b/asv_bench/benchmarks/io/style.py
@@ -1,6 +1,9 @@
import numpy as np
-from pandas import DataFrame
+from pandas import (
+ DataFrame,
+ IndexSlice,
+)
class Render:
@@ -31,6 +34,14 @@ def peakmem_classes_render(self, cols, rows):
self._style_classes()
self.st._render_html()
+ def time_format_render(self, cols, rows):
+ self._style_format()
+ self.st.render()
+
+ def peakmem_format_render(self, cols, rows):
+ self._style_format()
+ self.st.render()
+
def _style_apply(self):
def _apply_func(s):
return [
@@ -43,3 +54,12 @@ def _style_classes(self):
classes = self.df.applymap(lambda v: ("cls-1" if v > 0 else ""))
classes.index, classes.columns = self.df.index, self.df.columns
self.st = self.df.style.set_td_classes(classes)
+
+ def _style_format(self):
+ ic = int(len(self.df.columns) / 4 * 3)
+ ir = int(len(self.df.index) / 4 * 3)
+ # apply a formatting function
+ # subset is flexible but hinders vectorised solutions
+ self.st = self.df.style.format(
+ "{:,.3f}", subset=IndexSlice["row_1":f"row_{ir}", "float_1":f"float_{ic}"]
+ )
| The main time in rendering Styler is consumed by:
- formatting values for display
- adding cell styles via apply / applymap
- adding cell styles via set_td_classes
We have benchmarks for the latter two but not the first. Any future changes to the formatting function would benefit from benchmarks (I would use them :) )
Especially if, for example, we try to merge Styler formatting with the float formatter.
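For context, asv discovers benchmarks by naming convention: `setup` runs before measurement, `params`/`param_names` parametrise every method, and methods prefixed `time_*`/`peakmem_*` are timed or memory-profiled. A pandas-free sketch of that shape (the workload below is only a placeholder for building and rendering the Styler):

```python
class Render:
    # asv pairs each parameter tuple with every time_/peakmem_ method
    params = [[12, 24], [100, 200]]
    param_names = ["cols", "rows"]

    def setup(self, cols, rows):
        # placeholder standing in for constructing the DataFrame/Styler
        self.data = [[float(c * r) for c in range(cols)] for r in range(rows)]

    def time_format_render(self, cols, rows):
        # stands in for self.st.render(): format every cell for display
        [f"{v:,.3f}" for row in self.data for v in row]
```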
| https://api.github.com/repos/pandas-dev/pandas/pulls/41302 | 2021-05-04T10:43:44Z | 2021-05-04T12:44:09Z | 2021-05-04T12:44:09Z | 2021-05-06T07:35:51Z |
COMPAT: frame round error msg for py310 | diff --git a/pandas/core/frame.py b/pandas/core/frame.py
index 6d3042507d930..d1ff69f16d993 100644
--- a/pandas/core/frame.py
+++ b/pandas/core/frame.py
@@ -9206,9 +9206,12 @@ def _series_round(s, decimals):
nv.validate_round(args, kwargs)
if isinstance(decimals, (dict, Series)):
- if isinstance(decimals, Series):
- if not decimals.index.is_unique:
- raise ValueError("Index of decimals must be unique")
+ if isinstance(decimals, Series) and not decimals.index.is_unique:
+ raise ValueError("Index of decimals must be unique")
+ if is_dict_like(decimals) and not all(
+ is_integer(value) for _, value in decimals.items()
+ ):
+ raise TypeError("Values in decimals must be integers")
new_cols = list(_dict_round(self, decimals))
elif is_integer(decimals):
# Dispatch to Series.round
diff --git a/pandas/tests/frame/methods/test_round.py b/pandas/tests/frame/methods/test_round.py
index ebe33922be541..dd9206940bcd6 100644
--- a/pandas/tests/frame/methods/test_round.py
+++ b/pandas/tests/frame/methods/test_round.py
@@ -62,13 +62,12 @@ def test_round(self):
# float input to `decimals`
non_int_round_dict = {"col1": 1, "col2": 0.5}
- msg = "integer argument expected, got float"
+ msg = "Values in decimals must be integers"
with pytest.raises(TypeError, match=msg):
df.round(non_int_round_dict)
# String input
non_int_round_dict = {"col1": 1, "col2": "foo"}
- msg = r"an integer is required \(got type str\)"
with pytest.raises(TypeError, match=msg):
df.round(non_int_round_dict)
@@ -78,7 +77,6 @@ def test_round(self):
# List input
non_int_round_dict = {"col1": 1, "col2": [1, 2]}
- msg = r"an integer is required \(got type list\)"
with pytest.raises(TypeError, match=msg):
df.round(non_int_round_dict)
@@ -106,7 +104,6 @@ def test_round(self):
# nan in Series round
nan_round_Series = Series({"col1": np.nan, "col2": 1})
- msg = "integer argument expected, got float"
with pytest.raises(TypeError, match=msg):
df.round(nan_round_Series)
| Not sure if this is the desired way to address this.
```
________________________ TestDataFrameRound.test_round _________________________
self = <pandas.tests.frame.methods.test_round.TestDataFrameRound object at 0x7fcdd089eb90>
def test_round(self):
# GH#2665
# Test that rounding an empty DataFrame does nothing
df = DataFrame()
tm.assert_frame_equal(df, df.round())
# Here's the test frame we'll be working with
df = DataFrame({"col1": [1.123, 2.123, 3.123], "col2": [1.234, 2.234, 3.234]})
# Default round to integer (i.e. decimals=0)
expected_rounded = DataFrame({"col1": [1.0, 2.0, 3.0], "col2": [1.0, 2.0, 3.0]})
tm.assert_frame_equal(df.round(), expected_rounded)
# Round with an integer
decimals = 2
expected_rounded = DataFrame(
{"col1": [1.12, 2.12, 3.12], "col2": [1.23, 2.23, 3.23]}
)
tm.assert_frame_equal(df.round(decimals), expected_rounded)
# This should also work with np.round (since np.round dispatches to
# df.round)
tm.assert_frame_equal(np.round(df, decimals), expected_rounded)
# Round with a list
round_list = [1, 2]
msg = "decimals must be an integer, a dict-like or a Series"
with pytest.raises(TypeError, match=msg):
df.round(round_list)
# Round with a dictionary
expected_rounded = DataFrame(
{"col1": [1.1, 2.1, 3.1], "col2": [1.23, 2.23, 3.23]}
)
round_dict = {"col1": 1, "col2": 2}
tm.assert_frame_equal(df.round(round_dict), expected_rounded)
# Incomplete dict
expected_partially_rounded = DataFrame(
{"col1": [1.123, 2.123, 3.123], "col2": [1.2, 2.2, 3.2]}
)
partial_round_dict = {"col2": 1}
tm.assert_frame_equal(df.round(partial_round_dict), expected_partially_rounded)
# Dict with unknown elements
wrong_round_dict = {"col3": 2, "col2": 1}
tm.assert_frame_equal(df.round(wrong_round_dict), expected_partially_rounded)
# float input to `decimals`
non_int_round_dict = {"col1": 1, "col2": 0.5}
msg = "integer argument expected, got float"
with pytest.raises(TypeError, match=msg):
> df.round(non_int_round_dict)
pandas/tests/frame/methods/test_round.py:67:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = col1 col2
0 1.123 1.234
1 2.123 2.234
2 3.123 3.234
decimals = {'col1': 1, 'col2': 0.5}, args = (), kwargs = {}
concat = <function concat at 0x7fce2c668e50>
_dict_round = <function DataFrame.round.<locals>._dict_round at 0x7fcdd13728c0>
def round(
self, decimals: int | dict[IndexLabel, int] | Series = 0, *args, **kwargs
) -> DataFrame:
"""
Round a DataFrame to a variable number of decimal places.
Parameters
----------
decimals : int, dict, Series
Number of decimal places to round each column to. If an int is
given, round each column to the same number of places.
Otherwise dict and Series round to variable numbers of places.
Column names should be in the keys if `decimals` is a
dict-like, or in the index if `decimals` is a Series. Any
columns not included in `decimals` will be left as is. Elements
of `decimals` which are not columns of the input will be
ignored.
*args
Additional keywords have no effect but might be accepted for
compatibility with numpy.
**kwargs
Additional keywords have no effect but might be accepted for
compatibility with numpy.
Returns
-------
DataFrame
A DataFrame with the affected columns rounded to the specified
number of decimal places.
See Also
--------
numpy.around : Round a numpy array to the given number of decimals.
Series.round : Round a Series to the given number of decimals.
Examples
--------
>>> df = pd.DataFrame([(.21, .32), (.01, .67), (.66, .03), (.21, .18)],
... columns=['dogs', 'cats'])
>>> df
dogs cats
0 0.21 0.32
1 0.01 0.67
2 0.66 0.03
3 0.21 0.18
By providing an integer each column is rounded to the same number
of decimal places
>>> df.round(1)
dogs cats
0 0.2 0.3
1 0.0 0.7
2 0.7 0.0
3 0.2 0.2
With a dict, the number of places for specific columns can be
specified with the column names as key and the number of decimal
places as value
>>> df.round({'dogs': 1, 'cats': 0})
dogs cats
0 0.2 0.0
1 0.0 1.0
2 0.7 0.0
3 0.2 0.0
Using a Series, the number of places for specific columns can be
specified with the column names as index and the number of
decimal places as value
>>> decimals = pd.Series([0, 1], index=['cats', 'dogs'])
>>> df.round(decimals)
dogs cats
0 0.2 0.0
1 0.0 1.0
2 0.7 0.0
3 0.2 0.0
"""
from pandas.core.reshape.concat import concat
def _dict_round(df, decimals):
for col, vals in df.items():
try:
yield _series_round(vals, decimals[col])
except KeyError:
yield vals
def _series_round(s, decimals):
if is_integer_dtype(s) or is_float_dtype(s):
return s.round(decimals)
return s
nv.validate_round(args, kwargs)
if isinstance(decimals, (dict, Series)):
if isinstance(decimals, Series):
if not decimals.index.is_unique:
raise ValueError("Index of decimals must be unique")
> new_cols = list(_dict_round(self, decimals))
pandas/core/frame.py:9205:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
df = col1 col2
0 1.123 1.234
1 2.123 2.234
2 3.123 3.234
decimals = {'col1': 1, 'col2': 0.5}
def _dict_round(df, decimals):
for col, vals in df.items():
try:
> yield _series_round(vals, decimals[col])
pandas/core/frame.py:9190:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
s = 0 1.234
1 2.234
2 3.234
Name: col2, dtype: float64, decimals = 0.5
def _series_round(s, decimals):
if is_integer_dtype(s) or is_float_dtype(s):
> return s.round(decimals)
pandas/core/frame.py:9196:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = 0 1.234
1 2.234
2 3.234
Name: col2, dtype: float64
decimals = 0.5, args = (), kwargs = {}
def round(self, decimals=0, *args, **kwargs) -> Series:
"""
Round each value in a Series to the given number of decimals.
Parameters
----------
decimals : int, default 0
Number of decimal places to round to. If decimals is negative,
it specifies the number of positions to the left of the decimal point.
*args, **kwargs
Additional arguments and keywords have no effect but might be
accepted for compatibility with NumPy.
Returns
-------
Series
Rounded values of the Series.
See Also
--------
numpy.around : Round values of an np.array.
DataFrame.round : Round values of a DataFrame.
Examples
--------
>>> s = pd.Series([0.1, 1.3, 2.7])
>>> s.round()
0 0.0
1 1.0
2 3.0
dtype: float64
"""
nv.validate_round(args, kwargs)
> result = self._values.round(decimals)
E TypeError: 'float' object cannot be interpreted as an integer
pandas/core/series.py:2361: TypeError
During handling of the above exception, another exception occurred:
self = <pandas.tests.frame.methods.test_round.TestDataFrameRound object at 0x7fcdd089eb90>
> ???
E AssertionError: Regex pattern 'integer argument expected, got float' does not match "'float' object cannot be interpreted as an integer".
pandas/tests/frame/methods/test_round.py:-1: AssertionError
```
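The fix above boils down to a small pre-validation step: check the dict-like `decimals` values before dispatching, so pandas raises its own stable message instead of whichever `TypeError` text the Python version happens to emit. A stand-alone sketch (using a plain `isinstance` check standing in for pandas' `is_integer`; bool is excluded explicitly because it subclasses `int`):

```python
def validate_decimals(decimals):
    """Raise the stable pandas-style error if any decimals value is not an int."""
    if not all(
        isinstance(value, int) and not isinstance(value, bool)
        for value in decimals.values()
    ):
        raise TypeError("Values in decimals must be integers")
    return decimals
```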
| https://api.github.com/repos/pandas-dev/pandas/pulls/41301 | 2021-05-04T02:47:57Z | 2021-05-06T23:22:43Z | 2021-05-06T23:22:43Z | 2022-11-18T02:21:39Z |
CLN: more descriptive names, annotations in groupby | diff --git a/pandas/core/groupby/generic.py b/pandas/core/groupby/generic.py
index 4f60660dfb499..324aef3cd5435 100644
--- a/pandas/core/groupby/generic.py
+++ b/pandas/core/groupby/generic.py
@@ -524,7 +524,8 @@ def _aggregate_named(self, func, *args, **kwargs):
for name, group in self:
# Each step of this loop corresponds to
# libreduction._BaseGrouper._apply_to_group
- group.name = name # NB: libreduction does not pin name
+ # NB: libreduction does not pin name
+ object.__setattr__(group, "name", name)
output = func(group, *args, **kwargs)
output = libreduction.extract_result(output)
@@ -567,9 +568,9 @@ def transform(self, func, *args, engine=None, engine_kwargs=None, **kwargs):
# Temporarily set observed for dealing with categoricals.
with com.temp_setattr(self, "observed", True):
result = getattr(self, func)(*args, **kwargs)
- return self._transform_fast(result)
+ return self._wrap_transform_fast_result(result)
- def _transform_general(self, func, *args, **kwargs):
+ def _transform_general(self, func: Callable, *args, **kwargs) -> Series:
"""
Transform with a callable func`.
"""
@@ -599,7 +600,7 @@ def _transform_general(self, func, *args, **kwargs):
result.name = self._selected_obj.name
return result
- def _transform_fast(self, result) -> Series:
+ def _wrap_transform_fast_result(self, result: Series) -> Series:
"""
fast version of transform, only applicable to
builtin/cythonizable functions
@@ -1436,11 +1437,11 @@ def transform(self, func, *args, engine=None, engine_kwargs=None, **kwargs):
if isinstance(result, DataFrame) and result.columns.equals(
self._obj_with_exclusions.columns
):
- return self._transform_fast(result)
+ return self._wrap_transform_fast_result(result)
return self._transform_general(func, *args, **kwargs)
- def _transform_fast(self, result: DataFrame) -> DataFrame:
+ def _wrap_transform_fast_result(self, result: DataFrame) -> DataFrame:
"""
Fast transform path for aggregations
"""
@@ -1449,14 +1450,9 @@ def _transform_fast(self, result: DataFrame) -> DataFrame:
# for each col, reshape to size of original frame by take operation
ids, _, _ = self.grouper.group_info
result = result.reindex(self.grouper.result_index, copy=False)
- output = [
- algorithms.take_nd(result.iloc[:, i].values, ids)
- for i, _ in enumerate(result.columns)
- ]
-
- return self.obj._constructor._from_arrays(
- output, columns=result.columns, index=obj.index
- )
+ output = result.take(ids, axis=0)
+ output.index = obj.index
+ return output
def _define_paths(self, func, *args, **kwargs):
if isinstance(func, str):
@@ -1653,7 +1649,7 @@ def _gotitem(self, key, ndim: int, subset=None):
raise AssertionError("invalid ndim for _gotitem")
- def _wrap_frame_output(self, result, obj: DataFrame) -> DataFrame:
+ def _wrap_frame_output(self, result: dict, obj: DataFrame) -> DataFrame:
result_index = self.grouper.levels[0]
if self.axis == 0:
diff --git a/pandas/core/groupby/groupby.py b/pandas/core/groupby/groupby.py
index 620668dadc32d..4bcb1b5a19cb6 100644
--- a/pandas/core/groupby/groupby.py
+++ b/pandas/core/groupby/groupby.py
@@ -18,6 +18,7 @@ class providing the base-class of operations.
from textwrap import dedent
import types
from typing import (
+ TYPE_CHECKING,
Callable,
Generic,
Hashable,
@@ -104,6 +105,9 @@ class providing the base-class of operations.
from pandas.core.sorting import get_group_index_sorter
from pandas.core.util.numba_ import NUMBA_FUNC_CACHE
+if TYPE_CHECKING:
+ from typing import Literal
+
_common_see_also = """
See Also
--------
@@ -1989,7 +1993,7 @@ def ewm(self, *args, **kwargs):
)
@final
- def _fill(self, direction, limit=None):
+ def _fill(self, direction: Literal["ffill", "bfill"], limit=None):
"""
Shared function for `pad` and `backfill` to call Cython method.
@@ -2731,7 +2735,7 @@ def _get_cythonized_result(
name = obj.name
values = obj._values
- if numeric_only and not is_numeric_dtype(values):
+ if numeric_only and not is_numeric_dtype(values.dtype):
continue
if aggregate:
diff --git a/pandas/core/groupby/grouper.py b/pandas/core/groupby/grouper.py
index 151756b829a1d..f1762a2535ff7 100644
--- a/pandas/core/groupby/grouper.py
+++ b/pandas/core/groupby/grouper.py
@@ -249,6 +249,10 @@ class Grouper:
Freq: 17T, dtype: int64
"""
+ axis: int
+ sort: bool
+ dropna: bool
+
_attributes: tuple[str, ...] = ("key", "level", "freq", "axis", "sort")
def __new__(cls, *args, **kwargs):
@@ -260,7 +264,13 @@ def __new__(cls, *args, **kwargs):
return super().__new__(cls)
def __init__(
- self, key=None, level=None, freq=None, axis=0, sort=False, dropna=True
+ self,
+ key=None,
+ level=None,
+ freq=None,
+ axis: int = 0,
+ sort: bool = False,
+ dropna: bool = True,
):
self.key = key
self.level = level
@@ -281,11 +291,11 @@ def __init__(
def ax(self):
return self.grouper
- def _get_grouper(self, obj, validate: bool = True):
+ def _get_grouper(self, obj: FrameOrSeries, validate: bool = True):
"""
Parameters
----------
- obj : the subject object
+ obj : Series or DataFrame
validate : bool, default True
if True, validate the grouper
@@ -296,7 +306,9 @@ def _get_grouper(self, obj, validate: bool = True):
self._set_grouper(obj)
# error: Value of type variable "FrameOrSeries" of "get_grouper" cannot be
# "Optional[Any]"
- self.grouper, _, self.obj = get_grouper( # type: ignore[type-var]
+ # error: Incompatible types in assignment (expression has type "BaseGrouper",
+ # variable has type "None")
+ self.grouper, _, self.obj = get_grouper( # type: ignore[type-var,assignment]
self.obj,
[self.key],
axis=self.axis,
@@ -375,15 +387,19 @@ def _set_grouper(self, obj: FrameOrSeries, sort: bool = False):
ax = ax.take(indexer)
obj = obj.take(indexer, axis=self.axis)
- self.obj = obj
- self.grouper = ax
+ # error: Incompatible types in assignment (expression has type
+ # "FrameOrSeries", variable has type "None")
+ self.obj = obj # type: ignore[assignment]
+ # error: Incompatible types in assignment (expression has type "Index",
+ # variable has type "None")
+ self.grouper = ax # type: ignore[assignment]
return self.grouper
@final
@property
def groups(self):
- # error: Item "None" of "Optional[Any]" has no attribute "groups"
- return self.grouper.groups # type: ignore[union-attr]
+ # error: "None" has no attribute "groups"
+ return self.grouper.groups # type: ignore[attr-defined]
@final
def __repr__(self) -> str:
@@ -428,7 +444,7 @@ def __init__(
index: Index,
grouper=None,
obj: FrameOrSeries | None = None,
- name=None,
+ name: Hashable = None,
level=None,
sort: bool = True,
observed: bool = False,
@@ -478,7 +494,12 @@ def __init__(
# what key/level refer to exactly, don't need to
# check again as we have by this point converted these
# to an actual value (rather than a pd.Grouper)
- _, grouper, _ = self.grouper._get_grouper(self.obj, validate=False)
+ _, grouper, _ = self.grouper._get_grouper(
+ # error: Value of type variable "FrameOrSeries" of "_get_grouper"
+ # of "Grouper" cannot be "Optional[FrameOrSeries]"
+ self.obj, # type: ignore[type-var]
+ validate=False,
+ )
if self.name is None:
self.name = grouper.result_index.name
self.obj = self.grouper.obj
diff --git a/pandas/core/groupby/ops.py b/pandas/core/groupby/ops.py
index 975a902f49db9..e90892138f15a 100644
--- a/pandas/core/groupby/ops.py
+++ b/pandas/core/groupby/ops.py
@@ -662,6 +662,8 @@ class BaseGrouper:
"""
+ axis: Index
+
def __init__(
self,
axis: Index,
| https://api.github.com/repos/pandas-dev/pandas/pulls/41300 | 2021-05-04T02:01:14Z | 2021-05-04T16:21:53Z | 2021-05-04T16:21:53Z | 2021-05-04T16:21:53Z | |
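The `object.__setattr__(group, "name", name)` change in the diff above pins the name without triggering the subclass's `__setattr__` hook. A small illustration of that pattern; the `Guarded` class here is hypothetical and only stands in for whatever validation a `Series.__setattr__` override might perform:

```python
class Guarded:
    # a class that blocks ordinary attribute assignment, standing in
    # for validation done in a custom __setattr__ override
    def __setattr__(self, key, value):
        raise AttributeError(f"cannot set {key!r} directly")


g = Guarded()

# object.__setattr__ writes straight into the instance __dict__,
# bypassing the class-level hook entirely
object.__setattr__(g, "name", "grp")
assert g.name == "grp"
```

Normal assignment (`g.name = "grp"`) would raise, which is exactly why the groupby loop reaches for `object.__setattr__`.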
TYP: BaseWindowGroupby._grouper | diff --git a/pandas/core/window/rolling.py b/pandas/core/window/rolling.py
index 1c85385c587a5..b51875134c614 100644
--- a/pandas/core/window/rolling.py
+++ b/pandas/core/window/rolling.py
@@ -101,6 +101,7 @@
DataFrame,
Series,
)
+ from pandas.core.groupby.ops import BaseGrouper
from pandas.core.internals import Block # noqa:F401
@@ -538,18 +539,22 @@ class BaseWindowGroupby(BaseWindow):
Provide the groupby windowing facilities.
"""
+ _grouper: BaseGrouper
+ _as_index: bool
_attributes = ["_grouper"]
def __init__(
self,
obj: FrameOrSeries,
*args,
- _grouper=None,
- _as_index=True,
+ _grouper: BaseGrouper,
+ _as_index: bool = True,
**kwargs,
):
- if _grouper is None:
- raise ValueError("Must pass a Grouper object.")
+ from pandas.core.groupby.ops import BaseGrouper
+
+ if not isinstance(_grouper, BaseGrouper):
+ raise ValueError("Must pass a BaseGrouper object.")
self._grouper = _grouper
self._as_index = _as_index
# GH 32262: It's convention to keep the grouping column in
@@ -659,7 +664,9 @@ def _apply_pairwise(
# When we evaluate the pairwise=True result, repeat the groupby
# labels by the number of columns in the original object
groupby_codes = self._grouper.codes
- groupby_levels = self._grouper.levels
+ # error: Incompatible types in assignment (expression has type
+ # "List[Index]", variable has type "List[Union[ndarray, Index]]")
+ groupby_levels = self._grouper.levels # type: ignore[assignment]
group_indices = self._grouper.indices.values()
if group_indices:
| - [ ] closes #xxxx
- [ ] tests added / passed
- [ ] Ensure all linting tests pass, see [here](https://pandas.pydata.org/pandas-docs/dev/development/contributing.html#code-standards) for how to run them
- [ ] whatsnew entry
| https://api.github.com/repos/pandas-dev/pandas/pulls/41299 | 2021-05-04T01:56:29Z | 2021-05-04T19:29:11Z | 2021-05-04T19:29:11Z | 2021-05-04T19:38:03Z |
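The signature change above replaces a `None` sentinel with a required keyword-only argument plus an `isinstance` guard. A stripped-down sketch of that pattern; the class names are reused for illustration only, not the real pandas implementations:

```python
class BaseGrouper:
    # placeholder for pandas.core.groupby.ops.BaseGrouper
    pass


class BaseWindowGroupby:
    def __init__(self, obj, *args, _grouper, _as_index=True, **kwargs):
        # `_grouper` is now required and type-checked at construction,
        # instead of defaulting to None and relying on an `is None` test
        if not isinstance(_grouper, BaseGrouper):
            raise ValueError("Must pass a BaseGrouper object.")
        self._grouper = _grouper
        self._as_index = _as_index


w = BaseWindowGroupby(obj=None, _grouper=BaseGrouper())
assert w._as_index is True
```

The keyword-only `*` boundary means callers cannot accidentally pass a grouper positionally, and the `isinstance` check rejects wrong types (not just a missing value).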
REF: raise more selectively in libreduction | diff --git a/pandas/_libs/reduction.pyx b/pandas/_libs/reduction.pyx
index 09999b6970bca..9bef6cb428e8a 100644
--- a/pandas/_libs/reduction.pyx
+++ b/pandas/_libs/reduction.pyx
@@ -27,12 +27,18 @@ from pandas._libs.lib import (
)
-cpdef check_result_array(object obj):
+cdef cnp.dtype _dtype_obj = np.dtype("object")
- if (is_array(obj) or
- (isinstance(obj, list) and len(obj) == 0) or
- getattr(obj, 'shape', None) == (0,)):
- raise ValueError('Must produce aggregated value')
+
+cpdef check_result_array(object obj, object dtype):
+ # Our operation is supposed to be an aggregation/reduction. If
+ # it returns an ndarray, this likely means an invalid operation has
+ # been passed. See test_apply_without_aggregation, test_agg_must_agg
+ if is_array(obj):
+ if dtype != _dtype_obj:
+ # If it is object dtype, the function can be a reduction/aggregation
+ # and still return an ndarray e.g. test_agg_over_numpy_arrays
+ raise ValueError("Must produce aggregated value")
cdef class _BaseGrouper:
@@ -89,7 +95,7 @@ cdef class _BaseGrouper:
# On the first pass, we check the output shape to see
# if this looks like a reduction.
initialized = True
- check_result_array(res)
+ check_result_array(res, cached_series.dtype)
return res, initialized
diff --git a/pandas/core/groupby/generic.py b/pandas/core/groupby/generic.py
index 4f60660dfb499..a69f7ef9dcd49 100644
--- a/pandas/core/groupby/generic.py
+++ b/pandas/core/groupby/generic.py
@@ -530,7 +530,7 @@ def _aggregate_named(self, func, *args, **kwargs):
output = libreduction.extract_result(output)
if not initialized:
# We only do this validation on the first iteration
- libreduction.check_result_array(output)
+ libreduction.check_result_array(output, group.dtype)
initialized = True
result[name] = output
diff --git a/pandas/core/groupby/ops.py b/pandas/core/groupby/ops.py
index 975a902f49db9..d649240c1df88 100644
--- a/pandas/core/groupby/ops.py
+++ b/pandas/core/groupby/ops.py
@@ -1027,7 +1027,7 @@ def _aggregate_series_pure_python(self, obj: Series, func: F):
if not initialized:
# We only do this validation on the first iteration
- libreduction.check_result_array(res)
+ libreduction.check_result_array(res, group.dtype)
initialized = True
counts[i] = group.shape[0]
| This allows slightly more cases to go through the cython path | https://api.github.com/repos/pandas-dev/pandas/pulls/41298 | 2021-05-04T01:46:50Z | 2021-05-04T19:38:18Z | 2021-05-04T19:38:18Z | 2021-05-04T19:40:52Z |
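A pure-Python rendering of the relaxed check from the diff above; the cython version uses `is_array`, for which `isinstance(..., np.ndarray)` stands in here:

```python
import numpy as np

_dtype_obj = np.dtype("object")


def check_result_array(obj, dtype):
    # an aggregation should produce a scalar per group; an ndarray
    # result usually means a non-aggregating function was passed.
    # Object dtype is the exception: a group's values may themselves
    # be arrays, so an ndarray can be a legitimate reduction result
    # (e.g. summing per-row numpy arrays)
    if isinstance(obj, np.ndarray) and dtype != _dtype_obj:
        raise ValueError("Must produce aggregated value")
```

With the dtype threaded through, an ndarray result only raises for non-object input, which is what lets more object-dtype cases stay on the cython path.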
CLN: remove unused filter_empty from BinGrouper | diff --git a/pandas/core/groupby/ops.py b/pandas/core/groupby/ops.py
index 975a902f49db9..d75e7f75a2b62 100644
--- a/pandas/core/groupby/ops.py
+++ b/pandas/core/groupby/ops.py
@@ -1047,7 +1047,6 @@ class BinGrouper(BaseGrouper):
----------
bins : the split index of binlabels to group the item of axis
binlabels : the label list
- filter_empty : bool, default False
mutated : bool, default False
indexer : np.ndarray[np.intp]
@@ -1069,17 +1068,20 @@ class BinGrouper(BaseGrouper):
"""
+ bins: np.ndarray # np.ndarray[np.int64]
+ binlabels: Index
+ mutated: bool
+
def __init__(
self,
bins,
binlabels,
- filter_empty: bool = False,
mutated: bool = False,
indexer=None,
):
self.bins = ensure_int64(bins)
self.binlabels = ensure_index(binlabels)
- self._filter_empty_groups = filter_empty
+ self._filter_empty_groups = False
self.mutated = mutated
self.indexer = indexer
| - [ ] closes #xxxx
- [ ] tests added / passed
- [ ] Ensure all linting tests pass, see [here](https://pandas.pydata.org/pandas-docs/dev/development/contributing.html#code-standards) for how to run them
- [ ] whatsnew entry
| https://api.github.com/repos/pandas-dev/pandas/pulls/41296 | 2021-05-04T01:12:40Z | 2021-05-04T12:49:12Z | 2021-05-04T12:49:12Z | 2021-05-04T13:58:54Z |
DOC: Validator + converting array_like to array-like in docstrings | diff --git a/pandas/core/algorithms.py b/pandas/core/algorithms.py
index 177dfee0c03ab..f26cf113f7d5e 100644
--- a/pandas/core/algorithms.py
+++ b/pandas/core/algorithms.py
@@ -1530,13 +1530,13 @@ def searchsorted(arr, value, side="left", sorter=None) -> np.ndarray:
Input array. If `sorter` is None, then it must be sorted in
ascending order, otherwise `sorter` must be an array of indices
that sort it.
- value : array_like
+ value : array-like
Values to insert into `arr`.
side : {'left', 'right'}, optional
If 'left', the index of the first suitable location found is given.
If 'right', return the last such index. If there is no suitable
index, return either 0 or N (where N is the length of `self`).
- sorter : 1-D array_like, optional
+ sorter : 1-D array-like, optional
Optional array of integer indices that sort array a into ascending
order. They are typically the result of argsort.
diff --git a/pandas/core/array_algos/replace.py b/pandas/core/array_algos/replace.py
index e800f5ac748ec..df4407067b131 100644
--- a/pandas/core/array_algos/replace.py
+++ b/pandas/core/array_algos/replace.py
@@ -45,21 +45,21 @@ def compare_or_regex_search(
a: ArrayLike, b: Scalar | Pattern, regex: bool, mask: np.ndarray
) -> ArrayLike | bool:
"""
- Compare two array_like inputs of the same shape or two scalar values
+ Compare two array-like inputs of the same shape or two scalar values
Calls operator.eq or re.search, depending on regex argument. If regex is
True, perform an element-wise regex matching.
Parameters
----------
- a : array_like
+ a : array-like
b : scalar or regex pattern
regex : bool
mask : np.ndarray[bool]
Returns
-------
- mask : array_like of bool
+ mask : array-like of bool
"""
if isna(b):
return ~mask
diff --git a/pandas/core/arrays/base.py b/pandas/core/arrays/base.py
index a6d1986937d2b..888c7cbbffb59 100644
--- a/pandas/core/arrays/base.py
+++ b/pandas/core/arrays/base.py
@@ -826,13 +826,13 @@ def searchsorted(self, value, side="left", sorter=None):
Parameters
----------
- value : array_like
+ value : array-like
Values to insert into `self`.
side : {'left', 'right'}, optional
If 'left', the index of the first suitable location found is given.
If 'right', return the last such index. If there is no suitable
index, return either 0 or N (where N is the length of `self`).
- sorter : 1-D array_like, optional
+ sorter : 1-D array-like, optional
Optional array of integer indices that sort array a into ascending
order. They are typically the result of argsort.
diff --git a/pandas/core/base.py b/pandas/core/base.py
index ae7e1a1062cfb..104baa04d3459 100644
--- a/pandas/core/base.py
+++ b/pandas/core/base.py
@@ -1133,13 +1133,13 @@ def factorize(self, sort: bool = False, na_sentinel: int | None = -1):
Parameters
----------
- value : array_like
+ value : array-like
Values to insert into `self`.
side : {{'left', 'right'}}, optional
If 'left', the index of the first suitable location found is given.
If 'right', return the last such index. If there is no suitable
index, return either 0 or N (where N is the length of `self`).
- sorter : 1-D array_like, optional
+ sorter : 1-D array-like, optional
Optional array of integer indices that sort `self` into ascending
order. They are typically the result of ``np.argsort``.
diff --git a/pandas/core/generic.py b/pandas/core/generic.py
index 41f77e081c1e9..5bd845534fc96 100644
--- a/pandas/core/generic.py
+++ b/pandas/core/generic.py
@@ -7285,11 +7285,11 @@ def clip(
Parameters
----------
- lower : float or array_like, default None
+ lower : float or array-like, default None
Minimum threshold value. All values below this
threshold will be set to it. A missing
threshold (e.g `NA`) will not clip the value.
- upper : float or array_like, default None
+ upper : float or array-like, default None
Maximum threshold value. All values above this
threshold will be set to it. A missing
threshold (e.g `NA`) will not clip the value.
@@ -7889,8 +7889,8 @@ def resample(
Pass a custom function via ``apply``
- >>> def custom_resampler(array_like):
- ... return np.sum(array_like) + 5
+ >>> def custom_resampler(arraylike):
+ ... return np.sum(arraylike) + 5
...
>>> series.resample('3T').apply(custom_resampler)
2000-01-01 00:00:00 8
diff --git a/pandas/core/indexes/base.py b/pandas/core/indexes/base.py
index e4c21b3de2cac..eaba30012a5b8 100644
--- a/pandas/core/indexes/base.py
+++ b/pandas/core/indexes/base.py
@@ -6025,8 +6025,8 @@ def any(self, *args, **kwargs):
Returns
-------
- any : bool or array_like (if axis is specified)
- A single element array_like may be converted to bool.
+ any : bool or array-like (if axis is specified)
+ A single element array-like may be converted to bool.
See Also
--------
@@ -6069,8 +6069,8 @@ def all(self, *args, **kwargs):
Returns
-------
- all : bool or array_like (if axis is specified)
- A single element array_like may be converted to bool.
+ all : bool or array-like (if axis is specified)
+ A single element array-like may be converted to bool.
See Also
--------
diff --git a/pandas/core/indexes/multi.py b/pandas/core/indexes/multi.py
index 821d696200175..4dff63ea22e00 100644
--- a/pandas/core/indexes/multi.py
+++ b/pandas/core/indexes/multi.py
@@ -3877,7 +3877,7 @@ def maybe_droplevels(index: Index, key) -> Index:
def _coerce_indexer_frozen(array_like, categories, copy: bool = False) -> np.ndarray:
"""
- Coerce the array_like indexer to the smallest integer dtype that can encode all
+ Coerce the array-like indexer to the smallest integer dtype that can encode all
of the given categories.
Parameters
diff --git a/pandas/core/internals/blocks.py b/pandas/core/internals/blocks.py
index 2e7e6c7f7a100..237d06402a0ee 100644
--- a/pandas/core/internals/blocks.py
+++ b/pandas/core/internals/blocks.py
@@ -1276,7 +1276,7 @@ def _unstack(self, unstacker, fill_value, new_placement):
-------
blocks : list of Block
New blocks of unstacked values.
- mask : array_like of bool
+ mask : array-like of bool
The mask of columns of `blocks` we should keep.
"""
new_values, mask = unstacker.get_new_values(
diff --git a/pandas/core/missing.py b/pandas/core/missing.py
index 8849eb0670faa..424173ccc69f0 100644
--- a/pandas/core/missing.py
+++ b/pandas/core/missing.py
@@ -524,11 +524,11 @@ def _from_derivatives(xi, yi, x, order=None, der=0, extrapolate=False):
Parameters
----------
- xi : array_like
+ xi : array-like
sorted 1D array of x-coordinates
- yi : array_like or list of array-likes
+ yi : array-like or list of array-likes
yi[i][j] is the j-th derivative known at xi[i]
- order: None or int or array_like of ints. Default: None.
+ order: None or int or array-like of ints. Default: None.
Specifies the degree of local polynomials. If not None, some
derivatives are ignored.
der : int or list
@@ -546,7 +546,7 @@ def _from_derivatives(xi, yi, x, order=None, der=0, extrapolate=False):
Returns
-------
- y : scalar or array_like
+ y : scalar or array-like
The result, of length R or length M or M by R.
"""
from scipy import interpolate
@@ -568,13 +568,13 @@ def _akima_interpolate(xi, yi, x, der=0, axis=0):
Parameters
----------
- xi : array_like
+ xi : array-like
A sorted list of x-coordinates, of length N.
- yi : array_like
+ yi : array-like
A 1-D array of real values. `yi`'s length along the interpolation
axis must be equal to the length of `xi`. If N-D array, use axis
parameter to select correct axis.
- x : scalar or array_like
+ x : scalar or array-like
Of length M.
der : int, optional
How many derivatives to extract; None for all potentially
@@ -590,7 +590,7 @@ def _akima_interpolate(xi, yi, x, der=0, axis=0):
Returns
-------
- y : scalar or array_like
+ y : scalar or array-like
The result, of length R or length M or M by R,
"""
@@ -609,14 +609,14 @@ def _cubicspline_interpolate(xi, yi, x, axis=0, bc_type="not-a-knot", extrapolat
Parameters
----------
- xi : array_like, shape (n,)
+ xi : array-like, shape (n,)
1-d array containing values of the independent variable.
Values must be real, finite and in strictly increasing order.
- yi : array_like
+ yi : array-like
Array containing values of the dependent variable. It can have
arbitrary number of dimensions, but the length along ``axis``
(see below) must match the length of ``x``. Values must be finite.
- x : scalar or array_like, shape (m,)
+ x : scalar or array-like, shape (m,)
axis : int, optional
Axis along which `y` is assumed to be varying. Meaning that for
``x[i]`` the corresponding values are ``np.take(y, i, axis=axis)``.
@@ -644,7 +644,7 @@ def _cubicspline_interpolate(xi, yi, x, axis=0, bc_type="not-a-knot", extrapolat
tuple `(order, deriv_values)` allowing to specify arbitrary
derivatives at curve ends:
* `order`: the derivative order, 1 or 2.
- * `deriv_value`: array_like containing derivative values, shape must
+ * `deriv_value`: array-like containing derivative values, shape must
be the same as `y`, excluding ``axis`` dimension. For example, if
`y` is 1D, then `deriv_value` must be a scalar. If `y` is 3D with
the shape (n0, n1, n2) and axis=2, then `deriv_value` must be 2D
@@ -661,7 +661,7 @@ def _cubicspline_interpolate(xi, yi, x, axis=0, bc_type="not-a-knot", extrapolat
Returns
-------
- y : scalar or array_like
+ y : scalar or array-like
The result, of shape (m,)
References
diff --git a/pandas/plotting/_core.py b/pandas/plotting/_core.py
index 5d3db13610845..00d87b707580d 100644
--- a/pandas/plotting/_core.py
+++ b/pandas/plotting/_core.py
@@ -430,7 +430,7 @@ def hist_frame(
y : label or position, optional
Allows plotting of one column versus another. If not specified,
all numerical columns are used.
- color : str, array_like, or dict, optional
+ color : str, array-like, or dict, optional
The color for each of the DataFrame's columns. Possible values are:
- A single color string referred to by name, RGB or RGBA code,
@@ -1571,7 +1571,7 @@ def scatter(self, x, y, s=None, c=None, **kwargs):
y : int or str
The column name or column position to be used as vertical
coordinates for each point.
- s : str, scalar or array_like, optional
+ s : str, scalar or array-like, optional
The size of each point. Possible values are:
- A string with the name of the column to be used for marker's size.
@@ -1584,7 +1584,7 @@ def scatter(self, x, y, s=None, c=None, **kwargs):
.. versionchanged:: 1.1.0
- c : str, int or array_like, optional
+ c : str, int or array-like, optional
The color of each point. Possible values are:
- A single color string referred to by name, RGB or RGBA code,
diff --git a/scripts/tests/test_validate_docstrings.py b/scripts/tests/test_validate_docstrings.py
index cbf3e84044d53..46cfae8e31208 100644
--- a/scripts/tests/test_validate_docstrings.py
+++ b/scripts/tests/test_validate_docstrings.py
@@ -82,6 +82,12 @@ def missing_whitespace_after_comma(self):
"""
pass
+ def write_array_like_with_hyphen_not_underscore(self):
+ """
+ In docstrings, use array-like over array_like
+ """
+ pass
+
class TestValidator:
def _import_path(self, klass=None, func=None):
@@ -172,6 +178,11 @@ def test_bad_class(self, capsys):
"missing_whitespace_after_comma",
("flake8 error: E231 missing whitespace after ',' (3 times)",),
),
+ (
+ "BadDocstrings",
+ "write_array_like_with_hyphen_not_underscore",
+ ("Use 'array-like' rather than 'array_like' in docstrings",),
+ ),
],
)
def test_bad_docstrings(self, capsys, klass, func, msgs):
diff --git a/scripts/validate_docstrings.py b/scripts/validate_docstrings.py
index b77210e3d2bab..9b65204403612 100755
--- a/scripts/validate_docstrings.py
+++ b/scripts/validate_docstrings.py
@@ -54,6 +54,7 @@
ERROR_MSGS = {
"GL04": "Private classes ({mentioned_private_classes}) should not be "
"mentioned in public docstrings",
+ "GL05": "Use 'array-like' rather than 'array_like' in docstrings.",
"SA05": "{reference_name} in `See Also` section does not need `pandas` "
"prefix, use {right_reference} instead.",
"EX02": "Examples do not pass tests:\n{doctest_log}",
@@ -196,6 +197,9 @@ def validate_pep8(self):
error_count, error_code, message = error_message.split(maxsplit=2)
yield error_code, message, int(error_count)
+ def non_hyphenated_array_like(self):
+ return "array_like" in self.raw_doc
+
def pandas_validate(func_name: str):
"""
@@ -256,6 +260,9 @@ def pandas_validate(func_name: str):
pandas_error("EX04", imported_library=wrong_import)
)
+ if doc.non_hyphenated_array_like():
+ result["errors"].append(pandas_error("GL05"))
+
return result
| - [ ] closes #40560 (Pending feedback on decorator-generated docstrings)
- [x] tests added / passed
- [x] Ensure all linting tests pass, see [here](https://pandas.pydata.org/pandas-docs/dev/development/contributing.html#code-standards) for how to run them
- [ ] whatsnew entry
First try at any open source stuff! This one looked pretty doable; hopefully there's nothing wrong with it, but let me know!
I'm not sure whether *I* should be checking these boxes, or a reviewer should. I *believe* this fixes the issue since it adds the validator, though it's not a particularly clever validator.
It technically is an added test, and it technically works: I re-added one of the offending entries and the validator correctly reported its failure... (amongst the infinite other validation failures 😢 )
I assume linting will run after the PR is submitted, but I'll look into running it locally, too.
```
...
None:None:ES01:pandas.IndexSlice:No extended summary foundNone:None:EX03:pandas.IndexSlice:flake8 error: E231 missing whitespace after ',' (4 times)
.../pandas/pandas/core/indexes/multi.py:434:ES01:pandas.MultiIndex.from_arrays:No extended summary found
.../pandas/pandas/core/indexes/multi.py:434:GL05:pandas.MultiIndex.from_arrays:Use 'array-like' rather than 'array_like' in docstrings. <<<--- Message for added issue.
.../pandas/pandas/core/indexes/multi.py:500:ES01:pandas.MultiIndex.from_tuples:No extended summary found
.../pandas/pandas/core/indexes/multi.py:567:ES01:pandas.MultiIndex.from_product:No extended summary found
None:None:ES01:pandas.MultiIndex.names:No extended summary found
...
```
More on the validator:
* I chose `GL` as the code after looking through [the numpy docstring validation guide](https://numpydoc.readthedocs.io/en/latest/validation.html) and the [pandas docstring validation guide](https://pandas.pydata.org/docs/dev/development/contributing_docstring.html) - I think it stands for "General" and that seems like the best categorization for this message.
* I used `raw_doc` for this but there was another property like `clean_doc` as well. I could just iterate through each section, too. 🤷
* I didn't add any extra detail/context to the error message. The line and file are already given, so I think there's nothing extra that's needed.
Using the validator to pick up any missed docstring changes:
```
$ time ./validate_docstrings.py 2>&1 | tee validate_docstrings.log ### (It takes ~5m to run)
$ grep GL05 validate_docstrings.log
.../pandas/pandas/core/frame.py:10220:GL05:pandas.DataFrame.resample:Use 'array-like' rather than 'array_like' in docstrings.
.../pandas/pandas/core/series.py:5171:GL05:pandas.Series.resample:Use 'array-like' rather than 'array_like' in docstrings.
```
-----
Actually, looking into those two remaining results - those are docstrings generated by lines like:
```10220 @doc(NDFrame.resample, **_shared_doc_kwargs)```
Looking in more detail, I'm not exactly sure how this happens, but it seems like this decorator must be generating text with "array_like" in it. I kind of want to declare it out of scope for this PR, but I can dig into it more if needed.
I suspect the `Substitution` decorator tool could help here, possibly. It seems kind of cheesy to add something that blindly substitutes array_like for array-like in a docstring, though; e.g., the text might come from a function whose parameter is actually named `array_like`, and I'm pretty sure such functions *do* exist. I'm open to feedback on that.
| https://api.github.com/repos/pandas-dev/pandas/pulls/41295 | 2021-05-04T00:10:24Z | 2021-06-12T10:21:23Z | 2021-06-12T10:21:22Z | 2021-06-19T22:37:10Z |
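The check added in this PR is just a substring test on the raw docstring, so it can be exercised standalone. A minimal sketch mirroring the `Docstring` method and error message from the diff:

```python
GL05 = "Use 'array-like' rather than 'array_like' in docstrings."


def non_hyphenated_array_like(raw_doc: str) -> bool:
    # flags any docstring still using the underscored NumPy spelling
    return "array_like" in raw_doc


assert non_hyphenated_array_like("value : array_like\n    Values to insert.")
assert not non_hyphenated_array_like("value : array-like\n    Values to insert.")
```

Being a plain substring check, it would also flag a parameter genuinely named `array_like`, which is the caveat discussed in the PR body above.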
REF: support object dtype in libgroupby.group_add | diff --git a/pandas/_libs/groupby.pyx b/pandas/_libs/groupby.pyx
index 3fa92ce2229c3..8637d50745195 100644
--- a/pandas/_libs/groupby.pyx
+++ b/pandas/_libs/groupby.pyx
@@ -469,18 +469,19 @@ def group_any_all(int8_t[::1] out,
# group_add, group_prod, group_var, group_mean, group_ohlc
# ----------------------------------------------------------------------
-ctypedef fused complexfloating_t:
+ctypedef fused add_t:
float64_t
float32_t
complex64_t
complex128_t
+ object
@cython.wraparound(False)
@cython.boundscheck(False)
-def group_add(complexfloating_t[:, ::1] out,
+def group_add(add_t[:, ::1] out,
int64_t[::1] counts,
- ndarray[complexfloating_t, ndim=2] values,
+ ndarray[add_t, ndim=2] values,
const intp_t[:] labels,
Py_ssize_t min_count=0) -> None:
"""
@@ -488,8 +489,8 @@ def group_add(complexfloating_t[:, ::1] out,
"""
cdef:
Py_ssize_t i, j, N, K, lab, ncounts = len(counts)
- complexfloating_t val, count, t, y
- complexfloating_t[:, ::1] sumx, compensation
+ add_t val, t, y
+ add_t[:, ::1] sumx, compensation
int64_t[:, ::1] nobs
Py_ssize_t len_values = len(values), len_labels = len(labels)
@@ -503,7 +504,8 @@ def group_add(complexfloating_t[:, ::1] out,
N, K = (<object>values).shape
- with nogil:
+ if add_t is object:
+ # NB: this does not use 'compensation' like the non-object track does.
for i in range(N):
lab = labels[i]
if lab < 0:
@@ -516,9 +518,13 @@ def group_add(complexfloating_t[:, ::1] out,
# not nan
if val == val:
nobs[lab, j] += 1
- y = val - compensation[lab, j]
- t = sumx[lab, j] + y
- compensation[lab, j] = t - sumx[lab, j] - y
+
+ if nobs[lab, j] == 1:
+ # i.e. we havent added anything yet; avoid TypeError
+ # if e.g. val is a str and sumx[lab, j] is 0
+ t = val
+ else:
+ t = sumx[lab, j] + val
sumx[lab, j] = t
for i in range(ncounts):
@@ -527,6 +533,31 @@ def group_add(complexfloating_t[:, ::1] out,
out[i, j] = NAN
else:
out[i, j] = sumx[i, j]
+ else:
+ with nogil:
+ for i in range(N):
+ lab = labels[i]
+ if lab < 0:
+ continue
+
+ counts[lab] += 1
+ for j in range(K):
+ val = values[i, j]
+
+ # not nan
+ if val == val:
+ nobs[lab, j] += 1
+ y = val - compensation[lab, j]
+ t = sumx[lab, j] + y
+ compensation[lab, j] = t - sumx[lab, j] - y
+ sumx[lab, j] = t
+
+ for i in range(ncounts):
+ for j in range(K):
+ if nobs[i, j] < min_count:
+ out[i, j] = NAN
+ else:
+ out[i, j] = sumx[i, j]
@cython.wraparound(False)
| - [ ] closes #xxxx
- [ ] tests added / passed
- [ ] Ensure all linting tests pass, see [here](https://pandas.pydata.org/pandas-docs/dev/development/contributing.html#code-standards) for how to run them
- [ ] whatsnew entry
| https://api.github.com/repos/pandas-dev/pandas/pulls/41294 | 2021-05-03T23:02:03Z | 2021-05-04T16:22:29Z | 2021-05-04T16:22:29Z | 2021-05-04T16:25:45Z |
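For context on the non-object branch kept under `nogil` above: the `y`/`t`/`compensation` variables implement Kahan (compensated) summation, which the new object branch deliberately skips. A plain-Python sketch of the same loop:

```python
def kahan_sum(values):
    # compensated (Kahan) summation, mirroring the y/t/compensation
    # variables in the cython group_add loop
    total = 0.0
    compensation = 0.0
    for val in values:
        # subtract the running error before adding, then record the
        # low-order bits lost in this step for the next iteration
        y = val - compensation
        t = total + y
        compensation = (t - total) - y
        total = t
    return total


# repeated additions of 0.1 accumulate rounding error under naive
# summation; the compensated sum stays essentially exact
assert abs(kahan_sum([0.1] * 10) - 1.0) < 1e-15
```

For object dtype the compensation arithmetic is meaningless (e.g. string concatenation), which is why the object track in the diff sums directly and special-cases the first observation to avoid a `TypeError` against the zero initializer.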
[ArrowStringArray] REF: pre-cursor to adding replace str accessor method | diff --git a/pandas/core/strings/accessor.py b/pandas/core/strings/accessor.py
index 85a58d3d99795..82462f8d922d5 100644
--- a/pandas/core/strings/accessor.py
+++ b/pandas/core/strings/accessor.py
@@ -19,6 +19,7 @@
is_categorical_dtype,
is_integer,
is_list_like,
+ is_re,
)
from pandas.core.dtypes.generic import (
ABCDataFrame,
@@ -1333,6 +1334,29 @@ def replace(self, pat, repl, n=-1, case=None, flags=0, regex=None):
)
warnings.warn(msg, FutureWarning, stacklevel=3)
regex = True
+
+ # Check whether repl is valid (GH 13438, GH 15055)
+ if not (isinstance(repl, str) or callable(repl)):
+ raise TypeError("repl must be a string or callable")
+
+ is_compiled_re = is_re(pat)
+ if regex:
+ if is_compiled_re:
+ if (case is not None) or (flags != 0):
+ raise ValueError(
+ "case and flags cannot be set when pat is a compiled regex"
+ )
+ elif case is None:
+ # not a compiled regex, set default case
+ case = True
+
+ elif is_compiled_re:
+ raise ValueError(
+ "Cannot use a compiled regex as replacement pattern with regex=False"
+ )
+ elif callable(repl):
+ raise ValueError("Cannot use a callable replacement when regex=False")
+
result = self._data.array._str_replace(
pat, repl, n=n, case=case, flags=flags, regex=regex
)
diff --git a/pandas/core/strings/object_array.py b/pandas/core/strings/object_array.py
index b794690ccc5af..c7e4368a98c95 100644
--- a/pandas/core/strings/object_array.py
+++ b/pandas/core/strings/object_array.py
@@ -147,41 +147,20 @@ def _str_endswith(self, pat, na=None):
f = lambda x: x.endswith(pat)
return self._str_map(f, na_value=na, dtype=np.dtype(bool))
- def _str_replace(self, pat, repl, n=-1, case=None, flags=0, regex=True):
- # Check whether repl is valid (GH 13438, GH 15055)
- if not (isinstance(repl, str) or callable(repl)):
- raise TypeError("repl must be a string or callable")
-
+ def _str_replace(self, pat, repl, n=-1, case: bool = True, flags=0, regex=True):
is_compiled_re = is_re(pat)
- if regex:
- if is_compiled_re:
- if (case is not None) or (flags != 0):
- raise ValueError(
- "case and flags cannot be set when pat is a compiled regex"
- )
- else:
- # not a compiled regex
- # set default case
- if case is None:
- case = True
-
- # add case flag, if provided
- if case is False:
- flags |= re.IGNORECASE
- if is_compiled_re or len(pat) > 1 or flags or callable(repl):
- n = n if n >= 0 else 0
- compiled = re.compile(pat, flags=flags)
- f = lambda x: compiled.sub(repl=repl, string=x, count=n)
- else:
- f = lambda x: x.replace(pat, repl, n)
+
+ if case is False:
+ # add case flag, if provided
+ flags |= re.IGNORECASE
+
+ if regex and (is_compiled_re or len(pat) > 1 or flags or callable(repl)):
+ if not is_compiled_re:
+ pat = re.compile(pat, flags=flags)
+
+ n = n if n >= 0 else 0
+ f = lambda x: pat.sub(repl=repl, string=x, count=n)
else:
- if is_compiled_re:
- raise ValueError(
- "Cannot use a compiled regex as replacement pattern with "
- "regex=False"
- )
- if callable(repl):
- raise ValueError("Cannot use a callable replacement when regex=False")
f = lambda x: x.replace(pat, repl, n)
return self._str_map(f, dtype=str)
diff --git a/pandas/tests/strings/test_find_replace.py b/pandas/tests/strings/test_find_replace.py
index 0c54042d983ad..3d33e34a9dcfe 100644
--- a/pandas/tests/strings/test_find_replace.py
+++ b/pandas/tests/strings/test_find_replace.py
@@ -266,144 +266,157 @@ def test_endswith_nullable_string_dtype(nullable_string_dtype, na):
tm.assert_series_equal(result, exp)
-def test_replace():
- values = Series(["fooBAD__barBAD", np.nan])
+def test_replace(any_string_dtype):
+ values = Series(["fooBAD__barBAD", np.nan], dtype=any_string_dtype)
result = values.str.replace("BAD[_]*", "", regex=True)
- exp = Series(["foobar", np.nan])
- tm.assert_series_equal(result, exp)
+ expected = Series(["foobar", np.nan], dtype=any_string_dtype)
+ tm.assert_series_equal(result, expected)
result = values.str.replace("BAD[_]*", "", n=1, regex=True)
- exp = Series(["foobarBAD", np.nan])
- tm.assert_series_equal(result, exp)
+ expected = Series(["foobarBAD", np.nan], dtype=any_string_dtype)
+ tm.assert_series_equal(result, expected)
- # mixed
+
+def test_replace_mixed_object():
mixed = Series(
["aBAD", np.nan, "bBAD", True, datetime.today(), "fooBAD", None, 1, 2.0]
)
- rs = Series(mixed).str.replace("BAD[_]*", "", regex=True)
- xp = Series(["a", np.nan, "b", np.nan, np.nan, "foo", np.nan, np.nan, np.nan])
- assert isinstance(rs, Series)
- tm.assert_almost_equal(rs, xp)
+ result = Series(mixed).str.replace("BAD[_]*", "", regex=True)
+ expected = Series(["a", np.nan, "b", np.nan, np.nan, "foo", np.nan, np.nan, np.nan])
+ assert isinstance(result, Series)
+ tm.assert_almost_equal(result, expected)
- # flags + unicode
- values = Series([b"abcd,\xc3\xa0".decode("utf-8")])
- exp = Series([b"abcd, \xc3\xa0".decode("utf-8")])
+
+def test_replace_unicode(any_string_dtype):
+ values = Series([b"abcd,\xc3\xa0".decode("utf-8")], dtype=any_string_dtype)
+ expected = Series([b"abcd, \xc3\xa0".decode("utf-8")], dtype=any_string_dtype)
result = values.str.replace(r"(?<=\w),(?=\w)", ", ", flags=re.UNICODE, regex=True)
- tm.assert_series_equal(result, exp)
+ tm.assert_series_equal(result, expected)
+
- # GH 13438
+@pytest.mark.parametrize("klass", [Series, Index])
+@pytest.mark.parametrize("repl", [None, 3, {"a": "b"}])
+@pytest.mark.parametrize("data", [["a", "b", None], ["a", "b", "c", "ad"]])
+def test_replace_raises(any_string_dtype, klass, repl, data):
+ # https://github.com/pandas-dev/pandas/issues/13438
msg = "repl must be a string or callable"
- for klass in (Series, Index):
- for repl in (None, 3, {"a": "b"}):
- for data in (["a", "b", None], ["a", "b", "c", "ad"]):
- values = klass(data)
- with pytest.raises(TypeError, match=msg):
- values.str.replace("a", repl)
+ values = klass(data, dtype=any_string_dtype)
+ with pytest.raises(TypeError, match=msg):
+ values.str.replace("a", repl)
-def test_replace_callable():
+def test_replace_callable(any_string_dtype):
# GH 15055
- values = Series(["fooBAD__barBAD", np.nan])
+ values = Series(["fooBAD__barBAD", np.nan], dtype=any_string_dtype)
# test with callable
repl = lambda m: m.group(0).swapcase()
result = values.str.replace("[a-z][A-Z]{2}", repl, n=2, regex=True)
- exp = Series(["foObaD__baRbaD", np.nan])
- tm.assert_series_equal(result, exp)
+ expected = Series(["foObaD__baRbaD", np.nan], dtype=any_string_dtype)
+ tm.assert_series_equal(result, expected)
+
+
+@pytest.mark.parametrize(
+ "repl", [lambda: None, lambda m, x: None, lambda m, x, y=None: None]
+)
+def test_replace_callable_raises(any_string_dtype, repl):
+ # GH 15055
+ values = Series(["fooBAD__barBAD", np.nan], dtype=any_string_dtype)
# test with wrong number of arguments, raising an error
- p_err = (
+ msg = (
r"((takes)|(missing)) (?(2)from \d+ to )?\d+ "
r"(?(3)required )positional arguments?"
)
-
- repl = lambda: None
- with pytest.raises(TypeError, match=p_err):
+ with pytest.raises(TypeError, match=msg):
values.str.replace("a", repl)
- repl = lambda m, x: None
- with pytest.raises(TypeError, match=p_err):
- values.str.replace("a", repl)
-
- repl = lambda m, x, y=None: None
- with pytest.raises(TypeError, match=p_err):
- values.str.replace("a", repl)
+def test_replace_callable_named_groups(any_string_dtype):
# test regex named groups
- values = Series(["Foo Bar Baz", np.nan])
+ values = Series(["Foo Bar Baz", np.nan], dtype=any_string_dtype)
pat = r"(?P<first>\w+) (?P<middle>\w+) (?P<last>\w+)"
repl = lambda m: m.group("middle").swapcase()
result = values.str.replace(pat, repl, regex=True)
- exp = Series(["bAR", np.nan])
- tm.assert_series_equal(result, exp)
+ expected = Series(["bAR", np.nan], dtype=any_string_dtype)
+ tm.assert_series_equal(result, expected)
-def test_replace_compiled_regex():
+def test_replace_compiled_regex(any_string_dtype):
# GH 15446
- values = Series(["fooBAD__barBAD", np.nan])
+ values = Series(["fooBAD__barBAD", np.nan], dtype=any_string_dtype)
# test with compiled regex
pat = re.compile(r"BAD_*")
result = values.str.replace(pat, "", regex=True)
- exp = Series(["foobar", np.nan])
- tm.assert_series_equal(result, exp)
+ expected = Series(["foobar", np.nan], dtype=any_string_dtype)
+ tm.assert_series_equal(result, expected)
result = values.str.replace(pat, "", n=1, regex=True)
- exp = Series(["foobarBAD", np.nan])
- tm.assert_series_equal(result, exp)
+ expected = Series(["foobarBAD", np.nan], dtype=any_string_dtype)
+ tm.assert_series_equal(result, expected)
- # mixed
+
+def test_replace_compiled_regex_mixed_object():
+ pat = re.compile(r"BAD_*")
mixed = Series(
["aBAD", np.nan, "bBAD", True, datetime.today(), "fooBAD", None, 1, 2.0]
)
- rs = Series(mixed).str.replace(pat, "", regex=True)
- xp = Series(["a", np.nan, "b", np.nan, np.nan, "foo", np.nan, np.nan, np.nan])
- assert isinstance(rs, Series)
- tm.assert_almost_equal(rs, xp)
+ result = Series(mixed).str.replace(pat, "", regex=True)
+ expected = Series(["a", np.nan, "b", np.nan, np.nan, "foo", np.nan, np.nan, np.nan])
+ assert isinstance(result, Series)
+ tm.assert_almost_equal(result, expected)
+
- # flags + unicode
- values = Series([b"abcd,\xc3\xa0".decode("utf-8")])
- exp = Series([b"abcd, \xc3\xa0".decode("utf-8")])
+def test_replace_compiled_regex_unicode(any_string_dtype):
+ values = Series([b"abcd,\xc3\xa0".decode("utf-8")], dtype=any_string_dtype)
+ expected = Series([b"abcd, \xc3\xa0".decode("utf-8")], dtype=any_string_dtype)
pat = re.compile(r"(?<=\w),(?=\w)", flags=re.UNICODE)
result = values.str.replace(pat, ", ")
- tm.assert_series_equal(result, exp)
+ tm.assert_series_equal(result, expected)
+
+def test_replace_compiled_regex_raises(any_string_dtype):
# case and flags provided to str.replace will have no effect
# and will produce warnings
- values = Series(["fooBAD__barBAD__bad", np.nan])
+ values = Series(["fooBAD__barBAD__bad", np.nan], dtype=any_string_dtype)
pat = re.compile(r"BAD_*")
- with pytest.raises(ValueError, match="case and flags cannot be"):
- result = values.str.replace(pat, "", flags=re.IGNORECASE)
+ msg = "case and flags cannot be set when pat is a compiled regex"
- with pytest.raises(ValueError, match="case and flags cannot be"):
- result = values.str.replace(pat, "", case=False)
+ with pytest.raises(ValueError, match=msg):
+ values.str.replace(pat, "", flags=re.IGNORECASE)
- with pytest.raises(ValueError, match="case and flags cannot be"):
- result = values.str.replace(pat, "", case=True)
+ with pytest.raises(ValueError, match=msg):
+ values.str.replace(pat, "", case=False)
+ with pytest.raises(ValueError, match=msg):
+ values.str.replace(pat, "", case=True)
+
+
+def test_replace_compiled_regex_callable(any_string_dtype):
# test with callable
- values = Series(["fooBAD__barBAD", np.nan])
+ values = Series(["fooBAD__barBAD", np.nan], dtype=any_string_dtype)
repl = lambda m: m.group(0).swapcase()
pat = re.compile("[a-z][A-Z]{2}")
result = values.str.replace(pat, repl, n=2)
- exp = Series(["foObaD__baRbaD", np.nan])
- tm.assert_series_equal(result, exp)
+ expected = Series(["foObaD__baRbaD", np.nan], dtype=any_string_dtype)
+ tm.assert_series_equal(result, expected)
-def test_replace_literal():
+def test_replace_literal(any_string_dtype):
# GH16808 literal replace (regex=False vs regex=True)
- values = Series(["f.o", "foo", np.nan])
- exp = Series(["bao", "bao", np.nan])
+ values = Series(["f.o", "foo", np.nan], dtype=any_string_dtype)
+ expected = Series(["bao", "bao", np.nan], dtype=any_string_dtype)
result = values.str.replace("f.", "ba", regex=True)
- tm.assert_series_equal(result, exp)
+ tm.assert_series_equal(result, expected)
- exp = Series(["bao", "foo", np.nan])
+ expected = Series(["bao", "foo", np.nan], dtype=any_string_dtype)
result = values.str.replace("f.", "ba", regex=False)
- tm.assert_series_equal(result, exp)
+ tm.assert_series_equal(result, expected)
# Cannot do a literal replace if given a callable repl or compiled
# pattern
@@ -680,13 +693,17 @@ def test_contains_nan(any_string_dtype):
tm.assert_series_equal(result, expected)
-def test_replace_moar():
+def test_replace_moar(any_string_dtype):
# PR #1179
- s = Series(["A", "B", "C", "Aaba", "Baca", "", np.nan, "CABA", "dog", "cat"])
+ s = Series(
+ ["A", "B", "C", "Aaba", "Baca", "", np.nan, "CABA", "dog", "cat"],
+ dtype=any_string_dtype,
+ )
result = s.str.replace("A", "YYY")
expected = Series(
- ["YYY", "B", "C", "YYYaba", "Baca", "", np.nan, "CYYYBYYY", "dog", "cat"]
+ ["YYY", "B", "C", "YYYaba", "Baca", "", np.nan, "CYYYBYYY", "dog", "cat"],
+ dtype=any_string_dtype,
)
tm.assert_series_equal(result, expected)
@@ -703,7 +720,8 @@ def test_replace_moar():
"CYYYBYYY",
"dog",
"cYYYt",
- ]
+ ],
+ dtype=any_string_dtype,
)
tm.assert_series_equal(result, expected)
@@ -720,7 +738,8 @@ def test_replace_moar():
"XX-XX BA",
"XX-XX ",
"XX-XX t",
- ]
+ ],
+ dtype=any_string_dtype,
)
tm.assert_series_equal(result, expected)
| pre-cursor cleanup to avoid duplicating validation | https://api.github.com/repos/pandas-dev/pandas/pulls/41293 | 2021-05-03T21:40:25Z | 2021-05-04T16:26:35Z | 2021-05-04T16:26:35Z | 2021-05-04T18:10:17Z |
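The validation hoisted into the accessor above governs user-visible behavior of `str.replace`; a small sketch of the cases the moved checks cover, with results matching what the tests in this diff assert:

```python
import re

import pandas as pd

s = pd.Series(["fooBAD__barBAD", "f.o"])

# regex=True: the pattern is treated as a regular expression
print(s.str.replace("BAD[_]*", "", regex=True).tolist())  # ['foobar', 'f.o']

# regex=False: the pattern is a literal string, so "f." only matches "f."
print(s.str.replace("f.", "ba", regex=False).tolist())  # ['fooBAD__barBAD', 'bao']

# A compiled regex is rejected when regex=False -- one of the checks
# moved from object_array.py into the shared accessor by this change.
try:
    s.str.replace(re.compile("BAD_*"), "", regex=False)
except ValueError as exc:
    print(exc)
```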
BUG: fix suppressed TypeErrors in Groupby.mean, median, var | diff --git a/pandas/core/groupby/groupby.py b/pandas/core/groupby/groupby.py
index 620668dadc32d..9935bc706b7b1 100644
--- a/pandas/core/groupby/groupby.py
+++ b/pandas/core/groupby/groupby.py
@@ -1559,7 +1559,7 @@ def mean(self, numeric_only: bool = True):
"""
result = self._cython_agg_general(
"mean",
- alt=lambda x, axis: Series(x).mean(numeric_only=numeric_only),
+ alt=lambda x: Series(x).mean(numeric_only=numeric_only),
numeric_only=numeric_only,
)
return result.__finalize__(self.obj, method="groupby")
@@ -1586,7 +1586,7 @@ def median(self, numeric_only=True):
"""
result = self._cython_agg_general(
"median",
- alt=lambda x, axis: Series(x).median(axis=axis, numeric_only=numeric_only),
+ alt=lambda x: Series(x).median(numeric_only=numeric_only),
numeric_only=numeric_only,
)
return result.__finalize__(self.obj, method="groupby")
@@ -1642,7 +1642,7 @@ def var(self, ddof: int = 1):
"""
if ddof == 1:
return self._cython_agg_general(
- "var", alt=lambda x, axis: Series(x).var(ddof=ddof)
+ "var", alt=lambda x: Series(x).var(ddof=ddof)
)
else:
func = lambda x: x.var(ddof=ddof)
| These are raising TypeError because we aren't passing enough args, but then we're catching those TypeErrors and falling back. So this doesn't change behavior, just avoids an unnecessary fallback in some cases.
No tests. Surfacing the exceptions would require ugly/invasive changes. | https://api.github.com/repos/pandas-dev/pandas/pulls/41292 | 2021-05-03T21:31:20Z | 2021-05-04T16:23:01Z | 2021-05-04T16:23:01Z | 2021-05-04T16:26:07Z |
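For context, the `alt` lambdas changed here are only reached when the Cython kernel raises `NotImplementedError`; ordinary numeric input takes the fast path, which is why fixing the signatures changes no results. A minimal check of the three aggregations touched:

```python
import pandas as pd

df = pd.DataFrame({"key": ["a", "a", "b"], "val": [1.0, 3.0, 5.0]})

gb = df.groupby("key")["val"]
print(gb.mean().tolist())       # [2.0, 5.0]
print(gb.median().tolist())     # [2.0, 5.0]
print(gb.var(ddof=1).tolist())  # [2.0, nan] -- a one-element group has no variance
```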
REF: Share py_fallback | diff --git a/pandas/core/groupby/generic.py b/pandas/core/groupby/generic.py
index 4f60660dfb499..c04ad0e9dfa30 100644
--- a/pandas/core/groupby/generic.py
+++ b/pandas/core/groupby/generic.py
@@ -69,7 +69,6 @@
validate_func_kwargs,
)
from pandas.core.apply import GroupByApply
-from pandas.core.arrays import Categorical
from pandas.core.base import (
DataError,
SpecificationError,
@@ -84,7 +83,6 @@
_agg_template,
_apply_docs,
_transform_template,
- get_groupby,
group_selection_context,
)
from pandas.core.indexes.api import (
@@ -353,6 +351,7 @@ def _cython_agg_general(
obj = self._selected_obj
objvals = obj._values
+ data = obj._mgr
if numeric_only and not is_numeric_dtype(obj.dtype):
raise DataError("No numeric types to aggregate")
@@ -362,28 +361,15 @@ def _cython_agg_general(
def array_func(values: ArrayLike) -> ArrayLike:
try:
result = self.grouper._cython_operation(
- "aggregate", values, how, axis=0, min_count=min_count
+ "aggregate", values, how, axis=data.ndim - 1, min_count=min_count
)
except NotImplementedError:
- ser = Series(values) # equiv 'obj' from outer frame
- if self.ngroups > 0:
- res_values, _ = self.grouper.agg_series(ser, alt)
- else:
- # equiv: res_values = self._python_agg_general(alt)
- # error: Incompatible types in assignment (expression has
- # type "Union[DataFrame, Series]", variable has type
- # "Union[ExtensionArray, ndarray]")
- res_values = self._python_apply_general( # type: ignore[assignment]
- alt, ser
- )
+ # generally if we have numeric_only=False
+ # and non-applicable functions
+ # try to python agg
+ # TODO: shouldn't min_count matter?
+ result = self._agg_py_fallback(values, ndim=data.ndim, alt=alt)
- if isinstance(values, Categorical):
- # Because we only get here with known dtype-preserving
- # reductions, we cast back to Categorical.
- # TODO: if we ever get "rank" working, exclude it here.
- result = type(values)._from_sequence(res_values, dtype=values.dtype)
- else:
- result = res_values
return result
result = array_func(objvals)
@@ -1115,72 +1101,17 @@ def _cython_agg_general(
if numeric_only:
data = data.get_numeric_data(copy=False)
- def cast_agg_result(result: ArrayLike, values: ArrayLike) -> ArrayLike:
- # see if we can cast the values to the desired dtype
- # this may not be the original dtype
-
- if isinstance(result.dtype, np.dtype) and result.ndim == 1:
- # We went through a SeriesGroupByPath and need to reshape
- # GH#32223 includes case with IntegerArray values
- # We only get here with values.dtype == object
- result = result.reshape(1, -1)
- # test_groupby_duplicate_columns gets here with
- # result.dtype == int64, values.dtype=object, how="min"
-
- return result
-
- def py_fallback(values: ArrayLike) -> ArrayLike:
- # if self.grouper.aggregate fails, we fall back to a pure-python
- # solution
-
- # We get here with a) EADtypes and b) object dtype
- obj: FrameOrSeriesUnion
-
- # call our grouper again with only this block
- if values.ndim == 1:
- # We only get here with ExtensionArray
-
- obj = Series(values)
- else:
- # We only get here with values.dtype == object
- # TODO special case not needed with ArrayManager
- df = DataFrame(values.T)
- # bc we split object blocks in grouped_reduce, we have only 1 col
- # otherwise we'd have to worry about block-splitting GH#39329
- assert df.shape[1] == 1
- # Avoid call to self.values that can occur in DataFrame
- # reductions; see GH#28949
- obj = df.iloc[:, 0]
-
- # Create SeriesGroupBy with observed=True so that it does
- # not try to add missing categories if grouping over multiple
- # Categoricals. This will done by later self._reindex_output()
- # Doing it here creates an error. See GH#34951
- sgb = get_groupby(obj, self.grouper, observed=True)
-
- # Note: bc obj is always a Series here, we can ignore axis and pass
- # `alt` directly instead of `lambda x: alt(x, axis=self.axis)`
- # use _agg_general bc it will go through _cython_agg_general
- # which will correctly cast Categoricals.
- res_ser = sgb._agg_general(
- numeric_only=False, min_count=min_count, alias=how, npfunc=alt
- )
-
- # unwrap Series to get array
- res_values = res_ser._mgr.arrays[0]
- return cast_agg_result(res_values, values)
-
def array_func(values: ArrayLike) -> ArrayLike:
-
try:
result = self.grouper._cython_operation(
- "aggregate", values, how, axis=1, min_count=min_count
+ "aggregate", values, how, axis=data.ndim - 1, min_count=min_count
)
except NotImplementedError:
# generally if we have numeric_only=False
# and non-applicable functions
# try to python agg
- result = py_fallback(values)
+ # TODO: shouldn't min_count matter?
+ result = self._agg_py_fallback(values, ndim=data.ndim, alt=alt)
return result
diff --git a/pandas/core/groupby/groupby.py b/pandas/core/groupby/groupby.py
index 620668dadc32d..5b2b00713b318 100644
--- a/pandas/core/groupby/groupby.py
+++ b/pandas/core/groupby/groupby.py
@@ -100,6 +100,7 @@ class providing the base-class of operations.
Index,
MultiIndex,
)
+from pandas.core.internals.blocks import ensure_block_shape
from pandas.core.series import Series
from pandas.core.sorting import get_group_index_sorter
from pandas.core.util.numba_ import NUMBA_FUNC_CACHE
@@ -1313,6 +1314,54 @@ def _agg_general(
result = self.aggregate(lambda x: npfunc(x, axis=self.axis))
return result.__finalize__(self.obj, method="groupby")
+ def _agg_py_fallback(
+ self, values: ArrayLike, ndim: int, alt: Callable
+ ) -> ArrayLike:
+ """
+ Fallback to pure-python aggregation if _cython_operation raises
+ NotImplementedError.
+ """
+ # We get here with a) EADtypes and b) object dtype
+
+ if values.ndim == 1:
+ # For DataFrameGroupBy we only get here with ExtensionArray
+ ser = Series(values)
+ else:
+ # We only get here with values.dtype == object
+ # TODO: special case not needed with ArrayManager
+ df = DataFrame(values.T)
+ # bc we split object blocks in grouped_reduce, we have only 1 col
+ # otherwise we'd have to worry about block-splitting GH#39329
+ assert df.shape[1] == 1
+ # Avoid call to self.values that can occur in DataFrame
+ # reductions; see GH#28949
+ ser = df.iloc[:, 0]
+
+ # Create SeriesGroupBy with observed=True so that it does
+ # not try to add missing categories if grouping over multiple
+        # Categoricals. This will be done later by self._reindex_output()
+ # Doing it here creates an error. See GH#34951
+ sgb = get_groupby(ser, self.grouper, observed=True)
+ # For SeriesGroupBy we could just use self instead of sgb
+
+ if self.ngroups > 0:
+ res_values, _ = self.grouper.agg_series(ser, alt)
+ else:
+ # equiv: res_values = self._python_agg_general(alt)
+ res_values = sgb._python_apply_general(alt, ser)._values
+
+ if isinstance(values, Categorical):
+ # Because we only get here with known dtype-preserving
+ # reductions, we cast back to Categorical.
+ # TODO: if we ever get "rank" working, exclude it here.
+ res_values = type(values)._from_sequence(res_values, dtype=values.dtype)
+
+ # If we are DataFrameGroupBy and went through a SeriesGroupByPath
+ # then we need to reshape
+ # GH#32223 includes case with IntegerArray values, ndarray res_values
+ # test_groupby_duplicate_columns with object dtype values
+ return ensure_block_shape(res_values, ndim=ndim)
+
def _cython_agg_general(
self, how: str, alt=None, numeric_only: bool = True, min_count: int = -1
):
| - [ ] closes #xxxx
- [ ] tests added / passed
- [ ] Ensure all linting tests pass, see [here](https://pandas.pydata.org/pandas-docs/dev/development/contributing.html#code-standards) for how to run them
- [ ] whatsnew entry
| https://api.github.com/repos/pandas-dev/pandas/pulls/41289 | 2021-05-03T19:24:11Z | 2021-05-04T16:27:25Z | 2021-05-04T16:27:25Z | 2021-05-04T16:40:44Z |
Bug in iloc.setitem orienting IntegerArray into the wrong direction | diff --git a/pandas/core/indexers.py b/pandas/core/indexers.py
index d3756d6252c0a..4f3f536cd3290 100644
--- a/pandas/core/indexers.py
+++ b/pandas/core/indexers.py
@@ -164,7 +164,7 @@ def check_setitem_lengths(indexer, value, values) -> bool:
# a) not necessarily 1-D indexers, e.g. tuple
# b) boolean indexers e.g. BoolArray
if is_list_like(value):
- if len(indexer) != len(value):
+ if len(indexer) != len(value) and values.ndim == 1:
# boolean with truth values == len of the value is ok too
if not (
isinstance(indexer, np.ndarray)
diff --git a/pandas/core/internals/blocks.py b/pandas/core/internals/blocks.py
index 61396fdf372d5..d87e77043a713 100644
--- a/pandas/core/internals/blocks.py
+++ b/pandas/core/internals/blocks.py
@@ -970,12 +970,7 @@ def setitem(self, indexer, value):
values[indexer] = value
elif is_ea_value:
- # GH#38952
- if values.ndim == 1:
- values[indexer] = value
- else:
- # TODO(EA2D): special case not needed with 2D EA
- values[indexer] = value.to_numpy(values.dtype).reshape(-1, 1)
+ values[indexer] = value
else:
# error: Argument 1 to "setitem_datetimelike_compat" has incompatible type
diff --git a/pandas/tests/frame/indexing/test_setitem.py b/pandas/tests/frame/indexing/test_setitem.py
index 4004e595c832f..f46ecf61138b1 100644
--- a/pandas/tests/frame/indexing/test_setitem.py
+++ b/pandas/tests/frame/indexing/test_setitem.py
@@ -18,6 +18,7 @@
PeriodDtype,
)
+import pandas as pd
from pandas import (
Categorical,
DataFrame,
@@ -792,22 +793,29 @@ def test_setitem_slice_position(self):
tm.assert_frame_equal(df, expected)
@pytest.mark.parametrize("indexer", [tm.setitem, tm.iloc])
- @pytest.mark.parametrize("box", [Series, np.array, list])
+ @pytest.mark.parametrize("box", [Series, np.array, list, pd.array])
@pytest.mark.parametrize("n", [1, 2, 3])
- def test_setitem_broadcasting_rhs(self, n, box, indexer):
+ def test_setitem_slice_indexer_broadcasting_rhs(self, n, box, indexer):
# GH#40440
- # TODO: Add pandas array as box after GH#40933 is fixed
df = DataFrame([[1, 3, 5]] + [[2, 4, 6]] * n, columns=["a", "b", "c"])
indexer(df)[1:] = box([10, 11, 12])
expected = DataFrame([[1, 3, 5]] + [[10, 11, 12]] * n, columns=["a", "b", "c"])
tm.assert_frame_equal(df, expected)
+ @pytest.mark.parametrize("box", [Series, np.array, list, pd.array])
+ @pytest.mark.parametrize("n", [1, 2, 3])
+ def test_setitem_list_indexer_broadcasting_rhs(self, n, box):
+ # GH#40440
+ df = DataFrame([[1, 3, 5]] + [[2, 4, 6]] * n, columns=["a", "b", "c"])
+ df.iloc[list(range(1, n + 1))] = box([10, 11, 12])
+ expected = DataFrame([[1, 3, 5]] + [[10, 11, 12]] * n, columns=["a", "b", "c"])
+ tm.assert_frame_equal(df, expected)
+
@pytest.mark.parametrize("indexer", [tm.setitem, tm.iloc])
- @pytest.mark.parametrize("box", [Series, np.array, list])
+ @pytest.mark.parametrize("box", [Series, np.array, list, pd.array])
@pytest.mark.parametrize("n", [1, 2, 3])
- def test_setitem_broadcasting_rhs_mixed_dtypes(self, n, box, indexer):
+ def test_setitem_slice_broadcasting_rhs_mixed_dtypes(self, n, box, indexer):
# GH#40440
- # TODO: Add pandas array as box after GH#40933 is fixed
df = DataFrame(
[[1, 3, 5], ["x", "y", "z"]] + [[2, 4, 6]] * n, columns=["a", "b", "c"]
)
diff --git a/pandas/tests/indexing/test_iloc.py b/pandas/tests/indexing/test_iloc.py
index ad0d4245d58c3..446b616111e9e 100644
--- a/pandas/tests/indexing/test_iloc.py
+++ b/pandas/tests/indexing/test_iloc.py
@@ -122,7 +122,11 @@ def test_iloc_setitem_ea_inplace(self, frame_or_series, box, using_array_manager
else:
values = obj[0].values
- obj.iloc[:2] = box(arr[2:])
+ if frame_or_series is Series:
+ obj.iloc[:2] = box(arr[2:])
+ else:
+ obj.iloc[:2, 0] = box(arr[2:])
+
expected = frame_or_series(np.array([3, 4, 3, 4], dtype="i8"))
tm.assert_equal(obj, expected)
| - [x] closes #40933
- [x] tests added / passed
- [x] Ensure all linting tests pass, see [here](https://pandas.pydata.org/pandas-docs/dev/development/contributing.html#code-standards) for how to run them
This is more or less a follow up of #39040, so don't think we need a whatsnew? | https://api.github.com/repos/pandas-dev/pandas/pulls/41288 | 2021-05-03T19:08:19Z | 2021-05-05T12:41:14Z | 2021-05-05T12:41:13Z | 2021-05-05T19:51:11Z |
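The fixed behavior, as exercised by the new parametrized tests with `pd.array` as the box: an ExtensionArray on the right-hand side now broadcasts row-wise the same way a list or ndarray does (sketch against current pandas):

```python
import pandas as pd

df = pd.DataFrame([[1, 3, 5], [2, 4, 6], [2, 4, 6]], columns=["a", "b", "c"])

# The length-3 RHS aligns with the 3 columns and broadcasts over the sliced rows,
# instead of being oriented column-wise as in the reported bug.
df.iloc[1:] = pd.array([10, 11, 12])
print(df.to_numpy().tolist())  # [[1, 3, 5], [10, 11, 12], [10, 11, 12]]
```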
DOC: Fix docs in pandas/io/excel/* | diff --git a/ci/code_checks.sh b/ci/code_checks.sh
index 149e10b48933d..8e876eebf93ad 100755
--- a/ci/code_checks.sh
+++ b/ci/code_checks.sh
@@ -15,7 +15,7 @@
# $ ./ci/code_checks.sh code # checks on imported code
# $ ./ci/code_checks.sh doctests # run doctests
# $ ./ci/code_checks.sh docstrings # validate docstring errors
-# $ ./ci/code_checks.sh typing # run static type analysis
+# $ ./ci/code_checks.sh typing # run static type analysis
[[ -z "$1" || "$1" == "lint" || "$1" == "patterns" || "$1" == "code" || "$1" == "doctests" || "$1" == "docstrings" || "$1" == "typing" ]] || \
{ echo "Unknown command $1. Usage: $0 [lint|patterns|code|doctests|docstrings|typing]"; exit 9999; }
@@ -140,6 +140,7 @@ if [[ -z "$CHECK" || "$CHECK" == "doctests" ]]; then
pandas/core/window/ \
pandas/errors/ \
pandas/io/clipboard/ \
+ pandas/io/excel/ \
pandas/io/parsers/ \
pandas/io/sas/ \
pandas/tseries/
diff --git a/pandas/io/excel/_base.py b/pandas/io/excel/_base.py
index 3c9dd90c0a0cb..d26a991ba2820 100644
--- a/pandas/io/excel/_base.py
+++ b/pandas/io/excel/_base.py
@@ -707,30 +707,45 @@ class ExcelWriter(metaclass=abc.ABCMeta):
--------
Default usage:
- >>> with ExcelWriter('path_to_file.xlsx') as writer:
+ >>> df = pd.DataFrame([["ABC", "XYZ"]], columns=["Foo", "Bar"])
+ >>> with ExcelWriter("path_to_file.xlsx") as writer:
... df.to_excel(writer)
To write to separate sheets in a single file:
- >>> with ExcelWriter('path_to_file.xlsx') as writer:
- ... df1.to_excel(writer, sheet_name='Sheet1')
- ... df2.to_excel(writer, sheet_name='Sheet2')
+ >>> df1 = pd.DataFrame([["AAA", "BBB"]], columns=["Spam", "Egg"])
+ >>> df2 = pd.DataFrame([["ABC", "XYZ"]], columns=["Foo", "Bar"])
+ >>> with ExcelWriter("path_to_file.xlsx") as writer:
+ ... df1.to_excel(writer, sheet_name="Sheet1")
+ ... df2.to_excel(writer, sheet_name="Sheet2")
You can set the date format or datetime format:
- >>> with ExcelWriter('path_to_file.xlsx',
- ... date_format='YYYY-MM-DD',
- ... datetime_format='YYYY-MM-DD HH:MM:SS') as writer:
+ >>> from datetime import date, datetime
+ >>> df = pd.DataFrame(
+ ... [
+ ... [date(2014, 1, 31), date(1999, 9, 24)],
+ ... [datetime(1998, 5, 26, 23, 33, 4), datetime(2014, 2, 28, 13, 5, 13)],
+ ... ],
+ ... index=["Date", "Datetime"],
+ ... columns=["X", "Y"],
+ ... )
+ >>> with ExcelWriter(
+ ... "path_to_file.xlsx",
+ ... date_format="YYYY-MM-DD",
+ ... datetime_format="YYYY-MM-DD HH:MM:SS"
+ ... ) as writer:
... df.to_excel(writer)
You can also append to an existing Excel file:
- >>> with ExcelWriter('path_to_file.xlsx', mode='a') as writer:
- ... df.to_excel(writer, sheet_name='Sheet3')
+ >>> with ExcelWriter("path_to_file.xlsx", mode="a", engine="openpyxl") as writer:
+ ... df.to_excel(writer, sheet_name="Sheet3")
You can store Excel file in RAM:
>>> import io
+ >>> df = pd.DataFrame([["ABC", "XYZ"]], columns=["Foo", "Bar"])
>>> buffer = io.BytesIO()
>>> with pd.ExcelWriter(buffer) as writer:
... df.to_excel(writer)
@@ -738,8 +753,9 @@ class ExcelWriter(metaclass=abc.ABCMeta):
You can pack Excel file into zip archive:
>>> import zipfile
- >>> with zipfile.ZipFile('path_to_file.zip', 'w') as zf:
- ... with zf.open('filename.xlsx', 'w') as buffer:
+ >>> df = pd.DataFrame([["ABC", "XYZ"]], columns=["Foo", "Bar"])
+ >>> with zipfile.ZipFile("path_to_file.zip", "w") as zf:
+ ... with zf.open("filename.xlsx", "w") as buffer:
... with pd.ExcelWriter(buffer) as writer:
... df.to_excel(writer)
"""
| - [x] tests added / passed
- [x] Ensure all linting tests pass, see [here](https://pandas.pydata.org/pandas-docs/dev/development/contributing.html#code-standards) for how to run them
| https://api.github.com/repos/pandas-dev/pandas/pulls/41286 | 2021-05-03T16:23:40Z | 2021-05-04T12:50:12Z | 2021-05-04T12:50:12Z | 2021-05-04T13:50:41Z |
TYP: core.sorting | diff --git a/pandas/core/frame.py b/pandas/core/frame.py
index 8e12a8cb18b68..50837e1b3ed50 100644
--- a/pandas/core/frame.py
+++ b/pandas/core/frame.py
@@ -6146,7 +6146,7 @@ def duplicated(
if self.empty:
return self._constructor_sliced(dtype=bool)
- def f(vals):
+ def f(vals) -> tuple[np.ndarray, int]:
labels, shape = algorithms.factorize(vals, size_hint=len(self))
return labels.astype("i8", copy=False), len(shape)
@@ -6173,7 +6173,14 @@ def f(vals):
vals = (col.values for name, col in self.items() if name in subset)
labels, shape = map(list, zip(*map(f, vals)))
- ids = get_group_index(labels, shape, sort=False, xnull=False)
+ ids = get_group_index(
+ labels,
+ # error: Argument 1 to "tuple" has incompatible type "List[_T]";
+ # expected "Iterable[int]"
+ tuple(shape), # type: ignore[arg-type]
+ sort=False,
+ xnull=False,
+ )
result = self._constructor_sliced(duplicated_int64(ids, keep), index=self.index)
return result.__finalize__(self, method="duplicated")
diff --git a/pandas/core/indexes/multi.py b/pandas/core/indexes/multi.py
index 794f13bbfb6b1..3ddb5b1248060 100644
--- a/pandas/core/indexes/multi.py
+++ b/pandas/core/indexes/multi.py
@@ -1611,7 +1611,7 @@ def _inferred_type_levels(self) -> list[str]:
@doc(Index.duplicated)
def duplicated(self, keep="first") -> np.ndarray:
- shape = map(len, self.levels)
+ shape = tuple(len(lev) for lev in self.levels)
ids = get_group_index(self.codes, shape, sort=False, xnull=False)
return duplicated_int64(ids, keep)
diff --git a/pandas/core/reshape/reshape.py b/pandas/core/reshape/reshape.py
index d1e076da9293d..037fe5366255a 100644
--- a/pandas/core/reshape/reshape.py
+++ b/pandas/core/reshape/reshape.py
@@ -142,7 +142,7 @@ def _indexer_and_to_sort(
codes = list(self.index.codes)
levs = list(self.index.levels)
to_sort = codes[:v] + codes[v + 1 :] + [codes[v]]
- sizes = [len(x) for x in levs[:v] + levs[v + 1 :] + [levs[v]]]
+ sizes = tuple(len(x) for x in levs[:v] + levs[v + 1 :] + [levs[v]])
comp_index, obs_ids = get_compressed_ids(to_sort, sizes)
ngroups = len(obs_ids)
@@ -166,7 +166,7 @@ def _make_selectors(self):
# make the mask
remaining_labels = self.sorted_labels[:-1]
- level_sizes = [len(x) for x in new_levels]
+ level_sizes = tuple(len(x) for x in new_levels)
comp_index, obs_ids = get_compressed_ids(remaining_labels, level_sizes)
ngroups = len(obs_ids)
@@ -353,7 +353,7 @@ def _unstack_multiple(data, clocs, fill_value=None):
rcodes = [index.codes[i] for i in rlocs]
rnames = [index.names[i] for i in rlocs]
- shape = [len(x) for x in clevels]
+ shape = tuple(len(x) for x in clevels)
group_index = get_group_index(ccodes, shape, sort=False, xnull=False)
comp_ids, obs_ids = compress_group_index(group_index, sort=False)
diff --git a/pandas/core/sorting.py b/pandas/core/sorting.py
index f5cd390f077a6..f6c1afbde0bd9 100644
--- a/pandas/core/sorting.py
+++ b/pandas/core/sorting.py
@@ -18,7 +18,10 @@
lib,
)
from pandas._libs.hashtable import unique_label_indices
-from pandas._typing import IndexKeyFunc
+from pandas._typing import (
+ IndexKeyFunc,
+ Shape,
+)
from pandas.core.dtypes.common import (
ensure_int64,
@@ -93,7 +96,7 @@ def get_indexer_indexer(
return indexer
-def get_group_index(labels, shape, sort: bool, xnull: bool):
+def get_group_index(labels, shape: Shape, sort: bool, xnull: bool):
"""
For the particular label_list, gets the offsets into the hypothetical list
representing the totally ordered cartesian product of all possible label
@@ -108,7 +111,7 @@ def get_group_index(labels, shape, sort: bool, xnull: bool):
----------
labels : sequence of arrays
Integers identifying levels at each location
- shape : sequence of ints
+ shape : tuple[int, ...]
Number of unique levels at each location
sort : bool
If the ranks of returned ids should match lexical ranks of labels
@@ -134,33 +137,36 @@ def _int64_cut_off(shape) -> int:
return i
return len(shape)
- def maybe_lift(lab, size):
+ def maybe_lift(lab, size) -> tuple[np.ndarray, int]:
# promote nan values (assigned -1 label in lab array)
# so that all output values are non-negative
return (lab + 1, size + 1) if (lab == -1).any() else (lab, size)
- labels = map(ensure_int64, labels)
+ labels = [ensure_int64(x) for x in labels]
+ lshape = list(shape)
if not xnull:
- labels, shape = map(list, zip(*map(maybe_lift, labels, shape)))
+ for i, (lab, size) in enumerate(zip(labels, shape)):
+ lab, size = maybe_lift(lab, size)
+ labels[i] = lab
+ lshape[i] = size
labels = list(labels)
- shape = list(shape)
# Iteratively process all the labels in chunks sized so less
# than _INT64_MAX unique int ids will be required for each chunk
while True:
# how many levels can be done without overflow:
- nlev = _int64_cut_off(shape)
+ nlev = _int64_cut_off(lshape)
# compute flat ids for the first `nlev` levels
- stride = np.prod(shape[1:nlev], dtype="i8")
+ stride = np.prod(lshape[1:nlev], dtype="i8")
out = stride * labels[0].astype("i8", subok=False, copy=False)
for i in range(1, nlev):
- if shape[i] == 0:
- stride = 0
+ if lshape[i] == 0:
+ stride = np.int64(0)
else:
- stride //= shape[i]
+ stride //= lshape[i]
out += labels[i] * stride
if xnull: # exclude nulls
@@ -169,7 +175,7 @@ def maybe_lift(lab, size):
mask |= lab == -1
out[mask] = -1
- if nlev == len(shape): # all levels done!
+ if nlev == len(lshape): # all levels done!
break
# compress what has been done so far in order to avoid overflow
@@ -177,12 +183,12 @@ def maybe_lift(lab, size):
comp_ids, obs_ids = compress_group_index(out, sort=sort)
labels = [comp_ids] + labels[nlev:]
- shape = [len(obs_ids)] + shape[nlev:]
+ lshape = [len(obs_ids)] + lshape[nlev:]
return out
-def get_compressed_ids(labels, sizes) -> tuple[np.ndarray, np.ndarray]:
+def get_compressed_ids(labels, sizes: Shape) -> tuple[np.ndarray, np.ndarray]:
"""
Group_index is offsets into cartesian product of all possible labels. This
space can be huge, so this function compresses it, by computing offsets
@@ -191,7 +197,7 @@ def get_compressed_ids(labels, sizes) -> tuple[np.ndarray, np.ndarray]:
Parameters
----------
labels : list of label arrays
- sizes : list of size of the levels
+ sizes : tuple[int] of size of the levels
Returns
-------
@@ -252,12 +258,11 @@ def decons_obs_group_ids(comp_ids: np.ndarray, obs_ids, shape, labels, xnull: bo
return out if xnull or not lift.any() else [x - y for x, y in zip(out, lift)]
# TODO: unique_label_indices only used here, should take ndarray[np.intp]
- i = unique_label_indices(ensure_int64(comp_ids))
- i8copy = lambda a: a.astype("i8", subok=False, copy=True)
- return [i8copy(lab[i]) for lab in labels]
+ indexer = unique_label_indices(ensure_int64(comp_ids))
+ return [lab[indexer].astype(np.intp, subok=False, copy=True) for lab in labels]
-def indexer_from_factorized(labels, shape, compress: bool = True) -> np.ndarray:
+def indexer_from_factorized(labels, shape: Shape, compress: bool = True) -> np.ndarray:
# returned ndarray is np.intp
ids = get_group_index(labels, shape, sort=True, xnull=False)
@@ -334,7 +339,7 @@ def lexsort_indexer(
shape.append(n)
labels.append(codes)
- return indexer_from_factorized(labels, shape)
+ return indexer_from_factorized(labels, tuple(shape))
def nargsort(
@@ -576,7 +581,7 @@ def get_indexer_dict(
"""
shape = [len(x) for x in keys]
- group_index = get_group_index(label_list, shape, sort=True, xnull=True)
+ group_index = get_group_index(label_list, tuple(shape), sort=True, xnull=True)
if np.all(group_index == -1):
# Short-circuit, lib.indices_fast will return the same
return {}
| both mypy and I have a hard time with `labels, lshape = map(list, zip(*map(maybe_lift, labels, lshape)))` | https://api.github.com/repos/pandas-dev/pandas/pulls/41285 | 2021-05-03T15:16:39Z | 2021-05-04T12:58:41Z | 2021-05-04T12:58:41Z | 2021-05-04T14:00:12Z |
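The dense one-liner mentioned above is what the diff replaces with an explicit loop. A minimal sketch showing the two forms are equivalent (using a toy `maybe_lift` over plain lists, standing in for the pandas helper):

```python
def maybe_lift(lab, size):
    # Toy stand-in for the pandas helper: promote -1 labels so that
    # all output values are non-negative.
    return ([x + 1 for x in lab], size + 1) if -1 in lab else (lab, size)

labels = [[0, -1, 1], [0, 1, 2]]
shape = (2, 3)

# Original dense form: hard for both readers and mypy to follow.
zipped_labels, zipped_shape = map(list, zip(*map(maybe_lift, labels, shape)))

# Explicit-loop rewrite, in the spirit of the PR above.
loop_labels = list(labels)
lshape = list(shape)
for i, (lab, size) in enumerate(zip(loop_labels, lshape)):
    lab, size = maybe_lift(lab, size)
    loop_labels[i] = lab
    lshape[i] = size

assert zipped_labels == loop_labels
assert zipped_shape == lshape
```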
DOC: Fix docs for io/json/* | diff --git a/ci/code_checks.sh b/ci/code_checks.sh
index 8e876eebf93ad..7cc171330e01a 100755
--- a/ci/code_checks.sh
+++ b/ci/code_checks.sh
@@ -140,6 +140,7 @@ if [[ -z "$CHECK" || "$CHECK" == "doctests" ]]; then
pandas/core/window/ \
pandas/errors/ \
pandas/io/clipboard/ \
+ pandas/io/json/ \
pandas/io/excel/ \
pandas/io/parsers/ \
pandas/io/sas/ \
diff --git a/pandas/io/json/_json.py b/pandas/io/json/_json.py
index b7493ebeadf34..259850e9a7233 100644
--- a/pandas/io/json/_json.py
+++ b/pandas/io/json/_json.py
@@ -526,9 +526,13 @@ def read_json(
Encoding/decoding a Dataframe using ``'split'`` formatted JSON:
>>> df.to_json(orient='split')
- '{{"columns":["col 1","col 2"],
- "index":["row 1","row 2"],
- "data":[["a","b"],["c","d"]]}}'
+ '\
+{{\
+"columns":["col 1","col 2"],\
+"index":["row 1","row 2"],\
+"data":[["a","b"],["c","d"]]\
+}}\
+'
>>> pd.read_json(_, orient='split')
col 1 col 2
row 1 a b
@@ -538,6 +542,7 @@ def read_json(
>>> df.to_json(orient='index')
'{{"row 1":{{"col 1":"a","col 2":"b"}},"row 2":{{"col 1":"c","col 2":"d"}}}}'
+
>>> pd.read_json(_, orient='index')
col 1 col 2
row 1 a b
@@ -556,13 +561,18 @@ def read_json(
Encoding with Table Schema
>>> df.to_json(orient='table')
- '{{"schema": {{"fields": [{{"name": "index", "type": "string"}},
- {{"name": "col 1", "type": "string"}},
- {{"name": "col 2", "type": "string"}}],
- "primaryKey": "index",
- "pandas_version": "0.20.0"}},
- "data": [{{"index": "row 1", "col 1": "a", "col 2": "b"}},
- {{"index": "row 2", "col 1": "c", "col 2": "d"}}]}}'
+ '\
+{{"schema":{{"fields":[\
+{{"name":"index","type":"string"}},\
+{{"name":"col 1","type":"string"}},\
+{{"name":"col 2","type":"string"}}],\
+"primaryKey":["index"],\
+"pandas_version":"0.20.0"}},\
+"data":[\
+{{"index":"row 1","col 1":"a","col 2":"b"}},\
+{{"index":"row 2","col 1":"c","col 2":"d"}}]\
+}}\
+'
"""
if orient == "table" and dtype:
raise ValueError("cannot pass both dtype and orient='table'")
diff --git a/pandas/io/json/_normalize.py b/pandas/io/json/_normalize.py
index 3d07b9d98f9a9..5927d6482d3b0 100644
--- a/pandas/io/json/_normalize.py
+++ b/pandas/io/json/_normalize.py
@@ -70,15 +70,17 @@ def nested_to_record(
Examples
--------
- IN[52]: nested_to_record(dict(flat1=1,dict1=dict(c=1,d=2),
- nested=dict(e=dict(c=1,d=2),d=2)))
- Out[52]:
- {'dict1.c': 1,
- 'dict1.d': 2,
- 'flat1': 1,
- 'nested.d': 2,
- 'nested.e.c': 1,
- 'nested.e.d': 2}
+ >>> nested_to_record(
+ ... dict(flat1=1, dict1=dict(c=1, d=2), nested=dict(e=dict(c=1, d=2), d=2))
+ ... )
+ {\
+'flat1': 1, \
+'dict1.c': 1, \
+'dict1.d': 2, \
+'nested.e.c': 1, \
+'nested.e.d': 2, \
+'nested.d': 2\
+}
"""
singleton = False
if isinstance(ds, dict):
@@ -208,18 +210,21 @@ def _simple_json_normalize(
Examples
--------
- IN[52]: _simple_json_normalize({
- 'flat1': 1,
- 'dict1': {'c': 1, 'd': 2},
- 'nested': {'e': {'c': 1, 'd': 2}, 'd': 2}
- })
- Out[52]:
- {'dict1.c': 1,
- 'dict1.d': 2,
- 'flat1': 1,
- 'nested.d': 2,
- 'nested.e.c': 1,
- 'nested.e.d': 2}
+ >>> _simple_json_normalize(
+ ... {
+ ... "flat1": 1,
+ ... "dict1": {"c": 1, "d": 2},
+ ... "nested": {"e": {"c": 1, "d": 2}, "d": 2},
+ ... }
+ ... )
+ {\
+'flat1': 1, \
+'dict1.c': 1, \
+'dict1.d': 2, \
+'nested.e.c': 1, \
+'nested.e.d': 2, \
+'nested.d': 2\
+}
"""
normalised_json_object = {}
@@ -283,22 +288,30 @@ def _json_normalize(
Examples
--------
- >>> data = [{'id': 1, 'name': {'first': 'Coleen', 'last': 'Volk'}},
- ... {'name': {'given': 'Mark', 'family': 'Regner'}},
- ... {'id': 2, 'name': 'Faye Raker'}]
+ >>> data = [
+ ... {"id": 1, "name": {"first": "Coleen", "last": "Volk"}},
+ ... {"name": {"given": "Mark", "family": "Regner"}},
+ ... {"id": 2, "name": "Faye Raker"},
+ ... ]
>>> pd.json_normalize(data)
id name.first name.last name.given name.family name
0 1.0 Coleen Volk NaN NaN NaN
1 NaN NaN NaN Mark Regner NaN
2 2.0 NaN NaN NaN NaN Faye Raker
- >>> data = [{'id': 1,
- ... 'name': "Cole Volk",
- ... 'fitness': {'height': 130, 'weight': 60}},
- ... {'name': "Mark Reg",
- ... 'fitness': {'height': 130, 'weight': 60}},
- ... {'id': 2, 'name': 'Faye Raker',
- ... 'fitness': {'height': 130, 'weight': 60}}]
+ >>> data = [
+ ... {
+ ... "id": 1,
+ ... "name": "Cole Volk",
+ ... "fitness": {"height": 130, "weight": 60},
+ ... },
+ ... {"name": "Mark Reg", "fitness": {"height": 130, "weight": 60}},
+ ... {
+ ... "id": 2,
+ ... "name": "Faye Raker",
+ ... "fitness": {"height": 130, "weight": 60},
+ ... },
+ ... ]
>>> pd.json_normalize(data, max_level=0)
id name fitness
0 1.0 Cole Volk {'height': 130, 'weight': 60}
@@ -307,32 +320,49 @@ def _json_normalize(
Normalizes nested data up to level 1.
- >>> data = [{'id': 1,
- ... 'name': "Cole Volk",
- ... 'fitness': {'height': 130, 'weight': 60}},
- ... {'name': "Mark Reg",
- ... 'fitness': {'height': 130, 'weight': 60}},
- ... {'id': 2, 'name': 'Faye Raker',
- ... 'fitness': {'height': 130, 'weight': 60}}]
+ >>> data = [
+ ... {
+ ... "id": 1,
+ ... "name": "Cole Volk",
+ ... "fitness": {"height": 130, "weight": 60},
+ ... },
+ ... {"name": "Mark Reg", "fitness": {"height": 130, "weight": 60}},
+ ... {
+ ... "id": 2,
+ ... "name": "Faye Raker",
+ ... "fitness": {"height": 130, "weight": 60},
+ ... },
+ ... ]
>>> pd.json_normalize(data, max_level=1)
id name fitness.height fitness.weight
0 1.0 Cole Volk 130 60
1 NaN Mark Reg 130 60
2 2.0 Faye Raker 130 60
- >>> data = [{'state': 'Florida',
- ... 'shortname': 'FL',
- ... 'info': {'governor': 'Rick Scott'},
- ... 'counties': [{'name': 'Dade', 'population': 12345},
- ... {'name': 'Broward', 'population': 40000},
- ... {'name': 'Palm Beach', 'population': 60000}]},
- ... {'state': 'Ohio',
- ... 'shortname': 'OH',
- ... 'info': {'governor': 'John Kasich'},
- ... 'counties': [{'name': 'Summit', 'population': 1234},
- ... {'name': 'Cuyahoga', 'population': 1337}]}]
- >>> result = pd.json_normalize(data, 'counties', ['state', 'shortname',
- ... ['info', 'governor']])
+ >>> data = [
+ ... {
+ ... "state": "Florida",
+ ... "shortname": "FL",
+ ... "info": {"governor": "Rick Scott"},
+ ... "counties": [
+ ... {"name": "Dade", "population": 12345},
+ ... {"name": "Broward", "population": 40000},
+ ... {"name": "Palm Beach", "population": 60000},
+ ... ],
+ ... },
+ ... {
+ ... "state": "Ohio",
+ ... "shortname": "OH",
+ ... "info": {"governor": "John Kasich"},
+ ... "counties": [
+ ... {"name": "Summit", "population": 1234},
+ ... {"name": "Cuyahoga", "population": 1337},
+ ... ],
+ ... },
+ ... ]
+ >>> result = pd.json_normalize(
+ ... data, "counties", ["state", "shortname", ["info", "governor"]]
+ ... )
>>> result
name population state shortname info.governor
0 Dade 12345 Florida FL Rick Scott
@@ -341,8 +371,8 @@ def _json_normalize(
3 Summit 1234 Ohio OH John Kasich
4 Cuyahoga 1337 Ohio OH John Kasich
- >>> data = {'A': [1, 2]}
- >>> pd.json_normalize(data, 'A', record_prefix='Prefix.')
+ >>> data = {"A": [1, 2]}
+ >>> pd.json_normalize(data, "A", record_prefix="Prefix.")
Prefix.0
0 1
1 2
diff --git a/pandas/io/json/_table_schema.py b/pandas/io/json/_table_schema.py
index ea47dca4f079e..87ea109c20f43 100644
--- a/pandas/io/json/_table_schema.py
+++ b/pandas/io/json/_table_schema.py
@@ -155,21 +155,25 @@ def convert_json_field_to_pandas_type(field):
Examples
--------
- >>> convert_json_field_to_pandas_type({'name': 'an_int',
- 'type': 'integer'})
+ >>> convert_json_field_to_pandas_type({"name": "an_int", "type": "integer"})
'int64'
- >>> convert_json_field_to_pandas_type({'name': 'a_categorical',
- 'type': 'any',
- 'constraints': {'enum': [
- 'a', 'b', 'c']},
- 'ordered': True})
- 'CategoricalDtype(categories=['a', 'b', 'c'], ordered=True)'
- >>> convert_json_field_to_pandas_type({'name': 'a_datetime',
- 'type': 'datetime'})
+
+ >>> convert_json_field_to_pandas_type(
+ ... {
+ ... "name": "a_categorical",
+ ... "type": "any",
+ ... "constraints": {"enum": ["a", "b", "c"]},
+ ... "ordered": True,
+ ... }
+ ... )
+ CategoricalDtype(categories=['a', 'b', 'c'], ordered=True)
+
+ >>> convert_json_field_to_pandas_type({"name": "a_datetime", "type": "datetime"})
'datetime64[ns]'
- >>> convert_json_field_to_pandas_type({'name': 'a_datetime_with_tz',
- 'type': 'datetime',
- 'tz': 'US/Central'})
+
+ >>> convert_json_field_to_pandas_type(
+ ... {"name": "a_datetime_with_tz", "type": "datetime", "tz": "US/Central"}
+ ... )
'datetime64[ns, US/Central]'
"""
typ = field["type"]
@@ -245,12 +249,13 @@ def build_table_schema(
... 'C': pd.date_range('2016-01-01', freq='d', periods=3),
... }, index=pd.Index(range(3), name='idx'))
>>> build_table_schema(df)
- {'fields': [{'name': 'idx', 'type': 'integer'},
- {'name': 'A', 'type': 'integer'},
- {'name': 'B', 'type': 'string'},
- {'name': 'C', 'type': 'datetime'}],
- 'pandas_version': '0.20.0',
- 'primaryKey': ['idx']}
+ {'fields': \
+[{'name': 'idx', 'type': 'integer'}, \
+{'name': 'A', 'type': 'integer'}, \
+{'name': 'B', 'type': 'string'}, \
+{'name': 'C', 'type': 'datetime'}], \
+'primaryKey': ['idx'], \
+'pandas_version': '0.20.0'}
"""
if index is True:
data = set_default_names(data)
| - [x] tests added / passed
- [x] Ensure all linting tests pass, see [here](https://pandas.pydata.org/pandas-docs/dev/development/contributing.html#code-standards) for how to run them
| https://api.github.com/repos/pandas-dev/pandas/pulls/41284 | 2021-05-03T15:08:39Z | 2021-05-04T21:55:08Z | 2021-05-04T21:55:08Z | 2021-05-05T12:13:24Z |
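The trailing-backslash trick used in the rewritten docstrings above relies on ordinary Python string behavior: inside a non-raw docstring, a backslash immediately before a newline is a line continuation, so a long single-line repr can be written across several source lines and still match doctest's single-line output. A minimal sketch (a toy function, not the pandas docstrings):

```python
import doctest

def make_json():
    """Return a long single-line string.

    >>> make_json()
    '\
{"columns":["col 1","col 2"],\
"index":["row 1","row 2"]}'
    """
    return '{"columns":["col 1","col 2"],"index":["row 1","row 2"]}'

# The continuation backslashes are removed when the docstring is parsed,
# so doctest sees one expected-output line and the example passes.
assert doctest.testmod().failed == 0
```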
TYP Series and DataFrame currently type-check as hashable | diff --git a/doc/source/whatsnew/v1.3.0.rst b/doc/source/whatsnew/v1.3.0.rst
index 0641b32383125..60dc7096c9d1e 100644
--- a/doc/source/whatsnew/v1.3.0.rst
+++ b/doc/source/whatsnew/v1.3.0.rst
@@ -707,6 +707,7 @@ Other API changes
- Added new ``engine`` and ``**engine_kwargs`` parameters to :meth:`DataFrame.to_sql` to support other future "SQL engines". Currently we still only use ``SQLAlchemy`` under the hood, but more engines are planned to be supported such as `turbodbc <https://turbodbc.readthedocs.io/en/latest/>`_ (:issue:`36893`)
- Removed redundant ``freq`` from :class:`PeriodIndex` string representation (:issue:`41653`)
- :meth:`ExtensionDtype.construct_array_type` is now a required method instead of an optional one for :class:`ExtensionDtype` subclasses (:issue:`24860`)
+- Calling ``hash`` on non-hashable pandas objects will now raise ``TypeError`` with the built-in error message (e.g. ``unhashable type: 'Series'``). Previously it would raise a custom message such as ``'Series' objects are mutable, thus they cannot be hashed``. Furthermore, ``isinstance(<Series>, abc.collections.Hashable)`` will now return ``False`` (:issue:`40013`)
- :meth:`.Styler.from_custom_template` now has two new arguments for template names, and removed the old ``name``, due to template inheritance having been introducing for better parsing (:issue:`42053`). Subclassing modifications to Styler attributes are also needed.
.. _whatsnew_130.api_breaking.build:
diff --git a/pandas/core/arrays/base.py b/pandas/core/arrays/base.py
index 888c7cbbffb59..96bd4280f4da4 100644
--- a/pandas/core/arrays/base.py
+++ b/pandas/core/arrays/base.py
@@ -1296,8 +1296,10 @@ def _reduce(self, name: str, *, skipna: bool = True, **kwargs):
"""
raise TypeError(f"cannot perform {name} with type {self.dtype}")
- def __hash__(self) -> int:
- raise TypeError(f"unhashable type: {repr(type(self).__name__)}")
+ # https://github.com/python/typeshed/issues/2148#issuecomment-520783318
+ # Incompatible types in assignment (expression has type "None", base class
+ # "object" defined the type as "Callable[[object], int]")
+ __hash__: None # type: ignore[assignment]
# ------------------------------------------------------------------------
# Non-Optimized Default Methods
diff --git a/pandas/core/frame.py b/pandas/core/frame.py
index 7432aa90afbc2..6f621699aa5ae 100644
--- a/pandas/core/frame.py
+++ b/pandas/core/frame.py
@@ -6180,7 +6180,10 @@ def f(vals) -> tuple[np.ndarray, int]:
return labels.astype("i8", copy=False), len(shape)
if subset is None:
- subset = self.columns
+ # Incompatible types in assignment
+ # (expression has type "Index", variable has type "Sequence[Any]")
+ # (pending on https://github.com/pandas-dev/pandas/issues/28770)
+ subset = self.columns # type: ignore[assignment]
elif (
not np.iterable(subset)
or isinstance(subset, str)
diff --git a/pandas/core/generic.py b/pandas/core/generic.py
index 82895ab9eb67a..c052b977ea07a 100644
--- a/pandas/core/generic.py
+++ b/pandas/core/generic.py
@@ -1874,11 +1874,10 @@ def _drop_labels_or_levels(self, keys, axis: int = 0):
# ----------------------------------------------------------------------
# Iteration
- def __hash__(self) -> int:
- raise TypeError(
- f"{repr(type(self).__name__)} objects are mutable, "
- f"thus they cannot be hashed"
- )
+ # https://github.com/python/typeshed/issues/2148#issuecomment-520783318
+ # Incompatible types in assignment (expression has type "None", base class
+ # "object" defined the type as "Callable[[object], int]")
+ __hash__: None # type: ignore[assignment]
def __iter__(self):
"""
diff --git a/pandas/core/indexes/base.py b/pandas/core/indexes/base.py
index 7b7b234a06eca..3cfa1f15fa118 100644
--- a/pandas/core/indexes/base.py
+++ b/pandas/core/indexes/base.py
@@ -4604,9 +4604,10 @@ def __contains__(self, key: Any) -> bool:
except (OverflowError, TypeError, ValueError):
return False
- @final
- def __hash__(self):
- raise TypeError(f"unhashable type: {repr(type(self).__name__)}")
+ # https://github.com/python/typeshed/issues/2148#issuecomment-520783318
+ # Incompatible types in assignment (expression has type "None", base class
+ # "object" defined the type as "Callable[[object], int]")
+ __hash__: None # type: ignore[assignment]
@final
def __setitem__(self, key, value):
diff --git a/pandas/core/reshape/pivot.py b/pandas/core/reshape/pivot.py
index 51556fda6da04..7a5c2677307e2 100644
--- a/pandas/core/reshape/pivot.py
+++ b/pandas/core/reshape/pivot.py
@@ -482,7 +482,7 @@ def pivot(
if columns is None:
raise TypeError("pivot() missing 1 required argument: 'columns'")
- columns = com.convert_to_list_like(columns)
+ columns_listlike = com.convert_to_list_like(columns)
if values is None:
if index is not None:
@@ -494,28 +494,27 @@ def pivot(
# error: Unsupported operand types for + ("List[Any]" and "ExtensionArray")
# error: Unsupported left operand type for + ("ExtensionArray")
indexed = data.set_index(
- cols + columns, append=append # type: ignore[operator]
+ cols + columns_listlike, append=append # type: ignore[operator]
)
else:
if index is None:
- index = [Series(data.index, name=data.index.name)]
+ index_list = [Series(data.index, name=data.index.name)]
else:
- index = com.convert_to_list_like(index)
- index = [data[idx] for idx in index]
+ index_list = [data[idx] for idx in com.convert_to_list_like(index)]
- data_columns = [data[col] for col in columns]
- index.extend(data_columns)
- index = MultiIndex.from_arrays(index)
+ data_columns = [data[col] for col in columns_listlike]
+ index_list.extend(data_columns)
+ multiindex = MultiIndex.from_arrays(index_list)
if is_list_like(values) and not isinstance(values, tuple):
# Exclude tuple because it is seen as a single column name
values = cast(Sequence[Hashable], values)
indexed = data._constructor(
- data[values]._values, index=index, columns=values
+ data[values]._values, index=multiindex, columns=values
)
else:
- indexed = data._constructor_sliced(data[values]._values, index=index)
- return indexed.unstack(columns)
+ indexed = data._constructor_sliced(data[values]._values, index=multiindex)
+ return indexed.unstack(columns_listlike)
def crosstab(
diff --git a/pandas/core/series.py b/pandas/core/series.py
index 455d51455c456..c79ecd554bed5 100644
--- a/pandas/core/series.py
+++ b/pandas/core/series.py
@@ -305,7 +305,6 @@ class Series(base.IndexOpsMixin, generic.NDFrame):
hasnans = property( # type: ignore[assignment]
base.IndexOpsMixin.hasnans.func, doc=base.IndexOpsMixin.hasnans.__doc__
)
- __hash__ = generic.NDFrame.__hash__
_mgr: SingleManager
div: Callable[[Series, Any], Series]
rdiv: Callable[[Series, Any], Series]
diff --git a/pandas/tests/frame/test_api.py b/pandas/tests/frame/test_api.py
index 76cfd77d254f2..49649c1487f13 100644
--- a/pandas/tests/frame/test_api.py
+++ b/pandas/tests/frame/test_api.py
@@ -91,7 +91,7 @@ def test_not_hashable(self):
empty_frame = DataFrame()
df = DataFrame([1])
- msg = "'DataFrame' objects are mutable, thus they cannot be hashed"
+ msg = "unhashable type: 'DataFrame'"
with pytest.raises(TypeError, match=msg):
hash(df)
with pytest.raises(TypeError, match=msg):
diff --git a/pandas/tests/series/test_api.py b/pandas/tests/series/test_api.py
index eddf57c1e88f3..b49c209a59a06 100644
--- a/pandas/tests/series/test_api.py
+++ b/pandas/tests/series/test_api.py
@@ -101,7 +101,7 @@ def test_index_tab_completion(self, index):
def test_not_hashable(self):
s_empty = Series(dtype=object)
s = Series([1])
- msg = "'Series' objects are mutable, thus they cannot be hashed"
+ msg = "unhashable type: 'Series'"
with pytest.raises(TypeError, match=msg):
hash(s_empty)
with pytest.raises(TypeError, match=msg):
| - [ ] closes #40013
- [ ] tests added / passed
- [ ] Ensure all linting tests pass, see [here](https://pandas.pydata.org/pandas-docs/dev/development/contributing.html#code-standards) for how to run them
- [ ] whatsnew entry
| https://api.github.com/repos/pandas-dev/pandas/pulls/41283 | 2021-05-03T14:15:01Z | 2021-06-29T12:27:43Z | 2021-06-29T12:27:42Z | 2021-06-29T12:31:30Z |
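The `__hash__: None` annotation in the diff above leans on a standard Python convention: setting `__hash__` to `None` (rather than defining a raising method) makes instances unhashable with the built-in error message and makes `isinstance(obj, collections.abc.Hashable)` return `False`. A minimal sketch with a toy class (not the pandas implementation):

```python
from collections.abc import Hashable

class Frame:
    # Assigning None is what makes isinstance(obj, Hashable) False;
    # a custom raising __hash__ method would not.
    __hash__ = None

obj = Frame()
assert not isinstance(obj, Hashable)

try:
    hash(obj)
except TypeError as err:
    # Built-in message, analogous to "unhashable type: 'Series'".
    assert "unhashable type: 'Frame'" in str(err)
else:
    raise AssertionError("expected TypeError")
```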
[ArrowStringArray] PERF: isin using native pyarrow function if available | diff --git a/asv_bench/benchmarks/algos/isin.py b/asv_bench/benchmarks/algos/isin.py
index a8b8a193dbcfc..44245295beafc 100644
--- a/asv_bench/benchmarks/algos/isin.py
+++ b/asv_bench/benchmarks/algos/isin.py
@@ -9,6 +9,8 @@
date_range,
)
+from ..pandas_vb_common import tm
+
class IsIn:
@@ -22,6 +24,9 @@ class IsIn:
"datetime64[ns]",
"category[object]",
"category[int]",
+ "str",
+ "string",
+ "arrow_string",
]
param_names = ["dtype"]
@@ -57,6 +62,15 @@ def setup(self, dtype):
self.values = np.random.choice(arr, sample_size)
self.series = Series(arr).astype("category")
+ elif dtype in ["str", "string", "arrow_string"]:
+ from pandas.core.arrays.string_arrow import ArrowStringDtype # noqa: F401
+
+ try:
+ self.series = Series(tm.makeStringIndex(N), dtype=dtype)
+ except ImportError:
+ raise NotImplementedError
+ self.values = list(self.series[:2])
+
else:
self.series = Series(np.random.randint(1, 10, N)).astype(dtype)
self.values = [1, 2]
diff --git a/pandas/core/algorithms.py b/pandas/core/algorithms.py
index 2c4477056a112..c52105b77e4dc 100644
--- a/pandas/core/algorithms.py
+++ b/pandas/core/algorithms.py
@@ -468,10 +468,9 @@ def isin(comps: AnyArrayLike, values: AnyArrayLike) -> np.ndarray:
comps = _ensure_arraylike(comps)
comps = extract_array(comps, extract_numpy=True)
- if is_extension_array_dtype(comps.dtype):
- # error: Incompatible return value type (got "Series", expected "ndarray")
- # error: Item "ndarray" of "Union[Any, ndarray]" has no attribute "isin"
- return comps.isin(values) # type: ignore[return-value,union-attr]
+ if not isinstance(comps, np.ndarray):
+ # i.e. Extension Array
+ return comps.isin(values)
elif needs_i8_conversion(comps.dtype):
# Dispatch to DatetimeLikeArrayMixin.isin
diff --git a/pandas/core/arrays/string_arrow.py b/pandas/core/arrays/string_arrow.py
index 72a2ab8a1b80a..01813cef97b8d 100644
--- a/pandas/core/arrays/string_arrow.py
+++ b/pandas/core/arrays/string_arrow.py
@@ -663,6 +663,34 @@ def take(
indices_array[indices_array < 0] += len(self._data)
return type(self)(self._data.take(indices_array))
+ def isin(self, values):
+
+ # pyarrow.compute.is_in added in pyarrow 2.0.0
+ if not hasattr(pc, "is_in"):
+ return super().isin(values)
+
+ value_set = [
+ pa_scalar.as_py()
+ for pa_scalar in [pa.scalar(value, from_pandas=True) for value in values]
+ if pa_scalar.type in (pa.string(), pa.null())
+ ]
+
+ # for an empty value_set pyarrow 3.0.0 segfaults and pyarrow 2.0.0 returns True
+ # for null values, so we short-circuit to return all False array.
+ if not len(value_set):
+ return np.zeros(len(self), dtype=bool)
+
+ kwargs = {}
+ if LooseVersion(pa.__version__) < "3.0.0":
+ # in pyarrow 2.0.0 skip_null is ignored but is a required keyword and raises
+ # with unexpected keyword argument in pyarrow 3.0.0+
+ kwargs["skip_null"] = True
+
+ result = pc.is_in(self._data, value_set=pa.array(value_set), **kwargs)
+ # pyarrow 2.0.0 returned nulls, so we explicitly specify dtype to convert nulls
+ # to False
+ return np.array(result, dtype=np.bool_)
+
def value_counts(self, dropna: bool = True) -> Series:
"""
Return a Series containing counts of each unique value.
diff --git a/pandas/tests/arrays/string_/test_string.py b/pandas/tests/arrays/string_/test_string.py
index e2d8e522abb35..43ba5667d4d93 100644
--- a/pandas/tests/arrays/string_/test_string.py
+++ b/pandas/tests/arrays/string_/test_string.py
@@ -566,3 +566,23 @@ def test_to_numpy_na_value(dtype, nulls_fixture):
result = arr.to_numpy(na_value=na_value)
expected = np.array(["a", na_value, "b"], dtype=object)
tm.assert_numpy_array_equal(result, expected)
+
+
+def test_isin(dtype, request):
+ s = pd.Series(["a", "b", None], dtype=dtype)
+
+ result = s.isin(["a", "c"])
+ expected = pd.Series([True, False, False])
+ tm.assert_series_equal(result, expected)
+
+ result = s.isin(["a", pd.NA])
+ expected = pd.Series([True, False, True])
+ tm.assert_series_equal(result, expected)
+
+ result = s.isin([])
+ expected = pd.Series([False, False, False])
+ tm.assert_series_equal(result, expected)
+
+ result = s.isin(["a", pd.Timestamp.now()])
+ expected = pd.Series([True, False, False])
+ tm.assert_series_equal(result, expected)
| Unlike MaskedArray, this returns a numpy bool array, both to be consistent with the EA interface and StringArray, and because the returned boolean array has no null values, matching the behavior of the latest version of pyarrow.
```
[ 0.00%] ·· Benchmarking existing-py_home_simon_miniconda3_envs_pandas-dev_bin_python
[ 4.17%] ··· algos.isin.IsIn.time_isin ok
[ 4.17%] ··· ================== ==========
dtype
------------------ ----------
int64 295±0μs
uint64 348±0μs
object 337±0μs
Int64 785±0μs
boolean 868±0μs
bool 420±0μs
datetime64[ns] 4.67±0ms
category[object] 9.46±0ms
category[int] 7.30±0ms
str 535±0μs
string 556±0μs
arrow_string 330±0μs
================== ==========
[ 8.33%] ··· algos.isin.IsIn.time_isin_categorical ok
[ 8.33%] ··· ================== ==========
dtype
------------------ ----------
int64 374±0μs
uint64 507±0μs
object 467±0μs
Int64 633±0μs
boolean 702±0μs
bool 458±0μs
datetime64[ns] 3.10±0ms
category[object] 10.2±0ms
category[int] 9.12±0ms
str 598±0μs
string 628±0μs
arrow_string 404±0μs
================== ==========
[ 12.50%] ··· algos.isin.IsIn.time_isin_empty ok
[ 12.50%] ··· ================== ==========
dtype
------------------ ----------
int64 275±0μs
uint64 285±0μs
object 390±0μs
Int64 1.06±0ms
boolean 1.14±0ms
bool 280±0μs
datetime64[ns] 295±0μs
category[object] 4.17±0ms
category[int] 3.49±0ms
str 424±0μs
string 718±0μs
arrow_string 140±0μs
================== ==========
[ 16.67%] ··· algos.isin.IsIn.time_isin_mismatched_dtype ok
[ 16.67%] ··· ================== ==========
dtype
------------------ ----------
int64 221±0μs
uint64 216±0μs
object 337±0μs
Int64 337±0μs
boolean 356±0μs
bool 323±0μs
datetime64[ns] 348±0μs
category[object] 4.86±0ms
category[int] 3.47±0ms
str 586±0μs
string 525±0μs
arrow_string 224±0μs
================== ==========
``` | https://api.github.com/repos/pandas-dev/pandas/pulls/41281 | 2021-05-03T12:41:35Z | 2021-05-05T12:50:49Z | 2021-05-05T12:50:49Z | 2021-05-05T12:55:43Z |
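The empty-`value_set` short-circuit in the diff above matters because `isin` against an empty set must be all-`False`, including for null entries. A minimal pure-Python sketch of those semantics, with `None` standing in for the null marker (not the pyarrow-backed implementation):

```python
def isin(values, value_set):
    # An empty value_set short-circuits to all-False -- mirroring the
    # guard the PR above adds for ArrowStringArray.isin, where older
    # pyarrow either segfaulted or returned True for nulls.
    if not value_set:
        return [False] * len(values)
    allowed = {v for v in value_set if v is not None}
    match_null = any(v is None for v in value_set)
    return [
        (v is None and match_null) or (v is not None and v in allowed)
        for v in values
    ]

data = ["a", "b", None]
assert isin(data, ["a", "c"]) == [True, False, False]
assert isin(data, ["a", None]) == [True, False, True]
assert isin(data, []) == [False, False, False]
```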
CI: Added unselected directories to doctest CI | diff --git a/ci/code_checks.sh b/ci/code_checks.sh
index c178e9f7cecbe..149e10b48933d 100755
--- a/ci/code_checks.sh
+++ b/ci/code_checks.sh
@@ -123,6 +123,11 @@ if [[ -z "$CHECK" || "$CHECK" == "doctests" ]]; then
MSG='Doctests for directories' ; echo $MSG
pytest -q --doctest-modules \
+ pandas/_libs/ \
+ pandas/api/ \
+ pandas/arrays/ \
+ pandas/compat/ \
+ pandas/core/array_algos/ \
pandas/core/arrays/ \
pandas/core/computation/ \
pandas/core/dtypes/ \
@@ -133,6 +138,10 @@ if [[ -z "$CHECK" || "$CHECK" == "doctests" ]]; then
pandas/core/strings/ \
pandas/core/tools/ \
pandas/core/window/ \
+ pandas/errors/ \
+ pandas/io/clipboard/ \
+ pandas/io/parsers/ \
+ pandas/io/sas/ \
pandas/tseries/
RET=$(($RET + $?)) ; echo $MSG "DONE"
| Some of those do have docstrings with examples to be validated, and some do not.
This only makes sure that future files, or changes to files in those directories, will be checked.
---
- [x] tests added / passed
- [x] Ensure all linting tests pass, see [here](https://pandas.pydata.org/pandas-docs/dev/development/contributing.html#code-standards) for how to run them
| https://api.github.com/repos/pandas-dev/pandas/pulls/41280 | 2021-05-03T12:23:55Z | 2021-05-03T13:50:14Z | 2021-05-03T13:50:14Z | 2021-05-03T13:53:19Z |
CI: Fix warning from SSL transport (again) | diff --git a/pandas/_testing/_warnings.py b/pandas/_testing/_warnings.py
index b1f070d1a1ccc..5153118e9b142 100644
--- a/pandas/_testing/_warnings.py
+++ b/pandas/_testing/_warnings.py
@@ -143,7 +143,7 @@ def _assert_caught_no_extra_warnings(
for actual_warning in caught_warnings:
if _is_unexpected_warning(actual_warning, expected_warning):
unclosed = "unclosed transport <asyncio.sslproto._SSLProtocolTransport"
- if isinstance(actual_warning, ResourceWarning) and unclosed in str(
+ if actual_warning.category == ResourceWarning and unclosed in str(
actual_warning.message
):
# FIXME: kludge because pytest.filterwarnings does not
| Turns out that the issue was the comparison of the type. Loop has `WarningMessage`, but the type of warning is in `actual_warning.category` | https://api.github.com/repos/pandas-dev/pandas/pulls/41279 | 2021-05-03T12:19:53Z | 2021-05-03T13:49:46Z | 2021-05-03T13:49:46Z | 2021-05-03T13:53:58Z |
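The bug fixed above is easy to reproduce with the stdlib: `warnings.catch_warnings(record=True)` records `WarningMessage` wrapper objects, so `isinstance(w, ResourceWarning)` is always `False`; the warning class lives in `w.category`. A minimal sketch:

```python
import warnings

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    warnings.warn("unclosed transport", ResourceWarning)

w = caught[0]
# The recorded object is a warnings.WarningMessage wrapper, not the
# warning itself, so the isinstance check in the original code never
# matched; comparing w.category does.
assert not isinstance(w, ResourceWarning)
assert w.category is ResourceWarning
assert "unclosed transport" in str(w.message)
```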
TST: fix casting extension tests for external users | diff --git a/pandas/tests/extension/base/casting.py b/pandas/tests/extension/base/casting.py
index 47f4f7585243d..99a5666926e10 100644
--- a/pandas/tests/extension/base/casting.py
+++ b/pandas/tests/extension/base/casting.py
@@ -1,6 +1,8 @@
import numpy as np
import pytest
+import pandas.util._test_decorators as td
+
import pandas as pd
from pandas.core.internals import ObjectBlock
from pandas.tests.extension.base.base import BaseExtensionTests
@@ -43,8 +45,19 @@ def test_astype_str(self, data):
expected = pd.Series([str(x) for x in data[:5]], dtype=str)
self.assert_series_equal(result, expected)
+ @pytest.mark.parametrize(
+ "nullable_string_dtype",
+ [
+ "string",
+ pytest.param(
+ "arrow_string", marks=td.skip_if_no("pyarrow", min_version="1.0.0")
+ ),
+ ],
+ )
def test_astype_string(self, data, nullable_string_dtype):
# GH-33465
+ from pandas.core.arrays.string_arrow import ArrowStringDtype # noqa: F401
+
result = pd.Series(data[:5]).astype(nullable_string_dtype)
expected = pd.Series([str(x) for x in data[:5]], dtype=nullable_string_dtype)
self.assert_series_equal(result, expected)
| cc @simonjayhawkins fixtures defined by pandas are not known when inheriting the tests as external package. There are a bunch of fixtures that external packages are expected to define in `pandas/tests/extension/conftest.py`, but I think for the string dtype here this is not worth it to expect downstream users to define it themselves (certainly given that it is still going to change) | https://api.github.com/repos/pandas-dev/pandas/pulls/41278 | 2021-05-03T09:48:43Z | 2021-05-03T14:28:06Z | 2021-05-03T14:28:06Z | 2021-05-03T14:40:34Z |
Allow to union `MultiIndex` with empty `RangeIndex` | diff --git a/pandas/core/indexes/base.py b/pandas/core/indexes/base.py
index 9b3f2d191831d..ee2ae291cd288 100644
--- a/pandas/core/indexes/base.py
+++ b/pandas/core/indexes/base.py
@@ -2923,8 +2923,10 @@ def union(self, other, sort=None):
other, result_name = self._convert_can_do_setop(other)
if not is_dtype_equal(self.dtype, other.dtype):
- if isinstance(self, ABCMultiIndex) and not is_object_dtype(
- unpack_nested_dtype(other)
+ if (
+ isinstance(self, ABCMultiIndex)
+ and not is_object_dtype(unpack_nested_dtype(other))
+ and len(other) > 0
):
raise NotImplementedError(
"Can only union MultiIndex with MultiIndex or Index of tuples, "
diff --git a/pandas/core/indexes/multi.py b/pandas/core/indexes/multi.py
index 794f13bbfb6b1..36c8e818f1a24 100644
--- a/pandas/core/indexes/multi.py
+++ b/pandas/core/indexes/multi.py
@@ -3597,7 +3597,7 @@ def _get_reconciled_name_object(self, other) -> MultiIndex:
def _maybe_match_names(self, other):
"""
Try to find common names to attach to the result of an operation between
- a and b. Return a consensus list of names if they match at least partly
+ a and b. Return a consensus list of names if they match at least partly
or list of None if they have completely different names.
"""
if len(self.names) != len(other.names):
diff --git a/pandas/tests/indexes/multi/test_setops.py b/pandas/tests/indexes/multi/test_setops.py
index 4a170d9cd161f..0b59e832ce3a8 100644
--- a/pandas/tests/indexes/multi/test_setops.py
+++ b/pandas/tests/indexes/multi/test_setops.py
@@ -414,6 +414,18 @@ def test_union_empty_self_different_names():
tm.assert_index_equal(result, expected)
+def test_union_multiindex_empty_rangeindex():
+ # GH#41234
+ mi = MultiIndex.from_arrays([[1, 2], [3, 4]], names=["a", "b"])
+ ri = pd.RangeIndex(0)
+
+ result_left = mi.union(ri)
+ tm.assert_index_equal(mi, result_left, check_names=False)
+
+ result_right = ri.union(mi)
+ tm.assert_index_equal(mi, result_right, check_names=False)
+
+
@pytest.mark.parametrize(
"method", ["union", "intersection", "difference", "symmetric_difference"]
)
diff --git a/pandas/tests/reshape/concat/test_concat.py b/pandas/tests/reshape/concat/test_concat.py
index 2ed38670e88a6..96b88dc61cfed 100644
--- a/pandas/tests/reshape/concat/test_concat.py
+++ b/pandas/tests/reshape/concat/test_concat.py
@@ -627,3 +627,14 @@ def test_concat_null_object_with_dti():
index=exp_index,
)
tm.assert_frame_equal(result, expected)
+
+
+def test_concat_multiindex_with_empty_rangeindex():
+ # GH#41234
+ mi = MultiIndex.from_tuples([("B", 1), ("C", 1)])
+ df1 = DataFrame([[1, 2]], columns=mi)
+ df2 = DataFrame(index=[1], columns=pd.RangeIndex(0))
+
+ result = concat([df1, df2])
+ expected = DataFrame([[1, 2], [np.nan, np.nan]], columns=mi)
+ tm.assert_frame_equal(result, expected)
| - [x] closes #41234
- [x] tests added / passed
- [x] Ensure all linting tests pass, see [here](https://pandas.pydata.org/pandas-docs/dev/development/contributing.html#code-standards) for how to run them
- [x] whatsnew entry
See also https://github.com/dask/dask/issues/7610 and https://github.com/JDASoftwareGroup/kartothek/issues/464. The regression seems to have been introduced in pandas-dev/pandas#38671. | https://api.github.com/repos/pandas-dev/pandas/pulls/41275 | 2021-05-03T07:03:01Z | 2021-05-04T12:53:48Z | 2021-05-04T12:53:47Z | 2021-06-02T14:38:08Z |
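A minimal reproduction of the behavior this patch enables — unioning a `MultiIndex` with an empty `RangeIndex` returns the `MultiIndex` instead of raising `NotImplementedError` (a sketch, assuming a pandas build that includes the fix):

```python
import pandas as pd

mi = pd.MultiIndex.from_arrays([[1, 2], [3, 4]], names=["a", "b"])
ri = pd.RangeIndex(0)

# With the length check added in base.py, an empty non-object index
# no longer triggers the "Can only union MultiIndex with ..." error.
result = mi.union(ri)
print(result.equals(mi))  # True (names may differ; values are identical)
```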
REF: clearer names in libreduction | diff --git a/pandas/_libs/reduction.pyx b/pandas/_libs/reduction.pyx
index d0f85b75a629e..09999b6970bca 100644
--- a/pandas/_libs/reduction.pyx
+++ b/pandas/_libs/reduction.pyx
@@ -63,17 +63,17 @@ cdef class _BaseGrouper:
)
return cached_index, cached_series
- cdef inline _update_cached_objs(self, object cached_typ, object cached_ityp,
+ cdef inline _update_cached_objs(self, object cached_series, object cached_index,
Slider islider, Slider vslider):
# See the comment in indexes/base.py about _index_data.
# We need this for EA-backed indexes that have a reference
# to a 1-d ndarray like datetime / timedelta / period.
- cached_ityp._engine.clear_mapping()
- cached_ityp._cache.clear() # e.g. inferred_freq must go
- cached_typ._mgr.set_values(vslider.buf)
+ cached_index._engine.clear_mapping()
+ cached_index._cache.clear() # e.g. inferred_freq must go
+ cached_series._mgr.set_values(vslider.buf)
cdef inline object _apply_to_group(self,
- object cached_typ, object cached_ityp,
+ object cached_series, object cached_index,
bint initialized):
"""
Call self.f on our new group, then update to the next group.
@@ -83,7 +83,7 @@ cdef class _BaseGrouper:
# NB: we assume that _update_cached_objs has already cleared cleared
# the cache and engine mapping
- res = self.f(cached_typ)
+ res = self.f(cached_series)
res = extract_result(res)
if not initialized:
# On the first pass, we check the output shape to see
@@ -140,7 +140,7 @@ cdef class SeriesBinGrouper(_BaseGrouper):
object res
bint initialized = 0
Slider vslider, islider
- object cached_typ = None, cached_ityp = None
+ object cached_series = None, cached_index = None
counts = np.zeros(self.ngroups, dtype=np.int64)
@@ -160,7 +160,9 @@ cdef class SeriesBinGrouper(_BaseGrouper):
result = np.empty(self.ngroups, dtype='O')
- cached_ityp, cached_typ = self._init_dummy_series_and_index(islider, vslider)
+ cached_index, cached_series = self._init_dummy_series_and_index(
+ islider, vslider
+ )
start = 0
try:
@@ -172,9 +174,9 @@ cdef class SeriesBinGrouper(_BaseGrouper):
vslider.move(start, end)
self._update_cached_objs(
- cached_typ, cached_ityp, islider, vslider)
+ cached_series, cached_index, islider, vslider)
- res, initialized = self._apply_to_group(cached_typ, cached_ityp,
+ res, initialized = self._apply_to_group(cached_series, cached_index,
initialized)
start += group_size
@@ -236,7 +238,7 @@ cdef class SeriesGrouper(_BaseGrouper):
object res
bint initialized = 0
Slider vslider, islider
- object cached_typ = None, cached_ityp = None
+ object cached_series = None, cached_index = None
labels = self.labels
counts = np.zeros(self.ngroups, dtype=np.int64)
@@ -248,7 +250,9 @@ cdef class SeriesGrouper(_BaseGrouper):
result = np.empty(self.ngroups, dtype='O')
- cached_ityp, cached_typ = self._init_dummy_series_and_index(islider, vslider)
+ cached_index, cached_series = self._init_dummy_series_and_index(
+ islider, vslider
+ )
start = 0
try:
@@ -268,9 +272,9 @@ cdef class SeriesGrouper(_BaseGrouper):
vslider.move(start, end)
self._update_cached_objs(
- cached_typ, cached_ityp, islider, vslider)
+ cached_series, cached_index, islider, vslider)
- res, initialized = self._apply_to_group(cached_typ, cached_ityp,
+ res, initialized = self._apply_to_group(cached_series, cached_index,
initialized)
start += group_size
@@ -293,20 +297,20 @@ cdef class SeriesGrouper(_BaseGrouper):
return result, counts
-cpdef inline extract_result(object res, bint squeeze=True):
+cpdef inline extract_result(object res):
""" extract the result object, it might be a 0-dim ndarray
or a len-1 0-dim, or a scalar """
if hasattr(res, "_values"):
# Preserve EA
res = res._values
- if squeeze and res.ndim == 1 and len(res) == 1:
+ if res.ndim == 1 and len(res) == 1:
res = res[0]
if hasattr(res, 'values') and is_array(res.values):
res = res.values
if is_array(res):
if res.ndim == 0:
res = res.item()
- elif squeeze and res.ndim == 1 and len(res) == 1:
+ elif res.ndim == 1 and len(res) == 1:
res = res[0]
return res
| - [ ] closes #xxxx
- [ ] tests added / passed
- [ ] Ensure all linting tests pass, see [here](https://pandas.pydata.org/pandas-docs/dev/development/contributing.html#code-standards) for how to run them
- [ ] whatsnew entry
| https://api.github.com/repos/pandas-dev/pandas/pulls/41274 | 2021-05-03T01:57:34Z | 2021-05-03T15:15:36Z | 2021-05-03T15:15:36Z | 2021-05-03T16:50:08Z |
REF: consolidate casting in groupby agg_series | diff --git a/pandas/_libs/reduction.pyx b/pandas/_libs/reduction.pyx
index d0f85b75a629e..8dcd01b14535f 100644
--- a/pandas/_libs/reduction.pyx
+++ b/pandas/_libs/reduction.pyx
@@ -21,10 +21,7 @@ from pandas._libs.util cimport (
set_array_not_contiguous,
)
-from pandas._libs.lib import (
- is_scalar,
- maybe_convert_objects,
-)
+from pandas._libs.lib import is_scalar
cpdef check_result_array(object obj):
@@ -185,7 +182,6 @@ cdef class SeriesBinGrouper(_BaseGrouper):
islider.reset()
vslider.reset()
- result = maybe_convert_objects(result)
return result, counts
@@ -288,8 +284,6 @@ cdef class SeriesGrouper(_BaseGrouper):
# have result initialized by this point.
assert initialized, "`result` has not been initialized."
- result = maybe_convert_objects(result)
-
return result, counts
diff --git a/pandas/core/groupby/ops.py b/pandas/core/groupby/ops.py
index 9edbeb412026d..30be9968ad91f 100644
--- a/pandas/core/groupby/ops.py
+++ b/pandas/core/groupby/ops.py
@@ -970,22 +970,33 @@ def agg_series(self, obj: Series, func: F) -> tuple[ArrayLike, np.ndarray]:
# Caller is responsible for checking ngroups != 0
assert self.ngroups != 0
+ cast_back = True
if len(obj) == 0:
# SeriesGrouper would raise if we were to call _aggregate_series_fast
- return self._aggregate_series_pure_python(obj, func)
+ result, counts = self._aggregate_series_pure_python(obj, func)
elif is_extension_array_dtype(obj.dtype):
# _aggregate_series_fast would raise TypeError when
# calling libreduction.Slider
# In the datetime64tz case it would incorrectly cast to tz-naive
# TODO: can we get a performant workaround for EAs backed by ndarray?
- return self._aggregate_series_pure_python(obj, func)
+ result, counts = self._aggregate_series_pure_python(obj, func)
elif obj.index._has_complex_internals:
# Preempt TypeError in _aggregate_series_fast
- return self._aggregate_series_pure_python(obj, func)
+ result, counts = self._aggregate_series_pure_python(obj, func)
- return self._aggregate_series_fast(obj, func)
+ else:
+ result, counts = self._aggregate_series_fast(obj, func)
+ cast_back = False
+
+ npvalues = lib.maybe_convert_objects(result, try_float=False)
+ if cast_back:
+ # TODO: Is there a documented reason why we dont always cast_back?
+ out = maybe_cast_pointwise_result(npvalues, obj.dtype, numeric_only=True)
+ else:
+ out = npvalues
+ return out, counts
def _aggregate_series_fast(
self, obj: Series, func: F
@@ -1033,10 +1044,7 @@ def _aggregate_series_pure_python(self, obj: Series, func: F):
counts[i] = group.shape[0]
result[i] = res
- npvalues = lib.maybe_convert_objects(result, try_float=False)
- out = maybe_cast_pointwise_result(npvalues, obj.dtype, numeric_only=True)
-
- return out, counts
+ return result, counts
class BinGrouper(BaseGrouper):
diff --git a/pandas/core/series.py b/pandas/core/series.py
index a07a685c2ffde..240f678960969 100644
--- a/pandas/core/series.py
+++ b/pandas/core/series.py
@@ -3086,7 +3086,7 @@ def combine(self, other, func, fill_value=None) -> Series:
new_values[:] = [func(lv, other) for lv in self._values]
new_name = self.name
- # try_float=False is to match _aggregate_series_pure_python
+ # try_float=False is to match agg_series
npvalues = lib.maybe_convert_objects(new_values, try_float=False)
res_values = maybe_cast_pointwise_result(npvalues, self.dtype, same_dtype=False)
return self._constructor(res_values, index=new_index, name=new_name)
diff --git a/pandas/tests/groupby/test_bin_groupby.py b/pandas/tests/groupby/test_bin_groupby.py
index 13fddad30eeba..aa126ae801f1e 100644
--- a/pandas/tests/groupby/test_bin_groupby.py
+++ b/pandas/tests/groupby/test_bin_groupby.py
@@ -20,7 +20,7 @@ def test_series_grouper():
grouper = libreduction.SeriesGrouper(obj, np.mean, labels, 2)
result, counts = grouper.get_result()
- expected = np.array([obj[3:6].mean(), obj[6:].mean()])
+ expected = np.array([obj[3:6].mean(), obj[6:].mean()], dtype=object)
tm.assert_almost_equal(result, expected)
exp_counts = np.array([3, 4], dtype=np.int64)
@@ -36,7 +36,7 @@ def test_series_grouper_result_length_difference():
grouper = libreduction.SeriesGrouper(obj, lambda x: all(x > 0), labels, 2)
result, counts = grouper.get_result()
- expected = np.array([all(obj[3:6] > 0), all(obj[6:] > 0)])
+ expected = np.array([all(obj[3:6] > 0), all(obj[6:] > 0)], dtype=object)
tm.assert_equal(result, expected)
exp_counts = np.array([3, 4], dtype=np.int64)
@@ -61,7 +61,7 @@ def test_series_bin_grouper():
grouper = libreduction.SeriesBinGrouper(obj, np.mean, bins)
result, counts = grouper.get_result()
- expected = np.array([obj[:3].mean(), obj[3:6].mean(), obj[6:].mean()])
+ expected = np.array([obj[:3].mean(), obj[3:6].mean(), obj[6:].mean()], dtype=object)
tm.assert_almost_equal(result, expected)
exp_counts = np.array([3, 3, 4], dtype=np.int64)
| - [ ] closes #xxxx
- [ ] tests added / passed
- [ ] Ensure all linting tests pass, see [here](https://pandas.pydata.org/pandas-docs/dev/development/contributing.html#code-standards) for how to run them
- [ ] whatsnew entry
| https://api.github.com/repos/pandas-dev/pandas/pulls/41273 | 2021-05-02T21:49:16Z | 2021-05-04T12:52:35Z | 2021-05-04T12:52:35Z | 2021-05-04T14:04:01Z |
REF: simplify _cython_agg_general | diff --git a/pandas/core/groupby/generic.py b/pandas/core/groupby/generic.py
index b9f1ca0710872..7edd458ced790 100644
--- a/pandas/core/groupby/generic.py
+++ b/pandas/core/groupby/generic.py
@@ -345,47 +345,48 @@ def _aggregate_multiple_funcs(self, arg):
def _cython_agg_general(
self, how: str, alt=None, numeric_only: bool = True, min_count: int = -1
):
- output: dict[base.OutputKey, ArrayLike] = {}
- # Ideally we would be able to enumerate self._iterate_slices and use
- # the index from enumeration as the key of output, but ohlc in particular
- # returns a (n x 4) array. Output requires 1D ndarrays as values, so we
- # need to slice that up into 1D arrays
- idx = 0
- for obj in self._iterate_slices():
- name = obj.name
- is_numeric = is_numeric_dtype(obj.dtype)
- if numeric_only and not is_numeric:
- continue
-
- objvals = obj._values
- if isinstance(objvals, Categorical):
- if self.grouper.ngroups > 0:
- # without special-casing, we would raise, then in fallback
- # would eventually call agg_series but without re-casting
- # to Categorical
- # equiv: res_values, _ = self.grouper.agg_series(obj, alt)
- res_values, _ = self.grouper._aggregate_series_pure_python(obj, alt)
- else:
- # equiv: res_values = self._python_agg_general(alt)
- res_values = self._python_apply_general(alt, self._selected_obj)
+ obj = self._selected_obj
+ objvals = obj._values
- result = type(objvals)._from_sequence(res_values, dtype=objvals.dtype)
+ if numeric_only and not is_numeric_dtype(obj.dtype):
+ raise DataError("No numeric types to aggregate")
- else:
+ # This is overkill because it is only called once, but is here to
+ # mirror the array_func used in DataFrameGroupBy._cython_agg_general
+ def array_func(values: ArrayLike) -> ArrayLike:
+ try:
result = self.grouper._cython_operation(
- "aggregate", obj._values, how, axis=0, min_count=min_count
+ "aggregate", values, how, axis=0, min_count=min_count
)
+ except NotImplementedError:
+ ser = Series(values) # equiv 'obj' from outer frame
+ if self.grouper.ngroups > 0:
+ res_values, _ = self.grouper.agg_series(ser, alt)
+ else:
+ # equiv: res_values = self._python_agg_general(alt)
+ # error: Incompatible types in assignment (expression has
+ # type "Union[DataFrame, Series]", variable has type
+ # "Union[ExtensionArray, ndarray]")
+ res_values = self._python_apply_general( # type: ignore[assignment]
+ alt, ser
+ )
- assert result.ndim == 1
- key = base.OutputKey(label=name, position=idx)
- output[key] = result
- idx += 1
+ if isinstance(values, Categorical):
+ # Because we only get here with known dtype-preserving
+ # reductions, we cast back to Categorical.
+ # TODO: if we ever get "rank" working, exclude it here.
+ result = type(values)._from_sequence(res_values, dtype=values.dtype)
+ else:
+ result = res_values
+ return result
- if not output:
- raise DataError("No numeric types to aggregate")
+ result = array_func(objvals)
- return self._wrap_aggregated_output(output)
+ ser = self.obj._constructor(
+ result, index=self.grouper.result_index, name=obj.name
+ )
+ return self._reindex_output(ser)
def _wrap_aggregated_output(
self,
diff --git a/pandas/core/groupby/groupby.py b/pandas/core/groupby/groupby.py
index 7fe9d7cb49eb5..cdd2cef1f2e59 100644
--- a/pandas/core/groupby/groupby.py
+++ b/pandas/core/groupby/groupby.py
@@ -1281,15 +1281,6 @@ def _agg_general(
)
except DataError:
pass
- except NotImplementedError as err:
- if "function is not implemented for this dtype" in str(
- err
- ) or "category dtype not supported" in str(err):
- # raised in _get_cython_function, in some cases can
- # be trimmed by implementing cython funcs for more dtypes
- pass
- else:
- raise
# apply a non-cython aggregation
if result is None:
| - make the exception-handling in SeriesGroupBy._cython_agg_general match that in DataFrameGroupBy._cython_agg_general
- remove an unnecessary for loop
- define array_func to match the pattern in DataFrameGroupBy._cython_agg_general in the hopes that we can share these methods before long | https://api.github.com/repos/pandas-dev/pandas/pulls/41271 | 2021-05-02T19:58:37Z | 2021-05-02T23:24:23Z | 2021-05-02T23:24:23Z | 2021-05-03T01:24:52Z |
CI: fix test for SSL warning | diff --git a/pandas/_testing/_warnings.py b/pandas/_testing/_warnings.py
index 391226b622a01..b1f070d1a1ccc 100644
--- a/pandas/_testing/_warnings.py
+++ b/pandas/_testing/_warnings.py
@@ -144,7 +144,7 @@ def _assert_caught_no_extra_warnings(
if _is_unexpected_warning(actual_warning, expected_warning):
unclosed = "unclosed transport <asyncio.sslproto._SSLProtocolTransport"
if isinstance(actual_warning, ResourceWarning) and unclosed in str(
- actual_warning
+ actual_warning.message
):
# FIXME: kludge because pytest.filterwarnings does not
# suppress these, xref GH#38630
| follow up to #41083 to change test | https://api.github.com/repos/pandas-dev/pandas/pulls/41270 | 2021-05-02T19:37:45Z | 2021-05-02T23:08:51Z | 2021-05-02T23:08:51Z | 2021-05-02T23:52:30Z |
ENH: make `Styler` compatible with non-unique indexes | diff --git a/doc/source/whatsnew/v1.3.0.rst b/doc/source/whatsnew/v1.3.0.rst
index a81dda4e7dfdd..fec4422bac37e 100644
--- a/doc/source/whatsnew/v1.3.0.rst
+++ b/doc/source/whatsnew/v1.3.0.rst
@@ -138,6 +138,8 @@ The :meth:`.Styler.format` has had upgrades to easily format missing data,
precision, and perform HTML escaping (:issue:`40437` :issue:`40134`). There have been numerous other bug fixes to
properly format HTML and eliminate some inconsistencies (:issue:`39942` :issue:`40356` :issue:`39807` :issue:`39889` :issue:`39627`)
+:class:`.Styler` has also been compatible with non-unique index or columns, at least for as many features as are fully compatible, others made only partially compatible (:issue:`41269`).
+
Documentation has also seen major revisions in light of new features (:issue:`39720` :issue:`39317` :issue:`40493`)
.. _whatsnew_130.dataframe_honors_copy_with_dict:
diff --git a/pandas/io/formats/style.py b/pandas/io/formats/style.py
index 02e1369a05b93..8fc2825ffcfc5 100644
--- a/pandas/io/formats/style.py
+++ b/pandas/io/formats/style.py
@@ -322,6 +322,10 @@ def set_tooltips(
raise NotImplementedError(
"Tooltips can only render with 'cell_ids' is True."
)
+ if not ttips.index.is_unique or not ttips.columns.is_unique:
+ raise KeyError(
+ "Tooltips render only if `ttips` has unique index and columns."
+ )
if self.tooltips is None: # create a default instance if necessary
self.tooltips = Tooltips()
self.tooltips.tt_data = ttips
@@ -442,6 +446,10 @@ def set_td_classes(self, classes: DataFrame) -> Styler:
' </tbody>'
'</table>'
"""
+ if not classes.index.is_unique or not classes.columns.is_unique:
+ raise KeyError(
+ "Classes render only if `classes` has unique index and columns."
+ )
classes = classes.reindex_like(self.data)
for r, row_tup in enumerate(classes.itertuples()):
@@ -464,6 +472,12 @@ def _update_ctx(self, attrs: DataFrame) -> None:
Whitespace shouldn't matter and the final trailing ';' shouldn't
matter.
"""
+ if not self.index.is_unique or not self.columns.is_unique:
+ raise KeyError(
+ "`Styler.apply` and `.applymap` are not compatible "
+ "with non-unique index or columns."
+ )
+
for cn in attrs.columns:
for rn, c in attrs[[cn]].itertuples():
if not c:
@@ -986,10 +1000,11 @@ def set_table_styles(
table_styles = [
{
- "selector": str(s["selector"]) + idf + str(obj.get_loc(key)),
+ "selector": str(s["selector"]) + idf + str(idx),
"props": maybe_convert_css_to_tuples(s["props"]),
}
for key, styles in table_styles.items()
+ for idx in obj.get_indexer_for([key])
for s in styles
]
else:
diff --git a/pandas/io/formats/style_render.py b/pandas/io/formats/style_render.py
index 4aaf1eecde5e8..bd768f4f0a1d4 100644
--- a/pandas/io/formats/style_render.py
+++ b/pandas/io/formats/style_render.py
@@ -82,8 +82,6 @@ def __init__(
data = data.to_frame()
if not isinstance(data, DataFrame):
raise TypeError("``data`` must be a Series or DataFrame")
- if not data.index.is_unique or not data.columns.is_unique:
- raise ValueError("style is not supported for non-unique indices.")
self.data: DataFrame = data
self.index: Index = data.index
self.columns: Index = data.columns
@@ -481,23 +479,22 @@ def format(
subset = non_reducing_slice(subset)
data = self.data.loc[subset]
- columns = data.columns
if not isinstance(formatter, dict):
- formatter = {col: formatter for col in columns}
+ formatter = {col: formatter for col in data.columns}
- for col in columns:
+ cis = self.columns.get_indexer_for(data.columns)
+ ris = self.index.get_indexer_for(data.index)
+ for ci in cis:
format_func = _maybe_wrap_formatter(
- formatter.get(col),
+ formatter.get(self.columns[ci]),
na_rep=na_rep,
precision=precision,
decimal=decimal,
thousands=thousands,
escape=escape,
)
-
- for row, value in data[[col]].itertuples():
- i, j = self.index.get_loc(row), self.columns.get_loc(col)
- self._display_funcs[(i, j)] = format_func
+ for ri in ris:
+ self._display_funcs[(ri, ci)] = format_func
return self
diff --git a/pandas/tests/io/formats/style/test_non_unique.py b/pandas/tests/io/formats/style/test_non_unique.py
new file mode 100644
index 0000000000000..2dc7433009368
--- /dev/null
+++ b/pandas/tests/io/formats/style/test_non_unique.py
@@ -0,0 +1,124 @@
+import pytest
+
+from pandas import (
+ DataFrame,
+ IndexSlice,
+)
+
+pytest.importorskip("jinja2")
+
+from pandas.io.formats.style import Styler
+
+
+@pytest.fixture
+def df():
+ return DataFrame(
+ [[1, 2, 3], [4, 5, 6], [7, 8, 9]],
+ index=["i", "j", "j"],
+ columns=["c", "d", "d"],
+ dtype=float,
+ )
+
+
+@pytest.fixture
+def styler(df):
+ return Styler(df, uuid_len=0)
+
+
+def test_format_non_unique(df):
+ # GH 41269
+
+ # test dict
+ html = df.style.format({"d": "{:.1f}"}).render()
+ for val in ["1.000000<", "4.000000<", "7.000000<"]:
+ assert val in html
+ for val in ["2.0<", "3.0<", "5.0<", "6.0<", "8.0<", "9.0<"]:
+ assert val in html
+
+ # test subset
+ html = df.style.format(precision=1, subset=IndexSlice["j", "d"]).render()
+ for val in ["1.000000<", "4.000000<", "7.000000<", "2.000000<", "3.000000<"]:
+ assert val in html
+ for val in ["5.0<", "6.0<", "8.0<", "9.0<"]:
+ assert val in html
+
+
+@pytest.mark.parametrize("func", ["apply", "applymap"])
+def test_apply_applymap_non_unique_raises(df, func):
+ # GH 41269
+ if func == "apply":
+ op = lambda s: ["color: red;"] * len(s)
+ else:
+ op = lambda v: "color: red;"
+
+ with pytest.raises(KeyError, match="`Styler.apply` and `.applymap` are not"):
+ getattr(df.style, func)(op)._compute()
+
+
+def test_table_styles_dict_non_unique_index(styler):
+ styles = styler.set_table_styles(
+ {"j": [{"selector": "td", "props": "a: v;"}]}, axis=1
+ ).table_styles
+ assert styles == [
+ {"selector": "td.row1", "props": [("a", "v")]},
+ {"selector": "td.row2", "props": [("a", "v")]},
+ ]
+
+
+def test_table_styles_dict_non_unique_columns(styler):
+ styles = styler.set_table_styles(
+ {"d": [{"selector": "td", "props": "a: v;"}]}, axis=0
+ ).table_styles
+ assert styles == [
+ {"selector": "td.col1", "props": [("a", "v")]},
+ {"selector": "td.col2", "props": [("a", "v")]},
+ ]
+
+
+def test_tooltips_non_unique_raises(styler):
+ # ttips has unique keys
+ ttips = DataFrame([["1", "2"], ["3", "4"]], columns=["c", "d"], index=["a", "b"])
+ styler.set_tooltips(ttips=ttips) # OK
+
+ # ttips has non-unique columns
+ ttips = DataFrame([["1", "2"], ["3", "4"]], columns=["c", "c"], index=["a", "b"])
+ with pytest.raises(KeyError, match="Tooltips render only if `ttips` has unique"):
+ styler.set_tooltips(ttips=ttips)
+
+ # ttips has non-unique index
+ ttips = DataFrame([["1", "2"], ["3", "4"]], columns=["c", "d"], index=["a", "a"])
+ with pytest.raises(KeyError, match="Tooltips render only if `ttips` has unique"):
+ styler.set_tooltips(ttips=ttips)
+
+
+def test_set_td_classes_non_unique_raises(styler):
+ # classes has unique keys
+ classes = DataFrame([["1", "2"], ["3", "4"]], columns=["c", "d"], index=["a", "b"])
+ styler.set_td_classes(classes=classes) # OK
+
+ # classes has non-unique columns
+ classes = DataFrame([["1", "2"], ["3", "4"]], columns=["c", "c"], index=["a", "b"])
+ with pytest.raises(KeyError, match="Classes render only if `classes` has unique"):
+ styler.set_td_classes(classes=classes)
+
+ # classes has non-unique index
+ classes = DataFrame([["1", "2"], ["3", "4"]], columns=["c", "d"], index=["a", "a"])
+ with pytest.raises(KeyError, match="Classes render only if `classes` has unique"):
+ styler.set_td_classes(classes=classes)
+
+
+def test_hide_columns_non_unique(styler):
+ ctx = styler.hide_columns(["d"])._translate()
+
+ assert ctx["head"][0][1]["display_value"] == "c"
+ assert ctx["head"][0][1]["is_visible"] is True
+
+ assert ctx["head"][0][2]["display_value"] == "d"
+ assert ctx["head"][0][2]["is_visible"] is False
+
+ assert ctx["head"][0][3]["display_value"] == "d"
+ assert ctx["head"][0][3]["is_visible"] is False
+
+ assert ctx["body"][0][1]["is_visible"] is True
+ assert ctx["body"][0][2]["is_visible"] is False
+ assert ctx["body"][0][3]["is_visible"] is False
diff --git a/pandas/tests/io/formats/style/test_style.py b/pandas/tests/io/formats/style/test_style.py
index 3b614be770bc5..855def916c2cd 100644
--- a/pandas/tests/io/formats/style/test_style.py
+++ b/pandas/tests/io/formats/style/test_style.py
@@ -671,15 +671,6 @@ def test_set_na_rep(self):
assert ctx["body"][0][1]["display_value"] == "NA"
assert ctx["body"][0][2]["display_value"] == "-"
- def test_nonunique_raises(self):
- df = DataFrame([[1, 2]], columns=["A", "A"])
- msg = "style is not supported for non-unique indices."
- with pytest.raises(ValueError, match=msg):
- df.style
-
- with pytest.raises(ValueError, match=msg):
- Styler(df)
-
def test_caption(self):
styler = Styler(self.df, caption="foo")
result = styler.render()
| - [x] closes #41143
The PR aims to make Styler compatible with non-unique indexes/columns, for the purpose of rendering all DataFrame types (even if no styling is applied)
- [x] `Styler.format`: made FULLY compatible with some modifications to the loops, inc TESTS.
- [x] <s>`Styler.apply` and `Styler.applymap`: made PARTIALLY compatible:
- if `subset`s are unique then compatible, inc TESTS.
- if `subset`s are non-unique slices will raise a not compatible `KeyError`, inc. TESTS</s>
- [x] `Styler.apply` and `applymap` are NOT compatible. Raises KeyError.
- [x] `Styler.set_table_styles`: made FULLY compatible and will style multiple rows/columns from a non-unique key, inc TESTS.
- [x] `Styler.set_td_classes` uses `reindex` so is PARTIALLY compatible where `classes` has unique index/columns: now returns a `KeyError` in non-unique case, inc TESTS.
- [x] `Styler.set_tooltips` uses `reindex` so is PARTIALLY compatible where `ttips` has unique index/columns: now returns a `KeyError` in non-unique case, inc TESTS.
- [x] `Styler.hide_index` and `.hide_columns` are already FULLY compatible through existing code (inc TESTS)
- [x] all the `built-in` styling functions use some version of `apply` or `applymap` so are captured by the above cases.
I believe this is all relevant functionality reviewed.
| https://api.github.com/repos/pandas-dev/pandas/pulls/41269 | 2021-05-02T18:38:41Z | 2021-05-06T23:32:02Z | 2021-05-06T23:32:02Z | 2021-05-07T05:18:04Z |
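The core change in `Styler.format` swaps `Index.get_loc` — which returns a slice or boolean mask for duplicated labels rather than a scalar position — for `get_indexer_for`, which returns every matching integer position. A small sketch of that API on a non-unique index:

```python
import pandas as pd

idx = pd.Index(["c", "d", "d"])

# get_indexer_for returns all positions of each requested label,
# which is what lets format() target every duplicated column.
positions = idx.get_indexer_for(["d"])
print(list(positions))  # [1, 2]
```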
Bug in setitem raising ValueError with row-slice indexer on df with list-like on rhs | diff --git a/doc/source/whatsnew/v1.3.0.rst b/doc/source/whatsnew/v1.3.0.rst
index b2f4de22ca5c1..2c2ae8cc1f88b 100644
--- a/doc/source/whatsnew/v1.3.0.rst
+++ b/doc/source/whatsnew/v1.3.0.rst
@@ -752,6 +752,7 @@ Indexing
- Bug in :meth:`RangeIndex.append` where a single object of length 1 was concatenated incorrectly (:issue:`39401`)
- Bug in setting ``numpy.timedelta64`` values into an object-dtype :class:`Series` using a boolean indexer (:issue:`39488`)
- Bug in setting numeric values into a into a boolean-dtypes :class:`Series` using ``at`` or ``iat`` failing to cast to object-dtype (:issue:`39582`)
+- Bug in :meth:`DataFrame.__setitem__` and :meth:`DataFrame.iloc.__setitem__` raising ``ValueError`` when trying to index with a row-slice and setting a list as values (:issue:`40440`)
- Bug in :meth:`DataFrame.loc.__setitem__` when setting-with-expansion incorrectly raising when the index in the expanding axis contains duplicates (:issue:`40096`)
- Bug in :meth:`DataFrame.loc` incorrectly matching non-boolean index elements (:issue:`20432`)
- Bug in :meth:`Series.__delitem__` with ``ExtensionDtype`` incorrectly casting to ``ndarray`` (:issue:`40386`)
diff --git a/pandas/core/indexers.py b/pandas/core/indexers.py
index aa780787d58b6..d3756d6252c0a 100644
--- a/pandas/core/indexers.py
+++ b/pandas/core/indexers.py
@@ -180,7 +180,8 @@ def check_setitem_lengths(indexer, value, values) -> bool:
elif isinstance(indexer, slice):
if is_list_like(value):
- if len(value) != length_of_indexer(indexer, values):
+ if len(value) != length_of_indexer(indexer, values) and values.ndim == 1:
+ # In case of two dimensional value is used row-wise and broadcasted
raise ValueError(
"cannot set using a slice indexer with a "
"different length than the value"
diff --git a/pandas/tests/frame/indexing/test_setitem.py b/pandas/tests/frame/indexing/test_setitem.py
index 3fa8295084718..4004e595c832f 100644
--- a/pandas/tests/frame/indexing/test_setitem.py
+++ b/pandas/tests/frame/indexing/test_setitem.py
@@ -791,6 +791,34 @@ def test_setitem_slice_position(self):
expected = DataFrame(arr)
tm.assert_frame_equal(df, expected)
+ @pytest.mark.parametrize("indexer", [tm.setitem, tm.iloc])
+ @pytest.mark.parametrize("box", [Series, np.array, list])
+ @pytest.mark.parametrize("n", [1, 2, 3])
+ def test_setitem_broadcasting_rhs(self, n, box, indexer):
+ # GH#40440
+ # TODO: Add pandas array as box after GH#40933 is fixed
+ df = DataFrame([[1, 3, 5]] + [[2, 4, 6]] * n, columns=["a", "b", "c"])
+ indexer(df)[1:] = box([10, 11, 12])
+ expected = DataFrame([[1, 3, 5]] + [[10, 11, 12]] * n, columns=["a", "b", "c"])
+ tm.assert_frame_equal(df, expected)
+
+ @pytest.mark.parametrize("indexer", [tm.setitem, tm.iloc])
+ @pytest.mark.parametrize("box", [Series, np.array, list])
+ @pytest.mark.parametrize("n", [1, 2, 3])
+ def test_setitem_broadcasting_rhs_mixed_dtypes(self, n, box, indexer):
+ # GH#40440
+ # TODO: Add pandas array as box after GH#40933 is fixed
+ df = DataFrame(
+ [[1, 3, 5], ["x", "y", "z"]] + [[2, 4, 6]] * n, columns=["a", "b", "c"]
+ )
+ indexer(df)[1:] = box([10, 11, 12])
+ expected = DataFrame(
+ [[1, 3, 5]] + [[10, 11, 12]] * (n + 1),
+ columns=["a", "b", "c"],
+ dtype="object",
+ )
+ tm.assert_frame_equal(df, expected)
+
class TestDataFrameSetItemCallable:
def test_setitem_callable(self):
| - [x] closes #40440
- [x] tests added / passed
- [x] Ensure all linting tests pass, see [here](https://pandas.pydata.org/pandas-docs/dev/development/contributing.html#code-standards) for how to run them
- [x] whatsnew entry
I'll address the IntegerArray case from #40933 after this is merged | https://api.github.com/repos/pandas-dev/pandas/pulls/41268 | 2021-05-02T17:55:43Z | 2021-05-02T23:47:54Z | 2021-05-02T23:47:54Z | 2021-05-03T18:30:08Z |
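The behaviour this patch enables, a 1-D list assigned to a row-slice of a 2-D object being applied row-wise instead of raising, can be sketched in plain Python (a simplified model of the relaxed length check, not the pandas internals):

```python
def set_rows(table, row_slice, value):
    """Broadcast a 1-D ``value`` across every row selected by ``row_slice``.

    Simplified sketch of the GH#40440 behaviour: for a 2-D target the
    slice-length check is skipped and the value is copied into each row;
    a per-row length mismatch still raises.
    """
    for r in range(*row_slice.indices(len(table))):
        if len(value) != len(table[r]):
            raise ValueError(
                "cannot set using a slice indexer with a "
                "different length than the value"
            )
        table[r] = list(value)  # copy so rows do not alias each other
    return table

table = [[1, 3, 5], [2, 4, 6], [2, 4, 6]]
set_rows(table, slice(1, None), [10, 11, 12])
# table is now [[1, 3, 5], [10, 11, 12], [10, 11, 12]]
```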
ENH: Numba engine for EWM.mean | diff --git a/doc/source/whatsnew/v1.3.0.rst b/doc/source/whatsnew/v1.3.0.rst
index b2f4de22ca5c1..ea1bc309bb041 100644
--- a/doc/source/whatsnew/v1.3.0.rst
+++ b/doc/source/whatsnew/v1.3.0.rst
@@ -197,7 +197,7 @@ Other enhancements
- Add support for dict-like names in :class:`MultiIndex.set_names` and :class:`MultiIndex.rename` (:issue:`20421`)
- :func:`pandas.read_excel` can now auto detect .xlsb files (:issue:`35416`)
- :class:`pandas.ExcelWriter` now accepts an ``if_sheet_exists`` parameter to control the behaviour of append mode when writing to existing sheets (:issue:`40230`)
-- :meth:`.Rolling.sum`, :meth:`.Expanding.sum`, :meth:`.Rolling.mean`, :meth:`.Expanding.mean`, :meth:`.Rolling.median`, :meth:`.Expanding.median`, :meth:`.Rolling.max`, :meth:`.Expanding.max`, :meth:`.Rolling.min`, and :meth:`.Expanding.min` now support ``Numba`` execution with the ``engine`` keyword (:issue:`38895`)
+- :meth:`.Rolling.sum`, :meth:`.Expanding.sum`, :meth:`.Rolling.mean`, :meth:`.Expanding.mean`, :meth:`.ExponentialMovingWindow.mean`, :meth:`.Rolling.median`, :meth:`.Expanding.median`, :meth:`.Rolling.max`, :meth:`.Expanding.max`, :meth:`.Rolling.min`, and :meth:`.Expanding.min` now support ``Numba`` execution with the ``engine`` keyword (:issue:`38895`, :issue:`41267`)
- :meth:`DataFrame.apply` can now accept NumPy unary operators as strings, e.g. ``df.apply("sqrt")``, which was already the case for :meth:`Series.apply` (:issue:`39116`)
- :meth:`DataFrame.apply` can now accept non-callable DataFrame properties as strings, e.g. ``df.apply("size")``, which was already the case for :meth:`Series.apply` (:issue:`39116`)
- :meth:`DataFrame.applymap` can now accept kwargs to pass on to func (:issue:`39987`)
diff --git a/pandas/core/window/ewm.py b/pandas/core/window/ewm.py
index 4a210d8b47e9b..08a65964f278e 100644
--- a/pandas/core/window/ewm.py
+++ b/pandas/core/window/ewm.py
@@ -29,16 +29,18 @@
args_compat,
create_section_header,
kwargs_compat,
+ numba_notes,
template_header,
template_returns,
template_see_also,
+ window_agg_numba_parameters,
)
from pandas.core.window.indexers import (
BaseIndexer,
ExponentialMovingWindowIndexer,
GroupbyIndexer,
)
-from pandas.core.window.numba_ import generate_numba_groupby_ewma_func
+from pandas.core.window.numba_ import generate_numba_ewma_func
from pandas.core.window.rolling import (
BaseWindow,
BaseWindowGroupby,
@@ -372,26 +374,41 @@ def aggregate(self, func, *args, **kwargs):
template_header,
create_section_header("Parameters"),
args_compat,
+ window_agg_numba_parameters,
kwargs_compat,
create_section_header("Returns"),
template_returns,
create_section_header("See Also"),
- template_see_also[:-1],
+ template_see_also,
+ create_section_header("Notes"),
+ numba_notes.replace("\n", "", 1),
window_method="ewm",
aggregation_description="(exponential weighted moment) mean",
agg_method="mean",
)
- def mean(self, *args, **kwargs):
- nv.validate_window_func("mean", args, kwargs)
- window_func = window_aggregations.ewma
- window_func = partial(
- window_func,
- com=self._com,
- adjust=self.adjust,
- ignore_na=self.ignore_na,
- deltas=self._deltas,
- )
- return self._apply(window_func)
+ def mean(self, *args, engine=None, engine_kwargs=None, **kwargs):
+ if maybe_use_numba(engine):
+ ewma_func = generate_numba_ewma_func(
+ engine_kwargs, self._com, self.adjust, self.ignore_na, self._deltas
+ )
+ return self._apply(
+ ewma_func,
+ numba_cache_key=(lambda x: x, "ewma"),
+ )
+ elif engine in ("cython", None):
+ if engine_kwargs is not None:
+ raise ValueError("cython engine does not accept engine_kwargs")
+ nv.validate_window_func("mean", args, kwargs)
+ window_func = partial(
+ window_aggregations.ewma,
+ com=self._com,
+ adjust=self.adjust,
+ ignore_na=self.ignore_na,
+ deltas=self._deltas,
+ )
+ return self._apply(window_func)
+ else:
+ raise ValueError("engine must be either 'numba' or 'cython'")
@doc(
template_header,
@@ -635,45 +652,3 @@ def _get_window_indexer(self) -> GroupbyIndexer:
window_indexer=ExponentialMovingWindowIndexer,
)
return window_indexer
-
- def mean(self, engine=None, engine_kwargs=None):
- """
- Parameters
- ----------
- engine : str, default None
- * ``'cython'`` : Runs mean through C-extensions from cython.
- * ``'numba'`` : Runs mean through JIT compiled code from numba.
- Only available when ``raw`` is set to ``True``.
- * ``None`` : Defaults to ``'cython'`` or globally setting
- ``compute.use_numba``
-
- .. versionadded:: 1.2.0
-
- engine_kwargs : dict, default None
- * For ``'cython'`` engine, there are no accepted ``engine_kwargs``
- * For ``'numba'`` engine, the engine can accept ``nopython``, ``nogil``
- and ``parallel`` dictionary keys. The values must either be ``True`` or
- ``False``. The default ``engine_kwargs`` for the ``'numba'`` engine is
- ``{'nopython': True, 'nogil': False, 'parallel': False}``.
-
- .. versionadded:: 1.2.0
-
- Returns
- -------
- Series or DataFrame
- Return type is determined by the caller.
- """
- if maybe_use_numba(engine):
- groupby_ewma_func = generate_numba_groupby_ewma_func(
- engine_kwargs, self._com, self.adjust, self.ignore_na, self._deltas
- )
- return self._apply(
- groupby_ewma_func,
- numba_cache_key=(lambda x: x, "groupby_ewma"),
- )
- elif engine in ("cython", None):
- if engine_kwargs is not None:
- raise ValueError("cython engine does not accept engine_kwargs")
- return super().mean()
- else:
- raise ValueError("engine must be either 'numba' or 'cython'")
diff --git a/pandas/core/window/numba_.py b/pandas/core/window/numba_.py
index d84dea7ee622c..9407efd0bef2b 100644
--- a/pandas/core/window/numba_.py
+++ b/pandas/core/window/numba_.py
@@ -80,7 +80,7 @@ def roll_apply(
return roll_apply
-def generate_numba_groupby_ewma_func(
+def generate_numba_ewma_func(
engine_kwargs: Optional[Dict[str, bool]],
com: float,
adjust: bool,
@@ -88,7 +88,7 @@ def generate_numba_groupby_ewma_func(
deltas: np.ndarray,
):
"""
- Generate a numba jitted groupby ewma function specified by values
+ Generate a numba jitted ewma function specified by values
from engine_kwargs.
Parameters
@@ -106,14 +106,14 @@ def generate_numba_groupby_ewma_func(
"""
nopython, nogil, parallel = get_jit_arguments(engine_kwargs)
- cache_key = (lambda x: x, "groupby_ewma")
+ cache_key = (lambda x: x, "ewma")
if cache_key in NUMBA_FUNC_CACHE:
return NUMBA_FUNC_CACHE[cache_key]
numba = import_optional_dependency("numba")
@numba.jit(nopython=nopython, nogil=nogil, parallel=parallel)
- def groupby_ewma(
+ def ewma(
values: np.ndarray,
begin: np.ndarray,
end: np.ndarray,
@@ -121,15 +121,15 @@ def groupby_ewma(
) -> np.ndarray:
result = np.empty(len(values))
alpha = 1.0 / (1.0 + com)
+ old_wt_factor = 1.0 - alpha
+ new_wt = 1.0 if adjust else alpha
+
for i in numba.prange(len(begin)):
start = begin[i]
stop = end[i]
window = values[start:stop]
sub_result = np.empty(len(window))
- old_wt_factor = 1.0 - alpha
- new_wt = 1.0 if adjust else alpha
-
weighted_avg = window[0]
nobs = int(not np.isnan(weighted_avg))
sub_result[0] = weighted_avg if nobs >= minimum_periods else np.nan
@@ -166,7 +166,7 @@ def groupby_ewma(
return result
- return groupby_ewma
+ return ewma
def generate_numba_table_func(
diff --git a/pandas/tests/window/test_numba.py b/pandas/tests/window/test_numba.py
index 06b34201e0dba..b79c367d482ae 100644
--- a/pandas/tests/window/test_numba.py
+++ b/pandas/tests/window/test_numba.py
@@ -123,30 +123,44 @@ def func_2(x):
@td.skip_if_no("numba", "0.46.0")
-class TestGroupbyEWMMean:
- def test_invalid_engine(self):
+class TestEWMMean:
+ @pytest.mark.parametrize(
+ "grouper", [lambda x: x, lambda x: x.groupby("A")], ids=["None", "groupby"]
+ )
+ def test_invalid_engine(self, grouper):
df = DataFrame({"A": ["a", "b", "a", "b"], "B": range(4)})
with pytest.raises(ValueError, match="engine must be either"):
- df.groupby("A").ewm(com=1.0).mean(engine="foo")
+ grouper(df).ewm(com=1.0).mean(engine="foo")
- def test_invalid_engine_kwargs(self):
+ @pytest.mark.parametrize(
+ "grouper", [lambda x: x, lambda x: x.groupby("A")], ids=["None", "groupby"]
+ )
+ def test_invalid_engine_kwargs(self, grouper):
df = DataFrame({"A": ["a", "b", "a", "b"], "B": range(4)})
with pytest.raises(ValueError, match="cython engine does not"):
- df.groupby("A").ewm(com=1.0).mean(
+ grouper(df).ewm(com=1.0).mean(
engine="cython", engine_kwargs={"nopython": True}
)
- def test_cython_vs_numba(self, nogil, parallel, nopython, ignore_na, adjust):
+ @pytest.mark.parametrize(
+ "grouper", [lambda x: x, lambda x: x.groupby("A")], ids=["None", "groupby"]
+ )
+ def test_cython_vs_numba(
+ self, grouper, nogil, parallel, nopython, ignore_na, adjust
+ ):
df = DataFrame({"A": ["a", "b", "a", "b"], "B": range(4)})
- gb_ewm = df.groupby("A").ewm(com=1.0, adjust=adjust, ignore_na=ignore_na)
+ ewm = grouper(df).ewm(com=1.0, adjust=adjust, ignore_na=ignore_na)
engine_kwargs = {"nogil": nogil, "parallel": parallel, "nopython": nopython}
- result = gb_ewm.mean(engine="numba", engine_kwargs=engine_kwargs)
- expected = gb_ewm.mean(engine="cython")
+ result = ewm.mean(engine="numba", engine_kwargs=engine_kwargs)
+ expected = ewm.mean(engine="cython")
tm.assert_frame_equal(result, expected)
- def test_cython_vs_numba_times(self, nogil, parallel, nopython, ignore_na):
+ @pytest.mark.parametrize(
+ "grouper", [lambda x: x, lambda x: x.groupby("A")], ids=["None", "groupby"]
+ )
+ def test_cython_vs_numba_times(self, grouper, nogil, parallel, nopython, ignore_na):
# GH 40951
halflife = "23 days"
times = to_datetime(
@@ -160,13 +174,13 @@ def test_cython_vs_numba_times(self, nogil, parallel, nopython, ignore_na):
]
)
df = DataFrame({"A": ["a", "b", "a", "b", "b", "a"], "B": [0, 0, 1, 1, 2, 2]})
- gb_ewm = df.groupby("A").ewm(
+ ewm = grouper(df).ewm(
halflife=halflife, adjust=True, ignore_na=ignore_na, times=times
)
engine_kwargs = {"nogil": nogil, "parallel": parallel, "nopython": nopython}
- result = gb_ewm.mean(engine="numba", engine_kwargs=engine_kwargs)
- expected = gb_ewm.mean(engine="cython")
+ result = ewm.mean(engine="numba", engine_kwargs=engine_kwargs)
+ expected = ewm.mean(engine="cython")
tm.assert_frame_equal(result, expected)
| - [x] tests added / passed
- [x] Ensure all linting tests pass, see [here](https://pandas.pydata.org/pandas-docs/dev/development/contributing.html#code-standards) for how to run them
- [x] whatsnew entry
| https://api.github.com/repos/pandas-dev/pandas/pulls/41267 | 2021-05-02T17:10:37Z | 2021-05-02T23:57:20Z | 2021-05-02T23:57:20Z | 2021-05-03T05:04:48Z |
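The recurrence the jitted `ewma` kernel above computes can be sketched in pure Python (simplified: a single window, no NaN or `min_periods` handling):

```python
def ewma(values, com, adjust=True):
    """Pure-Python sketch of the exponentially weighted mean recurrence
    mirrored from the numba kernel (simplified: no NaN handling)."""
    alpha = 1.0 / (1.0 + com)
    old_wt_factor = 1.0 - alpha
    new_wt = 1.0 if adjust else alpha

    weighted_avg = values[0]
    old_wt = 1.0
    result = [weighted_avg]
    for x in values[1:]:
        old_wt *= old_wt_factor
        weighted_avg = (old_wt * weighted_avg + new_wt * x) / (old_wt + new_wt)
        old_wt = old_wt + new_wt if adjust else 1.0
        result.append(weighted_avg)
    return result

ewma([1.0, 2.0, 3.0], com=1.0)  # [1.0, 1.666..., 2.4285...]
```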
API: make `hide_columns` and `hide_index` have a consistent signature and function in `Styler` | diff --git a/pandas/io/formats/style.py b/pandas/io/formats/style.py
index 93c3843b36846..bf4df4ff73055 100644
--- a/pandas/io/formats/style.py
+++ b/pandas/io/formats/style.py
@@ -31,7 +31,10 @@
from pandas.util._decorators import doc
import pandas as pd
-from pandas import RangeIndex
+from pandas import (
+ IndexSlice,
+ RangeIndex,
+)
from pandas.api.types import is_list_like
from pandas.core import generic
import pandas.core.common as com
@@ -682,7 +685,7 @@ def to_latex(
self.data.columns = RangeIndex(stop=len(self.data.columns))
numeric_cols = self.data._get_numeric_data().columns.to_list()
self.data.columns = _original_columns
- column_format = "" if self.hidden_index else "l" * self.data.index.nlevels
+ column_format = "" if self.hide_index_ else "l" * self.data.index.nlevels
for ci, _ in enumerate(self.data.columns):
if ci not in self.hidden_columns:
column_format += (
@@ -926,7 +929,7 @@ def _copy(self, deepcopy: bool = False) -> Styler:
)
styler.uuid = self.uuid
- styler.hidden_index = self.hidden_index
+ styler.hide_index_ = self.hide_index_
if deepcopy:
styler.ctx = copy.deepcopy(self.ctx)
@@ -965,7 +968,7 @@ def clear(self) -> None:
self.cell_context.clear()
self._todo.clear()
- self.hidden_index = False
+ self.hide_index_ = False
self.hidden_columns = []
# self.format and self.table_styles may be dependent on user
# input in self.__init__()
@@ -1096,7 +1099,7 @@ def _applymap(
) -> Styler:
func = partial(func, **kwargs) # applymap doesn't take kwargs?
if subset is None:
- subset = pd.IndexSlice[:]
+ subset = IndexSlice[:]
subset = non_reducing_slice(subset)
result = self.data.loc[subset].applymap(func)
self._update_ctx(result)
@@ -1509,37 +1512,169 @@ def set_na_rep(self, na_rep: str) -> StylerRenderer:
self.na_rep = na_rep
return self.format(na_rep=na_rep, precision=self.precision)
- def hide_index(self) -> Styler:
+ def hide_index(self, subset: Subset | None = None) -> Styler:
"""
- Hide any indices from rendering.
+ Hide the entire index, or specific keys in the index from rendering.
+
+ This method has dual functionality:
+
+ - if ``subset`` is ``None`` then the entire index will be hidden whilst
+ displaying all data-rows.
+ - if a ``subset`` is given then those specific rows will be hidden whilst the
+ index itself remains visible.
+
+ .. versionchanged:: 1.3.0
+
+ Parameters
+ ----------
+ subset : label, array-like, IndexSlice, optional
+ A valid 1d input or single key along the index axis within
+ `DataFrame.loc[<subset>, :]`, to limit ``data`` to *before* applying
+ the function.
Returns
-------
self : Styler
+
+ See Also
+ --------
+ Styler.hide_columns: Hide the entire column headers row, or specific columns.
+
+ Examples
+ --------
+ Simple application hiding specific rows:
+
+ >>> df = pd.DataFrame([[1,2], [3,4], [5,6]], index=["a", "b", "c"])
+ >>> df.style.hide_index(["a", "b"])
+ 0 1
+ c 5 6
+
+ Hide the index and retain the data values:
+
+ >>> midx = pd.MultiIndex.from_product([["x", "y"], ["a", "b", "c"]])
+ >>> df = pd.DataFrame(np.random.randn(6,6), index=midx, columns=midx)
+ >>> df.style.format("{:.1f}").hide_index()
+ x y
+ a b c a b c
+ 0.1 0.0 0.4 1.3 0.6 -1.4
+ 0.7 1.0 1.3 1.5 -0.0 -0.2
+ 1.4 -0.8 1.6 -0.2 -0.4 -0.3
+ 0.4 1.0 -0.2 -0.8 -1.2 1.1
+ -0.6 1.2 1.8 1.9 0.3 0.3
+ 0.8 0.5 -0.3 1.2 2.2 -0.8
+
+ Hide specific rows but retain the index:
+
+ >>> df.style.format("{:.1f}").hide_index(subset=(slice(None), ["a", "c"]))
+ x y
+ a b c a b c
+ x b 0.7 1.0 1.3 1.5 -0.0 -0.2
+ y b -0.6 1.2 1.8 1.9 0.3 0.3
+
+ Hide specific rows and the index:
+
+ >>> df.style.format("{:.1f}").hide_index(subset=(slice(None), ["a", "c"]))
+ ... .hide_index()
+ x y
+ a b c a b c
+ 0.7 1.0 1.3 1.5 -0.0 -0.2
+ -0.6 1.2 1.8 1.9 0.3 0.3
"""
- self.hidden_index = True
+ if subset is None:
+ self.hide_index_ = True
+ else:
+ subset_ = IndexSlice[subset, :] # new var so mypy reads not Optional
+ subset = non_reducing_slice(subset_)
+ hide = self.data.loc[subset]
+ hrows = self.index.get_indexer_for(hide.index)
+ # error: Incompatible types in assignment (expression has type
+ # "ndarray", variable has type "Sequence[int]")
+ self.hidden_rows = hrows # type: ignore[assignment]
return self
- def hide_columns(self, subset: Subset) -> Styler:
+ def hide_columns(self, subset: Subset | None = None) -> Styler:
"""
- Hide columns from rendering.
+ Hide the column headers or specific keys in the columns from rendering.
+
+ This method has dual functionality:
+
+ - if ``subset`` is ``None`` then the entire column headers row will be hidden
+ whilst the data-values remain visible.
+ - if a ``subset`` is given then those specific columns, including the
+ data-values will be hidden, whilst the column headers row remains visible.
+
+ .. versionchanged:: 1.3.0
Parameters
----------
- subset : label, array-like, IndexSlice
- A valid 1d input or single key along the appropriate axis within
- `DataFrame.loc[]`, to limit ``data`` to *before* applying the function.
+ subset : label, array-like, IndexSlice, optional
+ A valid 1d input or single key along the columns axis within
+ `DataFrame.loc[:, <subset>]`, to limit ``data`` to *before* applying
+ the function.
Returns
-------
self : Styler
+
+ See Also
+ --------
+ Styler.hide_index: Hide the entire index, or specific keys in the index.
+
+ Examples
+ --------
+ Simple application hiding specific columns:
+
+ >>> df = pd.DataFrame([[1, 2, 3], [4, 5, 6]], columns=["a", "b", "c"])
+ >>> df.style.hide_columns(["a", "b"])
+ c
+ 0 3
+ 1 6
+
+ Hide column headers and retain the data values:
+
+ >>> midx = pd.MultiIndex.from_product([["x", "y"], ["a", "b", "c"]])
+ >>> df = pd.DataFrame(np.random.randn(6,6), index=midx, columns=midx)
+ >>> df.style.format("{:.1f}").hide_columns()
+ x d 0.1 0.0 0.4 1.3 0.6 -1.4
+ e 0.7 1.0 1.3 1.5 -0.0 -0.2
+ f 1.4 -0.8 1.6 -0.2 -0.4 -0.3
+ y d 0.4 1.0 -0.2 -0.8 -1.2 1.1
+ e -0.6 1.2 1.8 1.9 0.3 0.3
+ f 0.8 0.5 -0.3 1.2 2.2 -0.8
+
+ Hide specific columns but retain the column headers:
+
+ >>> df.style.format("{:.1f}").hide_columns(subset=(slice(None), ["a", "c"]))
+ x y
+ b b
+ x a 0.0 0.6
+ b 1.0 -0.0
+ c -0.8 -0.4
+ y a 1.0 -1.2
+ b 1.2 0.3
+ c 0.5 2.2
+
+ Hide specific columns and the column headers:
+
+ >>> df.style.format("{:.1f}").hide_columns(subset=(slice(None), ["a", "c"]))
+ ... .hide_columns()
+ x a 0.0 0.6
+ b 1.0 -0.0
+ c -0.8 -0.4
+ y a 1.0 -1.2
+ b 1.2 0.3
+ c 0.5 2.2
"""
- subset = non_reducing_slice(subset)
- hidden_df = self.data.loc[subset]
- hcols = self.columns.get_indexer_for(hidden_df.columns)
- # error: Incompatible types in assignment (expression has type
- # "ndarray", variable has type "Sequence[int]")
- self.hidden_columns = hcols # type: ignore[assignment]
+ if subset is None:
+ self.hide_columns_ = True
+ else:
+ subset_ = IndexSlice[:, subset] # new var so mypy reads not Optional
+ subset = non_reducing_slice(subset_)
+ hide = self.data.loc[subset]
+ hcols = self.columns.get_indexer_for(hide.columns)
+ # error: Incompatible types in assignment (expression has type
+ # "ndarray", variable has type "Sequence[int]")
+ self.hidden_columns = hcols # type: ignore[assignment]
return self
# -----------------------------------------------------------------------
diff --git a/pandas/io/formats/style_render.py b/pandas/io/formats/style_render.py
index 7686d8a340c37..514597d27a92b 100644
--- a/pandas/io/formats/style_render.py
+++ b/pandas/io/formats/style_render.py
@@ -97,7 +97,9 @@ def __init__(
self.cell_ids = cell_ids
# add rendering variables
- self.hidden_index: bool = False
+ self.hide_index_: bool = False # bools for hiding col/row headers
+ self.hide_columns_: bool = False
+ self.hidden_rows: Sequence[int] = [] # sequence for specific hidden rows/cols
self.hidden_columns: Sequence[int] = []
self.ctx: DefaultDict[tuple[int, int], CSSList] = defaultdict(list)
self.cell_context: DefaultDict[tuple[int, int], str] = defaultdict(str)
@@ -297,55 +299,56 @@ def _translate_header(
head = []
# 1) column headers
- for r in range(self.data.columns.nlevels):
- index_blanks = [
- _element("th", blank_class, blank_value, not self.hidden_index)
- ] * (self.data.index.nlevels - 1)
-
- name = self.data.columns.names[r]
- column_name = [
- _element(
- "th",
- f"{blank_class if name is None else index_name_class} level{r}",
- name if name is not None else blank_value,
- not self.hidden_index,
- )
- ]
-
- if clabels:
- column_headers = [
+ if not self.hide_columns_:
+ for r in range(self.data.columns.nlevels):
+ index_blanks = [
+ _element("th", blank_class, blank_value, not self.hide_index_)
+ ] * (self.data.index.nlevels - 1)
+
+ name = self.data.columns.names[r]
+ column_name = [
_element(
"th",
- f"{col_heading_class} level{r} col{c}",
- value,
- _is_visible(c, r, col_lengths),
- attributes=(
- f'colspan="{col_lengths.get((r, c), 0)}"'
- if col_lengths.get((r, c), 0) > 1
- else ""
- ),
+ f"{blank_class if name is None else index_name_class} level{r}",
+ name if name is not None else blank_value,
+ not self.hide_index_,
)
- for c, value in enumerate(clabels[r])
]
- if len(self.data.columns) > max_cols:
- # add an extra column with `...` value to indicate trimming
- column_headers.append(
+ if clabels:
+ column_headers = [
_element(
"th",
- f"{col_heading_class} level{r} {trimmed_col_class}",
- "...",
- True,
- attributes="",
+ f"{col_heading_class} level{r} col{c}",
+ value,
+ _is_visible(c, r, col_lengths),
+ attributes=(
+ f'colspan="{col_lengths.get((r, c), 0)}"'
+ if col_lengths.get((r, c), 0) > 1
+ else ""
+ ),
)
- )
- head.append(index_blanks + column_name + column_headers)
+ for c, value in enumerate(clabels[r])
+ ]
+
+ if len(self.data.columns) > max_cols:
+ # add an extra column with `...` value to indicate trimming
+ column_headers.append(
+ _element(
+ "th",
+ f"{col_heading_class} level{r} {trimmed_col_class}",
+ "...",
+ True,
+ attributes="",
+ )
+ )
+ head.append(index_blanks + column_name + column_headers)
# 2) index names
if (
self.data.index.names
and com.any_not_none(*self.data.index.names)
- and not self.hidden_index
+ and not self.hide_index_
):
index_names = [
_element(
@@ -411,7 +414,9 @@ def _translate_body(
The associated HTML elements needed for template rendering.
"""
# for sparsifying a MultiIndex
- idx_lengths = _get_level_lengths(self.index, sparsify_index, max_rows)
+ idx_lengths = _get_level_lengths(
+ self.index, sparsify_index, max_rows, self.hidden_rows
+ )
rlabels = self.data.index.tolist()[:max_rows] # slice to allow trimming
if self.data.index.nlevels == 1:
@@ -425,7 +430,7 @@ def _translate_body(
"th",
f"{row_heading_class} level{c} {trimmed_row_class}",
"...",
- not self.hidden_index,
+ not self.hide_index_,
attributes="",
)
for c in range(self.data.index.nlevels)
@@ -462,7 +467,7 @@ def _translate_body(
"th",
f"{row_heading_class} level{c} row{r}",
value,
- (_is_visible(r, c, idx_lengths) and not self.hidden_index),
+ (_is_visible(r, c, idx_lengths) and not self.hide_index_),
id=f"level{c}_row{r}",
attributes=(
f'rowspan="{idx_lengths.get((c, r), 0)}"'
@@ -496,7 +501,7 @@ def _translate_body(
"td",
f"{data_class} row{r} col{c}{cls}",
value,
- (c not in self.hidden_columns),
+ (c not in self.hidden_columns and r not in self.hidden_rows),
attributes="",
display_value=self._display_funcs[(r, c)](value),
)
@@ -527,7 +532,7 @@ def _translate_latex(self, d: dict) -> None:
d["head"] = [[col for col in row if col["is_visible"]] for row in d["head"]]
body = []
for r, row in enumerate(d["body"]):
- if self.hidden_index:
+ if self.hide_index_:
row_body_headers = []
else:
row_body_headers = [
@@ -842,7 +847,13 @@ def _get_level_lengths(
last_label = j
lengths[(i, last_label)] = 0
elif j not in hidden_elements:
- lengths[(i, last_label)] += 1
+ if lengths[(i, last_label)] == 0:
+ # if the previous iteration was first-of-kind but hidden then offset
+ last_label = j
+ lengths[(i, last_label)] = 1
+ else:
+ # else add to previous iteration
+ lengths[(i, last_label)] += 1
non_zero_lengths = {
element: length for element, length in lengths.items() if length >= 1
diff --git a/pandas/tests/io/formats/style/test_style.py b/pandas/tests/io/formats/style/test_style.py
index 281170ab6c7cb..0516aa6029487 100644
--- a/pandas/tests/io/formats/style/test_style.py
+++ b/pandas/tests/io/formats/style/test_style.py
@@ -221,7 +221,7 @@ def test_copy(self, do_changes, do_render):
[{"selector": "th", "props": [("foo", "bar")]}]
)
self.styler.set_table_attributes('class="foo" data-bar')
- self.styler.hidden_index = not self.styler.hidden_index
+ self.styler.hide_index_ = not self.styler.hide_index_
self.styler.hide_columns("A")
classes = DataFrame(
[["favorite-val red", ""], [None, "blue my-val"]],
@@ -292,7 +292,7 @@ def test_copy(self, do_changes, do_render):
"table_styles",
"table_attributes",
"cell_ids",
- "hidden_index",
+ "hide_index_",
"hidden_columns",
"cell_context",
]
@@ -317,7 +317,7 @@ def test_clear(self):
assert len(s._todo) > 0
assert s.tooltips
assert len(s.cell_context) > 0
- assert s.hidden_index is True
+ assert s.hide_index_ is True
assert len(s.hidden_columns) > 0
s = s._compute()
@@ -331,7 +331,7 @@ def test_clear(self):
assert len(s._todo) == 0
assert not s.tooltips
assert len(s.cell_context) == 0
- assert s.hidden_index is False
+ assert s.hide_index_ is False
assert len(s.hidden_columns) == 0
def test_render(self):
@@ -1127,6 +1127,14 @@ def test_mi_sparse_column_names(self):
]
assert head == expected
+ def test_hide_column_headers(self):
+ ctx = self.styler.hide_columns()._translate(True, True)
+ assert len(ctx["head"]) == 0 # no header entries with an unnamed index
+
+ self.df.index.name = "some_name"
+ ctx = self.df.style.hide_columns()._translate(True, True)
+ assert len(ctx["head"]) == 1 # only a single row for index names: no col heads
+
def test_hide_single_index(self):
# GH 14194
# single unnamed index
@@ -1195,7 +1203,7 @@ def test_hide_columns_single_level(self):
assert not ctx["body"][0][1]["is_visible"] # col A, row 1
assert not ctx["body"][1][2]["is_visible"] # col B, row 1
- def test_hide_columns_mult_levels(self):
+ def test_hide_columns_index_mult_levels(self):
# GH 14194
# setup dataframe with multiple column levels and indices
i1 = MultiIndex.from_arrays(
@@ -1227,7 +1235,8 @@ def test_hide_columns_mult_levels(self):
# hide first column only
ctx = df.style.hide_columns([("b", 0)])._translate(True, True)
- assert ctx["head"][0][2]["is_visible"] # b
+ assert not ctx["head"][0][2]["is_visible"] # b
+ assert ctx["head"][0][3]["is_visible"] # b
assert not ctx["head"][1][2]["is_visible"] # 0
assert not ctx["body"][1][2]["is_visible"] # 3
assert ctx["body"][1][3]["is_visible"]
@@ -1243,6 +1252,18 @@ def test_hide_columns_mult_levels(self):
assert ctx["body"][1][2]["is_visible"]
assert ctx["body"][1][2]["display_value"] == 3
+ # hide top row level, which hides both rows
+ ctx = df.style.hide_index("a")._translate(True, True)
+ for i in [0, 1, 2, 3]:
+ assert not ctx["body"][0][i]["is_visible"]
+ assert not ctx["body"][1][i]["is_visible"]
+
+ # hide first row only
+ ctx = df.style.hide_index(("a", 0))._translate(True, True)
+ for i in [0, 1, 2, 3]:
+ assert not ctx["body"][0][i]["is_visible"]
+ assert ctx["body"][1][i]["is_visible"]
+
def test_pipe(self):
def set_caption_from_template(styler, a, b):
return styler.set_caption(f"Dataframe with a = {a} and b = {b}")
| This closes #41158 (which is an alternative PR for similar functionality)
Currently `hide_index()` and `hide_columns(subset)` have similar signatures but **different** operations. One hides the index whilst showing the data-rows, and the other hides data-columns whilst showing the column-headers row.
### This PR
- adds the `subset` keyword to give: `hide_index(subset=None)`.
- sets the default to `hide_columns(subset=None)`.
When `subset` is None the function now operates to hide the entire index, or entire column headers row respectively.
When `subset` is not None it operates to selectively hide the given rows or columns respectively, keeping the index or column headers row.
<s>We also add the `show` keyword to allow the method to operate inversely.</s>
@jreback I think you will prefer this over the `hide_values` and `hide_headers` methods suggested in #41158 since this re-uses the existing methods with minimal changes while making their behaviour consistent. It is also fully backwards compatible.
| https://api.github.com/repos/pandas-dev/pandas/pulls/41266 | 2021-05-02T16:02:20Z | 2021-06-16T00:22:53Z | 2021-06-16T00:22:53Z | 2021-07-27T18:03:50Z |
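The `subset=None` dual dispatch this PR describes can be modelled with a small pure-Python sketch (class and attribute names are illustrative, loosely mirroring `hide_index_` and `hidden_rows`, not the actual `Styler` internals):

```python
class HideModel:
    """Toy model of the dual hide_index dispatch: no subset hides the
    whole index header; a subset hides specific data rows instead."""

    def __init__(self, index):
        self.index = list(index)
        self.hide_index_ = False  # whole index header hidden?
        self.hidden_rows = set()  # positions of hidden data rows

    def hide_index(self, subset=None):
        if subset is None:
            self.hide_index_ = True  # header hidden, data rows kept
        else:
            self.hidden_rows |= {self.index.index(k) for k in subset}
        return self  # chainable, like Styler methods

m = HideModel(["a", "b", "c"]).hide_index(["a", "b"])
# m.hidden_rows == {0, 1} and m.hide_index_ is False
m2 = HideModel(["a", "b"]).hide_index()
# m2.hide_index_ is True and m2.hidden_rows == set()
```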
REF: avoid unnecessary raise in DataFrameGroupBy._cython_agg_general | diff --git a/pandas/core/apply.py b/pandas/core/apply.py
index 1aa08356982d2..693b1832ed3c9 100644
--- a/pandas/core/apply.py
+++ b/pandas/core/apply.py
@@ -360,7 +360,10 @@ def agg_list_like(self) -> FrameOrSeriesUnion:
# raised directly in _aggregate_named
pass
elif "no results" in str(err):
- # raised directly in _aggregate_multiple_funcs
+ # reached in test_frame_apply.test_nuiscance_columns
+ # where the colg.aggregate(arg) ends up going through
+ # the selected_obj.ndim == 1 branch above with arg == ["sum"]
+ # on a datetime64[ns] column
pass
else:
raise
diff --git a/pandas/core/groupby/generic.py b/pandas/core/groupby/generic.py
index fd4b0cfa87950..27e07deb69e9c 100644
--- a/pandas/core/groupby/generic.py
+++ b/pandas/core/groupby/generic.py
@@ -1110,6 +1110,7 @@ def _cython_agg_general(
# Note: we never get here with how="ohlc"; that goes through SeriesGroupBy
data: Manager2D = self._get_data_to_aggregate()
+ orig = data
if numeric_only:
data = data.get_numeric_data(copy=False)
@@ -1187,7 +1188,8 @@ def array_func(values: ArrayLike) -> ArrayLike:
# continue and exclude the block
new_mgr = data.grouped_reduce(array_func, ignore_failures=True)
- if not len(new_mgr):
+ if not len(new_mgr) and len(orig):
+ # If the original Manager was already empty, no need to raise
raise DataError("No numeric types to aggregate")
return self._wrap_agged_manager(new_mgr)
| Untangling these layered try/excepts is turning out to be an exceptional PITA, so splitting it into extra-small pieces. | https://api.github.com/repos/pandas-dev/pandas/pulls/41265 | 2021-05-02T15:40:41Z | 2021-05-03T15:14:58Z | 2021-05-03T15:14:58Z | 2021-05-03T16:51:38Z |
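The guard added here, raising only when the pre-filter data was non-empty, can be sketched in plain Python (a simplified stand-in for `_cython_agg_general`, not the pandas code):

```python
class DataError(Exception):
    pass

def agg_numeric(columns):
    """Keep only all-numeric columns and sum each one. Raise DataError
    only when numeric filtering dropped everything from a non-empty
    input; an originally empty input returns an empty result instead
    (mirrors the ``if not len(new_mgr) and len(orig)`` guard)."""
    orig = columns
    numeric = [
        c for c in columns
        if all(isinstance(v, (int, float)) for v in c)
    ]
    result = [sum(c) for c in numeric]
    if not len(result) and len(orig):
        # only raise when there was something to aggregate to begin with
        raise DataError("No numeric types to aggregate")
    return result

agg_numeric([])              # [] -- empty input no longer raises
agg_numeric([[1, 2], [3]])   # [3, 3]
```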
BUG: RangeIndex.astype('category') | diff --git a/doc/source/whatsnew/v1.3.0.rst b/doc/source/whatsnew/v1.3.0.rst
index 794a7025fe218..7def8f6bdb2bc 100644
--- a/doc/source/whatsnew/v1.3.0.rst
+++ b/doc/source/whatsnew/v1.3.0.rst
@@ -780,6 +780,7 @@ Indexing
- Bug in :meth:`DatetimeIndex.insert` when inserting ``np.datetime64("NaT")`` into a timezone-aware index incorrectly treating the timezone-naive value as timezone-aware (:issue:`39769`)
- Bug in incorrectly raising in :meth:`Index.insert`, when setting a new column that cannot be held in the existing ``frame.columns``, or in :meth:`Series.reset_index` or :meth:`DataFrame.reset_index` instead of casting to a compatible dtype (:issue:`39068`)
- Bug in :meth:`RangeIndex.append` where a single object of length 1 was concatenated incorrectly (:issue:`39401`)
+- Bug in :meth:`RangeIndex.astype` where when converting to :class:`CategoricalIndex`, the categories became a :class:`Int64Index` instead of a :class:`RangeIndex` (:issue:`41263`)
- Bug in setting ``numpy.timedelta64`` values into an object-dtype :class:`Series` using a boolean indexer (:issue:`39488`)
- Bug in setting numeric values into a into a boolean-dtypes :class:`Series` using ``at`` or ``iat`` failing to cast to object-dtype (:issue:`39582`)
- Bug in :meth:`DataFrame.__setitem__` and :meth:`DataFrame.iloc.__setitem__` raising ``ValueError`` when trying to index with a row-slice and setting a list as values (:issue:`40440`)
@@ -945,6 +946,7 @@ Other
- Bug in :meth:`Series.where` with numeric dtype and ``other = None`` not casting to ``nan`` (:issue:`39761`)
- :meth:`Index.where` behavior now mirrors :meth:`Index.putmask` behavior, i.e. ``index.where(mask, other)`` matches ``index.putmask(~mask, other)`` (:issue:`39412`)
- Bug in :func:`pandas.testing.assert_series_equal`, :func:`pandas.testing.assert_frame_equal`, :func:`pandas.testing.assert_index_equal` and :func:`pandas.testing.assert_extension_array_equal` incorrectly raising when an attribute has an unrecognized NA type (:issue:`39461`)
+- Bug in :func:`pandas.testing.assert_index_equal` with ``exact=True`` not raising when comparing :class:`CategoricalIndex` instances with ``Int64Index`` and ``RangeIndex`` categories (:issue:`41263`)
- Bug in :meth:`DataFrame.equals`, :meth:`Series.equals`, :meth:`Index.equals` with object-dtype containing ``np.datetime64("NaT")`` or ``np.timedelta64("NaT")`` (:issue:`39650`)
- Bug in :func:`pandas.util.show_versions` where console JSON output was not proper JSON (:issue:`39701`)
- Bug in :meth:`DataFrame.convert_dtypes` incorrectly raised ValueError when called on an empty DataFrame (:issue:`40393`)
diff --git a/pandas/_testing/asserters.py b/pandas/_testing/asserters.py
index 912039b7571bc..2d695458e32e6 100644
--- a/pandas/_testing/asserters.py
+++ b/pandas/_testing/asserters.py
@@ -309,18 +309,22 @@ def assert_index_equal(
__tracebackhide__ = True
def _check_types(left, right, obj="Index"):
- if exact:
- assert_class_equal(left, right, exact=exact, obj=obj)
+ if not exact:
+ return
- # Skip exact dtype checking when `check_categorical` is False
- if check_categorical:
- assert_attr_equal("dtype", left, right, obj=obj)
+ assert_class_equal(left, right, exact=exact, obj=obj)
- # allow string-like to have different inferred_types
- if left.inferred_type in ("string"):
- assert right.inferred_type in ("string")
- else:
- assert_attr_equal("inferred_type", left, right, obj=obj)
+ # Skip exact dtype checking when `check_categorical` is False
+ if check_categorical:
+ assert_attr_equal("dtype", left, right, obj=obj)
+ if is_categorical_dtype(left.dtype) and is_categorical_dtype(right.dtype):
+ assert_index_equal(left.categories, right.categories, exact=exact)
+
+ # allow string-like to have different inferred_types
+ if left.inferred_type in ("string"):
+ assert right.inferred_type in ("string")
+ else:
+ assert_attr_equal("inferred_type", left, right, obj=obj)
def _get_ilevel_values(index, level):
# accept level number only
diff --git a/pandas/core/indexes/base.py b/pandas/core/indexes/base.py
index 7779335bfd3ba..6f414c91ce94c 100644
--- a/pandas/core/indexes/base.py
+++ b/pandas/core/indexes/base.py
@@ -907,9 +907,7 @@ def astype(self, dtype, copy=True):
elif is_categorical_dtype(dtype):
from pandas.core.indexes.category import CategoricalIndex
- return CategoricalIndex(
- self._values, name=self.name, dtype=dtype, copy=copy
- )
+ return CategoricalIndex(self, name=self.name, dtype=dtype, copy=copy)
elif is_extension_array_dtype(dtype):
return Index(np.asarray(self), name=self.name, dtype=dtype, copy=copy)
diff --git a/pandas/tests/indexes/categorical/test_constructors.py b/pandas/tests/indexes/categorical/test_constructors.py
index 2acf79ee0bced..35620875d5a1a 100644
--- a/pandas/tests/indexes/categorical/test_constructors.py
+++ b/pandas/tests/indexes/categorical/test_constructors.py
@@ -108,8 +108,8 @@ def test_construction_with_dtype(self):
tm.assert_index_equal(result, ci, exact=True)
# make sure indexes are handled
- expected = CategoricalIndex([0, 1, 2], categories=[0, 1, 2], ordered=True)
idx = Index(range(3))
+ expected = CategoricalIndex([0, 1, 2], categories=idx, ordered=True)
result = CategoricalIndex(idx, categories=idx, ordered=True)
tm.assert_index_equal(result, expected, exact=True)
diff --git a/pandas/tests/indexes/common.py b/pandas/tests/indexes/common.py
index 6139d8af48d98..8bbe8f9b9e0e2 100644
--- a/pandas/tests/indexes/common.py
+++ b/pandas/tests/indexes/common.py
@@ -667,20 +667,20 @@ def test_astype_category(self, copy, name, ordered, simple_index):
# standard categories
dtype = CategoricalDtype(ordered=ordered)
result = idx.astype(dtype, copy=copy)
- expected = CategoricalIndex(idx.values, name=name, ordered=ordered)
- tm.assert_index_equal(result, expected)
+ expected = CategoricalIndex(idx, name=name, ordered=ordered)
+ tm.assert_index_equal(result, expected, exact=True)
# non-standard categories
dtype = CategoricalDtype(idx.unique().tolist()[:-1], ordered)
result = idx.astype(dtype, copy=copy)
- expected = CategoricalIndex(idx.values, name=name, dtype=dtype)
- tm.assert_index_equal(result, expected)
+ expected = CategoricalIndex(idx, name=name, dtype=dtype)
+ tm.assert_index_equal(result, expected, exact=True)
if ordered is False:
# dtype='category' defaults to ordered=False, so only test once
result = idx.astype("category", copy=copy)
- expected = CategoricalIndex(idx.values, name=name)
- tm.assert_index_equal(result, expected)
+ expected = CategoricalIndex(idx, name=name)
+ tm.assert_index_equal(result, expected, exact=True)
def test_is_unique(self, simple_index):
# initialize a unique index
diff --git a/pandas/tests/util/test_assert_index_equal.py b/pandas/tests/util/test_assert_index_equal.py
index 82a3a223b442b..1778b6fb9d832 100644
--- a/pandas/tests/util/test_assert_index_equal.py
+++ b/pandas/tests/util/test_assert_index_equal.py
@@ -3,9 +3,11 @@
from pandas import (
Categorical,
+ CategoricalIndex,
Index,
MultiIndex,
NaT,
+ RangeIndex,
)
import pandas._testing as tm
@@ -199,6 +201,28 @@ def test_index_equal_category_mismatch(check_categorical):
tm.assert_index_equal(idx1, idx2, check_categorical=check_categorical)
+@pytest.mark.parametrize("exact", [False, True])
+def test_index_equal_range_categories(check_categorical, exact):
+ # GH41263
+ msg = """\
+Index are different
+
+Index classes are different
+\\[left\\]: RangeIndex\\(start=0, stop=10, step=1\\)
+\\[right\\]: Int64Index\\(\\[0, 1, 2, 3, 4, 5, 6, 7, 8, 9\\], dtype='int64'\\)"""
+
+ rcat = CategoricalIndex(RangeIndex(10))
+ icat = CategoricalIndex(list(range(10)))
+
+ if check_categorical and exact:
+ with pytest.raises(AssertionError, match=msg):
+ tm.assert_index_equal(rcat, icat, check_categorical=True, exact=True)
+ else:
+ tm.assert_index_equal(
+ rcat, icat, check_categorical=check_categorical, exact=exact
+ )
+
+
def test_assert_index_equal_mixed_dtype():
# GH#39168
idx = Index(["foo", "bar", 42])
| There's currently inconsistent behaviour when converting `RangeIndex` to `CategoricalIndex` using different methods. This fixes that.
```python
>>> ridx = pd.RangeIndex(5)
>>> pd.CategoricalIndex(ridx).categories
RangeIndex(start=0, stop=5, step=1) # both master and this PR
>>> ridx.astype("category").categories
Int64Index([0, 1, 2, 3, 4], dtype='int64') # master
RangeIndex(start=0, stop=5, step=1) # this PR
```
In general, when supplying an Index subclass to `Categorical`/`CategoricalIndex`, the new categories should be of that type (unless the supplied index is a `CategoricalIndex` itself).
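A minimal check of the values-level behaviour described above (whether `categories` comes back as a `RangeIndex` or a plain integer index depends on the pandas version, so only the values are asserted here):

```python
import pandas as pd

# Round-trip a RangeIndex through a categorical dtype; the underlying
# values are preserved regardless of which index subclass backs `categories`.
ridx = pd.RangeIndex(5)
cat = ridx.astype("category")
assert list(cat.categories) == [0, 1, 2, 3, 4]
assert list(cat) == [0, 1, 2, 3, 4]
```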
Discovered working on tests for #41153. | https://api.github.com/repos/pandas-dev/pandas/pulls/41263 | 2021-05-02T10:46:15Z | 2021-05-05T12:39:13Z | 2021-05-05T12:39:13Z | 2021-05-05T13:41:07Z |
REF: simplify libreduction | diff --git a/pandas/_libs/reduction.pyx b/pandas/_libs/reduction.pyx
index 191967585c431..d0f85b75a629e 100644
--- a/pandas/_libs/reduction.pyx
+++ b/pandas/_libs/reduction.pyx
@@ -53,24 +53,24 @@ cdef class _BaseGrouper:
return values, index
+ cdef _init_dummy_series_and_index(self, Slider islider, Slider vslider):
+ """
+ Create Series and Index objects that we will alter in-place while iterating.
+ """
+ cached_index = self.ityp(islider.buf, dtype=self.idtype)
+ cached_series = self.typ(
+ vslider.buf, dtype=vslider.buf.dtype, index=cached_index, name=self.name
+ )
+ return cached_index, cached_series
+
cdef inline _update_cached_objs(self, object cached_typ, object cached_ityp,
Slider islider, Slider vslider):
- if cached_typ is None:
- cached_ityp = self.ityp(islider.buf, dtype=self.idtype)
- cached_typ = self.typ(
- vslider.buf, dtype=vslider.buf.dtype, index=cached_ityp, name=self.name
- )
- else:
- # See the comment in indexes/base.py about _index_data.
- # We need this for EA-backed indexes that have a reference
- # to a 1-d ndarray like datetime / timedelta / period.
- object.__setattr__(cached_ityp, '_index_data', islider.buf)
- cached_ityp._engine.clear_mapping()
- cached_ityp._cache.clear() # e.g. inferred_freq must go
- cached_typ._mgr.set_values(vslider.buf)
- object.__setattr__(cached_typ, '_index', cached_ityp)
- object.__setattr__(cached_typ, 'name', self.name)
- return cached_typ, cached_ityp
+ # See the comment in indexes/base.py about _index_data.
+ # We need this for EA-backed indexes that have a reference
+ # to a 1-d ndarray like datetime / timedelta / period.
+ cached_ityp._engine.clear_mapping()
+ cached_ityp._cache.clear() # e.g. inferred_freq must go
+ cached_typ._mgr.set_values(vslider.buf)
cdef inline object _apply_to_group(self,
object cached_typ, object cached_ityp,
@@ -81,8 +81,8 @@ cdef class _BaseGrouper:
cdef:
object res
- cached_ityp._engine.clear_mapping()
- cached_ityp._cache.clear() # e.g. inferred_freq must go
+ # NB: we assume that _update_cached_objs has already cleared
+ # the cache and engine mapping
res = self.f(cached_typ)
res = extract_result(res)
if not initialized:
@@ -160,6 +160,8 @@ cdef class SeriesBinGrouper(_BaseGrouper):
result = np.empty(self.ngroups, dtype='O')
+ cached_ityp, cached_typ = self._init_dummy_series_and_index(islider, vslider)
+
start = 0
try:
for i in range(self.ngroups):
@@ -169,7 +171,7 @@ cdef class SeriesBinGrouper(_BaseGrouper):
islider.move(start, end)
vslider.move(start, end)
- cached_typ, cached_ityp = self._update_cached_objs(
+ self._update_cached_objs(
cached_typ, cached_ityp, islider, vslider)
res, initialized = self._apply_to_group(cached_typ, cached_ityp,
@@ -246,6 +248,8 @@ cdef class SeriesGrouper(_BaseGrouper):
result = np.empty(self.ngroups, dtype='O')
+ cached_ityp, cached_typ = self._init_dummy_series_and_index(islider, vslider)
+
start = 0
try:
for i in range(n):
@@ -263,7 +267,7 @@ cdef class SeriesGrouper(_BaseGrouper):
islider.move(start, end)
vslider.move(start, end)
- cached_typ, cached_ityp = self._update_cached_objs(
+ self._update_cached_objs(
cached_typ, cached_ityp, islider, vslider)
res, initialized = self._apply_to_group(cached_typ, cached_ityp,
| - [ ] closes #xxxx
- [ ] tests added / passed
- [ ] Ensure all linting tests pass, see [here](https://pandas.pydata.org/pandas-docs/dev/development/contributing.html#code-standards) for how to run them
- [ ] whatsnew entry
| https://api.github.com/repos/pandas-dev/pandas/pulls/41262 | 2021-05-02T04:19:33Z | 2021-05-02T23:10:17Z | 2021-05-02T23:10:17Z | 2021-05-03T01:19:36Z |
COMPAT: fix _bootlocale import error on py310 | diff --git a/pandas/tests/io/json/test_readlines.py b/pandas/tests/io/json/test_readlines.py
index 6731c481f8935..abc65f2f1eda1 100644
--- a/pandas/tests/io/json/test_readlines.py
+++ b/pandas/tests/io/json/test_readlines.py
@@ -192,7 +192,7 @@ def test_readjson_chunks_multiple_empty_lines(chunksize):
def test_readjson_unicode(monkeypatch):
with tm.ensure_clean("test.json") as path:
- monkeypatch.setattr("_bootlocale.getpreferredencoding", lambda l: "cp949")
+ monkeypatch.setattr("locale.getpreferredencoding", lambda l: "cp949")
with open(path, "w", encoding="utf-8") as f:
f.write('{"£©µÀÆÖÞßéöÿ":["АБВГДабвгд가"]}')
| The _bootlocale module was removed in Python 3.10.
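The stable, documented replacement used by the patch is the public `locale` API (a quick sketch; `_bootlocale` was a private CPython startup helper, so only the public function is exercised here):

```python
import locale

# locale.getpreferredencoding is the documented public API that open() and
# friends consult; _bootlocale merely cached it during interpreter startup.
enc = locale.getpreferredencoding(False)
assert isinstance(enc, str) and enc
```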
https://github.com/python/cpython/commit/b62bdf71ea0cd52041d49691d8ae3dc645bd48e1 | https://api.github.com/repos/pandas-dev/pandas/pulls/41261 | 2021-05-02T02:51:41Z | 2021-05-02T23:11:52Z | 2021-05-02T23:11:52Z | 2022-11-18T02:21:52Z |
TYP: ExtensionArray unique and repeat | diff --git a/pandas/core/arrays/base.py b/pandas/core/arrays/base.py
index 5a2643dd531ed..bd01191719143 100644
--- a/pandas/core/arrays/base.py
+++ b/pandas/core/arrays/base.py
@@ -794,7 +794,7 @@ def shift(self, periods: int = 1, fill_value: object = None) -> ExtensionArray:
b = empty
return self._concat_same_type([a, b])
- def unique(self):
+ def unique(self: ExtensionArrayT) -> ExtensionArrayT:
"""
Compute the ExtensionArray of unique values.
@@ -1023,7 +1023,7 @@ def factorize(self, na_sentinel: int = -1) -> tuple[np.ndarray, ExtensionArray]:
@Substitution(klass="ExtensionArray")
@Appender(_extension_array_shared_docs["repeat"])
- def repeat(self, repeats, axis=None):
+ def repeat(self, repeats: int | Sequence[int], axis: int | None = None):
nv.validate_repeat((), {"axis": axis})
ind = np.arange(len(self)).repeat(repeats)
return self.take(ind)
diff --git a/pandas/core/arrays/interval.py b/pandas/core/arrays/interval.py
index 50e8cc4c82e0d..95c95d98bc968 100644
--- a/pandas/core/arrays/interval.py
+++ b/pandas/core/arrays/interval.py
@@ -1518,7 +1518,11 @@ def delete(self: IntervalArrayT, loc) -> IntervalArrayT:
return self._shallow_copy(left=new_left, right=new_right)
@Appender(_extension_array_shared_docs["repeat"] % _shared_docs_kwargs)
- def repeat(self: IntervalArrayT, repeats: int, axis=None) -> IntervalArrayT:
+ def repeat(
+ self: IntervalArrayT,
+ repeats: int | Sequence[int],
+ axis: int | None = None,
+ ) -> IntervalArrayT:
nv.validate_repeat((), {"axis": axis})
left_repeat = self.left.repeat(repeats)
right_repeat = self.right.repeat(repeats)
diff --git a/pandas/core/base.py b/pandas/core/base.py
index 42f52618eb07b..e1e36d6d226c4 100644
--- a/pandas/core/base.py
+++ b/pandas/core/base.py
@@ -16,6 +16,7 @@
import pandas._libs.lib as lib
from pandas._typing import (
+ ArrayLike,
Dtype,
DtypeObj,
IndexLabel,
@@ -996,7 +997,7 @@ def unique(self):
values = self._values
if not isinstance(values, np.ndarray):
- result = values.unique()
+ result: ArrayLike = values.unique()
if self.dtype.kind in ["m", "M"] and isinstance(self, ABCSeries):
# GH#31182 Series._values returns EA, unpack for backward-compat
if getattr(self.dtype, "tz", None) is None:
| Typing fixes for `ExtensionArray.unique()` and `ExtensionArray.repeat()`
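The annotation pattern applied here can be illustrated outside pandas. This is a hedged sketch (the `Base`/`Child` names are invented for illustration) of how annotating `self` with a bound `TypeVar` lets `unique` report the caller's subclass rather than the base class:

```python
from typing import List, TypeVar

T = TypeVar("T", bound="Base")

class Base:
    def __init__(self, data: List[int]) -> None:
        self.data = data

    def unique(self: T) -> T:
        # Constructing via type(self) preserves the runtime subclass, matching
        # what the `self: T -> T` annotation promises to the type checker.
        return type(self)(sorted(set(self.data)))

class Child(Base):
    pass

result = Child([3, 1, 3, 2]).unique()
assert isinstance(result, Child)
assert result.data == [1, 2, 3]
```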
| https://api.github.com/repos/pandas-dev/pandas/pulls/41260 | 2021-05-02T01:33:00Z | 2021-05-03T12:46:14Z | 2021-05-03T12:46:14Z | 2021-05-03T13:00:03Z |
TYP: Typing for ExtensionArray.__getitem__ | diff --git a/pandas/_typing.py b/pandas/_typing.py
index 93d49497a85e0..5077e659410e3 100644
--- a/pandas/_typing.py
+++ b/pandas/_typing.py
@@ -206,10 +206,16 @@
# indexing
# PositionalIndexer -> valid 1D positional indexer, e.g. can pass
# to ndarray.__getitem__
+# ScalarIndexer is for a single value as the index
+# SequenceIndexer is for list-likes or slices (but not tuples)
+# PositionalIndexerTuple extends PositionalIndexer for 2D arrays
+# These are used in various __getitem__ overloads
# TODO: add Ellipsis, see
# https://github.com/python/typing/issues/684#issuecomment-548203158
# https://bugs.python.org/issue41810
-PositionalIndexer = Union[int, np.integer, slice, Sequence[int], np.ndarray]
-PositionalIndexer2D = Union[
- PositionalIndexer, Tuple[PositionalIndexer, PositionalIndexer]
-]
+# Using List[int] here rather than Sequence[int] to disallow tuples.
+ScalarIndexer = Union[int, np.integer]
+SequenceIndexer = Union[slice, List[int], np.ndarray]
+PositionalIndexer = Union[ScalarIndexer, SequenceIndexer]
+PositionalIndexerTuple = Tuple[PositionalIndexer, PositionalIndexer]
+PositionalIndexer2D = Union[PositionalIndexer, PositionalIndexerTuple]
diff --git a/pandas/core/arrays/_mixins.py b/pandas/core/arrays/_mixins.py
index f13f1a418c2e9..4c7ccc2f16477 100644
--- a/pandas/core/arrays/_mixins.py
+++ b/pandas/core/arrays/_mixins.py
@@ -4,9 +4,11 @@
from typing import (
TYPE_CHECKING,
Any,
+ Literal,
Sequence,
TypeVar,
cast,
+ overload,
)
import numpy as np
@@ -16,6 +18,9 @@
from pandas._typing import (
F,
PositionalIndexer2D,
+ PositionalIndexerTuple,
+ ScalarIndexer,
+ SequenceIndexer,
Shape,
npt,
type_t,
@@ -48,7 +53,6 @@
)
if TYPE_CHECKING:
- from typing import Literal
from pandas._typing import (
NumpySorter,
@@ -205,6 +209,17 @@ def __setitem__(self, key, value):
def _validate_setitem_value(self, value):
return value
+ @overload
+ def __getitem__(self, key: ScalarIndexer) -> Any:
+ ...
+
+ @overload
+ def __getitem__(
+ self: NDArrayBackedExtensionArrayT,
+ key: SequenceIndexer | PositionalIndexerTuple,
+ ) -> NDArrayBackedExtensionArrayT:
+ ...
+
def __getitem__(
self: NDArrayBackedExtensionArrayT,
key: PositionalIndexer2D,
diff --git a/pandas/core/arrays/base.py b/pandas/core/arrays/base.py
index b0b7b81d059e6..40837ccad6ac8 100644
--- a/pandas/core/arrays/base.py
+++ b/pandas/core/arrays/base.py
@@ -30,6 +30,8 @@
Dtype,
FillnaOptions,
PositionalIndexer,
+ ScalarIndexer,
+ SequenceIndexer,
Shape,
npt,
)
@@ -298,8 +300,17 @@ def _from_factorized(cls, values, original):
# ------------------------------------------------------------------------
# Must be a Sequence
# ------------------------------------------------------------------------
+ @overload
+ def __getitem__(self, item: ScalarIndexer) -> Any:
+ ...
+
+ @overload
+ def __getitem__(self: ExtensionArrayT, item: SequenceIndexer) -> ExtensionArrayT:
+ ...
- def __getitem__(self, item: PositionalIndexer) -> ExtensionArray | Any:
+ def __getitem__(
+ self: ExtensionArrayT, item: PositionalIndexer
+ ) -> ExtensionArrayT | Any:
"""
Select a subset of self.
@@ -313,6 +324,8 @@ def __getitem__(self, item: PositionalIndexer) -> ExtensionArray | Any:
* ndarray: A 1-d boolean NumPy ndarray the same length as 'self'
+ * list[int]: A list of int
+
Returns
-------
item : scalar or ExtensionArray
@@ -761,7 +774,7 @@ def fillna(
new_values = self.copy()
return new_values
- def dropna(self):
+ def dropna(self: ExtensionArrayT) -> ExtensionArrayT:
"""
Return ExtensionArray without NA values.
diff --git a/pandas/core/arrays/categorical.py b/pandas/core/arrays/categorical.py
index b8ceef3d52e41..543b018c07ea5 100644
--- a/pandas/core/arrays/categorical.py
+++ b/pandas/core/arrays/categorical.py
@@ -6,6 +6,7 @@
from shutil import get_terminal_size
from typing import (
TYPE_CHECKING,
+ Any,
Hashable,
Sequence,
TypeVar,
@@ -37,7 +38,11 @@
Dtype,
NpDtype,
Ordered,
+ PositionalIndexer2D,
+ PositionalIndexerTuple,
Scalar,
+ ScalarIndexer,
+ SequenceIndexer,
Shape,
npt,
type_t,
@@ -2017,7 +2022,18 @@ def __repr__(self) -> str:
# ------------------------------------------------------------------
- def __getitem__(self, key):
+ @overload
+ def __getitem__(self, key: ScalarIndexer) -> Any:
+ ...
+
+ @overload
+ def __getitem__(
+ self: CategoricalT,
+ key: SequenceIndexer | PositionalIndexerTuple,
+ ) -> CategoricalT:
+ ...
+
+ def __getitem__(self: CategoricalT, key: PositionalIndexer2D) -> CategoricalT | Any:
"""
Return an item.
"""
diff --git a/pandas/core/arrays/datetimelike.py b/pandas/core/arrays/datetimelike.py
index ad3120c9c27d3..63ba9fdd59fc6 100644
--- a/pandas/core/arrays/datetimelike.py
+++ b/pandas/core/arrays/datetimelike.py
@@ -49,6 +49,9 @@
DtypeObj,
NpDtype,
PositionalIndexer2D,
+ PositionalIndexerTuple,
+ ScalarIndexer,
+ SequenceIndexer,
npt,
)
from pandas.compat.numpy import function as nv
@@ -313,17 +316,33 @@ def __array__(self, dtype: NpDtype | None = None) -> np.ndarray:
return np.array(list(self), dtype=object)
return self._ndarray
+ @overload
+ def __getitem__(self, item: ScalarIndexer) -> DTScalarOrNaT:
+ ...
+
+ @overload
def __getitem__(
- self, key: PositionalIndexer2D
- ) -> DatetimeLikeArrayMixin | DTScalarOrNaT:
+ self: DatetimeLikeArrayT,
+ item: SequenceIndexer | PositionalIndexerTuple,
+ ) -> DatetimeLikeArrayT:
+ ...
+
+ def __getitem__(
+ self: DatetimeLikeArrayT, key: PositionalIndexer2D
+ ) -> DatetimeLikeArrayT | DTScalarOrNaT:
"""
This getitem defers to the underlying array, which by-definition can
only handle list-likes, slices, and integer scalars
"""
- result = super().__getitem__(key)
+ # Use cast as we know we will get back a DatetimeLikeArray or DTScalar
+ result = cast(
+ Union[DatetimeLikeArrayT, DTScalarOrNaT], super().__getitem__(key)
+ )
if lib.is_scalar(result):
return result
-
+ else:
+ # At this point we know the result is an array.
+ result = cast(DatetimeLikeArrayT, result)
result._freq = self._get_getitem_freq(key)
return result
@@ -1768,11 +1787,7 @@ def factorize(self, na_sentinel=-1, sort: bool = False):
uniques = self.copy() # TODO: copy or view?
if sort and self.freq.n < 0:
codes = codes[::-1]
- # TODO: overload __getitem__, a slice indexer returns same type as self
- # error: Incompatible types in assignment (expression has type
- # "Union[DatetimeLikeArrayMixin, Union[Any, Any]]", variable
- # has type "TimelikeOps")
- uniques = uniques[::-1] # type: ignore[assignment]
+ uniques = uniques[::-1]
return codes, uniques
# FIXME: shouldn't get here; we are ignoring sort
return super().factorize(na_sentinel=na_sentinel)
diff --git a/pandas/core/arrays/datetimes.py b/pandas/core/arrays/datetimes.py
index e82b81d55807d..823103181bb82 100644
--- a/pandas/core/arrays/datetimes.py
+++ b/pandas/core/arrays/datetimes.py
@@ -9,7 +9,6 @@
from typing import (
TYPE_CHECKING,
Literal,
- cast,
overload,
)
import warnings
@@ -478,11 +477,9 @@ def _generate_range(
index = cls._simple_new(arr, freq=None, dtype=dtype)
if not left_closed and len(index) and index[0] == start:
- # TODO: overload DatetimeLikeArrayMixin.__getitem__
- index = cast(DatetimeArray, index[1:])
+ index = index[1:]
if not right_closed and len(index) and index[-1] == end:
- # TODO: overload DatetimeLikeArrayMixin.__getitem__
- index = cast(DatetimeArray, index[:-1])
+ index = index[:-1]
dtype = tz_to_dtype(tz)
return cls._simple_new(index._ndarray, freq=freq, dtype=dtype)
diff --git a/pandas/core/arrays/interval.py b/pandas/core/arrays/interval.py
index 41998218acd7d..732bdb112b8c3 100644
--- a/pandas/core/arrays/interval.py
+++ b/pandas/core/arrays/interval.py
@@ -9,7 +9,9 @@
from typing import (
Sequence,
TypeVar,
+ Union,
cast,
+ overload,
)
import numpy as np
@@ -31,6 +33,9 @@
ArrayLike,
Dtype,
NpDtype,
+ PositionalIndexer,
+ ScalarIndexer,
+ SequenceIndexer,
)
from pandas.compat.numpy import function as nv
from pandas.util._decorators import Appender
@@ -89,6 +94,7 @@
)
IntervalArrayT = TypeVar("IntervalArrayT", bound="IntervalArray")
+IntervalOrNA = Union[Interval, float]
_interval_shared_docs: dict[str, str] = {}
@@ -635,7 +641,17 @@ def __iter__(self):
def __len__(self) -> int:
return len(self._left)
- def __getitem__(self, key):
+ @overload
+ def __getitem__(self, key: ScalarIndexer) -> IntervalOrNA:
+ ...
+
+ @overload
+ def __getitem__(self: IntervalArrayT, key: SequenceIndexer) -> IntervalArrayT:
+ ...
+
+ def __getitem__(
+ self: IntervalArrayT, key: PositionalIndexer
+ ) -> IntervalArrayT | IntervalOrNA:
key = check_array_indexer(self, key)
left = self._left[key]
right = self._right[key]
@@ -1633,10 +1649,11 @@ def _from_combined(self, combined: np.ndarray) -> IntervalArray:
return self._shallow_copy(left=new_left, right=new_right)
def unique(self) -> IntervalArray:
- # Invalid index type "Tuple[slice, int]" for "Union[ExtensionArray,
- # ndarray[Any, Any]]"; expected type "Union[int, integer[Any], slice,
- # Sequence[int], ndarray[Any, Any]]"
- nc = unique(self._combined.view("complex128")[:, 0]) # type: ignore[index]
+ # No overload variant of "__getitem__" of "ExtensionArray" matches argument
+ # type "Tuple[slice, int]"
+ nc = unique(
+ self._combined.view("complex128")[:, 0] # type: ignore[call-overload]
+ )
nc = nc[:, None]
return self._from_combined(nc)
diff --git a/pandas/core/arrays/masked.py b/pandas/core/arrays/masked.py
index cccfd58aa914d..877babe4f18e8 100644
--- a/pandas/core/arrays/masked.py
+++ b/pandas/core/arrays/masked.py
@@ -20,6 +20,8 @@
NpDtype,
PositionalIndexer,
Scalar,
+ ScalarIndexer,
+ SequenceIndexer,
npt,
type_t,
)
@@ -139,7 +141,17 @@ def __init__(self, values: np.ndarray, mask: np.ndarray, copy: bool = False):
def dtype(self) -> BaseMaskedDtype:
raise AbstractMethodError(self)
- def __getitem__(self, item: PositionalIndexer) -> BaseMaskedArray | Any:
+ @overload
+ def __getitem__(self, item: ScalarIndexer) -> Any:
+ ...
+
+ @overload
+ def __getitem__(self: BaseMaskedArrayT, item: SequenceIndexer) -> BaseMaskedArrayT:
+ ...
+
+ def __getitem__(
+ self: BaseMaskedArrayT, item: PositionalIndexer
+ ) -> BaseMaskedArrayT | Any:
if is_integer(item):
if self._mask[item]:
return self.dtype.na_value
diff --git a/pandas/core/arrays/period.py b/pandas/core/arrays/period.py
index 2db1f00e237ee..84e611659b165 100644
--- a/pandas/core/arrays/period.py
+++ b/pandas/core/arrays/period.py
@@ -6,6 +6,7 @@
TYPE_CHECKING,
Any,
Callable,
+ Literal,
Sequence,
)
@@ -76,7 +77,6 @@
import pandas.core.common as com
if TYPE_CHECKING:
- from typing import Literal
from pandas._typing import (
NumpySorter,
diff --git a/pandas/core/arrays/sparse/array.py b/pandas/core/arrays/sparse/array.py
index 6dce9b4475d1b..77142ef450487 100644
--- a/pandas/core/arrays/sparse/array.py
+++ b/pandas/core/arrays/sparse/array.py
@@ -10,8 +10,11 @@
TYPE_CHECKING,
Any,
Callable,
+ Literal,
Sequence,
TypeVar,
+ cast,
+ overload,
)
import warnings
@@ -30,7 +33,10 @@
AstypeArg,
Dtype,
NpDtype,
+ PositionalIndexer,
Scalar,
+ ScalarIndexer,
+ SequenceIndexer,
npt,
)
from pandas.compat.numpy import function as nv
@@ -81,11 +87,21 @@
import pandas.io.formats.printing as printing
+# See https://github.com/python/typing/issues/684
if TYPE_CHECKING:
- from typing import Literal
+ from enum import Enum
+
+ class ellipsis(Enum):
+ Ellipsis = "..."
+
+ Ellipsis = ellipsis.Ellipsis
from pandas._typing import NumpySorter
+else:
+ ellipsis = type(Ellipsis)
+
+
# ----------------------------------------------------------------------------
# Array
@@ -813,8 +829,21 @@ def value_counts(self, dropna: bool = True):
# --------
# Indexing
# --------
+ @overload
+ def __getitem__(self, key: ScalarIndexer) -> Any:
+ ...
+
+ @overload
+ def __getitem__(
+ self: SparseArrayT,
+ key: SequenceIndexer | tuple[int | ellipsis, ...],
+ ) -> SparseArrayT:
+ ...
- def __getitem__(self, key):
+ def __getitem__(
+ self: SparseArrayT,
+ key: PositionalIndexer | tuple[int | ellipsis, ...],
+ ) -> SparseArrayT | Any:
if isinstance(key, tuple):
if len(key) > 1:
@@ -824,6 +853,8 @@ def __getitem__(self, key):
key = key[:-1]
if len(key) > 1:
raise IndexError("too many indices for array.")
+ if key[0] is Ellipsis:
+ raise ValueError("Cannot slice with Ellipsis")
key = key[0]
if is_integer(key):
@@ -852,7 +883,8 @@ def __getitem__(self, key):
key = check_array_indexer(self, key)
if com.is_bool_indexer(key):
-
+ # mypy doesn't know we have an array here
+ key = cast(np.ndarray, key)
return self.take(np.arange(len(key), dtype=np.int32)[key])
elif hasattr(key, "__len__"):
return self.take(key)
diff --git a/pandas/core/arrays/string_arrow.py b/pandas/core/arrays/string_arrow.py
index ab8599f0f05ba..4be7f4eb0c521 100644
--- a/pandas/core/arrays/string_arrow.py
+++ b/pandas/core/arrays/string_arrow.py
@@ -6,17 +6,24 @@
TYPE_CHECKING,
Any,
Sequence,
+ Union,
cast,
+ overload,
)
import numpy as np
-from pandas._libs import lib
+from pandas._libs import (
+ lib,
+ missing as libmissing,
+)
from pandas._typing import (
Dtype,
NpDtype,
PositionalIndexer,
Scalar,
+ ScalarIndexer,
+ SequenceIndexer,
)
from pandas.compat import (
pa_version_under1p0,
@@ -77,6 +84,8 @@
if TYPE_CHECKING:
from pandas import Series
+ArrowStringScalarOrNAT = Union[str, libmissing.NAType]
+
def _chk_pyarrow_available() -> None:
if pa_version_under1p0:
@@ -260,7 +269,17 @@ def _concat_same_type(cls, to_concat) -> ArrowStringArray:
)
)
- def __getitem__(self, item: PositionalIndexer) -> Any:
+ @overload
+ def __getitem__(self, item: ScalarIndexer) -> ArrowStringScalarOrNAT:
+ ...
+
+ @overload
+ def __getitem__(self: ArrowStringArray, item: SequenceIndexer) -> ArrowStringArray:
+ ...
+
+ def __getitem__(
+ self: ArrowStringArray, item: PositionalIndexer
+ ) -> ArrowStringArray | ArrowStringScalarOrNAT:
"""Select a subset of self.
Parameters
diff --git a/pandas/core/groupby/groupby.py b/pandas/core/groupby/groupby.py
index 0e358e611f418..ec8775cf78571 100644
--- a/pandas/core/groupby/groupby.py
+++ b/pandas/core/groupby/groupby.py
@@ -2984,10 +2984,9 @@ def blk_func(values: ArrayLike) -> ArrayLike:
if real_2d and values.ndim == 1:
assert result.shape[1] == 1, result.shape
- # error: Invalid index type "Tuple[slice, int]" for
- # "Union[ExtensionArray, ndarray[Any, Any]]"; expected type
- # "Union[int, integer[Any], slice, Sequence[int], ndarray[Any, Any]]"
- result = result[:, 0] # type: ignore[index]
+ # error: No overload variant of "__getitem__" of "ExtensionArray"
+ # matches argument type "Tuple[slice, int]"
+ result = result[:, 0] # type: ignore[call-overload]
if needs_mask:
mask = mask[:, 0]
@@ -3001,11 +3000,9 @@ def blk_func(values: ArrayLike) -> ArrayLike:
if needs_2d and not real_2d:
if result.ndim == 2:
assert result.shape[1] == 1
- # error: Invalid index type "Tuple[slice, int]" for
- # "Union[ExtensionArray, Any, ndarray[Any, Any]]"; expected
- # type "Union[int, integer[Any], slice, Sequence[int],
- # ndarray[Any, Any]]"
- result = result[:, 0] # type: ignore[index]
+ # error: No overload variant of "__getitem__" of "ExtensionArray"
+ # matches argument type "Tuple[slice, int]"
+ result = result[:, 0] # type: ignore[call-overload]
return result.T
diff --git a/pandas/core/indexes/base.py b/pandas/core/indexes/base.py
index 645fab0d76a73..c73b3e99600d6 100644
--- a/pandas/core/indexes/base.py
+++ b/pandas/core/indexes/base.py
@@ -4803,11 +4803,7 @@ def __getitem__(self, key):
result = getitem(key)
if not is_scalar(result):
- # error: Argument 1 to "ndim" has incompatible type "Union[ExtensionArray,
- # Any]"; expected "Union[Union[int, float, complex, str, bytes, generic],
- # Sequence[Union[int, float, complex, str, bytes, generic]],
- # Sequence[Sequence[Any]], _SupportsArray]"
- if np.ndim(result) > 1: # type: ignore[arg-type]
+ if np.ndim(result) > 1:
deprecate_ndim_indexing(result)
return result
# NB: Using _constructor._simple_new would break if MultiIndex
@@ -5122,13 +5118,17 @@ def asof_locs(self, where: Index, mask: np.ndarray) -> npt.NDArray[np.intp]:
which correspond to the return values of the `asof` function
for every element in `where`.
"""
- locs = self._values[mask].searchsorted(where._values, side="right")
+ # error: No overload variant of "searchsorted" of "ndarray" matches argument
+ # types "Union[ExtensionArray, ndarray[Any, Any]]", "str"
+ # TODO: will be fixed when ExtensionArray.searchsorted() is fixed
+ locs = self._values[mask].searchsorted(
+ where._values, side="right" # type: ignore[call-overload]
+ )
locs = np.where(locs > 0, locs - 1, 0)
result = np.arange(len(self), dtype=np.intp)[mask].take(locs)
- # TODO: overload return type of ExtensionArray.__getitem__
- first_value = cast(Any, self._values[mask.argmax()])
+ first_value = self._values[mask.argmax()]
result[(locs == 0) & (where._values < first_value)] = -1
return result
diff --git a/pandas/core/indexes/extension.py b/pandas/core/indexes/extension.py
index 920af5a13baba..b446dfe045e62 100644
--- a/pandas/core/indexes/extension.py
+++ b/pandas/core/indexes/extension.py
@@ -6,6 +6,7 @@
from typing import (
TYPE_CHECKING,
Hashable,
+ Literal,
TypeVar,
overload,
)
@@ -47,7 +48,6 @@
from pandas.core.ops import get_op_result_name
if TYPE_CHECKING:
- from typing import Literal
from pandas._typing import (
NumpySorter,
diff --git a/pandas/core/internals/array_manager.py b/pandas/core/internals/array_manager.py
index 475bfe958ea06..f645cc81e8171 100644
--- a/pandas/core/internals/array_manager.py
+++ b/pandas/core/internals/array_manager.py
@@ -315,10 +315,9 @@ def apply_with_block(self: T, f, align_keys=None, swap_axis=True, **kwargs) -> T
if self.ndim == 2 and arr.ndim == 2:
# 2D for np.ndarray or DatetimeArray/TimedeltaArray
assert len(arr) == 1
- # error: Invalid index type "Tuple[int, slice]" for
- # "Union[ndarray, ExtensionArray]"; expected type
- # "Union[int, slice, ndarray]"
- arr = arr[0, :] # type: ignore[index]
+ # error: No overload variant of "__getitem__" of "ExtensionArray"
+ # matches argument type "Tuple[int, slice]"
+ arr = arr[0, :] # type: ignore[call-overload]
result_arrays.append(arr)
return type(self)(result_arrays, self._axes)
@@ -841,10 +840,9 @@ def iset(self, loc: int | slice | np.ndarray, value: ArrayLike):
assert value.shape[0] == len(self._axes[0])
for value_idx, mgr_idx in enumerate(indices):
- # error: Invalid index type "Tuple[slice, int]" for
- # "Union[ExtensionArray, ndarray]"; expected type
- # "Union[int, slice, ndarray]"
- value_arr = value[:, value_idx] # type: ignore[index]
+ # error: No overload variant of "__getitem__" of "ExtensionArray" matches
+ # argument type "Tuple[slice, int]"
+ value_arr = value[:, value_idx] # type: ignore[call-overload]
self.arrays[mgr_idx] = value_arr
return
@@ -864,10 +862,9 @@ def insert(self, loc: int, item: Hashable, value: ArrayLike) -> None:
value = extract_array(value, extract_numpy=True)
if value.ndim == 2:
if value.shape[0] == 1:
- # error: Invalid index type "Tuple[int, slice]" for
- # "Union[Any, ExtensionArray, ndarray]"; expected type
- # "Union[int, slice, ndarray]"
- value = value[0, :] # type: ignore[index]
+ # error: No overload variant of "__getitem__" of "ExtensionArray"
+ # matches argument type "Tuple[int, slice]"
+ value = value[0, :] # type: ignore[call-overload]
else:
raise ValueError(
f"Expected a 1D array, got an array with shape {value.shape}"
diff --git a/pandas/core/internals/concat.py b/pandas/core/internals/concat.py
index 6b41d7a26080d..b34d3590f6a71 100644
--- a/pandas/core/internals/concat.py
+++ b/pandas/core/internals/concat.py
@@ -538,10 +538,10 @@ def _concatenate_join_units(
# concatting with at least one EA means we are concatting a single column
# the non-EA values are 2D arrays with shape (1, n)
- # error: Invalid index type "Tuple[int, slice]" for
- # "Union[ExtensionArray, ndarray]"; expected type "Union[int, slice, ndarray]"
+ # error: No overload variant of "__getitem__" of "ExtensionArray" matches
+ # argument type "Tuple[int, slice]"
to_concat = [
- t if is_1d_only_ea_obj(t) else t[0, :] # type: ignore[index]
+ t if is_1d_only_ea_obj(t) else t[0, :] # type: ignore[call-overload]
for t in to_concat
]
concat_values = concat_compat(to_concat, axis=0, ea_compat_axis=True)
diff --git a/pandas/core/internals/ops.py b/pandas/core/internals/ops.py
index 5f03d6709dfa4..35caeea9b9067 100644
--- a/pandas/core/internals/ops.py
+++ b/pandas/core/internals/ops.py
@@ -106,28 +106,28 @@ def _get_same_shape_values(
# TODO(EA2D): with 2D EAs only this first clause would be needed
if not (left_ea or right_ea):
- # error: Invalid index type "Tuple[Any, slice]" for "Union[ndarray,
- # ExtensionArray]"; expected type "Union[int, slice, ndarray]"
- lvals = lvals[rblk.mgr_locs.indexer, :] # type: ignore[index]
+ # error: No overload variant of "__getitem__" of "ExtensionArray" matches
+ # argument type "Tuple[Union[ndarray, slice], slice]"
+ lvals = lvals[rblk.mgr_locs.indexer, :] # type: ignore[call-overload]
assert lvals.shape == rvals.shape, (lvals.shape, rvals.shape)
elif left_ea and right_ea:
assert lvals.shape == rvals.shape, (lvals.shape, rvals.shape)
elif right_ea:
# lvals are 2D, rvals are 1D
- # error: Invalid index type "Tuple[Any, slice]" for "Union[ndarray,
- # ExtensionArray]"; expected type "Union[int, slice, ndarray]"
- lvals = lvals[rblk.mgr_locs.indexer, :] # type: ignore[index]
+ # error: No overload variant of "__getitem__" of "ExtensionArray" matches
+ # argument type "Tuple[Union[ndarray, slice], slice]"
+ lvals = lvals[rblk.mgr_locs.indexer, :] # type: ignore[call-overload]
assert lvals.shape[0] == 1, lvals.shape
- # error: Invalid index type "Tuple[int, slice]" for "Union[Any,
- # ExtensionArray]"; expected type "Union[int, slice, ndarray]"
- lvals = lvals[0, :] # type: ignore[index]
+ # error: No overload variant of "__getitem__" of "ExtensionArray" matches
+ # argument type "Tuple[int, slice]"
+ lvals = lvals[0, :] # type: ignore[call-overload]
else:
# lvals are 1D, rvals are 2D
assert rvals.shape[0] == 1, rvals.shape
- # error: Invalid index type "Tuple[int, slice]" for "Union[ndarray,
- # ExtensionArray]"; expected type "Union[int, slice, ndarray]"
- rvals = rvals[0, :] # type: ignore[index]
+ # error: No overload variant of "__getitem__" of "ExtensionArray" matches
+ # argument type "Tuple[int, slice]"
+ rvals = rvals[0, :] # type: ignore[call-overload]
return lvals, rvals
| Changes for typing of `ExtensionArray.__getitem__()`.
Required changes to `DatetimeLikeArrayMixin` for compatibility.
| https://api.github.com/repos/pandas-dev/pandas/pulls/41258 | 2021-05-02T00:02:01Z | 2021-09-08T22:12:21Z | 2021-09-08T22:12:21Z | 2021-09-08T22:21:42Z |
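The overload pattern behind this PR can be illustrated on a toy container (names here are illustrative, not the pandas `ExtensionArray` API): typing `__getitem__` with `@overload` lets a checker know that integer keys yield scalars while slices yield arrays, which is what turns the old `Invalid index type` errors into the `call-overload` ignores seen in the diff.

```python
from __future__ import annotations

from typing import Sequence, overload


class MiniArray:
    """Toy 1D container showing the overloaded-__getitem__ typing pattern."""

    def __init__(self, data: Sequence[int]) -> None:
        self._data = list(data)

    @overload
    def __getitem__(self, key: int) -> int:
        ...

    @overload
    def __getitem__(self, key: slice) -> MiniArray:
        ...

    def __getitem__(self, key):
        # Integer keys return a scalar; slices return a new array, so a type
        # checker can narrow the result at each call site.
        if isinstance(key, slice):
            return MiniArray(self._data[key])
        return self._data[key]


arr = MiniArray([1, 2, 3])
print(arr[0], arr[1:]._data)  # 1 [2, 3]
```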
Backport PR #40994: REGR: memory_map with non-UTF8 encoding | diff --git a/doc/source/whatsnew/v1.2.5.rst b/doc/source/whatsnew/v1.2.5.rst
index 16f9284802407..60e146b2212eb 100644
--- a/doc/source/whatsnew/v1.2.5.rst
+++ b/doc/source/whatsnew/v1.2.5.rst
@@ -15,7 +15,7 @@ including other versions of pandas.
Fixed regressions
~~~~~~~~~~~~~~~~~
- Regression in :func:`concat` between two :class:`DataFrames` where one has an :class:`Index` that is all-None and the other is :class:`DatetimeIndex` incorrectly raising (:issue:`40841`)
--
+- Regression in :func:`read_csv` when using ``memory_map=True`` with an non-UTF8 encoding (:issue:`40986`)
-
.. ---------------------------------------------------------------------------
diff --git a/pandas/io/common.py b/pandas/io/common.py
index be353fefdd1ef..e6b6471294ac7 100644
--- a/pandas/io/common.py
+++ b/pandas/io/common.py
@@ -568,7 +568,12 @@ def get_handle(
# memory mapping needs to be the first step
handle, memory_map, handles = _maybe_memory_map(
- handle, memory_map, ioargs.encoding, ioargs.mode, errors
+ handle,
+ memory_map,
+ ioargs.encoding,
+ ioargs.mode,
+ errors,
+ ioargs.compression["method"] not in _compression_to_extension,
)
is_path = isinstance(handle, str)
@@ -759,7 +764,18 @@ class _MMapWrapper(abc.Iterator):
"""
- def __init__(self, f: IO):
+ def __init__(
+ self,
+ f: IO,
+ encoding: str = "utf-8",
+ errors: str = "strict",
+ decode: bool = True,
+ ):
+ self.encoding = encoding
+ self.errors = errors
+ self.decoder = codecs.getincrementaldecoder(encoding)(errors=errors)
+ self.decode = decode
+
self.attributes = {}
for attribute in ("seekable", "readable", "writeable"):
if not hasattr(f, attribute):
@@ -775,19 +791,31 @@ def __getattr__(self, name: str):
def __iter__(self) -> "_MMapWrapper":
return self
+ def read(self, size: int = -1) -> Union[str, bytes]:
+ # CSV c-engine uses read instead of iterating
+ content: bytes = self.mmap.read(size)
+ if self.decode:
+ errors = self.errors if self.errors is not None else "strict"
+ # memory mapping is applied before compression. Encoding should
+ # be applied to the de-compressed data.
+ return content.decode(self.encoding, errors=errors)
+ return content
+
def __next__(self) -> str:
newbytes = self.mmap.readline()
# readline returns bytes, not str, but Python's CSV reader
# expects str, so convert the output to str before continuing
- newline = newbytes.decode("utf-8")
+ newline = self.decoder.decode(newbytes)
# mmap doesn't raise if reading past the allocated
# data but instead returns an empty string, so raise
# if that is returned
if newline == "":
raise StopIteration
- return newline
+
+ # IncrementalDecoder seems to push newline to the next line
+ return newline.lstrip("\n")
def _maybe_memory_map(
@@ -796,6 +824,7 @@ def _maybe_memory_map(
encoding: str,
mode: str,
errors: Optional[str],
+ decode: bool,
) -> Tuple[FileOrBuffer, bool, List[Buffer]]:
"""Try to memory map file/buffer."""
handles: List[Buffer] = []
@@ -814,7 +843,10 @@ def _maybe_memory_map(
handles.append(handle)
try:
- wrapped = cast(mmap.mmap, _MMapWrapper(handle)) # type: ignore[arg-type]
+ wrapped = cast(
+ mmap.mmap,
+ _MMapWrapper(handle, encoding, errors, decode), # type: ignore[arg-type]
+ )
handle.close()
handles.remove(handle)
handles.append(wrapped)
diff --git a/pandas/io/parsers.py b/pandas/io/parsers.py
index 8ad86fd0a0dce..bbff9dfe1ddd0 100644
--- a/pandas/io/parsers.py
+++ b/pandas/io/parsers.py
@@ -1868,31 +1868,6 @@ def __init__(self, src: FilePathOrBuffer, **kwds):
assert self.handles is not None
for key in ("storage_options", "encoding", "memory_map", "compression"):
kwds.pop(key, None)
- if self.handles.is_mmap and hasattr(self.handles.handle, "mmap"):
- # pandas\io\parsers.py:1861: error: Item "IO[Any]" of
- # "Union[IO[Any], RawIOBase, BufferedIOBase, TextIOBase,
- # TextIOWrapper, mmap]" has no attribute "mmap" [union-attr]
-
- # pandas\io\parsers.py:1861: error: Item "RawIOBase" of
- # "Union[IO[Any], RawIOBase, BufferedIOBase, TextIOBase,
- # TextIOWrapper, mmap]" has no attribute "mmap" [union-attr]
-
- # pandas\io\parsers.py:1861: error: Item "BufferedIOBase" of
- # "Union[IO[Any], RawIOBase, BufferedIOBase, TextIOBase,
- # TextIOWrapper, mmap]" has no attribute "mmap" [union-attr]
-
- # pandas\io\parsers.py:1861: error: Item "TextIOBase" of
- # "Union[IO[Any], RawIOBase, BufferedIOBase, TextIOBase,
- # TextIOWrapper, mmap]" has no attribute "mmap" [union-attr]
-
- # pandas\io\parsers.py:1861: error: Item "TextIOWrapper" of
- # "Union[IO[Any], RawIOBase, BufferedIOBase, TextIOBase,
- # TextIOWrapper, mmap]" has no attribute "mmap" [union-attr]
-
- # pandas\io\parsers.py:1861: error: Item "mmap" of "Union[IO[Any],
- # RawIOBase, BufferedIOBase, TextIOBase, TextIOWrapper, mmap]" has
- # no attribute "mmap" [union-attr]
- self.handles.handle = self.handles.handle.mmap # type: ignore[union-attr]
try:
self._reader = parsers.TextReader(self.handles.handle, **kwds)
diff --git a/pandas/tests/io/parser/test_encoding.py b/pandas/tests/io/parser/test_encoding.py
index e74265da3e966..41e1964086dce 100644
--- a/pandas/tests/io/parser/test_encoding.py
+++ b/pandas/tests/io/parser/test_encoding.py
@@ -217,3 +217,20 @@ def test_parse_encoded_special_characters(encoding):
expected = DataFrame(data=[[":foo", 0], ["bar", 1], ["baz", 2]], columns=["a", "b"])
tm.assert_frame_equal(result, expected)
+
+
+@pytest.mark.parametrize("encoding", ["utf-8", None, "utf-16", "cp1255", "latin-1"])
+def test_encoding_memory_map(all_parsers, encoding):
+ # GH40986
+ parser = all_parsers
+ expected = DataFrame(
+ {
+ "name": ["Raphael", "Donatello", "Miguel Angel", "Leonardo"],
+ "mask": ["red", "purple", "orange", "blue"],
+ "weapon": ["sai", "bo staff", "nunchunk", "katana"],
+ }
+ )
+ with tm.ensure_clean() as file:
+ expected.to_csv(file, index=False, encoding=encoding)
+ df = parser.read_csv(file, encoding=encoding, memory_map=True)
+ tm.assert_frame_equal(df, expected)
| Backport PR #40994 | https://api.github.com/repos/pandas-dev/pandas/pulls/41257 | 2021-05-01T23:58:40Z | 2021-05-25T13:22:36Z | 2021-05-25T13:22:36Z | 2021-06-05T20:50:08Z |
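The core of this backport replaces a hard-coded `newbytes.decode("utf-8")` with a stateful incremental decoder. A standalone sketch of why that matters for multi-byte encodings (the helper name below is illustrative, not pandas code): an `IncrementalDecoder` buffers partial byte sequences across reads, so a character split between two chunks still decodes correctly.

```python
import codecs


def decode_stream(chunks, encoding, errors="strict"):
    """Decode byte chunks with an incremental decoder, as the patched
    _MMapWrapper does: state carries across reads, so multi-byte sequences
    split between chunks still decode correctly."""
    decoder = codecs.getincrementaldecoder(encoding)(errors=errors)
    pieces = [decoder.decode(chunk) for chunk in chunks]
    pieces.append(decoder.decode(b"", final=True))  # flush any buffered bytes
    return "".join(pieces)


# UTF-16 uses 2-byte code units; 3-byte reads split them mid-character,
# which per-chunk bytes.decode("utf-16") could not handle.
data = "name,mask\nRaphael,red\n".encode("utf-16")
chunks = [data[i : i + 3] for i in range(0, len(data), 3)]
text = decode_stream(chunks, "utf-16")
print(text)
```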
REF: avoid unnecessary casting in algorithms | diff --git a/pandas/core/algorithms.py b/pandas/core/algorithms.py
index 16ec2bb5f253c..2c4477056a112 100644
--- a/pandas/core/algorithms.py
+++ b/pandas/core/algorithms.py
@@ -37,19 +37,17 @@
from pandas.core.dtypes.cast import (
construct_1d_object_array_from_listlike,
infer_dtype_from_array,
+ sanitize_to_nanoseconds,
)
from pandas.core.dtypes.common import (
ensure_float64,
- ensure_int64,
ensure_object,
ensure_platform_int,
- ensure_uint64,
is_array_like,
is_bool_dtype,
is_categorical_dtype,
is_complex_dtype,
is_datetime64_dtype,
- is_datetime64_ns_dtype,
is_extension_array_dtype,
is_float_dtype,
is_integer,
@@ -57,11 +55,8 @@
is_list_like,
is_numeric_dtype,
is_object_dtype,
- is_period_dtype,
is_scalar,
- is_signed_integer_dtype,
is_timedelta64_dtype,
- is_unsigned_integer_dtype,
needs_i8_conversion,
pandas_dtype,
)
@@ -134,71 +129,49 @@ def _ensure_data(values: ArrayLike) -> tuple[np.ndarray, DtypeObj]:
values = extract_array(values, extract_numpy=True)
# we check some simple dtypes first
- if is_object_dtype(values):
+ if is_object_dtype(values.dtype):
return ensure_object(np.asarray(values)), np.dtype("object")
- try:
- if is_bool_dtype(values):
- # we are actually coercing to uint64
- # until our algos support uint8 directly (see TODO)
- return np.asarray(values).astype("uint64"), np.dtype("bool")
- elif is_signed_integer_dtype(values):
- return ensure_int64(values), values.dtype
- elif is_unsigned_integer_dtype(values):
- return ensure_uint64(values), values.dtype
- elif is_float_dtype(values):
+ elif is_bool_dtype(values.dtype):
+ if isinstance(values, np.ndarray):
+ # i.e. actually dtype == np.dtype("bool")
+ return np.asarray(values).view("uint8"), values.dtype
+ else:
+ # i.e. all-bool Categorical, BooleanArray
+ return np.asarray(values).astype("uint8", copy=False), values.dtype
+
+ elif is_integer_dtype(values.dtype):
+ return np.asarray(values), values.dtype
+
+ elif is_float_dtype(values.dtype):
+ # Note: checking `values.dtype == "float128"` raises on Windows and 32bit
+ # error: Item "ExtensionDtype" of "Union[Any, ExtensionDtype, dtype[Any]]"
+ # has no attribute "itemsize"
+ if values.dtype.itemsize in [2, 12, 16]: # type: ignore[union-attr]
+ # we dont (yet) have float128 hashtable support
return ensure_float64(values), values.dtype
- elif is_complex_dtype(values):
-
- # ignore the fact that we are casting to float
- # which discards complex parts
- with catch_warnings():
- simplefilter("ignore", np.ComplexWarning)
- values = ensure_float64(values)
- return values, np.dtype("float64")
+ return np.asarray(values), values.dtype
- except (TypeError, ValueError, OverflowError):
- # if we are trying to coerce to a dtype
- # and it is incompatible this will fall through to here
- return ensure_object(values), np.dtype("object")
+ elif is_complex_dtype(values.dtype):
+ # ignore the fact that we are casting to float
+ # which discards complex parts
+ with catch_warnings():
+ simplefilter("ignore", np.ComplexWarning)
+ values = ensure_float64(values)
+ return values, np.dtype("float64")
# datetimelike
- if needs_i8_conversion(values.dtype):
- if is_period_dtype(values.dtype):
- from pandas import PeriodIndex
-
- values = PeriodIndex(values)._data
- elif is_timedelta64_dtype(values.dtype):
- from pandas import TimedeltaIndex
-
- values = TimedeltaIndex(values)._data
- else:
- # Datetime
- if values.ndim > 1 and is_datetime64_ns_dtype(values.dtype):
- # Avoid calling the DatetimeIndex constructor as it is 1D only
- # Note: this is reached by DataFrame.rank calls GH#27027
- # TODO(EA2D): special case not needed with 2D EAs
- asi8 = values.view("i8")
- dtype = values.dtype
- # error: Incompatible return value type (got "Tuple[Any,
- # Union[dtype, ExtensionDtype, None]]", expected
- # "Tuple[ndarray, Union[dtype, ExtensionDtype]]")
- return asi8, dtype # type: ignore[return-value]
-
- from pandas import DatetimeIndex
-
- values = DatetimeIndex(values)._data
- dtype = values.dtype
- return values.asi8, dtype
+ elif needs_i8_conversion(values.dtype):
+ if isinstance(values, np.ndarray):
+ values = sanitize_to_nanoseconds(values)
+ npvalues = values.view("i8")
+ npvalues = cast(np.ndarray, npvalues)
+ return npvalues, values.dtype
elif is_categorical_dtype(values.dtype):
values = cast("Categorical", values)
values = values.codes
dtype = pandas_dtype("category")
-
- # we are actually coercing to int64
- # until our algos support int* directly (not all do)
- values = ensure_int64(values)
return values, dtype
# we have failed, return object
@@ -268,8 +241,15 @@ def _ensure_arraylike(values) -> ArrayLike:
_hashtables = {
"float64": htable.Float64HashTable,
+ "float32": htable.Float32HashTable,
"uint64": htable.UInt64HashTable,
+ "uint32": htable.UInt32HashTable,
+ "uint16": htable.UInt16HashTable,
+ "uint8": htable.UInt8HashTable,
"int64": htable.Int64HashTable,
+ "int32": htable.Int32HashTable,
+ "int16": htable.Int16HashTable,
+ "int8": htable.Int8HashTable,
"string": htable.StringHashTable,
"object": htable.PyObjectHashTable,
}
@@ -298,6 +278,10 @@ def _get_values_for_rank(values: ArrayLike) -> np.ndarray:
values = cast("Categorical", values)._values_for_rank()
values, _ = _ensure_data(values)
+ if values.dtype.kind in ["i", "u", "f"]:
+ # rank_t includes only object, int64, uint64, float64
+ dtype = values.dtype.kind + "8"
+ values = values.astype(dtype, copy=False)
return values
diff --git a/pandas/core/arrays/sparse/array.py b/pandas/core/arrays/sparse/array.py
index 091efa68c67da..4847372f18239 100644
--- a/pandas/core/arrays/sparse/array.py
+++ b/pandas/core/arrays/sparse/array.py
@@ -550,7 +550,7 @@ def _from_factorized(cls, values, original):
# Data
# ------------------------------------------------------------------------
@property
- def sp_index(self):
+ def sp_index(self) -> SparseIndex:
"""
The SparseIndex containing the location of non- ``fill_value`` points.
"""
@@ -570,7 +570,7 @@ def sp_values(self) -> np.ndarray:
return self._sparse_values
@property
- def dtype(self):
+ def dtype(self) -> SparseDtype:
return self._dtype
@property
@@ -597,7 +597,7 @@ def kind(self) -> str:
return "block"
@property
- def _valid_sp_values(self):
+ def _valid_sp_values(self) -> np.ndarray:
sp_vals = self.sp_values
mask = notna(sp_vals)
return sp_vals[mask]
@@ -620,7 +620,7 @@ def nbytes(self) -> int:
return self.sp_values.nbytes + self.sp_index.nbytes
@property
- def density(self):
+ def density(self) -> float:
"""
The percent of non- ``fill_value`` points, as decimal.
diff --git a/pandas/tests/test_algos.py b/pandas/tests/test_algos.py
index c7af104f62770..964dd9bdd0e0a 100644
--- a/pandas/tests/test_algos.py
+++ b/pandas/tests/test_algos.py
@@ -1756,14 +1756,15 @@ def _check(arr):
_check(np.array([np.nan, np.nan, 5.0, 5.0, 5.0, np.nan, 1, 2, 3, np.nan]))
_check(np.array([4.0, np.nan, 5.0, 5.0, 5.0, np.nan, 1, 2, 4.0, np.nan]))
- def test_basic(self, writable):
+ @pytest.mark.parametrize("dtype", np.typecodes["AllInteger"])
+ def test_basic(self, writable, dtype):
exp = np.array([1, 2], dtype=np.float64)
- for dtype in np.typecodes["AllInteger"]:
- data = np.array([1, 100], dtype=dtype)
- data.setflags(write=writable)
- s = Series(data)
- tm.assert_numpy_array_equal(algos.rank(s), exp)
+ data = np.array([1, 100], dtype=dtype)
+ data.setflags(write=writable)
+ ser = Series(data)
+ result = algos.rank(ser)
+ tm.assert_numpy_array_equal(result, exp)
def test_uint64_overflow(self):
exp = np.array([1, 2], dtype=np.float64)
| The sparse annotations are unrelated; they were figured out in the process of getting this working. | https://api.github.com/repos/pandas-dev/pandas/pulls/41256 | 2021-05-01T23:39:46Z | 2021-05-02T23:16:56Z | 2021-05-02T23:16:56Z | 2021-05-03T01:12:41Z |
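A minimal numpy sketch of the zero-copy idea this refactor relies on: instead of widening casts that copy (e.g. bool to uint64 via `astype`), `_ensure_data` can reinterpret the existing buffer as a dtype the hashtables now support directly.

```python
import numpy as np

# bool -> uint8: reinterpret the same buffer, no copy is made.
bools = np.array([True, False, True])
as_uint8 = bools.view("uint8")
assert as_uint8.base is bools  # shares memory with the original array

# Datetimelike values become their int64 nanosecond representation the same way.
stamps = np.array(["2021-05-01", "2021-05-02"], dtype="datetime64[ns]")
as_i8 = stamps.view("i8")
print(as_uint8.tolist(), as_i8.dtype)  # [1, 0, 1] int64
```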
TYP: Simple type fixes for ExtensionArray | diff --git a/pandas/_typing.py b/pandas/_typing.py
index a58dc0dba1bf1..1e1fffdd60676 100644
--- a/pandas/_typing.py
+++ b/pandas/_typing.py
@@ -37,6 +37,7 @@
# https://mypy.readthedocs.io/en/latest/common_issues.html#import-cycles
if TYPE_CHECKING:
from typing import (
+ Literal,
TypedDict,
final,
)
@@ -189,6 +190,12 @@
str, int, Sequence[Union[str, int]], Mapping[Hashable, Union[str, int]]
]
+# Arguments for fillna()
+if TYPE_CHECKING:
+ FillnaOptions = Literal["backfill", "bfill", "ffill", "pad"]
+else:
+ FillnaOptions = str
+
# internals
Manager = Union[
"ArrayManager", "SingleArrayManager", "BlockManager", "SingleBlockManager"
diff --git a/pandas/core/arrays/base.py b/pandas/core/arrays/base.py
index bd01191719143..16d42e4a51927 100644
--- a/pandas/core/arrays/base.py
+++ b/pandas/core/arrays/base.py
@@ -13,6 +13,7 @@
TYPE_CHECKING,
Any,
Callable,
+ Iterator,
Sequence,
TypeVar,
cast,
@@ -24,6 +25,7 @@
from pandas._typing import (
ArrayLike,
Dtype,
+ FillnaOptions,
PositionalIndexer,
Shape,
)
@@ -69,6 +71,7 @@
)
if TYPE_CHECKING:
+ from typing import Literal
class ExtensionArraySupportsAnyAll("ExtensionArray"):
def any(self, *, skipna: bool = True) -> bool:
@@ -375,7 +378,7 @@ def __len__(self) -> int:
"""
raise AbstractMethodError(self)
- def __iter__(self):
+ def __iter__(self) -> Iterator[Any]:
"""
Iterate over elements of the array.
"""
@@ -385,7 +388,7 @@ def __iter__(self):
for i in range(len(self)):
yield self[i]
- def __contains__(self, item) -> bool | np.bool_:
+ def __contains__(self, item: object) -> bool | np.bool_:
"""
Return for `item in self`.
"""
@@ -400,7 +403,9 @@ def __contains__(self, item) -> bool | np.bool_:
else:
return False
else:
- return (item == self).any()
+ # error: Item "ExtensionArray" of "Union[ExtensionArray, ndarray]" has no
+ # attribute "any"
+ return (item == self).any() # type: ignore[union-attr]
# error: Signature of "__eq__" incompatible with supertype "object"
def __eq__(self, other: Any) -> ArrayLike: # type: ignore[override]
@@ -680,7 +685,12 @@ def argmax(self, skipna: bool = True) -> int:
raise NotImplementedError
return nargminmax(self, "argmax")
- def fillna(self, value=None, method=None, limit=None):
+ def fillna(
+ self,
+ value: object | ArrayLike | None = None,
+ method: FillnaOptions | None = None,
+ limit: int | None = None,
+ ):
"""
Fill NA/NaN values using the specified method.
@@ -1207,7 +1217,7 @@ def _formatter(self, boxed: bool = False) -> Callable[[Any], str | None]:
# Reshaping
# ------------------------------------------------------------------------
- def transpose(self, *axes) -> ExtensionArray:
+ def transpose(self, *axes: int) -> ExtensionArray:
"""
Return a transposed view on this array.
@@ -1220,7 +1230,7 @@ def transpose(self, *axes) -> ExtensionArray:
def T(self) -> ExtensionArray:
return self.transpose()
- def ravel(self, order="C") -> ExtensionArray:
+ def ravel(self, order: Literal["C", "F", "A", "K"] | None = "C") -> ExtensionArray:
"""
Return a flattened view on this array.
@@ -1294,7 +1304,7 @@ def _reduce(self, name: str, *, skipna: bool = True, **kwargs):
"""
raise TypeError(f"cannot perform {name} with type {self.dtype}")
- def __hash__(self):
+ def __hash__(self) -> int:
raise TypeError(f"unhashable type: {repr(type(self).__name__)}")
# ------------------------------------------------------------------------
diff --git a/pandas/core/frame.py b/pandas/core/frame.py
index 50837e1b3ed50..a508b3db2f038 100644
--- a/pandas/core/frame.py
+++ b/pandas/core/frame.py
@@ -54,6 +54,7 @@
CompressionOptions,
Dtype,
FilePathOrBuffer,
+ FillnaOptions,
FloatFormatType,
FormattersType,
FrameOrSeriesUnion,
@@ -5015,7 +5016,7 @@ def rename(
def fillna(
self,
value=...,
- method: str | None = ...,
+ method: FillnaOptions | None = ...,
axis: Axis | None = ...,
inplace: Literal[False] = ...,
limit=...,
@@ -5027,7 +5028,7 @@ def fillna(
def fillna(
self,
value,
- method: str | None,
+ method: FillnaOptions | None,
axis: Axis | None,
inplace: Literal[True],
limit=...,
@@ -5060,7 +5061,7 @@ def fillna(
def fillna(
self,
*,
- method: str | None,
+ method: FillnaOptions | None,
inplace: Literal[True],
limit=...,
downcast=...,
@@ -5082,7 +5083,7 @@ def fillna(
def fillna(
self,
*,
- method: str | None,
+ method: FillnaOptions | None,
axis: Axis | None,
inplace: Literal[True],
limit=...,
@@ -5106,7 +5107,7 @@ def fillna(
def fillna(
self,
value,
- method: str | None,
+ method: FillnaOptions | None,
*,
inplace: Literal[True],
limit=...,
@@ -5118,7 +5119,7 @@ def fillna(
def fillna(
self,
value=...,
- method: str | None = ...,
+ method: FillnaOptions | None = ...,
axis: Axis | None = ...,
inplace: bool = ...,
limit=...,
@@ -5129,8 +5130,8 @@ def fillna(
@doc(NDFrame.fillna, **_shared_doc_kwargs)
def fillna(
self,
- value=None,
- method: str | None = None,
+ value: object | ArrayLike | None = None,
+ method: FillnaOptions | None = None,
axis: Axis | None = None,
inplace: bool = False,
limit=None,
diff --git a/pandas/core/series.py b/pandas/core/series.py
index 240f678960969..c8e9898f9462a 100644
--- a/pandas/core/series.py
+++ b/pandas/core/series.py
@@ -38,6 +38,7 @@
Axis,
Dtype,
DtypeObj,
+ FillnaOptions,
FrameOrSeriesUnion,
IndexKeyFunc,
NpDtype,
@@ -4594,7 +4595,7 @@ def drop(
def fillna(
self,
value=...,
- method: str | None = ...,
+ method: FillnaOptions | None = ...,
axis: Axis | None = ...,
inplace: Literal[False] = ...,
limit=...,
@@ -4606,7 +4607,7 @@ def fillna(
def fillna(
self,
value,
- method: str | None,
+ method: FillnaOptions | None,
axis: Axis | None,
inplace: Literal[True],
limit=...,
@@ -4639,7 +4640,7 @@ def fillna(
def fillna(
self,
*,
- method: str | None,
+ method: FillnaOptions | None,
inplace: Literal[True],
limit=...,
downcast=...,
@@ -4661,7 +4662,7 @@ def fillna(
def fillna(
self,
*,
- method: str | None,
+ method: FillnaOptions | None,
axis: Axis | None,
inplace: Literal[True],
limit=...,
@@ -4685,7 +4686,7 @@ def fillna(
def fillna(
self,
value,
- method: str | None,
+ method: FillnaOptions | None,
*,
inplace: Literal[True],
limit=...,
@@ -4697,7 +4698,7 @@ def fillna(
def fillna(
self,
value=...,
- method: str | None = ...,
+ method: FillnaOptions | None = ...,
axis: Axis | None = ...,
inplace: bool = ...,
limit=...,
@@ -4709,8 +4710,8 @@ def fillna(
@doc(NDFrame.fillna, **_shared_doc_kwargs) # type: ignore[has-type]
def fillna(
self,
- value=None,
- method=None,
+ value: object | ArrayLike | None = None,
+ method: FillnaOptions | None = None,
axis=None,
inplace=False,
limit=None,
| Typing changes for the following methods in `ExtensionArray`
- `__iter__`
- `__contains__`
- `fillna`
- `transpose`
- `ravel`
- `__hash__` | https://api.github.com/repos/pandas-dev/pandas/pulls/41255 | 2021-05-01T23:16:21Z | 2021-05-06T01:39:35Z | 2021-05-06T01:39:35Z | 2021-05-06T02:19:02Z |
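The `FillnaOptions` alias added in this PR constrains `method` to a fixed set of strings via `typing.Literal`. A hedged sketch of the pattern (the `fillna` helper below is a toy stand-in, not the pandas implementation):

```python
from __future__ import annotations

from typing import Literal, Optional

# Illustrative mirror of the FillnaOptions alias this PR adds to pandas._typing.
FillMethod = Literal["backfill", "bfill", "ffill", "pad"]


def fillna(
    values: list[Optional[float]], method: FillMethod
) -> list[Optional[float]]:
    """Propagate the last (ffill/pad) or next (bfill/backfill) valid value."""
    out = list(values)
    forward = method in ("ffill", "pad")
    order = range(len(out)) if forward else range(len(out) - 1, -1, -1)
    last: Optional[float] = None
    for i in order:
        if out[i] is None:
            out[i] = last
        else:
            last = out[i]
    return out


print(fillna([1.0, None, 3.0, None], "ffill"))  # [1.0, 1.0, 3.0, 3.0]
# A checker such as mypy would reject fillna([...], "nearest"): not a valid Literal.
```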
CLN: Move stuff in tests.indexes.test_numeric.py to more logical locations | diff --git a/pandas/tests/indexes/common.py b/pandas/tests/indexes/common.py
index 6139d8af48d98..45c9dda1d1b38 100644
--- a/pandas/tests/indexes/common.py
+++ b/pandas/tests/indexes/common.py
@@ -1,3 +1,4 @@
+from datetime import datetime
import gc
from typing import Type
@@ -5,6 +6,7 @@
import pytest
from pandas._libs import iNaT
+from pandas._libs.tslibs import Timestamp
from pandas.core.dtypes.common import is_datetime64tz_dtype
from pandas.core.dtypes.dtypes import CategoricalDtype
@@ -13,6 +15,7 @@
from pandas import (
CategoricalIndex,
DatetimeIndex,
+ Float64Index,
Index,
Int64Index,
IntervalIndex,
@@ -29,7 +32,9 @@
class Base:
- """ base class for index sub-class tests """
+ """
+ Base class for index sub-class tests.
+ """
_index_cls: Type[Index]
@@ -738,3 +743,91 @@ def test_shallow_copy_shares_cache(self, simple_index):
shallow_copy = idx._shallow_copy(idx._data)
assert shallow_copy._cache is not idx._cache
assert shallow_copy._cache == {}
+
+ def test_index_groupby(self, simple_index):
+ idx = simple_index[:5]
+ to_groupby = np.array([1, 2, np.nan, 2, 1])
+ tm.assert_dict_equal(
+ idx.groupby(to_groupby), {1.0: idx[[0, 4]], 2.0: idx[[1, 3]]}
+ )
+
+ to_groupby = DatetimeIndex(
+ [
+ datetime(2011, 11, 1),
+ datetime(2011, 12, 1),
+ pd.NaT,
+ datetime(2011, 12, 1),
+ datetime(2011, 11, 1),
+ ],
+ tz="UTC",
+ ).values
+
+ ex_keys = [Timestamp("2011-11-01"), Timestamp("2011-12-01")]
+ expected = {ex_keys[0]: idx[[0, 4]], ex_keys[1]: idx[[1, 3]]}
+ tm.assert_dict_equal(idx.groupby(to_groupby), expected)
+
+
+class NumericBase(Base):
+ """
+ Base class for numeric index (incl. RangeIndex) sub-class tests.
+ """
+
+ def test_where(self):
+ # Tested in numeric.test_indexing
+ pass
+
+ def test_can_hold_identifiers(self, simple_index):
+ idx = simple_index
+ key = idx[0]
+ assert idx._can_hold_identifiers_and_holds_name(key) is False
+
+ def test_format(self, simple_index):
+ # GH35439
+ idx = simple_index
+ max_width = max(len(str(x)) for x in idx)
+ expected = [str(x).ljust(max_width) for x in idx]
+ assert idx.format() == expected
+
+ def test_numeric_compat(self):
+ pass # override Base method
+
+ def test_insert_na(self, nulls_fixture, simple_index):
+ # GH 18295 (test missing)
+ index = simple_index
+ na_val = nulls_fixture
+
+ if na_val is pd.NaT:
+ expected = Index([index[0], pd.NaT] + list(index[1:]), dtype=object)
+ else:
+ expected = Float64Index([index[0], np.nan] + list(index[1:]))
+
+ result = index.insert(1, na_val)
+ tm.assert_index_equal(result, expected)
+
+ def test_arithmetic_explicit_conversions(self):
+ # GH 8608
+ # add/sub are overridden explicitly for Float/Int Index
+ index_cls = self._index_cls
+ if index_cls is RangeIndex:
+ idx = RangeIndex(5)
+ else:
+ idx = index_cls(np.arange(5, dtype="int64"))
+
+ # float conversions
+ arr = np.arange(5, dtype="int64") * 3.2
+ expected = Float64Index(arr)
+ fidx = idx * 3.2
+ tm.assert_index_equal(fidx, expected)
+ fidx = 3.2 * idx
+ tm.assert_index_equal(fidx, expected)
+
+ # interops with numpy arrays
+ expected = Float64Index(arr)
+ a = np.zeros(5, dtype="float64")
+ result = fidx - a
+ tm.assert_index_equal(result, expected)
+
+ expected = Float64Index(-arr)
+ a = np.zeros(5, dtype="float64")
+ result = a - fidx
+ tm.assert_index_equal(result, expected)
diff --git a/pandas/tests/indexes/test_numeric.py b/pandas/tests/indexes/numeric/test_numeric.py
similarity index 84%
rename from pandas/tests/indexes/test_numeric.py
rename to pandas/tests/indexes/numeric/test_numeric.py
index c5dc84dac0fd2..bfe06d74570da 100644
--- a/pandas/tests/indexes/test_numeric.py
+++ b/pandas/tests/indexes/numeric/test_numeric.py
@@ -1,5 +1,3 @@
-from datetime import datetime
-
import numpy as np
import pytest
@@ -15,107 +13,10 @@
UInt64Index,
)
import pandas._testing as tm
-from pandas.tests.indexes.common import Base
-
-
-class TestArithmetic:
- @pytest.mark.parametrize(
- "klass", [Float64Index, Int64Index, UInt64Index, RangeIndex]
- )
- def test_arithmetic_explicit_conversions(self, klass):
-
- # GH 8608
- # add/sub are overridden explicitly for Float/Int Index
- if klass is RangeIndex:
- idx = RangeIndex(5)
- else:
- idx = klass(np.arange(5, dtype="int64"))
-
- # float conversions
- arr = np.arange(5, dtype="int64") * 3.2
- expected = Float64Index(arr)
- fidx = idx * 3.2
- tm.assert_index_equal(fidx, expected)
- fidx = 3.2 * idx
- tm.assert_index_equal(fidx, expected)
-
- # interops with numpy arrays
- expected = Float64Index(arr)
- a = np.zeros(5, dtype="float64")
- result = fidx - a
- tm.assert_index_equal(result, expected)
-
- expected = Float64Index(-arr)
- a = np.zeros(5, dtype="float64")
- result = a - fidx
- tm.assert_index_equal(result, expected)
-
-
-class TestNumericIndex:
- def test_index_groupby(self):
- int_idx = Index(range(6))
- float_idx = Index(np.arange(0, 0.6, 0.1))
- obj_idx = Index("A B C D E F".split())
- dt_idx = pd.date_range("2013-01-01", freq="M", periods=6)
-
- for idx in [int_idx, float_idx, obj_idx, dt_idx]:
- to_groupby = np.array([1, 2, np.nan, np.nan, 2, 1])
- tm.assert_dict_equal(
- idx.groupby(to_groupby), {1.0: idx[[0, 5]], 2.0: idx[[1, 4]]}
- )
-
- to_groupby = pd.DatetimeIndex(
- [
- datetime(2011, 11, 1),
- datetime(2011, 12, 1),
- pd.NaT,
- pd.NaT,
- datetime(2011, 12, 1),
- datetime(2011, 11, 1),
- ],
- tz="UTC",
- ).values
-
- ex_keys = [Timestamp("2011-11-01"), Timestamp("2011-12-01")]
- expected = {ex_keys[0]: idx[[0, 5]], ex_keys[1]: idx[[1, 4]]}
- tm.assert_dict_equal(idx.groupby(to_groupby), expected)
-
-
-class Numeric(Base):
- def test_where(self):
- # Tested in numeric.test_indexing
- pass
-
- def test_can_hold_identifiers(self, simple_index):
- idx = simple_index
- key = idx[0]
- assert idx._can_hold_identifiers_and_holds_name(key) is False
-
- def test_format(self, simple_index):
- # GH35439
- idx = simple_index
- max_width = max(len(str(x)) for x in idx)
- expected = [str(x).ljust(max_width) for x in idx]
- assert idx.format() == expected
-
- def test_numeric_compat(self):
- pass # override Base method
-
- def test_insert_na(self, nulls_fixture, simple_index):
- # GH 18295 (test missing)
- index = simple_index
- na_val = nulls_fixture
-
- if na_val is pd.NaT:
- expected = Index([index[0], pd.NaT] + list(index[1:]), dtype=object)
- else:
- expected = Float64Index([index[0], np.nan] + list(index[1:]))
-
- result = index.insert(1, na_val)
- tm.assert_index_equal(result, expected)
+from pandas.tests.indexes.common import NumericBase
-class TestFloat64Index(Numeric):
+class TestFloat64Index(NumericBase):
_index_cls = Float64Index
_dtype = np.float64
@@ -387,7 +288,7 @@ def test_fillna_float64(self):
tm.assert_index_equal(idx.fillna("obj"), exp)
-class NumericInt(Numeric):
+class NumericInt(NumericBase):
def test_view(self):
index_cls = self._index_cls
diff --git a/pandas/tests/indexes/ranges/test_range.py b/pandas/tests/indexes/ranges/test_range.py
index f7313f100d429..3a4aa29ea620e 100644
--- a/pandas/tests/indexes/ranges/test_range.py
+++ b/pandas/tests/indexes/ranges/test_range.py
@@ -11,7 +11,7 @@
RangeIndex,
)
import pandas._testing as tm
-from pandas.tests.indexes.test_numeric import Numeric
+from pandas.tests.indexes.common import NumericBase
# aliases to make some tests easier to read
RI = RangeIndex
@@ -20,7 +20,7 @@
OI = Index
-class TestRangeIndex(Numeric):
+class TestRangeIndex(NumericBase):
_index_cls = RangeIndex
@pytest.fixture
| This does four things to better organize the numeric index's tests:
1. Moves test method `test_index_groupby` from `tests.indexes.test_numeric.py::TestNumericIndex` to `tests.indexes.common.py::Base` and makes it use a fixture. This means this test will be used for all index types. Also deletes `TestNumericIndex`, as it's now empty.
2. Moves the `Numeric` class from `test_numeric.py` to `common.py` and renames it `NumericBase`.
3. Moves the `tests.indexes.test_numeric.py::TestArithmetic::test_arithmetic_explicit_conversions` tests to the new `common.py::NumericBase` and deletes the `test_numeric.py::TestArithmetic` class, as it's now empty.
4. Moves the file `tests.indexes.test_numeric.py` to `tests.indexes.numeric.test_numeric.py`.
| https://api.github.com/repos/pandas-dev/pandas/pulls/41254 | 2021-05-01T21:19:03Z | 2021-05-04T12:59:25Z | 2021-05-04T12:59:25Z | 2021-05-05T16:33:16Z |
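The shared-base-class pattern this PR consolidates can be sketched minimally: tests defined once on a base class run against every subclass's index type. Plain classes and a hand-rolled runner are used here for brevity; the real suite drives the same structure through pytest fixtures.

```python
class Base:
    """Shared checks live once here; each concrete subclass sets _index_cls
    and inherits the whole suite."""

    _index_cls = None

    def simple_index(self):
        return self._index_cls(range(5))

    def check_len(self):
        assert len(self.simple_index()) == 5


class NumericBase(Base):
    """Extra checks that only make sense for numeric-like index types."""

    def check_first_is_zero(self):
        assert self.simple_index()[0] == 0


class TestListIndex(NumericBase):
    _index_cls = list


class TestTupleIndex(NumericBase):
    _index_cls = tuple


for cls in (TestListIndex, TestTupleIndex):
    inst = cls()
    inst.check_len()
    inst.check_first_is_zero()
    print(cls.__name__, "ok")
```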