CLN/INT: remove Index as a sub-class of NDArray

diff --git a/doc/source/api.rst b/doc/source/api.rst
index 88aab0ced8420..9d443254ae25a 100644
--- a/doc/source/api.rst
+++ b/doc/source/api.rst
@@ -1104,6 +1104,8 @@ Modifying and Computations
Index.order
Index.reindex
Index.repeat
+ Index.take
+ Index.putmask
Index.set_names
Index.unique
Index.nunique
diff --git a/doc/source/indexing.rst b/doc/source/indexing.rst
index 39635cb0e612f..8ec61496c538a 100644
--- a/doc/source/indexing.rst
+++ b/doc/source/indexing.rst
@@ -52,6 +52,12 @@ indexing.
should be avoided. See :ref:`Returning a View versus Copy
<indexing.view_versus_copy>`
+.. warning::
+
+ In 0.15.0, ``Index`` has internally been refactored to no longer sub-class ``ndarray``,
+ but instead to sub-class ``PandasObject``, like the rest of the pandas objects. This should be
+ a transparent change with only very limited API implications (see the :ref:`Internal Refactoring <whatsnew_0150.refactoring>`).
+
See the :ref:`cookbook<cookbook.selection>` for some advanced strategies
Different Choices for Indexing (``loc``, ``iloc``, and ``ix``)
@@ -2175,7 +2181,7 @@ you can specify ``inplace=True`` to have the data change in place.
.. versionadded:: 0.15.0
-``set_names``, ``set_levels``, and ``set_labels`` also take an optional
+``set_names``, ``set_levels``, and ``set_labels`` also take an optional
``level`` argument
.. ipython:: python
diff --git a/doc/source/v0.15.0.txt b/doc/source/v0.15.0.txt
index 7623bf287bcd3..bb039b4484c7d 100644
--- a/doc/source/v0.15.0.txt
+++ b/doc/source/v0.15.0.txt
@@ -10,6 +10,7 @@ users upgrade to this version.
- Highlights include:
- The ``Categorical`` type was integrated as a first-class pandas type, see :ref:`here <whatsnew_0150.cat>`
+ - Internal refactoring of the ``Index`` class to no longer sub-class ``ndarray``, see :ref:`Internal Refactoring <whatsnew_0150.refactoring>`
- :ref:`Other Enhancements <whatsnew_0150.enhancements>`
@@ -25,6 +26,12 @@ users upgrade to this version.
- :ref:`Bug Fixes <whatsnew_0150.bug_fixes>`
+.. warning::
+
+ In 0.15.0, ``Index`` has internally been refactored to no longer sub-class ``ndarray``,
+ but instead to sub-class ``PandasObject``, like the rest of the pandas objects. This change allows very easy
+ sub-classing and creation of new index types. It should be a transparent change with only very limited API implications (see the :ref:`Internal Refactoring <whatsnew_0150.refactoring>`).
+
.. _whatsnew_0150.api:
API changes
@@ -155,6 +162,18 @@ previously results in ``Exception`` or ``TypeError`` (:issue:`7812`)
didx
didx.tz_localize(None)
+.. _whatsnew_0150.refactoring:
+
+Internal Refactoring
+~~~~~~~~~~~~~~~~~~~~
+
+In 0.15.0, ``Index`` has internally been refactored to no longer sub-class ``ndarray``,
+but instead to sub-class ``PandasObject``, like the rest of the pandas objects. This change allows very easy sub-classing and creation of new index types. It should be
+a transparent change with only very limited API implications (:issue:`5080`, :issue:`7439`, :issue:`7796`)
+
+- pickles created with pandas versions earlier than 0.15.0 must be read with ``pd.read_pickle`` rather than ``pickle.load``. See the :ref:`pickle docs <io.pickle>`
+- when plotting with a ``PeriodIndex``, the ``matplotlib`` internal axes will now be arrays of ``Period`` rather than a ``PeriodIndex`` (this is similar to how a ``DatetimeIndex`` now passes arrays of ``datetime``)
+
.. _whatsnew_0150.cat:
Categoricals in Series/DataFrame
@@ -278,7 +297,7 @@ Performance
~~~~~~~~~~~
- Performance improvements in ``DatetimeIndex.__iter__`` to allow faster iteration (:issue:`7683`)
-
+- Performance improvements in ``Period`` creation (and ``PeriodIndex`` setitem) (:issue:`5155`)
@@ -386,7 +405,7 @@ Bug Fixes
- Bug in ``GroupBy.filter()`` where fast path vs. slow path made the filter
return a non scalar value that appeared valid but wasn't (:issue:`7870`).
- Bug in ``date_range()``/``DatetimeIndex()`` when the timezone was inferred from input dates yet incorrect
- times were returned when crossing DST boundaries (:issue:`7835`, :issue:`7901`).
+ times were returned when crossing DST boundaries (:issue:`7835`, :issue:`7901`).
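The whatsnew note above says pre-0.15.0 pickles need ``pd.read_pickle`` rather than ``pickle.load``; the compat module below implements this by installing a custom ``Unpickler``. A stdlib-only sketch of that mechanism, with hypothetical stand-in class names:

```python
import io
import pickle


class OldIndex:
    """Stands in for a class whose layout changed across versions (hypothetical)."""


class NewIndex:
    """Stands in for the refactored replacement class (hypothetical)."""


class CompatUnpickler(pickle.Unpickler):
    """Redirect old pickled class references onto their new counterparts."""

    def find_class(self, module, name):
        if name == "OldIndex":
            return NewIndex
        return super().find_class(module, name)


# round-trip: an instance pickled as OldIndex comes back as NewIndex
payload = pickle.dumps(OldIndex())
obj = CompatUnpickler(io.BytesIO(payload)).load()
print(type(obj).__name__)  # NewIndex
```

The real ``pandas.compat.pickle_compat`` hooks the dispatch table directly rather than ``find_class``, but the idea is the same: intercept class resolution during unpickling.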
diff --git a/pandas/compat/pickle_compat.py b/pandas/compat/pickle_compat.py
index 03b45336833d3..e794725574119 100644
--- a/pandas/compat/pickle_compat.py
+++ b/pandas/compat/pickle_compat.py
@@ -5,29 +5,32 @@
import pandas
import copy
import pickle as pkl
-from pandas import compat
+from pandas import compat, Index
from pandas.compat import u, string_types
-from pandas.core.series import Series, TimeSeries
-from pandas.sparse.series import SparseSeries, SparseTimeSeries
-
def load_reduce(self):
stack = self.stack
args = stack.pop()
func = stack[-1]
+
if type(args[0]) is type:
n = args[0].__name__
- if n == u('DeprecatedSeries') or n == u('DeprecatedTimeSeries'):
- stack[-1] = object.__new__(Series)
- return
- elif (n == u('DeprecatedSparseSeries') or
- n == u('DeprecatedSparseTimeSeries')):
- stack[-1] = object.__new__(SparseSeries)
- return
try:
- value = func(*args)
- except:
+ stack[-1] = func(*args)
+ return
+ except Exception as e:
+
+ # if we have a deprecated function
+ # try to replace and try again
+
+ if '_reconstruct: First argument must be a sub-type of ndarray' in str(e):
+ try:
+ cls = args[0]
+ stack[-1] = object.__new__(cls)
+ return
+ except:
+ pass
# try to reencode the arguments
if getattr(self,'encoding',None) is not None:
@@ -57,6 +60,35 @@ class Unpickler(pkl.Unpickler):
Unpickler.dispatch = copy.copy(Unpickler.dispatch)
Unpickler.dispatch[pkl.REDUCE[0]] = load_reduce
+def load_newobj(self):
+ args = self.stack.pop()
+ cls = self.stack[-1]
+
+ # compat
+ if issubclass(cls, Index):
+ obj = object.__new__(cls)
+ else:
+ obj = cls.__new__(cls, *args)
+
+ self.stack[-1] = obj
+Unpickler.dispatch[pkl.NEWOBJ[0]] = load_newobj
+
+# py3 compat
+def load_newobj_ex(self):
+ kwargs = self.stack.pop()
+ args = self.stack.pop()
+ cls = self.stack.pop()
+
+ # compat
+ if issubclass(cls, Index):
+ obj = object.__new__(cls)
+ else:
+ obj = cls.__new__(cls, *args, **kwargs)
+ self.append(obj)
+try:
+ Unpickler.dispatch[pkl.NEWOBJ_EX[0]] = load_newobj_ex
+except:
+ pass
def load(fh, encoding=None, compat=False, is_verbose=False):
"""load a pickle, with a provided encoding
@@ -74,11 +106,6 @@ def load(fh, encoding=None, compat=False, is_verbose=False):
"""
try:
- if compat:
- pandas.core.series.Series = DeprecatedSeries
- pandas.core.series.TimeSeries = DeprecatedTimeSeries
- pandas.sparse.series.SparseSeries = DeprecatedSparseSeries
- pandas.sparse.series.SparseTimeSeries = DeprecatedSparseTimeSeries
fh.seek(0)
if encoding is not None:
up = Unpickler(fh, encoding=encoding)
@@ -89,25 +116,3 @@ def load(fh, encoding=None, compat=False, is_verbose=False):
return up.load()
except:
raise
- finally:
- if compat:
- pandas.core.series.Series = Series
- pandas.core.series.Series = TimeSeries
- pandas.sparse.series.SparseSeries = SparseSeries
- pandas.sparse.series.SparseTimeSeries = SparseTimeSeries
-
-
-class DeprecatedSeries(np.ndarray, Series):
- pass
-
-
-class DeprecatedTimeSeries(DeprecatedSeries):
- pass
-
-
-class DeprecatedSparseSeries(DeprecatedSeries):
- pass
-
-
-class DeprecatedSparseTimeSeries(DeprecatedSparseSeries):
- pass
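The ``load_newobj`` hook above falls back to ``object.__new__(cls)`` for ``Index`` subclasses because the refactored ``__new__`` validates its arguments, and old pickles do not carry them. A stdlib-only sketch of why that bypass is needed, using a hypothetical ``Strict`` class:

```python
import pickle


class Strict:
    """Like the refactored Index: __new__ rejects construction without data."""

    def __new__(cls, data=None):
        if data is None:
            raise TypeError("data is required")
        obj = object.__new__(cls)
        obj.data = data
        return obj

    def __reduce__(self):
        # emulate an old-style pickle that recorded no constructor arguments
        return (Strict, ())


blob = pickle.dumps(Strict([1, 2]))
try:
    restored = pickle.loads(blob)      # calls Strict() -> TypeError
except TypeError:
    restored = object.__new__(Strict)  # the compat path: bypass __new__
print(type(restored).__name__)  # Strict
```

Bypassing ``__new__`` leaves an attribute-less instance, which is fine here because pickle normally restores state separately.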
diff --git a/pandas/core/base.py b/pandas/core/base.py
index beffbfb2923db..f685edd477b8c 100644
--- a/pandas/core/base.py
+++ b/pandas/core/base.py
@@ -8,7 +8,7 @@
from pandas.core import common as com
import pandas.core.nanops as nanops
import pandas.tslib as tslib
-from pandas.util.decorators import cache_readonly
+from pandas.util.decorators import Appender, cache_readonly
class StringMixin(object):
@@ -205,6 +205,19 @@ def __unicode__(self):
quote_strings=True)
return "%s(%s, dtype='%s')" % (type(self).__name__, prepr, self.dtype)
+def _unbox(func):
+ @Appender(func.__doc__)
+ def f(self, *args, **kwargs):
+ result = func(self.values, *args, **kwargs)
+ from pandas.core.index import Index
+ if isinstance(result, (np.ndarray, com.ABCSeries, Index)) and result.ndim == 0:
+ # return NumPy type
+ return result.dtype.type(result.item())
+ else: # pragma: no cover
+ return result
+ f.__name__ = func.__name__
+ return f
+
class IndexOpsMixin(object):
""" common ops mixin to support a unified inteface / docs for Series / Index """
@@ -238,6 +251,64 @@ def _wrap_access_object(self, obj):
return obj
+ # ndarray compatibility
+ __array_priority__ = 1000
+
+ def transpose(self):
+ """ return the transpose, which is by definition self """
+ return self
+
+ T = property(transpose, doc="return the transpose, which is by definition self")
+
+ @property
+ def shape(self):
+ """ return a tuple of the shape of the underlying data """
+ return self._data.shape
+
+ @property
+ def ndim(self):
+ """ return the number of dimensions of the underlying data, by definition 1 """
+ return 1
+
+ def item(self):
+ """ return the first element of the underlying data as a python scalar """
+ return self.values.item()
+
+ @property
+ def data(self):
+ """ return the data pointer of the underlying data """
+ return self.values.data
+
+ @property
+ def itemsize(self):
+ """ return the size of the dtype of the item of the underlying data """
+ return self.values.itemsize
+
+ @property
+ def nbytes(self):
+ """ return the number of bytes in the underlying data """
+ return self.values.nbytes
+
+ @property
+ def strides(self):
+ """ return the strides of the underlying data """
+ return self.values.strides
+
+ @property
+ def size(self):
+ """ return the number of elements in the underlying data """
+ return self.values.size
+
+ @property
+ def flags(self):
+ """ return the ndarray.flags for the underlying data """
+ return self.values.flags
+
+ @property
+ def base(self):
+ """ return the base object if the memory of the underlying data is shared """
+ return self.values.base
+
def max(self):
""" The maximum value of the object """
return nanops.nanmax(self.values)
@@ -340,6 +411,20 @@ def factorize(self, sort=False, na_sentinel=-1):
from pandas.core.algorithms import factorize
return factorize(self, sort=sort, na_sentinel=na_sentinel)
+ def searchsorted(self, key, side='left'):
+ """ np.ndarray searchsorted compat """
+
+ ### FIXME in GH7447
#### needs coercion on the key (DatetimeIndex does already)

+ #### needs tests/doc-string
+ return self.values.searchsorted(key, side=side)
+
+ #----------------------------------------------------------------------
+ # unbox reductions
+
+ all = _unbox(np.ndarray.all)
+ any = _unbox(np.ndarray.any)
+
# facilitate the properties on the wrapped ops
def _field_accessor(name, docstring=None):
op_accessor = '_{0}'.format(name)
@@ -431,13 +516,17 @@ def asobject(self):
def tolist(self):
"""
- See ndarray.tolist
+ return a list of the underlying data
"""
return list(self.asobject)
def min(self, axis=None):
"""
- Overridden ndarray.min to return an object
+ return the minimum value of the Index
+
+ See also
+ --------
+ numpy.ndarray.min
"""
try:
i8 = self.asi8
@@ -456,9 +545,30 @@ def min(self, axis=None):
except ValueError:
return self._na_value
+ def argmin(self, axis=None):
+ """
+ return a ndarray of the minimum argument indexer
+
+ See also
+ --------
+ numpy.ndarray.argmin
+ """
+
+        ##### FIXME: need some tests (what to do if all NaT?)
+ i8 = self.asi8
+ if self.hasnans:
+ mask = i8 == tslib.iNaT
+ i8 = i8.copy()
+ i8[mask] = np.iinfo('int64').max
+ return i8.argmin()
+
def max(self, axis=None):
"""
- Overridden ndarray.max to return an object
+ return the maximum value of the Index
+
+ See also
+ --------
+ numpy.ndarray.max
"""
try:
i8 = self.asi8
@@ -477,6 +587,23 @@ def max(self, axis=None):
except ValueError:
return self._na_value
+ def argmax(self, axis=None):
+ """
+ return a ndarray of the maximum argument indexer
+
+ See also
+ --------
+ numpy.ndarray.argmax
+ """
+
+        #### FIXME: need some tests (what to do if all NaT?)
+ i8 = self.asi8
+ if self.hasnans:
+ mask = i8 == tslib.iNaT
+ i8 = i8.copy()
+ i8[mask] = 0
+ return i8.argmax()
+
@property
def _formatter_func(self):
"""
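The ``IndexOpsMixin`` additions above replace inherited ndarray behaviour with explicit forwarding to the wrapped data, and ``_unbox`` makes reductions return plain scalars. A list-backed, stdlib-only sketch of the same delegation pattern (hypothetical ``ArrayLike``):

```python
class ArrayLike:
    """Composition instead of ndarray subclassing: forward to wrapped values."""

    def __init__(self, values):
        self._data = list(values)

    def __len__(self):
        return len(self._data)

    @property
    def size(self):
        # forwarded, like IndexOpsMixin.size -> self.values.size
        return len(self._data)

    @property
    def ndim(self):
        # one-dimensional by definition, as in the diff
        return 1

    def all(self):
        # reductions return plain Python scalars ("unboxed")
        return all(self._data)

    def any(self):
        return any(self._data)


idx = ArrayLike([1, 2, 3])
print(len(idx), idx.size, idx.ndim, idx.all(), idx.any())  # 3 3 1 True True
```

Each property the old subclass got for free must now be written out once in the mixin, which is the bulk of this hunk.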
diff --git a/pandas/core/categorical.py b/pandas/core/categorical.py
index f9ed6c2fecc3c..c9674aea4a715 100644
--- a/pandas/core/categorical.py
+++ b/pandas/core/categorical.py
@@ -939,6 +939,6 @@ def _get_codes_for_values(values, levels):
levels = com._ensure_object(levels)
(hash_klass, vec_klass), vals = _get_data_algo(values, _hashtables)
t = hash_klass(len(levels))
- t.map_locations(levels)
+ t.map_locations(com._values_from_object(levels))
return com._ensure_platform_int(t.lookup(values))
diff --git a/pandas/core/common.py b/pandas/core/common.py
index 04c5140d6a59b..d8314977742a4 100644
--- a/pandas/core/common.py
+++ b/pandas/core/common.py
@@ -205,7 +205,7 @@ def _isnull_new(obj):
# hack (for now) because MI registers as ndarray
elif isinstance(obj, pd.MultiIndex):
raise NotImplementedError("isnull is not defined for MultiIndex")
- elif isinstance(obj, (ABCSeries, np.ndarray)):
+ elif isinstance(obj, (ABCSeries, np.ndarray, pd.Index)):
return _isnull_ndarraylike(obj)
elif isinstance(obj, ABCGeneric):
return obj._constructor(obj._data.isnull(func=isnull))
@@ -231,7 +231,7 @@ def _isnull_old(obj):
# hack (for now) because MI registers as ndarray
elif isinstance(obj, pd.MultiIndex):
raise NotImplementedError("isnull is not defined for MultiIndex")
- elif isinstance(obj, (ABCSeries, np.ndarray)):
+ elif isinstance(obj, (ABCSeries, np.ndarray, pd.Index)):
return _isnull_ndarraylike_old(obj)
elif isinstance(obj, ABCGeneric):
return obj._constructor(obj._data.isnull(func=_isnull_old))
@@ -2024,8 +2024,7 @@ def _is_bool_indexer(key):
def _default_index(n):
from pandas.core.index import Int64Index
values = np.arange(n, dtype=np.int64)
- result = values.view(Int64Index)
- result.name = None
+ result = Int64Index(values,name=None)
result.is_unique = True
return result
diff --git a/pandas/core/format.py b/pandas/core/format.py
index be4074bdb0ae7..8f749d07296a7 100644
--- a/pandas/core/format.py
+++ b/pandas/core/format.py
@@ -1186,7 +1186,7 @@ def _helper_csv(self, writer, na_rep=None, cols=None,
if cols is None:
cols = self.columns
- has_aliases = isinstance(header, (tuple, list, np.ndarray))
+ has_aliases = isinstance(header, (tuple, list, np.ndarray, Index))
if has_aliases or header:
if index:
# should write something for index label
@@ -1205,7 +1205,7 @@ def _helper_csv(self, writer, na_rep=None, cols=None,
else:
index_label = [index_label]
elif not isinstance(index_label,
- (list, tuple, np.ndarray)):
+ (list, tuple, np.ndarray, Index)):
# given a string for a DF with Index
index_label = [index_label]
@@ -1327,7 +1327,7 @@ def _save_header(self):
header = self.header
encoded_labels = []
- has_aliases = isinstance(header, (tuple, list, np.ndarray))
+ has_aliases = isinstance(header, (tuple, list, np.ndarray, Index))
if not (has_aliases or self.header):
return
if has_aliases:
@@ -1355,7 +1355,7 @@ def _save_header(self):
index_label = ['']
else:
index_label = [index_label]
- elif not isinstance(index_label, (list, tuple, np.ndarray)):
+ elif not isinstance(index_label, (list, tuple, np.ndarray, Index)):
# given a string for a DF with Index
index_label = [index_label]
@@ -1520,7 +1520,7 @@ def _format_value(self, val):
return val
def _format_header_mi(self):
- has_aliases = isinstance(self.header, (tuple, list, np.ndarray))
+ has_aliases = isinstance(self.header, (tuple, list, np.ndarray, Index))
if not(has_aliases or self.header):
return
@@ -1566,7 +1566,7 @@ def _format_header_mi(self):
self.rowcounter = lnum
def _format_header_regular(self):
- has_aliases = isinstance(self.header, (tuple, list, np.ndarray))
+ has_aliases = isinstance(self.header, (tuple, list, np.ndarray, Index))
if has_aliases or self.header:
coloffset = 0
@@ -1611,7 +1611,7 @@ def _format_body(self):
return self._format_regular_rows()
def _format_regular_rows(self):
- has_aliases = isinstance(self.header, (tuple, list, np.ndarray))
+ has_aliases = isinstance(self.header, (tuple, list, np.ndarray, Index))
if has_aliases or self.header:
self.rowcounter += 1
@@ -1621,7 +1621,7 @@ def _format_regular_rows(self):
# chek aliases
# if list only take first as this is not a MultiIndex
if self.index_label and isinstance(self.index_label,
- (list, tuple, np.ndarray)):
+ (list, tuple, np.ndarray, Index)):
index_label = self.index_label[0]
# if string good to go
elif self.index_label and isinstance(self.index_label, str):
@@ -1661,7 +1661,7 @@ def _format_regular_rows(self):
yield ExcelCell(self.rowcounter + i, colidx + coloffset, val)
def _format_hierarchical_rows(self):
- has_aliases = isinstance(self.header, (tuple, list, np.ndarray))
+ has_aliases = isinstance(self.header, (tuple, list, np.ndarray, Index))
if has_aliases or self.header:
self.rowcounter += 1
@@ -1671,7 +1671,7 @@ def _format_hierarchical_rows(self):
index_labels = self.df.index.names
# check for aliases
if self.index_label and isinstance(self.index_label,
- (list, tuple, np.ndarray)):
+ (list, tuple, np.ndarray, Index)):
index_labels = self.index_label
# if index labels are not empty go ahead and dump
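Every ``has_aliases`` check in the format.py hunks grows an explicit ``Index`` entry: once ``Index`` stops inheriting from ``ndarray``, ``isinstance(idx, np.ndarray)`` is ``False`` and no longer covers it. A stdlib sketch of the breakage, using hypothetical stand-in classes:

```python
class NDArray:
    """Stands in for numpy.ndarray (hypothetical)."""


class OldIndex(NDArray):
    """Pre-0.15.0: Index IS-A ndarray, so ndarray checks matched it."""


class NewIndex:
    """Post-refactor: composition only, no ndarray base class."""


def has_aliases(header):
    # like format.py, the isinstance tuple must now name Index explicitly
    return isinstance(header, (tuple, list, NDArray, NewIndex))


print(isinstance(OldIndex(), NDArray))  # True: old checks matched for free
print(isinstance(NewIndex(), NDArray))  # False: why the tuples grew
print(has_aliases(NewIndex()))          # True
```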
diff --git a/pandas/core/frame.py b/pandas/core/frame.py
index 516d87bb25f5d..3979ae76f14c3 100644
--- a/pandas/core/frame.py
+++ b/pandas/core/frame.py
@@ -220,7 +220,7 @@ def __init__(self, data=None, index=None, columns=None, dtype=None,
mgr = self._init_ndarray(data, index, columns, dtype=dtype,
copy=copy)
- elif isinstance(data, (np.ndarray, Series)):
+ elif isinstance(data, (np.ndarray, Series, Index)):
if data.dtype.names:
data_columns = list(data.dtype.names)
data = dict((k, data[k]) for k in data_columns)
@@ -593,7 +593,7 @@ def dot(self, other):
columns=other.columns)
elif isinstance(other, Series):
return Series(np.dot(lvals, rvals), index=left.index)
- elif isinstance(rvals, np.ndarray):
+ elif isinstance(rvals, (np.ndarray, Index)):
result = np.dot(lvals, rvals)
if result.ndim == 2:
return self._constructor(result, index=left.index)
@@ -1668,7 +1668,7 @@ def __getitem__(self, key):
if indexer is not None:
return self._getitem_slice(indexer)
- if isinstance(key, (Series, np.ndarray, list)):
+ if isinstance(key, (Series, np.ndarray, Index, list)):
# either boolean or fancy integer index
return self._getitem_array(key)
elif isinstance(key, DataFrame):
@@ -1719,7 +1719,7 @@ def _getitem_array(self, key):
def _getitem_multilevel(self, key):
loc = self.columns.get_loc(key)
- if isinstance(loc, (slice, Series, np.ndarray)):
+ if isinstance(loc, (slice, Series, np.ndarray, Index)):
new_columns = self.columns[loc]
result_columns = _maybe_droplevels(new_columns, key)
if self._is_mixed_type:
@@ -1999,7 +1999,7 @@ def __setitem__(self, key, value):
if indexer is not None:
return self._setitem_slice(indexer, value)
- if isinstance(key, (Series, np.ndarray, list)):
+ if isinstance(key, (Series, np.ndarray, list, Index)):
self._setitem_array(key, value)
elif isinstance(key, DataFrame):
self._setitem_frame(key, value)
@@ -2371,7 +2371,7 @@ def set_index(self, keys, drop=True, append=False, inplace=False,
elif isinstance(col, Index):
level = col
names.append(col.name)
- elif isinstance(col, (list, np.ndarray)):
+ elif isinstance(col, (list, np.ndarray, Index)):
level = col
names.append(None)
else:
@@ -2436,7 +2436,7 @@ def reset_index(self, level=None, drop=False, inplace=False, col_level=0,
def _maybe_casted_values(index, labels=None):
if isinstance(index, PeriodIndex):
- values = index.asobject
+ values = index.asobject.values
elif (isinstance(index, DatetimeIndex) and
index.tz is not None):
values = index.asobject
@@ -3020,7 +3020,7 @@ def _compare_frame(self, other, func, str_rep):
def _flex_compare_frame(self, other, func, str_rep, level):
if not self._indexed_same(other):
- self, other = self.align(other, 'outer', level=level)
+ self, other = self.align(other, 'outer', level=level, copy=False)
return self._compare_frame_evaluate(other, func, str_rep)
def combine(self, other, func, fill_value=None, overwrite=True):
@@ -4622,7 +4622,7 @@ def extract_index(data):
def _prep_ndarray(values, copy=True):
- if not isinstance(values, (np.ndarray, Series)):
+ if not isinstance(values, (np.ndarray, Series, Index)):
if len(values) == 0:
return np.empty((0, 0), dtype=object)
@@ -4685,7 +4685,7 @@ def _to_arrays(data, columns, coerce_float=False, dtype=None):
return _list_of_series_to_arrays(data, columns,
coerce_float=coerce_float,
dtype=dtype)
- elif (isinstance(data, (np.ndarray, Series))
+ elif (isinstance(data, (np.ndarray, Series, Index))
and data.dtype.names is not None):
columns = list(data.dtype.names)
@@ -4865,9 +4865,9 @@ def _homogenize(data, index, dtype=None):
oindex = index.astype('O')
if type(v) == dict:
# fast cython method
- v = lib.fast_multiget(v, oindex, default=NA)
+ v = lib.fast_multiget(v, oindex.values, default=NA)
else:
- v = lib.map_infer(oindex, v.get)
+ v = lib.map_infer(oindex.values, v.get)
v = _sanitize_array(v, index, dtype=dtype, copy=False,
raise_cast_failure=False)
diff --git a/pandas/core/generic.py b/pandas/core/generic.py
index cef18c5ad3c2b..2815f05ce313b 100644
--- a/pandas/core/generic.py
+++ b/pandas/core/generic.py
@@ -1505,7 +1505,7 @@ def drop(self, labels, axis=0, level=None, inplace=False, **kwargs):
if level is not None:
if not isinstance(axis, MultiIndex):
raise AssertionError('axis must be a MultiIndex')
- indexer = ~lib.ismember(axis.get_level_values(level),
+ indexer = ~lib.ismember(axis.get_level_values(level).values,
set(labels))
else:
indexer = ~axis.isin(labels)
@@ -2135,16 +2135,14 @@ def copy(self, deep=True):
Parameters
----------
- deep : boolean, default True
+ deep : boolean or string, default True
Make a deep copy, i.e. also copy data
Returns
-------
copy : type of caller
"""
- data = self._data
- if deep:
- data = data.copy()
+ data = self._data.copy(deep=deep)
return self._constructor(data).__finalize__(self)
def convert_objects(self, convert_dates=True, convert_numeric=False,
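The ``copy`` change above pushes the ``deep`` flag down into ``self._data.copy(deep=deep)`` instead of branching in the caller. A stdlib sketch of delegating the flag, with hypothetical ``Block``/``Container`` classes:

```python
import copy


class Block:
    """Holds the actual values, like an internal pandas block (hypothetical)."""

    def __init__(self, values):
        self.values = values

    def copy(self, deep=True):
        # deep: duplicate the data; shallow: share the same values object
        return Block(copy.deepcopy(self.values) if deep else self.values)


class Container:
    """Delegates copying to its data, like NDFrame.copy in the diff."""

    def __init__(self, data):
        self._data = data

    def copy(self, deep=True):
        return Container(self._data.copy(deep=deep))


c = Container(Block([1, 2]))
print(c.copy(deep=False)._data.values is c._data.values)  # True: shared
print(c.copy(deep=True)._data.values is c._data.values)   # False: duplicated
```

Letting the data layer interpret ``deep`` keeps the outer object a thin wrapper, which is the point of the one-line change in ``generic.py``.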
diff --git a/pandas/core/groupby.py b/pandas/core/groupby.py
index 8cfa0e25b789f..212e5086ee543 100644
--- a/pandas/core/groupby.py
+++ b/pandas/core/groupby.py
@@ -24,7 +24,7 @@
from pandas.core.common import(_possibly_downcast_to_dtype, isnull,
notnull, _DATELIKE_DTYPES, is_numeric_dtype,
is_timedelta64_dtype, is_datetime64_dtype,
- is_categorical_dtype)
+ is_categorical_dtype, _values_from_object)
from pandas.core.config import option_context
from pandas import _np_version_under1p7
import pandas.lib as lib
@@ -453,7 +453,7 @@ def name(self):
@property
def _selection_list(self):
- if not isinstance(self._selection, (list, tuple, Series, np.ndarray)):
+ if not isinstance(self._selection, (list, tuple, Series, Index, np.ndarray)):
return [self._selection]
return self._selection
@@ -1254,7 +1254,7 @@ def indices(self):
return self.groupings[0].indices
else:
label_list = [ping.labels for ping in self.groupings]
- keys = [ping.group_index for ping in self.groupings]
+ keys = [_values_from_object(ping.group_index) for ping in self.groupings]
return _get_indices_dict(label_list, keys)
@property
@@ -1552,7 +1552,7 @@ def _aggregate_series_pure_python(self, obj, func):
for label, group in splitter:
res = func(group)
if result is None:
- if (isinstance(res, (Series, np.ndarray)) or
+ if (isinstance(res, (Series, Index, np.ndarray)) or
isinstance(res, list)):
raise ValueError('Function does not reduce')
result = np.empty(ngroups, dtype='O')
@@ -1894,7 +1894,7 @@ def __init__(self, index, grouper=None, obj=None, name=None, level=None,
self.name = grouper.name
# no level passed
- if not isinstance(self.grouper, (Series, np.ndarray)):
+ if not isinstance(self.grouper, (Series, Index, np.ndarray)):
self.grouper = self.index.map(self.grouper)
if not (hasattr(self.grouper, "__len__") and
len(self.grouper) == len(self.index)):
@@ -2014,7 +2014,7 @@ def _get_grouper(obj, key=None, axis=0, level=None, sort=True):
# what are we after, exactly?
match_axis_length = len(keys) == len(group_axis)
any_callable = any(callable(g) or isinstance(g, dict) for g in keys)
- any_arraylike = any(isinstance(g, (list, tuple, Series, np.ndarray))
+ any_arraylike = any(isinstance(g, (list, tuple, Series, Index, np.ndarray))
for g in keys)
try:
@@ -2080,7 +2080,7 @@ def _convert_grouper(axis, grouper):
return grouper.values
else:
return grouper.reindex(axis).values
- elif isinstance(grouper, (list, Series, np.ndarray)):
+ elif isinstance(grouper, (list, Series, Index, np.ndarray)):
if len(grouper) != len(axis):
raise AssertionError('Grouper and axis must be same length')
return grouper
@@ -2246,7 +2246,7 @@ def _aggregate_named(self, func, *args, **kwargs):
for name, group in self:
group.name = name
output = func(group, *args, **kwargs)
- if isinstance(output, (Series, np.ndarray)):
+ if isinstance(output, (Series, Index, np.ndarray)):
raise Exception('Must produce aggregated value')
result[name] = self._try_cast(output, group)
@@ -2678,7 +2678,7 @@ def _wrap_applied_output(self, keys, values, not_indexed_same=False):
v = values[0]
- if isinstance(v, (np.ndarray, Series)):
+ if isinstance(v, (np.ndarray, Index, Series)):
if isinstance(v, Series):
applied_index = self._selected_obj._get_axis(self.axis)
all_indexed_same = _all_indexes_same([
@@ -2984,7 +2984,7 @@ def __getitem__(self, key):
if self._selection is not None:
raise Exception('Column(s) %s already selected' % self._selection)
- if isinstance(key, (list, tuple, Series, np.ndarray)):
+ if isinstance(key, (list, tuple, Series, Index, np.ndarray)):
if len(self.obj.columns.intersection(key)) != len(key):
bad_keys = list(set(key).difference(self.obj.columns))
raise KeyError("Columns not found: %s"
@@ -3579,7 +3579,7 @@ def _intercept_cython(func):
def _groupby_indices(values):
- return _algos.groupby_indices(com._ensure_object(values))
+ return _algos.groupby_indices(_values_from_object(com._ensure_object(values)))
def numpy_groupby(data, labels, axis=0):
diff --git a/pandas/core/index.py b/pandas/core/index.py
index 94bc48d0f4342..c7b1c60a9ddc4 100644
--- a/pandas/core/index.py
+++ b/pandas/core/index.py
@@ -1,6 +1,7 @@
# pylint: disable=E1101,E1103,W0232
import datetime
import warnings
+import operator
from functools import partial
from pandas.compat import range, zip, lrange, lzip, u, reduce
from pandas import compat
@@ -11,12 +12,12 @@
import pandas.algos as _algos
import pandas.index as _index
from pandas.lib import Timestamp, is_datetime_array
-from pandas.core.base import FrozenList, FrozenNDArray, IndexOpsMixin
-from pandas.util.decorators import cache_readonly, deprecate, Appender
+from pandas.core.base import PandasObject, FrozenList, FrozenNDArray, IndexOpsMixin
+from pandas.util.decorators import Appender, cache_readonly, deprecate
from pandas.core.common import isnull, array_equivalent
import pandas.core.common as com
from pandas.core.common import (_values_from_object, is_float, is_integer,
- ABCSeries)
+ ABCSeries, _ensure_object)
from pandas.core.config import get_option
# simplify
@@ -44,10 +45,15 @@ def _indexOp(opname):
"""
def wrapper(self, other):
- func = getattr(self.view(np.ndarray), opname)
- result = func(other)
+ func = getattr(self._data.view(np.ndarray), opname)
+ result = func(np.asarray(other))
+
+ # technically we could support bool dtyped Index
+ # for now just return the indexing array directly
+ if com.is_bool_dtype(result):
+ return result
try:
- return result.view(np.ndarray)
+ return Index(result)
except: # pragma: no cover
return result
return wrapper
@@ -56,19 +62,15 @@ def wrapper(self, other):
class InvalidIndexError(Exception):
pass
-
_o_dtype = np.dtype(object)
-
-
-def _shouldbe_timestamp(obj):
- return (tslib.is_datetime_array(obj)
- or tslib.is_datetime64_array(obj)
- or tslib.is_timestamp_array(obj))
-
_Identity = object
+def _new_Index(cls, d):
+ """ This is called upon unpickling, rather than the default which doesn't have arguments
+ and breaks __new__ """
+ return cls.__new__(cls, **d)
-class Index(IndexOpsMixin, FrozenNDArray):
+class Index(IndexOpsMixin, PandasObject):
"""
Immutable ndarray implementing an ordered, sliceable set. The basic object
@@ -102,16 +104,21 @@ class Index(IndexOpsMixin, FrozenNDArray):
_box_scalars = False
+ _typ = 'index'
+ _data = None
+ _id = None
name = None
asi8 = None
_comparables = ['name']
+ _attributes = ['name']
_allow_index_ops = True
_allow_datetime_index_ops = False
_allow_period_index_ops = False
+ _is_numeric_dtype = False
_engine_type = _index.ObjectEngine
- def __new__(cls, data, dtype=None, copy=False, name=None, fastpath=False,
+ def __new__(cls, data=None, dtype=None, copy=False, name=None, fastpath=False,
tupleize_cols=True, **kwargs):
# no class inference!
@@ -119,7 +126,7 @@ def __new__(cls, data, dtype=None, copy=False, name=None, fastpath=False,
return cls._simple_new(data, name)
from pandas.tseries.period import PeriodIndex
- if isinstance(data, (np.ndarray, ABCSeries)):
+ if isinstance(data, (np.ndarray, Index, ABCSeries)):
if issubclass(data.dtype.type, np.datetime64):
from pandas.tseries.index import DatetimeIndex
result = DatetimeIndex(data, copy=copy, name=name, **kwargs)
@@ -143,7 +150,10 @@ def __new__(cls, data, dtype=None, copy=False, name=None, fastpath=False,
if issubclass(data.dtype.type, np.floating):
return Float64Index(data, copy=copy, dtype=dtype, name=name)
- subarr = com._asarray_tuplesafe(data, dtype=object)
+ if com.is_bool_dtype(data):
+ subarr = data
+ else:
+ subarr = com._asarray_tuplesafe(data, dtype=object)
# _asarray_tuplesafe does not always copy underlying data,
# so need to make sure that this happens
@@ -153,7 +163,7 @@ def __new__(cls, data, dtype=None, copy=False, name=None, fastpath=False,
elif hasattr(data, '__array__'):
return Index(np.asarray(data), dtype=dtype, copy=copy, name=name,
**kwargs)
- elif np.isscalar(data):
+ elif data is None or np.isscalar(data):
cls._scalar_data_error(data)
else:
if tupleize_cols and isinstance(data, list) and data:
@@ -177,6 +187,9 @@ def __new__(cls, data, dtype=None, copy=False, name=None, fastpath=False,
return Int64Index(subarr.astype('i8'), copy=copy, name=name)
elif inferred in ['floating', 'mixed-integer-float']:
return Float64Index(subarr, copy=copy, name=name)
+ elif inferred == 'boolean':
+            # don't support boolean explicitly ATM
+ pass
elif inferred != 'string':
if (inferred.startswith('datetime') or
tslib.is_timestamp_array(subarr)):
@@ -185,15 +198,16 @@ def __new__(cls, data, dtype=None, copy=False, name=None, fastpath=False,
elif inferred == 'period':
return PeriodIndex(subarr, name=name, **kwargs)
- subarr = subarr.view(cls)
- # could also have a _set_name, but I don't think it's really necessary
- subarr._set_names([name])
- return subarr
+ return cls._simple_new(subarr, name)
@classmethod
- def _simple_new(cls, values, name, **kwargs):
- result = values.view(cls)
+ def _simple_new(cls, values, name=None, **kwargs):
+ result = object.__new__(cls)
+ result._data = values
result.name = name
+ for k, v in compat.iteritems(kwargs):
+ setattr(result,k,v)
+ result._reset_identity()
return result
def is_(self, other):
@@ -219,11 +233,66 @@ def _reset_identity(self):
"""Initializes or resets ``_id`` attribute with new object"""
self._id = _Identity()
- def view(self, *args, **kwargs):
- result = super(Index, self).view(*args, **kwargs)
- if isinstance(result, Index):
- result._id = self._id
- return result
+ # ndarray compat
+ def __len__(self):
+ """
+ return the length of the Index
+ """
+ return len(self._data)
+
+ def __array__(self, result=None):
+ """ the array interface, return my values """
+ return self._data.view(np.ndarray)
+
+ def __array_wrap__(self, result, context=None):
+ """
+ Gets called after a ufunc
+ """
+ return self._shallow_copy(result)
+
+ @cache_readonly
+ def dtype(self):
+ """ return the dtype object of the underlying data """
+ return self._data.dtype
+
+ @property
+ def values(self):
+ """ return the underlying data as an ndarray """
+ return self._data.view(np.ndarray)
+
+ def get_values(self):
+ """ return the underlying data as an ndarray """
+ return self.values
+
+ def _array_values(self):
+ return self._data
+
+ # ops compat
+ def tolist(self):
+ """
+ return a list of the Index values
+ """
+ return list(self.values)
+
+ def repeat(self, n):
+ """
+ return a new Index of the values repeated n times
+
+ See also
+ --------
+ numpy.ndarray.repeat
+ """
+ return self._shallow_copy(self.values.repeat(n))
+
+ def ravel(self, order='C'):
+ """
+ return an ndarray of the flattened values of the underlying data
+
+ See also
+ --------
+ numpy.ndarray.ravel
+ """
+ return self.values.ravel(order=order)
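With `ndarray` no longer in the MRO, numpy interop comes from the `__array__` protocol defined above. A toy illustration of why that protocol is enough for most numpy functions (the `ArrayLike` class is hypothetical):

```python
import numpy as np

class ArrayLike(object):
    """Toy wrapper: numpy interop via __array__ instead of subclassing."""

    def __init__(self, data):
        self._data = np.asarray(data)

    def __array__(self, dtype=None):
        # numpy calls this when coercing us with np.asarray/np.array
        return self._data

    def __len__(self):
        return len(self._data)

wrapped = ArrayLike([3, 1, 2])
# np.sort sees the underlying data through __array__
result = np.sort(wrapped)
```

Any ufunc result can then be re-wrapped by `__array_wrap__`/`_shallow_copy`, which is how the real `Index` keeps returning `Index` objects from numpy operations.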
# construction helpers
@classmethod
@@ -243,8 +312,8 @@ def _coerce_to_ndarray(cls, data):
"""coerces data to ndarray, raises on scalar data. Converts other
iterables to list first and then to array. Does not touch ndarrays."""
- if not isinstance(data, np.ndarray):
- if np.isscalar(data):
+ if not isinstance(data, (np.ndarray, Index)):
+ if data is None or np.isscalar(data):
cls._scalar_data_error(data)
# other iterable of some kind
@@ -253,16 +322,27 @@ def _coerce_to_ndarray(cls, data):
data = np.asarray(data)
return data
- def __array_finalize__(self, obj):
- self._reset_identity()
- if not isinstance(obj, type(self)):
- # Only relevant if array being created from an Index instance
- return
+ def _get_attributes_dict(self):
+ """ return an attributes dict for my class """
+ return dict([(k, getattr(self, k, None)) for k in self._attributes])
- self.name = getattr(obj, 'name', None)
+ def view(self, cls=None):
+ if cls is not None and not issubclass(cls, Index):
+ result = self._data.view(cls)
+ else:
+ result = self._shallow_copy()
+ if isinstance(result, Index):
+ result._id = self._id
+ return result
- def _shallow_copy(self):
- return self.view()
+ def _shallow_copy(self, values=None, **kwargs):
+ """ create a new Index, don't copy the data, use the same object attributes
+ with passed in attributes taking precedence """
+ if values is None:
+ values = self.values
+ attributes = self._get_attributes_dict()
+ attributes.update(kwargs)
+ return self.__class__._simple_new(values,**attributes)
def copy(self, names=None, name=None, dtype=None, deep=False):
"""
@@ -287,10 +367,11 @@ def copy(self, names=None, name=None, dtype=None, deep=False):
raise TypeError("Can only provide one of `names` and `name`")
if deep:
from copy import deepcopy
- new_index = np.ndarray.__deepcopy__(self, {}).view(self.__class__)
+ new_index = self._shallow_copy(self._data.copy())
name = name or deepcopy(self.name)
else:
- new_index = super(Index, self).copy()
+ new_index = self._shallow_copy()
+ name = self.name
if name is not None:
names = [name]
if names:
@@ -299,6 +380,19 @@ def copy(self, names=None, name=None, dtype=None, deep=False):
new_index = new_index.astype(dtype)
return new_index
+ __copy__ = copy
+
+ def __unicode__(self):
+ """
+ Return a string representation for this object.
+
+ Invoked by unicode(df) in py2 only. Yields a Unicode String in both
+ py2/py3.
+ """
+ prepr = com.pprint_thing(self, escape_chars=('\t', '\r', '\n'),
+ quote_strings=True)
+ return "%s(%s, dtype='%s')" % (type(self).__name__, prepr, self.dtype)
+
def to_series(self, keep_tz=False):
"""
Create a Series with both index and values equal to the index keys
@@ -343,22 +437,10 @@ def to_datetime(self, dayfirst=False):
def _assert_can_do_setop(self, other):
return True
- def tolist(self):
- """
- Overridden version of ndarray.tolist
- """
- return list(self.values)
-
- @cache_readonly
- def dtype(self):
- return self.values.dtype
-
@property
def nlevels(self):
return 1
- # for compat with multindex code
-
def _get_names(self):
return FrozenList((self.name,))
@@ -395,7 +477,7 @@ def set_names(self, names, level=None, inplace=False):
>>> Index([1, 2, 3, 4]).set_names(['foo'])
Int64Index([1, 2, 3, 4], dtype='int64')
>>> idx = MultiIndex.from_tuples([(1, u'one'), (1, u'two'),
- (2, u'one'), (2, u'two')],
+ (2, u'one'), (2, u'two')],
names=['foo', 'bar'])
>>> idx.set_names(['baz', 'quz'])
MultiIndex(levels=[[1, 2], [u'one', u'two']],
@@ -473,13 +555,6 @@ def _mpl_repr(self):
# how to represent ourselves to matplotlib
return self.values
- @property
- def values(self):
- return np.asarray(self)
-
- def get_values(self):
- return self.values
-
_na_value = np.nan
"""The expected NA value to use with this index."""
@@ -720,26 +795,42 @@ def is_type_compatible(self, typ):
@cache_readonly
def is_all_dates(self):
- return is_datetime_array(self.values)
+ if self._data is None:
+ return False
+ return is_datetime_array(_ensure_object(self.values))
def __iter__(self):
return iter(self.values)
def __reduce__(self):
- """Necessary for making this object picklable"""
- object_state = list(np.ndarray.__reduce__(self))
- subclass_state = self.name,
- object_state[2] = (object_state[2], subclass_state)
- return tuple(object_state)
+ d = dict(data=self._data)
+ d.update(self._get_attributes_dict())
+ return _new_Index, (self.__class__, d), None
def __setstate__(self, state):
"""Necessary for making this object picklable"""
- if len(state) == 2:
- nd_state, own_state = state
- np.ndarray.__setstate__(self, nd_state)
- self.name = own_state[0]
- else: # pragma: no cover
- np.ndarray.__setstate__(self, state)
+
+ if isinstance(state, dict):
+ self._data = state.pop('data')
+ for k, v in compat.iteritems(state):
+ setattr(self, k, v)
+
+ elif isinstance(state, tuple):
+
+ if len(state) == 2:
+ nd_state, own_state = state
+ data = np.empty(nd_state[1], dtype=nd_state[2])
+ np.ndarray.__setstate__(data, nd_state)
+ self.name = own_state[0]
+
+ else: # pragma: no cover
+ data = np.empty(state)
+ np.ndarray.__setstate__(data, state)
+
+ self._data = data
+ else:
+ raise Exception("invalid pickle state")
+ _unpickle_compat = __setstate__
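The new `__reduce__` uses the reconstructor-function pattern: a module-level callable plus a state dict, which keeps pickles stable now that the class no longer pickles as an ndarray. A self-contained sketch of that pattern (the `_new_thing`/`Thing` names are illustrative only):

```python
import pickle

def _new_thing(cls, d):
    # module-level reconstructor: pickle can find it by qualified name
    obj = object.__new__(cls)
    obj.__dict__.update(d)
    return obj

class Thing(object):
    def __init__(self, data, name=None):
        self._data = data
        self.name = name

    def __reduce__(self):
        # (callable, args) -- pickle calls _new_thing(Thing, d) on load
        d = dict(_data=self._data, name=self.name)
        return _new_thing, (self.__class__, d)

t = Thing([1, 2, 3], name='x')
t2 = pickle.loads(pickle.dumps(t))
```

The tuple-handling branch in `__setstate__` above plays the same role for legacy pickles written while `Index` was still an ndarray subclass.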
def __deepcopy__(self, memo={}):
return self.copy(deep=True)
@@ -755,6 +846,9 @@ def __contains__(self, key):
def __hash__(self):
raise TypeError("unhashable type: %r" % type(self).__name__)
+ def __setitem__(self, key, value):
+ raise TypeError("Index does not support mutable operations")
+
def __getitem__(self, key):
"""
Override numpy.ndarray's __getitem__ method to work as desired.
@@ -768,21 +862,24 @@ def __getitem__(self, key):
"""
# There's no custom logic to be implemented in __getslice__, so it's
# not overloaded intentionally.
- __getitem__ = super(Index, self).__getitem__
+ getitem = self._data.__getitem__
+ promote = self._shallow_copy
+
if np.isscalar(key):
- return __getitem__(key)
+ return getitem(key)
if isinstance(key, slice):
# This case is separated from the conditional above to avoid
# pessimization of basic indexing.
- return __getitem__(key)
+ return promote(getitem(key))
if com._is_bool_indexer(key):
- return __getitem__(np.asarray(key))
+ key = np.asarray(key)
- result = __getitem__(key)
- if result.ndim > 1:
- return result.view(np.ndarray)
+ key = _values_from_object(key)
+ result = getitem(key)
+ if not np.isscalar(result):
+ return promote(result)
else:
return result
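`__getitem__` now fetches from the stored ndarray and re-wraps any non-scalar result, so slicing preserves the Index type while scalar lookups unbox to plain values. The dispatch can be sketched as (hypothetical `Wrapped` class):

```python
import numpy as np

class Wrapped(object):
    def __init__(self, data):
        self._data = np.asarray(data)

    def __getitem__(self, key):
        result = self._data[key]
        if np.isscalar(result):
            # scalar lookups unbox to a plain value
            return result
        # slices / fancy indexing re-wrap, preserving the type
        return Wrapped(result)

w = Wrapped([10, 20, 30])
element = w[0]       # plain scalar
subset = w[1:]       # still a Wrapped instance
```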
@@ -831,12 +928,30 @@ def _ensure_compat_concat(indexes):
def take(self, indexer, axis=0):
"""
- Analogous to ndarray.take
+ return a new Index of the values selected by the indexer
+
+ See also
+ --------
+ numpy.ndarray.take
"""
+
indexer = com._ensure_platform_int(indexer)
- taken = self.view(np.ndarray).take(indexer)
- return self._simple_new(taken, name=self.name, freq=None,
- tz=getattr(self, 'tz', None))
+ taken = np.array(self).take(indexer)
+
+ # by definition cannot propagate freq
+ return self._shallow_copy(taken, freq=None)
+
+ def putmask(self, mask, value):
+ """
+ return a new Index of the values set with the mask
+
+ See also
+ --------
+ numpy.ndarray.putmask
+ """
+ values = self.values.copy()
+ np.putmask(values, mask, value)
+ return self._shallow_copy(values)
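Because `__setitem__` now raises, mutation-style operations such as `take` and `putmask` copy the underlying values and hand back a fresh Index. The copy-then-putmask idiom in plain numpy:

```python
import numpy as np

original = np.array([1, 2, 3, 4])
values = original.copy()            # never mutate the caller's data
np.putmask(values, original > 2, 0) # in-place only on the private copy

# values is [1, 2, 0, 0]; original is untouched
```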
def format(self, name=False, formatter=None, **kwargs):
"""
@@ -985,18 +1100,22 @@ def shift(self, periods=1, freq=None):
def argsort(self, *args, **kwargs):
"""
- See docstring for ndarray.argsort
+ return an ndarray indexer of the underlying data
+
+ See also
+ --------
+ numpy.ndarray.argsort
"""
result = self.asi8
if result is None:
- result = self.view(np.ndarray)
+ result = np.array(self)
return result.argsort(*args, **kwargs)
def __add__(self, other):
if isinstance(other, Index):
return self.union(other)
else:
- return Index(self.view(np.ndarray) + other)
+ return Index(np.array(self) + other)
__iadd__ = __add__
__eq__ = _indexOp('__eq__')
@@ -1048,7 +1167,7 @@ def union(self, other):
if self.is_monotonic and other.is_monotonic:
try:
- result = self._outer_indexer(self, other.values)[0]
+ result = self._outer_indexer(self.values, other.values)[0]
except TypeError:
# incomparable objects
result = list(self.values)
@@ -1122,7 +1241,7 @@ def intersection(self, other):
if self.is_monotonic and other.is_monotonic:
try:
- result = self._inner_indexer(self, other.values)[0]
+ result = self._inner_indexer(self.values, other.values)[0]
return self._wrap_union_result(other, result)
except TypeError:
pass
@@ -1381,7 +1500,7 @@ def _possibly_promote(self, other):
return self, other
def groupby(self, to_groupby):
- return self._groupby(self.values, to_groupby)
+ return self._groupby(self.values, _values_from_object(to_groupby))
def map(self, mapper):
return self._arrmap(self.values, mapper)
@@ -1416,9 +1535,6 @@ def isin(self, values, level=None):
self._validate_index_level(level)
return lib.ismember(self._array_values(), value_set)
- def _array_values(self):
- return self
-
def _get_method(self, method):
if method:
method = method.lower()
@@ -1778,7 +1894,7 @@ def slice_indexer(self, start=None, end=None, step=None):
return slice(start_slice, end_slice, step)
# loc indexers
- return Index(start_slice) & Index(end_slice)
+ return (Index(start_slice) & Index(end_slice)).values
def slice_locs(self, start=None, end=None):
"""
@@ -1814,7 +1930,7 @@ def _get_slice(starting_value, offset, search_side, slice_property,
# get_loc will return a boolean array for non_uniques
# if we are not monotonic
- if isinstance(slc, np.ndarray):
+ if isinstance(slc, (np.ndarray, Index)):
raise KeyError("cannot peform a slice operation "
"on a non-unique non-monotonic index")
@@ -1853,7 +1969,7 @@ def delete(self, loc):
-------
new_index : Index
"""
- return np.delete(self, loc)
+ return Index(np.delete(self._data, loc), name=self.name)
def insert(self, loc, item):
"""
@@ -1894,8 +2010,75 @@ def drop(self, labels):
raise ValueError('labels %s not contained in axis' % labels[mask])
return self.delete(indexer)
+ @classmethod
+ def _add_numeric_methods_disabled(cls):
+ """ add in numeric methods to disable """
+
+ def _make_invalid_op(opstr):
+
+ def _invalid_op(self, other):
+ raise TypeError("cannot perform {opstr} with this index type: {typ}".format(opstr=opstr,
+ typ=type(self)))
+ return _invalid_op
+
+ cls.__mul__ = cls.__rmul__ = _make_invalid_op('multiplication')
+ cls.__floordiv__ = cls.__rfloordiv__ = _make_invalid_op('floor division')
+ cls.__truediv__ = cls.__rtruediv__ = _make_invalid_op('true division')
+ if not compat.PY3:
+ cls.__div__ = cls.__rdiv__ = _make_invalid_op('division')
+
+ @classmethod
+ def _add_numeric_methods(cls):
+ """ add in numeric methods """
+
+ def _make_evaluate_binop(op, opstr):
+
+ def _evaluate_numeric_binop(self, other):
+
+ # if we are an inheritor of numeric, but not actually numeric (e.g. DatetimeIndex/PeriodIndex)
+ if not self._is_numeric_dtype:
+ raise TypeError("cannot evaluate a numeric op {opstr} for type: {typ}".format(opstr=opstr,
+ typ=type(self)))
+
+ if isinstance(other, Index):
+ if not other._is_numeric_dtype:
+ raise TypeError("cannot evaluate a numeric op {opstr} with type: {typ}".format(opstr=opstr,
+ typ=type(other)))
+ elif isinstance(other, np.ndarray) and not other.ndim:
+ other = other.item()
+
+ if isinstance(other, (Index, ABCSeries, np.ndarray)):
+ if len(self) != len(other):
+ raise ValueError("cannot evaluate a numeric op with unequal lengths")
+ other = _values_from_object(other)
+ if other.dtype.kind not in ['f','i']:
+ raise TypeError("cannot evaluate a numeric op with a non-numeric dtype")
+ else:
+ if not (com.is_float(other) or com.is_integer(other)):
+ raise TypeError("can only perform ops with scalar values")
+ return self._shallow_copy(op(self.values, other))
+
+ return _evaluate_numeric_binop
+
+
+ cls.__mul__ = cls.__rmul__ = _make_evaluate_binop(operator.mul,'multiplication')
+ cls.__floordiv__ = cls.__rfloordiv__ = _make_evaluate_binop(operator.floordiv,'floor division')
+ cls.__truediv__ = cls.__rtruediv__ = _make_evaluate_binop(operator.truediv,'true division')
+ if not compat.PY3:
+ cls.__div__ = cls.__rdiv__ = _make_evaluate_binop(operator.div,'division')
+Index._add_numeric_methods_disabled()
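`_add_numeric_methods_disabled` and `_add_numeric_methods` attach arithmetic dunders after the class body, using closures to capture the operator and its display name. A minimal standalone version of the disabling half (the `add_disabled_ops`/`Label` names are made up for illustration):

```python
def add_disabled_ops(cls, names):
    """Attach dunders that raise, mirroring _add_numeric_methods_disabled."""
    def make_invalid(opstr):
        # closure captures the human-readable op name for the message
        def invalid(self, other):
            raise TypeError("cannot perform %s with %s"
                            % (opstr, type(self).__name__))
        return invalid
    for dunder, opstr in names:
        setattr(cls, dunder, make_invalid(opstr))
    return cls

class Label(object):
    pass

add_disabled_ops(Label, [('__mul__', 'multiplication'),
                         ('__truediv__', 'true division')])
```

Subclasses like `Int64Index` then re-run the enabling variant, which is why the calls sit at module level right after each class definition.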
+
+class NumericIndex(Index):
+ """
+ Provide numeric type operations
-class Int64Index(Index):
+ This is an abstract class
+
+ """
+ _is_numeric_dtype = True
+
+
+class Int64Index(NumericIndex):
"""
Immutable ndarray implementing an ordered, sliceable set. The basic object
@@ -1918,6 +2101,7 @@ class Int64Index(Index):
An Index instance can **only** contain hashable objects
"""
+ _typ = 'int64index'
_groupby = _algos.groupby_int64
_arrmap = _algos.arrmap_int64
_left_indexer_unique = _algos.left_join_indexer_unique_int64
@@ -1927,12 +2111,10 @@ class Int64Index(Index):
_engine_type = _index.Int64Engine
- def __new__(cls, data, dtype=None, copy=False, name=None, fastpath=False):
+ def __new__(cls, data=None, dtype=None, copy=False, name=None, fastpath=False, **kwargs):
if fastpath:
- subarr = data.view(cls)
- subarr.name = name
- return subarr
+ return cls._simple_new(data, name=name)
# isscalar, generators handled in coerce_to_ndarray
data = cls._coerce_to_ndarray(data)
@@ -1955,9 +2137,7 @@ def __new__(cls, data, dtype=None, copy=False, name=None, fastpath=False):
raise TypeError('Unsafe NumPy casting to integer, you must'
' explicitly cast')
- subarr = subarr.view(cls)
- subarr.name = name
- return subarr
+ return cls._simple_new(subarr, name=name)
@property
def inferred_type(self):
@@ -1994,9 +2174,9 @@ def equals(self, other):
def _wrap_joined_index(self, joined, other):
name = self.name if self.name == other.name else None
return Int64Index(joined, name=name)
+Int64Index._add_numeric_methods()
-
-class Float64Index(Index):
+class Float64Index(NumericIndex):
"""
Immutable ndarray implementing an ordered, sliceable set. The basic object
@@ -2017,7 +2197,7 @@ class Float64Index(Index):
An Float64Index instance can **only** contain hashable objects
"""
- # when this is not longer object dtype this can be changed
+ _typ = 'float64index'
_engine_type = _index.Float64Engine
_groupby = _algos.groupby_float64
_arrmap = _algos.arrmap_float64
@@ -2026,12 +2206,10 @@ class Float64Index(Index):
_inner_indexer = _algos.inner_join_indexer_float64
_outer_indexer = _algos.outer_join_indexer_float64
- def __new__(cls, data, dtype=None, copy=False, name=None, fastpath=False):
+ def __new__(cls, data=None, dtype=None, copy=False, name=None, fastpath=False, **kwargs):
if fastpath:
- subarr = data.view(cls)
- subarr.name = name
- return subarr
+ return cls._simple_new(data, name)
data = cls._coerce_to_ndarray(data)
@@ -2051,9 +2229,7 @@ def __new__(cls, data, dtype=None, copy=False, name=None, fastpath=False):
if subarr.dtype != np.float64:
subarr = subarr.astype(np.float64)
- subarr = subarr.view(cls)
- subarr.name = name
- return subarr
+ return cls._simple_new(subarr, name)
@property
def inferred_type(self):
@@ -2186,6 +2362,7 @@ def isin(self, values, level=None):
self._validate_index_level(level)
return lib.ismember_nans(self._array_values(), value_set,
isnull(list(value_set)).any())
+Float64Index._add_numeric_methods()
class MultiIndex(Index):
@@ -2205,8 +2382,14 @@ class MultiIndex(Index):
level)
names : optional sequence of objects
Names for each of the index levels.
+ copy : boolean, default False
+ Copy the meta-data
+ verify_integrity : boolean, default True
+ Check that the levels/labels are consistent and valid
"""
+
# initialize to zero-length tuples to make everything work
+ _typ = 'multiindex'
_names = FrozenList()
_levels = FrozenList()
_labels = FrozenList()
@@ -2214,7 +2397,8 @@ class MultiIndex(Index):
rename = Index.set_names
def __new__(cls, levels=None, labels=None, sortorder=None, names=None,
- copy=False, verify_integrity=True):
+ copy=False, verify_integrity=True, _set_identity=True, **kwargs):
+
if levels is None or labels is None:
raise TypeError("Must pass both levels and labels")
if len(levels) != len(labels):
@@ -2226,28 +2410,29 @@ def __new__(cls, levels=None, labels=None, sortorder=None, names=None,
name = names[0]
else:
name = None
-
return Index(levels[0], name=name, copy=True).take(labels[0])
- # v3, 0.8.0
- subarr = np.empty(0, dtype=object).view(cls)
+ result = object.__new__(MultiIndex)
+
# we've already validated levels and labels, so shortcut here
- subarr._set_levels(levels, copy=copy, validate=False)
- subarr._set_labels(labels, copy=copy, validate=False)
+ result._set_levels(levels, copy=copy, validate=False)
+ result._set_labels(labels, copy=copy, validate=False)
if names is not None:
# handles name validation
- subarr._set_names(names)
+ result._set_names(names)
if sortorder is not None:
- subarr.sortorder = int(sortorder)
+ result.sortorder = int(sortorder)
else:
- subarr.sortorder = sortorder
+ result.sortorder = sortorder
if verify_integrity:
- subarr._verify_integrity()
+ result._verify_integrity()
+ if _set_identity:
+ result._reset_identity()
- return subarr
+ return result
def _verify_integrity(self):
"""Raises ValueError if length of levels and labels don't match or any
@@ -2329,7 +2514,7 @@ def set_levels(self, levels, level=None, inplace=False, verify_integrity=True):
Examples
--------
>>> idx = MultiIndex.from_tuples([(1, u'one'), (1, u'two'),
- (2, u'one'), (2, u'two')],
+ (2, u'one'), (2, u'two')],
names=['foo', 'bar'])
>>> idx.set_levels([['a','b'], [1,2]])
MultiIndex(levels=[[u'a', u'b'], [1, 2]],
@@ -2381,7 +2566,7 @@ def _get_labels(self):
def _set_labels(self, labels, level=None, copy=False, validate=True,
verify_integrity=False):
-
+
if validate and level is None and len(labels) != self.nlevels:
raise ValueError("Length of labels must match number of levels")
if validate and level is not None and len(labels) != len(level):
@@ -2427,7 +2612,7 @@ def set_labels(self, labels, level=None, inplace=False, verify_integrity=True):
Examples
--------
>>> idx = MultiIndex.from_tuples([(1, u'one'), (1, u'two'),
- (2, u'one'), (2, u'two')],
+ (2, u'one'), (2, u'two')],
names=['foo', 'bar'])
>>> idx.set_labels([[1,0,1,0], [0,0,1,1]])
MultiIndex(levels=[[1, 2], [u'one', u'two']],
@@ -2474,7 +2659,7 @@ def set_labels(self, labels, level=None, inplace=False, verify_integrity=True):
labels = property(fget=_get_labels, fset=__set_labels)
def copy(self, names=None, dtype=None, levels=None, labels=None,
- deep=False):
+ deep=False, _set_identity=False):
"""
Make a copy of this object. Names, dtype, levels and labels can be
passed and will be set on new copy.
@@ -2496,39 +2681,33 @@ def copy(self, names=None, dtype=None, levels=None, labels=None,
``deep``, but if ``deep`` is passed it will attempt to deepcopy.
This could be potentially expensive on large MultiIndex objects.
"""
- new_index = np.ndarray.copy(self)
if deep:
from copy import deepcopy
levels = levels if levels is not None else deepcopy(self.levels)
labels = labels if labels is not None else deepcopy(self.labels)
names = names if names is not None else deepcopy(self.names)
- if levels is not None:
- new_index = new_index.set_levels(levels)
- if labels is not None:
- new_index = new_index.set_labels(labels)
- if names is not None:
- new_index = new_index.set_names(names)
- if dtype:
- new_index = new_index.astype(dtype)
- return new_index
+ else:
+ levels = self.levels
+ labels = self.labels
+ names = self.names
+ return MultiIndex(levels=levels,
+ labels=labels,
+ names=names,
+ sortorder=self.sortorder,
+ verify_integrity=False,
+ _set_identity=_set_identity)
+
+ def __array__(self, result=None):
+ """ the array interface, return my values """
+ return self.values
- def __array_finalize__(self, obj):
- """
- Update custom MultiIndex attributes when a new array is created by
- numpy, e.g. when calling ndarray.view()
- """
- # overriden if a view
- self._reset_identity()
- if not isinstance(obj, type(self)):
- # Only relevant if this array is being created from an Index
- # instance.
- return
+ def view(self, cls=None):
+ """ this is defined as a copy with the same identity """
+ result = self.copy()
+ result._id = self._id
+ return result
- # skip the validation on first, rest will catch the errors
- self._set_levels(getattr(obj, 'levels', []), validate=False)
- self._set_labels(getattr(obj, 'labels', []))
- self._set_names(getattr(obj, 'names', []))
- self.sortorder = getattr(obj, 'sortorder', None)
+ _shallow_copy = view
def _array_values(self):
# hack for various methods
@@ -2628,12 +2807,7 @@ def inferred_type(self):
@staticmethod
def _from_elements(values, labels=None, levels=None, names=None,
sortorder=None):
- index = values.view(MultiIndex)
- index._set_levels(levels)
- index._set_labels(labels)
- index._set_names(names)
- index.sortorder = sortorder
- return index
+ return MultiIndex(levels=levels, labels=labels, names=names, sortorder=sortorder)
def _get_level_number(self, level):
try:
@@ -2663,33 +2837,28 @@ def _get_level_number(self, level):
@property
def values(self):
- if self._is_v2:
- return self.view(np.ndarray)
- else:
- if self._tuples is not None:
- return self._tuples
+ if self._tuples is not None:
+ return self._tuples
- values = []
- for lev, lab in zip(self.levels, self.labels):
- taken = com.take_1d(lev.values, lab)
- # Need to box timestamps, etc.
- if hasattr(lev, '_box_values'):
- taken = lev._box_values(taken)
- values.append(taken)
+ values = []
+ for lev, lab in zip(self.levels, self.labels):
+ taken = com.take_1d(lev.values, lab)
+ # Need to box timestamps, etc.
+ if hasattr(lev, '_box_values'):
+ taken = lev._box_values(taken)
+ values.append(taken)
- self._tuples = lib.fast_zip(values)
- return self._tuples
+ self._tuples = lib.fast_zip(values)
+ return self._tuples
# fml
@property
def _is_v1(self):
- contents = self.view(np.ndarray)
- return len(contents) > 0 and not isinstance(contents[0], tuple)
+ return False
@property
def _is_v2(self):
- contents = self.view(np.ndarray)
- return len(contents) > 0 and isinstance(contents[0], tuple)
+ return False
@property
def _has_complex_internals(self):
@@ -3000,7 +3169,7 @@ def from_tuples(cls, tuples, sortorder=None, names=None):
# I think this is right? Not quite sure...
raise TypeError('Cannot infer number of levels from empty list')
- if isinstance(tuples, np.ndarray):
+ if isinstance(tuples, (np.ndarray, Index)):
if isinstance(tuples, Index):
tuples = tuples.values
@@ -3075,18 +3244,25 @@ def __contains__(self, key):
def __reduce__(self):
"""Necessary for making this object picklable"""
- object_state = list(np.ndarray.__reduce__(self))
- subclass_state = ([lev.view(np.ndarray) for lev in self.levels],
- [label.view(np.ndarray) for label in self.labels],
- self.sortorder, list(self.names))
- object_state[2] = (object_state[2], subclass_state)
- return tuple(object_state)
+ d = dict(levels = [lev.view(np.ndarray) for lev in self.levels],
+ labels = [label.view(np.ndarray) for label in self.labels],
+ sortorder = self.sortorder,
+ names = list(self.names))
+ return _new_Index, (self.__class__, d), None
def __setstate__(self, state):
"""Necessary for making this object picklable"""
- nd_state, own_state = state
- np.ndarray.__setstate__(self, nd_state)
- levels, labels, sortorder, names = own_state
+
+ if isinstance(state, dict):
+ levels = state.get('levels')
+ labels = state.get('labels')
+ sortorder = state.get('sortorder')
+ names = state.get('names')
+
+ elif isinstance(state, tuple):
+
+ nd_state, own_state = state
+ levels, labels, sortorder, names = own_state
self._set_levels([Index(x) for x in levels], validate=False)
self._set_labels(labels)
@@ -3112,21 +3288,15 @@ def __getitem__(self, key):
# cannot be sure whether the result will be sorted
sortorder = None
- result = np.empty(0, dtype=object).view(type(self))
new_labels = [lab[key] for lab in self.labels]
- # an optimization
- result._set_levels(self.levels, validate=False)
- result._set_labels(new_labels)
- result.sortorder = sortorder
- result._set_names(self.names)
-
- return result
+ return MultiIndex(levels=self.levels,
+ labels=new_labels,
+ names=self.names,
+ sortorder=sortorder,
+ verify_integrity=False)
def take(self, indexer, axis=None):
- """
- Analogous to ndarray.take
- """
indexer = com._ensure_platform_int(indexer)
new_labels = [lab.take(indexer) for lab in self.labels]
return MultiIndex(levels=self.levels, labels=new_labels,
@@ -3167,6 +3337,13 @@ def append(self, other):
def argsort(self, *args, **kwargs):
return self.values.argsort()
+ def repeat(self, n):
+ return MultiIndex(levels=self.levels,
+ labels=[label.view(np.ndarray).repeat(n) for label in self.labels],
+ names=self.names,
+ sortorder=self.sortorder,
+ verify_integrity=False)
+
def drop(self, labels, level=None):
"""
Make new MultiIndex with passed list of labels deleted
@@ -3185,7 +3362,7 @@ def drop(self, labels, level=None):
return self._drop_from_level(labels, level)
try:
- if not isinstance(labels, np.ndarray):
+ if not isinstance(labels, (np.ndarray, Index)):
labels = com._index_labels_to_array(labels)
indexer = self.get_indexer(labels)
mask = indexer == -1
@@ -3254,7 +3431,7 @@ def droplevel(self, level=0):
mask = new_labels[0] == -1
result = new_levels[0].take(new_labels[0])
if mask.any():
- np.putmask(result, mask, np.nan)
+ result = result.putmask(mask, np.nan)
result.name = new_names[0]
return result
@@ -3414,16 +3591,16 @@ def get_indexer(self, target, method=None, limit=None):
if not self.is_unique or not self.is_monotonic:
raise AssertionError(('Must be unique and monotonic to '
'use forward fill getting the indexer'))
- indexer = self_index._engine.get_pad_indexer(target_index,
+ indexer = self_index._engine.get_pad_indexer(target_index.values,
limit=limit)
elif method == 'backfill':
if not self.is_unique or not self.is_monotonic:
raise AssertionError(('Must be unique and monotonic to '
'use backward fill getting the indexer'))
- indexer = self_index._engine.get_backfill_indexer(target_index,
+ indexer = self_index._engine.get_backfill_indexer(target_index.values,
limit=limit)
else:
- indexer = self_index._engine.get_indexer(target_index)
+ indexer = self_index._engine.get_indexer(target_index.values)
return com._ensure_platform_int(indexer)
@@ -4087,6 +4264,7 @@ def isin(self, values, level=None):
return np.zeros(len(labs), dtype=np.bool_)
else:
return np.lib.arraysetops.in1d(labs, sought_labels)
+MultiIndex._add_numeric_methods_disabled()
# For utility purposes
@@ -4192,6 +4370,12 @@ def _union_indexes(indexes):
return result
indexes, kind = _sanitize_and_check(indexes)
+ def _unique_indices(inds):
+ def conv(i):
+ if isinstance(i, Index):
+ i = i.tolist()
+ return i
+ return Index(lib.fast_unique_multiple_list([ conv(i) for i in inds ]))
if kind == 'special':
result = indexes[0]
@@ -4206,11 +4390,11 @@ def _union_indexes(indexes):
index = indexes[0]
for other in indexes[1:]:
if not index.equals(other):
- return Index(lib.fast_unique_multiple(indexes))
+ return _unique_indices(indexes)
return index
else:
- return Index(lib.fast_unique_multiple_list(indexes))
+ return _unique_indices(indexes)
def _trim_front(strings):
diff --git a/pandas/core/indexing.py b/pandas/core/indexing.py
index b02fe523df998..91008f9b22aed 100644
--- a/pandas/core/indexing.py
+++ b/pandas/core/indexing.py
@@ -7,7 +7,8 @@
import pandas.core.common as com
from pandas.core.common import (_is_bool_indexer, is_integer_dtype,
_asarray_tuplesafe, is_list_like, isnull,
- ABCSeries, ABCDataFrame, ABCPanel, is_float)
+ ABCSeries, ABCDataFrame, ABCPanel, is_float,
+ _values_from_object)
import pandas.lib as lib
import numpy as np
@@ -1086,7 +1087,7 @@ def _convert_to_indexer(self, obj, axis=0, is_setter=False):
return {'key': obj}
raise KeyError('%s not in index' % objarr[mask])
- return indexer
+ return _values_from_object(indexer)
else:
try:
@@ -1512,7 +1513,7 @@ def _length_of_indexer(indexer, target=None):
elif step < 0:
step = abs(step)
return (stop - start) / step
- elif isinstance(indexer, (ABCSeries, np.ndarray, list)):
+ elif isinstance(indexer, (ABCSeries, Index, np.ndarray, list)):
return len(indexer)
elif not is_list_like(indexer):
return 1
diff --git a/pandas/core/internals.py b/pandas/core/internals.py
index f5cb48fd94022..da36d95a3ad9e 100644
--- a/pandas/core/internals.py
+++ b/pandas/core/internals.py
@@ -2617,15 +2617,22 @@ def copy(self, deep=True):
Parameters
----------
- deep : boolean, default True
+ deep : boolean or string, default True
If False, return shallow copy (do not copy data)
+ If 'all', copy data and a deep copy of the index
Returns
-------
copy : BlockManager
"""
+
+ # this preserves the notion of view copying of axes
if deep:
- new_axes = [ax.view() for ax in self.axes]
+ if deep == 'all':
+ copy = lambda ax: ax.copy(deep=True)
+ else:
+ copy = lambda ax: ax.view()
+ new_axes = [ copy(ax) for ax in self.axes]
else:
new_axes = list(self.axes)
return self.apply('copy', axes=new_axes, deep=deep,
diff --git a/pandas/core/ops.py b/pandas/core/ops.py
index abe1974705243..9f29570af6f4f 100644
--- a/pandas/core/ops.py
+++ b/pandas/core/ops.py
@@ -247,7 +247,7 @@ def __init__(self, left, right, name):
# need to make sure that we are aligning the data
if isinstance(left, pd.Series) and isinstance(right, pd.Series):
- left, right = left.align(right)
+ left, right = left.align(right, copy=False)
self.left = left
self.right = right
@@ -331,12 +331,12 @@ def _convert_to_array(self, values, name=None, other=None):
values = np.empty(values.shape, dtype=other.dtype)
values[:] = tslib.iNaT
- # a datetlike
+ # a datelike
+ elif isinstance(values, pd.DatetimeIndex):
+ values = values.to_series()
elif not (isinstance(values, (pa.Array, pd.Series)) and
com.is_datetime64_dtype(values)):
values = tslib.array_to_datetime(values)
- elif isinstance(values, pd.DatetimeIndex):
- values = values.to_series()
elif inferred_type in ('timedelta', 'timedelta64'):
# have a timedelta, convert to to ns here
values = _possibly_cast_to_timedelta(values, coerce=coerce, dtype='timedelta64[ns]')
@@ -451,11 +451,11 @@ def na_op(x, y):
result = expressions.evaluate(op, str_rep, x, y,
raise_on_error=True, **eval_kwargs)
except TypeError:
- if isinstance(y, (pa.Array, pd.Series)):
+ if isinstance(y, (pa.Array, pd.Series, pd.Index)):
dtype = np.find_common_type([x.dtype, y.dtype], [])
result = np.empty(x.size, dtype=dtype)
mask = notnull(x) & notnull(y)
- result[mask] = op(x[mask], y[mask])
+ result[mask] = op(x[mask], _values_from_object(y[mask]))
elif isinstance(x, pa.Array):
result = pa.empty(len(x), dtype=x.dtype)
mask = notnull(x)
@@ -555,7 +555,7 @@ def wrapper(self, other):
index=self.index, name=name)
elif isinstance(other, pd.DataFrame): # pragma: no cover
return NotImplemented
- elif isinstance(other, (pa.Array, pd.Series)):
+ elif isinstance(other, (pa.Array, pd.Series, pd.Index)):
if len(self) != len(other):
raise ValueError('Lengths must match to compare')
return self._constructor(na_op(self.values, np.asarray(other)),
@@ -565,7 +565,7 @@ def wrapper(self, other):
mask = isnull(self)
values = self.get_values()
- other = _index.convert_scalar(values, other)
+ other = _index.convert_scalar(values, _values_from_object(other))
if issubclass(values.dtype.type, np.datetime64):
values = values.view('i8')
diff --git a/pandas/core/series.py b/pandas/core/series.py
index d1f861b7f7fd7..22284df337d97 100644
--- a/pandas/core/series.py
+++ b/pandas/core/series.py
@@ -70,18 +70,6 @@ def wrapper(self):
return wrapper
-def _unbox(func):
- @Appender(func.__doc__)
- def f(self, *args, **kwargs):
- result = func(self.values, *args, **kwargs)
- if isinstance(result, (pa.Array, Series)) and result.ndim == 0:
- # return NumPy type
- return result.dtype.type(result.item())
- else: # pragma: no cover
- return result
- f.__name__ = func.__name__
- return f
-
#----------------------------------------------------------------------
# Series class
@@ -290,76 +278,87 @@ def _set_subtyp(self, is_all_dates):
object.__setattr__(self, '_subtyp', 'series')
# ndarray compatibility
- def item(self):
- return self._data.values.item()
-
- @property
- def data(self):
- return self._data.values.data
-
- @property
- def strides(self):
- return self._data.values.strides
-
- @property
- def size(self):
- return self._data.values.size
-
- @property
- def flags(self):
- return self._data.values.flags
-
@property
def dtype(self):
+ """ return the dtype object of the underlying data """
return self._data.dtype
@property
def dtypes(self):
- """ for compat """
+ """ return the dtype object of the underlying data """
return self._data.dtype
@property
def ftype(self):
+        """ return a string describing whether the data is sparse or dense """
return self._data.ftype
@property
def ftypes(self):
- """ for compat """
+        """ return a string describing whether the data is sparse or dense """
return self._data.ftype
@property
- def shape(self):
- return self._data.shape
+ def values(self):
+ """
+ Return Series as ndarray
- @property
- def ndim(self):
- return 1
+ Returns
+ -------
+ arr : numpy.ndarray
+ """
+ return self._data.values
- @property
- def base(self):
- return self.values.base
+ def get_values(self):
+ """ same as values (but handles sparseness conversions); is a view """
+ return self._data.get_values()
+
+ # ops
def ravel(self, order='C'):
+ """
+ Return the flattened underlying data as an ndarray
+
+ See also
+ --------
+ numpy.ndarray.ravel
+ """
return self.values.ravel(order=order)
def compress(self, condition, axis=0, out=None, **kwargs):
- # 1-d compat with numpy
- return self[condition]
-
- def transpose(self):
- """ support for compatiblity """
- return self
+ """
+ Return selected slices of an array along given axis as a Series
- T = property(transpose)
+ See also
+ --------
+ numpy.ndarray.compress
+ """
+ return self[condition]
def nonzero(self):
- """ numpy like, returns same as nonzero """
+ """
+        return the integer indices of the elements of the underlying data that are nonzero
+
+ See also
+ --------
+ numpy.ndarray.nonzero
+ """
return self.values.nonzero()
def put(self, *args, **kwargs):
+ """
+        put the given values into the underlying data in-place
+
+ See also
+ --------
+ numpy.ndarray.put
+ """
self.values.put(*args, **kwargs)
def __len__(self):
+ """
+ return the length of the Series
+ """
return len(self._data)
def view(self, dtype=None):
@@ -442,7 +441,7 @@ def _unpickle_series_compat(self, state):
# recreate
self._data = SingleBlockManager(data, index, fastpath=True)
- self.index = index
+ self._index = index
self.name = name
else:
@@ -549,7 +548,7 @@ def _get_with(self, key):
raise
# pragma: no cover
- if not isinstance(key, (list, pa.Array, Series)):
+ if not isinstance(key, (list, pa.Array, Series, Index)):
key = list(key)
if isinstance(key, Index):
@@ -716,7 +715,11 @@ def _set_values(self, key, value):
def repeat(self, reps):
"""
- See ndarray.repeat
+ return a new Series with the values repeated reps times
+
+ See also
+ --------
+ numpy.ndarray.repeat
"""
new_index = self.index.repeat(reps)
new_values = self.values.repeat(reps)
@@ -725,7 +728,13 @@ def repeat(self, reps):
def reshape(self, *args, **kwargs):
"""
- See numpy.ndarray.reshape
+        return an ndarray with the values reshaped;
+ if the specified shape matches exactly the current shape, then
+ return self (for compat)
+
+ See also
+ --------
+        numpy.ndarray.reshape
"""
if len(args) == 1 and hasattr(args[0], '__iter__'):
shape = args[0]
@@ -989,12 +998,6 @@ def iteritems(self):
if compat.PY3: # pragma: no cover
items = iteritems
- #----------------------------------------------------------------------
- # unbox reductions
-
- all = _unbox(pa.Array.all)
- any = _unbox(pa.Array.any)
-
#----------------------------------------------------------------------
# Misc public methods
@@ -1002,21 +1005,6 @@ def keys(self):
"Alias for index"
return self.index
- @property
- def values(self):
- """
- Return Series as ndarray
-
- Returns
- -------
- arr : numpy.ndarray
- """
- return self._data.values
-
- def get_values(self):
- """ same as values (but handles sparseness conversions); is a view """
- return self._data.get_values()
-
def tolist(self):
""" Convert Series to a nested list """
return list(self)
@@ -1191,6 +1179,7 @@ def idxmin(self, axis=None, out=None, skipna=True):
See Also
--------
DataFrame.idxmin
+ numpy.ndarray.argmin
"""
i = nanops.nanargmin(_values_from_object(self), skipna=skipna)
if i == -1:
@@ -1217,6 +1206,7 @@ def idxmax(self, axis=None, out=None, skipna=True):
See Also
--------
DataFrame.idxmax
+ numpy.ndarray.argmax
"""
i = nanops.nanargmax(_values_from_object(self), skipna=skipna)
if i == -1:
@@ -1334,7 +1324,7 @@ def cov(self, other, min_periods=None):
Normalized by N-1 (unbiased estimator).
"""
- this, other = self.align(other, join='inner')
+ this, other = self.align(other, join='inner', copy=False)
if len(this) == 0:
return pa.NA
return nanops.nancov(this.values, other.values,
@@ -1460,7 +1450,7 @@ def _binop(self, other, func, level=None, fill_value=None):
this = self
if not self.index.equals(other.index):
- this, other = self.align(other, level=level, join='outer')
+ this, other = self.align(other, level=level, join='outer', copy=False)
new_index = this.index
this_vals = this.values
@@ -1599,6 +1589,9 @@ def argsort(self, axis=0, kind='quicksort', order=None):
-------
argsorted : Series, with -1 indicated where nan values are present
+ See also
+ --------
+ numpy.ndarray.argsort
"""
values = self.values
mask = isnull(values)
@@ -2072,8 +2065,7 @@ def reindex_axis(self, labels, axis=0, **kwargs):
def take(self, indices, axis=0, convert=True, is_copy=False):
"""
- Analogous to ndarray.take, return Series corresponding to requested
- indices
+ return Series corresponding to requested indices
Parameters
----------
@@ -2083,6 +2075,10 @@ def take(self, indices, axis=0, convert=True, is_copy=False):
Returns
-------
taken : Series
+
+ See also
+ --------
+ numpy.ndarray.take
"""
        # check/convert indices here
if convert:
@@ -2483,7 +2479,7 @@ def _try_cast(arr, take_fast_path):
return subarr
# GH #846
- if isinstance(data, (pa.Array, Series)):
+ if isinstance(data, (pa.Array, Index, Series)):
subarr = np.array(data, copy=False)
if dtype is not None:
diff --git a/pandas/io/pickle.py b/pandas/io/pickle.py
index e80bfec9c8dba..52a9ef0370e9e 100644
--- a/pandas/io/pickle.py
+++ b/pandas/io/pickle.py
@@ -1,6 +1,5 @@
from pandas.compat import cPickle as pkl, pickle_compat as pc, PY3
-
def to_pickle(obj, path):
"""
Pickle (serialize) object to input file path
@@ -45,7 +44,7 @@ def try_read(path, encoding=None):
try:
with open(path, 'rb') as fh:
return pkl.load(fh)
- except:
+    except Exception as e:
# reg/patched pickle
try:
diff --git a/pandas/io/pytables.py b/pandas/io/pytables.py
index b95c1ed0b77e9..78e7c43de678f 100644
--- a/pandas/io/pytables.py
+++ b/pandas/io/pytables.py
@@ -3564,7 +3564,7 @@ def read(self, where=None, columns=None, **kwargs):
# need a better algorithm
tuple_index = long_index._tuple_index
- unique_tuples = lib.fast_unique(tuple_index)
+ unique_tuples = lib.fast_unique(tuple_index.values)
unique_tuples = _asarray_tuplesafe(unique_tuples)
indexer = match(unique_tuples, tuple_index)
diff --git a/pandas/io/tests/test_pickle.py b/pandas/io/tests/test_pickle.py
index 07d576ac1c8ae..aea7fb42b7d36 100644
--- a/pandas/io/tests/test_pickle.py
+++ b/pandas/io/tests/test_pickle.py
@@ -18,11 +18,6 @@
from pandas.util.misc import is_little_endian
import pandas
-def _read_pickle(vf, encoding=None, compat=False):
- from pandas.compat import pickle_compat as pc
- with open(vf,'rb') as fh:
- pc.load(fh, encoding=encoding, compat=compat)
-
class TestPickle(tm.TestCase):
_multiprocess_can_split_ = True
@@ -97,16 +92,54 @@ def test_read_pickles_0_14_0(self):
self.read_pickles('0.14.0')
def test_round_trip_current(self):
- for typ, dv in self.data.items():
+ try:
+ import cPickle as c_pickle
+ def c_pickler(obj,path):
+ with open(path,'wb') as fh:
+ c_pickle.dump(obj,fh,protocol=-1)
+
+ def c_unpickler(path):
+ with open(path,'rb') as fh:
+ fh.seek(0)
+ return c_pickle.load(fh)
+        except ImportError:
+ c_pickler = None
+ c_unpickler = None
+
+ import pickle as python_pickle
+
+ def python_pickler(obj,path):
+ with open(path,'wb') as fh:
+ python_pickle.dump(obj,fh,protocol=-1)
+
+ def python_unpickler(path):
+ with open(path,'rb') as fh:
+ fh.seek(0)
+ return python_pickle.load(fh)
+
+ for typ, dv in self.data.items():
for dt, expected in dv.items():
- with tm.ensure_clean(self.path) as path:
+ for writer in [pd.to_pickle, c_pickler, python_pickler ]:
+ if writer is None:
+ continue
+
+ with tm.ensure_clean(self.path) as path:
+
+ # test writing with each pickler
+ writer(expected,path)
+
+ # test reading with each unpickler
+ result = pd.read_pickle(path)
+ self.compare_element(typ, result, expected)
- pd.to_pickle(expected,path)
+ if c_unpickler is not None:
+ result = c_unpickler(path)
+ self.compare_element(typ, result, expected)
- result = pd.read_pickle(path)
- self.compare_element(typ, result, expected)
+ result = python_unpickler(path)
+ self.compare_element(typ, result, expected)
def _validate_timeseries(self, pickled, current):
# GH 7748
diff --git a/pandas/io/tests/test_pytables.py b/pandas/io/tests/test_pytables.py
index 024415409cdca..4f76f72b8eb66 100644
--- a/pandas/io/tests/test_pytables.py
+++ b/pandas/io/tests/test_pytables.py
@@ -857,11 +857,16 @@ def check(format,index):
assert_frame_equal(df,store['df'])
for index in [ tm.makeFloatIndex, tm.makeStringIndex, tm.makeIntIndex,
- tm.makeDateIndex, tm.makePeriodIndex ]:
+ tm.makeDateIndex ]:
check('table',index)
check('fixed',index)
+ # period index currently broken for table
+        # see GH7796 FIXME
+ check('fixed',tm.makePeriodIndex)
+ #check('table',tm.makePeriodIndex)
+
# unicode
index = tm.makeUnicodeIndex
if compat.PY3:
@@ -2285,7 +2290,7 @@ def test_remove_where(self):
# deleted number (entire table)
n = store.remove('wp', [])
- assert(n == 120)
+ self.assertTrue(n == 120)
# non - empty where
_maybe_remove(store, 'wp')
@@ -2379,7 +2384,8 @@ def test_remove_crit(self):
crit4 = Term('major_axis=date4')
store.put('wp3', wp, format='t')
n = store.remove('wp3', where=[crit4])
- assert(n == 36)
+ self.assertTrue(n == 36)
+
result = store.select('wp3')
expected = wp.reindex(major_axis=wp.major_axis - date4)
assert_panel_equal(result, expected)
@@ -2392,11 +2398,10 @@ def test_remove_crit(self):
crit1 = Term('major_axis>date')
crit2 = Term("minor_axis=['A', 'D']")
n = store.remove('wp', where=[crit1])
-
- assert(n == 56)
+ self.assertTrue(n == 56)
n = store.remove('wp', where=[crit2])
- assert(n == 32)
+ self.assertTrue(n == 32)
result = store['wp']
expected = wp.truncate(after=date).reindex(minor=['B', 'C'])
diff --git a/pandas/lib.pyx b/pandas/lib.pyx
index 373320393bff2..7ffc59f6ab50d 100644
--- a/pandas/lib.pyx
+++ b/pandas/lib.pyx
@@ -813,7 +813,7 @@ def clean_index_list(list obj):
for i in range(n):
v = obj[i]
- if not (PyList_Check(v) or np.PyArray_Check(v)):
+ if not (PyList_Check(v) or np.PyArray_Check(v) or hasattr(v,'_data')):
all_arrays = 0
break
@@ -823,7 +823,7 @@ def clean_index_list(list obj):
converted = np.empty(n, dtype=object)
for i in range(n):
v = obj[i]
- if PyList_Check(v) or np.PyArray_Check(v):
+ if PyList_Check(v) or np.PyArray_Check(v) or hasattr(v,'_data'):
converted[i] = tuple(v)
else:
converted[i] = v
diff --git a/pandas/sparse/tests/test_array.py b/pandas/sparse/tests/test_array.py
index a12d1dfe70513..5227bb23ad616 100644
--- a/pandas/sparse/tests/test_array.py
+++ b/pandas/sparse/tests/test_array.py
@@ -4,7 +4,6 @@
import numpy as np
import operator
-import pickle
from pandas.core.series import Series
from pandas.core.common import notnull
@@ -169,8 +168,7 @@ def _check_inplace_op(op):
def test_pickle(self):
def _check_roundtrip(obj):
- pickled = pickle.dumps(obj)
- unpickled = pickle.loads(pickled)
+ unpickled = self.round_trip_pickle(obj)
assert_sp_array_equal(unpickled, obj)
_check_roundtrip(self.arr)
diff --git a/pandas/sparse/tests/test_sparse.py b/pandas/sparse/tests/test_sparse.py
index 475b8f93c10ef..105f661f08b10 100644
--- a/pandas/sparse/tests/test_sparse.py
+++ b/pandas/sparse/tests/test_sparse.py
@@ -21,7 +21,7 @@
import pandas.core.datetools as datetools
from pandas.core.common import isnull
import pandas.util.testing as tm
-from pandas.compat import range, lrange, cPickle as pickle, StringIO, lrange
+from pandas.compat import range, lrange, StringIO, lrange
from pandas import compat
import pandas.sparse.frame as spf
@@ -315,8 +315,7 @@ def test_kind(self):
def test_pickle(self):
def _test_roundtrip(series):
- pickled = pickle.dumps(series, protocol=pickle.HIGHEST_PROTOCOL)
- unpickled = pickle.loads(pickled)
+ unpickled = self.round_trip_pickle(series)
assert_sp_series_equal(series, unpickled)
assert_series_equal(series.to_dense(), unpickled.to_dense())
@@ -793,7 +792,10 @@ def test_copy(self):
cp = self.frame.copy()
tm.assert_isinstance(cp, SparseDataFrame)
assert_sp_frame_equal(cp, self.frame)
- self.assertTrue(cp.index.is_(self.frame.index))
+
+ # as of v0.15.0
+        # the index is identical in value, but no longer the same object (is_ is False)
+ self.assertTrue(cp.index.identical(self.frame.index))
def test_constructor(self):
for col, series in compat.iteritems(self.frame):
@@ -918,9 +920,8 @@ def test_array_interface(self):
def test_pickle(self):
def _test_roundtrip(frame):
- pickled = pickle.dumps(frame, protocol=pickle.HIGHEST_PROTOCOL)
- unpickled = pickle.loads(pickled)
- assert_sp_frame_equal(frame, unpickled)
+ result = self.round_trip_pickle(frame)
+ assert_sp_frame_equal(frame, result)
_test_roundtrip(SparseDataFrame())
self._check_all(_test_roundtrip)
@@ -1608,12 +1609,11 @@ def test_from_dict(self):
def test_pickle(self):
def _test_roundtrip(panel):
- pickled = pickle.dumps(panel, protocol=pickle.HIGHEST_PROTOCOL)
- unpickled = pickle.loads(pickled)
- tm.assert_isinstance(unpickled.items, Index)
- tm.assert_isinstance(unpickled.major_axis, Index)
- tm.assert_isinstance(unpickled.minor_axis, Index)
- assert_sp_panel_equal(panel, unpickled)
+ result = self.round_trip_pickle(panel)
+ tm.assert_isinstance(result.items, Index)
+ tm.assert_isinstance(result.major_axis, Index)
+ tm.assert_isinstance(result.minor_axis, Index)
+ assert_sp_panel_equal(panel, result)
_test_roundtrip(self.panel)
diff --git a/pandas/src/generate_code.py b/pandas/src/generate_code.py
index 842be5a1645bf..f7aede92d635d 100644
--- a/pandas/src/generate_code.py
+++ b/pandas/src/generate_code.py
@@ -55,6 +55,17 @@
else:
return np.array(arr, dtype=np.int_)
+cpdef ensure_object(object arr):
+ if util.is_array(arr):
+ if (<ndarray> arr).descr.type_num == NPY_OBJECT:
+ return arr
+ else:
+ return arr.astype(np.object_)
+ elif hasattr(arr,'asobject'):
+ return arr.asobject
+ else:
+ return np.array(arr, dtype=np.object_)
+
"""
@@ -2189,7 +2200,7 @@ def outer_join_indexer_%(name)s(ndarray[%(c_type)s] left,
('int32', 'INT32', 'int32'),
('int64', 'INT64', 'int64'),
# ('platform_int', 'INT', 'int_'),
- ('object', 'OBJECT', 'object_'),
+ #('object', 'OBJECT', 'object_'),
]
def generate_ensure_dtypes():
diff --git a/pandas/src/generated.pyx b/pandas/src/generated.pyx
index 97a34582d2ef2..50eefa5e783cf 100644
--- a/pandas/src/generated.pyx
+++ b/pandas/src/generated.pyx
@@ -49,6 +49,17 @@ cpdef ensure_platform_int(object arr):
else:
return np.array(arr, dtype=np.int_)
+cpdef ensure_object(object arr):
+ if util.is_array(arr):
+ if (<ndarray> arr).descr.type_num == NPY_OBJECT:
+ return arr
+ else:
+ return arr.astype(np.object_)
+ elif hasattr(arr,'asobject'):
+ return arr.asobject
+ else:
+ return np.array(arr, dtype=np.object_)
+
cpdef ensure_float64(object arr):
@@ -111,16 +122,6 @@ cpdef ensure_int64(object arr):
return np.array(arr, dtype=np.int64)
-cpdef ensure_object(object arr):
- if util.is_array(arr):
- if (<ndarray> arr).descr.type_num == NPY_OBJECT:
- return arr
- else:
- return arr.astype(np.object_)
- else:
- return np.array(arr, dtype=np.object_)
-
-
@cython.wraparound(False)
@cython.boundscheck(False)
cpdef map_indices_float64(ndarray[float64_t] index):
@@ -5932,7 +5933,7 @@ def group_mean_bin_float64(ndarray[float64_t, ndim=2] out,
for i in range(ngroups):
for j in range(K):
count = nobs[i, j]
- if nobs[i, j] == 0:
+ if count == 0:
out[i, j] = nan
else:
out[i, j] = sumx[i, j] / count
@@ -5985,7 +5986,7 @@ def group_mean_bin_float32(ndarray[float32_t, ndim=2] out,
for i in range(ngroups):
for j in range(K):
count = nobs[i, j]
- if nobs[i, j] == 0:
+ if count == 0:
out[i, j] = nan
else:
out[i, j] = sumx[i, j] / count
diff --git a/pandas/src/reduce.pyx b/pandas/src/reduce.pyx
index a22e7e636d7e4..add9a03642bed 100644
--- a/pandas/src/reduce.pyx
+++ b/pandas/src/reduce.pyx
@@ -13,7 +13,7 @@ cdef class Reducer:
'''
cdef:
Py_ssize_t increment, chunksize, nresults
- object arr, dummy, f, labels, typ, index
+ object arr, dummy, f, labels, typ, ityp, index
def __init__(self, object arr, object f, axis=1, dummy=None,
labels=None):
@@ -37,38 +37,34 @@ cdef class Reducer:
self.f = f
self.arr = arr
- self.typ = None
self.labels = labels
- self.dummy, index = self._check_dummy(dummy=dummy)
-
- self.labels = labels
- self.index = index
+ self.dummy, self.typ, self.index, self.ityp = self._check_dummy(dummy=dummy)
def _check_dummy(self, dummy=None):
- cdef object index
+ cdef object index=None, typ=None, ityp=None
if dummy is None:
dummy = np.empty(self.chunksize, dtype=self.arr.dtype)
- index = None
# our ref is stolen later since we are creating this array
# in cython, so increment first
Py_INCREF(dummy)
+
else:
+
# we passed a series-like
if hasattr(dummy,'values'):
- self.typ = type(dummy)
+ typ = type(dummy)
index = getattr(dummy,'index',None)
dummy = dummy.values
if dummy.dtype != self.arr.dtype:
raise ValueError('Dummy array must be same dtype')
if len(dummy) != self.chunksize:
- raise ValueError('Dummy array must be length %d' %
- self.chunksize)
+ raise ValueError('Dummy array must be length %d' % self.chunksize)
- return dummy, index
+ return dummy, typ, index, ityp
def get_result(self):
cdef:
@@ -76,21 +72,23 @@ cdef class Reducer:
ndarray arr, result, chunk
Py_ssize_t i, incr
flatiter it
+ bint has_labels
object res, name, labels, index
- object cached_typ = None
+ object cached_typ=None
arr = self.arr
chunk = self.dummy
dummy_buf = chunk.data
chunk.data = arr.data
labels = self.labels
- index = self.index
+ has_labels = labels is not None
+ has_index = self.index is not None
incr = self.increment
try:
for i in range(self.nresults):
- if labels is not None:
+ if has_labels:
name = util.get_value_at(labels, i)
else:
name = None
@@ -102,9 +100,9 @@ cdef class Reducer:
if self.typ is not None:
# recreate with the index if supplied
- if index is not None:
+ if has_index:
- cached_typ = self.typ(chunk, index=index, name=name)
+ cached_typ = self.typ(chunk, index=self.index, name=name)
else:
@@ -113,6 +111,10 @@ cdef class Reducer:
# use the cached_typ if possible
if cached_typ is not None:
+
+ if has_index:
+ object.__setattr__(cached_typ, 'index', self.index)
+
object.__setattr__(cached_typ._data._block, 'values', chunk)
object.__setattr__(cached_typ, 'name', name)
res = self.f(cached_typ)
@@ -121,7 +123,6 @@ cdef class Reducer:
if hasattr(res,'values'):
res = res.values
-
if i == 0:
result = self._get_result_array(res)
it = <flatiter> PyArray_IterNew(result)
@@ -163,7 +164,7 @@ cdef class SeriesBinGrouper:
bint passed_dummy
cdef public:
- object arr, index, dummy_arr, dummy_index, values, f, bins, typ, name
+ object arr, index, dummy_arr, dummy_index, values, f, bins, typ, ityp, name
def __init__(self, object series, object f, object bins, object dummy):
n = len(series)
@@ -175,8 +176,9 @@ cdef class SeriesBinGrouper:
if not values.flags.c_contiguous:
values = values.copy('C')
self.arr = values
- self.index = series.index
self.typ = type(series)
+ self.ityp = type(series.index)
+ self.index = series.index.values
self.name = getattr(series,'name',None)
self.dummy_arr, self.dummy_index = self._check_dummy(dummy)
@@ -189,6 +191,8 @@ cdef class SeriesBinGrouper:
self.ngroups = len(bins) + 1
def _check_dummy(self, dummy=None):
+        # both values and index must be ndarrays!
+
if dummy is None:
values = np.empty(0, dtype=self.arr.dtype)
index = None
@@ -198,7 +202,9 @@ cdef class SeriesBinGrouper:
raise ValueError('Dummy array must be same dtype')
if not values.flags.contiguous:
values = values.copy()
- index = dummy.index
+ index = dummy.index.values
+ if not index.flags.contiguous:
+ index = index.copy()
return values, index
@@ -210,8 +216,7 @@ cdef class SeriesBinGrouper:
object res
bint initialized = 0
Slider vslider, islider
- object gin, typ, name
- object cached_typ = None
+ object name, cached_typ=None, cached_ityp=None
counts = np.zeros(self.ngroups, dtype=np.int64)
@@ -230,8 +235,6 @@ cdef class SeriesBinGrouper:
vslider = Slider(self.arr, self.dummy_arr)
islider = Slider(self.index, self.dummy_index)
- gin = self.dummy_index._engine
-
try:
for i in range(self.ngroups):
group_size = counts[i]
@@ -240,13 +243,17 @@ cdef class SeriesBinGrouper:
vslider.set_length(group_size)
if cached_typ is None:
- cached_typ = self.typ(vslider.buf, index=islider.buf,
+ cached_ityp = self.ityp(islider.buf)
+ cached_typ = self.typ(vslider.buf, index=cached_ityp,
name=name)
else:
+ object.__setattr__(cached_ityp, '_data', islider.buf)
+ cached_ityp._engine.clear_mapping()
object.__setattr__(cached_typ._data._block, 'values', vslider.buf)
- object.__setattr__(cached_typ, '_index', islider.buf)
+ object.__setattr__(cached_typ, '_index', cached_ityp)
object.__setattr__(cached_typ, 'name', name)
+ cached_ityp._engine.clear_mapping()
res = self.f(cached_typ)
res = _extract_result(res)
if not initialized:
@@ -258,7 +265,6 @@ cdef class SeriesBinGrouper:
islider.advance(group_size)
vslider.advance(group_size)
- gin.clear_mapping()
except:
raise
finally:
@@ -292,7 +298,7 @@ cdef class SeriesGrouper:
bint passed_dummy
cdef public:
- object arr, index, dummy_arr, dummy_index, f, labels, values, typ, name
+ object arr, index, dummy_arr, dummy_index, f, labels, values, typ, ityp, name
def __init__(self, object series, object f, object labels,
Py_ssize_t ngroups, object dummy):
@@ -305,8 +311,9 @@ cdef class SeriesGrouper:
if not values.flags.c_contiguous:
values = values.copy('C')
self.arr = values
- self.index = series.index
self.typ = type(series)
+ self.ityp = type(series.index)
+ self.index = series.index.values
self.name = getattr(series,'name',None)
self.dummy_arr, self.dummy_index = self._check_dummy(dummy)
@@ -314,6 +321,8 @@ cdef class SeriesGrouper:
self.ngroups = ngroups
def _check_dummy(self, dummy=None):
+        # both values and index must be ndarrays!
+
if dummy is None:
values = np.empty(0, dtype=self.arr.dtype)
index = None
@@ -323,7 +332,9 @@ cdef class SeriesGrouper:
raise ValueError('Dummy array must be same dtype')
if not values.flags.contiguous:
values = values.copy()
- index = dummy.index
+ index = dummy.index.values
+ if not index.flags.contiguous:
+ index = index.copy()
return values, index
@@ -335,8 +346,7 @@ cdef class SeriesGrouper:
object res
bint initialized = 0
Slider vslider, islider
- object gin, typ, name
- object cached_typ = None
+ object name, cached_typ=None, cached_ityp=None
labels = self.labels
counts = np.zeros(self.ngroups, dtype=np.int64)
@@ -347,8 +357,6 @@ cdef class SeriesGrouper:
vslider = Slider(self.arr, self.dummy_arr)
islider = Slider(self.index, self.dummy_index)
- gin = self.dummy_index._engine
-
try:
for i in range(n):
group_size += 1
@@ -366,13 +374,17 @@ cdef class SeriesGrouper:
vslider.set_length(group_size)
if cached_typ is None:
- cached_typ = self.typ(vslider.buf, index=islider.buf,
+ cached_ityp = self.ityp(islider.buf)
+ cached_typ = self.typ(vslider.buf, index=cached_ityp,
name=name)
else:
+ object.__setattr__(cached_ityp, '_data', islider.buf)
+ cached_ityp._engine.clear_mapping()
object.__setattr__(cached_typ._data._block, 'values', vslider.buf)
- object.__setattr__(cached_typ, '_index', islider.buf)
+ object.__setattr__(cached_typ, '_index', cached_ityp)
object.__setattr__(cached_typ, 'name', name)
+ cached_ityp._engine.clear_mapping()
res = self.f(cached_typ)
res = _extract_result(res)
if not initialized:
@@ -386,8 +398,6 @@ cdef class SeriesGrouper:
group_size = 0
- gin.clear_mapping()
-
except:
raise
finally:
@@ -434,6 +444,7 @@ cdef class Slider:
def __init__(self, object values, object buf):
assert(values.ndim == 1)
+
if not values.flags.contiguous:
values = values.copy()
@@ -463,11 +474,11 @@ cdef class Slider:
self.buf.shape[0] = length
cpdef reset(self):
+
self.buf.shape[0] = self.orig_len
self.buf.data = self.orig_data
self.buf.strides[0] = self.orig_stride
-
class InvalidApply(Exception):
pass
@@ -488,7 +499,7 @@ def apply_frame_axis0(object frame, object f, object names,
# Need to infer if our low-level mucking is going to cause a segfault
if n > 0:
- chunk = frame[starts[0]:ends[0]]
+ chunk = frame.iloc[starts[0]:ends[0]]
shape_before = chunk.shape
try:
result = f(chunk)
@@ -497,17 +508,16 @@ def apply_frame_axis0(object frame, object f, object names,
except:
raise InvalidApply('Let this error raise above us')
+
slider = BlockSlider(frame)
mutated = False
item_cache = slider.dummy._item_cache
- gin = slider.dummy.index._engine # f7u12
try:
for i in range(n):
slider.move(starts[i], ends[i])
item_cache.clear() # ugh
- gin.clear_mapping()
object.__setattr__(slider.dummy, 'name', names[i])
piece = f(slider.dummy)
@@ -515,11 +525,12 @@ def apply_frame_axis0(object frame, object f, object names,
# I'm paying the price for index-sharing, ugh
try:
if piece.index is slider.dummy.index:
- piece = piece.copy()
+ piece = piece.copy(deep='all')
else:
mutated = True
except AttributeError:
pass
+
results.append(piece)
finally:
slider.reset()
@@ -532,7 +543,7 @@ cdef class BlockSlider:
'''
cdef public:
- object frame, dummy
+ object frame, dummy, index
int nblocks
Slider idx_slider
list blocks
@@ -543,6 +554,7 @@ cdef class BlockSlider:
def __init__(self, frame):
self.frame = frame
self.dummy = frame[:0]
+ self.index = self.dummy.index
self.blocks = [b.values for b in self.dummy._data.blocks]
@@ -550,7 +562,7 @@ cdef class BlockSlider:
util.set_array_not_contiguous(x)
self.nblocks = len(self.blocks)
- self.idx_slider = Slider(self.frame.index, self.dummy.index)
+ self.idx_slider = Slider(self.frame.index.values, self.dummy.index.values)
self.base_ptrs = <char**> malloc(sizeof(char*) * len(self.blocks))
for i, block in enumerate(self.blocks):
@@ -562,6 +574,7 @@ cdef class BlockSlider:
cpdef move(self, int start, int end):
cdef:
ndarray arr
+ object index
# move blocks
for i in range(self.nblocks):
@@ -571,13 +584,16 @@ cdef class BlockSlider:
arr.data = self.base_ptrs[i] + arr.strides[1] * start
arr.shape[1] = end - start
+ # move and set the index
self.idx_slider.move(start, end)
+ object.__setattr__(self.index,'_data',self.idx_slider.buf)
+ self.index._engine.clear_mapping()
cdef reset(self):
cdef:
ndarray arr
- # move blocks
+ # reset blocks
for i in range(self.nblocks):
arr = self.blocks[i]
@@ -585,12 +601,25 @@ cdef class BlockSlider:
arr.data = self.base_ptrs[i]
arr.shape[1] = 0
- self.idx_slider.reset()
-
-
def reduce(arr, f, axis=0, dummy=None, labels=None):
- if labels._has_complex_internals:
- raise Exception('Cannot use shortcut')
+ """
+
+    Parameters
+    ----------
+ arr : NDFrame object
+ f : function
+ axis : integer axis
+ dummy : type of reduced output (series)
+ labels : Index or None
+ """
+
+ if labels is not None:
+ if labels._has_complex_internals:
+ raise Exception('Cannot use shortcut')
+
+ # pass as an ndarray
+ if hasattr(labels,'values'):
+ labels = labels.values
reducer = Reducer(arr, f, axis=axis, dummy=dummy, labels=labels)
return reducer.get_result()
diff --git a/pandas/src/ujson/python/objToJSON.c b/pandas/src/ujson/python/objToJSON.c
index f6cb5b9803e25..c1e9f8edcf423 100644
--- a/pandas/src/ujson/python/objToJSON.c
+++ b/pandas/src/ujson/python/objToJSON.c
@@ -1537,7 +1537,7 @@ void Object_beginTypeContext (JSOBJ _obj, JSONTypeContext *tc)
PRINTMARK();
tc->type = JT_OBJECT;
pc->columnLabelsLen = PyArray_DIM(pc->newObj, 0);
- pc->columnLabels = NpyArr_encodeLabels((PyArrayObject*) PyObject_GetAttrString(obj, "index"), (JSONObjectEncoder*) enc, pc->columnLabelsLen);
+ pc->columnLabels = NpyArr_encodeLabels((PyArrayObject*) PyObject_GetAttrString(PyObject_GetAttrString(obj, "index"), "values"), (JSONObjectEncoder*) enc, pc->columnLabelsLen);
if (!pc->columnLabels)
{
goto INVALID;
@@ -1614,7 +1614,7 @@ void Object_beginTypeContext (JSOBJ _obj, JSONTypeContext *tc)
PRINTMARK();
tc->type = JT_ARRAY;
pc->columnLabelsLen = PyArray_DIM(pc->newObj, 1);
- pc->columnLabels = NpyArr_encodeLabels((PyArrayObject*) PyObject_GetAttrString(obj, "columns"), (JSONObjectEncoder*) enc, pc->columnLabelsLen);
+ pc->columnLabels = NpyArr_encodeLabels((PyArrayObject*) PyObject_GetAttrString(PyObject_GetAttrString(obj, "columns"), "values"), (JSONObjectEncoder*) enc, pc->columnLabelsLen);
if (!pc->columnLabels)
{
goto INVALID;
@@ -1632,7 +1632,7 @@ void Object_beginTypeContext (JSOBJ _obj, JSONTypeContext *tc)
goto INVALID;
}
pc->columnLabelsLen = PyArray_DIM(pc->newObj, 1);
- pc->columnLabels = NpyArr_encodeLabels((PyArrayObject*) PyObject_GetAttrString(obj, "columns"), (JSONObjectEncoder*) enc, pc->columnLabelsLen);
+ pc->columnLabels = NpyArr_encodeLabels((PyArrayObject*) PyObject_GetAttrString(PyObject_GetAttrString(obj, "columns"), "values"), (JSONObjectEncoder*) enc, pc->columnLabelsLen);
if (!pc->columnLabels)
{
NpyArr_freeLabels(pc->rowLabels, pc->rowLabelsLen);
@@ -1645,7 +1645,7 @@ void Object_beginTypeContext (JSOBJ _obj, JSONTypeContext *tc)
PRINTMARK();
tc->type = JT_OBJECT;
pc->rowLabelsLen = PyArray_DIM(pc->newObj, 1);
- pc->rowLabels = NpyArr_encodeLabels((PyArrayObject*) PyObject_GetAttrString(obj, "columns"), (JSONObjectEncoder*) enc, pc->rowLabelsLen);
+ pc->rowLabels = NpyArr_encodeLabels((PyArrayObject*) PyObject_GetAttrString(PyObject_GetAttrString(obj, "columns"), "values"), (JSONObjectEncoder*) enc, pc->rowLabelsLen);
if (!pc->rowLabels)
{
goto INVALID;
diff --git a/pandas/tests/test_algos.py b/pandas/tests/test_algos.py
index 6353ad53a88ef..fe070cff2e0ea 100644
--- a/pandas/tests/test_algos.py
+++ b/pandas/tests/test_algos.py
@@ -187,16 +187,17 @@ def test_object_refcount_bug(self):
len(algos.unique(lst))
def test_on_index_object(self):
+
mindex = pd.MultiIndex.from_arrays([np.arange(5).repeat(5),
np.tile(np.arange(5), 5)])
+ expected = mindex.values
+ expected.sort()
+
mindex = mindex.repeat(2)
result = pd.unique(mindex)
result.sort()
- expected = mindex.values
- expected.sort()
-
tm.assert_almost_equal(result, expected)
class TestValueCounts(tm.TestCase):
diff --git a/pandas/tests/test_base.py b/pandas/tests/test_base.py
index 9acb1804a7ef0..90a36228e816a 100644
--- a/pandas/tests/test_base.py
+++ b/pandas/tests/test_base.py
@@ -197,6 +197,32 @@ def setUp(self):
self.is_valid_objs = [ o for o in self.objs if o._allow_index_ops ]
self.not_valid_objs = [ o for o in self.objs if not o._allow_index_ops ]
+ def test_ndarray_compat_properties(self):
+
+ for o in self.objs:
+
+ # check that we work
+ for p in ['shape','dtype','base','flags','T',
+ 'strides','itemsize','nbytes']:
+ self.assertIsNotNone(getattr(o,p,None))
+
+ # if we have a datetimelike dtype then needs a view to work
+ # but the user is responsible for that
+ try:
+ self.assertIsNotNone(o.data)
+            except ValueError:
+ pass
+
+ # len > 1
+ self.assertRaises(ValueError, lambda : o.item())
+
+ self.assertTrue(o.ndim == 1)
+
+ self.assertTrue(o.size == len(o))
+
+ self.assertTrue(Index([1]).item() == 1)
+ self.assertTrue(Series([1]).item() == 1)
+
def test_ops(self):
tm._skip_if_not_numpy17_friendly()
for op in ['max','min']:
@@ -243,11 +269,13 @@ def test_value_counts_unique_nunique(self):
# create repeated values, 'n'th element is repeated by n+1 times
if isinstance(o, PeriodIndex):
# freq must be specified because repeat makes freq ambiguous
+ expected_index = o[::-1]
o = klass(np.repeat(values, range(1, len(o) + 1)), freq=o.freq)
else:
+ expected_index = values[::-1]
o = klass(np.repeat(values, range(1, len(o) + 1)))
- expected_s = Series(range(10, 0, -1), index=values[::-1], dtype='int64')
+ expected_s = Series(range(10, 0, -1), index=expected_index, dtype='int64')
tm.assert_series_equal(o.value_counts(), expected_s)
result = o.unique()
@@ -278,12 +306,14 @@ def test_value_counts_unique_nunique(self):
# create repeated values, 'n'th element is repeated by n+1 times
if isinstance(o, PeriodIndex):
+ expected_index = o
o = klass(np.repeat(values, range(1, len(o) + 1)), freq=o.freq)
else:
+ expected_index = values
o = klass(np.repeat(values, range(1, len(o) + 1)))
- expected_s_na = Series(list(range(10, 2, -1)) +[3], index=values[9:0:-1], dtype='int64')
- expected_s = Series(list(range(10, 2, -1)), index=values[9:1:-1], dtype='int64')
+ expected_s_na = Series(list(range(10, 2, -1)) +[3], index=expected_index[9:0:-1], dtype='int64')
+ expected_s = Series(list(range(10, 2, -1)), index=expected_index[9:1:-1], dtype='int64')
tm.assert_series_equal(o.value_counts(dropna=False), expected_s_na)
tm.assert_series_equal(o.value_counts(), expected_s)
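The `test_ndarray_compat_properties` test added above checks that, after the refactor, Index-like objects still expose ndarray attributes (`shape`, `dtype`, `ndim`, `size`, `item`, ...) even though they no longer subclass `ndarray`. A minimal sketch of how such properties can be delegated to an underlying `.values` array (names here are illustrative, not pandas' actual implementation):

```python
import numpy as np

class FakeIndex(object):
    """Sketch: delegate ndarray-compat properties to a wrapped .values array."""
    def __init__(self, data):
        self.values = np.asarray(data)

    @property
    def shape(self):
        return self.values.shape

    @property
    def dtype(self):
        return self.values.dtype

    @property
    def nbytes(self):
        return self.values.nbytes

    @property
    def ndim(self):
        # an Index is always 1-dimensional
        return 1

    @property
    def size(self):
        return self.values.size

    def item(self):
        # raises ValueError for len > 1, same as ndarray.item()
        return self.values.item()

    def __len__(self):
        return len(self.values)

idx = FakeIndex([1, 2, 3])
assert idx.shape == (3,) and idx.size == 3 and idx.ndim == 1
assert FakeIndex([1]).item() == 1
```

The delegation pattern keeps the public surface of the old ndarray subclass while freeing the class to subclass `PandasObject` instead.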
diff --git a/pandas/tests/test_frame.py b/pandas/tests/test_frame.py
index 88a86da27daf9..6a31f573951cd 100644
--- a/pandas/tests/test_frame.py
+++ b/pandas/tests/test_frame.py
@@ -16,7 +16,7 @@
from pandas.compat import(
map, zip, range, long, lrange, lmap, lzip,
- OrderedDict, cPickle as pickle, u, StringIO
+ OrderedDict, u, StringIO
)
from pandas import compat
@@ -3620,6 +3620,7 @@ def test_constructor_with_datetimes(self):
df = DataFrame()
df['a'] = i
assert_frame_equal(df, expected)
+
df = DataFrame( {'a' : i } )
assert_frame_equal(df, expected)
@@ -3925,14 +3926,14 @@ def test_array_interface(self):
assert_frame_equal(result, self.frame.apply(np.sqrt))
def test_pickle(self):
- unpickled = pickle.loads(pickle.dumps(self.mixed_frame))
+ unpickled = self.round_trip_pickle(self.mixed_frame)
assert_frame_equal(self.mixed_frame, unpickled)
# buglet
self.mixed_frame._data.ndim
# empty
- unpickled = pickle.loads(pickle.dumps(self.empty))
+ unpickled = self.round_trip_pickle(self.empty)
repr(unpickled)
def test_to_dict(self):
@@ -12578,6 +12579,7 @@ def test_empty_nonzero(self):
self.assertTrue(df.T.empty)
def test_any_all(self):
+
self._check_bool_op('any', np.any, has_skipna=True, has_bool_only=True)
self._check_bool_op('all', np.all, has_skipna=True, has_bool_only=True)
diff --git a/pandas/tests/test_graphics.py b/pandas/tests/test_graphics.py
index 5d9b43e48e3c1..8dbcb8c542fb3 100644
--- a/pandas/tests/test_graphics.py
+++ b/pandas/tests/test_graphics.py
@@ -1000,7 +1000,8 @@ def test_xcompat(self):
pd.plot_params['x_compat'] = False
ax = df.plot()
lines = ax.get_lines()
- tm.assert_isinstance(lines[0].get_xdata(), PeriodIndex)
+ self.assertNotIsInstance(lines[0].get_xdata(), PeriodIndex)
+ self.assertIsInstance(PeriodIndex(lines[0].get_xdata()), PeriodIndex)
tm.close()
# useful if you're plotting a bunch together
@@ -1012,7 +1013,8 @@ def test_xcompat(self):
tm.close()
ax = df.plot()
lines = ax.get_lines()
- tm.assert_isinstance(lines[0].get_xdata(), PeriodIndex)
+ self.assertNotIsInstance(lines[0].get_xdata(), PeriodIndex)
+ self.assertIsInstance(PeriodIndex(lines[0].get_xdata()), PeriodIndex)
def test_unsorted_index(self):
df = DataFrame({'y': np.arange(100)},
diff --git a/pandas/tests/test_groupby.py b/pandas/tests/test_groupby.py
index f958d5481ad33..8e9503b4fe1a3 100644
--- a/pandas/tests/test_groupby.py
+++ b/pandas/tests/test_groupby.py
@@ -2152,6 +2152,10 @@ def test_non_cython_api(self):
result = g.idxmax()
assert_frame_equal(result,expected)
+ def test_cython_api2(self):
+
+ # this takes the fast apply path
+
# cumsum (GH5614)
df = DataFrame([[1, 2, np.nan], [1, np.nan, 9], [3, 4, 9]], columns=['A', 'B', 'C'])
expected = DataFrame([[2, np.nan], [np.nan, 9], [4, 9]], columns=['B', 'C'])
@@ -2425,6 +2429,31 @@ def convert_force_pure(x):
self.assertEqual(result.dtype, np.object_)
tm.assert_isinstance(result[0], Decimal)
+ def test_fast_apply(self):
+ # make sure that fast apply is correctly called
+ # rather than raising any kind of error
+ # otherwise the python path will be called
+ # which slows things down
+ N = 1000
+ labels = np.random.randint(0, 2000, size=N)
+ labels2 = np.random.randint(0, 3, size=N)
+ df = DataFrame({'key': labels,
+ 'key2': labels2,
+ 'value1': np.random.randn(N),
+ 'value2': ['foo', 'bar', 'baz', 'qux'] * (N // 4)})
+ def f(g):
+ return 1
+
+ g = df.groupby(['key', 'key2'])
+
+ grouper = g.grouper
+
+ splitter = grouper._get_splitter(g._selected_obj, axis=g.axis)
+ group_keys = grouper._get_group_keys()
+
+ values, mutated = splitter.fast_apply(f, group_keys)
+ self.assertFalse(mutated)
+
def test_apply_with_mixed_dtype(self):
# GH3480, apply with mixed dtype on axis=1 breaks in 0.11
df = DataFrame({'foo1' : ['one', 'two', 'two', 'three', 'one', 'two'],
diff --git a/pandas/tests/test_index.py b/pandas/tests/test_index.py
index c32c7ddc55ced..5affdbe1c99aa 100644
--- a/pandas/tests/test_index.py
+++ b/pandas/tests/test_index.py
@@ -3,7 +3,6 @@
from datetime import datetime, timedelta
from pandas.compat import range, lrange, lzip, u, zip
import operator
-import pickle
import re
import nose
import warnings
@@ -12,9 +11,12 @@
import numpy as np
from numpy.testing import assert_array_equal
+from pandas import period_range, date_range
+
from pandas.core.index import (Index, Float64Index, Int64Index, MultiIndex,
- InvalidIndexError)
+ InvalidIndexError, NumericIndex)
from pandas.tseries.index import DatetimeIndex
+from pandas.tseries.period import PeriodIndex
from pandas.core.series import Series
from pandas.util.testing import (assert_almost_equal, assertRaisesRegexp,
assert_copy)
@@ -32,7 +34,48 @@
from pandas import _np_version_under1p7
-class TestIndex(tm.TestCase):
+class Base(object):
+ """ base class for index sub-class tests """
+ _holder = None
+
+ def verify_pickle(self,index):
+ unpickled = self.round_trip_pickle(index)
+ self.assertTrue(index.equals(unpickled))
+
+ def test_pickle_compat_construction(self):
+ # this is testing for pickle compat
+ if self._holder is None:
+ return
+
+ # need an object to create with
+ self.assertRaises(TypeError, self._holder)
+
+ def test_numeric_compat(self):
+
+ idx = self.create_index()
+ tm.assertRaisesRegexp(TypeError,
+ "cannot perform multiplication",
+ lambda : idx * 1)
+ tm.assertRaisesRegexp(TypeError,
+ "cannot perform multiplication",
+ lambda : 1 * idx)
+
+ div_err = "cannot perform true division" if compat.PY3 else "cannot perform division"
+ tm.assertRaisesRegexp(TypeError,
+ div_err,
+ lambda : idx / 1)
+ tm.assertRaisesRegexp(TypeError,
+ div_err,
+ lambda : 1 / idx)
+ tm.assertRaisesRegexp(TypeError,
+ "cannot perform floor division",
+ lambda : idx // 1)
+ tm.assertRaisesRegexp(TypeError,
+ "cannot perform floor division",
+ lambda : 1 // idx)
+
+class TestIndex(Base, tm.TestCase):
+ _holder = Index
_multiprocess_can_split_ = True
def setUp(self):
@@ -49,6 +92,9 @@ def setUp(self):
for name, ind in self.indices.items():
setattr(self, name, ind)
+ def create_index(self):
+ return Index(list('abcde'))
+
def test_wrong_number_names(self):
def testit(ind):
ind.names = ["apple", "banana", "carrot"]
@@ -123,7 +169,7 @@ def test_constructor(self):
# casting
arr = np.array(self.strIndex)
- index = arr.view(Index)
+ index = Index(arr)
tm.assert_contains_all(arr, index)
self.assert_numpy_array_equal(self.strIndex, index)
@@ -181,13 +227,12 @@ def __array__(self, dtype=None):
for array in [np.arange(5),
np.array(['a', 'b', 'c']),
- pd.date_range('2000-01-01', periods=3).values]:
+ date_range('2000-01-01', periods=3).values]:
expected = pd.Index(array)
result = pd.Index(ArrayLike(array))
self.assertTrue(result.equals(expected))
def test_index_ctor_infer_periodindex(self):
- from pandas import period_range, PeriodIndex
xp = period_range('2012-1-1', freq='M', periods=3)
rs = Index(xp)
assert_array_equal(rs, xp)
@@ -312,8 +357,9 @@ def test_is_(self):
self.assertFalse(ind.is_(ind[:]))
self.assertFalse(ind.is_(ind.view(np.ndarray).view(Index)))
self.assertFalse(ind.is_(np.array(range(10))))
+
# quasi-implementation dependent
- self.assertTrue(ind.is_(ind.view().base))
+ self.assertTrue(ind.is_(ind.view()))
ind2 = ind.view()
ind2.name = 'bob'
self.assertTrue(ind.is_(ind2))
@@ -366,8 +412,7 @@ def _check(op):
arr_result = op(arr, element)
index_result = op(index, element)
- tm.assert_isinstance(index_result, np.ndarray)
- self.assertNotIsInstance(index_result, Index)
+ self.assertIsInstance(index_result, np.ndarray)
self.assert_numpy_array_equal(arr_result, index_result)
_check(operator.eq)
@@ -617,6 +662,7 @@ def test_symmetric_diff(self):
idx2 = Index([0, 1, np.nan])
result = idx1.sym_diff(idx2)
# expected = Index([0.0, np.nan, 2.0, 3.0, np.nan])
+
nans = pd.isnull(result)
self.assertEqual(nans.sum(), 2)
self.assertEqual((~nans).sum(), 3)
@@ -639,21 +685,11 @@ def test_symmetric_diff(self):
idx1 - 1
def test_pickle(self):
- def testit(index):
- pickled = pickle.dumps(index)
- unpickled = pickle.loads(pickled)
-
- tm.assert_isinstance(unpickled, Index)
- self.assert_numpy_array_equal(unpickled, index)
- self.assertEqual(unpickled.name, index.name)
-
- # tm.assert_dict_equal(unpickled.indexMap, index.indexMap)
- testit(self.strIndex)
+ self.verify_pickle(self.strIndex)
self.strIndex.name = 'foo'
- testit(self.strIndex)
-
- testit(self.dateIndex)
+ self.verify_pickle(self.strIndex)
+ self.verify_pickle(self.dateIndex)
def test_is_numeric(self):
self.assertFalse(self.dateIndex.is_numeric())
@@ -902,9 +938,7 @@ def test_boolean_cmp(self):
idx = Index(values)
res = (idx == values)
- self.assertTrue(res.all())
- self.assertEqual(res.dtype, 'bool')
- self.assertNotIsInstance(res, Index)
+ self.assert_numpy_array_equal(res,np.array([True,True,True,True],dtype=bool))
def test_get_level_values(self):
result = self.strIndex.get_level_values(0)
@@ -951,13 +985,64 @@ def test_nan_first_take_datetime(self):
tm.assert_index_equal(res, exp)
-class TestFloat64Index(tm.TestCase):
+class Numeric(Base):
+
+ def test_numeric_compat(self):
+
+ idx = self._holder(np.arange(5,dtype='int64'))
+ didx = self._holder(np.arange(5,dtype='int64')**2)
+ result = idx * 1
+ tm.assert_index_equal(result, idx)
+
+ result = 1 * idx
+ tm.assert_index_equal(result, idx)
+
+ result = idx * idx
+ tm.assert_index_equal(result, didx)
+
+ result = idx / 1
+ tm.assert_index_equal(result, idx)
+
+ result = idx // 1
+ tm.assert_index_equal(result, idx)
+
+ result = idx * np.array(5,dtype='int64')
+ tm.assert_index_equal(result, self._holder(np.arange(5,dtype='int64')*5))
+
+ result = idx * np.arange(5,dtype='int64')
+ tm.assert_index_equal(result, didx)
+
+ result = idx * Series(np.arange(5,dtype='int64'))
+ tm.assert_index_equal(result, didx)
+
+ result = idx * Series(np.arange(5,dtype='float64')+0.1)
+ tm.assert_index_equal(result,
+ Float64Index(np.arange(5,dtype='float64')*(np.arange(5,dtype='float64')+0.1)))
+
+
+ # invalid
+ self.assertRaises(TypeError, lambda : idx * date_range('20130101',periods=5))
+ self.assertRaises(ValueError, lambda : idx * self._holder(np.arange(3)))
+ self.assertRaises(ValueError, lambda : idx * np.array([1,2]))
+
+ def test_ufunc_compat(self):
+ idx = self._holder(np.arange(5,dtype='int64'))
+ result = np.sin(idx)
+ expected = Float64Index(np.sin(np.arange(5,dtype='int64')))
+ tm.assert_index_equal(result, expected)
+
+class TestFloat64Index(Numeric, tm.TestCase):
+ _holder = Float64Index
_multiprocess_can_split_ = True
def setUp(self):
self.mixed = Float64Index([1.5, 2, 3, 4, 5])
self.float = Float64Index(np.arange(5) * 2.5)
+ def create_index(self):
+ return Float64Index(np.arange(5,dtype='float64'))
+
def test_hash_error(self):
with tm.assertRaisesRegexp(TypeError,
"unhashable type: %r" %
@@ -1095,12 +1180,16 @@ def test_astype_from_object(self):
tm.assert_index_equal(result, expected)
-class TestInt64Index(tm.TestCase):
+class TestInt64Index(Numeric, tm.TestCase):
+ _holder = Int64Index
_multiprocess_can_split_ = True
def setUp(self):
self.index = Int64Index(np.arange(0, 20, 2))
+ def create_index(self):
+ return Int64Index(np.arange(5,dtype='int64'))
+
def test_too_many_names(self):
def testit():
self.index.names = ["roger", "harold"]
@@ -1519,8 +1608,38 @@ def test_slice_keep_name(self):
idx = Int64Index([1, 2], name='asdf')
self.assertEqual(idx.name, idx[1:].name)
+class TestDatetimeIndex(Base, tm.TestCase):
+ _holder = DatetimeIndex
+ _multiprocess_can_split_ = True
+
+ def create_index(self):
+ return date_range('20130101',periods=5)
+
+ def test_pickle_compat_construction(self):
+ pass
+
+ def test_numeric_compat(self):
+ super(TestDatetimeIndex, self).test_numeric_compat()
+
+ if not (_np_version_under1p7 or compat.PY3_2):
+ for f in [lambda : np.timedelta64(1, 'D').astype('m8[ns]') * pd.date_range('2000-01-01', periods=3),
+ lambda : pd.date_range('2000-01-01', periods=3) * np.timedelta64(1, 'D').astype('m8[ns]') ]:
+ tm.assertRaisesRegexp(TypeError,
+ "cannot perform multiplication with this index type",
+ f)
-class TestMultiIndex(tm.TestCase):
+class TestPeriodIndex(Base, tm.TestCase):
+ _holder = PeriodIndex
+ _multiprocess_can_split_ = True
+
+ def create_index(self):
+ return period_range('20130101',periods=5,freq='D')
+
+ def test_pickle_compat_construction(self):
+ pass
+
+class TestMultiIndex(Base, tm.TestCase):
+ _holder = MultiIndex
_multiprocess_can_split_ = True
def setUp(self):
@@ -1534,6 +1653,9 @@ def setUp(self):
labels=[major_labels, minor_labels],
names=self.index_names, verify_integrity=False)
+ def create_index(self):
+ return self.index
+
def test_hash_error(self):
with tm.assertRaisesRegexp(TypeError,
"unhashable type: %r" %
@@ -1574,6 +1696,7 @@ def test_set_names_and_rename(self):
def test_set_levels(self):
+
# side note - you probably wouldn't want to use levels and labels
# directly like this - but it is possible.
levels, labels = self.index.levels, self.index.labels
@@ -1966,6 +2089,7 @@ def check_level_names(self, index, names):
self.assertEqual([level.name for level in index.levels], list(names))
def test_changing_names(self):
+
# names should be applied to levels
level_names = [level.name for level in self.index.levels]
self.check_level_names(self.index, self.index.names)
@@ -2015,6 +2139,7 @@ def test_from_arrays(self):
self.assertTrue(result.levels[1].equals(Index(['a','b'])))
def test_from_product(self):
+
first = ['foo', 'bar', 'buz']
second = ['a', 'b', 'c']
names = ['first', 'second']
@@ -2029,7 +2154,7 @@ def test_from_product(self):
self.assertEqual(result.names, names)
def test_from_product_datetimeindex(self):
- dt_index = pd.date_range('2000-01-01', periods=2)
+ dt_index = date_range('2000-01-01', periods=2)
mi = pd.MultiIndex.from_product([[1, 2], dt_index])
etalon = pd.lib.list_to_object_array([(1, pd.Timestamp('2000-01-01')),
(1, pd.Timestamp('2000-01-02')),
@@ -2108,23 +2233,12 @@ def test_iter(self):
('baz', 'two'), ('qux', 'one'), ('qux', 'two')]
self.assertEqual(result, expected)
- def test_pickle(self):
- pickled = pickle.dumps(self.index)
- unpickled = pickle.loads(pickled)
- self.assertTrue(self.index.equals(unpickled))
-
def test_legacy_pickle(self):
if compat.PY3:
- raise nose.SkipTest("doesn't work on Python 3")
-
- def curpath():
- pth, _ = os.path.split(os.path.abspath(__file__))
- return pth
+ raise nose.SkipTest("testing for legacy pickles not supported on py3")
- ppath = os.path.join(curpath(), 'data/multiindex_v1.pickle')
- obj = pickle.load(open(ppath, 'r'))
-
- self.assertTrue(obj._is_v1)
+ path = tm.get_data_path('multiindex_v1.pickle')
+ obj = pd.read_pickle(path)
obj2 = MultiIndex.from_tuples(obj.values)
self.assertTrue(obj.equals(obj2))
@@ -2140,11 +2254,10 @@ def curpath():
assert_almost_equal(exp, exp2)
def test_legacy_v2_unpickle(self):
- # 0.7.3 -> 0.8.0 format manage
- pth, _ = os.path.split(os.path.abspath(__file__))
- filepath = os.path.join(pth, 'data', 'mindex_073.pickle')
- obj = pd.read_pickle(filepath)
+ # 0.7.3 -> 0.8.0 format change
+ path = tm.get_data_path('mindex_073.pickle')
+ obj = pd.read_pickle(path)
obj2 = MultiIndex.from_tuples(obj.values)
self.assertTrue(obj.equals(obj2))
@@ -2562,6 +2675,7 @@ def test_identical(self):
self.assertTrue(mi.equals(mi4))
def test_is_(self):
+
mi = MultiIndex.from_tuples(lzip(range(10), range(10)))
self.assertTrue(mi.is_(mi))
self.assertTrue(mi.is_(mi.view()))
@@ -2571,6 +2685,7 @@ def test_is_(self):
mi2.names = ["A", "B"]
self.assertTrue(mi2.is_(mi))
self.assertTrue(mi.is_(mi2))
+
self.assertTrue(mi.is_(mi.set_names(["C", "D"])))
mi2 = mi.view()
mi2.set_names(["E", "F"], inplace=True)
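The `Base.test_numeric_compat` test added above expects non-numeric index types to raise `TypeError` with messages like "cannot perform multiplication". A hedged sketch of that blocking-arithmetic pattern (this is an illustration of the idea, not pandas' actual code):

```python
class NoArithIndex(object):
    """Sketch: disable arithmetic ops with an informative TypeError,
    as the refactored Index does for non-numeric sub-classes."""
    def __init__(self, data):
        self.data = list(data)

    def _disabled(self, opname):
        raise TypeError("cannot perform %s with this index type: %s"
                        % (opname, type(self).__name__))

    def __mul__(self, other):
        self._disabled("multiplication")

    __rmul__ = __mul__

    def __truediv__(self, other):
        self._disabled("true division")

    __rtruediv__ = __truediv__

    def __floordiv__(self, other):
        self._disabled("floor division")

    __rfloordiv__ = __floordiv__

idx = NoArithIndex(['a', 'b'])
try:
    idx * 1
except TypeError as e:
    assert "cannot perform multiplication" in str(e)
```

Numeric sub-classes (`Int64Index`, `Float64Index`) instead override these operators to apply the op to `.values` and re-wrap the result, which is what the `Numeric` mixin above exercises.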
diff --git a/pandas/tests/test_internals.py b/pandas/tests/test_internals.py
index 36dbced6eda8c..a523df4cc2461 100644
--- a/pandas/tests/test_internals.py
+++ b/pandas/tests/test_internals.py
@@ -9,7 +9,7 @@
from pandas.core.internals import *
import pandas.core.internals as internals
import pandas.util.testing as tm
-
+import pandas as pd
from pandas.util.testing import (
assert_almost_equal, assert_frame_equal, randn)
from pandas.compat import zip, u
@@ -182,12 +182,9 @@ def test_constructor(self):
self.assertEqual(int32block.dtype, np.int32)
def test_pickle(self):
- import pickle
def _check(blk):
- pickled = pickle.dumps(blk)
- unpickled = pickle.loads(pickled)
- assert_block_equal(blk, unpickled)
+ assert_block_equal(self.round_trip_pickle(blk), blk)
_check(self.fblock)
_check(self.cblock)
@@ -341,12 +338,8 @@ def test_contains(self):
self.assertNotIn('baz', self.mgr)
def test_pickle(self):
- import pickle
-
- pickled = pickle.dumps(self.mgr)
- mgr2 = pickle.loads(pickled)
- # same result
+ mgr2 = self.round_trip_pickle(self.mgr)
assert_frame_equal(DataFrame(self.mgr), DataFrame(mgr2))
# share ref_items
@@ -361,13 +354,13 @@ def test_pickle(self):
self.assertFalse(mgr2._known_consolidated)
def test_non_unique_pickle(self):
- import pickle
+
mgr = create_mgr('a,a,a:f8')
- mgr2 = pickle.loads(pickle.dumps(mgr))
+ mgr2 = self.round_trip_pickle(mgr)
assert_frame_equal(DataFrame(mgr), DataFrame(mgr2))
mgr = create_mgr('a: f8; a: i8')
- mgr2 = pickle.loads(pickle.dumps(mgr))
+ mgr2 = self.round_trip_pickle(mgr)
assert_frame_equal(DataFrame(mgr), DataFrame(mgr2))
def test_get_scalar(self):
diff --git a/pandas/tests/test_multilevel.py b/pandas/tests/test_multilevel.py
index 6d0d7aaf37b02..ed078ae5749de 100644
--- a/pandas/tests/test_multilevel.py
+++ b/pandas/tests/test_multilevel.py
@@ -14,7 +14,7 @@
assertRaisesRegexp)
import pandas.core.common as com
import pandas.util.testing as tm
-from pandas.compat import (range, lrange, StringIO, lzip, u, cPickle,
+from pandas.compat import (range, lrange, StringIO, lzip, u,
product as cart_product, zip)
import pandas as pd
@@ -181,8 +181,7 @@ def _check_op(opname):
def test_pickle(self):
def _test_roundtrip(frame):
- pickled = cPickle.dumps(frame)
- unpickled = cPickle.loads(pickled)
+ unpickled = self.round_trip_pickle(frame)
assert_frame_equal(frame, unpickled)
_test_roundtrip(self.frame)
@@ -445,6 +444,7 @@ def test_xs(self):
]
df = DataFrame(acc, columns=['a1','a2','cnt']).set_index(['a1','a2'])
expected = DataFrame({ 'cnt' : [24,26,25,26] }, index=Index(['xbcde',np.nan,'zbcde','ybcde'],name='a2'))
+
result = df.xs('z',level='a1')
assert_frame_equal(result, expected)
@@ -2106,13 +2106,13 @@ def test_reset_index_datetime(self):
idx1 = pd.date_range('1/1/2011', periods=5, freq='D', tz=tz, name='idx1')
idx2 = pd.Index(range(5), name='idx2',dtype='int64')
idx = pd.MultiIndex.from_arrays([idx1, idx2])
- df = pd.DataFrame({'a': np.arange(5,dtype='int64'), 'b': ['A', 'B', 'C', 'D', 'E']}, index=idx)
+ df = pd.DataFrame({'a': np.arange(5,dtype='int64'), 'b': ['A', 'B', 'C', 'D', 'E']}, index=idx)
expected = pd.DataFrame({'idx1': [datetime.datetime(2011, 1, 1),
datetime.datetime(2011, 1, 2),
datetime.datetime(2011, 1, 3),
datetime.datetime(2011, 1, 4),
- datetime.datetime(2011, 1, 5)],
+ datetime.datetime(2011, 1, 5)],
'idx2': np.arange(5,dtype='int64'),
'a': np.arange(5,dtype='int64'), 'b': ['A', 'B', 'C', 'D', 'E']},
columns=['idx1', 'idx2', 'a', 'b'])
@@ -2122,19 +2122,19 @@ def test_reset_index_datetime(self):
idx3 = pd.date_range('1/1/2012', periods=5, freq='MS', tz='Europe/Paris', name='idx3')
idx = pd.MultiIndex.from_arrays([idx1, idx2, idx3])
- df = pd.DataFrame({'a': np.arange(5,dtype='int64'), 'b': ['A', 'B', 'C', 'D', 'E']}, index=idx)
+ df = pd.DataFrame({'a': np.arange(5,dtype='int64'), 'b': ['A', 'B', 'C', 'D', 'E']}, index=idx)
expected = pd.DataFrame({'idx1': [datetime.datetime(2011, 1, 1),
datetime.datetime(2011, 1, 2),
datetime.datetime(2011, 1, 3),
datetime.datetime(2011, 1, 4),
- datetime.datetime(2011, 1, 5)],
+ datetime.datetime(2011, 1, 5)],
'idx2': np.arange(5,dtype='int64'),
'idx3': [datetime.datetime(2012, 1, 1),
datetime.datetime(2012, 2, 1),
datetime.datetime(2012, 3, 1),
datetime.datetime(2012, 4, 1),
- datetime.datetime(2012, 5, 1)],
+ datetime.datetime(2012, 5, 1)],
'a': np.arange(5,dtype='int64'), 'b': ['A', 'B', 'C', 'D', 'E']},
columns=['idx1', 'idx2', 'idx3', 'a', 'b'])
expected['idx1'] = expected['idx1'].apply(lambda d: pd.Timestamp(d, tz=tz))
@@ -2148,7 +2148,7 @@ def test_reset_index_datetime(self):
expected = pd.DataFrame({'level_0': 'a a a b b b'.split(),
'level_1': [datetime.datetime(2013, 1, 1),
datetime.datetime(2013, 1, 2),
- datetime.datetime(2013, 1, 3)] * 2,
+ datetime.datetime(2013, 1, 3)] * 2,
'a': np.arange(6, dtype='int64')},
columns=['level_0', 'level_1', 'a'])
expected['level_1'] = expected['level_1'].apply(lambda d: pd.Timestamp(d, offset='D', tz=tz))
diff --git a/pandas/tests/test_panel.py b/pandas/tests/test_panel.py
index f8798e794d22c..fb1f1c1693fdd 100644
--- a/pandas/tests/test_panel.py
+++ b/pandas/tests/test_panel.py
@@ -12,7 +12,7 @@
from pandas.core.series import remove_na
import pandas.core.common as com
from pandas import compat
-from pandas.compat import range, lrange, StringIO, cPickle, OrderedDict
+from pandas.compat import range, lrange, StringIO, OrderedDict
from pandas.util.testing import (assert_panel_equal,
assert_frame_equal,
@@ -31,8 +31,7 @@ class PanelTests(object):
panel = None
def test_pickle(self):
- pickled = cPickle.dumps(self.panel)
- unpickled = cPickle.loads(pickled)
+ unpickled = self.round_trip_pickle(self.panel)
assert_frame_equal(unpickled['ItemA'], self.panel['ItemA'])
def test_cumsum(self):
diff --git a/pandas/tests/test_series.py b/pandas/tests/test_series.py
index fcd4b89377176..01e9e15585fc0 100644
--- a/pandas/tests/test_series.py
+++ b/pandas/tests/test_series.py
@@ -27,7 +27,7 @@
import pandas.core.datetools as datetools
import pandas.core.nanops as nanops
-from pandas.compat import StringIO, lrange, range, zip, u, OrderedDict, long
+from pandas.compat import StringIO, lrange, range, zip, u, OrderedDict, long, PY3_2
from pandas import compat
from pandas.util.testing import (assert_series_equal,
assert_almost_equal,
@@ -61,6 +61,7 @@ def test_copy_index_name_checking(self):
self.ts.index.name = None
self.assertIsNone(self.ts.index.name)
self.assertIs(self.ts, self.ts)
+
cp = self.ts.copy()
cp.index.name = 'foo'
com.pprint_thing(self.ts.index.name)
@@ -1867,7 +1868,7 @@ def test_timeseries_periodindex(self):
from pandas import period_range
prng = period_range('1/1/2011', '1/1/2012', freq='M')
ts = Series(np.random.randn(len(prng)), prng)
- new_ts = pickle.loads(pickle.dumps(ts))
+ new_ts = self.round_trip_pickle(ts)
self.assertEqual(new_ts.index.freq, 'M')
def test_iter(self):
@@ -5232,9 +5233,15 @@ def test_align_sameindex(self):
# self.assertIsNot(b.index, self.ts.index)
def test_reindex(self):
+
identity = self.series.reindex(self.series.index)
- self.assertTrue(np.may_share_memory(self.series.index, identity.index))
+
+ # the older numpies / 3.2 call __array_interface__ which we don't define
+ if not _np_version_under1p7 and not PY3_2:
+ self.assertTrue(np.may_share_memory(self.series.index, identity.index))
+
self.assertTrue(identity.index.is_(self.series.index))
+ self.assertTrue(identity.index.identical(self.series.index))
subIndex = self.series.index[10:20]
subSeries = self.series.reindex(subIndex)
@@ -6083,7 +6090,7 @@ def test_unique_data_ownership(self):
# it works! #1807
Series(Series(["a", "c", "b"]).unique()).sort()
-
if __name__ == '__main__':
nose.runmodule(argv=[__file__, '-vvs', '-x', '--pdb', '--pdb-failure'],
exit=False)
+
diff --git a/pandas/tests/test_tseries.py b/pandas/tests/test_tseries.py
index d5f7a536f9fe8..5c26fce2b111e 100644
--- a/pandas/tests/test_tseries.py
+++ b/pandas/tests/test_tseries.py
@@ -32,7 +32,7 @@ def test_backfill(self):
old = Index([1, 5, 10])
new = Index(lrange(12))
- filler = algos.backfill_int64(old, new)
+ filler = algos.backfill_int64(old.values, new.values)
expect_filler = [0, 0, 1, 1, 1, 1, 2, 2, 2, 2, 2, -1]
self.assert_numpy_array_equal(filler, expect_filler)
@@ -40,7 +40,7 @@ def test_backfill(self):
# corner case
old = Index([1, 4])
new = Index(lrange(5, 10))
- filler = algos.backfill_int64(old, new)
+ filler = algos.backfill_int64(old.values, new.values)
expect_filler = [-1, -1, -1, -1, -1]
self.assert_numpy_array_equal(filler, expect_filler)
@@ -49,7 +49,7 @@ def test_pad(self):
old = Index([1, 5, 10])
new = Index(lrange(12))
- filler = algos.pad_int64(old, new)
+ filler = algos.pad_int64(old.values, new.values)
expect_filler = [-1, 0, 0, 0, 0, 1, 1, 1, 1, 1, 2, 2]
self.assert_numpy_array_equal(filler, expect_filler)
@@ -57,7 +57,7 @@ def test_pad(self):
# corner case
old = Index([5, 10])
new = Index(lrange(5))
- filler = algos.pad_int64(old, new)
+ filler = algos.pad_int64(old.values, new.values)
expect_filler = [-1, -1, -1, -1, -1]
self.assert_numpy_array_equal(filler, expect_filler)
@@ -165,7 +165,7 @@ def test_left_join_indexer2():
idx = Index([1, 1, 2, 5])
idx2 = Index([1, 2, 5, 7, 9])
- res, lidx, ridx = algos.left_join_indexer_int64(idx2, idx)
+ res, lidx, ridx = algos.left_join_indexer_int64(idx2.values, idx.values)
exp_res = np.array([1, 1, 2, 5, 7, 9], dtype=np.int64)
assert_almost_equal(res, exp_res)
@@ -181,7 +181,7 @@ def test_outer_join_indexer2():
idx = Index([1, 1, 2, 5])
idx2 = Index([1, 2, 5, 7, 9])
- res, lidx, ridx = algos.outer_join_indexer_int64(idx2, idx)
+ res, lidx, ridx = algos.outer_join_indexer_int64(idx2.values, idx.values)
exp_res = np.array([1, 1, 2, 5, 7, 9], dtype=np.int64)
assert_almost_equal(res, exp_res)
@@ -197,7 +197,7 @@ def test_inner_join_indexer2():
idx = Index([1, 1, 2, 5])
idx2 = Index([1, 2, 5, 7, 9])
- res, lidx, ridx = algos.inner_join_indexer_int64(idx2, idx)
+ res, lidx, ridx = algos.inner_join_indexer_int64(idx2.values, idx.values)
exp_res = np.array([1, 1, 2, 5], dtype=np.int64)
assert_almost_equal(res, exp_res)
@@ -690,6 +690,10 @@ def test_int_index(self):
expected = arr.sum(1)
assert_almost_equal(result, expected)
+ result = lib.reduce(arr, np.sum, axis=1,
+ dummy=dummy, labels=Index(np.arange(100)))
+ assert_almost_equal(result, expected)
+
class TestTsUtil(tm.TestCase):
def test_min_valid(self):
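The `test_pad` / `test_backfill` changes above now pass `old.values` / `new.values` because the cython routines expect plain int64 ndarrays rather than Index objects. The fill-indexer semantics those tests assert can be sketched with `np.searchsorted` (an assumption: this mirrors the expected outputs, it is not the actual `algos.pad_int64` / `algos.backfill_int64` implementation):

```python
import numpy as np

def pad_indexer(old, new):
    """For each value in sorted `new`, index of the last sorted `old`
    value <= it, or -1 if none (forward-fill semantics)."""
    old = np.asarray(old, dtype='int64')
    new = np.asarray(new, dtype='int64')
    return np.searchsorted(old, new, side='right') - 1

def backfill_indexer(old, new):
    """For each value in sorted `new`, index of the first sorted `old`
    value >= it, or -1 if none (backward-fill semantics)."""
    old = np.asarray(old, dtype='int64')
    new = np.asarray(new, dtype='int64')
    idx = np.searchsorted(old, new, side='left')
    idx[idx == len(old)] = -1
    return idx

# mirrors the expectations in test_pad / test_backfill above
assert pad_indexer([1, 5, 10], range(12)).tolist() == \
    [-1, 0, 0, 0, 0, 1, 1, 1, 1, 1, 2, 2]
assert backfill_indexer([1, 5, 10], range(12)).tolist() == \
    [0, 0, 1, 1, 1, 1, 2, 2, 2, 2, 2, -1]
```

Both helpers take plain ndarrays, which is exactly why the tests must now unwrap the Index with `.values` first.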
diff --git a/pandas/tools/pivot.py b/pandas/tools/pivot.py
index ada13d6f4bccb..83df908d8033f 100644
--- a/pandas/tools/pivot.py
+++ b/pandas/tools/pivot.py
@@ -3,7 +3,7 @@
import warnings
from pandas import Series, DataFrame
-from pandas.core.index import MultiIndex
+from pandas.core.index import MultiIndex, Index
from pandas.core.groupby import Grouper
from pandas.tools.merge import concat
from pandas.tools.util import cartesian_product
@@ -307,7 +307,7 @@ def _all_key():
def _convert_by(by):
if by is None:
by = []
- elif (np.isscalar(by) or isinstance(by, (np.ndarray, Series, Grouper))
+ elif (np.isscalar(by) or isinstance(by, (np.ndarray, Index, Series, Grouper))
or hasattr(by, '__call__')):
by = [by]
else:
diff --git a/pandas/tools/plotting.py b/pandas/tools/plotting.py
index 8f79f14cd551a..5d85b68234f96 100644
--- a/pandas/tools/plotting.py
+++ b/pandas/tools/plotting.py
@@ -12,7 +12,7 @@
from pandas.util.decorators import cache_readonly, deprecate_kwarg
import pandas.core.common as com
from pandas.core.generic import _shared_docs, _shared_doc_kwargs
-from pandas.core.index import MultiIndex
+from pandas.core.index import Index, MultiIndex
from pandas.core.series import Series, remove_na
from pandas.tseries.index import DatetimeIndex
from pandas.tseries.period import PeriodIndex, Period
@@ -821,7 +821,7 @@ def __init__(self, data, kind=None, by=None, subplots=False, sharex=True,
for kw, err in zip(['xerr', 'yerr'], [xerr, yerr]):
self.errors[kw] = self._parse_errorbars(kw, err)
- if not isinstance(secondary_y, (bool, tuple, list, np.ndarray)):
+ if not isinstance(secondary_y, (bool, tuple, list, np.ndarray, Index)):
secondary_y = [secondary_y]
self.secondary_y = secondary_y
@@ -872,7 +872,7 @@ def _iter_data(self, data=None, keep_index=False):
data = self.data
from pandas.core.frame import DataFrame
- if isinstance(data, (Series, np.ndarray)):
+ if isinstance(data, (Series, np.ndarray, Index)):
if keep_index is True:
yield self.label, data
else:
@@ -1223,7 +1223,7 @@ def on_right(self, i):
return self.secondary_y
if (isinstance(self.data, DataFrame) and
- isinstance(self.secondary_y, (tuple, list, np.ndarray))):
+ isinstance(self.secondary_y, (tuple, list, np.ndarray, Index))):
return self.data.columns[i] in self.secondary_y
def _get_style(self, i, col_name):
@@ -2485,7 +2485,7 @@ def hist_frame(data, column=None, by=None, grid=True, xlabelsize=None,
return axes
if column is not None:
- if not isinstance(column, (list, np.ndarray)):
+ if not isinstance(column, (list, np.ndarray, Index)):
column = [column]
data = data[column]
data = data._get_numeric_data()
@@ -2962,7 +2962,7 @@ def _subplots(nrows=1, ncols=1, naxes=None, sharex=False, sharey=False, squeeze=
axarr[i] = ax
if nplots > 1:
-
+
if sharex and nrows > 1:
for ax in axarr[:naxes][:-ncols]: # only bottom row
for label in ax.get_xticklabels():
@@ -3015,7 +3015,7 @@ def _subplots(nrows=1, ncols=1, naxes=None, sharex=False, sharey=False, squeeze=
def _flatten(axes):
if not com.is_list_like(axes):
axes = [axes]
- elif isinstance(axes, np.ndarray):
+ elif isinstance(axes, (np.ndarray, Index)):
axes = axes.ravel()
return axes
diff --git a/pandas/tools/tests/test_pivot.py b/pandas/tools/tests/test_pivot.py
index a16df00351d76..7e52c8c333dbf 100644
--- a/pandas/tools/tests/test_pivot.py
+++ b/pandas/tools/tests/test_pivot.py
@@ -517,10 +517,10 @@ def test_pivot_datetime_tz(self):
exp_col3 = pd.DatetimeIndex(['2013-01-01 15:00:00', '2013-02-01 15:00:00'] * 4,
tz='Asia/Tokyo', name='dt2')
exp_col = MultiIndex.from_arrays([exp_col1, exp_col2, exp_col3])
- expected = DataFrame(np.array([[0, 3, 1, 2, 0, 3, 1, 2],
+ expected = DataFrame(np.array([[0, 3, 1, 2, 0, 3, 1, 2],
[1, 4, 2, 1, 1, 4, 2, 1],
- [2, 5, 1, 2, 2, 5, 1, 2]], dtype='int64'),
- index=exp_idx,
+ [2, 5, 1, 2, 2, 5, 1, 2]], dtype='int64'),
+ index=exp_idx,
columns=exp_col)
result = pivot_table(df, index=['dt1'], columns=['dt2'], values=['value1', 'value2'],
diff --git a/pandas/tseries/converter.py b/pandas/tseries/converter.py
index 80ac97ee60617..b014e718d5411 100644
--- a/pandas/tseries/converter.py
+++ b/pandas/tseries/converter.py
@@ -59,7 +59,7 @@ def convert(value, unit, axis):
return time2num(value)
if isinstance(value, Index):
return value.map(time2num)
- if isinstance(value, (list, tuple, np.ndarray)):
+ if isinstance(value, (list, tuple, np.ndarray, Index)):
return [time2num(x) for x in value]
return value
@@ -116,8 +116,8 @@ def convert(values, units, axis):
return values.asfreq(axis.freq).values
if isinstance(values, Index):
return values.map(lambda x: get_datevalue(x, axis.freq))
- if isinstance(values, (list, tuple, np.ndarray)):
- return [get_datevalue(x, axis.freq) for x in values]
+ if isinstance(values, (list, tuple, np.ndarray, Index)):
+ return PeriodIndex(values, freq=axis.freq).values
return values
@@ -127,7 +127,7 @@ def get_datevalue(date, freq):
elif isinstance(date, (str, datetime, pydt.date, pydt.time)):
return Period(date, freq).ordinal
elif (com.is_integer(date) or com.is_float(date) or
- (isinstance(date, np.ndarray) and (date.size == 1))):
+ (isinstance(date, (np.ndarray, Index)) and (date.size == 1))):
return date
elif date is None:
return None
@@ -145,7 +145,7 @@ def _dt_to_float_ordinal(dt):
preserving hours, minutes, seconds and microseconds. Return value
is a :func:`float`.
"""
- if isinstance(dt, (np.ndarray, Series)) and com.is_datetime64_ns_dtype(dt):
+ if isinstance(dt, (np.ndarray, Index, Series)) and com.is_datetime64_ns_dtype(dt):
base = dates.epoch2num(dt.asi8 / 1.0E9)
else:
base = dates.date2num(dt)
@@ -171,7 +171,9 @@ def try_parse(values):
return values
elif isinstance(values, compat.string_types):
return try_parse(values)
- elif isinstance(values, (list, tuple, np.ndarray)):
+ elif isinstance(values, (list, tuple, np.ndarray, Index)):
+ if isinstance(values, Index):
+ values = values.values
if not isinstance(values, np.ndarray):
values = com._asarray_tuplesafe(values)
diff --git a/pandas/tseries/index.py b/pandas/tseries/index.py
index fb87e1b570985..3ada26a7e5779 100644
--- a/pandas/tseries/index.py
+++ b/pandas/tseries/index.py
@@ -59,10 +59,10 @@ def f(self):
def _join_i8_wrapper(joinf, with_indexers=True):
@staticmethod
def wrapper(left, right):
- if isinstance(left, (np.ndarray, ABCSeries)):
- left = left.view('i8', type=np.ndarray)
- if isinstance(right, (np.ndarray, ABCSeries)):
- right = right.view('i8', type=np.ndarray)
+ if isinstance(left, (np.ndarray, Index, ABCSeries)):
+ left = left.view('i8')
+ if isinstance(right, (np.ndarray, Index, ABCSeries)):
+ right = right.view('i8')
results = joinf(left, right)
if with_indexers:
join_index, left_indexer, right_indexer = results
@@ -86,9 +86,10 @@ def wrapper(self, other):
else:
if isinstance(other, list):
other = DatetimeIndex(other)
- elif not isinstance(other, (np.ndarray, ABCSeries)):
+ elif not isinstance(other, (np.ndarray, Index, ABCSeries)):
other = _ensure_datetime64(other)
result = func(other)
+ result = _values_from_object(result)
if isinstance(other, Index):
o_mask = other.values.view('i8') == tslib.iNaT
@@ -101,7 +102,11 @@ def wrapper(self, other):
mask = self.asi8 == tslib.iNaT
if mask.any():
result[mask] = nat_result
- return result.view(np.ndarray)
+
+ # support of bool dtype indexers
+ if com.is_bool_dtype(result):
+ return result
+ return Index(result)
return wrapper
@@ -143,8 +148,9 @@ class DatetimeIndex(DatetimeIndexOpsMixin, Int64Index):
name : object
Name to be stored in the index
"""
- _join_precedence = 10
+ _typ = 'datetimeindex'
+ _join_precedence = 10
_inner_indexer = _join_i8_wrapper(_algos.inner_join_indexer_int64)
_outer_indexer = _join_i8_wrapper(_algos.outer_join_indexer_int64)
_left_indexer = _join_i8_wrapper(_algos.left_join_indexer_int64)
@@ -167,17 +173,19 @@ class DatetimeIndex(DatetimeIndexOpsMixin, Int64Index):
tz = None
offset = None
_comparables = ['name','freqstr','tz']
+ _attributes = ['name','freq','tz']
_allow_datetime_index_ops = True
+ _is_numeric_dtype = False
def __new__(cls, data=None,
freq=None, start=None, end=None, periods=None,
copy=False, name=None, tz=None,
verify_integrity=True, normalize=False,
- closed=None, **kwds):
+ closed=None, **kwargs):
- dayfirst = kwds.pop('dayfirst', None)
- yearfirst = kwds.pop('yearfirst', None)
- infer_dst = kwds.pop('infer_dst', False)
+ dayfirst = kwargs.pop('dayfirst', None)
+ yearfirst = kwargs.pop('yearfirst', None)
+ infer_dst = kwargs.pop('infer_dst', False)
freq_infer = False
if not isinstance(freq, DateOffset):
@@ -205,7 +213,7 @@ def __new__(cls, data=None,
tz=tz, normalize=normalize, closed=closed,
infer_dst=infer_dst)
- if not isinstance(data, (np.ndarray, ABCSeries)):
+ if not isinstance(data, (np.ndarray, Index, ABCSeries)):
if np.isscalar(data):
raise ValueError('DatetimeIndex() must be called with a '
'collection of some kind, %s was passed'
@@ -262,7 +270,7 @@ def __new__(cls, data=None,
else:
subarr = data.view(_NS_DTYPE)
else:
- if isinstance(data, ABCSeries):
+ if isinstance(data, (ABCSeries, Index)):
values = data.values
else:
values = data
@@ -302,10 +310,7 @@ def __new__(cls, data=None,
subarr = subarr.view(_NS_DTYPE)
- subarr = subarr.view(cls)
- subarr.name = name
- subarr.offset = freq
- subarr.tz = tz
+ subarr = cls._simple_new(subarr, name=name, freq=freq, tz=tz)
if verify_integrity and len(subarr) > 0:
if freq is not None and not freq_infer:
@@ -442,10 +447,7 @@ def _generate(cls, start, end, periods, name, offset,
infer_dst=infer_dst)
index = index.view(_NS_DTYPE)
- index = index.view(cls)
- index.name = name
- index.offset = offset
- index.tz = tz
+ index = cls._simple_new(index, name=name, freq=offset, tz=tz)
if not left_closed:
index = index[1:]
@@ -474,15 +476,18 @@ def _local_timestamps(self):
return result.take(reverse)
@classmethod
- def _simple_new(cls, values, name, freq=None, tz=None):
+ def _simple_new(cls, values, name=None, freq=None, tz=None):
+ if not getattr(values,'dtype',None):
+ values = np.array(values,copy=False)
if values.dtype != _NS_DTYPE:
values = com._ensure_int64(values).view(_NS_DTYPE)
- result = values.view(cls)
+ result = object.__new__(cls)
+ result._data = values
result.name = name
result.offset = freq
result.tz = tslib.maybe_get_tz(tz)
-
+ result._reset_identity()
return result
@property
@@ -517,7 +522,7 @@ def _cached_range(cls, start=None, end=None, periods=None, offset=None,
arr = tools.to_datetime(list(xdr), box=False)
- cachedRange = arr.view(DatetimeIndex)
+ cachedRange = DatetimeIndex._simple_new(arr)
cachedRange.offset = offset
cachedRange.tz = None
cachedRange.name = None
@@ -575,29 +580,37 @@ def _formatter_func(self):
formatter = _get_format_datetime64(is_dates_only=self._is_dates_only)
return lambda x: formatter(x, tz=self.tz)
- def __reduce__(self):
- """Necessary for making this object picklable"""
- object_state = list(np.ndarray.__reduce__(self))
- subclass_state = self.name, self.offset, self.tz
- object_state[2] = (object_state[2], subclass_state)
- return tuple(object_state)
-
def __setstate__(self, state):
"""Necessary for making this object picklable"""
- if len(state) == 2:
- nd_state, own_state = state
- self.name = own_state[0]
- self.offset = own_state[1]
- self.tz = own_state[2]
- np.ndarray.__setstate__(self, nd_state)
-
- # provide numpy < 1.7 compat
- if nd_state[2] == 'M8[us]':
- new_state = np.ndarray.__reduce__(self.values.astype('M8[ns]'))
- np.ndarray.__setstate__(self, new_state[2])
+ if isinstance(state, dict):
+ super(DatetimeIndex, self).__setstate__(state)
- else: # pragma: no cover
- np.ndarray.__setstate__(self, state)
+ elif isinstance(state, tuple):
+
+ # < 0.15 compat
+ if len(state) == 2:
+ nd_state, own_state = state
+ data = np.empty(nd_state[1], dtype=nd_state[2])
+ np.ndarray.__setstate__(data, nd_state)
+
+ self.name = own_state[0]
+ self.offset = own_state[1]
+ self.tz = own_state[2]
+
+ # provide numpy < 1.7 compat
+ if nd_state[2] == 'M8[us]':
+ new_state = np.ndarray.__reduce__(data.astype('M8[ns]'))
+ np.ndarray.__setstate__(data, new_state[2])
+
+ else: # pragma: no cover
+ data = np.empty(state)
+ np.ndarray.__setstate__(data, state)
+
+ self._data = data
+
+ else:
+ raise Exception("invalid pickle state")
+ _unpickle_compat = __setstate__
def _add_delta(self, delta):
if isinstance(delta, (Tick, timedelta)):
@@ -662,7 +675,7 @@ def to_datetime(self, dayfirst=False):
return self.copy()
def groupby(self, f):
- objs = self.asobject
+ objs = self.asobject.values
return _algos.groupby_object(objs, f)
def summary(self, name=None):
@@ -982,7 +995,7 @@ def _wrap_joined_index(self, joined, other):
if (isinstance(other, DatetimeIndex)
and self.offset == other.offset
and self._can_fast_union(other)):
- joined = self._view_like(joined)
+ joined = self._shallow_copy(joined)
joined.name = name
return joined
else:
@@ -1044,7 +1057,7 @@ def _fast_union(self, other):
loc = right.searchsorted(left_end, side='right')
right_chunk = right.values[loc:]
dates = com._concat_compat((left.values, right_chunk))
- return self._view_like(dates)
+ return self._shallow_copy(dates)
else:
return left
else:
@@ -1140,7 +1153,7 @@ def intersection(self, other):
else:
lslice = slice(*left.slice_locs(start, end))
left_chunk = left.values[lslice]
- return self._view_like(left_chunk)
+ return self._shallow_copy(left_chunk)
def _partial_date_slice(self, reso, parsed, use_lhs=True, use_rhs=True):
@@ -1357,10 +1370,9 @@ def slice_locs(self, start=None, end=None):
return Index.slice_locs(self, start, end)
def __getitem__(self, key):
- """Override numpy.ndarray's __getitem__ method to work as desired"""
- arr_idx = self.view(np.ndarray)
+ getitem = self._data.__getitem__
if np.isscalar(key):
- val = arr_idx[key]
+ val = getitem(key)
return Timestamp(val, offset=self.offset, tz=self.tz)
else:
if com._is_bool_indexer(key):
@@ -1377,7 +1389,7 @@ def __getitem__(self, key):
else:
new_offset = self.offset
- result = arr_idx[key]
+ result = getitem(key)
if result.ndim > 1:
return result
@@ -1388,18 +1400,20 @@ def __getitem__(self, key):
def map(self, f):
try:
result = f(self)
- if not isinstance(result, np.ndarray):
+ if not isinstance(result, (np.ndarray, Index)):
raise TypeError
return result
except Exception:
- return _algos.arrmap_object(self.asobject, f)
+ return _algos.arrmap_object(self.asobject.values, f)
# alias to offset
- @property
- def freq(self):
- """ return the frequency object if its set, otherwise None """
+ def _get_freq(self):
return self.offset
+ def _set_freq(self, value):
+ self.offset = value
+ freq = property(fget=_get_freq, fset=_set_freq, doc="get/set the frequency of the Index")
+
@cache_readonly
def inferred_freq(self):
try:
@@ -1443,14 +1457,14 @@ def _time(self):
"""
# can't call self.map() which tries to treat func as ufunc
# and causes recursion warnings on python 2.6
- return _algos.arrmap_object(self.asobject, lambda x: x.time())
+ return _algos.arrmap_object(self.asobject.values, lambda x: x.time())
@property
def _date(self):
"""
Returns numpy array of datetime.date. The date part of the Timestamps.
"""
- return _algos.arrmap_object(self.asobject, lambda x: x.date())
+ return _algos.arrmap_object(self.asobject.values, lambda x: x.date())
def normalize(self):
@@ -1466,7 +1480,7 @@ def normalize(self):
tz=self.tz)
def searchsorted(self, key, side='left'):
- if isinstance(key, np.ndarray):
+ if isinstance(key, (np.ndarray, Index)):
key = np.array(key, dtype=_NS_DTYPE, copy=False)
else:
key = _to_m8(key, tz=self.tz)
@@ -1609,13 +1623,6 @@ def delete(self, loc):
new_dates = tslib.tz_convert(new_dates, 'UTC', self.tz)
return DatetimeIndex(new_dates, name=self.name, freq=freq, tz=self.tz)
- def _view_like(self, ndarray):
- result = ndarray.view(type(self))
- result.offset = self.offset
- result.tz = self.tz
- result.name = self.name
- return result
-
def tz_convert(self, tz):
"""
Convert tz-aware DatetimeIndex from one time zone to another (using pytz/dateutil)
@@ -1639,7 +1646,7 @@ def tz_convert(self, tz):
'tz_localize to localize')
# No conversion since timestamps are all UTC to begin with
- return self._simple_new(self.values, self.name, self.offset, tz)
+ return self._shallow_copy(tz=tz)
def tz_localize(self, tz, infer_dst=False):
"""
@@ -1669,7 +1676,7 @@ def tz_localize(self, tz, infer_dst=False):
# Convert to UTC
new_dates = tslib.tz_localize_to_utc(self.asi8, tz, infer_dst=infer_dst)
new_dates = new_dates.view(_NS_DTYPE)
- return self._simple_new(new_dates, self.name, self.offset, tz)
+ return self._shallow_copy(new_dates, tz=tz)
def indexer_at_time(self, time, asof=False):
"""
@@ -1782,7 +1789,7 @@ def to_julian_date(self):
self.microsecond/3600.0/1e+6 +
self.nanosecond/3600.0/1e+9
)/24.0)
-
+DatetimeIndex._add_numeric_methods_disabled()
def _generate_regular_range(start, end, periods, offset):
if isinstance(offset, Tick):
diff --git a/pandas/tseries/period.py b/pandas/tseries/period.py
index 7f865fd9aefa8..ddd1ee34f0798 100644
--- a/pandas/tseries/period.py
+++ b/pandas/tseries/period.py
@@ -60,12 +60,22 @@ class Period(PandasObject):
minute : int, default 0
second : int, default 0
"""
+ _typ = 'periodindex'
__slots__ = ['freq', 'ordinal']
_comparables = ['name','freqstr']
+ @classmethod
+ def _from_ordinal(cls, ordinal, freq):
+ """ fast creation from an ordinal and freq that are already validated! """
+ self = object.__new__(cls)
+ self.ordinal = ordinal
+ self.freq = freq
+ return self
+
def __init__(self, value=None, freq=None, ordinal=None,
year=None, month=1, quarter=None, day=1,
hour=0, minute=0, second=0):
+
# freq points to a tuple (base, mult); base is one of the defined
# periods such as A, Q, etc. Every five minutes would be, e.g.,
# ('T', 5) but may be passed in as a string like '5T'
@@ -563,6 +573,8 @@ class PeriodIndex(DatetimeIndexOpsMixin, Int64Index):
"""
_box_scalars = True
_allow_period_index_ops = True
+ _attributes = ['name','freq']
+ _is_numeric_dtype = False
__eq__ = _period_index_cmp('__eq__')
__ne__ = _period_index_cmp('__ne__', nat_result=True)
@@ -572,9 +584,7 @@ class PeriodIndex(DatetimeIndexOpsMixin, Int64Index):
__ge__ = _period_index_cmp('__ge__')
def __new__(cls, data=None, ordinal=None, freq=None, start=None, end=None,
- periods=None, copy=False, name=None, year=None, month=None,
- quarter=None, day=None, hour=None, minute=None, second=None,
- tz=None):
+ periods=None, copy=False, name=None, tz=None, **kwargs):
freq = frequencies.get_standard_freq(freq)
@@ -589,32 +599,24 @@ def __new__(cls, data=None, ordinal=None, freq=None, start=None, end=None,
if ordinal is not None:
data = np.asarray(ordinal, dtype=np.int64)
else:
- fields = [year, month, quarter, day, hour, minute, second]
data, freq = cls._generate_range(start, end, periods,
- freq, fields)
+ freq, kwargs)
else:
ordinal, freq = cls._from_arraylike(data, freq, tz)
data = np.array(ordinal, dtype=np.int64, copy=False)
- subarr = data.view(cls)
- subarr.name = name
- subarr.freq = freq
-
- return subarr
+ return cls._simple_new(data, name=name, freq=freq)
@classmethod
def _generate_range(cls, start, end, periods, freq, fields):
- field_count = com._count_not_none(*fields)
+ field_count = len(fields)
if com._count_not_none(start, end) > 0:
if field_count > 0:
raise ValueError('Can either instantiate from fields '
'or endpoints, but not both')
subarr, freq = _get_ordinal_range(start, end, periods, freq)
elif field_count > 0:
- y, mth, q, d, h, minute, s = fields
- subarr, freq = _range_from_fields(year=y, month=mth, quarter=q,
- day=d, hour=h, minute=minute,
- second=s, freq=freq)
+ subarr, freq = _range_from_fields(freq=freq, **fields)
else:
raise ValueError('Not enough parameters to construct '
'Period range')
@@ -623,7 +625,8 @@ def _generate_range(cls, start, end, periods, freq, fields):
@classmethod
def _from_arraylike(cls, data, freq, tz):
- if not isinstance(data, np.ndarray):
+
+ if not isinstance(data, (np.ndarray, PeriodIndex, DatetimeIndex, Int64Index)):
if np.isscalar(data) or isinstance(data, Period):
raise ValueError('PeriodIndex() must be called with a '
'collection of some kind, %s was passed'
@@ -681,10 +684,12 @@ def _from_arraylike(cls, data, freq, tz):
return data, freq
@classmethod
- def _simple_new(cls, values, name, freq=None, **kwargs):
- result = values.view(cls)
+ def _simple_new(cls, values, name=None, freq=None, **kwargs):
+ result = object.__new__(cls)
+ result._data = values
result.name = name
result.freq = freq
+ result._reset_identity()
return result
@property
@@ -704,7 +709,7 @@ def __contains__(self, key):
@property
def _box_func(self):
- return lambda x: Period(ordinal=x, freq=self.freq)
+ return lambda x: Period._from_ordinal(ordinal=x, freq=self.freq)
def asof_locs(self, where, mask):
"""
@@ -800,17 +805,15 @@ def to_datetime(self, dayfirst=False):
def map(self, f):
try:
result = f(self)
- if not isinstance(result, np.ndarray):
+ if not isinstance(result, (np.ndarray, Index)):
raise TypeError
return result
except Exception:
- return _algos.arrmap_object(self.asobject, f)
+ return _algos.arrmap_object(self.asobject.values, f)
def _get_object_array(self):
freq = self.freq
- boxfunc = lambda x: Period(ordinal=x, freq=freq)
- boxer = np.frompyfunc(boxfunc, 1, 1)
- return boxer(self.values)
+ return np.array([ Period._from_ordinal(ordinal=x, freq=freq) for x in self.values], copy=False)
def _mpl_repr(self):
# how to represent ourselves to matplotlib
@@ -823,6 +826,13 @@ def equals(self, other):
if self.is_(other):
return True
+ if (not hasattr(other, 'inferred_type') or
+ other.inferred_type != 'int64'):
+ try:
+ other = PeriodIndex(other)
+ except:
+ return False
+
return np.array_equal(self.asi8, other.asi8)
def to_timestamp(self, freq=None, how='start'):
@@ -1042,21 +1052,19 @@ def _wrap_union_result(self, other, result):
def _apply_meta(self, rawarr):
if not isinstance(rawarr, PeriodIndex):
- rawarr = rawarr.view(PeriodIndex)
- rawarr.freq = self.freq
+ rawarr = PeriodIndex(rawarr, freq=self.freq)
return rawarr
def __getitem__(self, key):
- """Override numpy.ndarray's __getitem__ method to work as desired"""
- arr_idx = self.view(np.ndarray)
+ getitem = self._data.__getitem__
if np.isscalar(key):
- val = arr_idx[key]
+ val = getitem(key)
return Period(ordinal=val, freq=self.freq)
else:
if com._is_bool_indexer(key):
key = np.asarray(key)
- result = arr_idx[key]
+ result = getitem(key)
if result.ndim > 1:
# MPL kludge
# values = np.asarray(list(values), dtype=object)
@@ -1129,7 +1137,7 @@ def append(self, other):
if isinstance(to_concat[0], PeriodIndex):
if len(set([x.freq for x in to_concat])) > 1:
# box
- to_concat = [x.asobject for x in to_concat]
+ to_concat = [x.asobject.values for x in to_concat]
else:
cat_values = np.concatenate([x.values for x in to_concat])
return PeriodIndex(cat_values, freq=self.freq, name=name)
@@ -1138,26 +1146,35 @@ def append(self, other):
for x in to_concat]
return Index(com._concat_compat(to_concat), name=name)
- def __reduce__(self):
- """Necessary for making this object picklable"""
- object_state = list(np.ndarray.__reduce__(self))
- subclass_state = (self.name, self.freq)
- object_state[2] = (object_state[2], subclass_state)
- return tuple(object_state)
-
def __setstate__(self, state):
"""Necessary for making this object picklable"""
- if len(state) == 2:
- nd_state, own_state = state
- np.ndarray.__setstate__(self, nd_state)
- self.name = own_state[0]
- try: # backcompat
- self.freq = own_state[1]
- except:
- pass
- else: # pragma: no cover
- np.ndarray.__setstate__(self, state)
+ if isinstance(state, dict):
+ super(PeriodIndex, self).__setstate__(state)
+
+ elif isinstance(state, tuple):
+
+ # < 0.15 compat
+ if len(state) == 2:
+ nd_state, own_state = state
+ data = np.empty(nd_state[1], dtype=nd_state[2])
+ np.ndarray.__setstate__(data, nd_state)
+
+ try: # backcompat
+ self.freq = own_state[1]
+ except:
+ pass
+
+ else: # pragma: no cover
+ data = np.empty(state)
+ np.ndarray.__setstate__(data, state)
+
+ self._data = data
+
+ else:
+ raise Exception("invalid pickle state")
+ _unpickle_compat = __setstate__
+PeriodIndex._add_numeric_methods_disabled()
def _get_ordinal_range(start, end, periods, freq):
if com._count_not_none(start, end, periods) < 2:
diff --git a/pandas/tseries/plotting.py b/pandas/tseries/plotting.py
index b95553f87ec6b..899d2bfdc9c76 100644
--- a/pandas/tseries/plotting.py
+++ b/pandas/tseries/plotting.py
@@ -61,7 +61,7 @@ def tsplot(series, plotf, **kwargs):
if not hasattr(ax, '_plot_data'):
ax._plot_data = []
ax._plot_data.append((series, plotf, kwargs))
- lines = plotf(ax, series.index, series.values, **kwargs)
+ lines = plotf(ax, series.index._mpl_repr(), series.values, **kwargs)
# set date formatter, locators and rescale limits
format_dateaxis(ax, ax.freq)
@@ -152,7 +152,7 @@ def _replot_ax(ax, freq, kwargs):
idx = series.index.asfreq(freq, how='S')
series.index = idx
ax._plot_data.append(series)
- lines.append(plotf(ax, series.index, series.values, **kwds)[0])
+ lines.append(plotf(ax, series.index._mpl_repr(), series.values, **kwds)[0])
labels.append(com.pprint_thing(series.name))
return lines, labels
diff --git a/pandas/tseries/tests/test_converter.py b/pandas/tseries/tests/test_converter.py
index 902b9cb549e32..a1b873e1c0bea 100644
--- a/pandas/tseries/tests/test_converter.py
+++ b/pandas/tseries/tests/test_converter.py
@@ -84,8 +84,8 @@ def _assert_less(ts1, ts2):
if not val1 < val2:
raise AssertionError('{0} is not less than {1}.'.format(val1, val2))
- # Matplotlib's time representation using floats cannot distinguish intervals smaller
- # than ~10 microsecond in the common range of years.
+ # Matplotlib's time representation using floats cannot distinguish intervals smaller
+ # than ~10 microsecond in the common range of years.
ts = Timestamp('2012-1-1')
_assert_less(ts, ts + Second())
_assert_less(ts, ts + Milli())
diff --git a/pandas/tseries/tests/test_daterange.py b/pandas/tseries/tests/test_daterange.py
index 7b0bfa98690e2..b109f6585092a 100644
--- a/pandas/tseries/tests/test_daterange.py
+++ b/pandas/tseries/tests/test_daterange.py
@@ -1,6 +1,5 @@
from datetime import datetime
from pandas.compat import range
-import pickle
import nose
import sys
import numpy as np
@@ -168,9 +167,7 @@ def test_shift(self):
self.assertEqual(shifted[0], rng[0] + datetools.bday)
def test_pickle_unpickle(self):
- pickled = pickle.dumps(self.rng)
- unpickled = pickle.loads(pickled)
-
+ unpickled = self.round_trip_pickle(self.rng)
self.assertIsNotNone(unpickled.offset)
def test_union(self):
@@ -561,9 +558,7 @@ def test_shift(self):
self.assertEqual(shifted[0], rng[0] + datetools.cday)
def test_pickle_unpickle(self):
- pickled = pickle.dumps(self.rng)
- unpickled = pickle.loads(pickled)
-
+ unpickled = self.round_trip_pickle(self.rng)
self.assertIsNotNone(unpickled.offset)
def test_union(self):
diff --git a/pandas/tseries/tests/test_period.py b/pandas/tseries/tests/test_period.py
index b9d4dd80438ef..b7abedbafa7b0 100644
--- a/pandas/tseries/tests/test_period.py
+++ b/pandas/tseries/tests/test_period.py
@@ -2052,7 +2052,7 @@ def test_range_slice_outofbounds(self):
for idx in [didx, pidx]:
df = DataFrame(dict(units=[100 + i for i in range(10)]), index=idx)
- empty = DataFrame(index=DatetimeIndex([], freq='D'), columns=['units'])
+ empty = DataFrame(index=idx.__class__([], freq='D'), columns=['units'])
tm.assert_frame_equal(df['2013/09/01':'2013/09/30'], empty)
tm.assert_frame_equal(df['2013/09/30':'2013/10/02'], df.iloc[:2])
@@ -2408,7 +2408,7 @@ def test_pickle_freq(self):
# GH2891
import pickle
prng = period_range('1/1/2011', '1/1/2012', freq='M')
- new_prng = pickle.loads(pickle.dumps(prng))
+ new_prng = self.round_trip_pickle(prng)
self.assertEqual(new_prng.freq,'M')
def test_slice_keep_name(self):
diff --git a/pandas/tseries/tests/test_plotting.py b/pandas/tseries/tests/test_plotting.py
index b52dca76f2c77..6b34ae0eb9384 100644
--- a/pandas/tseries/tests/test_plotting.py
+++ b/pandas/tseries/tests/test_plotting.py
@@ -298,7 +298,7 @@ def test_dataframe(self):
bts = DataFrame({'a': tm.makeTimeSeries()})
ax = bts.plot()
idx = ax.get_lines()[0].get_xdata()
- assert_array_equal(bts.index.to_period(), idx)
+ assert_array_equal(bts.index.to_period(), PeriodIndex(idx))
@slow
def test_axis_limits(self):
@@ -605,8 +605,8 @@ def test_mixed_freq_regular_first(self):
ax = s1.plot()
ax2 = s2.plot(style='g')
lines = ax2.get_lines()
- idx1 = lines[0].get_xdata()
- idx2 = lines[1].get_xdata()
+ idx1 = PeriodIndex(lines[0].get_xdata())
+ idx2 = PeriodIndex(lines[1].get_xdata())
self.assertTrue(idx1.equals(s1.index.to_period('B')))
self.assertTrue(idx2.equals(s2.index.to_period('B')))
left, right = ax2.get_xlim()
@@ -881,9 +881,9 @@ def test_secondary_upsample(self):
low.plot()
ax = high.plot(secondary_y=True)
for l in ax.get_lines():
- self.assertEqual(l.get_xdata().freq, 'D')
+ self.assertEqual(PeriodIndex(l.get_xdata()).freq, 'D')
for l in ax.right_ax.get_lines():
- self.assertEqual(l.get_xdata().freq, 'D')
+ self.assertEqual(PeriodIndex(l.get_xdata()).freq, 'D')
@slow
def test_secondary_legend(self):
diff --git a/pandas/tseries/tests/test_timeseries.py b/pandas/tseries/tests/test_timeseries.py
index 6dbf095189d36..9487949adf23a 100644
--- a/pandas/tseries/tests/test_timeseries.py
+++ b/pandas/tseries/tests/test_timeseries.py
@@ -27,7 +27,7 @@
import pandas.index as _index
-from pandas.compat import range, long, StringIO, lrange, lmap, zip, product
+from pandas.compat import range, long, StringIO, lrange, lmap, zip, product, PY3_2
from numpy.random import rand
from numpy.testing import assert_array_equal
from pandas.util.testing import assert_frame_equal
@@ -871,11 +871,11 @@ def test_string_na_nat_conversion(self):
result2 = to_datetime(strings)
tm.assert_isinstance(result2, DatetimeIndex)
- assert_almost_equal(result, result2)
+ self.assert_numpy_array_equal(result, result2)
malformed = np.array(['1/100/2000', np.nan], dtype=object)
result = to_datetime(malformed)
- assert_almost_equal(result, malformed)
+ self.assert_numpy_array_equal(result, malformed)
self.assertRaises(ValueError, to_datetime, malformed,
errors='raise')
@@ -2058,18 +2058,15 @@ def test_period_resample_with_local_timezone_dateutil(self):
def test_pickle(self):
#GH4606
- from pandas.compat import cPickle
- import pickle
- for pick in [pickle, cPickle]:
- p = pick.loads(pick.dumps(NaT))
- self.assertTrue(p is NaT)
+ p = self.round_trip_pickle(NaT)
+ self.assertTrue(p is NaT)
- idx = pd.to_datetime(['2013-01-01', NaT, '2014-01-06'])
- idx_p = pick.loads(pick.dumps(idx))
- self.assertTrue(idx_p[0] == idx[0])
- self.assertTrue(idx_p[1] is NaT)
- self.assertTrue(idx_p[2] == idx[2])
+ idx = pd.to_datetime(['2013-01-01', NaT, '2014-01-06'])
+ idx_p = self.round_trip_pickle(idx)
+ self.assertTrue(idx_p[0] == idx[0])
+ self.assertTrue(idx_p[1] is NaT)
+ self.assertTrue(idx_p[2] == idx[2])
def _simple_ts(start, end, freq='D'):
@@ -2212,6 +2209,9 @@ def test_comparisons_coverage(self):
self.assert_numpy_array_equal(result, exp)
def test_comparisons_nat(self):
+ if PY3_2:
+ raise nose.SkipTest('nat comparisons on 3.2 broken')
+
fidx1 = pd.Index([1.0, np.nan, 3.0, np.nan, 5.0, 7.0])
fidx2 = pd.Index([2.0, 3.0, np.nan, np.nan, 6.0, 7.0])
@@ -2233,9 +2233,11 @@ def test_comparisons_nat(self):
# Check pd.NaT is handles as the same as np.nan
for idx1, idx2 in cases:
+
result = idx1 < idx2
expected = np.array([True, False, False, False, True, False])
self.assert_numpy_array_equal(result, expected)
+
result = idx2 > idx1
expected = np.array([True, False, False, False, True, False])
self.assert_numpy_array_equal(result, expected)
@@ -2243,6 +2245,7 @@ def test_comparisons_nat(self):
result = idx1 <= idx2
expected = np.array([True, False, False, False, True, True])
self.assert_numpy_array_equal(result, expected)
+
result = idx2 >= idx1
expected = np.array([True, False, False, False, True, True])
self.assert_numpy_array_equal(result, expected)
diff --git a/pandas/util/testing.py b/pandas/util/testing.py
index 13f432d5cea2a..42048ec9877fa 100644
--- a/pandas/util/testing.py
+++ b/pandas/util/testing.py
@@ -98,6 +98,13 @@ def assert_numpy_array_equal(self, np_array, assert_equal):
return
raise AssertionError('{0} is not equal to {1}.'.format(np_array, assert_equal))
+ def round_trip_pickle(self, obj, path=None):
+ if path is None:
+ path = u('__%s__.pickle' % rands(10))
+ with ensure_clean(path) as path:
+ pd.to_pickle(obj, path)
+ return pd.read_pickle(path)
+
def assert_numpy_array_equivalent(self, np_array, assert_equal):
"""Checks that 'np_array' is equivalent to 'assert_equal'
| make `Index` now subclass `PandasObject/IndexOpsMixin` rather than `ndarray`
should allow much easier new Index classes (e.g. #7640)
This doesn't change the public API at all, and provides compat
closes #5080
back compat for pickles is now way simpler
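The simpler pickle back compat comes from the dual-format `__setstate__` in the diff: new pickles carry a plain attribute dict, while pre-0.15 pickles carry the ndarray reduce tuple plus subclass attributes. A minimal generic sketch of that pattern (not pandas code; `Thing`, `name`, and `_data` are illustrative names):

```python
import numpy as np

class Thing:
    """Sketch of the dual-format __setstate__ pattern used when an
    ndarray subclass is refactored into a plain object holding an array."""

    def __setstate__(self, state):
        if isinstance(state, dict):
            # new-style pickle: just an attribute dict
            self.__dict__.update(state)
        elif isinstance(state, tuple):
            # old-style pickle: (nd_state, own_state) from the
            # ndarray-subclass __reduce__
            nd_state, own_state = state
            data = np.empty(nd_state[1], dtype=nd_state[2])
            np.ndarray.__setstate__(data, nd_state)
            self._data = data
            self.name = own_state[0]
        else:
            raise ValueError("invalid pickle state")
```

The old-style branch reconstructs the raw array in place and then stores it on the new-style object, so pickles written by either generation load into the same class.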
ToDo:
- docs
- [x] release note
- [x] warnings in io.rst about this change
- [x] fixes `.repeat` on MultiIndex (broken in master)
- [x] minor API compat issue with comparisons of `DatetimeIndex` with `NaT` vs ndarrays
- [x] merge with searchsorted issues/PR ( #6712, #7447, #6469)
- [x] tests fixed (FIXMES), just a few left
- [x] perf
- [x] json completely broken ATM
- [x] #7796 FIXME (PeriodIndex not supported in HDF), not really a big deal
- [x] #7439 since `Index` now doesn't have implicit ops (aside from `__sub__`/`__add__`); these need to be added in (e.g. `__mul__`, `__div__`, `__truediv__`).
- [x] bool(Index/MultiIndex) : https://github.com/pydata/pandas/issues/7897 (will address this later)
closes #5155 (perf fix for Period creation); a slight increase on the plotting benchmark
because the plotting routines now hold an array of Periods (rather than a PeriodIndex).
```
-------------------------------------------------------------------------------
Test name | head[ms] | base[ms] | ratio |
-------------------------------------------------------------------------------
period_setitem | 16.2210 | 122.9340 | 0.1319 |
timeseries_iter_periodindex | 1197.7839 | 6906.1464 | 0.1734 |
timeseries_iter_periodindex_preexit | 12.4850 | 69.8563 | 0.1787 |
timeseries_period_downsample_mean | 11.2850 | 11.1457 | 1.0125 |
plot_timeseries_period | 107.6056 | 86.5277 | 1.2436 |
```
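The "doesn't change the public API" claim can be checked directly: an `Index` stops being an `ndarray` instance after the refactor, but its backing data and array-style methods remain. A quick sanity check (valid against pandas >= 0.15, not part of the PR itself):

```python
import numpy as np
import pandas as pd

idx = pd.Index([1, 2, 3])

# Post-refactor: Index is a PandasObject, not an ndarray subclass
assert not isinstance(idx, np.ndarray)

# ...but the backing data is still a plain ndarray,
# and array-style methods keep working
assert isinstance(idx.values, np.ndarray)
assert idx.take([0, 2]).tolist() == [1, 3]
```

This is the same distinction the diff encodes by widening `isinstance(..., np.ndarray)` checks to `isinstance(..., (np.ndarray, Index))` throughout the codebase.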
| https://api.github.com/repos/pandas-dev/pandas/pulls/7891 | 2014-07-31T17:15:09Z | 2014-08-07T11:34:54Z | 2014-08-07T11:34:54Z | 2014-08-07T12:10:59Z |
PERF: groupby / frame apply optimization | diff --git a/pandas/core/groupby.py b/pandas/core/groupby.py
index 9659d4c3bd6e0..eabe1b43004df 100644
--- a/pandas/core/groupby.py
+++ b/pandas/core/groupby.py
@@ -635,11 +635,11 @@ def apply(self, func, *args, **kwargs):
@wraps(func)
def f(g):
- # ignore SettingWithCopy here in case the user mutates
- with option_context('mode.chained_assignment',None):
- return func(g, *args, **kwargs)
+ return func(g, *args, **kwargs)
- return self._python_apply_general(f)
+ # ignore SettingWithCopy here in case the user mutates
+ with option_context('mode.chained_assignment',None):
+ return self._python_apply_general(f)
def _python_apply_general(self, f):
keys, values, mutated = self.grouper.apply(f, self._selected_obj,
diff --git a/pandas/io/tests/test_data.py b/pandas/io/tests/test_data.py
index 15ebeba941ccd..e798961ea7bf9 100644
--- a/pandas/io/tests/test_data.py
+++ b/pandas/io/tests/test_data.py
@@ -445,7 +445,10 @@ def test_fred(self):
end = datetime(2013, 1, 27)
received = web.DataReader("GDP", "fred", start, end)['GDP'].tail(1)[0]
- self.assertEqual(int(received), 16535)
+
+ # < 7/30/14 16535 was returned
+ #self.assertEqual(int(received), 16535)
+ self.assertEqual(int(received), 16502)
self.assertRaises(Exception, web.DataReader, "NON EXISTENT SERIES",
'fred', start, end)
| ```
-------------------------------------------------------------------------------
Test name | head[ms] | base[ms] | ratio |
-------------------------------------------------------------------------------
groupby_frame_apply_overhead | 9.1400 | 70.3743 | 0.1299 |
groupby_frame_apply | 43.5640 | 186.6427 | 0.2334 |
groupby_apply_dict_return | 39.6016 | 80.1926 | 0.4938 |
```
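The win in the table comes from entering `option_context('mode.chained_assignment', None)` once around the whole `_python_apply_general` call instead of once per group; user-facing behavior is unchanged. A small check that per-group apply still works as before (the data here is illustrative, not from the benchmark suite):

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({"key": np.arange(1000) % 100,
                   "val": np.arange(1000.0)})

# apply still runs the user function once per group; only the
# context-manager overhead moved out of the per-group path
result = df.groupby("key")["val"].apply(lambda s: s.sum())

# key 0 owns rows 0, 100, ..., 900 -> sum 4500.0
assert result.loc[0] == 4500.0
assert len(result) == 100
```

With 100 groups, the old code entered and exited the context manager 100 times per `.apply()` call; the new code does it once, which is where the ~5-8x speedups in the benchmark come from.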
| https://api.github.com/repos/pandas-dev/pandas/pulls/7881 | 2014-07-30T16:50:51Z | 2014-07-30T17:56:01Z | 2014-07-30T17:56:01Z | 2014-07-30T17:56:01Z |
ENH add level argument to set_names, set_levels and set_labels (GH7792) | diff --git a/doc/source/indexing.rst b/doc/source/indexing.rst
index 837e3b386f3d0..023c200e271ab 100644
--- a/doc/source/indexing.rst
+++ b/doc/source/indexing.rst
@@ -2162,6 +2162,17 @@ you can specify ``inplace=True`` to have the data change in place.
ind.name = "bob"
ind
+.. versionadded:: 0.15.0
+
+``set_names``, ``set_levels``, and ``set_labels`` also take an optional
+``level`` argument
+
+.. ipython:: python
+
+ index
+ index.levels[1]
+ index.set_levels(["a", "b"], level=1)
+
Adding an index to an existing DataFrame
----------------------------------------
diff --git a/doc/source/v0.15.0.txt b/doc/source/v0.15.0.txt
index 9279d8b0288c4..2e5ec8e2f4193 100644
--- a/doc/source/v0.15.0.txt
+++ b/doc/source/v0.15.0.txt
@@ -35,6 +35,16 @@ API changes
levels aren't all level names or all level numbers. See
:ref:`Reshaping by stacking and unstacking <reshaping.stack_multiple>`.
+- :func:`set_names`, :func:`set_labels`, and :func:`set_levels` methods now take an optional ``level`` keyword argument to allow modification of specific level(s) of a MultiIndex. Additionally :func:`set_names` now accepts a scalar string value when operating on an ``Index`` or on a specific level of a ``MultiIndex`` (:issue:`7792`)
+
+ .. ipython:: python
+
+ idx = pandas.MultiIndex.from_product([['a'], range(3), list("pqr")], names=['foo', 'bar', 'baz'])
+ idx.set_names('qux', level=0)
+ idx.set_names(['qux','baz'], level=[0,1])
+ idx.set_levels(['a','b','c'], level='bar')
+ idx.set_levels([['a','b','c'],[1,2,3]], level=[1,2])
+
- Raise a ``ValueError`` in ``df.to_hdf`` with 'fixed' format, if ``df`` has non-unique columns as the resulting file will be broken (:issue:`7761`)
- :func:`rolling_min`, :func:`rolling_max`, :func:`rolling_cov`, and :func:`rolling_corr`
diff --git a/pandas/core/index.py b/pandas/core/index.py
index 81602d5240a08..8c43511866e9a 100644
--- a/pandas/core/index.py
+++ b/pandas/core/index.py
@@ -362,7 +362,7 @@ def nlevels(self):
def _get_names(self):
return FrozenList((self.name,))
- def _set_names(self, values):
+ def _set_names(self, values, level=None):
if len(values) != 1:
raise ValueError('Length of new names must be 1, got %d'
% len(values))
@@ -370,28 +370,61 @@ def _set_names(self, values):
names = property(fset=_set_names, fget=_get_names)
- def set_names(self, names, inplace=False):
+ def set_names(self, names, level=None, inplace=False):
"""
Set new names on index. Defaults to returning new index.
Parameters
----------
- names : sequence
- names to set
+ names : str or sequence
+ name(s) to set
+ level : int or level name, or sequence of int / level names (default None)
+ If the index is a MultiIndex (hierarchical), level(s) to set (None for all levels)
+ Otherwise level must be None
inplace : bool
if True, mutates in place
Returns
-------
new index (of same type and class...etc) [if inplace, returns None]
+
+ Examples
+ --------
+ >>> Index([1, 2, 3, 4]).set_names('foo')
+ Int64Index([1, 2, 3, 4], dtype='int64')
+ >>> Index([1, 2, 3, 4]).set_names(['foo'])
+ Int64Index([1, 2, 3, 4], dtype='int64')
+ >>> idx = MultiIndex.from_tuples([(1, u'one'), (1, u'two'),
+ (2, u'one'), (2, u'two')],
+ names=['foo', 'bar'])
+ >>> idx.set_names(['baz', 'quz'])
+ MultiIndex(levels=[[1, 2], [u'one', u'two']],
+ labels=[[0, 0, 1, 1], [0, 1, 0, 1]],
+ names=[u'baz', u'quz'])
+ >>> idx.set_names('baz', level=0)
+ MultiIndex(levels=[[1, 2], [u'one', u'two']],
+ labels=[[0, 0, 1, 1], [0, 1, 0, 1]],
+ names=[u'baz', u'bar'])
"""
- if not com.is_list_like(names):
+ if level is not None and self.nlevels == 1:
+ raise ValueError('Level must be None for non-MultiIndex')
+
+ if level is not None and not com.is_list_like(level) and com.is_list_like(names):
+ raise TypeError("Names must be a string")
+
+ if not com.is_list_like(names) and level is None and self.nlevels > 1:
raise TypeError("Must pass list-like as `names`.")
+
+ if not com.is_list_like(names):
+ names = [names]
+ if level is not None and not com.is_list_like(level):
+ level = [level]
+
if inplace:
idx = self
else:
idx = self._shallow_copy()
- idx._set_names(names)
+ idx._set_names(names, level=level)
if not inplace:
return idx
@@ -2218,19 +2251,30 @@ def _verify_integrity(self):
def _get_levels(self):
return self._levels
- def _set_levels(self, levels, copy=False, validate=True,
+ def _set_levels(self, levels, level=None, copy=False, validate=True,
verify_integrity=False):
# This is NOT part of the levels property because it should be
# externally not allowed to set levels. User beware if you change
# _levels directly
if validate and len(levels) == 0:
raise ValueError('Must set non-zero number of levels.')
- if validate and len(levels) != len(self._labels):
- raise ValueError('Length of levels must match length of labels.')
- levels = FrozenList(_ensure_index(lev, copy=copy)._shallow_copy()
- for lev in levels)
+ if validate and level is None and len(levels) != self.nlevels:
+ raise ValueError('Length of levels must match number of levels.')
+ if validate and level is not None and len(levels) != len(level):
+ raise ValueError('Length of levels must match length of level.')
+
+ if level is None:
+ new_levels = FrozenList(_ensure_index(lev, copy=copy)._shallow_copy()
+ for lev in levels)
+ else:
+ level = [self._get_level_number(l) for l in level]
+ new_levels = list(self._levels)
+ for l, v in zip(level, levels):
+ new_levels[l] = _ensure_index(v, copy=copy)._shallow_copy()
+ new_levels = FrozenList(new_levels)
+
names = self.names
- self._levels = levels
+ self._levels = new_levels
if any(names):
self._set_names(names)
@@ -2240,15 +2284,17 @@ def _set_levels(self, levels, copy=False, validate=True,
if verify_integrity:
self._verify_integrity()
- def set_levels(self, levels, inplace=False, verify_integrity=True):
+ def set_levels(self, levels, level=None, inplace=False, verify_integrity=True):
"""
Set new levels on MultiIndex. Defaults to returning
new index.
Parameters
----------
- levels : sequence
- new levels to apply
+ levels : sequence or list of sequence
+ new level(s) to apply
+ level : int or level name, or sequence of int / level names (default None)
+ level(s) to set (None for all levels)
inplace : bool
if True, mutates in place
verify_integrity : bool (default True)
@@ -2257,15 +2303,47 @@ def set_levels(self, levels, inplace=False, verify_integrity=True):
Returns
-------
new index (of same type and class...etc)
- """
- if not com.is_list_like(levels) or not com.is_list_like(levels[0]):
- raise TypeError("Levels must be list of lists-like")
+
+
+ Examples
+ --------
+ >>> idx = MultiIndex.from_tuples([(1, u'one'), (1, u'two'),
+ (2, u'one'), (2, u'two')],
+ names=['foo', 'bar'])
+ >>> idx.set_levels([['a','b'], [1,2]])
+ MultiIndex(levels=[[u'a', u'b'], [1, 2]],
+ labels=[[0, 0, 1, 1], [0, 1, 0, 1]],
+ names=[u'foo', u'bar'])
+ >>> idx.set_levels(['a','b'], level=0)
+ MultiIndex(levels=[[u'a', u'b'], [u'one', u'two']],
+ labels=[[0, 0, 1, 1], [0, 1, 0, 1]],
+ names=[u'foo', u'bar'])
+ >>> idx.set_levels(['a','b'], level='bar')
+ MultiIndex(levels=[[1, 2], [u'a', u'b']],
+ labels=[[0, 0, 1, 1], [0, 1, 0, 1]],
+ names=[u'foo', u'bar'])
+ >>> idx.set_levels([['a','b'], [1,2]], level=[0,1])
+ MultiIndex(levels=[[u'a', u'b'], [1, 2]],
+ labels=[[0, 0, 1, 1], [0, 1, 0, 1]],
+ names=[u'foo', u'bar'])
+ """
+ if level is not None and not com.is_list_like(level):
+ if not com.is_list_like(levels):
+ raise TypeError("Levels must be list-like")
+ if com.is_list_like(levels[0]):
+ raise TypeError("Levels must be list-like")
+ level = [level]
+ levels = [levels]
+ elif level is None or com.is_list_like(level):
+ if not com.is_list_like(levels) or not com.is_list_like(levels[0]):
+ raise TypeError("Levels must be list of lists-like")
+
if inplace:
idx = self
else:
idx = self._shallow_copy()
idx._reset_identity()
- idx._set_levels(levels, validate=True,
+ idx._set_levels(levels, level=level, validate=True,
verify_integrity=verify_integrity)
if not inplace:
return idx
@@ -2280,27 +2358,42 @@ def set_levels(self, levels, inplace=False, verify_integrity=True):
def _get_labels(self):
return self._labels
- def _set_labels(self, labels, copy=False, validate=True,
+ def _set_labels(self, labels, level=None, copy=False, validate=True,
verify_integrity=False):
- if validate and len(labels) != self.nlevels:
- raise ValueError("Length of labels must match length of levels")
- self._labels = FrozenList(
- _ensure_frozen(labs, copy=copy)._shallow_copy() for labs in labels)
+
+ if validate and level is None and len(labels) != self.nlevels:
+ raise ValueError("Length of labels must match number of levels")
+ if validate and level is not None and len(labels) != len(level):
+ raise ValueError('Length of labels must match length of levels.')
+
+ if level is None:
+ new_labels = FrozenList(_ensure_frozen(v, copy=copy)._shallow_copy()
+ for v in labels)
+ else:
+ level = [self._get_level_number(l) for l in level]
+ new_labels = list(self._labels)
+ for l, v in zip(level, labels):
+ new_labels[l] = _ensure_frozen(v, copy=copy)._shallow_copy()
+ new_labels = FrozenList(new_labels)
+
+ self._labels = new_labels
self._tuples = None
self._reset_cache()
if verify_integrity:
self._verify_integrity()
- def set_labels(self, labels, inplace=False, verify_integrity=True):
+ def set_labels(self, labels, level=None, inplace=False, verify_integrity=True):
"""
Set new labels on MultiIndex. Defaults to returning
new index.
Parameters
----------
- labels : sequence of arrays
+ labels : sequence or list of sequence
new labels to apply
+ level : int or level name, or sequence of int / level names (default None)
+ level(s) to set (None for all levels)
inplace : bool
if True, mutates in place
verify_integrity : bool (default True)
@@ -2309,15 +2402,46 @@ def set_labels(self, labels, inplace=False, verify_integrity=True):
Returns
-------
new index (of same type and class...etc)
- """
- if not com.is_list_like(labels) or not com.is_list_like(labels[0]):
- raise TypeError("Labels must be list of lists-like")
+
+ Examples
+ --------
+ >>> idx = MultiIndex.from_tuples([(1, u'one'), (1, u'two'),
+ (2, u'one'), (2, u'two')],
+ names=['foo', 'bar'])
+ >>> idx.set_labels([[1,0,1,0], [0,0,1,1]])
+ MultiIndex(levels=[[1, 2], [u'one', u'two']],
+ labels=[[1, 0, 1, 0], [0, 0, 1, 1]],
+ names=[u'foo', u'bar'])
+ >>> idx.set_labels([1,0,1,0], level=0)
+ MultiIndex(levels=[[1, 2], [u'one', u'two']],
+ labels=[[1, 0, 1, 0], [0, 1, 0, 1]],
+ names=[u'foo', u'bar'])
+ >>> idx.set_labels([0,0,1,1], level='bar')
+ MultiIndex(levels=[[1, 2], [u'one', u'two']],
+ labels=[[0, 0, 1, 1], [0, 0, 1, 1]],
+ names=[u'foo', u'bar'])
+ >>> idx.set_labels([[1,0,1,0], [0,0,1,1]], level=[0,1])
+ MultiIndex(levels=[[1, 2], [u'one', u'two']],
+ labels=[[1, 0, 1, 0], [0, 0, 1, 1]],
+ names=[u'foo', u'bar'])
+ """
+ if level is not None and not com.is_list_like(level):
+ if not com.is_list_like(labels):
+ raise TypeError("Labels must be list-like")
+ if com.is_list_like(labels[0]):
+ raise TypeError("Labels must be list-like")
+ level = [level]
+ labels = [labels]
+ elif level is None or com.is_list_like(level):
+ if not com.is_list_like(labels) or not com.is_list_like(labels[0]):
+ raise TypeError("Labels must be list of lists-like")
+
if inplace:
idx = self
else:
idx = self._shallow_copy()
idx._reset_identity()
- idx._set_labels(labels, verify_integrity=verify_integrity)
+ idx._set_labels(labels, level=level, verify_integrity=verify_integrity)
if not inplace:
return idx
@@ -2434,18 +2558,30 @@ def __len__(self):
def _get_names(self):
return FrozenList(level.name for level in self.levels)
- def _set_names(self, values, validate=True):
+ def _set_names(self, names, level=None, validate=True):
"""
sets names on levels. WARNING: mutates!
Note that you generally want to set this *after* changing levels, so
- that it only acts on copies"""
- values = list(values)
- if validate and len(values) != self.nlevels:
- raise ValueError('Length of names must match length of levels')
+ that it only acts on copies
+ """
+
+ names = list(names)
+
+ if validate and level is not None and len(names) != len(level):
+ raise ValueError('Length of names must match length of level.')
+ if validate and level is None and len(names) != self.nlevels:
+ raise ValueError(
+ 'Length of names must match number of levels in MultiIndex.')
+
+ if level is None:
+ level = range(self.nlevels)
+ else:
+ level = [self._get_level_number(l) for l in level]
+
# set the name
- for name, level in zip(values, self.levels):
- level.rename(name, inplace=True)
+ for l, name in zip(level, names):
+ self.levels[l].rename(name, inplace=True)
names = property(
fset=_set_names, fget=_get_names, doc="Names of levels in MultiIndex")
diff --git a/pandas/tests/test_index.py b/pandas/tests/test_index.py
index a8486beb57042..8b1f6ce3e7f45 100644
--- a/pandas/tests/test_index.py
+++ b/pandas/tests/test_index.py
@@ -70,9 +70,11 @@ def test_set_name_methods(self):
self.assertIsNone(res)
self.assertEqual(ind.name, new_name)
self.assertEqual(ind.names, [new_name])
- with assertRaisesRegexp(TypeError, "list-like"):
- # should still fail even if it would be the right length
- ind.set_names("a")
+ #with assertRaisesRegexp(TypeError, "list-like"):
+ # # should still fail even if it would be the right length
+ # ind.set_names("a")
+ with assertRaisesRegexp(ValueError, "Level must be None"):
+ ind.set_names("a", level=0)
# rename in place just leaves tuples and other containers alone
name = ('A', 'B')
ind = self.intIndex
@@ -1509,15 +1511,30 @@ def test_set_names_and_rename(self):
self.assertIsNone(res)
self.assertEqual(ind.names, new_names2)
- def test_set_levels_and_set_labels(self):
+ # set names for specific level (# GH7792)
+ ind = self.index.set_names(new_names[0], level=0)
+ self.assertEqual(self.index.names, self.index_names)
+ self.assertEqual(ind.names, [new_names[0], self.index_names[1]])
+
+ res = ind.set_names(new_names2[0], level=0, inplace=True)
+ self.assertIsNone(res)
+ self.assertEqual(ind.names, [new_names2[0], self.index_names[1]])
+
+ # set names for multiple levels
+ ind = self.index.set_names(new_names, level=[0, 1])
+ self.assertEqual(self.index.names, self.index_names)
+ self.assertEqual(ind.names, new_names)
+
+ res = ind.set_names(new_names2, level=[0, 1], inplace=True)
+ self.assertIsNone(res)
+ self.assertEqual(ind.names, new_names2)
+
+
+ def test_set_levels(self):
# side note - you probably wouldn't want to use levels and labels
# directly like this - but it is possible.
levels, labels = self.index.levels, self.index.labels
new_levels = [[lev + 'a' for lev in level] for level in levels]
- major_labels, minor_labels = labels
- major_labels = [(x + 1) % 3 for x in major_labels]
- minor_labels = [(x + 1) % 1 for x in minor_labels]
- new_labels = [major_labels, minor_labels]
def assert_matching(actual, expected):
# avoid specifying internal representation
@@ -1539,6 +1556,58 @@ def assert_matching(actual, expected):
self.assertIsNone(inplace_return)
assert_matching(ind2.levels, new_levels)
+ # level changing specific level [w/o mutation]
+ ind2 = self.index.set_levels(new_levels[0], level=0)
+ assert_matching(ind2.levels, [new_levels[0], levels[1]])
+ assert_matching(self.index.levels, levels)
+
+ ind2 = self.index.set_levels(new_levels[1], level=1)
+ assert_matching(ind2.levels, [levels[0], new_levels[1]])
+ assert_matching(self.index.levels, levels)
+
+ # level changing multiple levels [w/o mutation]
+ ind2 = self.index.set_levels(new_levels, level=[0, 1])
+ assert_matching(ind2.levels, new_levels)
+ assert_matching(self.index.levels, levels)
+
+ # level changing specific level [w/ mutation]
+ ind2 = self.index.copy()
+ inplace_return = ind2.set_levels(new_levels[0], level=0, inplace=True)
+ self.assertIsNone(inplace_return)
+ assert_matching(ind2.levels, [new_levels[0], levels[1]])
+ assert_matching(self.index.levels, levels)
+
+ ind2 = self.index.copy()
+ inplace_return = ind2.set_levels(new_levels[1], level=1, inplace=True)
+ self.assertIsNone(inplace_return)
+ assert_matching(ind2.levels, [levels[0], new_levels[1]])
+ assert_matching(self.index.levels, levels)
+
+ # level changing multiple levels [w/ mutation]
+ ind2 = self.index.copy()
+ inplace_return = ind2.set_levels(new_levels, level=[0, 1], inplace=True)
+ self.assertIsNone(inplace_return)
+ assert_matching(ind2.levels, new_levels)
+ assert_matching(self.index.levels, levels)
+
+ def test_set_labels(self):
+ # side note - you probably wouldn't want to use levels and labels
+ # directly like this - but it is possible.
+ levels, labels = self.index.levels, self.index.labels
+ major_labels, minor_labels = labels
+ major_labels = [(x + 1) % 3 for x in major_labels]
+ minor_labels = [(x + 1) % 1 for x in minor_labels]
+ new_labels = [major_labels, minor_labels]
+
+ def assert_matching(actual, expected):
+ # avoid specifying internal representation
+ # as much as possible
+ self.assertEqual(len(actual), len(expected))
+ for act, exp in zip(actual, expected):
+ act = np.asarray(act)
+ exp = np.asarray(exp)
+ assert_almost_equal(act, exp)
+
# label changing [w/o mutation]
ind2 = self.index.set_labels(new_labels)
assert_matching(ind2.labels, new_labels)
@@ -1550,6 +1619,40 @@ def assert_matching(actual, expected):
self.assertIsNone(inplace_return)
assert_matching(ind2.labels, new_labels)
+ # label changing specific level [w/o mutation]
+ ind2 = self.index.set_labels(new_labels[0], level=0)
+ assert_matching(ind2.labels, [new_labels[0], labels[1]])
+ assert_matching(self.index.labels, labels)
+
+ ind2 = self.index.set_labels(new_labels[1], level=1)
+ assert_matching(ind2.labels, [labels[0], new_labels[1]])
+ assert_matching(self.index.labels, labels)
+
+ # label changing multiple levels [w/o mutation]
+ ind2 = self.index.set_labels(new_labels, level=[0, 1])
+ assert_matching(ind2.labels, new_labels)
+ assert_matching(self.index.labels, labels)
+
+ # label changing specific level [w/ mutation]
+ ind2 = self.index.copy()
+ inplace_return = ind2.set_labels(new_labels[0], level=0, inplace=True)
+ self.assertIsNone(inplace_return)
+ assert_matching(ind2.labels, [new_labels[0], labels[1]])
+ assert_matching(self.index.labels, labels)
+
+ ind2 = self.index.copy()
+ inplace_return = ind2.set_labels(new_labels[1], level=1, inplace=True)
+ self.assertIsNone(inplace_return)
+ assert_matching(ind2.labels, [labels[0], new_labels[1]])
+ assert_matching(self.index.labels, labels)
+
+ # label changing multiple levels [w/ mutation]
+ ind2 = self.index.copy()
+ inplace_return = ind2.set_labels(new_labels, level=[0, 1], inplace=True)
+ self.assertIsNone(inplace_return)
+ assert_matching(ind2.labels, new_labels)
+ assert_matching(self.index.labels, labels)
+
def test_set_levels_labels_names_bad_input(self):
levels, labels = self.index.levels, self.index.labels
names = self.index.names
@@ -1575,6 +1678,27 @@ def test_set_levels_labels_names_bad_input(self):
with tm.assertRaisesRegexp(TypeError, 'list-like'):
self.index.set_names(names[0])
+ # should have equal lengths
+ with tm.assertRaisesRegexp(TypeError, 'list of lists-like'):
+ self.index.set_levels(levels[0], level=[0, 1])
+
+ with tm.assertRaisesRegexp(TypeError, 'list-like'):
+ self.index.set_levels(levels, level=0)
+
+ # should have equal lengths
+ with tm.assertRaisesRegexp(TypeError, 'list of lists-like'):
+ self.index.set_labels(labels[0], level=[0, 1])
+
+ with tm.assertRaisesRegexp(TypeError, 'list-like'):
+ self.index.set_labels(labels, level=0)
+
+ # should have equal lengths
+ with tm.assertRaisesRegexp(ValueError, 'Length of names'):
+ self.index.set_names(names[0], level=[0, 1])
+
+ with tm.assertRaisesRegexp(TypeError, 'string'):
+ self.index.set_names(names, level=0)
+
def test_metadata_immutable(self):
levels, labels = self.index.levels, self.index.labels
# shouldn't be able to set at either the top level or base level
| Closes #7792
New `level` argument:
```
set_names(self, names, level=None, inplace=False)
set_levels(self, levels, level=None, inplace=False, verify_integrity=True)
set_labels(self, labels, level=None, inplace=False, verify_integrity=True)
```
e.g.

```
set_names('foo', level=1)
set_names(['foo','bar'], level=[1,2])
set_levels(['a','b','c'], level=1)
set_levels([['a','b','c'],[1,2,3]], level=[1,2])
```
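A minimal sketch of the new keyword in action (the index values below are illustrative only):

```python
import pandas as pd

idx = pd.MultiIndex.from_product(
    [["a", "b"], [1, 2]], names=["letter", "number"]
)

# Rename a single level by position; the other level's name is untouched.
renamed = idx.set_names("alpha", level=0)

# Replace the values of one level, addressed by its name.
releveled = idx.set_levels(["one", "two"], level="number")
```

Both calls return new indexes by default; the original ``idx`` is left unchanged unless ``inplace=True`` is passed.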
| https://api.github.com/repos/pandas-dev/pandas/pulls/7874 | 2014-07-29T21:07:09Z | 2014-07-30T22:13:57Z | 2014-07-30T22:13:57Z | 2014-07-30T22:38:48Z |
BUG/FIX: groupby should raise on multi-valued filter | diff --git a/doc/source/v0.15.0.txt b/doc/source/v0.15.0.txt
index 9279d8b0288c4..523939b39c580 100644
--- a/doc/source/v0.15.0.txt
+++ b/doc/source/v0.15.0.txt
@@ -336,7 +336,8 @@ Bug Fixes
-
+- Bug in ``GroupBy.filter()`` where fast path vs. slow path made the filter
+ return a non-scalar value that appeared valid but wasn't (:issue:`7870`).
diff --git a/pandas/core/groupby.py b/pandas/core/groupby.py
index eabe1b43004df..93be135e9ff40 100644
--- a/pandas/core/groupby.py
+++ b/pandas/core/groupby.py
@@ -2945,48 +2945,34 @@ def filter(self, func, dropna=True, *args, **kwargs):
>>> grouped = df.groupby(lambda x: mapping[x])
>>> grouped.filter(lambda x: x['A'].sum() + x['B'].sum() > 0)
"""
- from pandas.tools.merge import concat
indices = []
obj = self._selected_obj
gen = self.grouper.get_iterator(obj, axis=self.axis)
- fast_path, slow_path = self._define_paths(func, *args, **kwargs)
-
- path = None
for name, group in gen:
object.__setattr__(group, 'name', name)
- if path is None:
- # Try slow path and fast path.
- try:
- path, res = self._choose_path(fast_path, slow_path, group)
- except Exception: # pragma: no cover
- res = fast_path(group)
- path = fast_path
- else:
- res = path(group)
+ res = func(group)
- def add_indices():
- indices.append(self._get_index(name))
+ try:
+ res = res.squeeze()
+ except AttributeError: # allow e.g., scalars and frames to pass
+ pass
# interpret the result of the filter
- if isinstance(res, (bool, np.bool_)):
- if res:
- add_indices()
+ if (isinstance(res, (bool, np.bool_)) or
+ np.isscalar(res) and isnull(res)):
+ if res and notnull(res):
+ indices.append(self._get_index(name))
else:
- if getattr(res, 'ndim', None) == 1:
- val = res.ravel()[0]
- if val and notnull(val):
- add_indices()
- else:
-
- # in theory you could do .all() on the boolean result ?
- raise TypeError("the filter must return a boolean result")
+ # non scalars aren't allowed
+ raise TypeError("filter function returned a %s, "
+ "but expected a scalar bool" %
+ type(res).__name__)
- filtered = self._apply_filter(indices, dropna)
- return filtered
+ return self._apply_filter(indices, dropna)
class DataFrameGroupBy(NDFrameGroupBy):
diff --git a/pandas/tests/test_groupby.py b/pandas/tests/test_groupby.py
index 5adaacbeb9d29..f958d5481ad33 100644
--- a/pandas/tests/test_groupby.py
+++ b/pandas/tests/test_groupby.py
@@ -3968,6 +3968,32 @@ def test_filter_has_access_to_grouped_cols(self):
filt = g.filter(lambda x: x['A'].sum() == 2)
assert_frame_equal(filt, df.iloc[[0, 1]])
+ def test_filter_enforces_scalarness(self):
+ df = pd.DataFrame([
+ ['best', 'a', 'x'],
+ ['worst', 'b', 'y'],
+ ['best', 'c', 'x'],
+ ['best','d', 'y'],
+ ['worst','d', 'y'],
+ ['worst','d', 'y'],
+ ['best','d', 'z'],
+ ], columns=['a', 'b', 'c'])
+ with tm.assertRaisesRegexp(TypeError, 'filter function returned a.*'):
+ df.groupby('c').filter(lambda g: g['a'] == 'best')
+
+ def test_filter_non_bool_raises(self):
+ df = pd.DataFrame([
+ ['best', 'a', 1],
+ ['worst', 'b', 1],
+ ['best', 'c', 1],
+ ['best','d', 1],
+ ['worst','d', 1],
+ ['worst','d', 1],
+ ['best','d', 1],
+ ], columns=['a', 'b', 'c'])
+ with tm.assertRaisesRegexp(TypeError, 'filter function returned a.*'):
+ df.groupby('a').filter(lambda g: g.c.mean())
+
def test_index_label_overlaps_location(self):
# checking we don't have any label/location confusion in the
# the wake of GH5375
| closes #7870
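A short sketch of the behavior this fixes (data made up): a filter function must return one scalar bool per group, and returning a per-row boolean Series now raises instead of silently using only part of the result.

```python
import pandas as pd

df = pd.DataFrame({"key": ["a", "a", "b"], "val": [1, 2, 30]})

# Valid: the function returns one scalar bool per group.
kept = df.groupby("key").filter(lambda g: g["val"].sum() > 5)

# Invalid: a boolean Series per group is ambiguous and raises.
try:
    df.groupby("key").filter(lambda g: g["val"] > 5)
    raised = False
except TypeError:
    raised = True
```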
| https://api.github.com/repos/pandas-dev/pandas/pulls/7871 | 2014-07-29T17:43:20Z | 2014-07-30T23:15:10Z | 2014-07-30T23:15:10Z | 2014-07-30T23:15:11Z |
BUG: Bug in multi-index slicing with missing indexers (GH7866) | diff --git a/doc/source/v0.15.0.txt b/doc/source/v0.15.0.txt
index b0267c3dc5163..fe5ad52397ee8 100644
--- a/doc/source/v0.15.0.txt
+++ b/doc/source/v0.15.0.txt
@@ -274,7 +274,7 @@ Bug Fixes
- Bug in ``DatetimeIndex`` and ``PeriodIndex`` in-place addition and subtraction cause different result from normal one (:issue:`6527`)
- Bug in adding and subtracting ``PeriodIndex`` with ``PeriodIndex`` raise ``TypeError`` (:issue:`7741`)
- Bug in ``combine_first`` with ``PeriodIndex`` data raises ``TypeError`` (:issue:`3367`)
-
+- Bug in multi-index slicing with missing indexers (:issue:`7866`)
- Bug in pickles contains ``DateOffset`` may raise ``AttributeError`` when ``normalize`` attribute is referred internally (:issue:`7748`)
diff --git a/pandas/core/index.py b/pandas/core/index.py
index 81602d5240a08..cfac0a42eaa75 100644
--- a/pandas/core/index.py
+++ b/pandas/core/index.py
@@ -3638,9 +3638,17 @@ def _convert_indexer(r):
ranges.append(k)
elif com.is_list_like(k):
# a collection of labels to include from this level (these are or'd)
- ranges.append(reduce(
- np.logical_or,[ _convert_indexer(self._get_level_indexer(x, level=i)
- ) for x in k ]))
+ indexers = []
+ for x in k:
+ try:
+ indexers.append(_convert_indexer(self._get_level_indexer(x, level=i)))
+ except (KeyError):
+
+ # ignore not founds
+ continue
+
+ ranges.append(reduce(np.logical_or,indexers))
+
elif _is_null_slice(k):
# empty slice
pass
diff --git a/pandas/tests/test_indexing.py b/pandas/tests/test_indexing.py
index 64e9d18d0aa2f..6a5a433ce3e35 100644
--- a/pandas/tests/test_indexing.py
+++ b/pandas/tests/test_indexing.py
@@ -1941,6 +1941,39 @@ def f():
df.val['X']
self.assertRaises(KeyError, f)
+
+ # GH 7866
+ # multi-index slicing with missing indexers
+ s = pd.Series(np.arange(9),
+ index=pd.MultiIndex.from_product([['A','B','C'],['foo','bar','baz']],
+ names=['one','two'])
+ ).sortlevel()
+
+ expected = pd.Series(np.arange(3),
+ index=pd.MultiIndex.from_product([['A'],['foo','bar','baz']],
+ names=['one','two'])
+ ).sortlevel()
+
+ result = s.loc[['A']]
+ assert_series_equal(result,expected)
+ result = s.loc[['A','D']]
+ assert_series_equal(result,expected)
+
+ # empty series
+ result = s.loc[['D']]
+ expected = s.loc[[]]
+ assert_series_equal(result,expected)
+
+ idx = pd.IndexSlice
+ expected = pd.Series([0,3,6],
+ index=pd.MultiIndex.from_product([['A','B','C'],['foo']],
+ names=['one','two'])
+ ).sortlevel()
+ result = s.loc[idx[:,['foo']]]
+ assert_series_equal(result,expected)
+ result = s.loc[idx[:,['foo','bah']]]
+ assert_series_equal(result,expected)
+
def test_setitem_dtype_upcast(self):
# GH3216
| closes #7866
| https://api.github.com/repos/pandas-dev/pandas/pulls/7867 | 2014-07-29T11:55:44Z | 2014-07-30T23:48:47Z | 2014-07-30T23:48:47Z | 2014-07-31T11:25:56Z |
BUG: Fixed incorrect string length calculation when writing strings in Stata | diff --git a/doc/source/io.rst b/doc/source/io.rst
index 91ffb5091e927..32af1924aee70 100644
--- a/doc/source/io.rst
+++ b/doc/source/io.rst
@@ -3529,6 +3529,13 @@ outside of this range, the data is cast to ``int16``.
Conversion from ``int64`` to ``float64`` may result in a loss of precision
if ``int64`` values are larger than 2**53.
+.. warning::
+ :class:`~pandas.io.stata.StataWriter` and
+ :func:`~pandas.core.frame.DataFrame.to_stata` only support fixed width
+ strings containing up to 244 characters, a limitation imposed by the version
+ 115 dta file format. Attempting to write *Stata* dta files with strings
+ longer than 244 characters raises a ``ValueError``.
+
.. _io.stata_reader:
diff --git a/doc/source/v0.15.0.txt b/doc/source/v0.15.0.txt
index dbdae6ed7144e..e5ba8efd25b02 100644
--- a/doc/source/v0.15.0.txt
+++ b/doc/source/v0.15.0.txt
@@ -119,6 +119,11 @@ API changes
- The ``infer_types`` argument to :func:`~pandas.io.html.read_html` now has no
effect (:issue:`7762`, :issue:`7032`).
+- ``DataFrame.to_stata`` and ``StataWriter`` check string length for
+ compatibility with limitations imposed in dta files where fixed-width
+ strings must contain 244 or fewer characters. Attempting to write Stata
+ dta files with strings longer than 244 characters raises a ``ValueError``. (:issue:`7858`)
+
.. _whatsnew_0150.cat:
@@ -312,7 +317,7 @@ Bug Fixes
- Bug in ``DataFrame.plot`` with ``subplots=True`` may draw unnecessary minor xticks and yticks (:issue:`7801`)
- Bug in ``StataReader`` which did not read variable labels in 117 files due to difference between Stata documentation and implementation (:issue:`7816`)
-
+- Bug in ``StataReader`` where strings were always converted to a fixed width of 244 characters irrespective of the underlying string size (:issue:`7858`)
- Bug in ``expanding_cov``, ``expanding_corr``, ``rolling_cov``, ``rolling_cov``, ``ewmcov``, and ``ewmcorr``
returning results with columns sorted by name and producing an error for non-unique columns;
now handles non-unique columns and returns columns in original order
diff --git a/pandas/io/stata.py b/pandas/io/stata.py
index 3458a95ac096d..5b5ce3e59e16e 100644
--- a/pandas/io/stata.py
+++ b/pandas/io/stata.py
@@ -23,6 +23,7 @@
from pandas.compat import long, lrange, lmap, lzip, text_type, string_types
from pandas import isnull
from pandas.io.common import get_filepath_or_buffer
+from pandas.lib import max_len_string_array, is_string_array
from pandas.tslib import NaT
def read_stata(filepath_or_buffer, convert_dates=True,
@@ -181,6 +182,11 @@ def _datetime_to_stata_elapsed(date, fmt):
raise ValueError("fmt %s not understood" % fmt)
+excessive_string_length_error = """
+Fixed width strings in Stata .dta files are limited to 244 (or fewer) characters.
+Column '%s' does not satisfy this restriction.
+"""
+
class PossiblePrecisionLoss(Warning):
pass
@@ -1040,12 +1046,14 @@ def _dtype_to_stata_type(dtype):
"Please report an error to the developers." % dtype)
-def _dtype_to_default_stata_fmt(dtype):
+def _dtype_to_default_stata_fmt(dtype, column):
"""
Maps numpy dtype to stata's default format for this type. Not terribly
important since users can change this in Stata. Semantics are
string -> "%DDs" where DD is the length of the string
+ object -> "%DDs" where DD is the length of the string, if a string, or 244
+ for anything that cannot be converted to a string.
float64 -> "%10.0g"
float32 -> "%9.0g"
int64 -> "%9.0g"
@@ -1055,9 +1063,21 @@ def _dtype_to_default_stata_fmt(dtype):
"""
#TODO: expand this to handle a default datetime format?
if dtype.type == np.string_:
+ if max_len_string_array(column.values) > 244:
+ raise ValueError(excessive_string_length_error % column.name)
+
return "%" + str(dtype.itemsize) + "s"
elif dtype.type == np.object_:
- return "%244s"
+ try:
+ # Try to use optimal size if available
+ itemsize = max_len_string_array(column.values)
+ except:
+ # Default size
+ itemsize = 244
+ if itemsize > 244:
+ raise ValueError(excessive_string_length_error % column.name)
+
+ return "%" + str(itemsize) + "s"
elif dtype == np.float64:
return "%10.0g"
elif dtype == np.float32:
@@ -1264,7 +1284,9 @@ def __iter__(self):
)
dtypes[key] = np.dtype(new_type)
self.typlist = [_dtype_to_stata_type(dt) for dt in dtypes]
- self.fmtlist = [_dtype_to_default_stata_fmt(dt) for dt in dtypes]
+ self.fmtlist = []
+ for col, dtype in dtypes.iteritems():
+ self.fmtlist.append(_dtype_to_default_stata_fmt(dtype, data[col]))
# set the given format for the datetime cols
if self._convert_dates is not None:
for key in self._convert_dates:
diff --git a/pandas/io/tests/test_stata.py b/pandas/io/tests/test_stata.py
index 5271604235922..459a1fe6c0e89 100644
--- a/pandas/io/tests/test_stata.py
+++ b/pandas/io/tests/test_stata.py
@@ -565,6 +565,30 @@ def test_variable_labels(self):
self.assertTrue(k in keys)
self.assertTrue(v in labels)
+ def test_minimal_size_col(self):
+ str_lens = (1, 100, 244)
+ s = {}
+ for str_len in str_lens:
+ s['s' + str(str_len)] = Series(['a' * str_len, 'b' * str_len, 'c' * str_len])
+ original = DataFrame(s)
+ with tm.ensure_clean() as path:
+ original.to_stata(path, write_index=False)
+ sr = StataReader(path)
+ variables = sr.varlist
+ formats = sr.fmtlist
+ for variable, fmt in zip(variables, formats):
+ self.assertTrue(int(variable[1:]) == int(fmt[1:-1]))
+
+ def test_excessively_long_string(self):
+ str_lens = (1, 244, 500)
+ s = {}
+ for str_len in str_lens:
+ s['s' + str(str_len)] = Series(['a' * str_len, 'b' * str_len, 'c' * str_len])
+ original = DataFrame(s)
+ with tm.assertRaises(ValueError):
+ with tm.ensure_clean() as path:
+ original.to_stata(path)
+
if __name__ == '__main__':
nose.runmodule(argv=[__file__, '-vvs', '-x', '--pdb', '--pdb-failure'],
| Strings were incorrectly written using 244 characters irrespective of the actual
length of the underlying data, due to changes in pandas where the underlying NumPy
datatype of strings is always np.object_ and never np.string_. Closes #7858
| https://api.github.com/repos/pandas-dev/pandas/pulls/7862 | 2014-07-29T06:19:15Z | 2014-08-01T13:30:09Z | 2014-08-01T13:30:09Z | 2014-08-20T15:32:49Z |
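The fix in this PR derives the Stata string width from the data itself rather than from `dtype.itemsize`, since object-dtype columns have no fixed itemsize. A minimal sketch of that idea, assuming a simplified stand-in for the `max_len_string_array` helper used in the diff (this is illustrative, not the library's actual implementation):

```python
import numpy as np

MAX_STATA_STR = 244  # Stata's limit for str# variables, per the check in the diff


def max_len_string_array(values):
    """Length of the longest string in an object array (simplified stand-in
    for the pandas helper of the same name used in the diff)."""
    return max((len(v) for v in values if isinstance(v, str)), default=0)


def stata_string_fmt(values):
    """Pick the narrowest %#s format; raise if Stata's 244-char limit is exceeded."""
    itemsize = max_len_string_array(np.asarray(values, dtype=object))
    if itemsize > MAX_STATA_STR:
        raise ValueError("string column is too long for a Stata str# variable")
    return "%" + str(max(itemsize, 1)) + "s"


print(stata_string_fmt(["a", "abc"]))  # → %3s
```

This mirrors the two code paths in the diff: `np.string_` columns are only validated against the 244 limit, while `np.object_` columns get a per-column optimal width.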
BUG: Check the first element of "others.values" rather than "others". | diff --git a/pandas/core/strings.py b/pandas/core/strings.py
index 3e730942ffc0e..16ad471d37e85 100644
--- a/pandas/core/strings.py
+++ b/pandas/core/strings.py
@@ -12,7 +12,7 @@
def _get_array_list(arr, others):
- if len(others) and isinstance(others[0], (list, np.ndarray)):
+ if len(others) and isinstance(others.values[0], (list, np.ndarray)):
arrays = [arr] + list(others)
else:
arrays = [arr, others]
| Closes #7857
| https://api.github.com/repos/pandas-dev/pandas/pulls/7859 | 2014-07-28T15:14:58Z | 2014-08-02T07:23:53Z | null | 2014-08-02T13:04:33Z |
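The one-line change in this PR matters because `others[0]` on a Series is a *label* lookup, which fails or misfires when the Series does not have a default integer index, while `others.values[0]` is always positional. A small illustration (assuming pandas is available):

```python
import pandas as pd

# A Series whose index does not contain the label 0
s = pd.Series([["a", "b"], ["c"]], index=[10, 20])

print(s.values[0])  # → ['a', 'b']  (first element by position)
# s[0] would look up the *label* 0 and raise a KeyError here,
# which is exactly why the fix indexes into .values instead.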
BUG/API: allow get_indexer to work with nans | diff --git a/pandas/core/index.py b/pandas/core/index.py
index 81602d5240a08..da4defcecaa99 100644
--- a/pandas/core/index.py
+++ b/pandas/core/index.py
@@ -2133,6 +2133,12 @@ def isin(self, values):
return lib.ismember_nans(self._array_values(), value_set,
isnull(list(value_set)).any())
+ def get_indexer(self, values, method=None, limit=None):
+ result = super(Float64Index, self).get_indexer(values, method=method,
+ limit=limit)
+ result[result == -1] = self._nan_idxs
+ return result
+
class MultiIndex(Index):
diff --git a/pandas/tests/test_index.py b/pandas/tests/test_index.py
index a8486beb57042..b7263cc96855c 100644
--- a/pandas/tests/test_index.py
+++ b/pandas/tests/test_index.py
@@ -1049,6 +1049,18 @@ def test_astype_from_object(self):
tm.assert_equal(result.dtype, expected.dtype)
tm.assert_index_equal(result, expected)
+ def test_get_indexer_nans(self):
+ index = Index([1, 2, np.nan])
+ result = index.get_indexer([np.nan])
+ np.testing.assert_array_equal(result, np.array([2]))
+
+ index = Index([1, np.nan, 2, np.nan])
+ with tm.assertRaisesRegexp(InvalidIndexError, 'Reindexing.*valid.*'):
+ index.get_indexer([np.nan])
+
+ index = Index([1, np.nan, 2])
+ result = index.get_indexer([np.nan, np.nan])
+ np.testing.assert_array_equal(result, np.array([1, 1]))
class TestInt64Index(tm.TestCase):
_multiprocess_can_split_ = True
| - [ ] clear the FIXME in Categorical in #7820 (comment)
closes #7820
| https://api.github.com/repos/pandas-dev/pandas/pulls/7855 | 2014-07-27T16:23:32Z | 2015-04-08T15:03:14Z | null | 2022-10-13T00:16:04Z |
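The `Float64Index.get_indexer` override in this PR exists because `NaN != NaN`, so the generic hash-based lookup reports NaN targets as missing (-1). A rough sketch of the repair step, with `get_indexer_with_nan` as a hypothetical simplified version (the real code patches the -1 slots with a cached `_nan_idxs`):

```python
import numpy as np


def get_indexer_with_nan(index_values, targets):
    """Locate targets in index_values, treating NaN as findable."""
    index_values = np.asarray(index_values, dtype=float)
    targets = np.asarray(targets, dtype=float)

    nan_pos = np.flatnonzero(np.isnan(index_values))
    if len(nan_pos) > 1:
        # mirrors the InvalidIndexError case exercised in the test above
        raise ValueError("Reindexing only valid with a unique index")

    result = np.full(len(targets), -1, dtype=np.intp)
    for i, t in enumerate(targets):
        if np.isnan(t):
            if len(nan_pos):
                result[i] = nan_pos[0]
        else:
            hits = np.flatnonzero(index_values == t)
            if len(hits):
                result[i] = hits[0]
    return result


print(get_indexer_with_nan([1, 2, np.nan], [np.nan]))  # → [2]
```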
CLN: Clean tslib, frequencies import | diff --git a/pandas/tools/plotting.py b/pandas/tools/plotting.py
index ea7f963f79f28..c40ff67789b45 100644
--- a/pandas/tools/plotting.py
+++ b/pandas/tools/plotting.py
@@ -16,7 +16,7 @@
from pandas.core.series import Series, remove_na
from pandas.tseries.index import DatetimeIndex
from pandas.tseries.period import PeriodIndex, Period
-from pandas.tseries.frequencies import get_period_alias, get_base_alias
+import pandas.tseries.frequencies as frequencies
from pandas.tseries.offsets import DateOffset
from pandas.compat import range, lrange, lmap, map, zip, string_types
import pandas.compat as compat
@@ -1504,8 +1504,8 @@ def _is_dynamic_freq(self, freq):
if isinstance(freq, DateOffset):
freq = freq.rule_code
else:
- freq = get_base_alias(freq)
- freq = get_period_alias(freq)
+ freq = frequencies.get_base_alias(freq)
+ freq = frequencies.get_period_alias(freq)
return freq is not None and self._no_base(freq)
def _no_base(self, freq):
@@ -1513,10 +1513,9 @@ def _no_base(self, freq):
from pandas.core.frame import DataFrame
if (isinstance(self.data, (Series, DataFrame))
and isinstance(self.data.index, DatetimeIndex)):
- import pandas.tseries.frequencies as freqmod
- base = freqmod.get_freq(freq)
+ base = frequencies.get_freq(freq)
x = self.data.index
- if (base <= freqmod.FreqGroup.FR_DAY):
+ if (base <= frequencies.FreqGroup.FR_DAY):
return x[:1].is_normalized
return Period(x[0], freq).to_timestamp(tz=x.tz) == x[0]
@@ -1632,8 +1631,8 @@ def _maybe_convert_index(self, data):
freq = getattr(data.index, 'inferred_freq', None)
if isinstance(freq, DateOffset):
freq = freq.rule_code
- freq = get_base_alias(freq)
- freq = get_period_alias(freq)
+ freq = frequencies.get_base_alias(freq)
+ freq = frequencies.get_period_alias(freq)
if freq is None:
ax = self._get_ax(0)
diff --git a/pandas/tseries/index.py b/pandas/tseries/index.py
index 4aa424ea08031..518bb4180ec89 100644
--- a/pandas/tseries/index.py
+++ b/pandas/tseries/index.py
@@ -14,7 +14,7 @@
from pandas.compat import u
from pandas.tseries.frequencies import (
infer_freq, to_offset, get_period_alias,
- Resolution, get_reso_string, _tz_convert_with_transitions)
+ Resolution, _tz_convert_with_transitions)
from pandas.core.base import DatetimeIndexOpsMixin
from pandas.tseries.offsets import DateOffset, generate_range, Tick, CDay
from pandas.tseries.tools import parse_time_string, normalize_date
@@ -291,7 +291,7 @@ def __new__(cls, data=None,
tz = subarr.tz
else:
if tz is not None:
- tz = tools._maybe_get_tz(tz)
+ tz = tslib.maybe_get_tz(tz)
if (not isinstance(data, DatetimeIndex) or
getattr(data, 'tz', None) is None):
@@ -361,10 +361,14 @@ def _generate(cls, start, end, periods, name, offset,
raise ValueError('Start and end cannot both be tz-aware with '
'different timezones')
- inferred_tz = tools._maybe_get_tz(inferred_tz)
+ inferred_tz = tslib.maybe_get_tz(inferred_tz)
# these may need to be localized
- tz = tools._maybe_get_tz(tz, start or end)
+ tz = tslib.maybe_get_tz(tz)
+ if tz is not None:
+ date = start or end
+ if date.tzinfo is not None and hasattr(tz, 'localize'):
+ tz = tz.localize(date.replace(tzinfo=None)).tzinfo
if tz is not None and inferred_tz is not None:
if not inferred_tz == tz:
@@ -477,7 +481,7 @@ def _simple_new(cls, values, name, freq=None, tz=None):
result = values.view(cls)
result.name = name
result.offset = freq
- result.tz = tools._maybe_get_tz(tz)
+ result.tz = tslib.maybe_get_tz(tz)
return result
@@ -1620,7 +1624,7 @@ def tz_convert(self, tz):
-------
normalized : DatetimeIndex
"""
- tz = tools._maybe_get_tz(tz)
+ tz = tslib.maybe_get_tz(tz)
if self.tz is None:
# tz naive, use tz_localize
@@ -1648,7 +1652,7 @@ def tz_localize(self, tz, infer_dst=False):
"""
if self.tz is not None:
raise TypeError("Already tz-aware, use tz_convert to convert.")
- tz = tools._maybe_get_tz(tz)
+ tz = tslib.maybe_get_tz(tz)
# Convert to UTC
new_dates = tslib.tz_localize_to_utc(self.asi8, tz, infer_dst=infer_dst)
diff --git a/pandas/tseries/period.py b/pandas/tseries/period.py
index 887bf806dd4e4..7f865fd9aefa8 100644
--- a/pandas/tseries/period.py
+++ b/pandas/tseries/period.py
@@ -5,12 +5,11 @@
import numpy as np
from pandas.core.base import PandasObject
-from pandas.tseries.frequencies import (get_freq_code as _gfc,
- _month_numbers, FreqGroup)
+import pandas.tseries.frequencies as frequencies
+from pandas.tseries.frequencies import get_freq_code as _gfc
from pandas.tseries.index import DatetimeIndex, Int64Index, Index
from pandas.core.base import DatetimeIndexOpsMixin
from pandas.tseries.tools import parse_time_string
-import pandas.tseries.frequencies as _freq_mod
import pandas.core.common as com
from pandas.core.common import (isnull, _INT64_DTYPE, _maybe_box,
@@ -116,7 +115,7 @@ def __init__(self, value=None, freq=None, ordinal=None,
dt, _, reso = parse_time_string(value, freq)
if freq is None:
try:
- freq = _freq_mod.Resolution.get_freq(reso)
+ freq = frequencies.Resolution.get_freq(reso)
except KeyError:
raise ValueError("Invalid frequency or could not infer: %s" % reso)
@@ -142,7 +141,7 @@ def __init__(self, value=None, freq=None, ordinal=None,
dt.hour, dt.minute, dt.second, dt.microsecond, 0,
base)
- self.freq = _freq_mod._get_freq_str(base)
+ self.freq = frequencies._get_freq_str(base)
def __eq__(self, other):
if isinstance(other, Period):
@@ -267,7 +266,7 @@ def to_timestamp(self, freq=None, how='start', tz=None):
if freq is None:
base, mult = _gfc(self.freq)
- freq = _freq_mod.get_to_timestamp_base(base)
+ freq = frequencies.get_to_timestamp_base(base)
base, mult = _gfc(freq)
val = self.asfreq(freq, how)
@@ -296,7 +295,7 @@ def now(cls, freq=None):
def __repr__(self):
base, mult = _gfc(self.freq)
formatted = tslib.period_format(self.ordinal, base)
- freqstr = _freq_mod._reverse_period_code_map[base]
+ freqstr = frequencies._reverse_period_code_map[base]
if not compat.PY3:
encoding = com.get_option("display.encoding")
@@ -577,7 +576,7 @@ def __new__(cls, data=None, ordinal=None, freq=None, start=None, end=None,
quarter=None, day=None, hour=None, minute=None, second=None,
tz=None):
- freq = _freq_mod.get_standard_freq(freq)
+ freq = frequencies.get_standard_freq(freq)
if periods is not None:
if com.is_float(periods):
@@ -767,7 +766,7 @@ def freqstr(self):
def asfreq(self, freq=None, how='E'):
how = _validate_end_alias(how)
- freq = _freq_mod.get_standard_freq(freq)
+ freq = frequencies.get_standard_freq(freq)
base1, mult1 = _gfc(self.freq)
base2, mult2 = _gfc(freq)
@@ -845,7 +844,7 @@ def to_timestamp(self, freq=None, how='start'):
if freq is None:
base, mult = _gfc(self.freq)
- freq = _freq_mod.get_to_timestamp_base(base)
+ freq = frequencies.get_to_timestamp_base(base)
base, mult = _gfc(freq)
new_data = self.asfreq(freq, how)
@@ -889,8 +888,8 @@ def get_value(self, series, key):
except (KeyError, IndexError):
try:
asdt, parsed, reso = parse_time_string(key, self.freq)
- grp = _freq_mod._infer_period_group(reso)
- freqn = _freq_mod._period_group(self.freq)
+ grp = frequencies._infer_period_group(reso)
+ freqn = frequencies._period_group(self.freq)
vals = self.values
@@ -978,8 +977,8 @@ def _get_string_slice(self, key):
key, parsed, reso = parse_time_string(key, self.freq)
- grp = _freq_mod._infer_period_group(reso)
- freqn = _freq_mod._period_group(self.freq)
+ grp = frequencies._infer_period_group(reso)
+ freqn = frequencies._period_group(self.freq)
if reso == 'year':
t1 = Period(year=parsed.year, freq='A')
@@ -1216,12 +1215,12 @@ def _range_from_fields(year=None, month=None, quarter=None, day=None,
if quarter is not None:
if freq is None:
freq = 'Q'
- base = FreqGroup.FR_QTR
+ base = frequencies.FreqGroup.FR_QTR
else:
base, mult = _gfc(freq)
if mult != 1:
raise ValueError('Only mult == 1 supported')
- if base != FreqGroup.FR_QTR:
+ if base != frequencies.FreqGroup.FR_QTR:
raise AssertionError("base must equal FR_QTR")
year, quarter = _make_field_arrays(year, quarter)
@@ -1273,7 +1272,7 @@ def _quarter_to_myear(year, quarter, freq):
if quarter <= 0 or quarter > 4:
raise ValueError('Quarter must be 1 <= q <= 4')
- mnum = _month_numbers[_freq_mod._get_rule_month(freq)] + 1
+ mnum = frequencies._month_numbers[frequencies._get_rule_month(freq)] + 1
month = (mnum + (quarter - 1) * 3) % 12 + 1
if month > mnum:
year -= 1
diff --git a/pandas/tseries/tests/test_frequencies.py b/pandas/tseries/tests/test_frequencies.py
index 10a8286f4bec9..24deb8a298688 100644
--- a/pandas/tseries/tests/test_frequencies.py
+++ b/pandas/tseries/tests/test_frequencies.py
@@ -9,9 +9,9 @@
from pandas import Index, DatetimeIndex, Timestamp, Series, date_range, period_range
-from pandas.tseries.frequencies import to_offset, infer_freq
+import pandas.tseries.frequencies as frequencies
from pandas.tseries.tools import to_datetime
-import pandas.tseries.frequencies as fmod
+
import pandas.tseries.offsets as offsets
from pandas.tseries.period import PeriodIndex
import pandas.compat as compat
@@ -23,40 +23,40 @@ def test_to_offset_multiple():
freqstr = '2h30min'
freqstr2 = '2h 30min'
- result = to_offset(freqstr)
- assert(result == to_offset(freqstr2))
+ result = frequencies.to_offset(freqstr)
+ assert(result == frequencies.to_offset(freqstr2))
expected = offsets.Minute(150)
assert(result == expected)
freqstr = '2h30min15s'
- result = to_offset(freqstr)
+ result = frequencies.to_offset(freqstr)
expected = offsets.Second(150 * 60 + 15)
assert(result == expected)
freqstr = '2h 60min'
- result = to_offset(freqstr)
+ result = frequencies.to_offset(freqstr)
expected = offsets.Hour(3)
assert(result == expected)
freqstr = '15l500u'
- result = to_offset(freqstr)
+ result = frequencies.to_offset(freqstr)
expected = offsets.Micro(15500)
assert(result == expected)
freqstr = '10s75L'
- result = to_offset(freqstr)
+ result = frequencies.to_offset(freqstr)
expected = offsets.Milli(10075)
assert(result == expected)
if not _np_version_under1p7:
freqstr = '2800N'
- result = to_offset(freqstr)
+ result = frequencies.to_offset(freqstr)
expected = offsets.Nano(2800)
assert(result == expected)
# malformed
try:
- to_offset('2h20m')
+ frequencies.to_offset('2h20m')
except ValueError:
pass
else:
@@ -65,31 +65,31 @@ def test_to_offset_multiple():
def test_to_offset_negative():
freqstr = '-1S'
- result = to_offset(freqstr)
+ result = frequencies.to_offset(freqstr)
assert(result.n == -1)
freqstr = '-5min10s'
- result = to_offset(freqstr)
+ result = frequencies.to_offset(freqstr)
assert(result.n == -310)
def test_to_offset_leading_zero():
freqstr = '00H 00T 01S'
- result = to_offset(freqstr)
+ result = frequencies.to_offset(freqstr)
assert(result.n == 1)
freqstr = '-00H 03T 14S'
- result = to_offset(freqstr)
+ result = frequencies.to_offset(freqstr)
assert(result.n == -194)
def test_anchored_shortcuts():
- result = to_offset('W')
- expected = to_offset('W-SUN')
+ result = frequencies.to_offset('W')
+ expected = frequencies.to_offset('W-SUN')
assert(result == expected)
- result = to_offset('Q')
- expected = to_offset('Q-DEC')
+ result = frequencies.to_offset('Q')
+ expected = frequencies.to_offset('Q-DEC')
assert(result == expected)
@@ -100,26 +100,26 @@ class TestFrequencyInference(tm.TestCase):
def test_raise_if_period_index(self):
index = PeriodIndex(start="1/1/1990", periods=20, freq="M")
- self.assertRaises(TypeError, infer_freq, index)
+ self.assertRaises(TypeError, frequencies.infer_freq, index)
def test_raise_if_too_few(self):
index = _dti(['12/31/1998', '1/3/1999'])
- self.assertRaises(ValueError, infer_freq, index)
+ self.assertRaises(ValueError, frequencies.infer_freq, index)
def test_business_daily(self):
index = _dti(['12/31/1998', '1/3/1999', '1/4/1999'])
- self.assertEqual(infer_freq(index), 'B')
+ self.assertEqual(frequencies.infer_freq(index), 'B')
def test_day(self):
self._check_tick(timedelta(1), 'D')
def test_day_corner(self):
index = _dti(['1/1/2000', '1/2/2000', '1/3/2000'])
- self.assertEqual(infer_freq(index), 'D')
+ self.assertEqual(frequencies.infer_freq(index), 'D')
def test_non_datetimeindex(self):
dates = to_datetime(['1/1/2000', '1/2/2000', '1/3/2000'])
- self.assertEqual(infer_freq(dates), 'D')
+ self.assertEqual(frequencies.infer_freq(dates), 'D')
def test_hour(self):
self._check_tick(timedelta(hours=1), 'H')
@@ -149,15 +149,15 @@ def _check_tick(self, base_delta, code):
exp_freq = '%d%s' % (i, code)
else:
exp_freq = code
- self.assertEqual(infer_freq(index), exp_freq)
+ self.assertEqual(frequencies.infer_freq(index), exp_freq)
index = _dti([b + base_delta * 7] +
[b + base_delta * j for j in range(3)])
- self.assertIsNone(infer_freq(index))
+ self.assertIsNone(frequencies.infer_freq(index))
index = _dti([b + base_delta * j for j in range(3)] +
[b + base_delta * 7])
- self.assertIsNone(infer_freq(index))
+ self.assertIsNone(frequencies.infer_freq(index))
def test_weekly(self):
days = ['MON', 'TUE', 'WED', 'THU', 'FRI', 'SAT', 'SUN']
@@ -175,7 +175,7 @@ def test_week_of_month(self):
def test_week_of_month_fake(self):
#All of these dates are on same day of week and are 4 or 5 weeks apart
index = DatetimeIndex(["2013-08-27","2013-10-01","2013-10-29","2013-11-26"])
- assert infer_freq(index) != 'WOM-4TUE'
+ assert frequencies.infer_freq(index) != 'WOM-4TUE'
def test_monthly(self):
self._check_generated_range('1/1/2000', 'M')
@@ -212,9 +212,9 @@ def _check_generated_range(self, start, freq):
gen = date_range(start, periods=7, freq=freq)
index = _dti(gen.values)
if not freq.startswith('Q-'):
- self.assertEqual(infer_freq(index), gen.freqstr)
+ self.assertEqual(frequencies.infer_freq(index), gen.freqstr)
else:
- inf_freq = infer_freq(index)
+ inf_freq = frequencies.infer_freq(index)
self.assertTrue((inf_freq == 'Q-DEC' and
gen.freqstr in ('Q', 'Q-DEC', 'Q-SEP', 'Q-JUN',
'Q-MAR'))
@@ -228,9 +228,9 @@ def _check_generated_range(self, start, freq):
gen = date_range(start, periods=5, freq=freq)
index = _dti(gen.values)
if not freq.startswith('Q-'):
- self.assertEqual(infer_freq(index), gen.freqstr)
+ self.assertEqual(frequencies.infer_freq(index), gen.freqstr)
else:
- inf_freq = infer_freq(index)
+ inf_freq = frequencies.infer_freq(index)
self.assertTrue((inf_freq == 'Q-DEC' and
gen.freqstr in ('Q', 'Q-DEC', 'Q-SEP', 'Q-JUN',
'Q-MAR'))
@@ -281,7 +281,7 @@ def test_non_datetimeindex(self):
vals = rng.to_pydatetime()
- result = infer_freq(vals)
+ result = frequencies.infer_freq(vals)
self.assertEqual(result, rng.inferred_freq)
def test_invalid_index_types(self):
@@ -290,17 +290,17 @@ def test_invalid_index_types(self):
for i in [ tm.makeIntIndex(10),
tm.makeFloatIndex(10),
tm.makePeriodIndex(10) ]:
- self.assertRaises(TypeError, lambda : infer_freq(i))
+ self.assertRaises(TypeError, lambda : frequencies.infer_freq(i))
for i in [ tm.makeStringIndex(10),
tm.makeUnicodeIndex(10) ]:
- self.assertRaises(ValueError, lambda : infer_freq(i))
+ self.assertRaises(ValueError, lambda : frequencies.infer_freq(i))
def test_string_datetimelike_compat(self):
# GH 6463
- expected = infer_freq(['2004-01', '2004-02', '2004-03', '2004-04'])
- result = infer_freq(Index(['2004-01', '2004-02', '2004-03', '2004-04']))
+ expected = frequencies.infer_freq(['2004-01', '2004-02', '2004-03', '2004-04'])
+ result = frequencies.infer_freq(Index(['2004-01', '2004-02', '2004-03', '2004-04']))
self.assertEqual(result,expected)
def test_series(self):
@@ -311,24 +311,24 @@ def test_series(self):
# invalid type of Series
for s in [ Series(np.arange(10)),
Series(np.arange(10.))]:
- self.assertRaises(TypeError, lambda : infer_freq(s))
+ self.assertRaises(TypeError, lambda : frequencies.infer_freq(s))
# a non-convertible string
- self.assertRaises(ValueError, lambda : infer_freq(Series(['foo','bar'])))
+ self.assertRaises(ValueError, lambda : frequencies.infer_freq(Series(['foo','bar'])))
# cannot infer on PeriodIndex
for freq in [None, 'L', 'Y']:
s = Series(period_range('2013',periods=10,freq=freq))
- self.assertRaises(TypeError, lambda : infer_freq(s))
+ self.assertRaises(TypeError, lambda : frequencies.infer_freq(s))
# DateTimeIndex
for freq in ['M', 'L', 'S']:
s = Series(date_range('20130101',periods=10,freq=freq))
- inferred = infer_freq(s)
+ inferred = frequencies.infer_freq(s)
self.assertEqual(inferred,freq)
s = Series(date_range('20130101','20130110'))
- inferred = infer_freq(s)
+ inferred = frequencies.infer_freq(s)
self.assertEqual(inferred,'D')
MONTHS = ['JAN', 'FEB', 'MAR', 'APR', 'MAY', 'JUN', 'JUL', 'AUG', 'SEP',
@@ -336,20 +336,20 @@ def test_series(self):
def test_is_superperiod_subperiod():
- assert(fmod.is_superperiod(offsets.YearEnd(), offsets.MonthEnd()))
- assert(fmod.is_subperiod(offsets.MonthEnd(), offsets.YearEnd()))
+ assert(frequencies.is_superperiod(offsets.YearEnd(), offsets.MonthEnd()))
+ assert(frequencies.is_subperiod(offsets.MonthEnd(), offsets.YearEnd()))
- assert(fmod.is_superperiod(offsets.Hour(), offsets.Minute()))
- assert(fmod.is_subperiod(offsets.Minute(), offsets.Hour()))
+ assert(frequencies.is_superperiod(offsets.Hour(), offsets.Minute()))
+ assert(frequencies.is_subperiod(offsets.Minute(), offsets.Hour()))
- assert(fmod.is_superperiod(offsets.Second(), offsets.Milli()))
- assert(fmod.is_subperiod(offsets.Milli(), offsets.Second()))
+ assert(frequencies.is_superperiod(offsets.Second(), offsets.Milli()))
+ assert(frequencies.is_subperiod(offsets.Milli(), offsets.Second()))
- assert(fmod.is_superperiod(offsets.Milli(), offsets.Micro()))
- assert(fmod.is_subperiod(offsets.Micro(), offsets.Milli()))
+ assert(frequencies.is_superperiod(offsets.Milli(), offsets.Micro()))
+ assert(frequencies.is_subperiod(offsets.Micro(), offsets.Milli()))
- assert(fmod.is_superperiod(offsets.Micro(), offsets.Nano()))
- assert(fmod.is_subperiod(offsets.Nano(), offsets.Micro()))
+ assert(frequencies.is_superperiod(offsets.Micro(), offsets.Nano()))
+ assert(frequencies.is_subperiod(offsets.Nano(), offsets.Micro()))
if __name__ == '__main__':
diff --git a/pandas/tseries/tests/test_offsets.py b/pandas/tseries/tests/test_offsets.py
index d99cfb254cc48..065aa9236e539 100644
--- a/pandas/tseries/tests/test_offsets.py
+++ b/pandas/tseries/tests/test_offsets.py
@@ -19,7 +19,7 @@
from pandas.tseries.frequencies import _offset_map
from pandas.tseries.index import _to_m8, DatetimeIndex, _daterange_cache, date_range
-from pandas.tseries.tools import parse_time_string, _maybe_get_tz
+from pandas.tseries.tools import parse_time_string
import pandas.tseries.offsets as offsets
from pandas.tslib import NaT, Timestamp
@@ -243,7 +243,7 @@ def _check_offsetfunc_works(self, offset, funcname, dt, expected,
for tz in self.timezones:
expected_localize = expected.tz_localize(tz)
- tz_obj = _maybe_get_tz(tz)
+ tz_obj = tslib.maybe_get_tz(tz)
dt_tz = tslib._localize_pydatetime(dt, tz_obj)
result = func(dt_tz)
diff --git a/pandas/tseries/tests/test_period.py b/pandas/tseries/tests/test_period.py
index f5f66a49c29d4..b9d4dd80438ef 100644
--- a/pandas/tseries/tests/test_period.py
+++ b/pandas/tseries/tests/test_period.py
@@ -15,7 +15,7 @@
from pandas.tseries.period import Period, PeriodIndex, period_range
from pandas.tseries.index import DatetimeIndex, date_range, Index
from pandas.tseries.tools import to_datetime
-import pandas.tseries.period as pmod
+import pandas.tseries.period as period
import pandas.core.datetools as datetools
import pandas as pd
@@ -508,7 +508,7 @@ def test_properties_nat(self):
def test_pnow(self):
dt = datetime.now()
- val = pmod.pnow('D')
+ val = period.pnow('D')
exp = Period(dt, freq='D')
self.assertEqual(val, exp)
diff --git a/pandas/tseries/tests/test_timeseries.py b/pandas/tseries/tests/test_timeseries.py
index f2bc66f156c75..9d5f45735feb5 100644
--- a/pandas/tseries/tests/test_timeseries.py
+++ b/pandas/tseries/tests/test_timeseries.py
@@ -15,7 +15,7 @@
import pandas.core.datetools as datetools
import pandas.tseries.offsets as offsets
import pandas.tseries.tools as tools
-import pandas.tseries.frequencies as fmod
+import pandas.tseries.frequencies as frequencies
import pandas as pd
from pandas.util.testing import assert_series_equal, assert_almost_equal
@@ -28,7 +28,6 @@
import pandas.index as _index
from pandas.compat import range, long, StringIO, lrange, lmap, zip, product
-import pandas.core.datetools as dt
from numpy.random import rand
from numpy.testing import assert_array_equal
from pandas.util.testing import assert_frame_equal
@@ -2961,7 +2960,7 @@ def test_datetimeindex_constructor(self):
edate = datetime(2000, 1, 1)
idx = DatetimeIndex(start=sdate, freq='1B', periods=20)
self.assertEqual(len(idx), 20)
- self.assertEqual(idx[0], sdate + 0 * dt.bday)
+ self.assertEqual(idx[0], sdate + 0 * datetools.bday)
self.assertEqual(idx.freq, 'B')
idx = DatetimeIndex(end=edate, freq=('D', 5), periods=20)
@@ -2971,19 +2970,19 @@ def test_datetimeindex_constructor(self):
idx1 = DatetimeIndex(start=sdate, end=edate, freq='W-SUN')
idx2 = DatetimeIndex(start=sdate, end=edate,
- freq=dt.Week(weekday=6))
+ freq=datetools.Week(weekday=6))
self.assertEqual(len(idx1), len(idx2))
self.assertEqual(idx1.offset, idx2.offset)
idx1 = DatetimeIndex(start=sdate, end=edate, freq='QS')
idx2 = DatetimeIndex(start=sdate, end=edate,
- freq=dt.QuarterBegin(startingMonth=1))
+ freq=datetools.QuarterBegin(startingMonth=1))
self.assertEqual(len(idx1), len(idx2))
self.assertEqual(idx1.offset, idx2.offset)
idx1 = DatetimeIndex(start=sdate, end=edate, freq='BQ')
idx2 = DatetimeIndex(start=sdate, end=edate,
- freq=dt.BQuarterEnd(startingMonth=12))
+ freq=datetools.BQuarterEnd(startingMonth=12))
self.assertEqual(len(idx1), len(idx2))
self.assertEqual(idx1.offset, idx2.offset)
@@ -3474,31 +3473,31 @@ def test_delta_preserve_nanos(self):
self.assertEqual(result.nanosecond, val.nanosecond)
def test_frequency_misc(self):
- self.assertEqual(fmod.get_freq_group('T'),
- fmod.FreqGroup.FR_MIN)
+ self.assertEqual(frequencies.get_freq_group('T'),
+ frequencies.FreqGroup.FR_MIN)
- code, stride = fmod.get_freq_code(offsets.Hour())
- self.assertEqual(code, fmod.FreqGroup.FR_HR)
+ code, stride = frequencies.get_freq_code(offsets.Hour())
+ self.assertEqual(code, frequencies.FreqGroup.FR_HR)
- code, stride = fmod.get_freq_code((5, 'T'))
- self.assertEqual(code, fmod.FreqGroup.FR_MIN)
+ code, stride = frequencies.get_freq_code((5, 'T'))
+ self.assertEqual(code, frequencies.FreqGroup.FR_MIN)
self.assertEqual(stride, 5)
offset = offsets.Hour()
- result = fmod.to_offset(offset)
+ result = frequencies.to_offset(offset)
self.assertEqual(result, offset)
- result = fmod.to_offset((5, 'T'))
+ result = frequencies.to_offset((5, 'T'))
expected = offsets.Minute(5)
self.assertEqual(result, expected)
- self.assertRaises(ValueError, fmod.get_freq_code, (5, 'baz'))
+ self.assertRaises(ValueError, frequencies.get_freq_code, (5, 'baz'))
- self.assertRaises(ValueError, fmod.to_offset, '100foo')
+ self.assertRaises(ValueError, frequencies.to_offset, '100foo')
- self.assertRaises(ValueError, fmod.to_offset, ('', ''))
+ self.assertRaises(ValueError, frequencies.to_offset, ('', ''))
- result = fmod.get_standard_freq(offsets.Hour())
+ result = frequencies.get_standard_freq(offsets.Hour())
self.assertEqual(result, 'H')
def test_hash_equivalent(self):
@@ -3936,7 +3935,7 @@ def test_to_datetime_format_microsecond(self):
val = '01-Apr-2011 00:00:01.978'
format = '%d-%b-%Y %H:%M:%S.%f'
result = to_datetime(val, format=format)
- exp = dt.datetime.strptime(val, format)
+ exp = datetime.strptime(val, format)
self.assertEqual(result, exp)
def test_to_datetime_format_time(self):
diff --git a/pandas/tseries/tools.py b/pandas/tseries/tools.py
index b4ab813d3debe..457a95deb16d9 100644
--- a/pandas/tseries/tools.py
+++ b/pandas/tseries/tools.py
@@ -56,20 +56,6 @@ def _infer(a, b):
return tz
-def _maybe_get_tz(tz, date=None):
- tz = tslib.maybe_get_tz(tz)
- if com.is_integer(tz):
- import pytz
- tz = pytz.FixedOffset(tz / 60)
-
- # localize and get the tz
- if date is not None and tz is not None:
- if date.tzinfo is not None and hasattr(tz,'localize'):
- tz = tz.localize(date.replace(tzinfo=None)).tzinfo
-
- return tz
-
-
def _guess_datetime_format(dt_str, dayfirst=False,
dt_str_parse=compat.parse_date,
dt_str_split=_DATEUTIL_LEXER_SPLIT):
diff --git a/pandas/tslib.pyx b/pandas/tslib.pyx
index 655b92cfe70f3..dc9f3fa258985 100644
--- a/pandas/tslib.pyx
+++ b/pandas/tslib.pyx
@@ -1121,9 +1121,10 @@ cpdef inline object maybe_get_tz(object tz):
tz._filename = zone
else:
tz = pytz.timezone(tz)
- return tz
- else:
- return tz
+ elif util.is_integer_object(tz):
+ tz = pytz.FixedOffset(tz / 60)
+ return tz
+
class OutOfBoundsDatetime(ValueError):
@@ -2223,7 +2224,7 @@ def tz_localize_to_utc(ndarray[int64_t] vals, object tz, bint infer_dst=False):
result_b.fill(NPY_NAT)
# left side
- idx_shifted = _ensure_int64(
+ idx_shifted = ensure_int64(
np.maximum(0, trans.searchsorted(vals - DAY_NS, side='right') - 1))
for i in range(n):
@@ -2235,7 +2236,7 @@ def tz_localize_to_utc(ndarray[int64_t] vals, object tz, bint infer_dst=False):
result_a[i] = v
# right side
- idx_shifted = _ensure_int64(
+ idx_shifted = ensure_int64(
np.maximum(0, trans.searchsorted(vals + DAY_NS, side='right') - 1))
for i in range(n):
@@ -2313,14 +2314,8 @@ def tz_localize_to_utc(ndarray[int64_t] vals, object tz, bint infer_dst=False):
return result
-cdef _ensure_int64(object arr):
- if util.is_array(arr):
- if (<ndarray> arr).descr.type_num == NPY_INT64:
- return arr
- else:
- return arr.astype(np.int64)
- else:
- return np.array(arr, dtype=np.int64)
+import pandas.algos as algos
+ensure_int64 = algos.ensure_int64
cdef inline bisect_right_i8(int64_t *data, int64_t val, Py_ssize_t n):
| Fixed:
- Some modules were imported under inconsistent aliases
- Merged the duplicated functions (`maybe_get_tz` and `ensure_int64`)
| https://api.github.com/repos/pandas-dev/pandas/pulls/7854 | 2014-07-27T13:22:43Z | 2014-07-28T14:57:03Z | 2014-07-28T14:57:03Z | 2014-08-04T13:18:06Z |
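The merged `maybe_get_tz` in this PR accepts either a zone name or an integer offset (`pytz.FixedOffset(tz / 60)` converts seconds to the minutes pytz expects). A hedged, stdlib-only sketch of the same dispatch, using `zoneinfo` and `datetime.timezone` in place of pytz:

```python
from datetime import timedelta, timezone


def maybe_get_tz(tz):
    """Resolve a timezone spec: None and tzinfo objects pass through, a str
    names a zone, and an int is interpreted as a UTC offset in seconds (the
    diff divides by 60 because pytz.FixedOffset takes minutes)."""
    if tz is None or hasattr(tz, "utcoffset"):
        return tz
    if isinstance(tz, str):
        from zoneinfo import ZoneInfo  # stdlib replacement for the pytz lookup
        return ZoneInfo(tz)
    if isinstance(tz, int):
        return timezone(timedelta(seconds=tz))
    raise TypeError("unsupported tz spec: %r" % (tz,))


print(maybe_get_tz(3600))  # → UTC+01:00
```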
BUG: left join on index with multiple matches now works (GH5391) | diff --git a/doc/source/v0.15.0.txt b/doc/source/v0.15.0.txt
index a30400322716c..98186c8fc32b1 100644
--- a/doc/source/v0.15.0.txt
+++ b/doc/source/v0.15.0.txt
@@ -310,6 +310,7 @@ Bug Fixes
- Bug in ``DatetimeIndex.value_counts`` doesn't preserve tz (:issue:`7735`)
- Bug in ``PeriodIndex.value_counts`` results in ``Int64Index`` (:issue:`7735`)
+- Bug in ``DataFrame.join`` when doing left join on index and there are multiple matches (:issue:`5391`)
diff --git a/pandas/src/join.pyx b/pandas/src/join.pyx
index 91102a2fa6a18..4c32aa902d64d 100644
--- a/pandas/src/join.pyx
+++ b/pandas/src/join.pyx
@@ -103,12 +103,18 @@ def left_outer_join(ndarray[int64_t] left, ndarray[int64_t] right,
left_indexer = _get_result_indexer(left_sorter, left_indexer)
right_indexer = _get_result_indexer(right_sorter, right_indexer)
- if not sort:
- if left_sorter.dtype != np.int_:
- left_sorter = left_sorter.astype(np.int_)
-
- rev = np.empty(len(left), dtype=np.int_)
- rev.put(left_sorter, np.arange(len(left)))
+ if not sort: # if not asked to sort, revert to original order
+ if len(left) == len(left_indexer):
+ # no multiple matches for any row on the left
+ # this is a short-cut to avoid np.argsort;
+ # otherwise, the `else` path also works in this case
+ if left_sorter.dtype != np.int_:
+ left_sorter = left_sorter.astype(np.int_)
+
+ rev = np.empty(len(left), dtype=np.int_)
+ rev.put(left_sorter, np.arange(len(left)))
+ else:
+ rev = np.argsort(left_indexer)
right_indexer = right_indexer.take(rev)
left_indexer = left_indexer.take(rev)
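The `rev` logic in the hunk above restores the caller's row order after the join machinery has sorted by key: when every left row matches at most once, `put()` builds the inverse of the sort permutation directly, which equals `np.argsort` of the sorter but is the cheaper short-cut; with 1-to-many matches the general `np.argsort(left_indexer)` path is needed. A tiny demonstration of that identity:

```python
import numpy as np

left_sorter = np.array([2, 0, 3, 1])  # a sort permutation of 4 left rows

# Scatter arange into the sorter's positions: rev becomes the inverse
# permutation, i.e. rev[left_sorter[i]] == i for all i.
rev = np.empty(len(left_sorter), dtype=np.intp)
rev.put(left_sorter, np.arange(len(left_sorter)))

assert np.array_equal(rev, np.argsort(left_sorter))
print(rev)  # → [1 3 0 2]
```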
diff --git a/pandas/tools/merge.py b/pandas/tools/merge.py
index ee594ef031e82..7ad871e78a53b 100644
--- a/pandas/tools/merge.py
+++ b/pandas/tools/merge.py
@@ -235,7 +235,9 @@ def _maybe_add_join_keys(self, result, left_indexer, right_indexer):
key_col.put(na_indexer, com.take_1d(self.left_join_keys[i],
left_na_indexer))
- elif left_indexer is not None:
+ elif left_indexer is not None \
+ and isinstance(self.left_join_keys[i], np.ndarray):
+
if name is None:
name = 'key_%d' % i
@@ -562,9 +564,6 @@ def _get_single_indexer(join_key, index, sort=False):
def _left_join_on_index(left_ax, right_ax, join_keys, sort=False):
- join_index = left_ax
- left_indexer = None
-
if len(join_keys) > 1:
if not ((isinstance(right_ax, MultiIndex) and
len(join_keys) == right_ax.nlevels)):
@@ -573,22 +572,21 @@ def _left_join_on_index(left_ax, right_ax, join_keys, sort=False):
"number of join keys must be the number of "
"levels in right_ax")
- left_tmp, right_indexer = \
- _get_multiindex_indexer(join_keys, right_ax,
- sort=sort)
- if sort:
- left_indexer = left_tmp
- join_index = left_ax.take(left_indexer)
+ left_indexer, right_indexer = \
+ _get_multiindex_indexer(join_keys, right_ax, sort=sort)
else:
jkey = join_keys[0]
- if sort:
- left_indexer, right_indexer = \
- _get_single_indexer(jkey, right_ax, sort=sort)
- join_index = left_ax.take(left_indexer)
- else:
- right_indexer = right_ax.get_indexer(jkey)
- return join_index, left_indexer, right_indexer
+ left_indexer, right_indexer = \
+ _get_single_indexer(jkey, right_ax, sort=sort)
+
+ if sort or len(left_ax) != len(left_indexer):
+ # if asked to sort or there are 1-to-many matches
+ join_index = left_ax.take(left_indexer)
+ return join_index, left_indexer, right_indexer
+ else:
+ # left frame preserves order & length of its index
+ return left_ax, None, right_indexer
def _right_outer_join(x, y, max_groups):
diff --git a/pandas/tools/tests/test_merge.py b/pandas/tools/tests/test_merge.py
index df2f270346e20..151bb82f9c61f 100644
--- a/pandas/tools/tests/test_merge.py
+++ b/pandas/tools/tests/test_merge.py
@@ -958,6 +958,98 @@ def test_left_join_index_preserve_order(self):
right_on=['k1', 'k2'], how='right')
tm.assert_frame_equal(joined.ix[:, expected.columns], expected)
+ def test_left_join_index_multi_match_multiindex(self):
+ left = DataFrame([
+ ['X', 'Y', 'C', 'a'],
+ ['W', 'Y', 'C', 'e'],
+ ['V', 'Q', 'A', 'h'],
+ ['V', 'R', 'D', 'i'],
+ ['X', 'Y', 'D', 'b'],
+ ['X', 'Y', 'A', 'c'],
+ ['W', 'Q', 'B', 'f'],
+ ['W', 'R', 'C', 'g'],
+ ['V', 'Y', 'C', 'j'],
+ ['X', 'Y', 'B', 'd']],
+ columns=['cola', 'colb', 'colc', 'tag'],
+ index=[3, 2, 0, 1, 7, 6, 4, 5, 9, 8])
+
+ right = DataFrame([
+ ['W', 'R', 'C', 0],
+ ['W', 'Q', 'B', 3],
+ ['W', 'Q', 'B', 8],
+ ['X', 'Y', 'A', 1],
+ ['X', 'Y', 'A', 4],
+ ['X', 'Y', 'B', 5],
+ ['X', 'Y', 'C', 6],
+ ['X', 'Y', 'C', 9],
+ ['X', 'Q', 'C', -6],
+ ['X', 'R', 'C', -9],
+ ['V', 'Y', 'C', 7],
+ ['V', 'R', 'D', 2],
+ ['V', 'R', 'D', -1],
+ ['V', 'Q', 'A', -3]],
+ columns=['col1', 'col2', 'col3', 'val'])
+
+ right.set_index(['col1', 'col2', 'col3'], inplace=True)
+ result = left.join(right, on=['cola', 'colb', 'colc'], how='left')
+
+ expected = DataFrame([
+ ['X', 'Y', 'C', 'a', 6],
+ ['X', 'Y', 'C', 'a', 9],
+ ['W', 'Y', 'C', 'e', nan],
+ ['V', 'Q', 'A', 'h', -3],
+ ['V', 'R', 'D', 'i', 2],
+ ['V', 'R', 'D', 'i', -1],
+ ['X', 'Y', 'D', 'b', nan],
+ ['X', 'Y', 'A', 'c', 1],
+ ['X', 'Y', 'A', 'c', 4],
+ ['W', 'Q', 'B', 'f', 3],
+ ['W', 'Q', 'B', 'f', 8],
+ ['W', 'R', 'C', 'g', 0],
+ ['V', 'Y', 'C', 'j', 7],
+ ['X', 'Y', 'B', 'd', 5]],
+ columns=['cola', 'colb', 'colc', 'tag', 'val'],
+ index=[3, 3, 2, 0, 1, 1, 7, 6, 6, 4, 4, 5, 9, 8])
+
+ tm.assert_frame_equal(result, expected)
+
+ def test_left_join_index_multi_match(self):
+ left = DataFrame([
+ ['c', 0],
+ ['b', 1],
+ ['a', 2],
+ ['b', 3]],
+ columns=['tag', 'val'],
+ index=[2, 0, 1, 3])
+
+ right = DataFrame([
+ ['a', 'v'],
+ ['c', 'w'],
+ ['c', 'x'],
+ ['d', 'y'],
+ ['a', 'z'],
+ ['c', 'r'],
+ ['e', 'q'],
+ ['c', 's']],
+ columns=['tag', 'char'])
+
+ right.set_index('tag', inplace=True)
+ result = left.join(right, on='tag', how='left')
+
+ expected = DataFrame([
+ ['c', 0, 'w'],
+ ['c', 0, 'x'],
+ ['c', 0, 'r'],
+ ['c', 0, 's'],
+ ['b', 1, nan],
+ ['a', 2, 'v'],
+ ['a', 2, 'z'],
+ ['b', 3, nan]],
+ columns=['tag', 'val', 'char'],
+ index=[2, 2, 2, 2, 0, 1, 1, 3])
+
+ tm.assert_frame_equal(result, expected)
+
def test_join_multi_dtypes(self):
# test with multi dtypes in the join index
| closes #5391
```
>>> ### left join with multiple matches - multi-index case
>>> left = DataFrame([
... ['X', 'Y', 'C', 'a'],
... ['W', 'Y', 'C', 'e'],
... ['V', 'Q', 'A', 'h'],
... ['V', 'R', 'D', 'i'],
... ['X', 'Y', 'D', 'b'],
... ['X', 'Y', 'A', 'c'],
... ['W', 'Q', 'B', 'f'],
... ['W', 'R', 'C', 'g'],
... ['V', 'Y', 'C', 'j'],
... ['X', 'Y', 'B', 'd']],
... columns=['cola', 'colb', 'colc', 'tag'],
... index=[3, 2, 0, 1, 7, 6, 4, 5, 9, 8])
>>>
... right = DataFrame([
... ['W', 'R', 'C', 0],
... ['W', 'Q', 'B', 3],
... ['W', 'Q', 'B', 8],
... ['X', 'Y', 'A', 1],
... ['X', 'Y', 'A', 4],
... ['X', 'Y', 'B', 5],
... ['X', 'Y', 'C', 6],
... ['X', 'Y', 'C', 9],
... ['X', 'Q', 'C', -6],
... ['X', 'R', 'C', -9],
... ['V', 'Y', 'C', 7],
... ['V', 'R', 'D', 2],
... ['V', 'R', 'D', -1],
... ['V', 'Q', 'A', -3]],
... columns=['col1', 'col2', 'col3', 'val'])
>>>
... right.set_index(['col1', 'col2', 'col3'], inplace=True)
>>> result = left.join(right, on=['cola', 'colb', 'colc'], how='left')
>>>
... expected = DataFrame([
... ['X', 'Y', 'C', 'a', 6],
... ['X', 'Y', 'C', 'a', 9],
... ['W', 'Y', 'C', 'e', nan],
... ['V', 'Q', 'A', 'h', -3],
... ['V', 'R', 'D', 'i', 2],
... ['V', 'R', 'D', 'i', -1],
... ['X', 'Y', 'D', 'b', nan],
... ['X', 'Y', 'A', 'c', 1],
... ['X', 'Y', 'A', 'c', 4],
... ['W', 'Q', 'B', 'f', 3],
... ['W', 'Q', 'B', 'f', 8],
... ['W', 'R', 'C', 'g', 0],
... ['V', 'Y', 'C', 'j', 7],
... ['X', 'Y', 'B', 'd', 5]],
... columns=['cola', 'colb', 'colc', 'tag', 'val'],
... index=[3, 3, 2, 0, 1, 1, 7, 6, 6, 4, 4, 5, 9, 8])
>>>
... tm.assert_frame_equal(result, expected)
>>> print(left, right, result, sep='\n')
cola colb colc tag
3 X Y C a
2 W Y C e
0 V Q A h
1 V R D i
7 X Y D b
6 X Y A c
4 W Q B f
5 W R C g
9 V Y C j
8 X Y B d
val
col1 col2 col3
W R C 0
Q B 3
B 8
X Y A 1
A 4
B 5
C 6
C 9
Q C -6
R C -9
V Y C 7
R D 2
D -1
Q A -3
cola colb colc tag val
3 X Y C a 6
3 X Y C a 9
2 W Y C e NaN
0 V Q A h -3
1 V R D i 2
1 V R D i -1
7 X Y D b NaN
6 X Y A c 1
6 X Y A c 4
4 W Q B f 3
4 W Q B f 8
5 W R C g 0
9 V Y C j 7
8 X Y B d 5
>>> ### left join with multiple matches - single index case
>>> left = DataFrame([
... ['c', 0],
... ['b', 1],
... ['a', 2],
... ['b', 3]],
... columns=['tag', 'val'],
... index=[2, 0, 1, 3])
>>>
... right = DataFrame([
... ['a', 'v'],
... ['c', 'w'],
... ['c', 'x'],
... ['d', 'y'],
... ['a', 'z'],
... ['c', 'r'],
... ['e', 'q'],
... ['c', 's']],
... columns=['tag', 'char'])
>>>
... right.set_index('tag', inplace=True)
>>> result = left.join(right, on='tag', how='left')
>>>
... expected = DataFrame([
... ['c', 0, 'w'],
... ['c', 0, 'x'],
... ['c', 0, 'r'],
... ['c', 0, 's'],
... ['b', 1, nan],
... ['a', 2, 'v'],
... ['a', 2, 'z'],
... ['b', 3, nan]],
... columns=['tag', 'val', 'char'],
... index=[2, 2, 2, 2, 0, 1, 1, 3])
>>>
... tm.assert_frame_equal(result, expected)
>>> print(left, right, result, sep='\n')
tag val
2 c 0
0 b 1
1 a 2
3 b 3
char
tag
a v
c w
c x
d y
a z
c r
e q
c s
tag val char
2 c 0 w
2 c 0 x
2 c 0 r
2 c 0 s
0 b 1 NaN
1 a 2 v
1 a 2 z
3 b 3 NaN
```
This closes https://github.com/pydata/pandas/issues/5391
by returning all matches when doing a left join on index, in both the single-index and multi-index cases. It also preserves the index order of the calling (left) DataFrame (as it used to), though when there are multiple matches the index labels repeat, so the resulting index is no longer unique.
The added test cases should be self-explanatory.
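A minimal sketch of the behavior this PR describes (the frames below are illustrative, not taken from the test suite): joining on a column against a non-unique right index emits one output row per match, repeating the left index label.

```python
import pandas as pd

# left frame with a non-trivial index; 'c' will match two rows on the right
left = pd.DataFrame({'tag': ['c', 'b'], 'val': [0, 1]}, index=[5, 9])
right = pd.DataFrame({'char': ['w', 'x']},
                     index=pd.Index(['c', 'c'], name='tag'))

result = left.join(right, on='tag', how='left')
# index label 5 repeats once per match; 'b' has no match and gets NaN
print(result)
```

Left row order is preserved, but the repeated label 5 means the output index is no longer unique.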
Thank you,
| https://api.github.com/repos/pandas-dev/pandas/pulls/7853 | 2014-07-27T00:05:31Z | 2014-08-21T22:10:45Z | null | 2014-09-04T00:24:19Z |
ENH: tz_localize(None) allows to reset tz | diff --git a/doc/source/timeseries.rst b/doc/source/timeseries.rst
index c672a3d030bb9..1bc9cca17aeec 100644
--- a/doc/source/timeseries.rst
+++ b/doc/source/timeseries.rst
@@ -1454,6 +1454,19 @@ to determine the right offset.
rng_hourly_eastern = rng_hourly.tz_localize('US/Eastern', infer_dst=True)
rng_hourly_eastern.values
+
+To remove the timezone from a tz-aware ``DatetimeIndex``, use ``tz_localize(None)`` or ``tz_convert(None)``. ``tz_localize(None)`` removes the timezone while keeping the local time representation; ``tz_convert(None)`` removes the timezone after converting to UTC time.
+
+.. ipython:: python
+
+ didx = DatetimeIndex(start='2014-08-01 09:00', freq='H', periods=10, tz='US/Eastern')
+ didx
+ didx.tz_localize(None)
+ didx.tz_convert(None)
+
+    # tz_convert(None) is identical to tz_convert('UTC').tz_localize(None)
+    didx.tz_convert('UTC').tz_localize(None)
+
.. _timeseries.timedeltas:
Time Deltas
diff --git a/doc/source/v0.15.0.txt b/doc/source/v0.15.0.txt
index ef2b91d044d86..31947e3107708 100644
--- a/doc/source/v0.15.0.txt
+++ b/doc/source/v0.15.0.txt
@@ -142,6 +142,18 @@ API changes
In [3]: idx.isin(['a', 'c', 'e'], level=1)
Out[3]: array([ True, False, True, True, False, True], dtype=bool)
+- ``tz_localize(None)`` for tz-aware ``Timestamp`` and ``DatetimeIndex`` now removes the timezone, keeping local time;
+  previously this raised an ``Exception`` or ``TypeError`` (:issue:`7812`)
+
+ .. ipython:: python
+
+ ts = Timestamp('2014-08-01 09:00', tz='US/Eastern')
+ ts
+ ts.tz_localize(None)
+
+ didx = DatetimeIndex(start='2014-08-01 09:00', freq='H', periods=10, tz='US/Eastern')
+ didx
+ didx.tz_localize(None)
.. _whatsnew_0150.cat:
diff --git a/pandas/tseries/index.py b/pandas/tseries/index.py
index 5f7c93d38653a..7ad913e8f5671 100644
--- a/pandas/tseries/index.py
+++ b/pandas/tseries/index.py
@@ -1618,7 +1618,14 @@ def _view_like(self, ndarray):
def tz_convert(self, tz):
"""
- Convert DatetimeIndex from one time zone to another (using pytz/dateutil)
+ Convert tz-aware DatetimeIndex from one time zone to another (using pytz/dateutil)
+
+ Parameters
+ ----------
+ tz : string, pytz.timezone, dateutil.tz.tzfile or None
+ Time zone for time. Corresponding timestamps would be converted to
+ time zone of the TimeSeries.
+ None will remove timezone holding UTC time.
Returns
-------
@@ -1636,13 +1643,15 @@ def tz_convert(self, tz):
def tz_localize(self, tz, infer_dst=False):
"""
- Localize tz-naive DatetimeIndex to given time zone (using pytz/dateutil)
+ Localize tz-naive DatetimeIndex to given time zone (using pytz/dateutil),
+ or remove timezone from tz-aware DatetimeIndex
Parameters
----------
- tz : string or pytz.timezone or dateutil.tz.tzfile
+ tz : string, pytz.timezone, dateutil.tz.tzfile or None
Time zone for time. Corresponding timestamps would be converted to
- time zone of the TimeSeries
+ time zone of the TimeSeries.
+ None will remove timezone holding local time.
infer_dst : boolean, default False
Attempt to infer fall dst-transition hours based on order
@@ -1651,13 +1660,15 @@ def tz_localize(self, tz, infer_dst=False):
localized : DatetimeIndex
"""
if self.tz is not None:
- raise TypeError("Already tz-aware, use tz_convert to convert.")
- tz = tslib.maybe_get_tz(tz)
-
- # Convert to UTC
- new_dates = tslib.tz_localize_to_utc(self.asi8, tz, infer_dst=infer_dst)
+ if tz is None:
+ new_dates = tslib.tz_convert(self.asi8, 'UTC', self.tz)
+ else:
+ raise TypeError("Already tz-aware, use tz_convert to convert.")
+ else:
+ tz = tslib.maybe_get_tz(tz)
+ # Convert to UTC
+ new_dates = tslib.tz_localize_to_utc(self.asi8, tz, infer_dst=infer_dst)
new_dates = new_dates.view(_NS_DTYPE)
-
return self._simple_new(new_dates, self.name, self.offset, tz)
def indexer_at_time(self, time, asof=False):
diff --git a/pandas/tseries/tests/test_timezones.py b/pandas/tseries/tests/test_timezones.py
index ab969f13289ac..bcfb2357b668d 100644
--- a/pandas/tseries/tests/test_timezones.py
+++ b/pandas/tseries/tests/test_timezones.py
@@ -863,6 +863,7 @@ def test_cache_keys_are_distinct_for_pytz_vs_dateutil(self):
class TestTimeZones(tm.TestCase):
_multiprocess_can_split_ = True
+ timezones = ['UTC', 'Asia/Tokyo', 'US/Eastern', 'dateutil/US/Pacific']
def setUp(self):
tm._skip_if_no_pytz()
@@ -882,6 +883,24 @@ def test_tz_localize_naive(self):
self.assertTrue(conv.equals(exp))
+ def test_tz_localize_roundtrip(self):
+ for tz in self.timezones:
+ idx1 = date_range(start='2014-01-01', end='2014-12-31', freq='M')
+ idx2 = date_range(start='2014-01-01', end='2014-12-31', freq='D')
+ idx3 = date_range(start='2014-01-01', end='2014-03-01', freq='H')
+ idx4 = date_range(start='2014-08-01', end='2014-10-31', freq='T')
+ for idx in [idx1, idx2, idx3, idx4]:
+ localized = idx.tz_localize(tz)
+ expected = date_range(start=idx[0], end=idx[-1], freq=idx.freq, tz=tz)
+ tm.assert_index_equal(localized, expected)
+
+ with tm.assertRaises(TypeError):
+ localized.tz_localize(tz)
+
+ reset = localized.tz_localize(None)
+ tm.assert_index_equal(reset, idx)
+ self.assertTrue(reset.tzinfo is None)
+
def test_series_frame_tz_localize(self):
rng = date_range('1/1/2011', periods=100, freq='H')
@@ -930,6 +949,29 @@ def test_series_frame_tz_convert(self):
ts = Series(1, index=rng)
tm.assertRaisesRegexp(TypeError, "Cannot convert tz-naive", ts.tz_convert, 'US/Eastern')
+ def test_tz_convert_roundtrip(self):
+ for tz in self.timezones:
+ idx1 = date_range(start='2014-01-01', end='2014-12-31', freq='M', tz='UTC')
+ exp1 = date_range(start='2014-01-01', end='2014-12-31', freq='M')
+
+ idx2 = date_range(start='2014-01-01', end='2014-12-31', freq='D', tz='UTC')
+ exp2 = date_range(start='2014-01-01', end='2014-12-31', freq='D')
+
+ idx3 = date_range(start='2014-01-01', end='2014-03-01', freq='H', tz='UTC')
+ exp3 = date_range(start='2014-01-01', end='2014-03-01', freq='H')
+
+ idx4 = date_range(start='2014-08-01', end='2014-10-31', freq='T', tz='UTC')
+ exp4 = date_range(start='2014-08-01', end='2014-10-31', freq='T')
+
+
+ for idx, expected in [(idx1, exp1), (idx2, exp2), (idx3, exp3), (idx4, exp4)]:
+ converted = idx.tz_convert(tz)
+ reset = converted.tz_convert(None)
+ tm.assert_index_equal(reset, expected)
+ self.assertTrue(reset.tzinfo is None)
+ tm.assert_index_equal(reset, converted.tz_convert('UTC').tz_localize(None))
+
+
def test_join_utc_convert(self):
rng = date_range('1/1/2011', periods=100, freq='H', tz='utc')
diff --git a/pandas/tseries/tests/test_tslib.py b/pandas/tseries/tests/test_tslib.py
index 79eaa97d50322..563ab74ad975a 100644
--- a/pandas/tseries/tests/test_tslib.py
+++ b/pandas/tseries/tests/test_tslib.py
@@ -97,6 +97,33 @@ def test_tz(self):
self.assertEqual(conv.nanosecond, 5)
self.assertEqual(conv.hour, 19)
+ def test_tz_localize_roundtrip(self):
+ for tz in ['UTC', 'Asia/Tokyo', 'US/Eastern', 'dateutil/US/Pacific']:
+ for t in ['2014-02-01 09:00', '2014-07-08 09:00', '2014-11-01 17:00',
+ '2014-11-05 00:00']:
+ ts = Timestamp(t)
+ localized = ts.tz_localize(tz)
+ self.assertEqual(localized, Timestamp(t, tz=tz))
+
+ with tm.assertRaises(Exception):
+ localized.tz_localize(tz)
+
+ reset = localized.tz_localize(None)
+ self.assertEqual(reset, ts)
+ self.assertTrue(reset.tzinfo is None)
+
+ def test_tz_convert_roundtrip(self):
+ for tz in ['UTC', 'Asia/Tokyo', 'US/Eastern', 'dateutil/US/Pacific']:
+ for t in ['2014-02-01 09:00', '2014-07-08 09:00', '2014-11-01 17:00',
+ '2014-11-05 00:00']:
+ ts = Timestamp(t, tz='UTC')
+ converted = ts.tz_convert(tz)
+
+ reset = converted.tz_convert(None)
+ self.assertEqual(reset, Timestamp(t))
+ self.assertTrue(reset.tzinfo is None)
+ self.assertEqual(reset, converted.tz_convert('UTC').tz_localize(None))
+
def test_barely_oob_dts(self):
one_us = np.timedelta64(1).astype('timedelta64[us]')
diff --git a/pandas/tslib.pyx b/pandas/tslib.pyx
index b8342baae16bd..3b1a969e17a18 100644
--- a/pandas/tslib.pyx
+++ b/pandas/tslib.pyx
@@ -373,11 +373,14 @@ class Timestamp(_Timestamp):
def tz_localize(self, tz, infer_dst=False):
"""
- Convert naive Timestamp to local time zone
+ Convert naive Timestamp to local time zone, or remove
+ timezone from tz-aware Timestamp.
Parameters
----------
- tz : pytz.timezone or dateutil.tz.tzfile
+ tz : string, pytz.timezone, dateutil.tz.tzfile or None
+ Time zone for time which Timestamp will be converted to.
+ None will remove timezone holding local time.
infer_dst : boolean, default False
Attempt to infer fall dst-transition hours based on order
@@ -392,8 +395,13 @@ class Timestamp(_Timestamp):
infer_dst=infer_dst)[0]
return Timestamp(value, tz=tz)
else:
- raise Exception('Cannot localize tz-aware Timestamp, use '
- 'tz_convert for conversions')
+ if tz is None:
+ # reset tz
+ value = tz_convert_single(self.value, 'UTC', self.tz)
+ return Timestamp(value, tz=None)
+ else:
+ raise Exception('Cannot localize tz-aware Timestamp, use '
+ 'tz_convert for conversions')
def tz_convert(self, tz):
"""
@@ -402,7 +410,9 @@ class Timestamp(_Timestamp):
Parameters
----------
- tz : pytz.timezone or dateutil.tz.tzfile
+ tz : string, pytz.timezone, dateutil.tz.tzfile or None
+ Time zone for time which Timestamp will be converted to.
+ None will remove timezone holding UTC time.
Returns
-------
| Closes #7812. Allow `tz_localize(None)` for tz-aware `Timestamp` and `DatetimeIndex` to reset tz.
CC @rockg, @nehalecky
```
t = pd.Timestamp('2014-01-01 09:00', tz='Asia/Tokyo')
# tz_localize(None) removes the tz, keeping the local time representation
t.tz_localize(None)
#2014-01-01 09:00:00
# tz_convert(None) removes the tz after converting to UTC (no change from this PR)
t.tz_convert(None)
#2014-01-01 00:00:00
idx = pd.date_range(start='2011-01-01 09:00', freq='H', periods=10, tz='Asia/Tokyo')
idx.tz_localize(None)
# <class 'pandas.tseries.index.DatetimeIndex'>
# [2011-01-01 09:00:00, ..., 2011-01-01 18:00:00]
# Length: 10, Freq: H, Timezone: None
# (no changes by this PR)
idx.tz_convert(None)
# <class 'pandas.tseries.index.DatetimeIndex'>
# [2011-01-01 00:00:00, ..., 2011-01-01 09:00:00]
# Length: 10, Freq: H, Timezone: None
```
| https://api.github.com/repos/pandas-dev/pandas/pulls/7852 | 2014-07-26T14:52:42Z | 2014-08-05T14:49:29Z | 2014-08-05T14:49:29Z | 2014-08-07T22:13:11Z |
BUG: fix greedy date parsing in read_html | diff --git a/doc/source/v0.15.0.txt b/doc/source/v0.15.0.txt
index a30400322716c..aa6f1ce28a90d 100644
--- a/doc/source/v0.15.0.txt
+++ b/doc/source/v0.15.0.txt
@@ -106,6 +106,9 @@ API changes
See the the caveats in the documentation: http://pandas.pydata.org/pandas-docs/stable/indexing.html#indexing-view-versus-copy
+- The ``infer_types`` argument to :func:`~pandas.io.html.read_html` now has no
+ effect (:issue:`7762`, :issue:`7032`).
+
.. _whatsnew_0150.cat:
@@ -320,6 +323,8 @@ Bug Fixes
+- Bug in ``read_html`` where the ``infer_types`` argument forced coercion of
+ date-likes incorrectly (:issue:`7762`, :issue:`7032`).
diff --git a/pandas/io/html.py b/pandas/io/html.py
index 5ea6ca36ac764..d9c980b5e88db 100644
--- a/pandas/io/html.py
+++ b/pandas/io/html.py
@@ -607,11 +607,6 @@ def _data_to_frame(data, header, index_col, skiprows, infer_types,
parse_dates=parse_dates, tupleize_cols=tupleize_cols,
thousands=thousands)
df = tp.read()
-
- if infer_types: # TODO: rm this code so infer_types has no effect in 0.14
- df = df.convert_objects(convert_dates='coerce')
- else:
- df = df.applymap(text_type)
return df
@@ -757,9 +752,8 @@ def read_html(io, match='.+', flavor=None, header=None, index_col=None,
that sequence. Note that a single element sequence means 'skip the nth
row' whereas an integer means 'skip n rows'.
- infer_types : bool, optional
- This option is deprecated in 0.13, an will have no effect in 0.14. It
- defaults to ``True``.
+ infer_types : None, optional
+ This has no effect since 0.15.0. It is here for backwards compatibility.
attrs : dict or None, optional
This is a dictionary of attributes that you can pass to use to identify
@@ -838,9 +832,7 @@ def read_html(io, match='.+', flavor=None, header=None, index_col=None,
pandas.io.parsers.read_csv
"""
if infer_types is not None:
- warnings.warn("infer_types will have no effect in 0.14", FutureWarning)
- else:
- infer_types = True # TODO: remove effect of this in 0.14
+ warnings.warn("infer_types has no effect since 0.15", FutureWarning)
# Type check here. We don't want to parse only to fail because of an
# invalid value of an integer skiprows.
diff --git a/pandas/io/tests/data/wikipedia_states.html b/pandas/io/tests/data/wikipedia_states.html
new file mode 100644
index 0000000000000..6765954dd13d1
--- /dev/null
+++ b/pandas/io/tests/data/wikipedia_states.html
@@ -0,0 +1,1757 @@
+<!DOCTYPE html>
+<html lang="en" dir="ltr" class="client-nojs">
+<head>
+<meta charset="UTF-8" />
+<title>List of U.S. states and territories by area - Wikipedia, the free encyclopedia</title>
+<meta name="generator" content="MediaWiki 1.24wmf14" />
+<link rel="alternate" href="android-app://org.wikipedia/http/en.m.wikipedia.org/wiki/List_of_U.S._states_and_territories_by_area" />
+<link rel="alternate" type="application/x-wiki" title="Edit this page" href="/w/index.php?title=List_of_U.S._states_and_territories_by_area&action=edit" />
+<link rel="edit" title="Edit this page" href="/w/index.php?title=List_of_U.S._states_and_territories_by_area&action=edit" />
+<link rel="apple-touch-icon" href="//bits.wikimedia.org/apple-touch/wikipedia.png" />
+<link rel="shortcut icon" href="//bits.wikimedia.org/favicon/wikipedia.ico" />
+<link rel="search" type="application/opensearchdescription+xml" href="/w/opensearch_desc.php" title="Wikipedia (en)" />
+<link rel="EditURI" type="application/rsd+xml" href="//en.wikipedia.org/w/api.php?action=rsd" />
+<link rel="copyright" href="//creativecommons.org/licenses/by-sa/3.0/" />
+<link rel="alternate" type="application/atom+xml" title="Wikipedia Atom feed" href="/w/index.php?title=Special:RecentChanges&feed=atom" />
+<link rel="canonical" href="http://en.wikipedia.org/wiki/List_of_U.S._states_and_territories_by_area" />
+<link rel="stylesheet" href="//bits.wikimedia.org/en.wikipedia.org/load.php?debug=false&lang=en&modules=ext.gadget.DRN-wizard%2CReferenceTooltips%2Ccharinsert%2CrefToolbar%2Cteahouse%7Cext.rtlcite%2Cwikihiero%7Cext.uls.nojs%7Cext.visualEditor.viewPageTarget.noscript%7Cmediawiki.legacy.commonPrint%2Cshared%7Cmediawiki.skinning.interface%7Cmediawiki.ui.button%7Cskins.vector.styles%7Cwikibase.client.init&only=styles&skin=vector&*" />
+<meta name="ResourceLoaderDynamicStyles" content="" />
+<link rel="stylesheet" href="//bits.wikimedia.org/en.wikipedia.org/load.php?debug=false&lang=en&modules=site&only=styles&skin=vector&*" />
+<style>a:lang(ar),a:lang(kk-arab),a:lang(mzn),a:lang(ps),a:lang(ur){text-decoration:none}
+/* cache key: enwiki:resourceloader:filter:minify-css:7:3904d24a08aa08f6a68dc338f9be277e */</style>
+<script src="//bits.wikimedia.org/en.wikipedia.org/load.php?debug=false&lang=en&modules=startup&only=scripts&skin=vector&*"></script>
+<script>if(window.mw){
+mw.config.set({"wgCanonicalNamespace":"","wgCanonicalSpecialPageName":false,"wgNamespaceNumber":0,"wgPageName":"List_of_U.S._states_and_territories_by_area","wgTitle":"List of U.S. states and territories by area","wgCurRevisionId":614847271,"wgRevisionId":614847271,"wgArticleId":87513,"wgIsArticle":true,"wgIsRedirect":false,"wgAction":"view","wgUserName":null,"wgUserGroups":["*"],"wgCategories":["Geography of the United States","Lists of states of the United States"],"wgBreakFrames":false,"wgPageContentLanguage":"en","wgPageContentModel":"wikitext","wgSeparatorTransformTable":["",""],"wgDigitTransformTable":["",""],"wgDefaultDateFormat":"dmy","wgMonthNames":["","January","February","March","April","May","June","July","August","September","October","November","December"],"wgMonthNamesShort":["","Jan","Feb","Mar","Apr","May","Jun","Jul","Aug","Sep","Oct","Nov","Dec"],"wgRelevantPageName":"List_of_U.S._states_and_territories_by_area","wgIsProbablyEditable":true,"wgRestrictionEdit":[],"wgRestrictionMove":[],"wgWikiEditorEnabledModules":{"toolbar":true,"dialogs":true,"hidesig":true,"preview":false,"previewDialog":false,"publish":false},"wgBetaFeaturesFeatures":[],"wgMediaViewerOnClick":true,"wgVisualEditor":{"isPageWatched":false,"magnifyClipIconURL":"//bits.wikimedia.org/static-1.24wmf14/skins/common/images/magnify-clip.png","pageLanguageCode":"en","pageLanguageDir":"ltr","svgMaxSize":2048,"namespacesWithSubpages":{"6":0,"8":0,"1":true,"2":true,"3":true,"4":true,"5":true,"7":true,"9":true,"10":true,"11":true,"12":true,"13":true,"14":true,"15":true,"100":true,"101":true,"102":true,"103":true,"104":true,"105":true,"106":true,"107":true,"108":true,"109":true,"110":true,"111":true,"447":true,"2600":false,"828":true,"829":true}},"wikilove-recipient":"","wikilove-anon":0,"wgGuidedTourHelpGuiderUrl":"Help:Guided 
tours/guider","wgULSAcceptLanguageList":["en-us"],"wgULSCurrentAutonym":"English","wgFlaggedRevsParams":{"tags":{"status":{"levels":1,"quality":2,"pristine":3}}},"wgStableRevisionId":null,"wgCategoryTreePageCategoryOptions":"{\"mode\":0,\"hideprefix\":20,\"showcount\":true,\"namespaces\":false}","wgNoticeProject":"wikipedia","wgWikibaseItemId":"Q150340"});
+}</script><script>if(window.mw){
+mw.loader.implement("user.options",function($,jQuery){mw.user.options.set({"ccmeonemails":0,"cols":80,"date":"default","diffonly":0,"disablemail":0,"editfont":"default","editondblclick":0,"editsectiononrightclick":0,"enotifminoredits":0,"enotifrevealaddr":0,"enotifusertalkpages":1,"enotifwatchlistpages":0,"extendwatchlist":0,"fancysig":0,"forceeditsummary":0,"gender":"unknown","hideminor":0,"hidepatrolled":0,"imagesize":2,"math":0,"minordefault":0,"newpageshidepatrolled":0,"nickname":"","norollbackdiff":0,"numberheadings":0,"previewonfirst":0,"previewontop":1,"rcdays":7,"rclimit":50,"rows":25,"showhiddencats":false,"shownumberswatching":1,"showtoolbar":1,"skin":"vector","stubthreshold":0,"thumbsize":4,"underline":2,"uselivepreview":0,"usenewrc":0,"watchcreations":1,"watchdefault":0,"watchdeletion":0,"watchlistdays":3,"watchlisthideanons":0,"watchlisthidebots":0,"watchlisthideliu":0,"watchlisthideminor":0,"watchlisthideown":0,"watchlisthidepatrolled":0,"watchmoves":0,"wllimit":250,
+"useeditwarning":1,"prefershttps":1,"flaggedrevssimpleui":1,"flaggedrevsstable":0,"flaggedrevseditdiffs":true,"flaggedrevsviewdiffs":false,"usebetatoolbar":1,"usebetatoolbar-cgd":1,"multimediaviewer-enable":true,"visualeditor-enable":0,"visualeditor-betatempdisable":0,"visualeditor-enable-experimental":0,"visualeditor-enable-language":0,"visualeditor-hidebetawelcome":0,"wikilove-enabled":1,"mathJax":false,"echo-subscriptions-web-page-review":true,"echo-subscriptions-email-page-review":false,"ep_showtoplink":false,"ep_bulkdelorgs":false,"ep_bulkdelcourses":true,"ep_showdyk":true,"echo-subscriptions-web-education-program":true,"echo-subscriptions-email-education-program":false,"echo-notify-show-link":true,"echo-show-alert":true,"echo-email-frequency":0,"echo-email-format":"html","echo-subscriptions-email-system":true,"echo-subscriptions-web-system":true,"echo-subscriptions-email-user-rights":true,"echo-subscriptions-web-user-rights":true,"echo-subscriptions-email-other":false,
+"echo-subscriptions-web-other":true,"echo-subscriptions-email-edit-user-talk":false,"echo-subscriptions-web-edit-user-talk":true,"echo-subscriptions-email-reverted":false,"echo-subscriptions-web-reverted":true,"echo-subscriptions-email-article-linked":false,"echo-subscriptions-web-article-linked":false,"echo-subscriptions-email-mention":false,"echo-subscriptions-web-mention":true,"echo-subscriptions-web-edit-thank":true,"echo-subscriptions-email-edit-thank":false,"echo-subscriptions-web-flow-discussion":true,"echo-subscriptions-email-flow-discussion":false,"gettingstarted-task-toolbar-show-intro":true,"uls-preferences":"","language":"en","variant-gan":"gan","variant-iu":"iu","variant-kk":"kk","variant-ku":"ku","variant-shi":"shi","variant-sr":"sr","variant-tg":"tg","variant-uz":"uz","variant-zh":"zh","searchNs0":true,"searchNs1":false,"searchNs2":false,"searchNs3":false,"searchNs4":false,"searchNs5":false,"searchNs6":false,"searchNs7":false,"searchNs8":false,"searchNs9":false,
+"searchNs10":false,"searchNs11":false,"searchNs12":false,"searchNs13":false,"searchNs14":false,"searchNs15":false,"searchNs100":false,"searchNs101":false,"searchNs108":false,"searchNs109":false,"searchNs118":false,"searchNs119":false,"searchNs446":false,"searchNs447":false,"searchNs710":false,"searchNs711":false,"searchNs828":false,"searchNs829":false,"searchNs2600":false,"gadget-teahouse":1,"gadget-ReferenceTooltips":1,"gadget-DRN-wizard":1,"gadget-charinsert":1,"gadget-refToolbar":1,"gadget-mySandbox":1,"variant":"en"});},{},{});mw.loader.implement("user.tokens",function($,jQuery){mw.user.tokens.set({"editToken":"+\\","patrolToken":false,"watchToken":false});},{},{});
+/* cache key: enwiki:resourceloader:filter:minify-js:7:ffff827f827051d73171f6b2dc70d368 */
+}</script>
+<script>if(window.mw){
+mw.loader.load(["mediawiki.page.startup","mediawiki.legacy.wikibits","mediawiki.legacy.ajax","ext.centralauth.centralautologin","mmv.head","ext.visualEditor.viewPageTarget.init","ext.uls.init","ext.uls.interface","ext.centralNotice.bannerController","skins.vector.js"]);
+}</script>
+<link rel="dns-prefetch" href="//meta.wikimedia.org" />
+<!--[if lt IE 7]><style type="text/css">body{behavior:url("/w/static-1.24wmf14/skins/Vector/csshover.min.htc")}</style><![endif]-->
+</head>
+<body class="mediawiki ltr sitedir-ltr ns-0 ns-subject page-List_of_U_S_states_and_territories_by_area skin-vector action-view vector-animateLayout">
+ <div id="mw-page-base" class="noprint"></div>
+ <div id="mw-head-base" class="noprint"></div>
+ <div id="content" class="mw-body" role="main">
+ <a id="top"></a>
+
+ <div id="mw-js-message" style="display:none;"></div>
+ <div id="siteNotice"><!-- CentralNotice --></div>
+ <h1 id="firstHeading" class="firstHeading" lang="en"><span dir="auto">List of U.S. states and territories by area</span></h1>
+ <div id="bodyContent" class="mw-body-content">
+ <div id="siteSub">From Wikipedia, the free encyclopedia</div>
+ <div id="contentSub"></div>
+ <div id="jump-to-nav" class="mw-jump">
+ Jump to: <a href="#mw-navigation">navigation</a>, <a href="#p-search">search</a>
+ </div>
+ <div id="mw-content-text" lang="en" dir="ltr" class="mw-content-ltr"><div class="thumb tright">
+<div class="thumbinner" style="width:222px;"><a href="/wiki/File:Image_shows_the_50_states_by_area-_Check_the_legend_for_more_details-_2014-06-29_05-36.jpg" class="image"><img alt="" src="//upload.wikimedia.org/wikipedia/commons/thumb/2/28/Image_shows_the_50_states_by_area-_Check_the_legend_for_more_details-_2014-06-29_05-36.jpg/220px-Image_shows_the_50_states_by_area-_Check_the_legend_for_more_details-_2014-06-29_05-36.jpg" width="220" height="126" class="thumbimage" srcset="//upload.wikimedia.org/wikipedia/commons/thumb/2/28/Image_shows_the_50_states_by_area-_Check_the_legend_for_more_details-_2014-06-29_05-36.jpg/330px-Image_shows_the_50_states_by_area-_Check_the_legend_for_more_details-_2014-06-29_05-36.jpg 1.5x, //upload.wikimedia.org/wikipedia/commons/thumb/2/28/Image_shows_the_50_states_by_area-_Check_the_legend_for_more_details-_2014-06-29_05-36.jpg/440px-Image_shows_the_50_states_by_area-_Check_the_legend_for_more_details-_2014-06-29_05-36.jpg 2x" data-file-width="1015" data-file-height="580" /></a>
+<div class="thumbcaption">
+<div class="magnify"><a href="/wiki/File:Image_shows_the_50_states_by_area-_Check_the_legend_for_more_details-_2014-06-29_05-36.jpg" class="internal" title="Enlarge"><img src="//bits.wikimedia.org/static-1.24wmf14/skins/common/images/magnify-clip.png" width="15" height="11" alt="" /></a></div>
+Image shows the 50 states by area. Check the legend for more details.</div>
+</div>
+</div>
+<p>This is a complete <b>list of the <a href="/wiki/U.S._state" title="U.S. state">states of the United States</a> and its major <a href="/wiki/Territories_of_the_United_States" title="Territories of the United States">territories</a></b> ordered by <i>total area</i>, <i>land area</i>, and <i>water area</i>. The water area figures include inland, coastal, <a href="/wiki/Great_Lakes" title="Great Lakes">Great Lakes</a>, and <a href="/wiki/Territorial_waters" title="Territorial waters">territorial waters</a>. Glaciers and intermittent water features are counted as land area.<sup id="cite_ref-1" class="reference"><a href="#cite_note-1"><span>[</span>1<span>]</span></a></sup></p>
+<p></p>
+<div id="toc" class="toc">
+<div id="toctitle">
+<h2>Contents</h2>
+</div>
+<ul>
+<li class="toclevel-1 tocsection-1"><a href="#Area_by_state.2Fterritory"><span class="tocnumber">1</span> <span class="toctext">Area by state/territory</span></a></li>
+<li class="toclevel-1 tocsection-2"><a href="#Area_by_division"><span class="tocnumber">2</span> <span class="toctext">Area by division</span></a></li>
+<li class="toclevel-1 tocsection-3"><a href="#Area_by_region"><span class="tocnumber">3</span> <span class="toctext">Area by region</span></a></li>
+<li class="toclevel-1 tocsection-4"><a href="#See_also"><span class="tocnumber">4</span> <span class="toctext">See also</span></a></li>
+<li class="toclevel-1 tocsection-5"><a href="#Notes"><span class="tocnumber">5</span> <span class="toctext">Notes</span></a></li>
+<li class="toclevel-1 tocsection-6"><a href="#References"><span class="tocnumber">6</span> <span class="toctext">References</span></a></li>
+<li class="toclevel-1 tocsection-7"><a href="#External_links"><span class="tocnumber">7</span> <span class="toctext">External links</span></a></li>
+</ul>
+</div>
+<p></p>
+<div style="clear:both;"></div>
+<h2><span class="mw-headline" id="Area_by_state.2Fterritory">Area by state/territory</span><span class="mw-editsection"><span class="mw-editsection-bracket">[</span><a href="/w/index.php?title=List_of_U.S._states_and_territories_by_area&action=edit&section=1" title="Edit section: Area by state/territory">edit</a><span class="mw-editsection-bracket">]</span></span></h2>
+<table class="wikitable sortable">
+<tr>
+<th></th>
+<th colspan="3">Total area<sup id="cite_ref-2010census_2-0" class="reference"><a href="#cite_note-2010census-2"><span>[</span>2<span>]</span></a></sup></th>
+<th colspan="4">Land area<sup id="cite_ref-2010census_2-1" class="reference"><a href="#cite_note-2010census-2"><span>[</span>2<span>]</span></a></sup></th>
+<th colspan="4">Water<sup id="cite_ref-2010census_2-2" class="reference"><a href="#cite_note-2010census-2"><span>[</span>2<span>]</span></a></sup></th>
+</tr>
+<tr>
+<th>State/territory</th>
+<th>Rank</th>
+<th>sq mi</th>
+<th>km²</th>
+<th>Rank</th>
+<th>sq mi</th>
+<th>km²</th>
+<th> % land</th>
+<th>sq mi</th>
+<th>km²</th>
+<th> % water</th>
+</tr>
+<tr valign="top">
+<td><span class="flagicon"><img alt="" src="//upload.wikimedia.org/wikipedia/commons/thumb/e/e6/Flag_of_Alaska.svg/21px-Flag_of_Alaska.svg.png" width="21" height="15" class="thumbborder" srcset="//upload.wikimedia.org/wikipedia/commons/thumb/e/e6/Flag_of_Alaska.svg/33px-Flag_of_Alaska.svg.png 1.5x, //upload.wikimedia.org/wikipedia/commons/thumb/e/e6/Flag_of_Alaska.svg/43px-Flag_of_Alaska.svg.png 2x" data-file-width="1416" data-file-height="1000" /> </span><a href="/wiki/Alaska" title="Alaska">Alaska</a></td>
+<td align="center"><span style="" class="sortkey">!C </span>1</td>
+<td align="right">665,384.04</td>
+<td align="right">1,723,337</td>
+<td align="center"><span style="" class="sortkey">!C </span>1</td>
+<td align="right">570,640.95</td>
+<td align="right">1,477,953</td>
+<td align="right"><span style="display:none" class="sortkey">7001857600000000000</span>85.76%</td>
+<td align="right">94,743.10</td>
+<td align="right">245,384</td>
+<td align="right"><span style="display:none" class="sortkey">7001142400000000000</span>14.24%</td>
+</tr>
+<tr valign="top">
+<td><span class="flagicon"><img alt="" src="//upload.wikimedia.org/wikipedia/commons/thumb/f/f7/Flag_of_Texas.svg/23px-Flag_of_Texas.svg.png" width="23" height="15" class="thumbborder" srcset="//upload.wikimedia.org/wikipedia/commons/thumb/f/f7/Flag_of_Texas.svg/35px-Flag_of_Texas.svg.png 1.5x, //upload.wikimedia.org/wikipedia/commons/thumb/f/f7/Flag_of_Texas.svg/45px-Flag_of_Texas.svg.png 2x" data-file-width="1080" data-file-height="720" /> </span><a href="/wiki/Texas" title="Texas">Texas</a></td>
+<td align="center"><span style="" class="sortkey">!B9993068528194 </span>2</td>
+<td align="right">268,596.46</td>
+<td align="right">695,662</td>
+<td align="center"><span style="" class="sortkey">!B9993068528194 </span>2</td>
+<td align="right">261,231.71</td>
+<td align="right">676,587</td>
+<td align="right"><span style="display:none" class="sortkey">7001972600000000000</span>97.26%</td>
+<td align="right">7,364.75</td>
+<td align="right">19,075</td>
+<td align="right"><span style="display:none" class="sortkey">7000274000000000000</span>2.74%</td>
+</tr>
+<tr valign="top">
+<td><span class="flagicon"><img alt="" src="//upload.wikimedia.org/wikipedia/commons/thumb/0/01/Flag_of_California.svg/23px-Flag_of_California.svg.png" width="23" height="15" class="thumbborder" srcset="//upload.wikimedia.org/wikipedia/commons/thumb/0/01/Flag_of_California.svg/35px-Flag_of_California.svg.png 1.5x, //upload.wikimedia.org/wikipedia/commons/thumb/0/01/Flag_of_California.svg/45px-Flag_of_California.svg.png 2x" data-file-width="900" data-file-height="600" /> </span><a href="/wiki/California" title="California">California</a></td>
+<td align="center"><span style="" class="sortkey">!B9989013877113 </span>3</td>
+<td align="right">163,694.74</td>
+<td align="right">423,967</td>
+<td align="center"><span style="" class="sortkey">!B9989013877113 </span>3</td>
+<td align="right">155,779.22</td>
+<td align="right">403,466</td>
+<td align="right"><span style="display:none" class="sortkey">7001951600000000000</span>95.16%</td>
+<td align="right">7,915.52</td>
+<td align="right">20,501</td>
+<td align="right"><span style="display:none" class="sortkey">7000484000000000000</span>4.84%</td>
+</tr>
+<tr valign="top">
+<td><span class="flagicon"><img alt="" src="//upload.wikimedia.org/wikipedia/commons/thumb/c/cb/Flag_of_Montana.svg/23px-Flag_of_Montana.svg.png" width="23" height="15" class="thumbborder" srcset="//upload.wikimedia.org/wikipedia/commons/thumb/c/cb/Flag_of_Montana.svg/35px-Flag_of_Montana.svg.png 1.5x, //upload.wikimedia.org/wikipedia/commons/thumb/c/cb/Flag_of_Montana.svg/45px-Flag_of_Montana.svg.png 2x" data-file-width="615" data-file-height="410" /> </span><a href="/wiki/Montana" title="Montana">Montana</a></td>
+<td align="center"><span style="" class="sortkey">!B9986137056388 </span>4</td>
+<td align="right">147,039.71</td>
+<td align="right">380,831</td>
+<td align="center"><span style="" class="sortkey">!B9986137056388 </span>4</td>
+<td align="right">145,545.80</td>
+<td align="right">376,962</td>
+<td align="right"><span style="display:none" class="sortkey">7001989800000000000</span>98.98%</td>
+<td align="right">1,493.91</td>
+<td align="right">3,869</td>
+<td align="right"><span style="display:none" class="sortkey">7000102000000000000</span>1.02%</td>
+</tr>
+<tr valign="top">
+<td><span class="flagicon"><img alt="" src="//upload.wikimedia.org/wikipedia/commons/thumb/c/c3/Flag_of_New_Mexico.svg/23px-Flag_of_New_Mexico.svg.png" width="23" height="15" class="thumbborder" srcset="//upload.wikimedia.org/wikipedia/commons/thumb/c/c3/Flag_of_New_Mexico.svg/35px-Flag_of_New_Mexico.svg.png 1.5x, //upload.wikimedia.org/wikipedia/commons/thumb/c/c3/Flag_of_New_Mexico.svg/45px-Flag_of_New_Mexico.svg.png 2x" data-file-width="1200" data-file-height="800" /> </span><a href="/wiki/New_Mexico" title="New Mexico">New Mexico</a></td>
+<td align="center"><span style="" class="sortkey">!B9983905620875 </span>5</td>
+<td align="right">121,590.30</td>
+<td align="right">314,917</td>
+<td align="center"><span style="" class="sortkey">!B9983905620875 </span>5</td>
+<td align="right">121,298.15</td>
+<td align="right">314,161</td>
+<td align="right"><span style="display:none" class="sortkey">7001997600000000000</span>99.76%</td>
+<td align="right">292.15</td>
+<td align="right">757</td>
+<td align="right"><span style="display:none" class="sortkey">6999240000000000000</span>0.24%</td>
+</tr>
+<tr valign="top">
+<td><span class="flagicon"><img alt="" src="//upload.wikimedia.org/wikipedia/commons/thumb/9/9d/Flag_of_Arizona.svg/23px-Flag_of_Arizona.svg.png" width="23" height="15" class="thumbborder" srcset="//upload.wikimedia.org/wikipedia/commons/thumb/9/9d/Flag_of_Arizona.svg/35px-Flag_of_Arizona.svg.png 1.5x, //upload.wikimedia.org/wikipedia/commons/thumb/9/9d/Flag_of_Arizona.svg/45px-Flag_of_Arizona.svg.png 2x" data-file-width="900" data-file-height="600" /> </span><a href="/wiki/Arizona" title="Arizona">Arizona</a></td>
+<td align="center"><span style="" class="sortkey">!B9982082405307 </span>6</td>
+<td align="right">113,990.30</td>
+<td align="right">295,234</td>
+<td align="center"><span style="" class="sortkey">!B9982082405307 </span>6</td>
+<td align="right">113,594.08</td>
+<td align="right">294,207</td>
+<td align="right"><span style="display:none" class="sortkey">7001996500000000000</span>99.65%</td>
+<td align="right">396.22</td>
+<td align="right">1,026</td>
+<td align="right"><span style="display:none" class="sortkey">6999350000000000000</span>0.35%</td>
+</tr>
+<tr valign="top">
+<td><span class="flagicon"><img alt="" src="//upload.wikimedia.org/wikipedia/commons/thumb/f/f1/Flag_of_Nevada.svg/23px-Flag_of_Nevada.svg.png" width="23" height="15" class="thumbborder" srcset="//upload.wikimedia.org/wikipedia/commons/thumb/f/f1/Flag_of_Nevada.svg/35px-Flag_of_Nevada.svg.png 1.5x, //upload.wikimedia.org/wikipedia/commons/thumb/f/f1/Flag_of_Nevada.svg/45px-Flag_of_Nevada.svg.png 2x" data-file-width="750" data-file-height="500" /> </span><a href="/wiki/Nevada" title="Nevada">Nevada</a></td>
+<td align="center"><span style="" class="sortkey">!B9980540898509 </span>7</td>
+<td align="right">110,571.82</td>
+<td align="right">286,380</td>
+<td align="center"><span style="" class="sortkey">!B9980540898509 </span>7</td>
+<td align="right">109,781.18</td>
+<td align="right">284,332</td>
+<td align="right"><span style="display:none" class="sortkey">7001992800000000000</span>99.28%</td>
+<td align="right">790.65</td>
+<td align="right">2,048</td>
+<td align="right"><span style="display:none" class="sortkey">6999720000000000000</span>0.72%</td>
+</tr>
+<tr valign="top">
+<td><span class="flagicon"><img alt="" src="//upload.wikimedia.org/wikipedia/commons/thumb/4/46/Flag_of_Colorado.svg/23px-Flag_of_Colorado.svg.png" width="23" height="15" class="thumbborder" srcset="//upload.wikimedia.org/wikipedia/commons/thumb/4/46/Flag_of_Colorado.svg/35px-Flag_of_Colorado.svg.png 1.5x, //upload.wikimedia.org/wikipedia/commons/thumb/4/46/Flag_of_Colorado.svg/45px-Flag_of_Colorado.svg.png 2x" data-file-width="1800" data-file-height="1200" /> </span><a href="/wiki/Colorado" title="Colorado">Colorado</a></td>
+<td align="center"><span style="" class="sortkey">!B9979205584583 </span>8</td>
+<td align="right">104,093.67</td>
+<td align="right">269,601</td>
+<td align="center"><span style="" class="sortkey">!B9979205584583 </span>8</td>
+<td align="right">103,641.89</td>
+<td align="right">268,431</td>
+<td align="right"><span style="display:none" class="sortkey">7001995700000099999</span>99.57%</td>
+<td align="right">451.78</td>
+<td align="right">1,170</td>
+<td align="right"><span style="display:none" class="sortkey">6999430000000000000</span>0.43%</td>
+</tr>
+<tr valign="top">
+<td><span class="flagicon"><img alt="" src="//upload.wikimedia.org/wikipedia/commons/thumb/b/b9/Flag_of_Oregon.svg/23px-Flag_of_Oregon.svg.png" width="23" height="14" class="thumbborder" srcset="//upload.wikimedia.org/wikipedia/commons/thumb/b/b9/Flag_of_Oregon.svg/35px-Flag_of_Oregon.svg.png 1.5x, //upload.wikimedia.org/wikipedia/commons/thumb/b/b9/Flag_of_Oregon.svg/46px-Flag_of_Oregon.svg.png 2x" data-file-width="750" data-file-height="450" /> </span><a href="/wiki/Oregon" title="Oregon">Oregon</a></td>
+<td align="center"><span style="" class="sortkey">!B9978027754226 </span>9</td>
+<td align="right">98,378.54</td>
+<td align="right">254,799</td>
+<td align="center"><span style="" class="sortkey">!B9976974149070 </span>10</td>
+<td align="right">95,988.01</td>
+<td align="right">248,608</td>
+<td align="right"><span style="display:none" class="sortkey">7001975700000099999</span>97.57%</td>
+<td align="right">2,390.53</td>
+<td align="right">6,191</td>
+<td align="right"><span style="display:none" class="sortkey">7000243000000000000</span>2.43%</td>
+</tr>
+<tr valign="top">
+<td><span class="flagicon"><img alt="" src="//upload.wikimedia.org/wikipedia/commons/thumb/b/bc/Flag_of_Wyoming.svg/22px-Flag_of_Wyoming.svg.png" width="22" height="15" class="thumbborder" srcset="//upload.wikimedia.org/wikipedia/commons/thumb/b/bc/Flag_of_Wyoming.svg/33px-Flag_of_Wyoming.svg.png 1.5x, //upload.wikimedia.org/wikipedia/commons/thumb/b/bc/Flag_of_Wyoming.svg/43px-Flag_of_Wyoming.svg.png 2x" data-file-width="1000" data-file-height="700" /> </span><a href="/wiki/Wyoming" title="Wyoming">Wyoming</a></td>
+<td align="center"><span style="" class="sortkey">!B9976974149070 </span>10</td>
+<td align="right">97,813.01</td>
+<td align="right">253,335</td>
+<td align="center"><span style="" class="sortkey">!B9978027754226 </span>9</td>
+<td align="right">97,093.14</td>
+<td align="right">251,470</td>
+<td align="right"><span style="display:none" class="sortkey">7001992600000000000</span>99.26%</td>
+<td align="right">719.87</td>
+<td align="right">1,864</td>
+<td align="right"><span style="display:none" class="sortkey">6999740000000000000</span>0.74%</td>
+</tr>
+<tr valign="top">
+<td><span class="flagicon"><img alt="" src="//upload.wikimedia.org/wikipedia/commons/thumb/b/b5/Flag_of_Michigan.svg/23px-Flag_of_Michigan.svg.png" width="23" height="15" class="thumbborder" srcset="//upload.wikimedia.org/wikipedia/commons/thumb/b/b5/Flag_of_Michigan.svg/35px-Flag_of_Michigan.svg.png 1.5x, //upload.wikimedia.org/wikipedia/commons/thumb/b/b5/Flag_of_Michigan.svg/45px-Flag_of_Michigan.svg.png 2x" data-file-width="685" data-file-height="457" /> </span><a href="/wiki/Michigan" title="Michigan">Michigan</a></td>
+<td align="center"><span style="" class="sortkey">!B9976021047272 </span>11</td>
+<td align="right">96,713.51</td>
+<td align="right">250,487</td>
+<td align="center"><span style="" class="sortkey">!B9969089575466 </span>22</td>
+<td align="right">56,538.90</td>
+<td align="right">146,435</td>
+<td align="right"><span style="display:none" class="sortkey">7001584600000000000</span>58.46%</td>
+<td align="right">40,174.61</td>
+<td align="right">104,052</td>
+<td align="right"><span style="display:none" class="sortkey">7001415400000000000</span>41.54%</td>
+</tr>
+<tr valign="top">
+<td><span class="flagicon"><img alt="" src="//upload.wikimedia.org/wikipedia/commons/thumb/b/b9/Flag_of_Minnesota.svg/23px-Flag_of_Minnesota.svg.png" width="23" height="15" class="thumbborder" srcset="//upload.wikimedia.org/wikipedia/commons/thumb/b/b9/Flag_of_Minnesota.svg/35px-Flag_of_Minnesota.svg.png 1.5x, //upload.wikimedia.org/wikipedia/commons/thumb/b/b9/Flag_of_Minnesota.svg/46px-Flag_of_Minnesota.svg.png 2x" data-file-width="500" data-file-height="318" /> </span><a href="/wiki/Minnesota" title="Minnesota">Minnesota</a></td>
+<td align="center"><span style="" class="sortkey">!B9975150933502 </span>12</td>
+<td align="right">86,935.83</td>
+<td align="right">225,163</td>
+<td align="center"><span style="" class="sortkey">!B9973609426703 </span>14</td>
+<td align="right">79,626.74</td>
+<td align="right">206,232</td>
+<td align="right"><span style="display:none" class="sortkey">7001915900000000000</span>91.59%</td>
+<td align="right">7,309.09</td>
+<td align="right">18,930</td>
+<td align="right"><span style="display:none" class="sortkey">7000841000000000000</span>8.41%</td>
+</tr>
+<tr valign="top">
+<td><span class="flagicon"><img alt="" src="//upload.wikimedia.org/wikipedia/commons/thumb/7/7f/Flag_of_Utah.png/23px-Flag_of_Utah.png" width="23" height="14" class="thumbborder" srcset="//upload.wikimedia.org/wikipedia/commons/thumb/7/7f/Flag_of_Utah.png/35px-Flag_of_Utah.png 1.5x, //upload.wikimedia.org/wikipedia/commons/thumb/7/7f/Flag_of_Utah.png/46px-Flag_of_Utah.png 2x" data-file-width="2875" data-file-height="1725" /> </span><a href="/wiki/Utah" title="Utah">Utah</a></td>
+<td align="center"><span style="" class="sortkey">!B9974350506425 </span>13</td>
+<td align="right">84,896.88</td>
+<td align="right">219,882</td>
+<td align="center"><span style="" class="sortkey">!B9975150933502 </span>12</td>
+<td align="right">82,169.62</td>
+<td align="right">212,818</td>
+<td align="right"><span style="display:none" class="sortkey">7001967900000000000</span>96.79%</td>
+<td align="right">2,727.26</td>
+<td align="right">7,064</td>
+<td align="right"><span style="display:none" class="sortkey">7000321000000000000</span>3.21%</td>
+</tr>
+<tr valign="top">
+<td><span class="flagicon"><img alt="" src="//upload.wikimedia.org/wikipedia/commons/thumb/a/a4/Flag_of_Idaho.svg/19px-Flag_of_Idaho.svg.png" width="19" height="15" class="thumbborder" srcset="//upload.wikimedia.org/wikipedia/commons/thumb/a/a4/Flag_of_Idaho.svg/29px-Flag_of_Idaho.svg.png 1.5x, //upload.wikimedia.org/wikipedia/commons/thumb/a/a4/Flag_of_Idaho.svg/38px-Flag_of_Idaho.svg.png 2x" data-file-width="660" data-file-height="520" /> </span><a href="/wiki/Idaho" title="Idaho">Idaho</a></td>
+<td align="center"><span style="" class="sortkey">!B9973609426703 </span>14</td>
+<td align="right">83,568.95</td>
+<td align="right">216,443</td>
+<td align="center"><span style="" class="sortkey">!B9976021047272 </span>11</td>
+<td align="right">82,643.12</td>
+<td align="right">214,045</td>
+<td align="right"><span style="display:none" class="sortkey">7001988900000000000</span>98.89%</td>
+<td align="right">925.83</td>
+<td align="right">2,398</td>
+<td align="right"><span style="display:none" class="sortkey">7000111000000000000</span>1.11%</td>
+</tr>
+<tr valign="top">
+<td><span class="flagicon"><img alt="" src="//upload.wikimedia.org/wikipedia/commons/thumb/d/da/Flag_of_Kansas.svg/23px-Flag_of_Kansas.svg.png" width="23" height="14" class="thumbborder" srcset="//upload.wikimedia.org/wikipedia/commons/thumb/d/da/Flag_of_Kansas.svg/35px-Flag_of_Kansas.svg.png 1.5x, //upload.wikimedia.org/wikipedia/commons/thumb/d/da/Flag_of_Kansas.svg/46px-Flag_of_Kansas.svg.png 2x" data-file-width="5400" data-file-height="3240" /> </span><a href="/wiki/Kansas" title="Kansas">Kansas</a></td>
+<td align="center"><span style="" class="sortkey">!B9972919497988 </span>15</td>
+<td align="right">82,278.36</td>
+<td align="right">213,100</td>
+<td align="center"><span style="" class="sortkey">!B9974350506425 </span>13</td>
+<td align="right">81,758.72</td>
+<td align="right">211,754</td>
+<td align="right"><span style="display:none" class="sortkey">7001993700000000000</span>99.37%</td>
+<td align="right">519.64</td>
+<td align="right">1,346</td>
+<td align="right"><span style="display:none" class="sortkey">6999630000000000000</span>0.63%</td>
+</tr>
+<tr valign="top">
+<td><span class="flagicon"><img alt="" src="//upload.wikimedia.org/wikipedia/commons/thumb/4/4d/Flag_of_Nebraska.svg/23px-Flag_of_Nebraska.svg.png" width="23" height="14" class="thumbborder" srcset="//upload.wikimedia.org/wikipedia/commons/thumb/4/4d/Flag_of_Nebraska.svg/35px-Flag_of_Nebraska.svg.png 1.5x, //upload.wikimedia.org/wikipedia/commons/thumb/4/4d/Flag_of_Nebraska.svg/46px-Flag_of_Nebraska.svg.png 2x" data-file-width="750" data-file-height="450" /> </span><a href="/wiki/Nebraska" title="Nebraska">Nebraska</a></td>
+<td align="center"><span style="" class="sortkey">!B9972274112777 </span>16</td>
+<td align="right">77,347.81</td>
+<td align="right">200,330</td>
+<td align="center"><span style="" class="sortkey">!B9972919497988 </span>15</td>
+<td align="right">76,824.17</td>
+<td align="right">198,974</td>
+<td align="right"><span style="display:none" class="sortkey">7001993200000099999</span>99.32%</td>
+<td align="right">523.64</td>
+<td align="right">1,356</td>
+<td align="right"><span style="display:none" class="sortkey">6999680000000000000</span>0.68%</td>
+</tr>
+<tr valign="top">
+<td><span class="flagicon"><img alt="" src="//upload.wikimedia.org/wikipedia/commons/thumb/1/1a/Flag_of_South_Dakota.svg/23px-Flag_of_South_Dakota.svg.png" width="23" height="14" class="thumbborder" srcset="//upload.wikimedia.org/wikipedia/commons/thumb/1/1a/Flag_of_South_Dakota.svg/35px-Flag_of_South_Dakota.svg.png 1.5x, //upload.wikimedia.org/wikipedia/commons/thumb/1/1a/Flag_of_South_Dakota.svg/46px-Flag_of_South_Dakota.svg.png 2x" data-file-width="720" data-file-height="450" /> </span><a href="/wiki/South_Dakota" title="South Dakota">South Dakota</a></td>
+<td align="center"><span style="" class="sortkey">!B9971667866559 </span>17</td>
+<td align="right">77,115.68</td>
+<td align="right">199,729</td>
+<td align="center"><span style="" class="sortkey">!B9972274112777 </span>16</td>
+<td align="right">75,811.00</td>
+<td align="right">196,350</td>
+<td align="right"><span style="display:none" class="sortkey">7001983100000000000</span>98.31%</td>
+<td align="right">1,304.68</td>
+<td align="right">3,379</td>
+<td align="right"><span style="display:none" class="sortkey">7000169000000000000</span>1.69%</td>
+</tr>
+<tr valign="top">
+<td><span class="flagicon"><img alt="" src="//upload.wikimedia.org/wikipedia/commons/thumb/5/54/Flag_of_Washington.svg/23px-Flag_of_Washington.svg.png" width="23" height="14" class="thumbborder" srcset="//upload.wikimedia.org/wikipedia/commons/thumb/5/54/Flag_of_Washington.svg/35px-Flag_of_Washington.svg.png 1.5x, //upload.wikimedia.org/wikipedia/commons/thumb/5/54/Flag_of_Washington.svg/46px-Flag_of_Washington.svg.png 2x" data-file-width="1106" data-file-height="658" /> </span><a href="/wiki/Washington_(state)" title="Washington (state)">Washington</a></td>
+<td align="center"><span style="" class="sortkey">!B9971096282421 </span>18</td>
+<td align="right">71,297.95</td>
+<td align="right">184,661</td>
+<td align="center"><span style="" class="sortkey">!B9970042677264 </span>20</td>
+<td align="right">66,455.52</td>
+<td align="right">172,119</td>
+<td align="right"><span style="display:none" class="sortkey">7001932100000099999</span>93.21%</td>
+<td align="right">4,842.43</td>
+<td align="right">12,542</td>
+<td align="right"><span style="display:none" class="sortkey">7000679000000000000</span>6.79%</td>
+</tr>
+<tr valign="top">
+<td><span class="flagicon"><img alt="" src="//upload.wikimedia.org/wikipedia/commons/thumb/e/ee/Flag_of_North_Dakota.svg/20px-Flag_of_North_Dakota.svg.png" width="20" height="16" class="thumbborder" srcset="//upload.wikimedia.org/wikipedia/commons/thumb/e/ee/Flag_of_North_Dakota.svg/30px-Flag_of_North_Dakota.svg.png 1.5x, //upload.wikimedia.org/wikipedia/commons/thumb/e/ee/Flag_of_North_Dakota.svg/40px-Flag_of_North_Dakota.svg.png 2x" data-file-width="575" data-file-height="450" /> </span><a href="/wiki/North_Dakota" title="North Dakota">North Dakota</a></td>
+<td align="center"><span style="" class="sortkey">!B9970555610208 </span>19</td>
+<td align="right">70,698.32</td>
+<td align="right">183,108</td>
+<td align="center"><span style="" class="sortkey">!B9971667866559 </span>17</td>
+<td align="right">69,000.80</td>
+<td align="right">178,711</td>
+<td align="right"><span style="display:none" class="sortkey">7001976000000000000</span>97.60%</td>
+<td align="right">1,697.52</td>
+<td align="right">4,397</td>
+<td align="right"><span style="display:none" class="sortkey">7000240000000000000</span>2.40%</td>
+</tr>
+<tr valign="top">
+<td><span class="flagicon"><img alt="" src="//upload.wikimedia.org/wikipedia/commons/thumb/6/6e/Flag_of_Oklahoma.svg/23px-Flag_of_Oklahoma.svg.png" width="23" height="15" class="thumbborder" srcset="//upload.wikimedia.org/wikipedia/commons/thumb/6/6e/Flag_of_Oklahoma.svg/35px-Flag_of_Oklahoma.svg.png 1.5x, //upload.wikimedia.org/wikipedia/commons/thumb/6/6e/Flag_of_Oklahoma.svg/45px-Flag_of_Oklahoma.svg.png 2x" data-file-width="675" data-file-height="450" /> </span><a href="/wiki/Oklahoma" title="Oklahoma">Oklahoma</a></td>
+<td align="center"><span style="" class="sortkey">!B9970042677264 </span>20</td>
+<td align="right">69,898.87</td>
+<td align="right">181,037</td>
+<td align="center"><span style="" class="sortkey">!B9970555610208 </span>19</td>
+<td align="right">68,594.92</td>
+<td align="right">177,660</td>
+<td align="right"><span style="display:none" class="sortkey">7001981300000000000</span>98.13%</td>
+<td align="right">1,303.95</td>
+<td align="right">3,377</td>
+<td align="right"><span style="display:none" class="sortkey">7000187000000000000</span>1.87%</td>
+</tr>
+<tr valign="top">
+<td><span class="flagicon"><img alt="" src="//upload.wikimedia.org/wikipedia/commons/thumb/5/5a/Flag_of_Missouri.svg/23px-Flag_of_Missouri.svg.png" width="23" height="13" class="thumbborder" srcset="//upload.wikimedia.org/wikipedia/commons/thumb/5/5a/Flag_of_Missouri.svg/35px-Flag_of_Missouri.svg.png 1.5x, //upload.wikimedia.org/wikipedia/commons/thumb/5/5a/Flag_of_Missouri.svg/46px-Flag_of_Missouri.svg.png 2x" data-file-width="2400" data-file-height="1400" /> </span><a href="/wiki/Missouri" title="Missouri">Missouri</a></td>
+<td align="center"><span style="" class="sortkey">!B9969554775622 </span>21</td>
+<td align="right">69,706.99</td>
+<td align="right">180,540</td>
+<td align="center"><span style="" class="sortkey">!B9971096282421 </span>18</td>
+<td align="right">68,741.52</td>
+<td align="right">178,040</td>
+<td align="right"><span style="display:none" class="sortkey">7001986100000000000</span>98.61%</td>
+<td align="right">965.47</td>
+<td align="right">2,501</td>
+<td align="right"><span style="display:none" class="sortkey">7000138990000099999</span>1.39%</td>
+</tr>
+<tr valign="top">
+<td><span class="flagicon"><img alt="" src="//upload.wikimedia.org/wikipedia/commons/thumb/f/f7/Flag_of_Florida.svg/23px-Flag_of_Florida.svg.png" width="23" height="15" class="thumbborder" srcset="//upload.wikimedia.org/wikipedia/commons/thumb/f/f7/Flag_of_Florida.svg/35px-Flag_of_Florida.svg.png 1.5x, //upload.wikimedia.org/wikipedia/commons/thumb/f/f7/Flag_of_Florida.svg/45px-Flag_of_Florida.svg.png 2x" data-file-width="750" data-file-height="500" /> </span><a href="/wiki/Florida" title="Florida">Florida</a></td>
+<td align="center"><span style="" class="sortkey">!B9969089575466 </span>22</td>
+<td align="right">65,757.70</td>
+<td align="right">170,312</td>
+<td align="center"><span style="" class="sortkey">!B9967419034619 </span>26</td>
+<td align="right">53,624.76</td>
+<td align="right">138,887</td>
+<td align="right"><span style="display:none" class="sortkey">7001815500000000000</span>81.55%</td>
+<td align="right">12,132.94</td>
+<td align="right">31,424</td>
+<td align="right"><span style="display:none" class="sortkey">7001184500000000000</span>18.45%</td>
+</tr>
+<tr valign="top">
+<td><span class="flagicon"><img alt="" src="//upload.wikimedia.org/wikipedia/commons/thumb/2/22/Flag_of_Wisconsin.svg/23px-Flag_of_Wisconsin.svg.png" width="23" height="15" class="thumbborder" srcset="//upload.wikimedia.org/wikipedia/commons/thumb/2/22/Flag_of_Wisconsin.svg/35px-Flag_of_Wisconsin.svg.png 1.5x, //upload.wikimedia.org/wikipedia/commons/thumb/2/22/Flag_of_Wisconsin.svg/45px-Flag_of_Wisconsin.svg.png 2x" data-file-width="675" data-file-height="450" /> </span><a href="/wiki/Wisconsin" title="Wisconsin">Wisconsin</a></td>
+<td align="center"><span style="" class="sortkey">!B9968645057840 </span>23</td>
+<td align="right">65,496.38</td>
+<td align="right">169,635</td>
+<td align="center"><span style="" class="sortkey">!B9967811241751 </span>25</td>
+<td align="right">54,157.80</td>
+<td align="right">140,268</td>
+<td align="right"><span style="display:none" class="sortkey">7001826900000000000</span>82.69%</td>
+<td align="right">11,338.57</td>
+<td align="right">29,367</td>
+<td align="right"><span style="display:none" class="sortkey">7001173109999900000</span>17.31%</td>
+</tr>
+<tr valign="top">
+<td><span class="flagicon"><img alt="" src="//upload.wikimedia.org/wikipedia/commons/thumb/5/54/Flag_of_Georgia_%28U.S._state%29.svg/23px-Flag_of_Georgia_%28U.S._state%29.svg.png" width="23" height="14" class="thumbborder" srcset="//upload.wikimedia.org/wikipedia/commons/thumb/5/54/Flag_of_Georgia_%28U.S._state%29.svg/35px-Flag_of_Georgia_%28U.S._state%29.svg.png 1.5x, //upload.wikimedia.org/wikipedia/commons/thumb/5/54/Flag_of_Georgia_%28U.S._state%29.svg/46px-Flag_of_Georgia_%28U.S._state%29.svg.png 2x" data-file-width="6912" data-file-height="4320" /> </span><a href="/wiki/Georgia_(U.S._state)" title="Georgia (U.S. state)">Georgia</a></td>
+<td align="center"><span style="" class="sortkey">!B9968219461696 </span>24</td>
+<td align="right">59,425.15</td>
+<td align="right">153,910</td>
+<td align="center"><span style="" class="sortkey">!B9969554775622 </span>21</td>
+<td align="right">57,513.49</td>
+<td align="right">148,959</td>
+<td align="right"><span style="display:none" class="sortkey">7001967800000000000</span>96.78%</td>
+<td align="right">1,911.66</td>
+<td align="right">4,951</td>
+<td align="right"><span style="display:none" class="sortkey">7000322000000000000</span>3.22%</td>
+</tr>
+<tr valign="top">
+<td><span class="flagicon"><img alt="" src="//upload.wikimedia.org/wikipedia/commons/thumb/0/01/Flag_of_Illinois.svg/23px-Flag_of_Illinois.svg.png" width="23" height="14" class="thumbborder" srcset="//upload.wikimedia.org/wikipedia/commons/thumb/0/01/Flag_of_Illinois.svg/35px-Flag_of_Illinois.svg.png 1.5x, //upload.wikimedia.org/wikipedia/commons/thumb/0/01/Flag_of_Illinois.svg/46px-Flag_of_Illinois.svg.png 2x" data-file-width="778" data-file-height="460" /> </span><a href="/wiki/Illinois" title="Illinois">Illinois</a></td>
+<td align="center"><span style="" class="sortkey">!B9967811241751 </span>25</td>
+<td align="right">57,913.55</td>
+<td align="right">149,995</td>
+<td align="center"><span style="" class="sortkey">!B9968219461696 </span>24</td>
+<td align="right">55,518.93</td>
+<td align="right">143,793</td>
+<td align="right"><span style="display:none" class="sortkey">7001958700000000000</span>95.87%</td>
+<td align="right">2,394.62</td>
+<td align="right">6,202</td>
+<td align="right"><span style="display:none" class="sortkey">7000413000000000000</span>4.13%</td>
+</tr>
+<tr valign="top">
+<td><span class="flagicon"><img alt="" src="//upload.wikimedia.org/wikipedia/commons/thumb/a/aa/Flag_of_Iowa.svg/22px-Flag_of_Iowa.svg.png" width="22" height="15" class="thumbborder" srcset="//upload.wikimedia.org/wikipedia/commons/thumb/a/aa/Flag_of_Iowa.svg/34px-Flag_of_Iowa.svg.png 1.5x, //upload.wikimedia.org/wikipedia/commons/thumb/a/aa/Flag_of_Iowa.svg/44px-Flag_of_Iowa.svg.png 2x" data-file-width="670" data-file-height="459" /> </span><a href="/wiki/Iowa" title="Iowa">Iowa</a></td>
+<td align="center"><span style="" class="sortkey">!B9967419034619 </span>26</td>
+<td align="right">56,272.81</td>
+<td align="right">145,746</td>
+<td align="center"><span style="" class="sortkey">!B9968645057840 </span>23</td>
+<td align="right">55,857.13</td>
+<td align="right">144,669</td>
+<td align="right"><span style="display:none" class="sortkey">7001992600000000000</span>99.26%</td>
+<td align="right">415.68</td>
+<td align="right">1,077</td>
+<td align="right"><span style="display:none" class="sortkey">6999740000000000000</span>0.74%</td>
+</tr>
+<tr valign="top">
+<td><span class="flagicon"><img alt="" src="//upload.wikimedia.org/wikipedia/commons/thumb/1/1a/Flag_of_New_York.svg/23px-Flag_of_New_York.svg.png" width="23" height="12" class="thumbborder" srcset="//upload.wikimedia.org/wikipedia/commons/thumb/1/1a/Flag_of_New_York.svg/35px-Flag_of_New_York.svg.png 1.5x, //upload.wikimedia.org/wikipedia/commons/thumb/1/1a/Flag_of_New_York.svg/46px-Flag_of_New_York.svg.png 2x" data-file-width="900" data-file-height="450" /> </span><a href="/wiki/New_York" title="New York">New York</a></td>
+<td align="center"><span style="" class="sortkey">!B9967041631339 </span>27</td>
+<td align="right">54,554.98</td>
+<td align="right">141,297</td>
+<td align="center"><span style="" class="sortkey">!B9965988026183 </span>30</td>
+<td align="right">47,126.40</td>
+<td align="right">122,057</td>
+<td align="right"><span style="display:none" class="sortkey">7001863800000000000</span>86.38%</td>
+<td align="right">7,428.58</td>
+<td align="right">19,240</td>
+<td align="right"><span style="display:none" class="sortkey">7001136200000099999</span>13.62%</td>
+</tr>
+<tr valign="top">
+<td><span class="flagicon"><img alt="" src="//upload.wikimedia.org/wikipedia/commons/thumb/b/bb/Flag_of_North_Carolina.svg/23px-Flag_of_North_Carolina.svg.png" width="23" height="15" class="thumbborder" srcset="//upload.wikimedia.org/wikipedia/commons/thumb/b/bb/Flag_of_North_Carolina.svg/35px-Flag_of_North_Carolina.svg.png 1.5x, //upload.wikimedia.org/wikipedia/commons/thumb/b/bb/Flag_of_North_Carolina.svg/45px-Flag_of_North_Carolina.svg.png 2x" data-file-width="750" data-file-height="500" /> </span><a href="/wiki/North_Carolina" title="North Carolina">North Carolina</a></td>
+<td align="center"><span style="" class="sortkey">!B9966677954898 </span>28</td>
+<td align="right">53,819.16</td>
+<td align="right">139,391</td>
+<td align="center"><span style="" class="sortkey">!B9966327041700 </span>29</td>
+<td align="right">48,617.91</td>
+<td align="right">125,920</td>
+<td align="right"><span style="display:none" class="sortkey">7001903400000000000</span>90.34%</td>
+<td align="right">5,201.25</td>
+<td align="right">13,471</td>
+<td align="right"><span style="display:none" class="sortkey">7000966000000000000</span>9.66%</td>
+</tr>
+<tr valign="top">
+<td><span class="flagicon"><img alt="" src="//upload.wikimedia.org/wikipedia/commons/thumb/9/9d/Flag_of_Arkansas.svg/23px-Flag_of_Arkansas.svg.png" width="23" height="15" class="thumbborder" srcset="//upload.wikimedia.org/wikipedia/commons/thumb/9/9d/Flag_of_Arkansas.svg/35px-Flag_of_Arkansas.svg.png 1.5x, //upload.wikimedia.org/wikipedia/commons/thumb/9/9d/Flag_of_Arkansas.svg/45px-Flag_of_Arkansas.svg.png 2x" data-file-width="450" data-file-height="300" /> </span><a href="/wiki/Arkansas" title="Arkansas">Arkansas</a></td>
+<td align="center"><span style="" class="sortkey">!B9966327041700 </span>29</td>
+<td align="right">53,178.55</td>
+<td align="right">137,732</td>
+<td align="center"><span style="" class="sortkey">!B9967041631339 </span>27</td>
+<td align="right">52,035.48</td>
+<td align="right">134,771</td>
+<td align="right"><span style="display:none" class="sortkey">7001978500000000000</span>97.85%</td>
+<td align="right">1,143.07</td>
+<td align="right">2,961</td>
+<td align="right"><span style="display:none" class="sortkey">7000215000000000000</span>2.15%</td>
+</tr>
+<tr valign="top">
+<td><span class="flagicon"><img alt="" src="//upload.wikimedia.org/wikipedia/commons/thumb/5/5c/Flag_of_Alabama.svg/23px-Flag_of_Alabama.svg.png" width="23" height="15" class="thumbborder" srcset="//upload.wikimedia.org/wikipedia/commons/thumb/5/5c/Flag_of_Alabama.svg/35px-Flag_of_Alabama.svg.png 1.5x, //upload.wikimedia.org/wikipedia/commons/thumb/5/5c/Flag_of_Alabama.svg/45px-Flag_of_Alabama.svg.png 2x" data-file-width="600" data-file-height="400" /> </span><a href="/wiki/Alabama" title="Alabama">Alabama</a></td>
+<td align="center"><span style="" class="sortkey">!B9965988026183 </span>30</td>
+<td align="right">52,420.07</td>
+<td align="right">135,767</td>
+<td align="center"><span style="" class="sortkey">!B9966677954898 </span>28</td>
+<td align="right">50,645.33</td>
+<td align="right">131,171</td>
+<td align="right"><span style="display:none" class="sortkey">7001966100000000000</span>96.61%</td>
+<td align="right">1,774.74</td>
+<td align="right">4,597</td>
+<td align="right"><span style="display:none" class="sortkey">7000339000000000000</span>3.39%</td>
+</tr>
+<tr valign="top">
+<td><span class="flagicon"><img alt="" src="//upload.wikimedia.org/wikipedia/commons/thumb/e/e0/Flag_of_Louisiana.svg/23px-Flag_of_Louisiana.svg.png" width="23" height="15" class="thumbborder" srcset="//upload.wikimedia.org/wikipedia/commons/thumb/e/e0/Flag_of_Louisiana.svg/35px-Flag_of_Louisiana.svg.png 1.5x, //upload.wikimedia.org/wikipedia/commons/thumb/e/e0/Flag_of_Louisiana.svg/46px-Flag_of_Louisiana.svg.png 2x" data-file-width="990" data-file-height="630" /> </span><a href="/wiki/Louisiana" title="Louisiana">Louisiana</a></td>
+<td align="center"><span style="" class="sortkey">!B9965660127955 </span>31</td>
+<td align="right">52,378.13</td>
+<td align="right">135,659</td>
+<td align="center"><span style="" class="sortkey">!B9965034924385 </span>33</td>
+<td align="right">43,203.90</td>
+<td align="right">111,898</td>
+<td align="right"><span style="display:none" class="sortkey">7001824800000000000</span>82.48%</td>
+<td align="right">9,174.23</td>
+<td align="right">23,761</td>
+<td align="right"><span style="display:none" class="sortkey">7001175200000000000</span>17.52%</td>
+</tr>
+<tr valign="top">
+<td><span class="flagicon"><img alt="" src="//upload.wikimedia.org/wikipedia/commons/thumb/4/42/Flag_of_Mississippi.svg/23px-Flag_of_Mississippi.svg.png" width="23" height="15" class="thumbborder" srcset="//upload.wikimedia.org/wikipedia/commons/thumb/4/42/Flag_of_Mississippi.svg/35px-Flag_of_Mississippi.svg.png 1.5x, //upload.wikimedia.org/wikipedia/commons/thumb/4/42/Flag_of_Mississippi.svg/45px-Flag_of_Mississippi.svg.png 2x" data-file-width="900" data-file-height="600" /> </span><a href="/wiki/Mississippi" title="Mississippi">Mississippi</a></td>
+<td align="center"><span style="" class="sortkey">!B9965342640972 </span>32</td>
+<td align="right">48,431.78</td>
+<td align="right">125,438</td>
+<td align="center"><span style="" class="sortkey">!B9965660127955 </span>31</td>
+<td align="right">46,923.27</td>
+<td align="right">121,531</td>
+<td align="right"><span style="display:none" class="sortkey">7001968900000000000</span>96.89%</td>
+<td align="right">1,508.51</td>
+<td align="right">3,907</td>
+<td align="right"><span style="display:none" class="sortkey">7000311000000000000</span>3.11%</td>
+</tr>
+<tr valign="top">
+<td><span class="flagicon"><img alt="" src="//upload.wikimedia.org/wikipedia/commons/thumb/f/f7/Flag_of_Pennsylvania.svg/23px-Flag_of_Pennsylvania.svg.png" width="23" height="15" class="thumbborder" srcset="//upload.wikimedia.org/wikipedia/commons/thumb/f/f7/Flag_of_Pennsylvania.svg/35px-Flag_of_Pennsylvania.svg.png 1.5x, //upload.wikimedia.org/wikipedia/commons/thumb/f/f7/Flag_of_Pennsylvania.svg/45px-Flag_of_Pennsylvania.svg.png 2x" data-file-width="675" data-file-height="450" /> </span><a href="/wiki/Pennsylvania" title="Pennsylvania">Pennsylvania</a></td>
+<td align="center"><span style="" class="sortkey">!B9965034924385 </span>33</td>
+<td align="right">46,054.35</td>
+<td align="right">119,280</td>
+<td align="center"><span style="" class="sortkey">!B9965342640972 </span>32</td>
+<td align="right">44,742.70</td>
+<td align="right">115,883</td>
+<td align="right"><span style="display:none" class="sortkey">7001971500000000000</span>97.15%</td>
+<td align="right">1,311.64</td>
+<td align="right">3,397</td>
+<td align="right"><span style="display:none" class="sortkey">7000285000000000000</span>2.85%</td>
+</tr>
+<tr valign="top">
+<td><span class="flagicon"><img alt="" src="//upload.wikimedia.org/wikipedia/commons/thumb/4/4c/Flag_of_Ohio.svg/23px-Flag_of_Ohio.svg.png" width="23" height="14" srcset="//upload.wikimedia.org/wikipedia/commons/thumb/4/4c/Flag_of_Ohio.svg/35px-Flag_of_Ohio.svg.png 1.5x, //upload.wikimedia.org/wikipedia/commons/thumb/4/4c/Flag_of_Ohio.svg/46px-Flag_of_Ohio.svg.png 2x" data-file-width="520" data-file-height="320" /> </span><a href="/wiki/Ohio" title="Ohio">Ohio</a></td>
+<td align="center"><span style="" class="sortkey">!B9964736394753 </span>34</td>
+<td align="right">44,825.58</td>
+<td align="right">116,098</td>
+<td align="center"><span style="" class="sortkey">!B9964446519385 </span>35</td>
+<td align="right">40,860.69</td>
+<td align="right">105,829</td>
+<td align="right"><span style="display:none" class="sortkey">7001911500000000000</span>91.15%</td>
+<td align="right">3,964.89</td>
+<td align="right">10,269</td>
+<td align="right"><span style="display:none" class="sortkey">7000885000000000000</span>8.85%</td>
+</tr>
+<tr valign="top">
+<td><span class="flagicon"><img alt="" src="//upload.wikimedia.org/wikipedia/commons/thumb/4/47/Flag_of_Virginia.svg/22px-Flag_of_Virginia.svg.png" width="22" height="15" class="thumbborder" srcset="//upload.wikimedia.org/wikipedia/commons/thumb/4/47/Flag_of_Virginia.svg/34px-Flag_of_Virginia.svg.png 1.5x, //upload.wikimedia.org/wikipedia/commons/thumb/4/47/Flag_of_Virginia.svg/44px-Flag_of_Virginia.svg.png 2x" data-file-width="670" data-file-height="460" /> </span><a href="/wiki/Virginia" title="Virginia">Virginia</a></td>
+<td align="center"><span style="" class="sortkey">!B9964446519385 </span>35</td>
+<td align="right">42,774.93</td>
+<td align="right">110,787</td>
+<td align="center"><span style="" class="sortkey">!B9964164810615 </span>36</td>
+<td align="right">39,490.09</td>
+<td align="right">102,279</td>
+<td align="right"><span style="display:none" class="sortkey">7001923200000099999</span>92.32%</td>
+<td align="right">3,284.84</td>
+<td align="right">8,508</td>
+<td align="right"><span style="display:none" class="sortkey">7000768000000000000</span>7.68%</td>
+</tr>
+<tr valign="top">
+<td><span class="flagicon"><img alt="" src="//upload.wikimedia.org/wikipedia/commons/thumb/9/9e/Flag_of_Tennessee.svg/23px-Flag_of_Tennessee.svg.png" width="23" height="14" class="thumbborder" srcset="//upload.wikimedia.org/wikipedia/commons/thumb/9/9e/Flag_of_Tennessee.svg/35px-Flag_of_Tennessee.svg.png 1.5x, //upload.wikimedia.org/wikipedia/commons/thumb/9/9e/Flag_of_Tennessee.svg/46px-Flag_of_Tennessee.svg.png 2x" data-file-width="500" data-file-height="300" /> </span><a href="/wiki/Tennessee" title="Tennessee">Tennessee</a></td>
+<td align="center"><span style="" class="sortkey">!B9964164810615 </span>36</td>
+<td align="right">42,144.25</td>
+<td align="right">109,153</td>
+<td align="center"><span style="" class="sortkey">!B9964736394753 </span>34</td>
+<td align="right">41,234.90</td>
+<td align="right">106,798</td>
+<td align="right"><span style="display:none" class="sortkey">7001978400000000000</span>97.84%</td>
+<td align="right">909.36</td>
+<td align="right">2,355</td>
+<td align="right"><span style="display:none" class="sortkey">7000216000000000000</span>2.16%</td>
+</tr>
+<tr valign="top">
+<td><span class="flagicon"><img alt="" src="//upload.wikimedia.org/wikipedia/commons/thumb/8/8d/Flag_of_Kentucky.svg/23px-Flag_of_Kentucky.svg.png" width="23" height="12" class="thumbborder" srcset="//upload.wikimedia.org/wikipedia/commons/thumb/8/8d/Flag_of_Kentucky.svg/35px-Flag_of_Kentucky.svg.png 1.5x, //upload.wikimedia.org/wikipedia/commons/thumb/8/8d/Flag_of_Kentucky.svg/46px-Flag_of_Kentucky.svg.png 2x" data-file-width="950" data-file-height="500" /> </span><a href="/wiki/Kentucky" title="Kentucky">Kentucky</a></td>
+<td align="center"><span style="" class="sortkey">!B9963890820873 </span>37</td>
+<td align="right">40,407.80</td>
+<td align="right">104,656</td>
+<td align="center"><span style="" class="sortkey">!B9963890820873 </span>37</td>
+<td align="right">39,486.34</td>
+<td align="right">102,269</td>
+<td align="right"><span style="display:none" class="sortkey">7001977200000000000</span>97.72%</td>
+<td align="right">921.46</td>
+<td align="right">2,387</td>
+<td align="right"><span style="display:none" class="sortkey">7000227990000099999</span>2.28%</td>
+</tr>
+<tr valign="top">
+<td><span class="flagicon"><img alt="" src="//upload.wikimedia.org/wikipedia/commons/thumb/a/ac/Flag_of_Indiana.svg/23px-Flag_of_Indiana.svg.png" width="23" height="15" class="thumbborder" srcset="//upload.wikimedia.org/wikipedia/commons/thumb/a/ac/Flag_of_Indiana.svg/35px-Flag_of_Indiana.svg.png 1.5x, //upload.wikimedia.org/wikipedia/commons/thumb/a/ac/Flag_of_Indiana.svg/45px-Flag_of_Indiana.svg.png 2x" data-file-width="750" data-file-height="500" /> </span><a href="/wiki/Indiana" title="Indiana">Indiana</a></td>
+<td align="center"><span style="" class="sortkey">!B9963624138402 </span>38</td>
+<td align="right">36,419.55</td>
+<td align="right">94,326</td>
+<td align="center"><span style="" class="sortkey">!B9963624138402 </span>38</td>
+<td align="right">35,826.11</td>
+<td align="right">92,789</td>
+<td align="right"><span style="display:none" class="sortkey">7001983700000000000</span>98.37%</td>
+<td align="right">593.44</td>
+<td align="right">1,537</td>
+<td align="right"><span style="display:none" class="sortkey">7000162990000000000</span>1.63%</td>
+</tr>
+<tr valign="top">
+<td><span class="flagicon"><img alt="" src="//upload.wikimedia.org/wikipedia/commons/thumb/3/35/Flag_of_Maine.svg/23px-Flag_of_Maine.svg.png" width="23" height="15" class="thumbborder" srcset="//upload.wikimedia.org/wikipedia/commons/thumb/3/35/Flag_of_Maine.svg/35px-Flag_of_Maine.svg.png 1.5x, //upload.wikimedia.org/wikipedia/commons/thumb/3/35/Flag_of_Maine.svg/45px-Flag_of_Maine.svg.png 2x" data-file-width="687" data-file-height="458" /> </span><a href="/wiki/Maine" title="Maine">Maine</a></td>
+<td align="center"><span style="" class="sortkey">!B9963364383538 </span>39</td>
+<td align="right">35,379.74</td>
+<td align="right">91,633</td>
+<td align="center"><span style="" class="sortkey">!B9963364383538 </span>39</td>
+<td align="right">30,842.92</td>
+<td align="right">79,883</td>
+<td align="right"><span style="display:none" class="sortkey">7001871800000000000</span>87.18%</td>
+<td align="right">4,536.82</td>
+<td align="right">11,750</td>
+<td align="right"><span style="display:none" class="sortkey">7001128200000000000</span>12.82%</td>
+</tr>
+<tr valign="top">
+<td><span class="flagicon"><img alt="" src="//upload.wikimedia.org/wikipedia/commons/thumb/6/69/Flag_of_South_Carolina.svg/23px-Flag_of_South_Carolina.svg.png" width="23" height="15" class="thumbborder" srcset="//upload.wikimedia.org/wikipedia/commons/thumb/6/69/Flag_of_South_Carolina.svg/35px-Flag_of_South_Carolina.svg.png 1.5x, //upload.wikimedia.org/wikipedia/commons/thumb/6/69/Flag_of_South_Carolina.svg/45px-Flag_of_South_Carolina.svg.png 2x" data-file-width="750" data-file-height="500" /> </span><a href="/wiki/South_Carolina" title="South Carolina">South Carolina</a></td>
+<td align="center"><span style="" class="sortkey">!B9963111205458 </span>40</td>
+<td align="right">32,020.49</td>
+<td align="right">82,933</td>
+<td align="center"><span style="" class="sortkey">!B9963111205458 </span>40</td>
+<td align="right">30,060.70</td>
+<td align="right">77,857</td>
+<td align="right"><span style="display:none" class="sortkey">7001938800000000000</span>93.88%</td>
+<td align="right">1,959.79</td>
+<td align="right">5,076</td>
+<td align="right"><span style="display:none" class="sortkey">7000612000000000000</span>6.12%</td>
+</tr>
+<tr valign="top">
+<td><span class="flagicon"><img alt="" src="//upload.wikimedia.org/wikipedia/commons/thumb/2/22/Flag_of_West_Virginia.svg/23px-Flag_of_West_Virginia.svg.png" width="23" height="12" class="thumbborder" srcset="//upload.wikimedia.org/wikipedia/commons/thumb/2/22/Flag_of_West_Virginia.svg/35px-Flag_of_West_Virginia.svg.png 1.5x, //upload.wikimedia.org/wikipedia/commons/thumb/2/22/Flag_of_West_Virginia.svg/46px-Flag_of_West_Virginia.svg.png 2x" data-file-width="760" data-file-height="400" /> </span><a href="/wiki/West_Virginia" title="West Virginia">West Virginia</a></td>
+<td align="center"><span style="" class="sortkey">!B9962864279332 </span>41</td>
+<td align="right">24,230.04</td>
+<td align="right">62,756</td>
+<td align="center"><span style="" class="sortkey">!B9962864279332 </span>41</td>
+<td align="right">24,038.21</td>
+<td align="right">62,259</td>
+<td align="right"><span style="display:none" class="sortkey">7001992100000099999</span>99.21%</td>
+<td align="right">191.83</td>
+<td align="right">497</td>
+<td align="right"><span style="display:none" class="sortkey">6999790000000000000</span>0.79%</td>
+</tr>
+<tr valign="top">
+<td><span class="flagicon"><img alt="" src="//upload.wikimedia.org/wikipedia/commons/thumb/a/a0/Flag_of_Maryland.svg/23px-Flag_of_Maryland.svg.png" width="23" height="15" class="thumbborder" srcset="//upload.wikimedia.org/wikipedia/commons/thumb/a/a0/Flag_of_Maryland.svg/35px-Flag_of_Maryland.svg.png 1.5x, //upload.wikimedia.org/wikipedia/commons/thumb/a/a0/Flag_of_Maryland.svg/45px-Flag_of_Maryland.svg.png 2x" data-file-width="750" data-file-height="500" /> </span><a href="/wiki/Maryland" title="Maryland">Maryland</a></td>
+<td align="center"><span style="" class="sortkey">!B9962623303817 </span>42</td>
+<td align="right">12,405.93</td>
+<td align="right">32,131</td>
+<td align="center"><span style="" class="sortkey">!B9962623303817 </span>42</td>
+<td align="right">9,707.24</td>
+<td align="right">25,142</td>
+<td align="right"><span style="display:none" class="sortkey">7001782500000000000</span>78.25%</td>
+<td align="right">2,698.69</td>
+<td align="right">6,990</td>
+<td align="right"><span style="display:none" class="sortkey">7001217500000000000</span>21.75%</td>
+</tr>
+<tr valign="top">
+<td><span class="flagicon"><img alt="" src="//upload.wikimedia.org/wikipedia/commons/thumb/e/ef/Flag_of_Hawaii.svg/23px-Flag_of_Hawaii.svg.png" width="23" height="12" class="thumbborder" srcset="//upload.wikimedia.org/wikipedia/commons/thumb/e/ef/Flag_of_Hawaii.svg/35px-Flag_of_Hawaii.svg.png 1.5x, //upload.wikimedia.org/wikipedia/commons/thumb/e/ef/Flag_of_Hawaii.svg/46px-Flag_of_Hawaii.svg.png 2x" data-file-width="1200" data-file-height="600" /> </span><a href="/wiki/Hawaii" title="Hawaii">Hawaii</a></td>
+<td align="center"><span style="" class="sortkey">!B9962387998843 </span>43</td>
+<td align="right">10,931.72</td>
+<td align="right">28,313</td>
+<td align="center"><span style="" class="sortkey">!B9961498523982 </span>47</td>
+<td align="right">6,422.63</td>
+<td align="right">16,635</td>
+<td align="right"><span style="display:none" class="sortkey">7001587500000000000</span>58.75%</td>
+<td align="right">4,509.09</td>
+<td align="right">11,678</td>
+<td align="right"><span style="display:none" class="sortkey">7001412500000000000</span>41.25%</td>
+</tr>
+<tr valign="top">
+<td><span class="flagicon"><img alt="" src="//upload.wikimedia.org/wikipedia/commons/thumb/f/f2/Flag_of_Massachusetts.svg/23px-Flag_of_Massachusetts.svg.png" width="23" height="14" class="thumbborder" srcset="//upload.wikimedia.org/wikipedia/commons/thumb/f/f2/Flag_of_Massachusetts.svg/35px-Flag_of_Massachusetts.svg.png 1.5x, //upload.wikimedia.org/wikipedia/commons/thumb/f/f2/Flag_of_Massachusetts.svg/46px-Flag_of_Massachusetts.svg.png 2x" data-file-width="1500" data-file-height="900" /> </span><a href="/wiki/Massachusetts" title="Massachusetts">Massachusetts</a></td>
+<td align="center"><span style="" class="sortkey">!B9962158103660 </span>44</td>
+<td align="right">10,554.39</td>
+<td align="right">27,336</td>
+<td align="center"><span style="" class="sortkey">!B9961933375102 </span>45</td>
+<td align="right">7,800.06</td>
+<td align="right">20,202</td>
+<td align="right"><span style="display:none" class="sortkey">7001739000000000000</span>73.90%</td>
+<td align="right">2,754.33</td>
+<td align="right">7,134</td>
+<td align="right"><span style="display:none" class="sortkey">7001261000000000000</span>26.10%</td>
+</tr>
+<tr valign="top">
+<td><span class="flagicon"><img alt="" src="//upload.wikimedia.org/wikipedia/commons/thumb/4/49/Flag_of_Vermont.svg/23px-Flag_of_Vermont.svg.png" width="23" height="14" class="thumbborder" srcset="//upload.wikimedia.org/wikipedia/commons/thumb/4/49/Flag_of_Vermont.svg/35px-Flag_of_Vermont.svg.png 1.5x, //upload.wikimedia.org/wikipedia/commons/thumb/4/49/Flag_of_Vermont.svg/46px-Flag_of_Vermont.svg.png 2x" data-file-width="750" data-file-height="450" /> </span><a href="/wiki/Vermont" title="Vermont">Vermont</a></td>
+<td align="center"><span style="" class="sortkey">!B9961933375102 </span>45</td>
+<td align="right">9,616.36</td>
+<td align="right">24,906</td>
+<td align="center"><span style="" class="sortkey">!B9962387998843 </span>43</td>
+<td align="right">9,216.66</td>
+<td align="right">23,871</td>
+<td align="right"><span style="display:none" class="sortkey">7001958400000000000</span>95.84%</td>
+<td align="right">399.71</td>
+<td align="right">1,035</td>
+<td align="right"><span style="display:none" class="sortkey">7000416000000000000</span>4.16%</td>
+</tr>
+<tr valign="top">
+<td><span class="flagicon"><img alt="" src="//upload.wikimedia.org/wikipedia/commons/thumb/2/28/Flag_of_New_Hampshire.svg/23px-Flag_of_New_Hampshire.svg.png" width="23" height="15" class="thumbborder" srcset="//upload.wikimedia.org/wikipedia/commons/thumb/2/28/Flag_of_New_Hampshire.svg/35px-Flag_of_New_Hampshire.svg.png 1.5x, //upload.wikimedia.org/wikipedia/commons/thumb/2/28/Flag_of_New_Hampshire.svg/45px-Flag_of_New_Hampshire.svg.png 2x" data-file-width="660" data-file-height="440" /> </span><a href="/wiki/New_Hampshire" title="New Hampshire">New Hampshire</a></td>
+<td align="center"><span style="" class="sortkey">!B9961713586035 </span>46</td>
+<td align="right">9,349.16</td>
+<td align="right">24,214</td>
+<td align="center"><span style="" class="sortkey">!B9962158103660 </span>44</td>
+<td align="right">8,952.65</td>
+<td align="right">23,187</td>
+<td align="right"><span style="display:none" class="sortkey">7001957600000000000</span>95.76%</td>
+<td align="right">396.51</td>
+<td align="right">1,027</td>
+<td align="right"><span style="display:none" class="sortkey">7000424000000000000</span>4.24%</td>
+</tr>
+<tr valign="top">
+<td><span class="flagicon"><img alt="" src="//upload.wikimedia.org/wikipedia/commons/thumb/9/92/Flag_of_New_Jersey.svg/23px-Flag_of_New_Jersey.svg.png" width="23" height="15" class="thumbborder" srcset="//upload.wikimedia.org/wikipedia/commons/thumb/9/92/Flag_of_New_Jersey.svg/35px-Flag_of_New_Jersey.svg.png 1.5x, //upload.wikimedia.org/wikipedia/commons/thumb/9/92/Flag_of_New_Jersey.svg/45px-Flag_of_New_Jersey.svg.png 2x" data-file-width="750" data-file-height="500" /> </span><a href="/wiki/New_Jersey" title="New Jersey">New Jersey</a></td>
+<td align="center"><span style="" class="sortkey">!B9961498523982 </span>47</td>
+<td align="right">8,722.58</td>
+<td align="right">22,591</td>
+<td align="center"><span style="" class="sortkey">!B9961713586035 </span>46</td>
+<td align="right">7,354.22</td>
+<td align="right">19,047</td>
+<td align="right"><span style="display:none" class="sortkey">7001843100000000000</span>84.31%</td>
+<td align="right">1,368.36</td>
+<td align="right">3,544</td>
+<td align="right"><span style="display:none" class="sortkey">7001156900000000000</span>15.69%</td>
+</tr>
+<tr valign="top">
+<td><span class="flagicon"><img alt="" src="//upload.wikimedia.org/wikipedia/commons/thumb/9/96/Flag_of_Connecticut.svg/20px-Flag_of_Connecticut.svg.png" width="20" height="15" class="thumbborder" srcset="//upload.wikimedia.org/wikipedia/commons/thumb/9/96/Flag_of_Connecticut.svg/30px-Flag_of_Connecticut.svg.png 1.5x, //upload.wikimedia.org/wikipedia/commons/thumb/9/96/Flag_of_Connecticut.svg/40px-Flag_of_Connecticut.svg.png 2x" data-file-width="594" data-file-height="459" /> </span><a href="/wiki/Connecticut" title="Connecticut">Connecticut</a></td>
+<td align="center"><span style="" class="sortkey">!B9961287989890 </span>48</td>
+<td align="right">5,543.41</td>
+<td align="right">14,357</td>
+<td align="center"><span style="" class="sortkey">!B9961287989890 </span>48</td>
+<td align="right">4,842.36</td>
+<td align="right">12,542</td>
+<td align="right"><span style="display:none" class="sortkey">7001873500000000000</span>87.35%</td>
+<td align="right">701.06</td>
+<td align="right">1,816</td>
+<td align="right"><span style="display:none" class="sortkey">7001126500000000000</span>12.65%</td>
+</tr>
+<tr valign="top">
+<td><span class="flagicon"><img alt="" src="//upload.wikimedia.org/wikipedia/commons/thumb/c/c6/Flag_of_Delaware.svg/23px-Flag_of_Delaware.svg.png" width="23" height="15" class="thumbborder" srcset="//upload.wikimedia.org/wikipedia/commons/thumb/c/c6/Flag_of_Delaware.svg/35px-Flag_of_Delaware.svg.png 1.5x, //upload.wikimedia.org/wikipedia/commons/thumb/c/c6/Flag_of_Delaware.svg/45px-Flag_of_Delaware.svg.png 2x" data-file-width="600" data-file-height="400" /> </span><a href="/wiki/Delaware" title="Delaware">Delaware</a></td>
+<td align="center"><span style="" class="sortkey">!B9961081797018 </span>49</td>
+<td align="right">2,488.72</td>
+<td align="right">6,446</td>
+<td align="center"><span style="" class="sortkey">!B9961081797018 </span>49</td>
+<td align="right">1,948.54</td>
+<td align="right">5,047</td>
+<td align="right"><span style="display:none" class="sortkey">7001782900000000000</span>78.29%</td>
+<td align="right">540.18</td>
+<td align="right">1,399</td>
+<td align="right"><span style="display:none" class="sortkey">7001217100000000000</span>21.71%</td>
+</tr>
+<tr valign="top">
+<td><span class="flagicon"><img alt="" src="//upload.wikimedia.org/wikipedia/commons/thumb/f/f3/Flag_of_Rhode_Island.svg/18px-Flag_of_Rhode_Island.svg.png" width="18" height="17" class="thumbborder" srcset="//upload.wikimedia.org/wikipedia/commons/thumb/f/f3/Flag_of_Rhode_Island.svg/28px-Flag_of_Rhode_Island.svg.png 1.5x, //upload.wikimedia.org/wikipedia/commons/thumb/f/f3/Flag_of_Rhode_Island.svg/36px-Flag_of_Rhode_Island.svg.png 2x" data-file-width="531" data-file-height="496" /> </span><a href="/wiki/Rhode_Island" title="Rhode Island">Rhode Island</a></td>
+<td align="center"><span style="" class="sortkey">!B9960879769945 </span>50</td>
+<td align="right">1,544.89</td>
+<td align="right">4,001</td>
+<td align="center"><span style="" class="sortkey">!B9960879769945 </span>50</td>
+<td align="right">1,033.81</td>
+<td align="right">2,678</td>
+<td align="right"><span style="display:none" class="sortkey">7001669200000000000</span>66.92%</td>
+<td align="right">511.07</td>
+<td align="right">1,324</td>
+<td align="right"><span style="display:none" class="sortkey">7001330800000000000</span>33.08%</td>
+</tr>
+<tr valign="top" style="background:lightgreen">
+<td><span class="flagicon"><img alt="" src="//upload.wikimedia.org/wikipedia/commons/thumb/3/3e/Flag_of_Washington%2C_D.C..svg/23px-Flag_of_Washington%2C_D.C..svg.png" width="23" height="12" class="thumbborder" srcset="//upload.wikimedia.org/wikipedia/commons/thumb/3/3e/Flag_of_Washington%2C_D.C..svg/35px-Flag_of_Washington%2C_D.C..svg.png 1.5x, //upload.wikimedia.org/wikipedia/commons/thumb/3/3e/Flag_of_Washington%2C_D.C..svg/46px-Flag_of_Washington%2C_D.C..svg.png 2x" data-file-width="800" data-file-height="400" /> </span><a href="/wiki/Washington,_D.C." title="Washington, D.C.">District of Columbia</a></td>
+<td align="center"></td>
+<td align="right">68.34</td>
+<td align="right">177</td>
+<td align="center"></td>
+<td align="right">61.05</td>
+<td align="right">158</td>
+<td align="right"><span style="display:none" class="sortkey">7001893300000000000</span>89.33%</td>
+<td align="right">7.29</td>
+<td align="right">19</td>
+<td align="right"><span style="display:none" class="sortkey">7001106700000000000</span>10.67%</td>
+</tr>
+<tr valign="top" style="background:beige">
+<td><span class="flagicon"><img alt="" src="//upload.wikimedia.org/wikipedia/commons/thumb/2/28/Flag_of_Puerto_Rico.svg/23px-Flag_of_Puerto_Rico.svg.png" width="23" height="15" class="thumbborder" srcset="//upload.wikimedia.org/wikipedia/commons/thumb/2/28/Flag_of_Puerto_Rico.svg/35px-Flag_of_Puerto_Rico.svg.png 1.5x, //upload.wikimedia.org/wikipedia/commons/thumb/2/28/Flag_of_Puerto_Rico.svg/45px-Flag_of_Puerto_Rico.svg.png 2x" data-file-width="900" data-file-height="600" /> </span><a href="/wiki/Puerto_Rico" title="Puerto Rico">Puerto Rico</a></td>
+<td align="center"></td>
+<td align="right">5,324.84</td>
+<td align="right">13,791</td>
+<td align="center"></td>
+<td align="right">3,423.78</td>
+<td align="right">8,868</td>
+<td align="right"><span style="display:none" class="sortkey">7001643000000000000</span>64.30%</td>
+<td align="right">1,901.07</td>
+<td align="right">4,924</td>
+<td align="right"><span style="display:none" class="sortkey">7001357000000000000</span>35.70%</td>
+</tr>
+<tr valign="top" style="background:beige">
+<td><span class="flagicon"><img alt="" src="//upload.wikimedia.org/wikipedia/commons/thumb/e/e0/Flag_of_the_Northern_Mariana_Islands.svg/23px-Flag_of_the_Northern_Mariana_Islands.svg.png" width="23" height="12" class="thumbborder" srcset="//upload.wikimedia.org/wikipedia/commons/thumb/e/e0/Flag_of_the_Northern_Mariana_Islands.svg/35px-Flag_of_the_Northern_Mariana_Islands.svg.png 1.5x, //upload.wikimedia.org/wikipedia/commons/thumb/e/e0/Flag_of_the_Northern_Mariana_Islands.svg/46px-Flag_of_the_Northern_Mariana_Islands.svg.png 2x" data-file-width="1000" data-file-height="500" /> </span><a href="/wiki/Northern_Mariana_Islands" title="Northern Mariana Islands">Northern Mariana Islands</a></td>
+<td align="center"></td>
+<td align="right">1,975.57</td>
+<td align="right">5,117</td>
+<td align="center"></td>
+<td align="right">182.33</td>
+<td align="right">472</td>
+<td align="right"><span style="display:none" class="sortkey">7000923000000000000</span>9.23%</td>
+<td align="right">1,793.24</td>
+<td align="right">4,644</td>
+<td align="right"><span style="display:none" class="sortkey">7001907700000000000</span>90.77%</td>
+</tr>
+<tr valign="top" style="background:beige">
+<td><span class="flagicon"><img alt="" src="//upload.wikimedia.org/wikipedia/commons/thumb/f/f8/Flag_of_the_United_States_Virgin_Islands.svg/23px-Flag_of_the_United_States_Virgin_Islands.svg.png" width="23" height="15" class="thumbborder" srcset="//upload.wikimedia.org/wikipedia/commons/thumb/f/f8/Flag_of_the_United_States_Virgin_Islands.svg/35px-Flag_of_the_United_States_Virgin_Islands.svg.png 1.5x, //upload.wikimedia.org/wikipedia/commons/thumb/f/f8/Flag_of_the_United_States_Virgin_Islands.svg/45px-Flag_of_the_United_States_Virgin_Islands.svg.png 2x" data-file-width="744" data-file-height="496" /> </span><a href="/wiki/United_States_Virgin_Islands" title="United States Virgin Islands">United States Virgin Islands</a></td>
+<td align="center"></td>
+<td align="right">732.93</td>
+<td align="right">1,898</td>
+<td align="center"></td>
+<td align="right">134.32</td>
+<td align="right">348</td>
+<td align="right"><span style="display:none" class="sortkey">7001183309999999999</span>18.33%</td>
+<td align="right">598.61</td>
+<td align="right">1,550</td>
+<td align="right"><span style="display:none" class="sortkey">7001816700000000000</span>81.67%</td>
+</tr>
+<tr valign="top" style="background:beige">
+<td><span class="flagicon"><img alt="" src="//upload.wikimedia.org/wikipedia/commons/thumb/8/87/Flag_of_American_Samoa.svg/23px-Flag_of_American_Samoa.svg.png" width="23" height="12" class="thumbborder" srcset="//upload.wikimedia.org/wikipedia/commons/thumb/8/87/Flag_of_American_Samoa.svg/35px-Flag_of_American_Samoa.svg.png 1.5x, //upload.wikimedia.org/wikipedia/commons/thumb/8/87/Flag_of_American_Samoa.svg/46px-Flag_of_American_Samoa.svg.png 2x" data-file-width="1000" data-file-height="500" /> </span><a href="/wiki/American_Samoa" title="American Samoa">American Samoa</a></td>
+<td align="center"></td>
+<td align="right">581.05</td>
+<td align="right">1,505</td>
+<td align="center"></td>
+<td align="right">76.46</td>
+<td align="right">198</td>
+<td align="right"><span style="display:none" class="sortkey">7001131600000000000</span>13.16%</td>
+<td align="right">504.60</td>
+<td align="right">1,307</td>
+<td align="right"><span style="display:none" class="sortkey">7001868400000000000</span>86.84%</td>
+</tr>
+<tr valign="top" style="background:beige">
+<td><span class="flagicon"><img alt="" src="//upload.wikimedia.org/wikipedia/commons/thumb/0/07/Flag_of_Guam.svg/23px-Flag_of_Guam.svg.png" width="23" height="12" class="thumbborder" srcset="//upload.wikimedia.org/wikipedia/commons/thumb/0/07/Flag_of_Guam.svg/35px-Flag_of_Guam.svg.png 1.5x, //upload.wikimedia.org/wikipedia/commons/thumb/0/07/Flag_of_Guam.svg/46px-Flag_of_Guam.svg.png 2x" data-file-width="820" data-file-height="440" /> </span><a href="/wiki/Guam" title="Guam">Guam</a></td>
+<td align="center"></td>
+<td align="right">570.62</td>
+<td align="right">1,478</td>
+<td align="center"></td>
+<td align="right">209.80</td>
+<td align="right">543</td>
+<td align="right"><span style="display:none" class="sortkey">7001367700000000000</span>36.77%</td>
+<td align="right">360.82</td>
+<td align="right">935</td>
+<td align="right"><span style="display:none" class="sortkey">7001632300000000000</span>63.23%</td>
+</tr>
+<tr valign="top" style="background:beige">
+<td><span class="flagicon"><a href="/wiki/United_States" title="United States"><img alt="United States" src="//upload.wikimedia.org/wikipedia/en/thumb/a/a4/Flag_of_the_United_States.svg/23px-Flag_of_the_United_States.svg.png" width="23" height="12" class="thumbborder" srcset="//upload.wikimedia.org/wikipedia/en/thumb/a/a4/Flag_of_the_United_States.svg/35px-Flag_of_the_United_States.svg.png 1.5x, //upload.wikimedia.org/wikipedia/en/thumb/a/a4/Flag_of_the_United_States.svg/46px-Flag_of_the_United_States.svg.png 2x" data-file-width="1235" data-file-height="650" /></a></span> <i><a href="/wiki/United_States_Minor_Outlying_Islands" title="United States Minor Outlying Islands">Minor Outlying Islands</a></i><sup id="cite_ref-2000census_3-0" class="reference"><a href="#cite_note-2000census-3"><span>[</span>3<span>]</span></a></sup><sup id="cite_ref-4" class="reference"><a href="#cite_note-4"><span>[</span>a<span>]</span></a></sup></td>
+<td align="center"></td>
+<td align="right">16.0</td>
+<td align="right">41</td>
+<td align="center"></td>
+<td align="right">16.0</td>
+<td align="right">41</td>
+<td align="center">—</td>
+<td align="center">—</td>
+<td align="center">—</td>
+<td align="center">—</td>
+</tr>
+<tr class="sortbottom" valign="top" style="background: #D0E6FF;">
+<td><span class="flagicon"><a href="/wiki/United_States" title="United States"><img alt="United States" src="//upload.wikimedia.org/wikipedia/en/thumb/a/a4/Flag_of_the_United_States.svg/23px-Flag_of_the_United_States.svg.png" width="23" height="12" class="thumbborder" srcset="//upload.wikimedia.org/wikipedia/en/thumb/a/a4/Flag_of_the_United_States.svg/35px-Flag_of_the_United_States.svg.png 1.5x, //upload.wikimedia.org/wikipedia/en/thumb/a/a4/Flag_of_the_United_States.svg/46px-Flag_of_the_United_States.svg.png 2x" data-file-width="1235" data-file-height="650" /></a></span> <b><a href="/wiki/Contiguous_United_States" title="Contiguous United States">Contiguous United States</a></b></td>
+<td align="center"><b>Total</b></td>
+<td align="right">3,120,426.47</td>
+<td align="right">8,081,867</td>
+<td align="center"></td>
+<td align="right">2,954,841.42</td>
+<td align="right">7,653,004</td>
+<td align="right"><span style="display:none" class="sortkey">7001946900000000000</span>94.69%</td>
+<td align="right">165,584.6</td>
+<td align="right">428,862</td>
+<td align="right"><span style="display:none" class="sortkey">7000530990000099999</span>5.31%</td>
+</tr>
+<tr class="sortbottom" valign="top" style="background: #D0E6FF;">
+<td><span class="flagicon"><a href="/wiki/United_States" title="United States"><img alt="United States" src="//upload.wikimedia.org/wikipedia/en/thumb/a/a4/Flag_of_the_United_States.svg/23px-Flag_of_the_United_States.svg.png" width="23" height="12" class="thumbborder" srcset="//upload.wikimedia.org/wikipedia/en/thumb/a/a4/Flag_of_the_United_States.svg/35px-Flag_of_the_United_States.svg.png 1.5x, //upload.wikimedia.org/wikipedia/en/thumb/a/a4/Flag_of_the_United_States.svg/46px-Flag_of_the_United_States.svg.png 2x" data-file-width="1235" data-file-height="650" /></a></span> <b>50 states and D.C.</b></td>
+<td align="center"><b>Total</b></td>
+<td align="right">3,796,742.23</td>
+<td align="right">9,833,517</td>
+<td align="center"></td>
+<td align="right">3,531,905.43</td>
+<td align="right">9,147,593</td>
+<td align="right"><span style="display:none" class="sortkey">7001930200000000000</span>93.02%</td>
+<td align="right">264,836.79</td>
+<td align="right">685,924</td>
+<td align="right"><span style="display:none" class="sortkey">7000698000000000000</span>6.98%</td>
+</tr>
+<tr class="sortbottom" valign="top" style="background: #D0E6FF;">
+<td><span class="flagicon"><a href="/wiki/United_States" title="United States"><img alt="United States" src="//upload.wikimedia.org/wikipedia/en/thumb/a/a4/Flag_of_the_United_States.svg/23px-Flag_of_the_United_States.svg.png" width="23" height="12" class="thumbborder" srcset="//upload.wikimedia.org/wikipedia/en/thumb/a/a4/Flag_of_the_United_States.svg/35px-Flag_of_the_United_States.svg.png 1.5x, //upload.wikimedia.org/wikipedia/en/thumb/a/a4/Flag_of_the_United_States.svg/46px-Flag_of_the_United_States.svg.png 2x" data-file-width="1235" data-file-height="650" /></a></span> <b>All U.S. territory</b></td>
+<td align="center"><b>Total</b></td>
+<td align="right">3,805,943.26</td>
+<td align="right">9,857,348</td>
+<td align="center"></td>
+<td align="right">3,535,948.12</td>
+<td align="right">9,158,064</td>
+<td align="right"><span style="display:none" class="sortkey">7001929100000000000</span>92.91%</td>
+<td align="right">269,995.13</td>
+<td align="right">699,284</td>
+<td align="right"><span style="display:none" class="sortkey">7000709000000000000</span>7.09%</td>
+</tr>
+</table>
+<h2><span class="mw-headline" id="Area_by_division">Area by division</span><span class="mw-editsection"><span class="mw-editsection-bracket">[</span><a href="/w/index.php?title=List_of_U.S._states_and_territories_by_area&action=edit&section=2" title="Edit section: Area by division">edit</a><span class="mw-editsection-bracket">]</span></span></h2>
+<table class="wikitable sortable">
+<tr>
+<th></th>
+<th colspan="3">Total area<sup id="cite_ref-2010census_2-4" class="reference"><a href="#cite_note-2010census-2"><span>[</span>2<span>]</span></a></sup></th>
+<th colspan="4">Land area<sup id="cite_ref-2010census_2-5" class="reference"><a href="#cite_note-2010census-2"><span>[</span>2<span>]</span></a></sup></th>
+<th colspan="4">Water<sup id="cite_ref-2010census_2-6" class="reference"><a href="#cite_note-2010census-2"><span>[</span>2<span>]</span></a></sup></th>
+</tr>
+<tr>
+<th>Division</th>
+<th>Rank</th>
+<th>sq mi</th>
+<th>km²</th>
+<th>Rank</th>
+<th>sq mi</th>
+<th>km²</th>
+<th> % land</th>
+<th>Rank</th>
+<th>sq mi</th>
+<th>km²</th>
+<th> % water</th>
+</tr>
+<tr>
+<td><a href="/wiki/East_North_Central_States" title="East North Central States">East North Central</a></td>
+<td align="center"><span style="" class="sortkey">!B9983905620875 </span>5</td>
+<td align="right">301,368.57</td>
+<td align="right">780,541</td>
+<td align="center"><span style="" class="sortkey">!B9982082405307 </span>6</td>
+<td align="right">242,902.44</td>
+<td align="right">629,114</td>
+<td></td>
+<td align="center"><span style="" class="sortkey">!B9993068528194 </span>2</td>
+<td align="right">58,466.13</td>
+<td align="right">151,427</td>
+<td></td>
+</tr>
+<tr>
+<td><a href="/wiki/East_South_Central_States" title="East South Central States">East South Central</a></td>
+<td align="center"><span style="" class="sortkey">!B9980540898509 </span>7</td>
+<td align="right">183,403.89</td>
+<td align="right">475,014</td>
+<td align="center"><span style="" class="sortkey">!B9980540898509 </span>7</td>
+<td align="right">178,289.83</td>
+<td align="right">461,769</td>
+<td></td>
+<td align="center"><span style="" class="sortkey">!B9978027754226 </span>9</td>
+<td align="right">5,114.60</td>
+<td align="right">13,247</td>
+<td></td>
+</tr>
+<tr>
+<td><a href="/wiki/Mid-Atlantic_States" title="Mid-Atlantic States" class="mw-redirect">Middle Atlantic</a></td>
+<td align="center"><span style="" class="sortkey">!B9979205584583 </span>8</td>
+<td align="right">109,331.89</td>
+<td align="right">283,168</td>
+<td align="center"><span style="" class="sortkey">!B9979205584583 </span>8</td>
+<td align="right">99,223.32</td>
+<td align="right">256,987</td>
+<td></td>
+<td align="center"><span style="" class="sortkey">!B9982082405307 </span>6</td>
+<td align="right">10,108.57</td>
+<td align="right">26,181</td>
+<td></td>
+</tr>
+<tr>
+<td><a href="/wiki/Mountain_States" title="Mountain States">Mountain</a></td>
+<td align="center"><span style="" class="sortkey">!B9993068528194 </span>2</td>
+<td align="right">863,564.63</td>
+<td align="right">2,236,622</td>
+<td align="center"><span style="" class="sortkey">!B9993068528194 </span>2</td>
+<td align="right">855,766.98</td>
+<td align="right">2,216,426</td>
+<td></td>
+<td align="center"><span style="" class="sortkey">!B9979205584583 </span>8</td>
+<td align="right">7,797.65</td>
+<td align="right">20,196</td>
+<td></td>
+</tr>
+<tr>
+<td><a href="/wiki/New_England" title="New England">New England</a></td>
+<td align="center"><span style="" class="sortkey">!B9978027754226 </span>9</td>
+<td align="right">71,987.96</td>
+<td align="right">186,448</td>
+<td align="center"><span style="" class="sortkey">!B9978027754226 </span>9</td>
+<td align="right">62,668.46</td>
+<td align="right">162,311</td>
+<td></td>
+<td align="center"><span style="" class="sortkey">!B9980540898509 </span>7</td>
+<td align="right">9,299.50</td>
+<td align="right">24,086</td>
+<td></td>
+</tr>
+<tr>
+<td><a href="/wiki/Pacific_States" title="Pacific States">Pacific</a></td>
+<td align="center"><span style="" class="sortkey">!C </span>1</td>
+<td align="right">1,009,687.00</td>
+<td align="right">2,615,077</td>
+<td align="center"><span style="" class="sortkey">!C </span>1</td>
+<td align="right">895,286.33</td>
+<td align="right">2,318,781</td>
+<td></td>
+<td align="center"><span style="" class="sortkey">!C </span>1</td>
+<td align="right">114,400.67</td>
+<td align="right">296,296</td>
+<td></td>
+</tr>
+<tr>
+<td><a href="/wiki/South_Atlantic_States" title="South Atlantic States">South Atlantic</a></td>
+<td align="center"><span style="" class="sortkey">!B9982082405307 </span>6</td>
+<td align="right">292,990.46</td>
+<td align="right">758,842</td>
+<td align="center"><span style="" class="sortkey">!B9983905620875 </span>5</td>
+<td align="right">265,061.97</td>
+<td align="right">686,507</td>
+<td></td>
+<td align="center"><span style="" class="sortkey">!B9989013877113 </span>3</td>
+<td align="right">27,928.49</td>
+<td align="right">72,334</td>
+<td></td>
+</tr>
+<tr>
+<td><a href="/wiki/West_North_Central_States" title="West North Central States">West North Central</a></td>
+<td align="center"><span style="" class="sortkey">!B9989013877113 </span>3</td>
+<td align="right">520,355.80</td>
+<td align="right">1,347,715</td>
+<td align="center"><span style="" class="sortkey">!B9989013877113 </span>3</td>
+<td align="right">507,620.08</td>
+<td align="right">1,314,730</td>
+<td></td>
+<td align="center"><span style="" class="sortkey">!B9983905620875 </span>5</td>
+<td align="right">12,735.72</td>
+<td align="right">32,985</td>
+<td></td>
+</tr>
+<tr>
+<td><a href="/wiki/West_South_Central_States" title="West South Central States">West South Central</a></td>
+<td align="center"><span style="" class="sortkey">!B9986137056388 </span>4</td>
+<td align="right">444,052.01</td>
+<td align="right">1,150,089</td>
+<td align="center"><span style="" class="sortkey">!B9986137056388 </span>4</td>
+<td align="right">425,066.01</td>
+<td align="right">1,100,916</td>
+<td></td>
+<td align="center"><span style="" class="sortkey">!B9986137056388 </span>4</td>
+<td align="right">18,986.00</td>
+<td align="right">49,174</td>
+<td></td>
+</tr>
+</table>
+<h2><span class="mw-headline" id="Area_by_region">Area by region</span><span class="mw-editsection"><span class="mw-editsection-bracket">[</span><a href="/w/index.php?title=List_of_U.S._states_and_territories_by_area&action=edit&section=3" title="Edit section: Area by region">edit</a><span class="mw-editsection-bracket">]</span></span></h2>
+<table class="wikitable sortable">
+<tr>
+<th></th>
+<th colspan="3">Total area<sup id="cite_ref-2010census_2-7" class="reference"><a href="#cite_note-2010census-2"><span>[</span>2<span>]</span></a></sup></th>
+<th colspan="4">Land area<sup id="cite_ref-2010census_2-8" class="reference"><a href="#cite_note-2010census-2"><span>[</span>2<span>]</span></a></sup></th>
+<th colspan="4">Water<sup id="cite_ref-2010census_2-9" class="reference"><a href="#cite_note-2010census-2"><span>[</span>2<span>]</span></a></sup></th>
+</tr>
+<tr>
+<th>Region</th>
+<th>Rank</th>
+<th>sq mi</th>
+<th>km²</th>
+<th>Rank</th>
+<th>sq mi</th>
+<th>km²</th>
+<th> % land</th>
+<th>Rank</th>
+<th>sq mi</th>
+<th>km²</th>
+<th> % water</th>
+</tr>
+<tr>
+<td><a href="/wiki/Midwestern_United_States" title="Midwestern United States">Midwest</a></td>
+<td align="center"><span style="" class="sortkey">!B9989013877113 </span>3</td>
+<td align="right">821,724.38</td>
+<td align="right">2,128,256</td>
+<td align="center"><span style="" class="sortkey">!B9989013877113 </span>3</td>
+<td align="right">750,522.52</td>
+<td align="right">1,943,844</td>
+<td></td>
+<td align="center"><span style="" class="sortkey">!B9993068528194 </span>2</td>
+<td align="right">71,201.86</td>
+<td align="right">184,412</td>
+<td></td>
+</tr>
+<tr>
+<td><a href="/wiki/Northeastern_United_States" title="Northeastern United States">Northeast</a></td>
+<td align="center"><span style="" class="sortkey">!B9986137056388 </span>4</td>
+<td align="right">181,319.85</td>
+<td align="right">469,616</td>
+<td align="center"><span style="" class="sortkey">!B9986137056388 </span>4</td>
+<td align="right">161,911.78</td>
+<td align="right">419,350</td>
+<td></td>
+<td align="center"><span style="" class="sortkey">!B9986137056388 </span>4</td>
+<td align="right">19,408.07</td>
+<td align="right">50,267</td>
+<td></td>
+</tr>
+<tr>
+<td><a href="/wiki/Southern_United_States" title="Southern United States">South</a></td>
+<td align="center"><span style="" class="sortkey">!B9993068528194 </span>2</td>
+<td align="right">920,446.37</td>
+<td align="right">2,383,945</td>
+<td align="center"><span style="" class="sortkey">!B9993068528194 </span>2</td>
+<td align="right">868,417.82</td>
+<td align="right">2,249,192</td>
+<td></td>
+<td align="center"><span style="" class="sortkey">!B9989013877113 </span>3</td>
+<td align="right">52,028.55</td>
+<td align="right">134,753</td>
+<td></td>
+</tr>
+<tr>
+<td><a href="/wiki/Western_United_States" title="Western United States">West</a></td>
+<td align="center"><span style="" class="sortkey">!C </span>1</td>
+<td align="right">1,873,251.63</td>
+<td align="right">4,851,699</td>
+<td align="center"><span style="" class="sortkey">!C </span>1</td>
+<td align="right">1,751,053.31</td>
+<td align="right">4,535,207</td>
+<td></td>
+<td align="center"><span style="" class="sortkey">!C </span>1</td>
+<td align="right">122,198.32</td>
+<td align="right">316,492</td>
+<td></td>
+</tr>
+</table>
+<ul class="gallery mw-gallery-traditional">
+<li class="gallerybox" style="width: 155px">
+<div style="width: 155px">
+<div class="thumb" style="width: 150px;">
+<div style="margin:28.5px auto;"><a href="/wiki/File:US_States_by_Total_Area.svg" class="image"><img alt="" src="//upload.wikimedia.org/wikipedia/commons/thumb/b/bb/US_States_by_Total_Area.svg/120px-US_States_by_Total_Area.svg.png" width="120" height="93" data-file-width="932" data-file-height="723" /></a></div>
+</div>
+<div class="gallerytext">
+<p>U.S. states by total area</p>
+</div>
+</div>
+</li>
+<li class="gallerybox" style="width: 155px">
+<div style="width: 155px">
+<div class="thumb" style="width: 150px;">
+<div style="margin:28.5px auto;"><a href="/wiki/File:US_States_by_Total_Land_Area.svg" class="image"><img alt="" src="//upload.wikimedia.org/wikipedia/commons/thumb/5/5b/US_States_by_Total_Land_Area.svg/120px-US_States_by_Total_Land_Area.svg.png" width="120" height="93" data-file-width="932" data-file-height="723" /></a></div>
+</div>
+<div class="gallerytext">
+<p>U.S. states by land area</p>
+</div>
+</div>
+</li>
+<li class="gallerybox" style="width: 155px">
+<div style="width: 155px">
+<div class="thumb" style="width: 150px;">
+<div style="margin:27.5px auto;"><a href="/wiki/File:US_States_by_Water_Area.svg" class="image"><img alt="" src="//upload.wikimedia.org/wikipedia/commons/thumb/7/70/US_States_by_Water_Area.svg/120px-US_States_by_Water_Area.svg.png" width="120" height="95" data-file-width="966" data-file-height="764" /></a></div>
+</div>
+<div class="gallerytext">
+<p>U.S. states by water area</p>
+</div>
+</div>
+</li>
+<li class="gallerybox" style="width: 155px">
+<div style="width: 155px">
+<div class="thumb" style="width: 150px;">
+<div style="margin:27.5px auto;"><a href="/wiki/File:US_States_by_Water_Percentage.svg" class="image"><img alt="" src="//upload.wikimedia.org/wikipedia/commons/thumb/2/2d/US_States_by_Water_Percentage.svg/120px-US_States_by_Water_Percentage.svg.png" width="120" height="95" data-file-width="966" data-file-height="764" /></a></div>
+</div>
+<div class="gallerytext">
+<p>U.S. states by water percentage</p>
+</div>
+</div>
+</li>
+<li class="gallerybox" style="width: 155px">
+<div style="width: 155px">
+<div class="thumb" style="width: 150px;">
+<div style="margin:36px auto;"><a href="/wiki/File:Map_of_USA_AK_full.png" class="image"><img alt="" src="//upload.wikimedia.org/wikipedia/commons/thumb/8/8c/Map_of_USA_AK_full.png/120px-Map_of_USA_AK_full.png" width="120" height="78" data-file-width="284" data-file-height="184" /></a></div>
+</div>
+<div class="gallerytext">
+<p>Alaska is the largest state by total area, land area, and water area</p>
+</div>
+</div>
+</li>
+<li class="gallerybox" style="width: 155px">
+<div style="width: 155px">
+<div class="thumb" style="width: 150px;">
+<div style="margin:37px auto;"><a href="/wiki/File:Alaska-Size.png" class="image"><img alt="" src="//upload.wikimedia.org/wikipedia/commons/thumb/4/4e/Alaska-Size.png/120px-Alaska-Size.png" width="120" height="76" data-file-width="764" data-file-height="485" /></a></div>
+</div>
+<div class="gallerytext">
+<p>The area of <a href="/wiki/Alaska" title="Alaska">Alaska</a> is <span style="display:none" class="sortkey">7001180000000000000</span>18% of the area of the <a href="/wiki/United_States" title="United States">United States</a> and <span style="display:none" class="sortkey">7001210000000000000</span>21% of the area of the <a href="/wiki/Contiguous_United_States" title="Contiguous United States">contiguous United States</a></p>
+</div>
+</div>
+</li>
+<li class="gallerybox" style="width: 155px">
+<div style="width: 155px">
+<div class="thumb" style="width: 150px;">
+<div style="margin:36px auto;"><a href="/wiki/File:Map_of_USA_TX.svg" class="image"><img alt="" src="//upload.wikimedia.org/wikipedia/commons/thumb/c/cc/Map_of_USA_TX.svg/120px-Map_of_USA_TX.svg.png" width="120" height="78" data-file-width="286" data-file-height="186" /></a></div>
+</div>
+<div class="gallerytext">
+<p>The second largest state, Texas, is only <span style="display:none" class="sortkey">7001400000000000000</span>40% of the total area of the largest state, Alaska</p>
+</div>
+</div>
+</li>
+<li class="gallerybox" style="width: 155px">
+<div style="width: 155px">
+<div class="thumb" style="width: 150px;">
+<div style="margin:36px auto;"><a href="/wiki/File:Map_of_USA_highlighting_Rhode_Island.png" class="image"><img alt="" src="//upload.wikimedia.org/wikipedia/commons/thumb/9/91/Map_of_USA_highlighting_Rhode_Island.png/120px-Map_of_USA_highlighting_Rhode_Island.png" width="120" height="78" data-file-width="280" data-file-height="183" /></a></div>
+</div>
+<div class="gallerytext">
+<p>Rhode Island is the smallest state by total area and land area</p>
+</div>
+</div>
+</li>
+<li class="gallerybox" style="width: 155px">
+<div style="width: 155px">
+<div class="thumb" style="width: 150px;">
+<div style="margin:15px auto;"><a href="/wiki/File:Map_of_California_highlighting_San_Bernardino_County.svg" class="image"><img alt="" src="//upload.wikimedia.org/wikipedia/commons/thumb/7/77/Map_of_California_highlighting_San_Bernardino_County.svg/104px-Map_of_California_highlighting_San_Bernardino_County.svg.png" width="104" height="120" data-file-width="9164" data-file-height="10536" /></a></div>
+</div>
+<div class="gallerytext">
+<p><a href="/wiki/San_Bernardino_County" title="San Bernardino County" class="mw-redirect">San Bernardino County</a> is the largest <a href="/wiki/County" title="County">county</a> in the U.S. and is larger than each of the nine smallest states, including larger than the four smallest states combined. (Although some of Alaska's boroughs and census areas are larger, they are not true counties.)</p>
+</div>
+</div>
+</li>
+<li class="gallerybox" style="width: 155px">
+<div style="width: 155px">
+<div class="thumb" style="width: 150px;">
+<div style="margin:25.5px auto;"><a href="/wiki/File:Michigan.svg" class="image"><img alt="" src="//upload.wikimedia.org/wikipedia/commons/thumb/9/93/Michigan.svg/120px-Michigan.svg.png" width="120" height="99" data-file-width="741" data-file-height="612" /></a></div>
+</div>
+<div class="gallerytext">
+<p>Michigan is second (after Alaska) in water area, and first in water percentage</p>
+</div>
+</div>
+</li>
+<li class="gallerybox" style="width: 155px">
+<div style="width: 155px">
+<div class="thumb" style="width: 150px;">
+<div style="margin:22.5px auto;"><a href="/wiki/File:STS-95_Florida_From_Space.jpg" class="image"><img alt="" src="//upload.wikimedia.org/wikipedia/commons/thumb/2/2e/STS-95_Florida_From_Space.jpg/120px-STS-95_Florida_From_Space.jpg" width="120" height="105" data-file-width="3000" data-file-height="2624" /></a></div>
+</div>
+<div class="gallerytext">
+<p>Florida is mostly a peninsula, and has the third largest water area and seventh largest water area percentage</p>
+</div>
+</div>
+</li>
+</ul>
+<h2><span class="mw-headline" id="See_also">See also</span><span class="mw-editsection"><span class="mw-editsection-bracket">[</span><a href="/w/index.php?title=List_of_U.S._states_and_territories_by_area&action=edit&section=4" title="Edit section: See also">edit</a><span class="mw-editsection-bracket">]</span></span></h2>
+<div class="noprint tright portal" style="border:solid #aaa 1px;margin:0.5em 0 0.5em 1em;">
+<table style="background:#f9f9f9;font-size:85%;line-height:110%;max-width:175px;">
+<tr valign="middle">
+<td style="text-align:center;"><a href="/wiki/File:Flag_of_the_United_States.svg" class="image"><img alt="Portal icon" src="//upload.wikimedia.org/wikipedia/en/thumb/a/a4/Flag_of_the_United_States.svg/32px-Flag_of_the_United_States.svg.png" width="32" height="17" class="thumbborder" srcset="//upload.wikimedia.org/wikipedia/en/thumb/a/a4/Flag_of_the_United_States.svg/48px-Flag_of_the_United_States.svg.png 1.5x, //upload.wikimedia.org/wikipedia/en/thumb/a/a4/Flag_of_the_United_States.svg/64px-Flag_of_the_United_States.svg.png 2x" data-file-width="1235" data-file-height="650" /></a></td>
+<td style="padding:0 0.2em;vertical-align:middle;font-style:italic;font-weight:bold;"><a href="/wiki/Portal:United_States" title="Portal:United States">United States portal</a></td>
+</tr>
+</table>
+</div>
+<ul>
+<li><a href="/wiki/List_of_the_largest_country_subdivisions_by_area" title="List of the largest country subdivisions by area" class="mw-redirect">List of the largest country subdivisions by area</a></li>
+<li><a href="/wiki/List_of_political_and_geographic_subdivisions_by_total_area" title="List of political and geographic subdivisions by total area">List of political and geographic subdivisions by total area</a></li>
+</ul>
+<h2><span class="mw-headline" id="Notes">Notes</span><span class="mw-editsection"><span class="mw-editsection-bracket">[</span><a href="/w/index.php?title=List_of_U.S._states_and_territories_by_area&action=edit&section=5" title="Edit section: Notes">edit</a><span class="mw-editsection-bracket">]</span></span></h2>
+<div class="reflist columns references-column-width" style="-moz-column-width: 30em; -webkit-column-width: 30em; column-width: 30em; list-style-type: lower-alpha;">
+<ol class="references">
+<li id="cite_note-4"><span class="mw-cite-backlink"><b><a href="#cite_ref-4">^</a></b></span> <span class="reference-text">Areas were not published in the 2010 census, unlike previous years, as the U.S. Census Bureau no longer collects data on the Minor Outlying Islands.<sup id="cite_ref-2010census_2-3" class="reference"><a href="#cite_note-2010census-2"><span>[</span>2<span>]</span></a></sup></span></li>
+</ol>
+</div>
+<h2><span class="mw-headline" id="References">References</span><span class="mw-editsection"><span class="mw-editsection-bracket">[</span><a href="/w/index.php?title=List_of_U.S._states_and_territories_by_area&action=edit&section=6" title="Edit section: References">edit</a><span class="mw-editsection-bracket">]</span></span></h2>
+<div class="reflist columns references-column-width" style="-moz-column-width: 30em; -webkit-column-width: 30em; column-width: 30em; list-style-type: decimal;">
+<ol class="references">
+<li id="cite_note-1"><span class="mw-cite-backlink"><b><a href="#cite_ref-1">^</a></b></span> <span class="reference-text"><a rel="nofollow" class="external text" href="http://www.census.gov/geo/www/tiger/glossry2.pdf">Census 2000 Geographic Terms and Concepts</a>, Census 2000 Geography Glossary, U.S. Census Bureau. Accessed 2007-07-10</span></li>
+<li id="cite_note-2010census-2"><span class="mw-cite-backlink">^ <a href="#cite_ref-2010census_2-0"><sup><i><b>a</b></i></sup></a> <a href="#cite_ref-2010census_2-1"><sup><i><b>b</b></i></sup></a> <a href="#cite_ref-2010census_2-2"><sup><i><b>c</b></i></sup></a> <a href="#cite_ref-2010census_2-3"><sup><i><b>d</b></i></sup></a> <a href="#cite_ref-2010census_2-4"><sup><i><b>e</b></i></sup></a> <a href="#cite_ref-2010census_2-5"><sup><i><b>f</b></i></sup></a> <a href="#cite_ref-2010census_2-6"><sup><i><b>g</b></i></sup></a> <a href="#cite_ref-2010census_2-7"><sup><i><b>h</b></i></sup></a> <a href="#cite_ref-2010census_2-8"><sup><i><b>i</b></i></sup></a> <a href="#cite_ref-2010census_2-9"><sup><i><b>j</b></i></sup></a></span> <span class="reference-text"><span class="citation web"><a rel="nofollow" class="external text" href="http://www.census.gov/prod/cen2010/cph-2-1.pdf">"United States Summary: 2010, Population and Housing Unit Counts, 2010 Census of Population and Housing"</a> (PDF). <a href="/wiki/United_States_Census_Bureau" title="United States Census Bureau">United States Census Bureau</a>. September 2012. pp. V–2, 1 & 41 (Tables 1 & 18)<span class="reference-accessdate">. Retrieved February 7, 2014</span>.</span><span title="ctx_ver=Z39.88-2004&rfr_id=info%3Asid%2Fen.wikipedia.org%3AList+of+U.S.+states+and+territories+by+area&rft.btitle=United+States+Summary%3A+2010%2C+Population+and+Housing+Unit+Counts%2C+2010+Census+of+Population+and+Housing&rft.date=September+2012&rft.genre=book&rft_id=http%3A%2F%2Fwww.census.gov%2Fprod%2Fcen2010%2Fcph-2-1.pdf&rft.pages=V-2%2C+1+%26+41+%28Tables+1+%26+18%29&rft.pub=United+States+Census+Bureau&rft_val_fmt=info%3Aofi%2Ffmt%3Akev%3Amtx%3Abook" class="Z3988"><span style="display:none;"> </span></span></span></li>
+<li id="cite_note-2000census-3"><span class="mw-cite-backlink"><b><a href="#cite_ref-2000census_3-0">^</a></b></span> <span class="reference-text"><span class="citation web"><a rel="nofollow" class="external text" href="http://www.census.gov/prod/cen2000/phc3-us-pt1.pdf">"United States Summary: 2010, Population and Housing Unit Counts, 2000 Census of Population and Housing"</a> (PDF). United States Census Bureau. April 2004. p. 1 (Table 1)<span class="reference-accessdate">. Retrieved February 10, 2014</span>.</span><span title="ctx_ver=Z39.88-2004&rfr_id=info%3Asid%2Fen.wikipedia.org%3AList+of+U.S.+states+and+territories+by+area&rft.btitle=United+States+Summary%3A+2010%2C+Population+and+Housing+Unit+Counts%2C+2000+Census+of+Population+and+Housing&rft.date=April+2004&rft.genre=book&rft_id=http%3A%2F%2Fwww.census.gov%2Fprod%2Fcen2000%2Fphc3-us-pt1.pdf&rft.pages=1+%28Table+1%29&rft.pub=United+States+Census+Bureau&rft_val_fmt=info%3Aofi%2Ffmt%3Akev%3Amtx%3Abook" class="Z3988"><span style="display:none;"> </span></span></span></li>
+</ol>
+</div>
+<h2><span class="mw-headline" id="External_links">External links</span><span class="mw-editsection"><span class="mw-editsection-bracket">[</span><a href="/w/index.php?title=List_of_U.S._states_and_territories_by_area&action=edit&section=7" title="Edit section: External links">edit</a><span class="mw-editsection-bracket">]</span></span></h2>
+<table class="metadata plainlinks mbox-small" style="padding:0.25em 0.5em 0.5em 0.75em;border:1px solid #aaa;background:#f9f9f9;">
+<tr style="height:25px;">
+<td colspan="2" style="padding-bottom:0.5em;border-bottom:1px solid #aaa;margin:auto;text-align:center;">Find more about <b>area</b> at Wikipedia's <a href="/wiki/Wikipedia:Wikimedia_sister_projects" title="Wikipedia:Wikimedia sister projects">sister projects</a></td>
+</tr>
+<tr style="height:25px;">
+<td style="padding-top:0.75em;"><a href="//en.wiktionary.org/wiki/Special:Search/area" title="Search Wiktionary"><img alt="" src="//upload.wikimedia.org/wikipedia/commons/thumb/f/f8/Wiktionary-logo-en.svg/23px-Wiktionary-logo-en.svg.png" width="23" height="25" srcset="//upload.wikimedia.org/wikipedia/commons/thumb/f/f8/Wiktionary-logo-en.svg/35px-Wiktionary-logo-en.svg.png 1.5x, //upload.wikimedia.org/wikipedia/commons/thumb/f/f8/Wiktionary-logo-en.svg/46px-Wiktionary-logo-en.svg.png 2x" data-file-width="1000" data-file-height="1089" /></a></td>
+<td style="padding-top:0.75em;"><a href="//en.wiktionary.org/wiki/Special:Search/area" class="extiw" title="wikt:Special:Search/area">Definitions and translations</a> from Wiktionary</td>
+</tr>
+<tr style="height:25px;">
+<td><a href="//commons.wikimedia.org/wiki/Special:Search/area" title="Search Commons"><img alt="" src="//upload.wikimedia.org/wikipedia/en/thumb/4/4a/Commons-logo.svg/18px-Commons-logo.svg.png" width="18" height="25" srcset="//upload.wikimedia.org/wikipedia/en/thumb/4/4a/Commons-logo.svg/28px-Commons-logo.svg.png 1.5x, //upload.wikimedia.org/wikipedia/en/thumb/4/4a/Commons-logo.svg/37px-Commons-logo.svg.png 2x" data-file-width="1024" data-file-height="1376" /></a></td>
+<td><a href="//commons.wikimedia.org/wiki/Special:Search/area" class="extiw" title="commons:Special:Search/area">Media</a> from Commons</td>
+</tr>
+<tr style="height:25px;">
+<td><a href="//en.wikiquote.org/wiki/Special:Search/area" title="Search Wikiquote"><img alt="" src="//upload.wikimedia.org/wikipedia/commons/thumb/f/fa/Wikiquote-logo.svg/21px-Wikiquote-logo.svg.png" width="21" height="25" srcset="//upload.wikimedia.org/wikipedia/commons/thumb/f/fa/Wikiquote-logo.svg/32px-Wikiquote-logo.svg.png 1.5x, //upload.wikimedia.org/wikipedia/commons/thumb/f/fa/Wikiquote-logo.svg/42px-Wikiquote-logo.svg.png 2x" data-file-width="300" data-file-height="355" /></a></td>
+<td><a href="//en.wikiquote.org/wiki/Special:Search/area" class="extiw" title="q:Special:Search/area">Quotations</a> from Wikiquote</td>
+</tr>
+<tr style="height:25px;">
+<td><a href="//en.wikisource.org/wiki/Special:Search/area" title="Search Wikisource"><img alt="" src="//upload.wikimedia.org/wikipedia/commons/thumb/4/4c/Wikisource-logo.svg/24px-Wikisource-logo.svg.png" width="24" height="25" srcset="//upload.wikimedia.org/wikipedia/commons/thumb/4/4c/Wikisource-logo.svg/36px-Wikisource-logo.svg.png 1.5x, //upload.wikimedia.org/wikipedia/commons/thumb/4/4c/Wikisource-logo.svg/48px-Wikisource-logo.svg.png 2x" data-file-width="410" data-file-height="430" /></a></td>
+<td><a href="//en.wikisource.org/wiki/Special:Search/area" class="extiw" title="s:Special:Search/area">Source texts</a> from Wikisource</td>
+</tr>
+<tr style="height:25px;">
+<td><a href="//en.wikibooks.org/wiki/Special:Search/area" title="Search Wikibooks"><img alt="" src="//upload.wikimedia.org/wikipedia/commons/thumb/f/fa/Wikibooks-logo.svg/25px-Wikibooks-logo.svg.png" width="25" height="25" srcset="//upload.wikimedia.org/wikipedia/commons/thumb/f/fa/Wikibooks-logo.svg/38px-Wikibooks-logo.svg.png 1.5x, //upload.wikimedia.org/wikipedia/commons/thumb/f/fa/Wikibooks-logo.svg/50px-Wikibooks-logo.svg.png 2x" data-file-width="300" data-file-height="300" /></a></td>
+<td><a href="//en.wikibooks.org/wiki/Special:Search/area" class="extiw" title="b:Special:Search/area">Textbooks</a> from Wikibooks</td>
+</tr>
+<tr style="height:25px;">
+<td><a href="//en.wikiversity.org/wiki/Special:Search/area" title="Search Wikiversity"><img alt="" src="//upload.wikimedia.org/wikipedia/commons/thumb/1/1b/Wikiversity-logo-en.svg/25px-Wikiversity-logo-en.svg.png" width="25" height="23" srcset="//upload.wikimedia.org/wikipedia/commons/thumb/1/1b/Wikiversity-logo-en.svg/38px-Wikiversity-logo-en.svg.png 1.5x, //upload.wikimedia.org/wikipedia/commons/thumb/1/1b/Wikiversity-logo-en.svg/50px-Wikiversity-logo-en.svg.png 2x" data-file-width="1000" data-file-height="900" /></a></td>
+<td><a href="//en.wikiversity.org/wiki/Special:Search/area" class="extiw" title="v:Special:Search/area">Learning resources</a> from Wikiversity</td>
+</tr>
+</table>
+<table cellspacing="0" class="navbox" style="border-spacing:0;">
+<tr>
+<td style="padding:2px;">
+<table cellspacing="0" class="nowraplinks collapsible autocollapse navbox-inner" style="border-spacing:0;background:transparent;color:inherit;">
+<tr>
+<th scope="col" class="navbox-title" colspan="2">
+<div class="plainlinks hlist navbar mini">
+<ul>
+<li class="nv-view"><a href="/wiki/Template:USStateLists" title="Template:USStateLists"><span title="View this template" style=";;background:none transparent;border:none;;">v</span></a></li>
+<li class="nv-talk"><a href="/wiki/Template_talk:USStateLists" title="Template talk:USStateLists"><span title="Discuss this template" style=";;background:none transparent;border:none;;">t</span></a></li>
+<li class="nv-edit"><a class="external text" href="//en.wikipedia.org/w/index.php?title=Template:USStateLists&action=edit"><span title="Edit this template" style=";;background:none transparent;border:none;;">e</span></a></li>
+</ul>
+</div>
+<div style="font-size:110%;"><a href="/wiki/List_of_U.S._state_lists" title="List of U.S. state lists" class="mw-redirect">United States state-related lists</a></div>
+</th>
+</tr>
+<tr style="height:2px;">
+<td colspan="2"></td>
+</tr>
+<tr>
+<td class="navbox-abovebelow" colspan="2">
+<div><a href="/wiki/List_of_states_and_territories_of_the_United_States" title="List of states and territories of the United States">List of states and territories of the United States</a></div>
+</td>
+</tr>
+<tr style="height:2px;">
+<td colspan="2"></td>
+</tr>
+<tr>
+<th scope="row" class="navbox-group">Demographics</th>
+<td class="navbox-list navbox-odd hlist" style="text-align:left;border-left-width:2px;border-left-style:solid;width:100%;padding:0px;">
+<div style="padding:0em 0.25em;">
+<ul>
+<li><a href="/wiki/List_of_U.S._states_by_educational_attainment" title="List of U.S. states by educational attainment">Educational attainment</a></li>
+<li><a href="/wiki/Irreligion_in_the_United_States#Demographics" title="Irreligion in the United States">Irreligion</a></li>
+<li><a href="/wiki/List_of_U.S._states%27_largest_cities_by_population" title="List of U.S. states' largest cities by population">Five most populous cities</a></li>
+<li><a href="/wiki/List_of_the_most_populous_counties_by_U.S._state" title="List of the most populous counties by U.S. state">Most populous county</a></li>
+<li><a href="/wiki/List_of_U.S._states_and_territories_by_population#States_and_territories" title="List of U.S. states and territories by population">Population</a>
+<ul>
+<li><a href="/wiki/List_of_U.S._states_by_population_density" title="List of U.S. states by population density">Density</a></li>
+<li><a href="/wiki/List_of_U.S._states_by_population_growth_rate" title="List of U.S. states by population growth rate">Growth rate</a></li>
+<li><a href="/wiki/List_of_U.S._states_by_historical_population" title="List of U.S. states by historical population">Historical</a></li>
+<li><a href="/wiki/List_of_U.S._states_by_African-American_population" title="List of U.S. states by African-American population">African American</a></li>
+<li><a href="/wiki/List_of_U.S._states_by_Amish_population" title="List of U.S. states by Amish population">Amish</a></li>
+<li><a href="/wiki/List_of_U.S._states_by_Hispanic_and_Latino_population" title="List of U.S. states by Hispanic and Latino population">Hispanic and Latino</a></li>
+<li><a href="/wiki/Spanish_language_in_the_United_States#Geographic_distribution" title="Spanish language in the United States">Spanish-speaking</a></li>
+</ul>
+</li>
+</ul>
+</div>
+</td>
+</tr>
+<tr style="height:2px;">
+<td colspan="2"></td>
+</tr>
+<tr>
+<th scope="row" class="navbox-group">Economy</th>
+<td class="navbox-list navbox-even hlist" style="text-align:left;border-left-width:2px;border-left-style:solid;width:100%;padding:0px;">
+<div style="padding:0em 0.25em;">
+<ul>
+<li><a href="/wiki/List_of_U.S._states_by_the_number_of_billionaires" title="List of U.S. states by the number of billionaires">Billionaires</a></li>
+<li><a href="/wiki/List_of_U.S._state_budgets" title="List of U.S. state budgets">Budgets</a></li>
+<li><a href="/wiki/Federal_tax_revenue_by_state" title="Federal tax revenue by state">Federal tax revenue</a></li>
+<li><a href="/wiki/Federal_taxation_and_spending_by_state" title="Federal taxation and spending by state">Federal net taxation less spending</a></li>
+<li><a href="/wiki/List_of_U.S._states_by_GDP" title="List of U.S. states by GDP">Gross domestic product</a>
+<ul>
+<li><a href="/wiki/List_of_U.S._states_by_economic_growth_rate" title="List of U.S. states by economic growth rate">Growth rate</a></li>
+<li><a href="/wiki/List_of_U.S._states_by_GDP_per_capita" title="List of U.S. states by GDP per capita">Per capita</a></li>
+</ul>
+</li>
+<li><a href="/wiki/List_of_U.S._states_by_income" title="List of U.S. states by income">Income</a>
+<ul>
+<li><a href="/wiki/List_of_U.S._states_by_income#States_ranked_by_median_household_income" title="List of U.S. states by income">Household</a></li>
+<li><a href="/wiki/List_of_U.S._states_by_income#States_ranked_by_per_capita_income" title="List of U.S. states by income">Per capita</a></li>
+<li><a href="/wiki/List_of_U.S._states_by_Gini_coefficient" title="List of U.S. states by Gini coefficient">Inequality</a></li>
+</ul>
+</li>
+<li><a href="/wiki/Union_affiliation_by_U.S._state" title="Union affiliation by U.S. state">Labor affiliation</a></li>
+<li><a href="/wiki/List_of_U.S._state_minimum_wages" title="List of U.S. state minimum wages" class="mw-redirect">Minimum wages</a></li>
+<li><a href="/wiki/List_of_U.S._states_by_poverty_rate" title="List of U.S. states by poverty rate">Poverty rates</a></li>
+<li><a href="/wiki/Sales_taxes_in_the_United_States#By_jurisdiction" title="Sales taxes in the United States">Sales taxes</a></li>
+<li><a href="/wiki/State_tax_levels_in_the_United_States" title="State tax levels in the United States">State income taxes</a>
+<ul>
+<li><a href="/wiki/State_income_tax#U.S._States_with_a_flat_rate_individual_income_tax" title="State income tax">Flat rate</a></li>
+<li><a href="/wiki/State_income_tax#U.S._States_with_no_individual_income_tax" title="State income tax">None</a></li>
+</ul>
+</li>
+<li><a href="/wiki/List_of_U.S._states_by_unemployment_rate" title="List of U.S. states by unemployment rate">Unemployment rates</a></li>
+</ul>
+</div>
+</td>
+</tr>
+<tr style="height:2px;">
+<td colspan="2"></td>
+</tr>
+<tr>
+<th scope="row" class="navbox-group">Environment</th>
+<td class="navbox-list navbox-odd hlist" style="text-align:left;border-left-width:2px;border-left-style:solid;width:100%;padding:0px;">
+<div style="padding:0em 0.25em;">
+<ul>
+<li><a href="/wiki/List_of_U.S._states_by_carbon_dioxide_emissions" title="List of U.S. states by carbon dioxide emissions">Carbon dioxide emissions</a></li>
+<li><a href="/wiki/List_of_U.S._state_parks" title="List of U.S. state parks">Parks</a></li>
+<li><a href="/wiki/List_of_U.S._states_by_electricity_production_from_renewable_sources" title="List of U.S. states by electricity production from renewable sources">Renewable energy</a></li>
+<li><a href="/wiki/List_of_Superfund_sites_in_the_United_States" title="List of Superfund sites in the United States">Superfund sites</a></li>
+<li><a href="/wiki/List_of_U.S._state_and_tribal_wilderness_areas" title="List of U.S. state and tribal wilderness areas">Wildernesses</a></li>
+</ul>
+</div>
+</td>
+</tr>
+<tr style="height:2px;">
+<td colspan="2"></td>
+</tr>
+<tr>
+<th scope="row" class="navbox-group">Geography</th>
+<td class="navbox-list navbox-even hlist" style="text-align:left;border-left-width:2px;border-left-style:solid;width:100%;padding:0px;">
+<div style="padding:0em 0.25em;">
+<ul>
+<li><strong class="selflink">Area</strong></li>
+<li><a href="/wiki/List_of_U.S._states_by_coastline" title="List of U.S. states by coastline">Coastline</a></li>
+<li><a href="/wiki/List_of_U.S._states_by_elevation" title="List of U.S. states by elevation">Elevations</a></li>
+</ul>
+</div>
+</td>
+</tr>
+<tr style="height:2px;">
+<td colspan="2"></td>
+</tr>
+<tr>
+<th scope="row" class="navbox-group">Government</th>
+<td class="navbox-list navbox-odd hlist" style="text-align:left;border-left-width:2px;border-left-style:solid;width:100%;padding:0px;">
+<div style="padding:0em 0.25em;">
+<ul>
+<li><a href="/wiki/Wikipedia:List_of_U.S._state_portals" title="Wikipedia:List of U.S. state portals">Agencies</a></li>
+<li><a href="/wiki/State_attorney_general" title="State attorney general">Attorneys general</a></li>
+<li><a href="/wiki/List_of_capitals_in_the_United_States" title="List of capitals in the United States">Capitals</a>
+<ul>
+<li><a href="/wiki/List_of_capitals_in_the_United_States#Historical_state_capitals" title="List of capitals in the United States">Historical</a></li>
+</ul>
+</li>
+<li><a href="/wiki/List_of_state_capitols_in_the_United_States" title="List of state capitols in the United States">Capitol buildings</a></li>
+<li><a href="/wiki/Comparison_of_U.S._state_governments" title="Comparison of U.S. state governments">Comparison</a></li>
+<li><a href="/wiki/List_of_counties_by_U.S._state" title="List of counties by U.S. state">Counties</a>
+<ul>
+<li><a href="/wiki/Index_of_U.S._counties" title="Index of U.S. counties">Alphabetical</a></li>
+<li><a href="/wiki/County_(United_States)#Number_of_county_equivalents_per_state" title="County (United States)">Number</a></li>
+</ul>
+</li>
+<li><a href="/wiki/List_of_current_United_States_governors" title="List of current United States governors">Governors</a></li>
+<li><a href="/wiki/List_of_United_States_state_legislatures" title="List of United States state legislatures">Legislatures</a></li>
+<li><a href="/wiki/List_of_U.S._state_libraries_and_archives" title="List of U.S. state libraries and archives">Libraries and archives</a></li>
+<li><a href="/wiki/Languages_of_the_United_States#Official_language_status" title="Languages of the United States">Official languages</a></li>
+<li><a href="/wiki/List_of_U.S._states%27_Poets_Laureate" title="List of U.S. states' Poets Laureate">Poets laureate</a></li>
+<li><a href="/wiki/State_supreme_court" title="State supreme court">State supreme courts</a></li>
+<li><a href="/wiki/State_treasurer" title="State treasurer">State treasurers</a></li>
+</ul>
+</div>
+</td>
+</tr>
+<tr style="height:2px;">
+<td colspan="2"></td>
+</tr>
+<tr>
+<th scope="row" class="navbox-group">Health</th>
+<td class="navbox-list navbox-even hlist" style="text-align:left;border-left-width:2px;border-left-style:solid;width:100%;padding:0px;">
+<div style="padding:0em 0.25em;">
+<ul>
+<li><a href="/wiki/List_of_hospitals_in_the_United_States" title="List of hospitals in the United States">Hospitals</a></li>
+<li><a href="/wiki/List_of_U.S._states_by_American_Human_Development_Index" title="List of U.S. states by American Human Development Index">American HDI</a></li>
+<li><a href="/wiki/List_of_U.S._states_by_life_expectancy" title="List of U.S. states by life expectancy">Life expectancy</a></li>
+<li><a href="/wiki/Obesity_in_the_United_States#Prevalence_by_state" title="Obesity in the United States">Obesity rates</a></li>
+<li><a href="/wiki/List_of_U.S._states_and_territories_by_fertility_rate" title="List of U.S. states and territories by fertility rate">Total fertility rates</a></li>
+</ul>
+</div>
+</td>
+</tr>
+<tr style="height:2px;">
+<td colspan="2"></td>
+</tr>
+<tr>
+<th scope="row" class="navbox-group">History</th>
+<td class="navbox-list navbox-odd hlist" style="text-align:left;border-left-width:2px;border-left-style:solid;width:100%;padding:0px;">
+<div style="padding:0em 0.25em;">
+<ul>
+<li><a href="/wiki/Historic_regions_of_the_United_States" title="Historic regions of the United States">Historic regions of the United States</a></li>
+<li><a href="/wiki/List_of_U.S._states_by_date_of_statehood" title="List of U.S. states by date of statehood">Date of statehood</a></li>
+<li><a href="/wiki/List_of_U.S._state_name_etymologies" title="List of U.S. state name etymologies">Etymologies</a></li>
+<li><a href="/wiki/List_of_capitals_in_the_United_States#Historical_state_capitals" title="List of capitals in the United States">Historical capitals</a></li>
+<li><a href="/wiki/List_of_U.S._states_by_historical_population" title="List of U.S. states by historical population">Historical population</a></li>
+<li><a href="/wiki/List_of_U.S._state_historical_societies_and_museums" title="List of U.S. state historical societies and museums">Historical societies and museums</a></li>
+<li><a href="/wiki/List_of_U.S._National_Historic_Landmarks_by_state" title="List of U.S. National Historic Landmarks by state">National Historic Landmarks</a></li>
+<li><a href="/wiki/List_of_U.S._states_that_were_never_U.S._territories" title="List of U.S. states that were never U.S. territories">Never territories</a></li>
+<li><a href="/wiki/List_of_U.S._state_partition_proposals" title="List of U.S. state partition proposals">Partition proposals</a></li>
+<li><a href="/wiki/List_of_U.S._states_by_date_of_statehood#Table" title="List of U.S. states by date of statehood">Preceding entities</a></li>
+</ul>
+</div>
+</td>
+</tr>
+<tr style="height:2px;">
+<td colspan="2"></td>
+</tr>
+<tr>
+<th scope="row" class="navbox-group">Law</th>
+<td class="navbox-list navbox-even hlist" style="text-align:left;border-left-width:2px;border-left-style:solid;width:100%;padding:0px;">
+<div style="padding:0em 0.25em;">
+<ul>
+<li><a href="/wiki/Abortion_in_the_United_States_by_state#State_table" title="Abortion in the United States by state">Abortion</a></li>
+<li><a href="/wiki/Ages_of_consent_in_North_America#United_States" title="Ages of consent in North America">Age of consent</a></li>
+<li><a href="/wiki/Alcohol_laws_of_the_United_States" title="Alcohol laws of the United States">Alcohol</a></li>
+<li><a href="/wiki/List_of_U.S._states_by_Alford_plea_usage#U.S._states" title="List of U.S. states by Alford plea usage">Alford plea</a></li>
+<li><a href="/wiki/Restrictions_on_cell_phone_use_while_driving_in_the_United_States" title="Restrictions on cell phone use while driving in the United States">Cell phones and driving laws</a></li>
+<li><a href="/wiki/List_of_U.S._state_constitutions" title="List of U.S. state constitutions" class="mw-redirect">Constitutions</a></li>
+<li><a href="/wiki/Gun_laws_in_the_United_States_by_state" title="Gun laws in the United States by state">Firearms</a>
+<ul>
+<li><a href="/wiki/Gun_violence_in_the_United_States_by_state" title="Gun violence in the United States by state">Homicide</a></li>
+</ul>
+</li>
+<li><a href="/wiki/List_of_United_States_state_and_local_law_enforcement_agencies" title="List of United States state and local law enforcement agencies">Law enforcement agencies</a></li>
+<li><a href="/wiki/List_of_U.S._state_legal_codes" title="List of U.S. state legal codes">Legal codes</a></li>
+<li><a href="/wiki/Legality_of_cannabis_by_US_state" title="Legality of cannabis by US state" class="mw-redirect">Legality of cannabis</a></li>
+<li><a href="/wiki/United_States_Peace_Index" title="United States Peace Index">Peace Index</a></li>
+<li><a href="/wiki/List_of_United_States_state_prisons" title="List of United States state prisons">Prisons</a>
+<ul>
+<li><a href="/wiki/List_of_U.S._states_by_incarceration_rate" title="List of U.S. states by incarceration rate">Incarceration rate</a></li>
+</ul>
+</li>
+<li><a href="/wiki/Same-sex_marriage_status_in_the_United_States_by_state" title="Same-sex marriage status in the United States by state" class="mw-redirect">Same-sex marriage</a>
+<ul>
+<li><a href="/wiki/List_of_U.S._state_constitutional_amendments_banning_same-sex_unions_by_type" title="List of U.S. state constitutional amendments banning same-sex unions by type">Constitutional bans</a></li>
+<li><a href="/wiki/Same-sex_marriage_law_in_the_United_States_by_state" title="Same-sex marriage law in the United States by state">Law</a></li>
+</ul>
+</li>
+<li><a href="/wiki/Seat_belt_legislation_in_the_United_States#The_laws_by_state" title="Seat belt legislation in the United States">Seat belt laws</a></li>
+<li><a href="/wiki/List_of_U.S._state_constitutional_provisions_allowing_self-representation_in_state_courts" title="List of U.S. state constitutional provisions allowing self-representation in state courts">Self-representation</a></li>
+<li><a href="/wiki/List_of_smoking_bans_in_the_United_States" title="List of smoking bans in the United States">Smoking</a></li>
+</ul>
+</div>
+</td>
+</tr>
+<tr style="height:2px;">
+<td colspan="2"></td>
+</tr>
+<tr>
+<th scope="row" class="navbox-group">Miscellaneous</th>
+<td class="navbox-list navbox-odd hlist" style="text-align:left;border-left-width:2px;border-left-style:solid;width:100%;padding:0px;">
+<div style="padding:0em 0.25em;">
+<ul>
+<li><a href="/wiki/List_of_U.S._state_abbreviations" title="List of U.S. state abbreviations">Abbreviations</a></li>
+<li><a href="/wiki/List_of_U.S._state_abbreviations" title="List of U.S. state abbreviations">Codes</a></li>
+<li><a href="/wiki/List_of_demonyms_for_U.S._states" title="List of demonyms for U.S. states">Demonyms</a></li>
+<li><a href="/wiki/List_of_U.S._state,_district,_and_territorial_insignia" title="List of U.S. state, district, and territorial insignia">Insignia</a></li>
+<li><a href="/wiki/List_of_U.S._states_by_vehicles_per_capita" title="List of U.S. states by vehicles per capita">Motor vehicles</a></li>
+<li><a href="/wiki/Wikipedia:List_of_U.S._state_portals" title="Wikipedia:List of U.S. state portals">Portals</a></li>
+<li><a href="/wiki/Lists_of_United_States_state_insignia" title="Lists of United States state insignia" class="mw-redirect">Symbols</a></li>
+<li><a href="/wiki/List_of_tallest_buildings_by_U.S._state" title="List of tallest buildings by U.S. state">Tallest buildings</a></li>
+<li><a href="/wiki/List_of_time_zones_by_U.S._state" title="List of time zones by U.S. state" class="mw-redirect">Time zones</a></li>
+<li><a href="/wiki/List_of_fictional_U.S._states" title="List of fictional U.S. states">Fictional states</a></li>
+</ul>
+</div>
+</td>
+</tr>
+</table>
+</td>
+</tr>
+</table>
+
+
+
+<noscript><img src="//en.wikipedia.org/wiki/Special:CentralAutoLogin/start?type=1x1" alt="" title="" width="1" height="1" style="border: none; position: absolute;" /></noscript></div> <div class="printfooter">
+ Retrieved from "<a dir="ltr" href="http://en.wikipedia.org/w/index.php?title=List_of_U.S._states_and_territories_by_area&oldid=614847271">http://en.wikipedia.org/w/index.php?title=List_of_U.S._states_and_territories_by_area&oldid=614847271</a>" </div>
+ <div id='catlinks' class='catlinks'><div id="mw-normal-catlinks" class="mw-normal-catlinks"><a href="/wiki/Help:Category" title="Help:Category">Categories</a>: <ul><li><a href="/wiki/Category:Geography_of_the_United_States" title="Category:Geography of the United States">Geography of the United States</a></li><li><a href="/wiki/Category:Lists_of_states_of_the_United_States" title="Category:Lists of states of the United States">Lists of states of the United States</a></li></ul></div></div> <div class="visualClear"></div>
+ </div>
+ </div>
+ <div id="mw-navigation">
+ <h2>Navigation menu</h2>
+
+ <div id="mw-head">
+ <div id="p-personal" role="navigation" class="" aria-labelledby="p-personal-label">
+ <h3 id="p-personal-label">Personal tools</h3>
+ <ul>
+ <li id="pt-createaccount"><a href="/w/index.php?title=Special:UserLogin&returnto=List+of+U.S.+states+and+territories+by+area&type=signup">Create account</a></li><li id="pt-login"><a href="/w/index.php?title=Special:UserLogin&returnto=List+of+U.S.+states+and+territories+by+area" title="You're encouraged to log in; however, it's not mandatory. [o]" accesskey="o">Log in</a></li> </ul>
+ </div>
+ <div id="left-navigation">
+ <div id="p-namespaces" role="navigation" class="vectorTabs" aria-labelledby="p-namespaces-label">
+ <h3 id="p-namespaces-label">Namespaces</h3>
+ <ul>
+ <li id="ca-nstab-main" class="selected"><span><a href="/wiki/List_of_U.S._states_and_territories_by_area" title="View the content page [c]" accesskey="c">Article</a></span></li>
+ <li id="ca-talk"><span><a href="/wiki/Talk:List_of_U.S._states_and_territories_by_area" title="Discussion about the content page [t]" accesskey="t">Talk</a></span></li>
+ </ul>
+ </div>
+ <div id="p-variants" role="navigation" class="vectorMenu emptyPortlet" aria-labelledby="p-variants-label">
+ <h3 id="mw-vector-current-variant">
+ </h3>
+
+ <h3 id="p-variants-label"><span>Variants</span><a href="#"></a></h3>
+
+ <div class="menu">
+ <ul>
+ </ul>
+ </div>
+ </div>
+ </div>
+ <div id="right-navigation">
+ <div id="p-views" role="navigation" class="vectorTabs" aria-labelledby="p-views-label">
+ <h3 id="p-views-label">Views</h3>
+ <ul>
+ <li id="ca-view" class="selected"><span><a href="/wiki/List_of_U.S._states_and_territories_by_area" >Read</a></span></li>
+ <li id="ca-edit"><span><a href="/w/index.php?title=List_of_U.S._states_and_territories_by_area&action=edit" title="You can edit this page. Please use the preview button before saving [e]" accesskey="e">Edit</a></span></li>
+ <li id="ca-history" class="collapsible"><span><a href="/w/index.php?title=List_of_U.S._states_and_territories_by_area&action=history" title="Past versions of this page [h]" accesskey="h">View history</a></span></li>
+ </ul>
+ </div>
+ <div id="p-cactions" role="navigation" class="vectorMenu emptyPortlet" aria-labelledby="p-cactions-label">
+ <h3 id="p-cactions-label"><span>More</span><a href="#"></a></h3>
+
+ <div class="menu">
+ <ul>
+ </ul>
+ </div>
+ </div>
+ <div id="p-search" role="search">
+ <h3>
+ <label for="searchInput">Search</label>
+ </h3>
+
+ <form action="/w/index.php" id="searchform">
+ <div id="simpleSearch">
+ <input type="search" name="search" placeholder="Search" title="Search Wikipedia [f]" accesskey="f" id="searchInput" /><input type="hidden" value="Special:Search" name="title" /><input type="submit" name="fulltext" value="Search" title="Search Wikipedia for this text" id="mw-searchButton" class="searchButton mw-fallbackSearchButton" /><input type="submit" name="go" value="Go" title="Go to a page with this exact name if one exists" id="searchButton" class="searchButton" /> </div>
+ </form>
+ </div>
+ </div>
+ </div>
+ <div id="mw-panel">
+ <div id="p-logo" role="banner"><a style="background-image: url(//upload.wikimedia.org/wikipedia/en/b/bc/Wiki.png);" href="/wiki/Main_Page" title="Visit the main page"></a></div>
+ <div class="portal" role="navigation" id='p-navigation' aria-labelledby='p-navigation-label'>
+ <h3 id='p-navigation-label'>Navigation</h3>
+
+ <div class="body">
+ <ul>
+ <li id="n-mainpage-description"><a href="/wiki/Main_Page" title="Visit the main page [z]" accesskey="z">Main page</a></li>
+ <li id="n-contents"><a href="/wiki/Portal:Contents" title="Guides to browsing Wikipedia">Contents</a></li>
+ <li id="n-featuredcontent"><a href="/wiki/Portal:Featured_content" title="Featured content – the best of Wikipedia">Featured content</a></li>
+ <li id="n-currentevents"><a href="/wiki/Portal:Current_events" title="Find background information on current events">Current events</a></li>
+ <li id="n-randompage"><a href="/wiki/Special:Random" title="Load a random article [x]" accesskey="x">Random article</a></li>
+ <li id="n-sitesupport"><a href="https://donate.wikimedia.org/wiki/Special:FundraiserRedirector?utm_source=donate&utm_medium=sidebar&utm_campaign=C13_en.wikipedia.org&uselang=en" title="Support us">Donate to Wikipedia</a></li>
+ <li id="n-shoplink"><a href="//shop.wikimedia.org" title="Visit the Wikimedia Shop">Wikimedia Shop</a></li>
+ </ul>
+ </div>
+ </div>
+ <div class="portal" role="navigation" id='p-interaction' aria-labelledby='p-interaction-label'>
+ <h3 id='p-interaction-label'>Interaction</h3>
+
+ <div class="body">
+ <ul>
+ <li id="n-help"><a href="/wiki/Help:Contents" title="Guidance on how to use and edit Wikipedia">Help</a></li>
+ <li id="n-aboutsite"><a href="/wiki/Wikipedia:About" title="Find out about Wikipedia">About Wikipedia</a></li>
+ <li id="n-portal"><a href="/wiki/Wikipedia:Community_portal" title="About the project, what you can do, where to find things">Community portal</a></li>
+ <li id="n-recentchanges"><a href="/wiki/Special:RecentChanges" title="A list of recent changes in the wiki [r]" accesskey="r">Recent changes</a></li>
+ <li id="n-contactpage"><a href="//en.wikipedia.org/wiki/Wikipedia:Contact_us">Contact page</a></li>
+ </ul>
+ </div>
+ </div>
+ <div class="portal" role="navigation" id='p-tb' aria-labelledby='p-tb-label'>
+ <h3 id='p-tb-label'>Tools</h3>
+
+ <div class="body">
+ <ul>
+ <li id="t-whatlinkshere"><a href="/wiki/Special:WhatLinksHere/List_of_U.S._states_and_territories_by_area" title="List of all English Wikipedia pages containing links to this page [j]" accesskey="j">What links here</a></li>
+ <li id="t-recentchangeslinked"><a href="/wiki/Special:RecentChangesLinked/List_of_U.S._states_and_territories_by_area" title="Recent changes in pages linked from this page [k]" accesskey="k">Related changes</a></li>
+ <li id="t-upload"><a href="/wiki/Wikipedia:File_Upload_Wizard" title="Upload files [u]" accesskey="u">Upload file</a></li>
+ <li id="t-specialpages"><a href="/wiki/Special:SpecialPages" title="A list of all special pages [q]" accesskey="q">Special pages</a></li>
+ <li id="t-permalink"><a href="/w/index.php?title=List_of_U.S._states_and_territories_by_area&oldid=614847271" title="Permanent link to this revision of the page">Permanent link</a></li>
+ <li id="t-info"><a href="/w/index.php?title=List_of_U.S._states_and_territories_by_area&action=info">Page information</a></li>
+ <li id="t-wikibase"><a href="//www.wikidata.org/wiki/Q150340" title="Link to connected data repository item [g]" accesskey="g">Wikidata item</a></li>
+ <li id="t-cite"><a href="/w/index.php?title=Special:Cite&page=List_of_U.S._states_and_territories_by_area&id=614847271" title="Information on how to cite this page">Cite this page</a></li> </ul>
+ </div>
+ </div>
+ <div class="portal" role="navigation" id='p-coll-print_export' aria-labelledby='p-coll-print_export-label'>
+ <h3 id='p-coll-print_export-label'>Print/export</h3>
+
+ <div class="body">
+ <ul>
+ <li id="coll-create_a_book"><a href="/w/index.php?title=Special:Book&bookcmd=book_creator&referer=List+of+U.S.+states+and+territories+by+area">Create a book</a></li>
+ <li id="coll-download-as-rl"><a href="/w/index.php?title=Special:Book&bookcmd=render_article&arttitle=List+of+U.S.+states+and+territories+by+area&oldid=614847271&writer=rl">Download as PDF</a></li>
+ <li id="t-print"><a href="/w/index.php?title=List_of_U.S._states_and_territories_by_area&printable=yes" title="Printable version of this page [p]" accesskey="p">Printable version</a></li>
+ </ul>
+ </div>
+ </div>
+ <div class="portal" role="navigation" id='p-lang' aria-labelledby='p-lang-label'>
+ <h3 id='p-lang-label'>Languages</h3>
+
+ <div class="body">
+ <ul>
+ <li class="interlanguage-link interwiki-ar"><a href="//ar.wikipedia.org/wiki/%D9%85%D9%84%D8%AD%D9%82:%D9%82%D8%A7%D8%A6%D9%85%D8%A9_%D8%A7%D9%84%D9%88%D9%84%D8%A7%D9%8A%D8%A7%D8%AA_%D8%A7%D9%84%D8%A3%D9%85%D8%B1%D9%8A%D9%83%D9%8A%D8%A9_%D8%AD%D8%B3%D8%A8_%D8%A7%D9%84%D9%85%D8%B3%D8%A7%D8%AD%D8%A9" title="ملحق:قائمة الولايات الأمريكية حسب المساحة – Arabic" lang="ar" hreflang="ar">العربية</a></li>
+ <li class="interlanguage-link interwiki-bg"><a href="//bg.wikipedia.org/wiki/%D0%A1%D0%BF%D0%B8%D1%81%D1%8A%D0%BA_%D0%BD%D0%B0_%D1%89%D0%B0%D1%82%D0%B8%D1%82%D0%B5_%D0%B2_%D0%A1%D0%90%D0%A9_%D0%BF%D0%BE_%D0%BF%D0%BB%D0%BE%D1%89" title="Списък на щатите в САЩ по площ – Bulgarian" lang="bg" hreflang="bg">Български</a></li>
+ <li class="interlanguage-link interwiki-bar"><a href="//bar.wikipedia.org/wiki/Bundesstootn_vo_da_USA_noch_Fl%C3%A4chn" title="Bundesstootn vo da USA noch Flächn – Bavarian" lang="bar" hreflang="bar">Boarisch</a></li>
+ <li class="interlanguage-link interwiki-ca"><a href="//ca.wikipedia.org/wiki/Llista_d%27estats_dels_Estats_Units_per_superf%C3%ADcie" title="Llista d'estats dels Estats Units per superfície – Catalan" lang="ca" hreflang="ca">Català</a></li>
+ <li class="interlanguage-link interwiki-cs"><a href="//cs.wikipedia.org/wiki/Seznam_st%C3%A1t%C5%AF_a_teritori%C3%AD_USA_podle_rozlohy" title="Seznam států a teritorií USA podle rozlohy – Czech" lang="cs" hreflang="cs">Čeština</a></li>
+ <li class="interlanguage-link interwiki-cy"><a href="//cy.wikipedia.org/wiki/Rhestr_taleithau%27r_Unol_Daleithiau_yn_%C3%B4l_arwynebedd" title="Rhestr taleithau'r Unol Daleithiau yn ôl arwynebedd – Welsh" lang="cy" hreflang="cy">Cymraeg</a></li>
+ <li class="interlanguage-link interwiki-da"><a href="//da.wikipedia.org/wiki/USA%27s_delstater_efter_areal" title="USA's delstater efter areal – Danish" lang="da" hreflang="da">Dansk</a></li>
+ <li class="interlanguage-link interwiki-de"><a href="//de.wikipedia.org/wiki/Liste_der_Bundesstaaten_der_Vereinigten_Staaten_nach_Fl%C3%A4che" title="Liste der Bundesstaaten der Vereinigten Staaten nach Fläche – German" lang="de" hreflang="de">Deutsch</a></li>
+ <li class="interlanguage-link interwiki-el"><a href="//el.wikipedia.org/wiki/%CE%9A%CE%B1%CF%84%CE%AC%CE%BB%CE%BF%CE%B3%CE%BF%CF%82_%CF%80%CE%BF%CE%BB%CE%B9%CF%84%CE%B5%CE%B9%CF%8E%CE%BD_%CE%BA%CE%B1%CE%B9_%CE%B5%CE%B4%CE%B1%CF%86%CF%8E%CE%BD_%CF%84%CF%89%CE%BD_%CE%97%CE%A0%CE%91_%CE%B1%CE%BD%CE%AC_%CE%AD%CE%BA%CF%84%CE%B1%CF%83%CE%B7" title="Κατάλογος πολιτειών και εδαφών των ΗΠΑ ανά έκταση – Greek" lang="el" hreflang="el">Ελληνικά</a></li>
+ <li class="interlanguage-link interwiki-es"><a href="//es.wikipedia.org/wiki/Anexo:Estados_de_los_Estados_Unidos_por_superficie" title="Anexo:Estados de los Estados Unidos por superficie – Spanish" lang="es" hreflang="es">Español</a></li>
+ <li class="interlanguage-link interwiki-fr"><a href="//fr.wikipedia.org/wiki/%C3%89tats_des_%C3%89tats-Unis_par_superficie" title="États des États-Unis par superficie – French" lang="fr" hreflang="fr">Français</a></li>
+ <li class="interlanguage-link interwiki-ga"><a href="//ga.wikipedia.org/wiki/Liosta_st%C3%A1it_SAM_de_r%C3%A9ir_achair" title="Liosta stáit SAM de réir achair – Irish" lang="ga" hreflang="ga">Gaeilge</a></li>
+ <li class="interlanguage-link interwiki-gl"><a href="//gl.wikipedia.org/wiki/Lista_dos_estados_dos_EUA_por_%C3%A1rea" title="Lista dos estados dos EUA por área – Galician" lang="gl" hreflang="gl">Galego</a></li>
+ <li class="interlanguage-link interwiki-ilo"><a href="//ilo.wikipedia.org/wiki/Listaan_dagiti_estado_ken_territorio_iti_Estados_Unidos_babaen_ti_kadakkel" title="Listaan dagiti estado ken territorio iti Estados Unidos babaen ti kadakkel – Iloko" lang="ilo" hreflang="ilo">Ilokano</a></li>
+ <li class="interlanguage-link interwiki-it"><a href="//it.wikipedia.org/wiki/Stati_degli_Stati_Uniti_d%27America_per_superficie" title="Stati degli Stati Uniti d'America per superficie – Italian" lang="it" hreflang="it">Italiano</a></li>
+ <li class="interlanguage-link interwiki-jv"><a href="//jv.wikipedia.org/wiki/Dhaptar_negara_bag%C3%A9an_Am%C3%A9rika_Sar%C3%A9kat_miturut_jembar_wewengkon" title="Dhaptar negara bagéan Amérika Sarékat miturut jembar wewengkon – Javanese" lang="jv" hreflang="jv">Basa Jawa</a></li>
+ <li class="interlanguage-link interwiki-la"><a href="//la.wikipedia.org/wiki/Index_civitatum_Americae_per_superficiem" title="Index civitatum Americae per superficiem – Latin" lang="la" hreflang="la">Latina</a></li>
+ <li class="interlanguage-link interwiki-hu"><a href="//hu.wikipedia.org/wiki/Az_Amerikai_Egyes%C3%BClt_%C3%81llamok_tag%C3%A1llamainak_list%C3%A1ja_ter%C3%BClet%C3%BCk_szerint" title="Az Amerikai Egyesült Államok tagállamainak listája területük szerint – Hungarian" lang="hu" hreflang="hu">Magyar</a></li>
+ <li class="interlanguage-link interwiki-mk"><a href="//mk.wikipedia.org/wiki/%D0%A1%D0%BF%D0%B8%D1%81%D0%BE%D0%BA_%D0%BD%D0%B0_%D1%81%D0%BE%D1%98%D1%83%D0%B7%D0%BD%D0%B8_%D0%B4%D1%80%D0%B6%D0%B0%D0%B2%D0%B8_%D0%B8_%D1%82%D0%B5%D1%80%D0%B8%D1%82%D0%BE%D1%80%D0%B8%D0%B8_%D0%BD%D0%B0_%D0%A1%D0%90%D0%94_%D0%BF%D0%BE_%D0%BF%D0%BE%D0%B2%D1%80%D1%88%D0%B8%D0%BD%D0%B0" title="Список на сојузни држави и територии на САД по површина – Macedonian" lang="mk" hreflang="mk">Македонски</a></li>
+ <li class="interlanguage-link interwiki-no"><a href="//no.wikipedia.org/wiki/Liste_over_USAs_delstater_etter_areal" title="Liste over USAs delstater etter areal – Norwegian (bokmål)" lang="no" hreflang="no">Norsk bokmål</a></li>
+ <li class="interlanguage-link interwiki-pl"><a href="//pl.wikipedia.org/wiki/Stany_USA_wed%C5%82ug_powierzchni" title="Stany USA według powierzchni – Polish" lang="pl" hreflang="pl">Polski</a></li>
+ <li class="interlanguage-link interwiki-pt"><a href="//pt.wikipedia.org/wiki/Anexo:Lista_de_estados_e_territ%C3%B3rios_dos_Estados_Unidos_por_%C3%A1rea" title="Anexo:Lista de estados e territórios dos Estados Unidos por área – Portuguese" lang="pt" hreflang="pt">Português</a></li>
+ <li class="interlanguage-link interwiki-ro"><a href="//ro.wikipedia.org/wiki/List%C4%83_a_statelor_SUA_ordonate_dup%C4%83_m%C4%83rimea_suprafe%C8%9Bei" title="Listă a statelor SUA ordonate după mărimea suprafeței – Romanian" lang="ro" hreflang="ro">Română</a></li>
+ <li class="interlanguage-link interwiki-sq"><a href="//sq.wikipedia.org/wiki/Renditja_e_Shteteve_t%C3%AB_SHBA-s%C3%AB_sipas_Sip%C3%ABrfaqes" title="Renditja e Shteteve të SHBA-së sipas Sipërfaqes – Albanian" lang="sq" hreflang="sq">Shqip</a></li>
+ <li class="interlanguage-link interwiki-simple"><a href="//simple.wikipedia.org/wiki/List_of_U.S._states_by_area" title="List of U.S. states by area – Simple English" lang="simple" hreflang="simple">Simple English</a></li>
+ <li class="interlanguage-link interwiki-sk"><a href="//sk.wikipedia.org/wiki/Zoznam_%C5%A1t%C3%A1tov_a_terit%C3%B3ri%C3%AD_USA_pod%C4%BEa_rozlohy" title="Zoznam štátov a teritórií USA podľa rozlohy – Slovak" lang="sk" hreflang="sk">Slovenčina</a></li>
+ <li class="interlanguage-link interwiki-sr"><a href="//sr.wikipedia.org/wiki/%D0%A1%D0%BF%D0%B8%D1%81%D0%B0%D0%BA_%D1%81%D0%B0%D0%B2%D0%B5%D0%B7%D0%BD%D0%B8%D1%85_%D0%B4%D1%80%D0%B6%D0%B0%D0%B2%D0%B0_%D0%B8_%D1%82%D0%B5%D1%80%D0%B8%D1%82%D0%BE%D1%80%D0%B8%D1%98%D0%B0_%D0%A1%D0%90%D0%94_%D0%BF%D0%BE_%D0%BF%D0%BE%D0%B2%D1%80%D1%88%D0%B8%D0%BD%D0%B8" title="Списак савезних држава и територија САД по површини – Serbian" lang="sr" hreflang="sr">Српски / srpski</a></li>
+ <li class="interlanguage-link interwiki-fi"><a href="//fi.wikipedia.org/wiki/Luettelo_Yhdysvaltain_osavaltioista_pinta-alan_mukaan" title="Luettelo Yhdysvaltain osavaltioista pinta-alan mukaan – Finnish" lang="fi" hreflang="fi">Suomi</a></li>
+ <li class="interlanguage-link interwiki-sv"><a href="//sv.wikipedia.org/wiki/Lista_%C3%B6ver_USA:s_delstater_och_territorier_efter_yta" title="Lista över USA:s delstater och territorier efter yta – Swedish" lang="sv" hreflang="sv">Svenska</a></li>
+ <li class="interlanguage-link interwiki-ur"><a href="//ur.wikipedia.org/wiki/%D9%81%DB%81%D8%B1%D8%B3%D8%AA_%D8%A7%D9%85%D8%B1%DB%8C%DA%A9%DB%8C_%D8%B1%DB%8C%D8%A7%D8%B3%D8%AA%DB%8C%DA%BA_%D8%A7%D9%88%D8%B1_%D8%B9%D9%84%D8%A7%D9%82%DB%81_%D8%AC%D8%A7%D8%AA_%D8%A8%D9%84%D8%AD%D8%A7%D8%B8_%D8%B1%D9%82%D8%A8%DB%81" title="فہرست امریکی ریاستیں اور علاقہ جات بلحاظ رقبہ – Urdu" lang="ur" hreflang="ur">اردو</a></li>
+ <li class="interlanguage-link interwiki-vi"><a href="//vi.wikipedia.org/wiki/Danh_s%C3%A1ch_ti%E1%BB%83u_bang_Hoa_K%E1%BB%B3_theo_di%E1%BB%87n_t%C3%ADch" title="Danh sách tiểu bang Hoa Kỳ theo diện tích – Vietnamese" lang="vi" hreflang="vi">Tiếng Việt</a></li>
+ <li class="interlanguage-link interwiki-zh"><a href="//zh.wikipedia.org/wiki/%E7%BE%8E%E5%9B%BD%E5%90%84%E5%B7%9E%E9%9D%A2%E7%A7%AF%E5%88%97%E8%A1%A8" title="美国各州面积列表 – Chinese" lang="zh" hreflang="zh">中文</a></li>
+ <li class="uls-p-lang-dummy"><a href="#"></a></li>
+ </ul>
+ <div class='after-portlet after-portlet-lang'><span class="wb-langlinks-edit wb-langlinks-link"><a action="edit" href="//www.wikidata.org/wiki/Q150340#sitelinks-wikipedia" text="Edit links" title="Edit interlanguage links" class="wbc-editpage">Edit links</a></span></div> </div>
+ </div>
+ </div>
+ </div>
+ <div id="footer" role="contentinfo">
+ <ul id="footer-info">
+ <li id="footer-info-lastmod"> This page was last modified on 29 June 2014 at 05:36.<br /></li>
+ <li id="footer-info-copyright">Text is available under the <a rel="license" href="//en.wikipedia.org/wiki/Wikipedia:Text_of_Creative_Commons_Attribution-ShareAlike_3.0_Unported_License">Creative Commons Attribution-ShareAlike License</a><a rel="license" href="//creativecommons.org/licenses/by-sa/3.0/" style="display:none;"></a>;
+additional terms may apply. By using this site, you agree to the <a href="//wikimediafoundation.org/wiki/Terms_of_Use">Terms of Use</a> and <a href="//wikimediafoundation.org/wiki/Privacy_policy">Privacy Policy</a>. Wikipedia® is a registered trademark of the <a href="//www.wikimediafoundation.org/">Wikimedia Foundation, Inc.</a>, a non-profit organization.</li>
+ </ul>
+ <ul id="footer-places">
+ <li id="footer-places-privacy"><a href="//wikimediafoundation.org/wiki/Privacy_policy" title="wikimedia:Privacy policy">Privacy policy</a></li>
+ <li id="footer-places-about"><a href="/wiki/Wikipedia:About" title="Wikipedia:About">About Wikipedia</a></li>
+ <li id="footer-places-disclaimer"><a href="/wiki/Wikipedia:General_disclaimer" title="Wikipedia:General disclaimer">Disclaimers</a></li>
+ <li id="footer-places-contact"><a href="//en.wikipedia.org/wiki/Wikipedia:Contact_us">Contact Wikipedia</a></li>
+ <li id="footer-places-developers"><a href="https://www.mediawiki.org/wiki/Special:MyLanguage/How_to_contribute">Developers</a></li>
+ <li id="footer-places-mobileview"><a href="//en.m.wikipedia.org/wiki/List_of_U.S._states_and_territories_by_area" class="noprint stopMobileRedirectToggle">Mobile view</a></li>
+ </ul>
+ <ul id="footer-icons" class="noprint">
+ <li id="footer-copyrightico">
+ <a href="//wikimediafoundation.org/"><img src="//bits.wikimedia.org/images/wikimedia-button.png" width="88" height="31" alt="Wikimedia Foundation"/></a>
+ </li>
+ <li id="footer-poweredbyico">
+ <a href="//www.mediawiki.org/"><img src="//bits.wikimedia.org/static-1.24wmf14/skins/common/images/poweredby_mediawiki_88x31.png" alt="Powered by MediaWiki" width="88" height="31" /></a>
+ </li>
+ </ul>
+ <div style="clear:both"></div>
+ </div>
+ <script>/*<![CDATA[*/window.jQuery && jQuery.ready();/*]]>*/</script><script>if(window.mw){
+mw.loader.state({"site":"loading","user":"ready","user.groups":"ready"});
+}</script>
+<script>if(window.mw){
+mw.loader.load(["ext.cite","mediawiki.toc","mobile.desktop","mediawiki.action.view.postEdit","mediawiki.user","mediawiki.hidpi","mediawiki.page.ready","mediawiki.searchSuggest","ext.gadget.teahouse","ext.gadget.ReferenceTooltips","ext.gadget.DRN-wizard","ext.gadget.charinsert","ext.gadget.refToolbar","mmv.bootstrap.autostart","ext.eventLogging.subscriber","ext.navigationTiming","schema.UniversalLanguageSelector","ext.uls.eventlogger","ext.uls.interlanguage"],null,true);
+}</script>
+<script src="//bits.wikimedia.org/en.wikipedia.org/load.php?debug=false&lang=en&modules=site&only=scripts&skin=vector&*"></script>
+<script>if(window.mw){
+mw.config.set({"wgBackendResponseTime":215,"wgHostname":"mw1025"});
+}</script>
+ </body>
+</html>
+
\ No newline at end of file
diff --git a/pandas/io/tests/test_html.py b/pandas/io/tests/test_html.py
index a7540fc716e1f..ecfc4c87d585d 100644
--- a/pandas/io/tests/test_html.py
+++ b/pandas/io/tests/test_html.py
@@ -86,26 +86,21 @@ def test_bs4_version_fails():
flavor='bs4')
-class TestReadHtml(tm.TestCase):
- @classmethod
- def setUpClass(cls):
- super(TestReadHtml, cls).setUpClass()
- _skip_if_none_of(('bs4', 'html5lib'))
-
+class ReadHtmlMixin(object):
def read_html(self, *args, **kwargs):
- kwargs['flavor'] = kwargs.get('flavor', self.flavor)
+ kwargs.setdefault('flavor', self.flavor)
return read_html(*args, **kwargs)
- def setup_data(self):
- self.spam_data = os.path.join(DATA_PATH, 'spam.html')
- self.banklist_data = os.path.join(DATA_PATH, 'banklist.html')
- def setup_flavor(self):
- self.flavor = 'bs4'
+class TestReadHtml(tm.TestCase, ReadHtmlMixin):
+ flavor = 'bs4'
+ spam_data = os.path.join(DATA_PATH, 'spam.html')
+ banklist_data = os.path.join(DATA_PATH, 'banklist.html')
- def setUp(self):
- self.setup_data()
- self.setup_flavor()
+ @classmethod
+ def setUpClass(cls):
+ super(TestReadHtml, cls).setUpClass()
+ _skip_if_none_of(('bs4', 'html5lib'))
def test_to_html_compat(self):
df = mkdf(4, 3, data_gen_f=lambda *args: rand(), c_idx_names=False,
@@ -262,8 +257,7 @@ def test_infer_types(self):
df2 = self.read_html(self.spam_data, 'Unit', index_col=0,
infer_types=True)
- with tm.assertRaises(AssertionError):
- assert_framelist_equal(df1, df2)
+ assert_framelist_equal(df1, df2)
def test_string_io(self):
with open(self.spam_data) as f:
@@ -568,7 +562,9 @@ def test_different_number_of_rows(self):
def test_parse_dates_list(self):
df = DataFrame({'date': date_range('1/1/2001', periods=10)})
expected = df.to_html()
- res = self.read_html(expected, parse_dates=[0], index_col=0)
+ res = self.read_html(expected, parse_dates=[1], index_col=0)
+ tm.assert_frame_equal(df, res[0])
+ res = self.read_html(expected, parse_dates=['date'], index_col=0)
tm.assert_frame_equal(df, res[0])
def test_parse_dates_combine(self):
@@ -588,6 +584,13 @@ def test_computer_sales_page(self):
with tm.assert_produces_warning(FutureWarning):
self.read_html(data, infer_types=False, header=[0, 1])
+ def test_wikipedia_states_table(self):
+ data = os.path.join(DATA_PATH, 'wikipedia_states.html')
+ assert os.path.isfile(data), '%r is not a file' % data
+ assert os.path.getsize(data), '%r is an empty file' % data
+ result = self.read_html(data, 'Arizona', header=1)[0]
+ nose.tools.assert_equal(result['sq mi'].dtype, np.dtype('float64'))
+
def _lang_enc(filename):
return os.path.splitext(os.path.basename(filename))[0].split('_')
@@ -637,31 +640,28 @@ def setUpClass(cls):
_skip_if_no(cls.flavor)
-class TestReadHtmlLxml(tm.TestCase):
+class TestReadHtmlLxml(tm.TestCase, ReadHtmlMixin):
+ flavor = 'lxml'
+
@classmethod
def setUpClass(cls):
super(TestReadHtmlLxml, cls).setUpClass()
_skip_if_no('lxml')
- def read_html(self, *args, **kwargs):
- self.flavor = ['lxml']
- kwargs['flavor'] = kwargs.get('flavor', self.flavor)
- return read_html(*args, **kwargs)
-
def test_data_fail(self):
from lxml.etree import XMLSyntaxError
spam_data = os.path.join(DATA_PATH, 'spam.html')
banklist_data = os.path.join(DATA_PATH, 'banklist.html')
with tm.assertRaises(XMLSyntaxError):
- self.read_html(spam_data, flavor=['lxml'])
+ self.read_html(spam_data)
with tm.assertRaises(XMLSyntaxError):
- self.read_html(banklist_data, flavor=['lxml'])
+ self.read_html(banklist_data)
def test_works_on_valid_markup(self):
filename = os.path.join(DATA_PATH, 'valid_markup.html')
- dfs = self.read_html(filename, index_col=0, flavor=['lxml'])
+ dfs = self.read_html(filename, index_col=0)
tm.assert_isinstance(dfs, list)
tm.assert_isinstance(dfs[0], DataFrame)
@@ -674,7 +674,9 @@ def test_fallback_success(self):
def test_parse_dates_list(self):
df = DataFrame({'date': date_range('1/1/2001', periods=10)})
expected = df.to_html()
- res = self.read_html(expected, parse_dates=[0], index_col=0)
+ res = self.read_html(expected, parse_dates=[1], index_col=0)
+ tm.assert_frame_equal(df, res[0])
+ res = self.read_html(expected, parse_dates=['date'], index_col=0)
tm.assert_frame_equal(df, res[0])
def test_parse_dates_combine(self):
@@ -694,8 +696,8 @@ def test_computer_sales_page(self):
def test_invalid_flavor():
url = 'google.com'
- nose.tools.assert_raises(ValueError, read_html, url, 'google',
- flavor='not a* valid**++ flaver')
+ with tm.assertRaises(ValueError):
+ read_html(url, 'google', flavor='not a* valid**++ flaver')
def get_elements_from_file(url, element='table'):
| Also, `infer_types` now has no effect _for real_ :)
closes #7762
closes #7032
| https://api.github.com/repos/pandas-dev/pandas/pulls/7851 | 2014-07-26T14:47:20Z | 2014-07-28T13:24:39Z | 2014-07-28T13:24:39Z | 2014-07-28T13:24:39Z |
BUG: fix multi-column sort that includes Categoricals / concat (GH7848/GH7864) | diff --git a/doc/source/v0.15.0.txt b/doc/source/v0.15.0.txt
index b0267c3dc5163..9279d8b0288c4 100644
--- a/doc/source/v0.15.0.txt
+++ b/doc/source/v0.15.0.txt
@@ -116,7 +116,8 @@ Categoricals in Series/DataFrame
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
:class:`~pandas.Categorical` can now be included in `Series` and `DataFrames` and gained new
-methods to manipulate. Thanks to Jan Schultz for much of this API/implementation. (:issue:`3943`, :issue:`5313`, :issue:`5314`, :issue:`7444`, :issue:`7839`).
+methods to manipulate. Thanks to Jan Schultz for much of this API/implementation. (:issue:`3943`, :issue:`5313`, :issue:`5314`,
+:issue:`7444`, :issue:`7839`, :issue:`7848`, :issue:`7864`).
For full docs, see the :ref:`Categorical introduction <categorical>` and the :ref:`API documentation <api.categorical>`.
diff --git a/pandas/core/categorical.py b/pandas/core/categorical.py
index d049a6d64aac3..f9ed6c2fecc3c 100644
--- a/pandas/core/categorical.py
+++ b/pandas/core/categorical.py
@@ -16,7 +16,6 @@
from pandas.core.config import get_option
from pandas.core import format as fmt
-
def _cat_compare_op(op):
def f(self, other):
if isinstance(other, (Categorical, np.ndarray)):
@@ -45,16 +44,6 @@ def _maybe_to_categorical(array):
return array
-def _get_codes_for_values(values, levels):
- from pandas.core.algorithms import _get_data_algo, _hashtables
- if values.dtype != levels.dtype:
- values = com._ensure_object(values)
- levels = com._ensure_object(levels)
- (hash_klass, vec_klass), vals = _get_data_algo(values, _hashtables)
- t = hash_klass(len(levels))
- t.map_locations(levels)
- return com._ensure_platform_int(t.lookup(values))
-
_codes_doc = """The level codes of this categorical.
Level codes are an array if integer which are the positions of the real
@@ -484,7 +473,7 @@ def argsort(self, ascending=True, **kwargs):
result = result[::-1]
return result
- def order(self, inplace=False, ascending=True, **kwargs):
+ def order(self, inplace=False, ascending=True, na_position='last', **kwargs):
""" Sorts the Category by level value returning a new Categorical by default.
Only ordered Categoricals can be sorted!
@@ -495,11 +484,11 @@ def order(self, inplace=False, ascending=True, **kwargs):
----------
ascending : boolean, default True
Sort ascending. Passing False sorts descending
+ inplace : boolean, default False
+ Do operation in place.
na_position : {'first', 'last'} (optional, default='last')
'first' puts NaNs at the beginning
'last' puts NaNs at the end
- inplace : boolean, default False
- Do operation in place.
Returns
-------
@@ -511,18 +500,22 @@ def order(self, inplace=False, ascending=True, **kwargs):
"""
if not self.ordered:
raise TypeError("Categorical not ordered")
- _sorted = np.sort(self._codes.copy())
+ if na_position not in ['last','first']:
+ raise ValueError('invalid na_position: {!r}'.format(na_position))
+
+ codes = np.sort(self._codes.copy())
if not ascending:
- _sorted = _sorted[::-1]
+ codes = codes[::-1]
+
if inplace:
- self._codes = _sorted
+ self._codes = codes
return
else:
- return Categorical(values=_sorted,levels=self.levels, ordered=self.ordered,
+ return Categorical(values=codes,levels=self.levels, ordered=self.ordered,
name=self.name, fastpath=True)
- def sort(self, inplace=True, ascending=True, **kwargs):
+ def sort(self, inplace=True, ascending=True, na_position='last', **kwargs):
""" Sorts the Category inplace by level value.
Only ordered Categoricals can be sorted!
@@ -533,11 +526,11 @@ def sort(self, inplace=True, ascending=True, **kwargs):
----------
ascending : boolean, default True
Sort ascending. Passing False sorts descending
+ inplace : boolean, default False
+ Do operation in place.
na_position : {'first', 'last'} (optional, default='last')
'first' puts NaNs at the beginning
'last' puts NaNs at the end
- inplace : boolean, default False
- Do operation in place.
Returns
-------
@@ -932,3 +925,20 @@ def describe(self):
result.index.name = 'levels'
result.columns = ['counts','freqs']
return result
+
+##### utility routines #####
+
+def _get_codes_for_values(values, levels):
+ """"
+ utility routine to turn values into codes given the specified levels
+ """
+
+ from pandas.core.algorithms import _get_data_algo, _hashtables
+ if values.dtype != levels.dtype:
+ values = com._ensure_object(values)
+ levels = com._ensure_object(levels)
+ (hash_klass, vec_klass), vals = _get_data_algo(values, _hashtables)
+ t = hash_klass(len(levels))
+ t.map_locations(levels)
+ return com._ensure_platform_int(t.lookup(values))
+
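The relocated `_get_codes_for_values` helper turns raw values into integer codes by looking up each value's position among the levels, with missing or unknown values mapped to the sentinel -1. A minimal pure-Python sketch of that mapping — pandas uses klib hash tables rather than a dict, but the idea is the same:

```python
def get_codes_for_values(values, levels):
    # Map each level to its position; values not found among the levels
    # (including missing values) get the sentinel code -1.
    lookup = {level: i for i, level in enumerate(levels)}
    return [lookup.get(v, -1) for v in values]

print(get_codes_for_values(['a', 'c', 'b', 'z'], ['a', 'b', 'c']))
# [0, 2, 1, -1]
```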
diff --git a/pandas/core/groupby.py b/pandas/core/groupby.py
index 48c3b4ece1d95..9659d4c3bd6e0 100644
--- a/pandas/core/groupby.py
+++ b/pandas/core/groupby.py
@@ -3415,34 +3415,38 @@ def _lexsort_indexer(keys, orders=None, na_position='last'):
orders = [True] * len(keys)
for key, order in zip(keys, orders):
- key = np.asanyarray(key)
- rizer = _hash.Factorizer(len(key))
- if not key.dtype == np.object_:
- key = key.astype('O')
+ # we are already a Categorical
+ if is_categorical_dtype(key):
+ c = key
- # factorize maps nans to na_sentinel=-1
- ids = rizer.factorize(key, sort=True)
- n = len(rizer.uniques)
- mask = (ids == -1)
+ # create the Categorical
+ else:
+ c = Categorical(key,ordered=True)
+
+ if na_position not in ['last','first']:
+ raise ValueError('invalid na_position: {!r}'.format(na_position))
+
+ n = len(c.levels)
+ codes = c.codes.copy()
+
+ mask = (c.codes == -1)
if order: # ascending
if na_position == 'last':
- ids = np.where(mask, n, ids)
+ codes = np.where(mask, n, codes)
elif na_position == 'first':
- ids += 1
- else:
- raise ValueError('invalid na_position: {!r}'.format(na_position))
+ codes += 1
else: # not order means descending
if na_position == 'last':
- ids = np.where(mask, n, n-ids-1)
+ codes = np.where(mask, n, n-codes-1)
elif na_position == 'first':
- ids = np.where(mask, 0, n-ids)
- else:
- raise ValueError('invalid na_position: {!r}'.format(na_position))
+ codes = np.where(mask, 0, n-codes)
if mask.any():
n += 1
+
shape.append(n)
- labels.append(ids)
+ labels.append(codes)
+
return _indexer_from_factorized(labels, shape)
def _nargsort(items, kind='quicksort', ascending=True, na_position='last'):
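The rewritten `_lexsort_indexer` above works purely on category codes: missing values carry code -1, and the ascending/na_position branches just remap codes before the lexsort. A pure-Python sketch of the remapping for a single key (illustrative only — pandas does this with NumPy arrays and then builds the final indexer via `_indexer_from_factorized`):

```python
def remap_codes(codes, n_levels, ascending=True, na_position='last'):
    # Mirror the four branches of the refactored _lexsort_indexer:
    # missing values (code -1) are pinned to one end, and descending
    # order is achieved by reversing the codes rather than the data.
    if na_position not in ('last', 'first'):
        raise ValueError('invalid na_position: %r' % na_position)
    out = []
    for code in codes:
        missing = code == -1
        if ascending:
            if na_position == 'last':
                out.append(n_levels if missing else code)
            else:  # 'first': shift every code up so -1 becomes 0
                out.append(code + 1)
        else:
            if na_position == 'last':
                out.append(n_levels if missing else n_levels - code - 1)
            else:
                out.append(0 if missing else n_levels - code)
    return out

codes = [2, -1, 0, 1]   # e.g. the codes of Categorical(['c', NaN, 'a', 'b'])
keys = remap_codes(codes, 3)
order = sorted(range(len(keys)), key=keys.__getitem__)
print(order)  # positions sorted as 'a', 'b', 'c', NaN
```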
diff --git a/pandas/core/internals.py b/pandas/core/internals.py
index 98e8d4f88104f..23ba06938825d 100644
--- a/pandas/core/internals.py
+++ b/pandas/core/internals.py
@@ -451,9 +451,9 @@ def to_native_types(self, slicer=None, na_rep='', **kwargs):
values[mask] = na_rep
return values.tolist()
- def _validate_merge(self, blocks):
- """ validate that we can merge these blocks """
- return True
+ def _concat_blocks(self, blocks, values):
+ """ return the block concatenation """
+ return self._holder(values[0])
# block actions ####
def copy(self, deep=True):
@@ -1639,15 +1639,19 @@ def _astype(self, dtype, copy=False, raise_on_error=True, values=None,
ndim=self.ndim,
placement=self.mgr_locs)
- def _validate_merge(self, blocks):
- """ validate that we can merge these blocks """
+ def _concat_blocks(self, blocks, values):
+ """
+ validate that we can merge these blocks
+
+ return the block concatenation
+ """
levels = self.values.levels
for b in blocks:
if not levels.equals(b.values.levels):
raise ValueError("incompatible levels in categorical block merge")
- return True
+ return self._holder(values[0], levels=levels)
def to_native_types(self, slicer=None, na_rep='', **kwargs):
""" convert to our native types format, slicing if desired """
@@ -4026,17 +4030,11 @@ def concatenate_join_units(join_units, concat_axis, copy):
else:
concat_values = com._concat_compat(to_concat, axis=concat_axis)
- # FIXME: optimization potential: if len(join_units) == 1, single join unit
- # is densified and sparsified back.
if any(unit.needs_block_conversion for unit in join_units):
# need to ask the join unit block to convert to the underlying repr for us
blocks = [ unit.block for unit in join_units if unit.block is not None ]
-
- # may need to validate this combination
- blocks[0]._validate_merge(blocks)
-
- return blocks[0]._holder(concat_values[0])
+ return blocks[0]._concat_blocks(blocks, concat_values)
else:
return concat_values
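The new `_concat_blocks` hook on the categorical block validates that every block shares identical levels before concatenating, so the level ordering survives a `pd.concat` (the GH7864 case tested below). A minimal sketch of that check using plain (codes, levels) pairs in place of blocks — names here are illustrative, not the pandas internals:

```python
def concat_categorical_codes(blocks):
    # Each "block" is a (codes, levels) pair; all blocks must agree on
    # their levels, mirroring CategoricalBlock._concat_blocks above.
    levels = blocks[0][1]
    for _, other in blocks[1:]:
        if other != levels:
            raise ValueError('incompatible levels in categorical block merge')
    combined = []
    for codes, _ in blocks:
        combined.extend(codes)
    return combined, levels

# The GH7864 frame split in two: grades coded against levels ['e', 'a', 'b']
first = ([1, 2, 2], ['e', 'a', 'b'])    # df[0:3]
second = ([1, 1, 0], ['e', 'a', 'b'])   # df[3:]
print(concat_categorical_codes([first, second]))
# ([1, 2, 2, 1, 1, 0], ['e', 'a', 'b'])
```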
diff --git a/pandas/tests/test_categorical.py b/pandas/tests/test_categorical.py
index b70e50eb3d030..642912805d06d 100644
--- a/pandas/tests/test_categorical.py
+++ b/pandas/tests/test_categorical.py
@@ -983,6 +983,47 @@ def f():
df.sort(columns=["unsort"], ascending=False)
self.assertRaises(TypeError, f)
+ # multi-columns sort
+ # GH 7848
+ df = DataFrame({"id":[6,5,4,3,2,1], "raw_grade":['a', 'b', 'b', 'a', 'a', 'e']})
+ df["grade"] = pd.Categorical(df["raw_grade"])
+ df['grade'].cat.reorder_levels(['b', 'e', 'a'])
+
+ # sorts 'grade' according to the order of the levels
+ result = df.sort(columns=['grade'])
+ expected = df.iloc[[1,2,5,0,3,4]]
+ tm.assert_frame_equal(result,expected)
+
+ # multi
+ result = df.sort(columns=['grade', 'id'])
+ expected = df.iloc[[2,1,5,4,3,0]]
+ tm.assert_frame_equal(result,expected)
+
+ # reverse
+ cat = Categorical(["a","c","c","b","d"], ordered=True)
+ res = cat.order(ascending=False)
+ exp_val = np.array(["d","c", "c", "b","a"],dtype=object)
+ exp_levels = np.array(["a","b","c","d"],dtype=object)
+ self.assert_numpy_array_equal(res.__array__(), exp_val)
+ self.assert_numpy_array_equal(res.levels, exp_levels)
+
+ # some NaN positions
+
+ cat = Categorical(["a","c","b","d", np.nan], ordered=True)
+ res = cat.order(ascending=False, na_position='last')
+ exp_val = np.array(["d","c","b","a", np.nan],dtype=object)
+ exp_levels = np.array(["a","b","c","d"],dtype=object)
+ # FIXME: IndexError: Out of bounds on buffer access (axis 0)
+ #self.assert_numpy_array_equal(res.__array__(), exp_val)
+ #self.assert_numpy_array_equal(res.levels, exp_levels)
+
+ cat = Categorical(["a","c","b","d", np.nan], ordered=True)
+ res = cat.order(ascending=False, na_position='first')
+ exp_val = np.array([np.nan, "d","c","b","a"],dtype=object)
+ exp_levels = np.array(["a","b","c","d"],dtype=object)
+ # FIXME: IndexError: Out of bounds on buffer access (axis 0)
+ #self.assert_numpy_array_equal(res.__array__(), exp_val)
+ #self.assert_numpy_array_equal(res.levels, exp_levels)
def test_slicing(self):
cat = Series(Categorical([1,2,3,4]))
@@ -1429,6 +1470,22 @@ def f():
pd.concat([df,df_wrong_levels])
self.assertRaises(ValueError, f)
+ # GH 7864
+    # make sure ordering is preserved
+ df = pd.DataFrame({"id":[1,2,3,4,5,6], "raw_grade":['a', 'b', 'b', 'a', 'a', 'e']})
+ df["grade"] = pd.Categorical(df["raw_grade"])
+ df['grade'].cat.reorder_levels(['e', 'a', 'b'])
+
+ df1 = df[0:3]
+ df2 = df[3:]
+
+ self.assert_numpy_array_equal(df['grade'].cat.levels, df1['grade'].cat.levels)
+ self.assert_numpy_array_equal(df['grade'].cat.levels, df2['grade'].cat.levels)
+
+ dfx = pd.concat([df1, df2])
+ dfx['grade'].cat.levels
+ self.assert_numpy_array_equal(df['grade'].cat.levels, dfx['grade'].cat.levels)
+
def test_append(self):
cat = pd.Categorical(["a","b"], levels=["a","b"])
vals = [1,2]
| CLN: refactor _lexsort_indexer to use Categoricals
closes #7848
closes #7864
| https://api.github.com/repos/pandas-dev/pandas/pulls/7850 | 2014-07-26T13:53:03Z | 2014-07-29T22:44:36Z | 2014-07-29T22:44:36Z | 2014-07-31T21:27:35Z |
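The multi-column sort exercised in the GH7848 test above can be modeled as a lexicographic sort on (level position, id): the categorical column sorts by each value's position in its level order, not alphabetically. A small pure-Python sketch reproducing the `df.sort(columns=['grade', 'id'])` expectation:

```python
levels = ['b', 'e', 'a']   # the reordered level order from the GH7848 test
rows = [(6, 'a'), (5, 'b'), (4, 'b'), (3, 'a'), (2, 'a'), (1, 'e')]
code = {lvl: i for i, lvl in enumerate(levels)}

# Sort by the grade's position in the level order first, then by id,
# mirroring df.sort(columns=['grade', 'id']).
ordered = sorted(rows, key=lambda row: (code[row[1]], row[0]))
print(ordered)
# [(4, 'b'), (5, 'b'), (1, 'e'), (2, 'a'), (3, 'a'), (6, 'a')]
```

This matches the test's expected row order `df.iloc[[2,1,5,4,3,0]]`: both 'b' rows first (by id), then 'e', then the 'a' rows.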
Docs: improve installation instructions, recommend Anaconda. | diff --git a/doc/source/install.rst b/doc/source/install.rst
index fe56b53d7cb82..c30a086295f00 100644
--- a/doc/source/install.rst
+++ b/doc/source/install.rst
@@ -2,64 +2,250 @@
.. currentmodule:: pandas
-************
+============
Installation
-************
+============
-You have the option to install an `official release
-<http://pypi.python.org/pypi/pandas>`__ or to build the `development version
-<http://github.com/pydata/pandas>`__. If you choose to install from source and
-are running Windows, you will have to ensure that you have a compatible C
-compiler (MinGW or Visual Studio) installed. `How-to install MinGW on Windows
-<http://docs.cython.org/src/tutorial/appendix.html>`__
+The easiest way for the majority of users to install pandas is to install it
+as part of the `Anaconda <http://docs.continuum.io/anaconda/>`__ distribution, a
+cross platform distribution for data analysis and scientific computing.
+This is the recommended installation method for most users.
+
+Instructions for installing from source,
+`PyPI <http://pypi.python.org/pypi/pandas>`__, various Linux distributions, or a
+`development version <http://github.com/pydata/pandas>`__ are also provided.
Python version support
-~~~~~~~~~~~~~~~~~~~~~~
+----------------------
Officially Python 2.6, 2.7, 3.2, 3.3, and 3.4.
+Installing pandas
+-----------------
+
+Trying out pandas, no installation required!
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+The easiest way to start experimenting with pandas doesn't involve installing
+pandas at all.
+
+`Wakari <https://wakari.io>`__ is a free service that provides a hosted
+`IPython Notebook <http://ipython.org/notebook.html>`__ service in the cloud.
+
+Simply create an account, and you will have access to pandas from within your browser via
+an `IPython Notebook <http://ipython.org/notebook.html>`__ in a few minutes.
+
+Installing pandas with Anaconda
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Installing pandas and the rest of the `NumPy <http://www.numpy.org/>`__ and
+`SciPy <http://www.scipy.org/>`__ stack can be a little
+difficult for inexperienced users.
+
+The simplest way to install not only pandas, but Python and the most popular
+packages that make up the `SciPy <http://www.scipy.org/>`__ stack
+(`IPython <http://ipython.org/>`__, `NumPy <http://www.numpy.org/>`__,
+`Matplotlib <http://matplotlib.org/>`__, ...) is with
+`Anaconda <http://docs.continuum.io/anaconda/>`__, a cross-platform
+(Linux, Mac OS X, Windows) Python distribution for data analytics and
+scientific computing.
+
+After running a simple installer, the user will have access to pandas and the
+rest of the `SciPy <http://www.scipy.org/>`__ stack without needing to install
+anything else, and without needing to wait for any software to be compiled.
+
+Installation instructions for `Anaconda <http://docs.continuum.io/anaconda/>`__
+`can be found here <http://docs.continuum.io/anaconda/install.html>`__.
+
+A full list of the packages available as part of the
+`Anaconda <http://docs.continuum.io/anaconda/>`__ distribution
+`can be found here <http://docs.continuum.io/anaconda/pkg-docs.html>`__.
+
+An additional advantage of installing with Anaconda is that you don't require
+admin rights to install it; it installs in the user's home directory, which
+also makes it trivial to delete Anaconda at a later date (just delete
+that folder).
+
+Installing pandas with Miniconda
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+The previous section outlined how to get pandas installed as part of the
+`Anaconda <http://docs.continuum.io/anaconda/>`__ distribution.
+However this approach means you will install well over one hundred packages
+and involves downloading the installer which is a few hundred megabytes in size.
+
+If you want more control over which packages are installed, or have limited
+internet bandwidth, then installing pandas with
+`Miniconda <http://conda.pydata.org/miniconda.html>`__ may be a better solution.
+
+`Conda <http://conda.pydata.org/docs/>`__ is the package manager that the
+`Anaconda <http://docs.continuum.io/anaconda/>`__ distribution is built upon.
+It is both cross-platform and language agnostic
+(it can play a similar role to a pip and virtualenv combination).
+
+`Miniconda <http://conda.pydata.org/miniconda.html>`__ allows you to create a
+minimal self contained Python installation, and then use the
+`Conda <http://conda.pydata.org/docs/>`__ command to install additional packages.
+
+First you will need `Conda <http://conda.pydata.org/docs/>`__ to be installed;
+downloading and running the `Miniconda
+<http://conda.pydata.org/miniconda.html>`__ installer will do this for you.
+The installer `can be found here <http://conda.pydata.org/miniconda.html>`__.
+
+The next step is to create a new conda environment (these are analogous to a
+virtualenv, but they also allow you to specify precisely which Python version
+to install). Run the following commands from a terminal window::
+
+ conda create -n name_of_my_env python
+
+This will create a minimal environment with only Python installed in it.
+To put yourself inside this environment, run::
+
+ source activate name_of_my_env
+
+On Windows the command is::
-Binary installers
-~~~~~~~~~~~~~~~~~
+ activate name_of_my_env
-.. _all-platforms:
+The final step required is to install pandas. This can be done with the
+following command::
-All platforms
-_____________
+ conda install pandas
-Stable installers available on `PyPI <http://pypi.python.org/pypi/pandas>`__
+To install a specific pandas version::
-Preliminary builds and installers on the `pandas download page <http://pandas.pydata.org/getpandas.html>`__ .
+ conda install pandas=0.13.1
-Overview
-___________
+To install other packages, IPython for example::
+
+ conda install ipython
+
+To install the full `Anaconda <http://docs.continuum.io/anaconda/>`__
+distribution::
+
+ conda install anaconda
+
+If you require any packages that are available to pip but not conda, simply
+install pip, and use pip to install these packages::
+
+ conda install pip
+ pip install django
+
+Installing from PyPI
+~~~~~~~~~~~~~~~~~~~~
+
+pandas can be installed via pip from
+`PyPI <http://pypi.python.org/pypi/pandas>`__.
+
+::
+
+ pip install pandas
+
+This will likely require the installation of a number of dependencies
+(including NumPy), requires a compiler to compile parts of the code, and
+can take a few minutes to complete.
+
+Installing using your Linux distribution's package manager.
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
.. csv-table::
- :header: "Platform", "Distribution", "Status", "Download / Repository Link", "Install method"
- :widths: 10, 10, 10, 20, 50
+ :header: "Distribution", "Status", "Download / Repository Link", "Install method"
+ :widths: 10, 10, 20, 50
+
+
+ Debian, stable, `official Debian repository <http://packages.debian.org/search?keywords=pandas&searchon=names&suite=all§ion=all>`__ , ``sudo apt-get install python-pandas``
+ Debian & Ubuntu, unstable (latest packages), `NeuroDebian <http://neuro.debian.net/index.html#how-to-use-this-repository>`__ , ``sudo apt-get install python-pandas``
+ Ubuntu, stable, `official Ubuntu repository <http://packages.ubuntu.com/search?keywords=pandas&searchon=names&suite=all§ion=all>`__ , ``sudo apt-get install python-pandas``
+ Ubuntu, unstable (daily builds), `PythonXY PPA <https://code.launchpad.net/~pythonxy/+archive/pythonxy-devel>`__; activate by: ``sudo add-apt-repository ppa:pythonxy/pythonxy-devel && sudo apt-get update``, ``sudo apt-get install python-pandas``
+ OpenSuse & Fedora, stable, `OpenSuse Repository <http://software.opensuse.org/package/python-pandas?search_term=pandas>`__ , ``zypper in python-pandas``
+
+
+
+
+
+
+
+
- Windows, all, stable, :ref:`all-platforms`, ``pip install pandas``
- Mac, all, stable, :ref:`all-platforms`, ``pip install pandas``
- Linux, Debian, stable, `official Debian repository <http://packages.debian.org/search?keywords=pandas&searchon=names&suite=all§ion=all>`__ , ``sudo apt-get install python-pandas``
- Linux, Debian & Ubuntu, unstable (latest packages), `NeuroDebian <http://neuro.debian.net/index.html#how-to-use-this-repository>`__ , ``sudo apt-get install python-pandas``
- Linux, Ubuntu, stable, `official Ubuntu repository <http://packages.ubuntu.com/search?keywords=pandas&searchon=names&suite=all§ion=all>`__ , ``sudo apt-get install python-pandas``
- Linux, Ubuntu, unstable (daily builds), `PythonXY PPA <https://code.launchpad.net/~pythonxy/+archive/pythonxy-devel>`__; activate by: ``sudo add-apt-repository ppa:pythonxy/pythonxy-devel && sudo apt-get update``, ``sudo apt-get install python-pandas``
- Linux, OpenSuse & Fedora, stable, `OpenSuse Repository <http://software.opensuse.org/package/python-pandas?search_term=pandas>`__ , ``zypper in python-pandas``
+Installing from source
+~~~~~~~~~~~~~~~~~~~~~~
+.. note::
+ Installing from the git repository requires a recent installation of `Cython
+ <http://cython.org>`__ as the cythonized C sources are no longer checked
+ into source control. Released source distributions will contain the built C
+ files. I recommend installing the latest Cython via ``easy_install -U
+ Cython``
+The source code is hosted at http://github.com/pydata/pandas; it can be checked
+out using git and compiled / installed like so:
+::
+ git clone git://github.com/pydata/pandas.git
+ cd pandas
+ python setup.py install
+Make sure you have Cython installed when installing from the repository,
+rather than a tarball or PyPI.
+On Windows, I suggest installing the MinGW compiler suite following the
+directions linked to above. Once configured properly, run the following on the
+command line:
+
+::
+
+ python setup.py build --compiler=mingw32
+ python setup.py install
+
+Note that you will not be able to import pandas if you open an interpreter in
+the source directory unless you build the C extensions in place:
+
+::
+
+ python setup.py build_ext --inplace
+
+The most recent version of MinGW (any installer dated after 2011-08-03)
+has removed the '-mno-cygwin' option but Distutils has not yet been updated to
+reflect that. Thus, you may run into an error like "unrecognized command line
+option '-mno-cygwin'". Until the bug is fixed in Distutils, you may need to
+install a slightly older version of MinGW (2011-08-02 installer).
+
+Running the test suite
+~~~~~~~~~~~~~~~~~~~~~~
+
+pandas is equipped with an exhaustive set of unit tests covering about 97% of
+the codebase as of this writing. To run it on your machine to verify that
+everything is working (and you have all of the dependencies, soft and hard,
+installed), make sure you have `nose
+<http://readthedocs.org/docs/nose/en/latest/>`__ and run:
+::
+ $ nosetests pandas
+ ..........................................................................
+ .......................S..................................................
+ ..........................................................................
+ ..........................................................................
+ ..........................................................................
+ ..........................................................................
+ ..........................................................................
+ ..........................................................................
+ ..........................................................................
+ ..........................................................................
+ .................S........................................................
+ ....
+ ----------------------------------------------------------------------
+ Ran 818 tests in 21.631s
+ OK (SKIP=2)
Dependencies
-~~~~~~~~~~~~
+------------
* `NumPy <http://www.numpy.org>`__: 1.6.1 or higher
* `python-dateutil <http://labix.org/python-dateutil>`__ 1.5
@@ -165,75 +351,3 @@ Optional Dependencies
distribution like `Enthought Canopy
<http://enthought.com/products/canopy>`__ may be worth considering.
-Installing from source
-~~~~~~~~~~~~~~~~~~~~~~
-.. note::
-
- Installing from the git repository requires a recent installation of `Cython
- <http://cython.org>`__ as the cythonized C sources are no longer checked
- into source control. Released source distributions will contain the built C
- files. I recommend installing the latest Cython via ``easy_install -U
- Cython``
-
-The source code is hosted at http://github.com/pydata/pandas, it can be checked
-out using git and compiled / installed like so:
-
-::
-
- git clone git://github.com/pydata/pandas.git
- cd pandas
- python setup.py install
-
-Make sure you have Cython installed when installing from the repository,
-rather then a tarball or pypi.
-
-On Windows, I suggest installing the MinGW compiler suite following the
-directions linked to above. Once configured property, run the following on the
-command line:
-
-::
-
- python setup.py build --compiler=mingw32
- python setup.py install
-
-Note that you will not be able to import pandas if you open an interpreter in
-the source directory unless you build the C extensions in place:
-
-::
-
- python setup.py build_ext --inplace
-
-The most recent version of MinGW (any installer dated after 2011-08-03)
-has removed the '-mno-cygwin' option but Distutils has not yet been updated to
-reflect that. Thus, you may run into an error like "unrecognized command line
-option '-mno-cygwin'". Until the bug is fixed in Distutils, you may need to
-install a slightly older version of MinGW (2011-08-02 installer).
-
-Running the test suite
-~~~~~~~~~~~~~~~~~~~~~~
-
-pandas is equipped with an exhaustive set of unit tests covering about 97% of
-the codebase as of this writing. To run it on your machine to verify that
-everything is working (and you have all of the dependencies, soft and hard,
-installed), make sure you have `nose
-<http://readthedocs.org/docs/nose/en/latest/>`__ and run:
-
-::
-
- $ nosetests pandas
- ..........................................................................
- .......................S..................................................
- ..........................................................................
- ..........................................................................
- ..........................................................................
- ..........................................................................
- ..........................................................................
- ..........................................................................
- ..........................................................................
- ..........................................................................
- .................S........................................................
- ....
- ----------------------------------------------------------------------
- Ran 818 tests in 21.631s
-
- OK (SKIP=2)
| I believe installing pandas via Anaconda should definitely be the recommended installation method for most users.
It is cross-platform, doesn't require a compiler, and will also install all requirements for the user (including NumPy).
| https://api.github.com/repos/pandas-dev/pandas/pulls/7849 | 2014-07-26T10:18:17Z | 2014-07-29T09:58:55Z | 2014-07-29T09:58:55Z | 2014-07-31T02:22:40Z |
COMPAT: SettingWithCopy will now warn when slices which can generate views are then set | diff --git a/doc/source/v0.15.0.txt b/doc/source/v0.15.0.txt
index 39ff807d6b1e4..51ddbdd4dbee6 100644
--- a/doc/source/v0.15.0.txt
+++ b/doc/source/v0.15.0.txt
@@ -91,6 +91,22 @@ API changes
df
df.dtypes
+- ``SettingWithCopy`` raise/warnings (according to the option ``mode.chained_assignment``) will now be issued when setting a value on a sliced mixed-dtype DataFrame using chained-assignment. (:issue:`7845`)
+
+ .. code-block:: python
+
+ In [1]: df = DataFrame(np.arange(0,9), columns=['count'])
+
+ In [2]: df['group'] = 'b'
+
+ In [3]: df.iloc[0:5]['group'] = 'a'
+ /usr/local/bin/ipython:1: SettingWithCopyWarning:
+ A value is trying to be set on a copy of a slice from a DataFrame.
+ Try using .loc[row_indexer,col_indexer] = value instead
+
+ See the the caveats in the documentation: http://pandas.pydata.org/pandas-docs/stable/indexing.html#indexing-view-versus-copy
+
+
.. _whatsnew_0150.cat:
Categoricals in Series/DataFrame
diff --git a/pandas/core/frame.py b/pandas/core/frame.py
index 636dedfbeb7b7..4b8d13ce30355 100644
--- a/pandas/core/frame.py
+++ b/pandas/core/frame.py
@@ -2075,13 +2075,12 @@ def _set_item(self, key, value):
Add series to DataFrame in specified column.
If series is a numpy-array (not a Series/TimeSeries), it must be the
- same length as the DataFrame's index or an error will be thrown.
+ same length as the DataFrames index or an error will be thrown.
- Series/TimeSeries will be conformed to the DataFrame's index to
+ Series/TimeSeries will be conformed to the DataFrames index to
ensure homogeneity.
"""
- is_existing = key in self.columns
self._ensure_valid_index(value)
value = self._sanitize_column(key, value)
NDFrame._set_item(self, key, value)
@@ -2089,7 +2088,7 @@ def _set_item(self, key, value):
# check if we are modifying a copy
# try to set first as we want an invalid
# value exeption to occur first
- if is_existing:
+ if len(self):
self._check_setitem_copy()
def insert(self, loc, column, value, allow_duplicates=False):
diff --git a/pandas/core/generic.py b/pandas/core/generic.py
index 71da20af2ad43..cef18c5ad3c2b 100644
--- a/pandas/core/generic.py
+++ b/pandas/core/generic.py
@@ -1133,7 +1133,13 @@ def _slice(self, slobj, axis=0, typ=None):
"""
axis = self._get_block_manager_axis(axis)
- return self._constructor(self._data.get_slice(slobj, axis=axis))
+ result = self._constructor(self._data.get_slice(slobj, axis=axis))
+
+ # this could be a view
+ # but only in a single-dtyped view slicable case
+ is_copy = axis!=0 or result._is_view
+ result._set_is_copy(self, copy=is_copy)
+ return result
def _set_item(self, key, value):
self._data.set(key, value)
@@ -1149,10 +1155,28 @@ def _set_is_copy(self, ref=None, copy=True):
self.is_copy = None
def _check_setitem_copy(self, stacklevel=4, t='setting'):
- """ validate if we are doing a settitem on a chained copy.
+ """
+
+ validate if we are doing a settitem on a chained copy.
If you call this function, be sure to set the stacklevel such that the
- user will see the error *at the level of setting*"""
+ user will see the error *at the level of setting*
+
+ It is technically possible to figure out that we are setting on
+ a copy even WITH a multi-dtyped pandas object. In other words, some blocks
+ may be views while other are not. Currently _is_view will ALWAYS return False
+ for multi-blocks to avoid having to handle this case.
+
+ df = DataFrame(np.arange(0,9), columns=['count'])
+ df['group'] = 'b'
+
+ # this technically need not raise SettingWithCopy if both are view (which is not
+ # generally guaranteed but is usually True
+ # however, this is in general not a good practice and we recommend using .loc
+ df.iloc[0:5]['group'] = 'a'
+
+ """
+
if self.is_copy:
value = config.get_option('mode.chained_assignment')
@@ -1170,14 +1194,18 @@ def _check_setitem_copy(self, stacklevel=4, t='setting'):
pass
if t == 'referant':
- t = ("A value is trying to be set on a copy of a slice from a "
+ t = ("\n"
+ "A value is trying to be set on a copy of a slice from a "
"DataFrame\n\n"
- "See the the caveats in the documentation: http://pandas.pydata.org/pandas-docs/stable/indexing.html#indexing-view-versus-copy")
+ "See the the caveats in the documentation: "
+ "http://pandas.pydata.org/pandas-docs/stable/indexing.html#indexing-view-versus-copy")
else:
- t = ("A value is trying to be set on a copy of a slice from a "
- "DataFrame.\nTry using .loc[row_index,col_indexer] = value "
- "instead\n\n"
- "See the the caveats in the documentation: http://pandas.pydata.org/pandas-docs/stable/indexing.html#indexing-view-versus-copy")
+ t = ("\n"
+ "A value is trying to be set on a copy of a slice from a "
+ "DataFrame.\n"
+ "Try using .loc[row_indexer,col_indexer] = value instead\n\n"
+ "See the the caveats in the documentation: "
+ "http://pandas.pydata.org/pandas-docs/stable/indexing.html#indexing-view-versus-copy")
if value == 'raise':
raise SettingWithCopyError(t)
diff --git a/pandas/core/groupby.py b/pandas/core/groupby.py
index 08d3fbe335f35..48c3b4ece1d95 100644
--- a/pandas/core/groupby.py
+++ b/pandas/core/groupby.py
@@ -25,7 +25,7 @@
notnull, _DATELIKE_DTYPES, is_numeric_dtype,
is_timedelta64_dtype, is_datetime64_dtype,
is_categorical_dtype)
-
+from pandas.core.config import option_context
from pandas import _np_version_under1p7
import pandas.lib as lib
from pandas.lib import Timestamp
@@ -635,7 +635,9 @@ def apply(self, func, *args, **kwargs):
@wraps(func)
def f(g):
- return func(g, *args, **kwargs)
+ # ignore SettingWithCopy here in case the user mutates
+ with option_context('mode.chained_assignment',None):
+ return func(g, *args, **kwargs)
return self._python_apply_general(f)
diff --git a/pandas/core/internals.py b/pandas/core/internals.py
index cad7b579aa554..98e8d4f88104f 100644
--- a/pandas/core/internals.py
+++ b/pandas/core/internals.py
@@ -2519,6 +2519,14 @@ def is_view(self):
""" return a boolean if we are a single block and are a view """
if len(self.blocks) == 1:
return self.blocks[0].values.base is not None
+
+ # It is technically possible to figure out which blocks are views
+ # e.g. [ b.values.base is not None for b in self.blocks ]
+ # but then we have the case of possibly some blocks being a view
+ # and some blocks not. setting in theory is possible on the non-view
+ # blocks w/o causing a SettingWithCopy raise/warn. But this is a bit
+ # complicated
+
return False
def get_bool_data(self, copy=False):
diff --git a/pandas/tests/test_frame.py b/pandas/tests/test_frame.py
index 0dd729d58f174..c5cacda17edba 100644
--- a/pandas/tests/test_frame.py
+++ b/pandas/tests/test_frame.py
@@ -418,8 +418,12 @@ def test_setitem(self):
self.frame['col8'] = 'foo'
assert((self.frame['col8'] == 'foo').all())
+ # this is partially a view (e.g. some blocks are view)
+ # so raise/warn
smaller = self.frame[:2]
- smaller['col10'] = ['1', '2']
+ def f():
+ smaller['col10'] = ['1', '2']
+ self.assertRaises(com.SettingWithCopyError, f)
self.assertEqual(smaller['col10'].dtype, np.object_)
self.assertTrue((smaller['col10'] == ['1', '2']).all())
@@ -830,8 +834,11 @@ def test_fancy_getitem_slice_mixed(self):
self.assertEqual(sliced['D'].dtype, np.float64)
# get view with single block
+ # setting it triggers setting with copy
sliced = self.frame.ix[:, -3:]
- sliced['C'] = 4.
+ def f():
+ sliced['C'] = 4.
+ self.assertRaises(com.SettingWithCopyError, f)
self.assertTrue((self.frame['C'] == 4).all())
def test_fancy_setitem_int_labels(self):
@@ -1618,7 +1625,10 @@ def test_irow(self):
assert_frame_equal(result, expected)
# verify slice is view
- result[2] = 0.
+ # setting it makes it raise/warn
+ def f():
+ result[2] = 0.
+ self.assertRaises(com.SettingWithCopyError, f)
exp_col = df[2].copy()
exp_col[4:8] = 0.
assert_series_equal(df[2], exp_col)
@@ -1645,7 +1655,10 @@ def test_icol(self):
assert_frame_equal(result, expected)
# verify slice is view
- result[8] = 0.
+ # and that we are setting a copy
+ def f():
+ result[8] = 0.
+ self.assertRaises(com.SettingWithCopyError, f)
self.assertTrue((df[8] == 0).all())
# list of integers
diff --git a/pandas/tests/test_groupby.py b/pandas/tests/test_groupby.py
index ea4d66074e65a..5adaacbeb9d29 100644
--- a/pandas/tests/test_groupby.py
+++ b/pandas/tests/test_groupby.py
@@ -13,6 +13,7 @@
from pandas.core.groupby import (SpecificationError, DataError,
_nargsort, _lexsort_indexer)
from pandas.core.series import Series
+from pandas.core.config import option_context
from pandas.util.testing import (assert_panel_equal, assert_frame_equal,
assert_series_equal, assert_almost_equal,
assert_index_equal, assertRaisesRegexp)
@@ -2299,9 +2300,11 @@ def f(group):
self.assertEqual(result['d'].dtype, np.float64)
- for key, group in grouped:
- res = f(group)
- assert_frame_equal(res, result.ix[key])
+ # this is by definition a mutating operation!
+ with option_context('mode.chained_assignment',None):
+ for key, group in grouped:
+ res = f(group)
+ assert_frame_equal(res, result.ix[key])
def test_groupby_wrong_multi_labels(self):
from pandas import read_csv
diff --git a/pandas/tests/test_indexing.py b/pandas/tests/test_indexing.py
index 0e962800fef08..64e9d18d0aa2f 100644
--- a/pandas/tests/test_indexing.py
+++ b/pandas/tests/test_indexing.py
@@ -3246,6 +3246,13 @@ def f():
df['column1'] = df['column1'] + 'c'
str(df)
+ # from SO: http://stackoverflow.com/questions/24054495/potential-bug-setting-value-for-undefined-column-using-iloc
+ df = DataFrame(np.arange(0,9), columns=['count'])
+ df['group'] = 'b'
+ def f():
+ df.iloc[0:5]['group'] = 'a'
+ self.assertRaises(com.SettingWithCopyError, f)
+
def test_float64index_slicing_bug(self):
# GH 5557, related to slicing a float index
ser = {256: 2321.0, 1: 78.0, 2: 2716.0, 3: 0.0, 4: 369.0, 5: 0.0, 6: 269.0, 7: 0.0, 8: 0.0, 9: 0.0, 10: 3536.0, 11: 0.0, 12: 24.0, 13: 0.0, 14: 931.0, 15: 0.0, 16: 101.0, 17: 78.0, 18: 9643.0, 19: 0.0, 20: 0.0, 21: 0.0, 22: 63761.0, 23: 0.0, 24: 446.0, 25: 0.0, 26: 34773.0, 27: 0.0, 28: 729.0, 29: 78.0, 30: 0.0, 31: 0.0, 32: 3374.0, 33: 0.0, 34: 1391.0, 35: 0.0, 36: 361.0, 37: 0.0, 38: 61808.0, 39: 0.0, 40: 0.0, 41: 0.0, 42: 6677.0, 43: 0.0, 44: 802.0, 45: 0.0, 46: 2691.0, 47: 0.0, 48: 3582.0, 49: 0.0, 50: 734.0, 51: 0.0, 52: 627.0, 53: 70.0, 54: 2584.0, 55: 0.0, 56: 324.0, 57: 0.0, 58: 605.0, 59: 0.0, 60: 0.0, 61: 0.0, 62: 3989.0, 63: 10.0, 64: 42.0, 65: 0.0, 66: 904.0, 67: 0.0, 68: 88.0, 69: 70.0, 70: 8172.0, 71: 0.0, 72: 0.0, 73: 0.0, 74: 64902.0, 75: 0.0, 76: 347.0, 77: 0.0, 78: 36605.0, 79: 0.0, 80: 379.0, 81: 70.0, 82: 0.0, 83: 0.0, 84: 3001.0, 85: 0.0, 86: 1630.0, 87: 7.0, 88: 364.0, 89: 0.0, 90: 67404.0, 91: 9.0, 92: 0.0, 93: 0.0, 94: 7685.0, 95: 0.0, 96: 1017.0, 97: 0.0, 98: 2831.0, 99: 0.0, 100: 2963.0, 101: 0.0, 102: 854.0, 103: 0.0, 104: 0.0, 105: 0.0, 106: 0.0, 107: 0.0, 108: 0.0, 109: 0.0, 110: 0.0, 111: 0.0, 112: 0.0, 113: 0.0, 114: 0.0, 115: 0.0, 116: 0.0, 117: 0.0, 118: 0.0, 119: 0.0, 120: 0.0, 121: 0.0, 122: 0.0, 123: 0.0, 124: 0.0, 125: 0.0, 126: 67744.0, 127: 22.0, 128: 264.0, 129: 0.0, 260: 197.0, 268: 0.0, 265: 0.0, 269: 0.0, 261: 0.0, 266: 1198.0, 267: 0.0, 262: 2629.0, 258: 775.0, 257: 0.0, 263: 0.0, 259: 0.0, 264: 163.0, 250: 10326.0, 251: 0.0, 252: 1228.0, 253: 0.0, 254: 2769.0, 255: 0.0}
diff --git a/pandas/tools/pivot.py b/pandas/tools/pivot.py
index 9132fea089fe7..ada13d6f4bccb 100644
--- a/pandas/tools/pivot.py
+++ b/pandas/tools/pivot.py
@@ -228,9 +228,14 @@ def _all_key(key):
if len(rows) > 0:
margin = data[rows + values].groupby(rows).agg(aggfunc)
cat_axis = 1
+
for key, piece in table.groupby(level=0, axis=cat_axis):
all_key = _all_key(key)
+
+ # we are going to mutate this, so need to copy!
+ piece = piece.copy()
piece[all_key] = margin[key]
+
table_pieces.append(piece)
margin_keys.append(all_key)
else:
diff --git a/pandas/tools/tests/test_merge.py b/pandas/tools/tests/test_merge.py
index 4601ad0784562..df2f270346e20 100644
--- a/pandas/tools/tests/test_merge.py
+++ b/pandas/tools/tests/test_merge.py
@@ -1320,6 +1320,7 @@ def test_append_many(self):
result = chunks[0].append(chunks[1:])
tm.assert_frame_equal(result, self.frame)
+ chunks[-1] = chunks[-1].copy()
chunks[-1]['foo'] = 'bar'
result = chunks[0].append(chunks[1:])
tm.assert_frame_equal(result.ix[:, self.frame.columns], self.frame)
@@ -1673,7 +1674,7 @@ def test_join_dups(self):
def test_handle_empty_objects(self):
df = DataFrame(np.random.randn(10, 4), columns=list('abcd'))
- baz = df[:5]
+ baz = df[:5].copy()
baz['foo'] = 'bar'
empty = df[5:5]
| https://api.github.com/repos/pandas-dev/pandas/pulls/7845 | 2014-07-26T01:13:57Z | 2014-07-26T12:47:09Z | 2014-07-26T12:47:09Z | 2014-07-26T12:47:09Z | |
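A minimal sketch of the behavior this change introduces, assuming a pandas build that includes it (0.15.0 or later); the exact warning wording varies across versions:

```python
import warnings

import numpy as np
import pandas as pd

# The case from the PR: a mixed-dtype frame, then chained assignment
# through a positional slice. With this change pandas warns (or raises,
# depending on the mode.chained_assignment option) instead of staying silent.
df = pd.DataFrame(np.arange(0, 9), columns=['count'])
df['group'] = 'b'

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter('always')
    df.iloc[0:5]['group'] = 'a'  # chained assignment on a slice

messages = [str(w.message) for w in caught]
print(messages)
```

On builds from this era the captured message mentions setting a value on a copy of a slice; under newer copy-on-write builds the wording shifts toward chained assignment, but a warning is still emitted.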
BUG/VIS: rot and fontsize are not applied to timeseries plots | diff --git a/doc/source/v0.15.0.txt b/doc/source/v0.15.0.txt
index a30400322716c..bf9dfa266817b 100644
--- a/doc/source/v0.15.0.txt
+++ b/doc/source/v0.15.0.txt
@@ -305,7 +305,7 @@ Bug Fixes
(except for the case of two DataFrames with ``pairwise=False``, where behavior is unchanged) (:issue:`7542`)
-
+- Bug in ``DataFrame.plot`` and ``Series.plot`` may ignore ``rot`` and ``fontsize`` keywords (:issue:`7844`)
- Bug in ``DatetimeIndex.value_counts`` doesn't preserve tz (:issue:`7735`)
diff --git a/pandas/tests/test_graphics.py b/pandas/tests/test_graphics.py
index f9ae058c065e3..5d9b43e48e3c1 100644
--- a/pandas/tests/test_graphics.py
+++ b/pandas/tests/test_graphics.py
@@ -240,21 +240,33 @@ def _check_ticks_props(self, axes, xlabelsize=None, xrot=None,
yrot : number
expected yticks rotation
"""
+ from matplotlib.ticker import NullFormatter
axes = self._flatten_visible(axes)
for ax in axes:
if xlabelsize or xrot:
- xtick = ax.get_xticklabels()[0]
- if xlabelsize is not None:
- self.assertAlmostEqual(xtick.get_fontsize(), xlabelsize)
- if xrot is not None:
- self.assertAlmostEqual(xtick.get_rotation(), xrot)
+ if isinstance(ax.xaxis.get_minor_formatter(), NullFormatter):
+ # If minor ticks has NullFormatter, rot / fontsize are not retained
+ labels = ax.get_xticklabels()
+ else:
+ labels = ax.get_xticklabels() + ax.get_xticklabels(minor=True)
+
+ for label in labels:
+ if xlabelsize is not None:
+ self.assertAlmostEqual(label.get_fontsize(), xlabelsize)
+ if xrot is not None:
+ self.assertAlmostEqual(label.get_rotation(), xrot)
if ylabelsize or yrot:
- ytick = ax.get_yticklabels()[0]
- if ylabelsize is not None:
- self.assertAlmostEqual(ytick.get_fontsize(), ylabelsize)
- if yrot is not None:
- self.assertAlmostEqual(ytick.get_rotation(), yrot)
+ if isinstance(ax.yaxis.get_minor_formatter(), NullFormatter):
+ labels = ax.get_yticklabels()
+ else:
+ labels = ax.get_yticklabels() + ax.get_yticklabels(minor=True)
+
+ for label in labels:
+ if ylabelsize is not None:
+ self.assertAlmostEqual(label.get_fontsize(), ylabelsize)
+ if yrot is not None:
+ self.assertAlmostEqual(label.get_rotation(), yrot)
def _check_ax_scales(self, axes, xaxis='linear', yaxis='linear'):
"""
@@ -872,6 +884,7 @@ def test_plot(self):
self._check_visible(ax.xaxis)
self._check_visible(ax.get_xticklabels())
self._check_visible([ax.xaxis.get_label()])
+ self._check_ticks_props(ax, xrot=30)
_check_plot_works(df.plot, title='blah')
@@ -1069,14 +1082,16 @@ def test_subplots_timeseries(self):
self._check_visible(axes[-1].get_xticklabels(minor=True))
self._check_visible(axes[-1].xaxis.get_label())
self._check_visible(axes[-1].get_yticklabels())
+ self._check_ticks_props(axes, xrot=30)
- axes = df.plot(kind=kind, subplots=True, sharex=False)
+ axes = df.plot(kind=kind, subplots=True, sharex=False, rot=45, fontsize=7)
for ax in axes:
self._check_visible(ax.xaxis)
self._check_visible(ax.get_xticklabels())
self._check_visible(ax.get_xticklabels(minor=True))
self._check_visible(ax.xaxis.get_label())
self._check_visible(ax.get_yticklabels())
+ self._check_ticks_props(ax, xlabelsize=7, xrot=45)
def test_negative_log(self):
df = - DataFrame(rand(6, 4),
@@ -1363,7 +1378,17 @@ def test_plot_bar(self):
_check_plot_works(df.plot, kind='bar')
df = DataFrame({'a': [0, 1], 'b': [1, 0]})
- _check_plot_works(df.plot, kind='bar')
+ ax = _check_plot_works(df.plot, kind='bar')
+ self._check_ticks_props(ax, xrot=90)
+
+ ax = df.plot(kind='bar', rot=35, fontsize=10)
+ self._check_ticks_props(ax, xrot=35, xlabelsize=10)
+
+ ax = _check_plot_works(df.plot, kind='barh')
+ self._check_ticks_props(ax, yrot=0)
+
+ ax = df.plot(kind='barh', rot=55, fontsize=11)
+ self._check_ticks_props(ax, yrot=55, ylabelsize=11)
def _check_bar_alignment(self, df, kind='bar', stacked=False,
subplots=False, align='center',
@@ -1591,6 +1616,10 @@ def test_kde(self):
ax = _check_plot_works(df.plot, kind='kde')
expected = [com.pprint_thing(c) for c in df.columns]
self._check_legend_labels(ax, labels=expected)
+ self._check_ticks_props(ax, xrot=0)
+
+ ax = df.plot(kind='kde', rot=20, fontsize=5)
+ self._check_ticks_props(ax, xrot=20, xlabelsize=5)
axes = _check_plot_works(df.plot, kind='kde', subplots=True)
self._check_axes_shape(axes, axes_num=4, layout=(4, 1))
diff --git a/pandas/tools/plotting.py b/pandas/tools/plotting.py
index ea7f963f79f28..40d848a48d103 100644
--- a/pandas/tools/plotting.py
+++ b/pandas/tools/plotting.py
@@ -753,6 +753,7 @@ class MPLPlot(object):
"""
_default_rot = 0
+ orientation = None
_pop_attributes = ['label', 'style', 'logy', 'logx', 'loglog',
'mark_right', 'stacked']
@@ -788,7 +789,14 @@ def __init__(self, data, kind=None, by=None, subplots=False, sharex=True,
self.use_index = use_index
self.fontsize = fontsize
- self.rot = rot
+
+ if rot is not None:
+ self.rot = rot
+ else:
+ if isinstance(self._default_rot, dict):
+ self.rot = self._default_rot[self.kind]
+ else:
+ self.rot = self._default_rot
if grid is None:
grid = False if secondary_y else True
@@ -1018,14 +1026,30 @@ def _adorn_subplots(self):
else:
self.axes[0].set_title(self.title)
- if self._need_to_set_index:
- labels = [com.pprint_thing(key) for key in self.data.index]
- labels = dict(zip(range(len(self.data.index)), labels))
+ labels = [com.pprint_thing(key) for key in self.data.index]
+ labels = dict(zip(range(len(self.data.index)), labels))
- for ax_ in self.axes:
- # ax_.set_xticks(self.xticks)
- xticklabels = [labels.get(x, '') for x in ax_.get_xticks()]
- ax_.set_xticklabels(xticklabels, rotation=self.rot)
+ for ax in self.axes:
+ if self.orientation == 'vertical' or self.orientation is None:
+ if self._need_to_set_index:
+ xticklabels = [labels.get(x, '') for x in ax.get_xticks()]
+ ax.set_xticklabels(xticklabels)
+ self._apply_axis_properties(ax.xaxis, rot=self.rot,
+ fontsize=self.fontsize)
+ elif self.orientation == 'horizontal':
+ if self._need_to_set_index:
+ yticklabels = [labels.get(y, '') for y in ax.get_yticks()]
+ ax.set_yticklabels(yticklabels)
+ self._apply_axis_properties(ax.yaxis, rot=self.rot,
+ fontsize=self.fontsize)
+
+ def _apply_axis_properties(self, axis, rot=None, fontsize=None):
+ labels = axis.get_majorticklabels() + axis.get_minorticklabels()
+ for label in labels:
+ if rot is not None:
+ label.set_rotation(rot)
+ if fontsize is not None:
+ label.set_fontsize(fontsize)
@property
def legend_title(self):
@@ -1336,6 +1360,8 @@ def _get_errorbars(self, label=None, index=None, xerr=True, yerr=True):
class KdePlot(MPLPlot):
+ orientation = 'vertical'
+
def __init__(self, data, bw_method=None, ind=None, **kwargs):
MPLPlot.__init__(self, data, **kwargs)
self.bw_method=bw_method
@@ -1480,6 +1506,9 @@ def _post_plot_logic(self):
class LinePlot(MPLPlot):
+ _default_rot = 30
+ orientation = 'vertical'
+
def __init__(self, data, **kwargs):
MPLPlot.__init__(self, data, **kwargs)
if self.stacked:
@@ -1657,16 +1686,9 @@ def _post_plot_logic(self):
index_name = self._get_index_name()
- rot = 30
- if self.rot is not None:
- rot = self.rot
-
for ax in self.axes:
if condition:
- format_date_labels(ax, rot=rot)
- elif self.rot is not None:
- for l in ax.get_xticklabels():
- l.set_rotation(self.rot)
+ format_date_labels(ax, rot=self.rot)
if index_name is not None:
ax.set_xlabel(index_name)
@@ -1767,9 +1789,6 @@ def __init__(self, data, **kwargs):
self.ax_pos = self.tick_pos - self.tickoffset
def _args_adjust(self):
- if self.rot is None:
- self.rot = self._default_rot[self.kind]
-
if com.is_list_like(self.bottom):
self.bottom = np.array(self.bottom)
if com.is_list_like(self.left):
@@ -1859,8 +1878,7 @@ def _post_plot_logic(self):
if self.kind == 'bar':
ax.set_xlim((s_edge, e_edge))
ax.set_xticks(self.tick_pos)
- ax.set_xticklabels(str_index, rotation=self.rot,
- fontsize=self.fontsize)
+ ax.set_xticklabels(str_index)
if not self.log: # GH3254+
ax.axhline(0, color='k', linestyle='--')
if name is not None:
@@ -1869,14 +1887,22 @@ def _post_plot_logic(self):
# horizontal bars
ax.set_ylim((s_edge, e_edge))
ax.set_yticks(self.tick_pos)
- ax.set_yticklabels(str_index, rotation=self.rot,
- fontsize=self.fontsize)
+ ax.set_yticklabels(str_index)
ax.axvline(0, color='k', linestyle='--')
if name is not None:
ax.set_ylabel(name)
else:
raise NotImplementedError(self.kind)
+ @property
+ def orientation(self):
+ if self.kind == 'bar':
+ return 'vertical'
+ elif self.kind == 'barh':
+ return 'horizontal'
+ else:
+ raise NotImplementedError(self.kind)
+
class PiePlot(MPLPlot):
| In some plots, `rot` and `fontsize` arguments are not applied properly.
- timeseries line / area plot: `rot` is not applied to minor ticklabels, and `fontsize` is completely ignored. (Fixed to apply to xticklabels)
- kde plot: `rot` and `fontsize` are completely ignored. (Fixed to apply to xticklabels)
- scatter and hexbin plots: `rot` and `fontsize` are completely ignored. (Under confirmation)
```python
import pandas as pd
import numpy as np
df = pd.DataFrame(np.random.randn(10, 4), index=pd.date_range(start='2014-07-01', freq='M', periods=10))
df.plot(rot=80, fontsize=15)
```
### Current Result

### After Fix

| https://api.github.com/repos/pandas-dev/pandas/pulls/7844 | 2014-07-25T23:37:18Z | 2014-07-28T16:00:10Z | 2014-07-28T16:00:10Z | 2014-08-30T21:40:48Z |
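The report above can be turned into a quick headless check. This is a sketch (assuming matplotlib is installed, and using a daily frequency purely for illustration) of what the fix guarantees: `rot` and `fontsize` reach every x-tick label, minor ticks included.

```python
import matplotlib
matplotlib.use('Agg')  # headless backend; no display required

import numpy as np
import pandas as pd

# Mirror the example from the report on a timeseries line plot: after the
# fix, rot and fontsize should be applied to the x-axis tick labels.
df = pd.DataFrame(np.random.randn(10, 4),
                  index=pd.date_range(start='2014-07-01', freq='D', periods=10))
ax = df.plot(rot=80, fontsize=15)

# Collect the rotation and font size actually set on major + minor labels.
labels = ax.xaxis.get_majorticklabels() + ax.xaxis.get_minorticklabels()
rotations = {label.get_rotation() for label in labels}
sizes = {label.get_fontsize() for label in labels}
print(rotations, sizes)
```

With the fix applied, both sets collapse to the single requested value instead of mixing in matplotlib defaults.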
BUG: PeriodIndex.unique results in Int64Index | diff --git a/doc/source/v0.15.0.txt b/doc/source/v0.15.0.txt
index 7e0931ca1b745..0b7287ed69c56 100644
--- a/doc/source/v0.15.0.txt
+++ b/doc/source/v0.15.0.txt
@@ -274,6 +274,8 @@ Bug Fixes
- Bug in ``is_superperiod`` and ``is_subperiod`` cannot handle higher frequencies than ``S`` (:issue:`7760`, :issue:`7772`, :issue:`7803`)
+- Bug in ``PeriodIndex.unique`` returns int64 ``np.ndarray`` (:issue:`7540`)
+
- Bug in ``DataFrame.reset_index`` which has ``MultiIndex`` contains ``PeriodIndex`` or ``DatetimeIndex`` with tz raises ``ValueError`` (:issue:`7746`, :issue:`7793`)
diff --git a/pandas/core/base.py b/pandas/core/base.py
index d55196b56c784..beffbfb2923db 100644
--- a/pandas/core/base.py
+++ b/pandas/core/base.py
@@ -552,3 +552,17 @@ def __sub__(self, other):
def _add_delta(self, other):
return NotImplemented
+
+ def unique(self):
+ """
+ Index.unique with handling for DatetimeIndex/PeriodIndex metadata
+
+ Returns
+ -------
+ result : DatetimeIndex or PeriodIndex
+ """
+ from pandas.core.index import Int64Index
+ result = Int64Index.unique(self)
+ return self._simple_new(result, name=self.name, freq=self.freq,
+ tz=getattr(self, 'tz', None))
+
diff --git a/pandas/tests/test_base.py b/pandas/tests/test_base.py
index 494c0ee6b2bec..9acb1804a7ef0 100644
--- a/pandas/tests/test_base.py
+++ b/pandas/tests/test_base.py
@@ -250,11 +250,13 @@ def test_value_counts_unique_nunique(self):
expected_s = Series(range(10, 0, -1), index=values[::-1], dtype='int64')
tm.assert_series_equal(o.value_counts(), expected_s)
- if isinstance(o, DatetimeIndex):
- # DatetimeIndex.unique returns DatetimeIndex
- self.assertTrue(o.unique().equals(klass(values)))
- else:
- self.assert_numpy_array_equal(o.unique(), values)
+ result = o.unique()
+ if isinstance(o, (DatetimeIndex, PeriodIndex)):
+ self.assertTrue(isinstance(result, o.__class__))
+ self.assertEqual(result.name, o.name)
+ self.assertEqual(result.freq, o.freq)
+
+ self.assert_numpy_array_equal(result, values)
self.assertEqual(o.nunique(), len(np.unique(o.values)))
@@ -263,17 +265,13 @@ def test_value_counts_unique_nunique(self):
klass = type(o)
values = o.values
- if o.values.dtype == 'int64':
- # skips int64 because it doesn't allow to include nan or None
- continue
-
if ((isinstance(o, Int64Index) and not isinstance(o,
(DatetimeIndex, PeriodIndex)))):
# skips int64 because it doesn't allow to include nan or None
continue
# special assign to the numpy array
- if o.values.dtype == 'datetime64[ns]':
+ if o.values.dtype == 'datetime64[ns]' or isinstance(o, PeriodIndex):
values[0:2] = pd.tslib.iNaT
else:
values[0:2] = null_obj
@@ -294,8 +292,8 @@ def test_value_counts_unique_nunique(self):
result = o.unique()
self.assert_numpy_array_equal(result[1:], values[2:])
- if isinstance(o, DatetimeIndex):
- self.assertTrue(result[0] is pd.NaT)
+ if isinstance(o, (DatetimeIndex, PeriodIndex)):
+ self.assertTrue(result.asi8[0] == pd.tslib.iNaT)
else:
self.assertTrue(pd.isnull(result[0]))
@@ -706,7 +704,7 @@ def test_sub_isub(self):
rng -= 1
tm.assert_index_equal(rng, expected)
- def test_value_counts(self):
+ def test_value_counts_unique(self):
# GH 7735
for tz in [None, 'UTC', 'Asia/Tokyo', 'US/Eastern']:
idx = pd.date_range('2011-01-01 09:00', freq='H', periods=10)
@@ -717,6 +715,9 @@ def test_value_counts(self):
expected = Series(range(10, 0, -1), index=exp_idx, dtype='int64')
tm.assert_series_equal(idx.value_counts(), expected)
+ expected = pd.date_range('2011-01-01 09:00', freq='H', periods=10, tz=tz)
+ tm.assert_index_equal(idx.unique(), expected)
+
idx = DatetimeIndex(['2013-01-01 09:00', '2013-01-01 09:00', '2013-01-01 09:00',
'2013-01-01 08:00', '2013-01-01 08:00', pd.NaT], tz=tz)
@@ -728,6 +729,8 @@ def test_value_counts(self):
expected = Series([3, 2, 1], index=exp_idx)
tm.assert_series_equal(idx.value_counts(dropna=False), expected)
+ tm.assert_index_equal(idx.unique(), exp_idx)
+
class TestPeriodIndexOps(Ops):
_allowed = '_allow_period_index_ops'
@@ -987,7 +990,7 @@ def test_sub_isub(self):
rng -= 1
tm.assert_index_equal(rng, expected)
- def test_value_counts(self):
+ def test_value_counts_unique(self):
# GH 7735
idx = pd.period_range('2011-01-01 09:00', freq='H', periods=10)
# create repeated values, 'n'th element is repeated by n+1 times
@@ -1000,6 +1003,9 @@ def test_value_counts(self):
expected = Series(range(10, 0, -1), index=exp_idx, dtype='int64')
tm.assert_series_equal(idx.value_counts(), expected)
+ expected = pd.period_range('2011-01-01 09:00', freq='H', periods=10)
+ tm.assert_index_equal(idx.unique(), expected)
+
idx = PeriodIndex(['2013-01-01 09:00', '2013-01-01 09:00', '2013-01-01 09:00',
'2013-01-01 08:00', '2013-01-01 08:00', pd.NaT], freq='H')
@@ -1011,6 +1017,8 @@ def test_value_counts(self):
expected = Series([3, 2, 1], index=exp_idx)
tm.assert_series_equal(idx.value_counts(dropna=False), expected)
+ tm.assert_index_equal(idx.unique(), exp_idx)
+
if __name__ == '__main__':
import nose
diff --git a/pandas/tseries/index.py b/pandas/tseries/index.py
index 2a3c53135a644..4aa424ea08031 100644
--- a/pandas/tseries/index.py
+++ b/pandas/tseries/index.py
@@ -848,18 +848,6 @@ def take(self, indices, axis=0):
return self[maybe_slice]
return super(DatetimeIndex, self).take(indices, axis)
- def unique(self):
- """
- Index.unique with handling for DatetimeIndex metadata
-
- Returns
- -------
- result : DatetimeIndex
- """
- result = Int64Index.unique(self)
- return DatetimeIndex._simple_new(result, tz=self.tz,
- name=self.name)
-
def union(self, other):
"""
Specialized union for DatetimeIndex objects. If combine
| Closes #7540.
| https://api.github.com/repos/pandas-dev/pandas/pulls/7843 | 2014-07-25T23:26:27Z | 2014-07-26T14:44:41Z | 2014-07-26T14:44:41Z | 2014-07-27T01:11:02Z |
BUG: Series.__iter__ not dealing with category type well (GH7839) | diff --git a/doc/source/v0.15.0.txt b/doc/source/v0.15.0.txt
index 7e0931ca1b745..39ff807d6b1e4 100644
--- a/doc/source/v0.15.0.txt
+++ b/doc/source/v0.15.0.txt
@@ -97,7 +97,7 @@ Categoricals in Series/DataFrame
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
:class:`~pandas.Categorical` can now be included in `Series` and `DataFrames` and gained new
-methods to manipulate. Thanks to Jan Schultz for much of this API/implementation. (:issue:`3943`, :issue:`5313`, :issue:`5314`, :issue:`7444`).
+methods to manipulate. Thanks to Jan Schultz for much of this API/implementation. (:issue:`3943`, :issue:`5313`, :issue:`5314`, :issue:`7444`, :issue:`7839`).
For full docs, see the :ref:`Categorical introduction <categorical>` and the :ref:`API documentation <api.categorical>`.
diff --git a/pandas/core/series.py b/pandas/core/series.py
index 502c01ce6d1d1..c0e1e8a13eea3 100644
--- a/pandas/core/series.py
+++ b/pandas/core/series.py
@@ -973,7 +973,9 @@ def _get_repr(
return result
def __iter__(self):
- if np.issubdtype(self.dtype, np.datetime64):
+ if com.is_categorical_dtype(self.dtype):
+ return iter(self.values)
+ elif np.issubdtype(self.dtype, np.datetime64):
return (lib.Timestamp(x) for x in self.values)
else:
return iter(self.values)
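For reference, a minimal sketch of the behavior this fix enables (assuming a recent pandas is installed): iterating a category-dtype ``Series`` yields the underlying values instead of falling through to the datetime branch.

```python
import pandas as pd

# a Series backed by a Categorical; before this fix, iteration dispatched
# on np.issubdtype(self.dtype, np.datetime64), which fails for the
# categorical dtype (GH 7839)
s = pd.Series(["a", "b", "b", "a"], dtype="category")

# __iter__ now returns iter(self.values) for categorical data
values = list(s)
```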
diff --git a/pandas/tests/test_categorical.py b/pandas/tests/test_categorical.py
index 0aa7f2b67c7c6..b70e50eb3d030 100644
--- a/pandas/tests/test_categorical.py
+++ b/pandas/tests/test_categorical.py
@@ -647,6 +647,27 @@ def test_nan_handling(self):
np.array(["a","b",np.nan], dtype=np.object_))
self.assert_numpy_array_equal(s3.cat._codes, np.array([0,1,2,0]))
+ def test_sequence_like(self):
+
+ # GH 7839
+ # make sure can iterate
+ df = DataFrame({"id":[1,2,3,4,5,6], "raw_grade":['a', 'b', 'b', 'a', 'a', 'e']})
+ df['grade'] = Categorical(df['raw_grade'])
+
+ # basic sequencing testing
+ result = list(df.grade.cat)
+ expected = np.array(df.grade.cat).tolist()
+ tm.assert_almost_equal(result,expected)
+
+ # iteration
+ for t in df.itertuples(index=False):
+ str(t)
+
+ for row, s in df.iterrows():
+ str(s)
+
+ for c, col in df.iteritems():
+ str(col)
def test_series_delegations(self):
| closes #7839
| https://api.github.com/repos/pandas-dev/pandas/pulls/7842 | 2014-07-25T22:41:05Z | 2014-07-25T23:30:04Z | 2014-07-25T23:30:04Z | 2014-07-25T23:30:04Z |
TST/BUG: html tests not skipping properly if lxml is not installed | diff --git a/pandas/io/tests/test_html.py b/pandas/io/tests/test_html.py
index 326b7bc004564..a7540fc716e1f 100644
--- a/pandas/io/tests/test_html.py
+++ b/pandas/io/tests/test_html.py
@@ -595,18 +595,28 @@ def _lang_enc(filename):
class TestReadHtmlEncoding(tm.TestCase):
files = glob.glob(os.path.join(DATA_PATH, 'html_encoding', '*.html'))
+ flavor = 'bs4'
+
+ @classmethod
+ def setUpClass(cls):
+ super(TestReadHtmlEncoding, cls).setUpClass()
+ _skip_if_none_of((cls.flavor, 'html5lib'))
+
+ def read_html(self, *args, **kwargs):
+ kwargs['flavor'] = self.flavor
+ return read_html(*args, **kwargs)
def read_filename(self, f, encoding):
- return read_html(f, encoding=encoding, index_col=0)
+ return self.read_html(f, encoding=encoding, index_col=0)
def read_file_like(self, f, encoding):
with open(f, 'rb') as fobj:
- return read_html(BytesIO(fobj.read()), encoding=encoding,
- index_col=0)
+ return self.read_html(BytesIO(fobj.read()), encoding=encoding,
+ index_col=0)
def read_string(self, f, encoding):
with open(f, 'rb') as fobj:
- return read_html(fobj.read(), encoding=encoding, index_col=0)
+ return self.read_html(fobj.read(), encoding=encoding, index_col=0)
def test_encode(self):
for f in self.files:
@@ -618,6 +628,15 @@ def test_encode(self):
tm.assert_frame_equal(from_string, from_filename)
+class TestReadHtmlEncodingLxml(TestReadHtmlEncoding):
+ flavor = 'lxml'
+
+ @classmethod
+ def setUpClass(cls):
+ super(TestReadHtmlEncodingLxml, cls).setUpClass()
+ _skip_if_no(cls.flavor)
+
+
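The pattern above — re-running one set of test methods under a second flavor by subclassing the test case and overriding a class attribute — can be sketched in isolation with the stdlib ``unittest`` (the class and attribute names here are hypothetical):

```python
import unittest

class FlavorTestBase(unittest.TestCase):
    # subclasses override this attribute to re-run every test
    # method against a different parser flavor
    flavor = "bs4"

    def test_flavor_is_supported(self):
        self.assertIn(self.flavor, ("bs4", "lxml"))

class FlavorTestLxml(FlavorTestBase):
    # inherits all test methods; only the configuration changes
    flavor = "lxml"
```

Because the runner collects both classes, each inherited test method executes once per flavor — the same mechanism ``TestReadHtmlEncodingLxml`` uses to reuse ``TestReadHtmlEncoding``'s tests.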
class TestReadHtmlLxml(tm.TestCase):
@classmethod
def setUpClass(cls):
| null | https://api.github.com/repos/pandas-dev/pandas/pulls/7836 | 2014-07-24T19:25:07Z | 2014-07-24T21:25:40Z | 2014-07-24T21:25:40Z | 2014-07-24T21:25:42Z |
ENH: allow MultiIndex column selection from HDFStore | diff --git a/pandas/computation/expr.py b/pandas/computation/expr.py
index b6a1fcbec8339..464bb0886d231 100644
--- a/pandas/computation/expr.py
+++ b/pandas/computation/expr.py
@@ -430,7 +430,9 @@ def visit_List(self, node, **kwargs):
name = self.env.add_tmp([self.visit(e).value for e in node.elts])
return self.term_type(name, self.env)
- visit_Tuple = visit_List
+ def visit_Tuple(self, node, **kwargs):
+ name = self.env.add_tmp(tuple(self.visit(e).value for e in node.elts))
+ return self.term_type(name, self.env)
def visit_Index(self, node, **kwargs):
""" df.index[4] """
diff --git a/pandas/io/tests/test_pytables.py b/pandas/io/tests/test_pytables.py
index 6a944284035c8..81eb788d06c57 100644
--- a/pandas/io/tests/test_pytables.py
+++ b/pandas/io/tests/test_pytables.py
@@ -4371,6 +4371,18 @@ def test_categorical(self):
#result = store.select('df', where = ['index>2'])
#tm.assert_frame_equal(df[df.index>2],result)
+ def test_select_multiindex_columns(self):
+ df = tm.makeCustomDataframe(10, 3,
+ c_idx_nlevels=2, r_idx_type='i',
+ data_gen_f=lambda *args: np.random.randn())
+ where = 'columns == [("C_l0_g0", "C_l1_g0"), ("C_l0_g2", "C_l1_g2")]'
+ with ensure_clean_store(self.path) as store:
+ store.append('df', df)
+ result = store.select('df', where)
+
+ expected = df.iloc[:, [0, 2]]
+ tm.assert_frame_equal(result, expected)
+
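The where-clause in the test above resolves the tuples to whole MultiIndex columns; the equivalent in-memory selection (a sketch that avoids the PyTables dependency) looks like:

```python
import numpy as np
import pandas as pd

# columns laid out like the ones makeCustomDataframe generates
cols = pd.MultiIndex.from_tuples(
    [("C_l0_g0", "C_l1_g0"), ("C_l0_g1", "C_l1_g1"), ("C_l0_g2", "C_l1_g2")]
)
df = pd.DataFrame(np.arange(30).reshape(10, 3), columns=cols)

# a list of tuples selects complete MultiIndex columns, matching what the
# store's `columns == [("C_l0_g0", "C_l1_g0"), ("C_l0_g2", "C_l1_g2")]`
# where-clause picks out
subset = df.loc[:, [("C_l0_g0", "C_l1_g0"), ("C_l0_g2", "C_l1_g2")]]
```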
def _test_sort(obj):
if isinstance(obj, DataFrame):
return obj.reindex(sorted(obj.index))
| Needs some more testing
| https://api.github.com/repos/pandas-dev/pandas/pulls/7834 | 2014-07-24T19:16:04Z | 2015-04-08T15:04:33Z | null | 2022-10-13T00:16:04Z |
ENH: PeriodIndex can accept freq with mult | diff --git a/doc/source/timeseries.rst b/doc/source/timeseries.rst
index 4394981abb8c3..29b955a55fcc9 100644
--- a/doc/source/timeseries.rst
+++ b/doc/source/timeseries.rst
@@ -591,7 +591,7 @@ various docstrings for the classes.
These operations (``apply``, ``rollforward`` and ``rollback``) preserves time (hour, minute, etc) information by default. To reset time, use ``normalize=True`` keyword when creating the offset instance. If ``normalize=True``, result is normalized after the function is applied.
- .. ipython:: python
+.. ipython:: python
day = Day()
day.apply(Timestamp('2014-01-01 09:00'))
@@ -1257,8 +1257,10 @@ be created with the convenience function ``period_range``.
Period
~~~~~~
+
A ``Period`` represents a span of time (e.g., a day, a month, a quarter, etc).
-It can be created using a frequency alias:
+You can specify the span via the ``freq`` keyword using a frequency alias, as shown below.
+Because ``freq`` represents a span of the ``Period``, it cannot be negative, e.g. "-3D".
.. ipython:: python
@@ -1268,11 +1270,10 @@ It can be created using a frequency alias:
Period('2012-1-1 19:00', freq='H')
-Unlike time stamped data, pandas does not support frequencies at multiples of
-DateOffsets (e.g., '3Min') for periods.
+ Period('2012-1-1 19:00', freq='5H')
Adding and subtracting integers from periods shifts the period by its own
-frequency.
+frequency. Arithmetic is not allowed between ``Period`` objects with different ``freq`` (span).
.. ipython:: python
@@ -1282,6 +1283,15 @@ frequency.
p - 3
+ p = Period('2012-01', freq='2M')
+
+ p + 2
+
+ p - 1
+
+ p == Period('2012-01', freq='3M')
+
+
If ``Period`` freq is daily or higher (``D``, ``H``, ``T``, ``S``, ``L``, ``U``, ``N``), ``offsets`` and ``timedelta``-like can be added if the result can have the same freq. Otherwise, ``ValueError`` will be raised.
.. ipython:: python
@@ -1335,6 +1345,13 @@ The ``PeriodIndex`` constructor can also be used directly:
PeriodIndex(['2011-1', '2011-2', '2011-3'], freq='M')
+Passing multiplied frequency outputs a sequence of ``Period`` which
+has multiplied span.
+
+.. ipython:: python
+
+ PeriodIndex(start='2014-01', freq='3M', periods=4)
+
Just like ``DatetimeIndex``, a ``PeriodIndex`` can also be used to index pandas
objects:
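As a concrete check of the doc example above (assuming a pandas version with multiplied period frequencies), ``period_range`` steps by the full span:

```python
import pandas as pd

# each element spans three months, so consecutive periods start
# three months apart
idx = pd.period_range(start="2014-01", periods=4, freq="3M")
```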
diff --git a/doc/source/whatsnew/v0.17.0.txt b/doc/source/whatsnew/v0.17.0.txt
index e9d39e0441055..8e34ea8f81a67 100644
--- a/doc/source/whatsnew/v0.17.0.txt
+++ b/doc/source/whatsnew/v0.17.0.txt
@@ -109,6 +109,32 @@ We are now supporting a ``Series.dt.strftime`` method for datetime-likes to gene
The string format is as the python standard library and details can be found `here <https://docs.python.org/2/library/datetime.html#strftime-and-strptime-behavior>`_
+.. _whatsnew_0170.periodfreq:
+
+Period Frequency Enhancement
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+``Period``, ``PeriodIndex`` and ``period_range`` can now accept multiplied freq. Also, ``Period.freq`` and ``PeriodIndex.freq`` are now stored as ``DateOffset`` instance like ``DatetimeIndex``, not ``str`` (:issue:`7811`)
+
+A multiplied freq represents a span of the corresponding length. The example below creates a period of 3 days. Addition and subtraction will shift the period by its span.
+
+.. ipython:: python
+
+ p = pd.Period('2015-08-01', freq='3D')
+ p
+ p + 1
+ p - 2
+ p.to_timestamp()
+ p.to_timestamp(how='E')
+
+You can use multiplied freq in ``PeriodIndex`` and ``period_range``.
+
+.. ipython:: python
+
+ idx = pd.period_range('2015-08-01', periods=4, freq='2D')
+ idx
+ idx + 1
+
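The whatsnew example above can be exercised directly (a sketch assuming a pandas with this feature): addition moves a multiplied-freq ``Period`` by its whole span, and ``end_time`` reflects the span's last instant.

```python
import pandas as pd

p = pd.Period("2015-08-01", freq="3D")

# +1 shifts by one full span of three days
later = p + 1

# the period covers 2015-08-01 through 2015-08-03
end_date = p.end_time.date()
```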
.. _whatsnew_0170.enhancements.sas_xport:
Support for SAS XPORT files
@@ -183,7 +209,6 @@ Other enhancements
- ``pandas.tseries.offsets`` larger than the ``Day`` offset can now be used with with ``Series`` for addition/subtraction (:issue:`10699`). See the :ref:`Documentation <timeseries.offsetseries>` for more details.
- ``.as_blocks`` will now take a ``copy`` optional argument to return a copy of the data, default is to copy (no change in behavior from prior versions), (:issue:`9607`)
-
- ``regex`` argument to ``DataFrame.filter`` now handles numeric column names instead of raising ``ValueError`` (:issue:`10384`).
- ``pd.read_stata`` will now read Stata 118 type files. (:issue:`9882`)
diff --git a/pandas/io/tests/data/legacy_msgpack/0.16.0/0.16.0_x86_64_darwin_2.7.9.msgpack b/pandas/io/tests/data/legacy_msgpack/0.16.0/0.16.0_x86_64_darwin_2.7.9.msgpack
new file mode 100644
index 0000000000000..554f8a6e0742a
Binary files /dev/null and b/pandas/io/tests/data/legacy_msgpack/0.16.0/0.16.0_x86_64_darwin_2.7.9.msgpack differ
diff --git a/pandas/io/tests/data/legacy_msgpack/0.16.2/0.16.2_x86_64_darwin_2.7.9.msgpack b/pandas/io/tests/data/legacy_msgpack/0.16.2/0.16.2_x86_64_darwin_2.7.9.msgpack
new file mode 100644
index 0000000000000..000879f4cb2c2
Binary files /dev/null and b/pandas/io/tests/data/legacy_msgpack/0.16.2/0.16.2_x86_64_darwin_2.7.9.msgpack differ
diff --git a/pandas/io/tests/data/legacy_pickle/0.16.0/0.16.0_x86_64_darwin_2.7.9.pickle b/pandas/io/tests/data/legacy_pickle/0.16.0/0.16.0_x86_64_darwin_2.7.9.pickle
new file mode 100644
index 0000000000000..d45936baa1e00
Binary files /dev/null and b/pandas/io/tests/data/legacy_pickle/0.16.0/0.16.0_x86_64_darwin_2.7.9.pickle differ
diff --git a/pandas/io/tests/data/legacy_pickle/0.16.2/0.16.2_x86_64_darwin_2.7.9.pickle b/pandas/io/tests/data/legacy_pickle/0.16.2/0.16.2_x86_64_darwin_2.7.9.pickle
new file mode 100644
index 0000000000000..d45936baa1e00
Binary files /dev/null and b/pandas/io/tests/data/legacy_pickle/0.16.2/0.16.2_x86_64_darwin_2.7.9.pickle differ
diff --git a/pandas/io/tests/test_pickle.py b/pandas/io/tests/test_pickle.py
index 8f2079722c00e..1ade6ac0f8068 100644
--- a/pandas/io/tests/test_pickle.py
+++ b/pandas/io/tests/test_pickle.py
@@ -17,6 +17,8 @@
from pandas.compat import u
from pandas.util.misc import is_little_endian
import pandas
+from pandas.tseries.offsets import Day, MonthEnd
+
class TestPickle():
"""
@@ -90,6 +92,10 @@ def read_pickles(self, version):
if 'ts' in data['series']:
self._validate_timeseries(data['series']['ts'], self.data['series']['ts'])
self._validate_frequency(data['series']['ts'])
+ if 'index' in data:
+ if 'period' in data['index']:
+ self._validate_periodindex(data['index']['period'],
+ self.data['index']['period'])
n += 1
assert n > 0, 'Pickle files are not tested'
@@ -162,7 +168,6 @@ def _validate_timeseries(self, pickled, current):
def _validate_frequency(self, pickled):
# GH 9291
- from pandas.tseries.offsets import Day
freq = pickled.index.freq
result = freq + Day(1)
tm.assert_equal(result, Day(2))
@@ -175,6 +180,13 @@ def _validate_frequency(self, pickled):
tm.assert_equal(isinstance(result, pandas.Timedelta), True)
tm.assert_equal(result, pandas.Timedelta(days=1, nanoseconds=1))
+ def _validate_periodindex(self, pickled, current):
+ tm.assert_index_equal(pickled, current)
+ tm.assertIsInstance(pickled.freq, MonthEnd)
+ tm.assert_equal(pickled.freq, MonthEnd())
+ tm.assert_equal(pickled.freqstr, 'M')
+ tm.assert_index_equal(pickled.shift(2), current.shift(2))
+
if __name__ == '__main__':
import nose
diff --git a/pandas/src/period.pyx b/pandas/src/period.pyx
index 619d1a87a71e0..1dbf469a946b5 100644
--- a/pandas/src/period.pyx
+++ b/pandas/src/period.pyx
@@ -615,6 +615,9 @@ cdef ndarray[int64_t] localize_dt64arr_to_period(ndarray[int64_t] stamps,
return result
+_DIFFERENT_FREQ_ERROR = "Input has different freq={1} from Period(freq={0})"
+
+
cdef class Period(object):
"""
Represents an period of time
@@ -624,8 +627,7 @@ cdef class Period(object):
value : Period or compat.string_types, default None
The time period represented (e.g., '4Q2005')
freq : str, default None
- e.g., 'B' for businessday. Must be a singular rule-code (e.g. 5T is not
- allowed).
+ One of pandas period strings or corresponding objects
year : int, default None
month : int, default 1
quarter : int, default None
@@ -641,12 +643,33 @@ cdef class Period(object):
_comparables = ['name','freqstr']
_typ = 'period'
+ @classmethod
+ def _maybe_convert_freq(cls, object freq):
+
+ if isinstance(freq, compat.string_types):
+ from pandas.tseries.frequencies import _period_alias_dict
+ freq = _period_alias_dict.get(freq, freq)
+ elif isinstance(freq, (int, tuple)):
+ from pandas.tseries.frequencies import get_freq_code as _gfc
+ from pandas.tseries.frequencies import _get_freq_str
+ code, stride = _gfc(freq)
+ freq = _get_freq_str(code, stride)
+
+ from pandas.tseries.frequencies import to_offset
+ freq = to_offset(freq)
+
+ if freq.n <= 0:
+ raise ValueError('Frequency must be positive, because it'
+ ' represents span: {0}'.format(freq.freqstr))
+
+ return freq
+
@classmethod
def _from_ordinal(cls, ordinal, freq):
""" fast creation from an ordinal and freq that are already validated! """
self = Period.__new__(cls)
self.ordinal = ordinal
- self.freq = freq
+ self.freq = cls._maybe_convert_freq(freq)
return self
def __init__(self, value=None, freq=None, ordinal=None,
@@ -659,8 +682,6 @@ cdef class Period(object):
# periods such as A, Q, etc. Every five minutes would be, e.g.,
# ('T', 5) but may be passed in as a string like '5T'
- self.freq = None
-
# ordinal is the period offset from the gregorian proleptic epoch
if ordinal is not None and value is not None:
@@ -675,9 +696,8 @@ cdef class Period(object):
elif value is None:
if freq is None:
raise ValueError("If value is None, freq cannot be None")
-
ordinal = _ordinal_from_fields(year, month, quarter, day,
- hour, minute, second, freq)
+ hour, minute, second, freq)
elif isinstance(value, Period):
other = value
@@ -698,8 +718,8 @@ cdef class Period(object):
if lib.is_integer(value):
value = str(value)
value = value.upper()
-
dt, _, reso = parse_time_string(value, freq)
+
if freq is None:
try:
freq = frequencies.Resolution.get_freq(reso)
@@ -723,24 +743,22 @@ cdef class Period(object):
raise ValueError(msg)
base, mult = _gfc(freq)
- if mult != 1:
- # TODO: Better error message - this is slightly confusing
- raise ValueError('Only mult == 1 supported')
if ordinal is None:
self.ordinal = get_period_ordinal(dt.year, dt.month, dt.day,
- dt.hour, dt.minute, dt.second, dt.microsecond, 0,
- base)
+ dt.hour, dt.minute, dt.second,
+ dt.microsecond, 0, base)
else:
self.ordinal = ordinal
- self.freq = frequencies._get_freq_str(base)
+ self.freq = self._maybe_convert_freq(freq)
def __richcmp__(self, other, op):
if isinstance(other, Period):
from pandas.tseries.frequencies import get_freq_code as _gfc
if other.freq != self.freq:
- raise ValueError("Cannot compare non-conforming periods")
+ msg = _DIFFERENT_FREQ_ERROR.format(self.freqstr, other.freqstr)
+ raise ValueError(msg)
if self.ordinal == tslib.iNaT or other.ordinal == tslib.iNaT:
return _nat_scalar_rules[op]
return PyObject_RichCompareBool(self.ordinal, other.ordinal, op)
@@ -758,7 +776,7 @@ cdef class Period(object):
def _add_delta(self, other):
from pandas.tseries import frequencies
if isinstance(other, (timedelta, np.timedelta64, offsets.Tick, Timedelta)):
- offset = frequencies.to_offset(self.freq)
+ offset = frequencies.to_offset(self.freq.rule_code)
if isinstance(offset, offsets.Tick):
nanos = tslib._delta_to_nanoseconds(other)
offset_nanos = tslib._delta_to_nanoseconds(offset)
@@ -769,18 +787,21 @@ cdef class Period(object):
else:
ordinal = self.ordinal + (nanos // offset_nanos)
return Period(ordinal=ordinal, freq=self.freq)
+ msg = 'Input cannot be converted to Period(freq={0})'
+ raise ValueError(msg)
elif isinstance(other, offsets.DateOffset):
freqstr = frequencies.get_standard_freq(other)
base = frequencies.get_base_alias(freqstr)
-
- if base == self.freq:
+ if base == self.freq.rule_code:
if self.ordinal == tslib.iNaT:
ordinal = self.ordinal
else:
ordinal = self.ordinal + other.n
return Period(ordinal=ordinal, freq=self.freq)
-
- raise ValueError("Input has different freq from Period(freq={0})".format(self.freq))
+ msg = _DIFFERENT_FREQ_ERROR.format(self.freqstr, other.freqstr)
+ raise ValueError(msg)
+ else: # pragma: no cover
+ return NotImplemented
def __add__(self, other):
if isinstance(other, (timedelta, np.timedelta64,
@@ -790,7 +811,7 @@ cdef class Period(object):
if self.ordinal == tslib.iNaT:
ordinal = self.ordinal
else:
- ordinal = self.ordinal + other
+ ordinal = self.ordinal + other * self.freq.n
return Period(ordinal=ordinal, freq=self.freq)
else: # pragma: no cover
return NotImplemented
@@ -804,7 +825,7 @@ cdef class Period(object):
if self.ordinal == tslib.iNaT:
ordinal = self.ordinal
else:
- ordinal = self.ordinal - other
+ ordinal = self.ordinal - other * self.freq.n
return Period(ordinal=ordinal, freq=self.freq)
elif isinstance(other, Period):
if other.freq != self.freq:
@@ -836,13 +857,18 @@ cdef class Period(object):
base1, mult1 = _gfc(self.freq)
base2, mult2 = _gfc(freq)
- if mult2 != 1:
- raise ValueError('Only mult == 1 supported')
-
- end = how == 'E'
- new_ordinal = period_asfreq(self.ordinal, base1, base2, end)
+ if self.ordinal == tslib.iNaT:
+ ordinal = self.ordinal
+ else:
+ # mult1 can't be negative or 0
+ end = how == 'E'
+ if end:
+ ordinal = self.ordinal + mult1 - 1
+ else:
+ ordinal = self.ordinal
+ ordinal = period_asfreq(ordinal, base1, base2, end)
- return Period(ordinal=new_ordinal, freq=base2)
+ return Period(ordinal=ordinal, freq=freq)
@property
def start_time(self):
@@ -853,7 +879,8 @@ cdef class Period(object):
if self.ordinal == tslib.iNaT:
ordinal = self.ordinal
else:
- ordinal = (self + 1).start_time.value - 1
+ # freq.n can't be negative or 0
+ ordinal = (self + self.freq.n).start_time.value - 1
return Timestamp(ordinal)
def to_timestamp(self, freq=None, how='start', tz=None):
@@ -947,14 +974,15 @@ cdef class Period(object):
def __str__(self):
return self.__unicode__()
+ @property
+ def freqstr(self):
+ return self.freq.freqstr
+
def __repr__(self):
- from pandas.tseries import frequencies
from pandas.tseries.frequencies import get_freq_code as _gfc
base, mult = _gfc(self.freq)
formatted = period_format(self.ordinal, base)
- freqstr = frequencies._reverse_period_code_map[base]
-
- return "Period('%s', '%s')" % (formatted, freqstr)
+ return "Period('%s', '%s')" % (formatted, self.freqstr)
def __unicode__(self):
"""
@@ -1123,9 +1151,6 @@ def _ordinal_from_fields(year, month, quarter, day, hour, minute,
second, freq):
from pandas.tseries.frequencies import get_freq_code as _gfc
base, mult = _gfc(freq)
- if mult != 1:
- raise ValueError('Only mult == 1 supported')
-
if quarter is not None:
year, month = _quarter_to_myear(year, quarter, freq)
diff --git a/pandas/tseries/base.py b/pandas/tseries/base.py
index 96c3883f7cbf3..912a0c3f88405 100644
--- a/pandas/tseries/base.py
+++ b/pandas/tseries/base.py
@@ -13,7 +13,7 @@
import pandas.lib as lib
from pandas.core.index import Index
from pandas.util.decorators import Appender, cache_readonly
-from pandas.tseries.frequencies import infer_freq, to_offset, Resolution
+import pandas.tseries.frequencies as frequencies
import pandas.algos as _algos
@@ -136,7 +136,7 @@ def inferred_freq(self):
frequency.
"""
try:
- return infer_freq(self)
+ return frequencies.infer_freq(self)
except ValueError:
return None
@@ -260,7 +260,7 @@ def min(self, axis=None):
if self.hasnans:
mask = i8 == tslib.iNaT
- min_stamp = self[~mask].asi8.min()
+ min_stamp = i8[~mask].min()
else:
min_stamp = i8.min()
return self._box_func(min_stamp)
@@ -303,7 +303,7 @@ def max(self, axis=None):
if self.hasnans:
mask = i8 == tslib.iNaT
- max_stamp = self[~mask].asi8.max()
+ max_stamp = i8[~mask].max()
else:
max_stamp = i8.max()
return self._box_func(max_stamp)
@@ -352,15 +352,14 @@ def _format_attrs(self):
@cache_readonly
def _resolution(self):
- from pandas.tseries.frequencies import Resolution
- return Resolution.get_reso_from_freq(self.freqstr)
+ return frequencies.Resolution.get_reso_from_freq(self.freqstr)
@cache_readonly
def resolution(self):
"""
Returns day, hour, minute, second, millisecond or microsecond
"""
- return Resolution.get_str(self._resolution)
+ return frequencies.Resolution.get_str(self._resolution)
def _convert_scalar_indexer(self, key, kind=None):
"""
@@ -509,7 +508,7 @@ def shift(self, n, freq=None):
"""
if freq is not None and freq != self.freq:
if isinstance(freq, compat.string_types):
- freq = to_offset(freq)
+ freq = frequencies.to_offset(freq)
result = Index.shift(self, n, freq)
if hasattr(self,'tz'):
diff --git a/pandas/tseries/frequencies.py b/pandas/tseries/frequencies.py
index 7e5c3af43c861..9349e440eb9e9 100644
--- a/pandas/tseries/frequencies.py
+++ b/pandas/tseries/frequencies.py
@@ -175,7 +175,7 @@ def get_to_timestamp_base(base):
def get_freq_group(freq):
"""
- Return frequency code group of given frequency str.
+ Return frequency code group of given frequency str or offset.
Example
-------
@@ -185,9 +185,16 @@ def get_freq_group(freq):
>>> get_freq_group('W-FRI')
4000
"""
+ if isinstance(freq, offsets.DateOffset):
+ freq = freq.rule_code
+
if isinstance(freq, compat.string_types):
base, mult = get_freq_code(freq)
freq = base
+ elif isinstance(freq, int):
+ pass
+ else:
+ raise ValueError('input must be str, offset or int')
return (freq // 1000) * 1000
@@ -592,7 +599,7 @@ def get_standard_freq(freq):
return None
if isinstance(freq, DateOffset):
- return get_offset_name(freq)
+ return freq.rule_code
code, stride = get_freq_code(freq)
return _get_freq_str(code, stride)
diff --git a/pandas/tseries/offsets.py b/pandas/tseries/offsets.py
index ec416efe1079f..fb6929c77f6b0 100644
--- a/pandas/tseries/offsets.py
+++ b/pandas/tseries/offsets.py
@@ -444,7 +444,10 @@ def _beg_apply_index(self, i, freq):
"""Offsets index to beginning of Period frequency"""
off = i.to_perioddelta('D')
- base_period = i.to_period(freq)
+
+ from pandas.tseries.frequencies import get_freq_code
+ base, mult = get_freq_code(freq)
+ base_period = i.to_period(base)
if self.n < 0:
# when subtracting, dates on start roll to prior
roll = np.where(base_period.to_timestamp() == i - off,
@@ -459,7 +462,11 @@ def _end_apply_index(self, i, freq):
"""Offsets index to end of Period frequency"""
off = i.to_perioddelta('D')
- base_period = i.to_period(freq)
+
+ import pandas.tseries.frequencies as frequencies
+ from pandas.tseries.frequencies import get_freq_code
+ base, mult = get_freq_code(freq)
+ base_period = i.to_period(base)
if self.n > 0:
# when adding, dates on end roll to next
roll = np.where(base_period.to_timestamp(how='end') == i - off,
diff --git a/pandas/tseries/period.py b/pandas/tseries/period.py
index 56d7d45120fdc..832791fc6933c 100644
--- a/pandas/tseries/period.py
+++ b/pandas/tseries/period.py
@@ -56,6 +56,8 @@ def dt64arr_to_periodarr(data, freq, tz):
# --- Period index sketch
+_DIFFERENT_FREQ_ERROR = "Input has different freq={1} from PeriodIndex(freq={0})"
+
def _period_index_cmp(opname, nat_result=False):
"""
Wrap comparison operations to convert datetime-like to datetime64
@@ -63,13 +65,16 @@ def _period_index_cmp(opname, nat_result=False):
def wrapper(self, other):
if isinstance(other, Period):
func = getattr(self.values, opname)
+ other_base, _ = _gfc(other.freq)
if other.freq != self.freq:
- raise AssertionError("Frequencies must be equal")
+ msg = _DIFFERENT_FREQ_ERROR.format(self.freqstr, other.freqstr)
+ raise ValueError(msg)
result = func(other.ordinal)
elif isinstance(other, PeriodIndex):
if other.freq != self.freq:
- raise AssertionError("Frequencies must be equal")
+ msg = _DIFFERENT_FREQ_ERROR.format(self.freqstr, other.freqstr)
+ raise ValueError(msg)
result = getattr(self.values, opname)(other.values)
@@ -162,8 +167,6 @@ class PeriodIndex(DatelikeOps, DatetimeIndexOpsMixin, Int64Index):
def __new__(cls, data=None, ordinal=None, freq=None, start=None, end=None,
periods=None, copy=False, name=None, tz=None, **kwargs):
- freq = frequencies.get_standard_freq(freq)
-
if periods is not None:
if is_float(periods):
periods = int(periods)
@@ -237,8 +240,8 @@ def _from_arraylike(cls, data, freq, tz):
else:
base1, _ = _gfc(data.freq)
base2, _ = _gfc(freq)
- data = period.period_asfreq_arr(data.values, base1,
- base2, 1)
+ data = period.period_asfreq_arr(data.values,
+ base1, base2, 1)
else:
if freq is None and len(data) > 0:
freq = getattr(data[0], 'freq', None)
@@ -269,11 +272,9 @@ def _simple_new(cls, values, name=None, freq=None, **kwargs):
result = object.__new__(cls)
result._data = values
result.name = name
-
if freq is None:
- raise ValueError('freq not specified')
- result.freq = freq
-
+ raise ValueError('freq is not specified')
+ result.freq = Period._maybe_convert_freq(freq)
result._reset_identity()
return result
@@ -352,7 +353,8 @@ def astype(self, dtype):
def searchsorted(self, key, side='left'):
if isinstance(key, Period):
if key.freq != self.freq:
- raise ValueError("Different period frequency: %s" % key.freq)
+ msg = _DIFFERENT_FREQ_ERROR.format(self.freqstr, key.freqstr)
+ raise ValueError(msg)
key = key.ordinal
elif isinstance(key, compat.string_types):
key = Period(key, freq=self.freq).ordinal
@@ -375,10 +377,6 @@ def is_full(self):
values = self.values
return ((values[1:] - values[:-1]) < 2).all()
- @property
- def freqstr(self):
- return self.freq
-
def asfreq(self, freq=None, how='E'):
"""
Convert the PeriodIndex to the specified frequency `freq`.
@@ -425,11 +423,20 @@ def asfreq(self, freq=None, how='E'):
base1, mult1 = _gfc(self.freq)
base2, mult2 = _gfc(freq)
- if mult2 != 1:
- raise ValueError('Only mult == 1 supported')
-
+ asi8 = self.asi8
+ # mult1 can't be negative or 0
end = how == 'E'
- new_data = period.period_asfreq_arr(self.values, base1, base2, end)
+ if end:
+ ordinal = asi8 + mult1 - 1
+ else:
+ ordinal = asi8
+
+ new_data = period.period_asfreq_arr(ordinal, base1, base2, end)
+
+ if self.hasnans:
+ mask = asi8 == tslib.iNaT
+ new_data[mask] = tslib.iNaT
+
return self._simple_new(new_data, self.name, freq=freq)
def to_datetime(self, dayfirst=False):
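The end-anchoring logic above (``ordinal = asi8 + mult1 - 1`` when ``how == 'E'``) means converting a multiplied-freq index to a finer frequency lands on the last sub-period of each span; a sketch (assuming a pandas with multiplied period frequencies):

```python
import pandas as pd

# two periods of three months each: 2015-01..03 and 2015-04..06
idx = pd.period_range(start="2015-01", periods=2, freq="3M")

starts = idx.asfreq("M", how="S")  # first month of each span
ends = idx.asfreq("M", how="E")    # last month of each span
```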
@@ -504,7 +511,7 @@ def to_timestamp(self, freq=None, how='start'):
def _maybe_convert_timedelta(self, other):
if isinstance(other, (timedelta, np.timedelta64, offsets.Tick, Timedelta)):
- offset = frequencies.to_offset(self.freq)
+ offset = frequencies.to_offset(self.freq.rule_code)
if isinstance(offset, offsets.Tick):
nanos = tslib._delta_to_nanoseconds(other)
offset_nanos = tslib._delta_to_nanoseconds(offset)
@@ -513,8 +520,7 @@ def _maybe_convert_timedelta(self, other):
elif isinstance(other, offsets.DateOffset):
freqstr = frequencies.get_standard_freq(other)
base = frequencies.get_base_alias(freqstr)
-
- if base == self.freq:
+ if base == self.freq.rule_code:
return other.n
raise ValueError("Input has different freq from PeriodIndex(freq={0})".format(self.freq))
@@ -536,7 +542,7 @@ def shift(self, n):
shifted : PeriodIndex
"""
mask = self.values == tslib.iNaT
- values = self.values + n
+ values = self.values + n * self.freq.n
values[mask] = tslib.iNaT
return PeriodIndex(data=values, name=self.name, freq=self.freq)
@@ -616,7 +622,7 @@ def get_loc(self, key, method=None, tolerance=None):
except TypeError:
pass
- key = Period(key, self.freq)
+ key = Period(key, freq=self.freq)
try:
return Index.get_loc(self, key.ordinal, method, tolerance)
except KeyError:
@@ -688,7 +694,6 @@ def _get_string_slice(self, key):
'ordered time series')
key, parsed, reso = parse_time_string(key, self.freq)
-
grp = frequencies.Resolution.get_freq_group(reso)
freqn = frequencies.get_freq_group(self.freq)
if reso in ['day', 'hour', 'minute', 'second'] and not grp < freqn:
@@ -723,8 +728,8 @@ def _assert_can_do_setop(self, other):
raise ValueError('can only call with other PeriodIndex-ed objects')
if self.freq != other.freq:
- raise ValueError('Only like-indexed PeriodIndexes compatible '
- 'for join (for now)')
+ msg = _DIFFERENT_FREQ_ERROR.format(self.freqstr, other.freqstr)
+ raise ValueError(msg)
def _wrap_union_result(self, other, result):
name = self.name if self.name == other.name else None
@@ -778,12 +783,12 @@ def __array_finalize__(self, obj):
self.name = getattr(obj, 'name', None)
self._reset_identity()
- def take(self, indices, axis=None):
+ def take(self, indices, axis=0):
"""
Analogous to ndarray.take
"""
indices = com._ensure_platform_int(indices)
- taken = self.values.take(indices, axis=axis)
+ taken = self.asi8.take(indices, axis=axis)
return self._simple_new(taken, self.name, freq=self.freq)
def append(self, other):
@@ -850,10 +855,8 @@ def __setstate__(self, state):
data = np.empty(nd_state[1], dtype=nd_state[2])
np.ndarray.__setstate__(data, nd_state)
- try: # backcompat
- self.freq = own_state[1]
- except:
- pass
+ # backcompat
+ self.freq = Period._maybe_convert_freq(own_state[1])
else: # pragma: no cover
data = np.empty(state)
@@ -863,6 +866,7 @@ def __setstate__(self, state):
else:
raise Exception("invalid pickle state")
+
_unpickle_compat = __setstate__
def tz_convert(self, tz):
@@ -916,10 +920,13 @@ def tz_localize(self, tz, infer_dst=False):
PeriodIndex._add_datetimelike_methods()
-def _get_ordinal_range(start, end, periods, freq):
+def _get_ordinal_range(start, end, periods, freq, mult=1):
if com._count_not_none(start, end, periods) < 2:
raise ValueError('Must specify 2 of start, end, periods')
+ if freq is not None:
+ _, mult = _gfc(freq)
+
if start is not None:
start = Period(start, freq)
if end is not None:
@@ -943,15 +950,16 @@ def _get_ordinal_range(start, end, periods, freq):
raise ValueError('Could not infer freq from start/end')
if periods is not None:
+ periods = periods * mult
if start is None:
- data = np.arange(end.ordinal - periods + 1,
- end.ordinal + 1,
+ data = np.arange(end.ordinal - periods + mult,
+ end.ordinal + 1, mult,
dtype=np.int64)
else:
- data = np.arange(start.ordinal, start.ordinal + periods,
+ data = np.arange(start.ordinal, start.ordinal + periods, mult,
dtype=np.int64)
else:
- data = np.arange(start.ordinal, end.ordinal + 1, dtype=np.int64)
+ data = np.arange(start.ordinal, end.ordinal + 1, mult, dtype=np.int64)
return data, freq
@@ -975,8 +983,6 @@ def _range_from_fields(year=None, month=None, quarter=None, day=None,
base = frequencies.FreqGroup.FR_QTR
else:
base, mult = _gfc(freq)
- if mult != 1:
- raise ValueError('Only mult == 1 supported')
if base != frequencies.FreqGroup.FR_QTR:
raise AssertionError("base must equal FR_QTR")
@@ -987,9 +993,6 @@ def _range_from_fields(year=None, month=None, quarter=None, day=None,
ordinals.append(val)
else:
base, mult = _gfc(freq)
- if mult != 1:
- raise ValueError('Only mult == 1 supported')
-
arrays = _make_field_arrays(year, month, day, hour, minute, second)
for y, mth, d, h, mn, s in zip(*arrays):
ordinals.append(period.period_ordinal(y, mth, d, h, mn, s, 0, 0, base))
diff --git a/pandas/tseries/tests/test_base.py b/pandas/tseries/tests/test_base.py
index 5741e9cf9c093..03c0e3f778e99 100644
--- a/pandas/tseries/tests/test_base.py
+++ b/pandas/tseries/tests/test_base.py
@@ -1535,10 +1535,10 @@ def _check_freq(index, expected_index):
self.assertEqual(result.freq, 'D')
def test_order(self):
- idx1 = PeriodIndex(['2011-01-01', '2011-01-02', '2011-01-03'],
- freq='D', name='idx')
+ for freq in ['D', '2D', '4D']:
+ idx = PeriodIndex(['2011-01-01', '2011-01-02', '2011-01-03'],
+ freq=freq, name='idx')
- for idx in [idx1]:
ordered = idx.sort_values()
self.assert_index_equal(ordered, idx)
self.assertEqual(ordered.freq, idx.freq)
@@ -1546,18 +1546,21 @@ def test_order(self):
ordered = idx.sort_values(ascending=False)
expected = idx[::-1]
self.assert_index_equal(ordered, expected)
- self.assertEqual(ordered.freq, 'D')
+ self.assertEqual(ordered.freq, expected.freq)
+ self.assertEqual(ordered.freq, freq)
ordered, indexer = idx.sort_values(return_indexer=True)
self.assert_index_equal(ordered, idx)
self.assert_numpy_array_equal(indexer, np.array([0, 1, 2]))
- self.assertEqual(ordered.freq, 'D')
+ self.assertEqual(ordered.freq, idx.freq)
+ self.assertEqual(ordered.freq, freq)
ordered, indexer = idx.sort_values(return_indexer=True, ascending=False)
expected = idx[::-1]
self.assert_index_equal(ordered, expected)
self.assert_numpy_array_equal(indexer, np.array([2, 1, 0]))
- self.assertEqual(ordered.freq, 'D')
+ self.assertEqual(ordered.freq, expected.freq)
+ self.assertEqual(ordered.freq, freq)
idx1 = PeriodIndex(['2011-01-01', '2011-01-03', '2011-01-05',
'2011-01-02', '2011-01-01'], freq='D', name='idx1')
@@ -1610,6 +1613,7 @@ def test_getitem(self):
name='idx')
self.assert_index_equal(result, expected)
self.assertEqual(result.freq, expected.freq)
+ self.assertEqual(result.freq, 'D')
result = idx[0:10:2]
expected = pd.PeriodIndex(['2011-01-01', '2011-01-03', '2011-01-05',
@@ -1617,6 +1621,7 @@ def test_getitem(self):
freq='D', name='idx')
self.assert_index_equal(result, expected)
self.assertEqual(result.freq, expected.freq)
+ self.assertEqual(result.freq, 'D')
result = idx[-20:-5:3]
expected = pd.PeriodIndex(['2011-01-12', '2011-01-15', '2011-01-18',
@@ -1624,6 +1629,7 @@ def test_getitem(self):
freq='D', name='idx')
self.assert_index_equal(result, expected)
self.assertEqual(result.freq, expected.freq)
+ self.assertEqual(result.freq, 'D')
result = idx[4::-1]
expected = PeriodIndex(['2011-01-05', '2011-01-04', '2011-01-03',
@@ -1631,6 +1637,7 @@ def test_getitem(self):
freq='D', name='idx')
self.assert_index_equal(result, expected)
self.assertEqual(result.freq, expected.freq)
+ self.assertEqual(result.freq, 'D')
def test_take(self):
#GH 10295
@@ -1647,6 +1654,7 @@ def test_take(self):
expected = pd.period_range('2011-01-01', '2011-01-03', freq='D',
name='idx')
self.assert_index_equal(result, expected)
+ self.assertEqual(result.freq, 'D')
self.assertEqual(result.freq, expected.freq)
result = idx.take([0, 2, 4])
@@ -1654,24 +1662,28 @@ def test_take(self):
freq='D', name='idx')
self.assert_index_equal(result, expected)
self.assertEqual(result.freq, expected.freq)
+ self.assertEqual(result.freq, 'D')
result = idx.take([7, 4, 1])
expected = pd.PeriodIndex(['2011-01-08', '2011-01-05', '2011-01-02'],
freq='D', name='idx')
self.assert_index_equal(result, expected)
self.assertEqual(result.freq, expected.freq)
+ self.assertEqual(result.freq, 'D')
result = idx.take([3, 2, 5])
expected = PeriodIndex(['2011-01-04', '2011-01-03', '2011-01-06'],
freq='D', name='idx')
self.assert_index_equal(result, expected)
self.assertEqual(result.freq, expected.freq)
+ self.assertEqual(result.freq, 'D')
result = idx.take([-3, 2, 5])
expected = PeriodIndex(['2011-01-29', '2011-01-03', '2011-01-06'],
freq='D', name='idx')
self.assert_index_equal(result, expected)
self.assertEqual(result.freq, expected.freq)
+ self.assertEqual(result.freq, 'D')
if __name__ == '__main__':
diff --git a/pandas/tseries/tests/test_frequencies.py b/pandas/tseries/tests/test_frequencies.py
index 070363460f791..b783459cbfe95 100644
--- a/pandas/tseries/tests/test_frequencies.py
+++ b/pandas/tseries/tests/test_frequencies.py
@@ -129,9 +129,48 @@ def test_anchored_shortcuts():
expected = frequencies.to_offset('W-SUN')
assert(result == expected)
- result = frequencies.to_offset('Q')
- expected = frequencies.to_offset('Q-DEC')
- assert(result == expected)
+ result1 = frequencies.to_offset('Q')
+ result2 = frequencies.to_offset('Q-DEC')
+ expected = offsets.QuarterEnd(startingMonth=12)
+ assert(result1 == expected)
+ assert(result2 == expected)
+
+ result1 = frequencies.to_offset('Q-MAY')
+ expected = offsets.QuarterEnd(startingMonth=5)
+ assert(result1 == expected)
+
+
+def test_get_rule_month():
+ result = frequencies._get_rule_month('W')
+ assert(result == 'DEC')
+ result = frequencies._get_rule_month(offsets.Week())
+ assert(result == 'DEC')
+
+ result = frequencies._get_rule_month('D')
+ assert(result == 'DEC')
+ result = frequencies._get_rule_month(offsets.Day())
+ assert(result == 'DEC')
+
+ result = frequencies._get_rule_month('Q')
+ assert(result == 'DEC')
+ result = frequencies._get_rule_month(offsets.QuarterEnd(startingMonth=12))
+    assert(result == 'DEC')
+
+ result = frequencies._get_rule_month('Q-JAN')
+ assert(result == 'JAN')
+ result = frequencies._get_rule_month(offsets.QuarterEnd(startingMonth=1))
+ assert(result == 'JAN')
+
+ result = frequencies._get_rule_month('A-DEC')
+ assert(result == 'DEC')
+ result = frequencies._get_rule_month(offsets.YearEnd())
+ assert(result == 'DEC')
+
+ result = frequencies._get_rule_month('A-MAY')
+ assert(result == 'MAY')
+ result = frequencies._get_rule_month(offsets.YearEnd(month=5))
+ assert(result == 'MAY')
+
class TestFrequencyCode(tm.TestCase):
@@ -154,6 +193,23 @@ def test_freq_code(self):
result = frequencies.get_freq_group(code)
self.assertEqual(result, code // 1000 * 1000)
+ def test_freq_group(self):
+ self.assertEqual(frequencies.get_freq_group('A'), 1000)
+ self.assertEqual(frequencies.get_freq_group('3A'), 1000)
+ self.assertEqual(frequencies.get_freq_group('-1A'), 1000)
+ self.assertEqual(frequencies.get_freq_group('A-JAN'), 1000)
+ self.assertEqual(frequencies.get_freq_group('A-MAY'), 1000)
+ self.assertEqual(frequencies.get_freq_group(offsets.YearEnd()), 1000)
+ self.assertEqual(frequencies.get_freq_group(offsets.YearEnd(month=1)), 1000)
+ self.assertEqual(frequencies.get_freq_group(offsets.YearEnd(month=5)), 1000)
+
+ self.assertEqual(frequencies.get_freq_group('W'), 4000)
+ self.assertEqual(frequencies.get_freq_group('W-MON'), 4000)
+ self.assertEqual(frequencies.get_freq_group('W-FRI'), 4000)
+ self.assertEqual(frequencies.get_freq_group(offsets.Week()), 4000)
+ self.assertEqual(frequencies.get_freq_group(offsets.Week(weekday=1)), 4000)
+ self.assertEqual(frequencies.get_freq_group(offsets.Week(weekday=5)), 4000)
+
def test_get_to_timestamp_base(self):
tsb = frequencies.get_to_timestamp_base
diff --git a/pandas/tseries/tests/test_period.py b/pandas/tseries/tests/test_period.py
index cdd9d036fcadc..c828d6d7effb6 100644
--- a/pandas/tseries/tests/test_period.py
+++ b/pandas/tseries/tests/test_period.py
@@ -59,6 +59,10 @@ def test_period_cons_quarterly(self):
p = Period(stamp, freq=freq)
self.assertEqual(p, exp)
+ stamp = exp.to_timestamp('3D', how='end')
+ p = Period(stamp, freq=freq)
+ self.assertEqual(p, exp)
+
def test_period_cons_annual(self):
# bugs in scikits.timeseries
for month in MONTHS:
@@ -82,28 +86,109 @@ def test_period_cons_nat(self):
p = Period('NaT', freq='M')
self.assertEqual(p.ordinal, tslib.iNaT)
self.assertEqual(p.freq, 'M')
+ self.assertEqual((p + 1).ordinal, tslib.iNaT)
p = Period('nat', freq='W-SUN')
self.assertEqual(p.ordinal, tslib.iNaT)
self.assertEqual(p.freq, 'W-SUN')
+ self.assertEqual((p + 1).ordinal, tslib.iNaT)
p = Period(tslib.iNaT, freq='D')
self.assertEqual(p.ordinal, tslib.iNaT)
self.assertEqual(p.freq, 'D')
+ self.assertEqual((p + 1).ordinal, tslib.iNaT)
+
+ p = Period(tslib.iNaT, freq='3D')
+ self.assertEqual(p.ordinal, tslib.iNaT)
+ self.assertEqual(p.freq, offsets.Day(3))
+ self.assertEqual(p.freqstr, '3D')
+ self.assertEqual((p + 1).ordinal, tslib.iNaT)
self.assertRaises(ValueError, Period, 'NaT')
+ def test_period_cons_mult(self):
+ p1 = Period('2011-01', freq='3M')
+ p2 = Period('2011-01', freq='M')
+ self.assertEqual(p1.ordinal, p2.ordinal)
+
+ self.assertEqual(p1.freq, offsets.MonthEnd(3))
+ self.assertEqual(p1.freqstr, '3M')
+
+ self.assertEqual(p2.freq, offsets.MonthEnd())
+ self.assertEqual(p2.freqstr, 'M')
+
+ result = p1 + 1
+ self.assertEqual(result.ordinal, (p2 + 3).ordinal)
+ self.assertEqual(result.freq, p1.freq)
+ self.assertEqual(result.freqstr, '3M')
+
+ result = p1 - 1
+ self.assertEqual(result.ordinal, (p2 - 3).ordinal)
+ self.assertEqual(result.freq, p1.freq)
+ self.assertEqual(result.freqstr, '3M')
+
+ msg = ('Frequency must be positive, because it'
+ ' represents span: -3M')
+ with tm.assertRaisesRegexp(ValueError, msg):
+ Period('2011-01', freq='-3M')
+
+ msg = ('Frequency must be positive, because it'
+ ' represents span: 0M')
+ with tm.assertRaisesRegexp(ValueError, msg):
+ Period('2011-01', freq='0M')
+
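The constructor semantics exercised in `test_period_cons_mult` can be sketched as follows (a hedged example, assuming a pandas version with multiplied `Period` frequencies as introduced here):

```python
import pandas as pd

p1 = pd.Period('2011-01', freq='3M')
p2 = pd.Period('2011-01', freq='M')

# the anchor ordinal is identical; only the span width differs
assert p1.ordinal == p2.ordinal
assert p1.freqstr == '3M'

# stepping a '3M' period by 1 advances a full three-month span
assert str(p1 + 1) == '2011-04'
```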
def test_timestamp_tz_arg(self):
+ tm._skip_if_no_pytz()
import pytz
- p = Period('1/1/2005', freq='M').to_timestamp(tz='Europe/Brussels')
- self.assertEqual(p.tz,
- pytz.timezone('Europe/Brussels').normalize(p).tzinfo)
+ for case in ['Europe/Brussels', 'Asia/Tokyo', 'US/Pacific']:
+ p = Period('1/1/2005', freq='M').to_timestamp(tz=case)
+ exp = Timestamp('1/1/2005', tz='UTC').tz_convert(case)
+ exp_zone = pytz.timezone(case).normalize(p)
+
+ self.assertEqual(p, exp)
+ self.assertEqual(p.tz, exp_zone.tzinfo)
+ self.assertEqual(p.tz, exp.tz)
+
+ p = Period('1/1/2005', freq='3H').to_timestamp(tz=case)
+ exp = Timestamp('1/1/2005', tz='UTC').tz_convert(case)
+ exp_zone = pytz.timezone(case).normalize(p)
+
+ self.assertEqual(p, exp)
+ self.assertEqual(p.tz, exp_zone.tzinfo)
+ self.assertEqual(p.tz, exp.tz)
+
+ p = Period('1/1/2005', freq='A').to_timestamp(freq='A', tz=case)
+ exp = Timestamp('31/12/2005', tz='UTC').tz_convert(case)
+ exp_zone = pytz.timezone(case).normalize(p)
+
+ self.assertEqual(p, exp)
+ self.assertEqual(p.tz, exp_zone.tzinfo)
+ self.assertEqual(p.tz, exp.tz)
+
+ p = Period('1/1/2005', freq='A').to_timestamp(freq='3H', tz=case)
+ exp = Timestamp('1/1/2005', tz='UTC').tz_convert(case)
+ exp_zone = pytz.timezone(case).normalize(p)
+
+ self.assertEqual(p, exp)
+ self.assertEqual(p.tz, exp_zone.tzinfo)
+ self.assertEqual(p.tz, exp.tz)
def test_timestamp_tz_arg_dateutil(self):
from pandas.tslib import _dateutil_gettz as gettz
from pandas.tslib import maybe_get_tz
- p = Period('1/1/2005', freq='M').to_timestamp(tz=maybe_get_tz('dateutil/Europe/Brussels'))
- self.assertEqual(p.tz, gettz('Europe/Brussels'))
+ for case in ['dateutil/Europe/Brussels', 'dateutil/Asia/Tokyo',
+ 'dateutil/US/Pacific']:
+ p = Period('1/1/2005', freq='M').to_timestamp(tz=maybe_get_tz(case))
+ exp = Timestamp('1/1/2005', tz='UTC').tz_convert(case)
+ self.assertEqual(p, exp)
+ self.assertEqual(p.tz, gettz(case.split('/', 1)[1]))
+ self.assertEqual(p.tz, exp.tz)
+
+ p = Period('1/1/2005', freq='M').to_timestamp(freq='3H', tz=maybe_get_tz(case))
+ exp = Timestamp('1/1/2005', tz='UTC').tz_convert(case)
+ self.assertEqual(p, exp)
+ self.assertEqual(p.tz, gettz(case.split('/', 1)[1]))
+ self.assertEqual(p.tz, exp.tz)
def test_timestamp_tz_arg_dateutil_from_string(self):
from pandas.tslib import _dateutil_gettz as gettz
@@ -117,6 +202,21 @@ def test_timestamp_nat_tz(self):
t = Period('NaT', freq='M').to_timestamp(tz='Asia/Tokyo')
self.assertTrue(t is tslib.NaT)
+ def test_timestamp_mult(self):
+ p = pd.Period('2011-01', freq='M')
+ self.assertEqual(p.to_timestamp(how='S'), pd.Timestamp('2011-01-01'))
+ self.assertEqual(p.to_timestamp(how='E'), pd.Timestamp('2011-01-31'))
+
+ p = pd.Period('2011-01', freq='3M')
+ self.assertEqual(p.to_timestamp(how='S'), pd.Timestamp('2011-01-01'))
+ self.assertEqual(p.to_timestamp(how='E'), pd.Timestamp('2011-03-31'))
+
+ def test_timestamp_nat_mult(self):
+ for freq in ['M', '3M']:
+ p = pd.Period('NaT', freq=freq)
+ self.assertTrue(p.to_timestamp(how='S') is pd.NaT)
+ self.assertTrue(p.to_timestamp(how='E') is pd.NaT)
+
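`test_timestamp_mult` above pins down how `to_timestamp` treats a multiplied frequency: the start anchor is unchanged, but the end falls at the close of the whole span. A small sketch under the same assumption (the end timestamp may carry a time-of-day component on newer pandas, so only the date is compared):

```python
import pandas as pd

p = pd.Period('2011-01', freq='3M')
# the start of the span is still January 1st
assert p.to_timestamp(how='S') == pd.Timestamp('2011-01-01')
# the span covers three months, so the end lands in March
assert p.to_timestamp(how='E').strftime('%Y-%m-%d') == '2011-03-31'
```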
def test_period_constructor(self):
i1 = Period('1/1/2005', freq='M')
i2 = Period('Jan 2005')
@@ -252,9 +352,87 @@ def test_period_constructor(self):
self.assertRaises(ValueError, Period, '2007-1-1', freq='X')
+
+ def test_period_constructor_offsets(self):
+ self.assertEqual(Period('1/1/2005', freq=offsets.MonthEnd()),
+ Period('1/1/2005', freq='M'))
+ self.assertEqual(Period('2005', freq=offsets.YearEnd()),
+ Period('2005', freq='A'))
+ self.assertEqual(Period('2005', freq=offsets.MonthEnd()),
+ Period('2005', freq='M'))
+ self.assertEqual(Period('3/10/12', freq=offsets.BusinessDay()),
+ Period('3/10/12', freq='B'))
+ self.assertEqual(Period('3/10/12', freq=offsets.Day()),
+ Period('3/10/12', freq='D'))
+
+ self.assertEqual(Period(year=2005, quarter=1,
+ freq=offsets.QuarterEnd(startingMonth=12)),
+ Period(year=2005, quarter=1, freq='Q'))
+ self.assertEqual(Period(year=2005, quarter=2,
+ freq=offsets.QuarterEnd(startingMonth=12)),
+ Period(year=2005, quarter=2, freq='Q'))
+
+ self.assertEqual(Period(year=2005, month=3, day=1, freq=offsets.Day()),
+ Period(year=2005, month=3, day=1, freq='D'))
+ self.assertEqual(Period(year=2012, month=3, day=10, freq=offsets.BDay()),
+ Period(year=2012, month=3, day=10, freq='B'))
+
+ expected = Period('2005-03-01', freq='3D')
+ self.assertEqual(Period(year=2005, month=3, day=1, freq=offsets.Day(3)),
+ expected)
+ self.assertEqual(Period(year=2005, month=3, day=1, freq='3D'),
+ expected)
+
+ self.assertEqual(Period(year=2012, month=3, day=10, freq=offsets.BDay(3)),
+ Period(year=2012, month=3, day=10, freq='3B'))
+
+ self.assertEqual(Period(200701, freq=offsets.MonthEnd()),
+ Period(200701, freq='M'))
+
+ i1 = Period(ordinal=200701, freq=offsets.MonthEnd())
+ i2 = Period(ordinal=200701, freq='M')
+ self.assertEqual(i1, i2)
+ self.assertEqual(i1.year, 18695)
+ self.assertEqual(i2.year, 18695)
+
+ i1 = Period(datetime(2007, 1, 1), freq='M')
+ i2 = Period('200701', freq='M')
+ self.assertEqual(i1, i2)
+
+ i1 = Period(date(2007, 1, 1), freq='M')
+ i2 = Period(datetime(2007, 1, 1), freq='M')
+ i3 = Period(np.datetime64('2007-01-01'), freq='M')
+ i4 = Period(np.datetime64('2007-01-01 00:00:00Z'), freq='M')
+ i5 = Period(np.datetime64('2007-01-01 00:00:00.000Z'), freq='M')
+ self.assertEqual(i1, i2)
+ self.assertEqual(i1, i3)
+ self.assertEqual(i1, i4)
+ self.assertEqual(i1, i5)
+
+ i1 = Period('2007-01-01 09:00:00.001')
+ expected = Period(datetime(2007, 1, 1, 9, 0, 0, 1000), freq='L')
+ self.assertEqual(i1, expected)
+
+ expected = Period(np.datetime64('2007-01-01 09:00:00.001Z'), freq='L')
+ self.assertEqual(i1, expected)
+
+ i1 = Period('2007-01-01 09:00:00.00101')
+ expected = Period(datetime(2007, 1, 1, 9, 0, 0, 1010), freq='U')
+ self.assertEqual(i1, expected)
+
+ expected = Period(np.datetime64('2007-01-01 09:00:00.00101Z'),
+ freq='U')
+ self.assertEqual(i1, expected)
+
+ self.assertRaises(ValueError, Period, ordinal=200701)
+
+ self.assertRaises(ValueError, Period, '2007-1-1', freq='X')
+
+
def test_freq_str(self):
i1 = Period('1982', freq='Min')
- self.assertNotEqual(i1.freq[0], '1')
+ self.assertEqual(i1.freq, offsets.Minute())
+ self.assertEqual(i1.freqstr, 'T')
def test_repr(self):
p = Period('Jan-2000')
@@ -297,11 +475,14 @@ def test_to_timestamp(self):
aliases = ['s', 'StarT', 'BEGIn']
for a in aliases:
self.assertEqual(start_ts, p.to_timestamp('D', how=a))
+        # freq with mult should not affect the result
+ self.assertEqual(start_ts, p.to_timestamp('3D', how=a))
end_ts = p.to_timestamp(how='E')
aliases = ['e', 'end', 'FINIsH']
for a in aliases:
self.assertEqual(end_ts, p.to_timestamp('D', how=a))
+ self.assertEqual(end_ts, p.to_timestamp('3D', how=a))
from_lst = ['A', 'Q', 'M', 'W', 'B',
'D', 'H', 'Min', 'S']
@@ -325,10 +506,15 @@ def _ex(p):
result = p.to_timestamp('H', how='end')
expected = datetime(1985, 12, 31, 23)
self.assertEqual(result, expected)
+ result = p.to_timestamp('3H', how='end')
+ self.assertEqual(result, expected)
result = p.to_timestamp('T', how='end')
expected = datetime(1985, 12, 31, 23, 59)
self.assertEqual(result, expected)
+ result = p.to_timestamp('2T', how='end')
+ self.assertEqual(result, expected)
+
result = p.to_timestamp(how='end')
expected = datetime(1985, 12, 31)
@@ -341,8 +527,10 @@ def _ex(p):
self.assertEqual(result, expected)
result = p.to_timestamp('S', how='start')
self.assertEqual(result, expected)
-
- assertRaisesRegexp(ValueError, 'Only mult == 1', p.to_timestamp, '5t')
+ result = p.to_timestamp('3H', how='start')
+ self.assertEqual(result, expected)
+ result = p.to_timestamp('5S', how='start')
+ self.assertEqual(result, expected)
p = Period('NaT', freq='W')
self.assertTrue(p.to_timestamp() is tslib.NaT)
@@ -354,9 +542,9 @@ def test_start_time(self):
p = Period('2012', freq=f)
self.assertEqual(p.start_time, xp)
self.assertEqual(Period('2012', freq='B').start_time,
- datetime(2012, 1, 2))
+ datetime(2012, 1, 2))
self.assertEqual(Period('2012', freq='W').start_time,
- datetime(2011, 12, 26))
+ datetime(2011, 12, 26))
p = Period('NaT', freq='W')
self.assertTrue(p.start_time is tslib.NaT)
@@ -489,19 +677,20 @@ def test_properties_daily(self):
def test_properties_hourly(self):
# Test properties on Periods with hourly frequency.
- h_date = Period(freq='H', year=2007, month=1, day=1, hour=0)
- #
- assert_equal(h_date.year, 2007)
- assert_equal(h_date.quarter, 1)
- assert_equal(h_date.month, 1)
- assert_equal(h_date.day, 1)
- assert_equal(h_date.weekday, 0)
- assert_equal(h_date.dayofyear, 1)
- assert_equal(h_date.hour, 0)
- assert_equal(h_date.days_in_month, 31)
- assert_equal(Period(freq='H', year=2012, month=2, day=1,
- hour=0).days_in_month, 29)
- #
+ h_date1 = Period(freq='H', year=2007, month=1, day=1, hour=0)
+ h_date2 = Period(freq='2H', year=2007, month=1, day=1, hour=0)
+
+ for h_date in [h_date1, h_date2]:
+ assert_equal(h_date.year, 2007)
+ assert_equal(h_date.quarter, 1)
+ assert_equal(h_date.month, 1)
+ assert_equal(h_date.day, 1)
+ assert_equal(h_date.weekday, 0)
+ assert_equal(h_date.dayofyear, 1)
+ assert_equal(h_date.hour, 0)
+ assert_equal(h_date.days_in_month, 31)
+ assert_equal(Period(freq='H', year=2012, month=2, day=1,
+ hour=0).days_in_month, 29)
def test_properties_minutely(self):
# Test properties on Periods with minutely frequency.
@@ -556,9 +745,15 @@ def test_pnow(self):
exp = Period(dt, freq='D')
self.assertEqual(val, exp)
+ val2 = period.pnow('2D')
+ exp2 = Period(dt, freq='2D')
+ self.assertEqual(val2, exp2)
+ self.assertEqual(val.ordinal, val2.ordinal)
+ self.assertEqual(val.ordinal, exp2.ordinal)
+
def test_constructor_corner(self):
- self.assertRaises(ValueError, Period, year=2007, month=1,
- freq='2M')
+ expected = Period('2007-01', freq='2M')
+ self.assertEqual(Period(year=2007, month=1, freq='2M'), expected)
self.assertRaises(ValueError, Period, datetime.now())
self.assertRaises(ValueError, Period, datetime.now().date())
@@ -613,7 +808,13 @@ class TestFreqConversion(tm.TestCase):
def test_asfreq_corner(self):
val = Period(freq='A', year=2007)
- self.assertRaises(ValueError, val.asfreq, '5t')
+ result1 = val.asfreq('5t')
+ result2 = val.asfreq('t')
+ expected = Period('2007-12-31 23:59', freq='t')
+ self.assertEqual(result1.ordinal, expected.ordinal)
+ self.assertEqual(result1.freqstr, '5T')
+ self.assertEqual(result2.ordinal, expected.ordinal)
+ self.assertEqual(result2.freqstr, 'T')
def test_conv_annual(self):
# frequency conversion tests: from Annual Frequency
@@ -795,7 +996,6 @@ def test_conv_monthly(self):
def test_conv_weekly(self):
# frequency conversion tests: from Weekly Frequency
-
ival_W = Period(freq='W', year=2007, month=1, day=1)
ival_WSUN = Period(freq='W', year=2007, month=1, day=7)
@@ -1311,6 +1511,92 @@ def test_asfreq_nat(self):
self.assertEqual(result.ordinal, tslib.iNaT)
self.assertEqual(result.freq, 'M')
+ def test_asfreq_mult(self):
+ # normal freq to mult freq
+ p = Period(freq='A', year=2007)
+ # ordinal will not change
+ for freq in ['3A', offsets.YearEnd(3)]:
+ result = p.asfreq(freq)
+ expected = Period('2007', freq='3A')
+
+ self.assertEqual(result, expected)
+ self.assertEqual(result.ordinal, expected.ordinal)
+ self.assertEqual(result.freq, expected.freq)
+ # ordinal will not change
+ for freq in ['3A', offsets.YearEnd(3)]:
+ result = p.asfreq(freq, how='S')
+ expected = Period('2007', freq='3A')
+
+ self.assertEqual(result, expected)
+ self.assertEqual(result.ordinal, expected.ordinal)
+ self.assertEqual(result.freq, expected.freq)
+
+ # mult freq to normal freq
+ p = Period(freq='3A', year=2007)
+ # ordinal will change because how=E is the default
+ for freq in ['A', offsets.YearEnd()]:
+ result = p.asfreq(freq)
+ expected = Period('2009', freq='A')
+
+ self.assertEqual(result, expected)
+ self.assertEqual(result.ordinal, expected.ordinal)
+ self.assertEqual(result.freq, expected.freq)
+ # ordinal will not change
+ for freq in ['A', offsets.YearEnd()]:
+ result = p.asfreq(freq, how='S')
+ expected = Period('2007', freq='A')
+
+ self.assertEqual(result, expected)
+ self.assertEqual(result.ordinal, expected.ordinal)
+ self.assertEqual(result.freq, expected.freq)
+
+ p = Period(freq='A', year=2007)
+ for freq in ['2M', offsets.MonthEnd(2)]:
+ result = p.asfreq(freq)
+ expected = Period('2007-12', freq='2M')
+
+ self.assertEqual(result, expected)
+ self.assertEqual(result.ordinal, expected.ordinal)
+ self.assertEqual(result.freq, expected.freq)
+ for freq in ['2M', offsets.MonthEnd(2)]:
+ result = p.asfreq(freq, how='S')
+ expected = Period('2007-01', freq='2M')
+
+ self.assertEqual(result, expected)
+ self.assertEqual(result.ordinal, expected.ordinal)
+ self.assertEqual(result.freq, expected.freq)
+
+ p = Period(freq='3A', year=2007)
+ for freq in ['2M', offsets.MonthEnd(2)]:
+ result = p.asfreq(freq)
+ expected = Period('2009-12', freq='2M')
+
+ self.assertEqual(result, expected)
+ self.assertEqual(result.ordinal, expected.ordinal)
+ self.assertEqual(result.freq, expected.freq)
+ for freq in ['2M', offsets.MonthEnd(2)]:
+ result = p.asfreq(freq, how='S')
+ expected = Period('2007-01', freq='2M')
+
+ self.assertEqual(result, expected)
+ self.assertEqual(result.ordinal, expected.ordinal)
+ self.assertEqual(result.freq, expected.freq)
+
+ def test_asfreq_mult_nat(self):
+ # normal freq to mult freq
+ for p in [Period('NaT', freq='A'), Period('NaT', freq='3A'),
+ Period('NaT', freq='2M'), Period('NaT', freq='3D')]:
+ for freq in ['3A', offsets.YearEnd(3)]:
+ result = p.asfreq(freq)
+ expected = Period('NaT', freq='3A')
+ self.assertEqual(result.ordinal, pd.tslib.iNaT)
+ self.assertEqual(result.freq, expected.freq)
+
+ result = p.asfreq(freq, how='S')
+ expected = Period('NaT', freq='3A')
+ self.assertEqual(result.ordinal, pd.tslib.iNaT)
+ self.assertEqual(result.freq, expected.freq)
+
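The `asfreq` cases above hinge on the default anchor: `how='E'` converts from the end of the multiplied span, `how='S'` from its start. A minimal sketch mirroring `test_asfreq_mult_pi`, assuming a pandas build with this PR merged:

```python
import pandas as pd

p = pd.Period('2001-01', freq='2M')  # span covers Jan and Feb 2001

# how='E' is the default, so conversion lands on the span's last day
assert str(p.asfreq('D')) == '2001-02-28'
assert str(p.asfreq('D', how='S')) == '2001-01-01'
```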
class TestPeriodIndex(tm.TestCase):
@@ -1352,9 +1638,8 @@ def test_constructor_field_arrays(self):
expected = period_range('1990Q3', '2009Q2', freq='Q-DEC')
self.assertTrue(index.equals(expected))
- self.assertRaises(
- ValueError, PeriodIndex, year=years, quarter=quarters,
- freq='2Q-DEC')
+ index2 = PeriodIndex(year=years, quarter=quarters, freq='2Q-DEC')
+ tm.assert_numpy_array_equal(index.asi8, index2.asi8)
index = PeriodIndex(year=years, quarter=quarters)
self.assertTrue(index.equals(expected))
@@ -1422,6 +1707,18 @@ def test_constructor_fromarraylike(self):
result = PeriodIndex(idx, freq='M')
self.assertTrue(result.equals(idx))
+ result = PeriodIndex(idx, freq=offsets.MonthEnd())
+ self.assertTrue(result.equals(idx))
+        self.assertEqual(result.freq, 'M')
+
+ result = PeriodIndex(idx, freq='2M')
+ self.assertTrue(result.equals(idx))
+        self.assertEqual(result.freq, '2M')
+
+ result = PeriodIndex(idx, freq=offsets.MonthEnd(2))
+ self.assertTrue(result.equals(idx))
+        self.assertEqual(result.freq, '2M')
+
result = PeriodIndex(idx, freq='D')
exp = idx.asfreq('D', 'e')
self.assertTrue(result.equals(exp))
@@ -1455,6 +1752,49 @@ def test_constructor_year_and_quarter(self):
p = PeriodIndex(lops)
tm.assert_index_equal(p, idx)
+ def test_constructor_freq_mult(self):
+ # GH #7811
+ for func in [PeriodIndex, period_range]:
+            # both constructors must produce the same result
+ pidx = func(start='2014-01', freq='2M', periods=4)
+ expected = PeriodIndex(['2014-01', '2014-03', '2014-05', '2014-07'], freq='M')
+ tm.assert_index_equal(pidx, expected)
+
+ pidx = func(start='2014-01-02', end='2014-01-15', freq='3D')
+ expected = PeriodIndex(['2014-01-02', '2014-01-05', '2014-01-08', '2014-01-11',
+ '2014-01-14'], freq='D')
+ tm.assert_index_equal(pidx, expected)
+
+ pidx = func(end='2014-01-01 17:00', freq='4H', periods=3)
+ expected = PeriodIndex(['2014-01-01 09:00', '2014-01-01 13:00',
+ '2014-01-01 17:00'], freq='4H')
+ tm.assert_index_equal(pidx, expected)
+
+ msg = ('Frequency must be positive, because it'
+ ' represents span: -1M')
+ with tm.assertRaisesRegexp(ValueError, msg):
+ PeriodIndex(['2011-01'], freq='-1M')
+
+ msg = ('Frequency must be positive, because it'
+ ' represents span: 0M')
+ with tm.assertRaisesRegexp(ValueError, msg):
+ PeriodIndex(['2011-01'], freq='0M')
+
+ msg = ('Frequency must be positive, because it'
+ ' represents span: 0M')
+ with tm.assertRaisesRegexp(ValueError, msg):
+ period_range('2011-01', periods=3, freq='0M')
+
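The range construction tested above steps by the full multiplied span between `start` and `end`. A hedged sketch of the `'3D'` case, assuming a pandas version with this support:

```python
import pandas as pd

# 3-day steps from the start date, up to and including the end date
pidx = pd.period_range(start='2014-01-02', end='2014-01-15', freq='3D')
assert [str(p) for p in pidx] == ['2014-01-02', '2014-01-05', '2014-01-08',
                                  '2014-01-11', '2014-01-14']
```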
+ def test_constructor_freq_mult_dti_compat(self):
+ import itertools
+ mults = [1, 2, 3, 4, 5]
+ freqs = ['A', 'M', 'D', 'T', 'S']
+ for mult, freq in itertools.product(mults, freqs):
+ freqstr = str(mult) + freq
+ pidx = PeriodIndex(start='2014-04-01', freq=freqstr, periods=10)
+ expected = date_range(start='2014-04-01', freq=freqstr, periods=10).to_period(freq)
+ tm.assert_index_equal(pidx, expected)
+
def test_is_(self):
create_index = lambda: PeriodIndex(freq='A', start='1/1/2001',
end='12/1/2009')
@@ -1563,6 +1903,13 @@ def test_slice_with_zero_step_raises(self):
self.assertRaisesRegexp(ValueError, 'slice step cannot be zero',
lambda: ts.ix[::0])
+ def test_contains(self):
+ rng = period_range('2007-01', freq='M', periods=10)
+
+ self.assertTrue(Period('2007-01', freq='M') in rng)
+ self.assertFalse(Period('2007-01', freq='D') in rng)
+ self.assertFalse(Period('2007-01', freq='2M') in rng)
+
def test_sub(self):
rng = period_range('2007-01', periods=50)
@@ -1614,8 +1961,6 @@ def _get_with_delta(delta, freq='A-DEC'):
exp_index = _get_with_delta(delta)
self.assertTrue(result.index.equals(exp_index))
- self.assertRaises(ValueError, index.to_timestamp, '5t')
-
index = PeriodIndex(freq='H', start='1/1/2001', end='1/2/2001')
series = Series(1, index=index, name='foo')
@@ -1651,7 +1996,7 @@ def test_to_timestamp_repr_is_code(self):
for z in zs:
self.assertEqual( eval(repr(z)), z)
- def test_to_timestamp_period_nat(self):
+ def test_to_timestamp_pi_nat(self):
# GH 7228
index = PeriodIndex(['NaT', '2011-01', '2011-02'], freq='M', name='idx')
@@ -1665,6 +2010,25 @@ def test_to_timestamp_period_nat(self):
self.assertTrue(result2.equals(index))
self.assertEqual(result2.name, 'idx')
+ result3 = result.to_period(freq='3M')
+ exp = PeriodIndex(['NaT', '2011-01', '2011-02'], freq='3M', name='idx')
+ self.assert_index_equal(result3, exp)
+ self.assertEqual(result3.freqstr, '3M')
+
+ msg = ('Frequency must be positive, because it'
+ ' represents span: -2A')
+ with tm.assertRaisesRegexp(ValueError, msg):
+ result.to_period(freq='-2A')
+
+ def test_to_timestamp_pi_mult(self):
+ idx = PeriodIndex(['2011-01', 'NaT', '2011-02'], freq='2M', name='idx')
+ result = idx.to_timestamp()
+ expected = DatetimeIndex(['2011-01-01', 'NaT', '2011-02-01'], name='idx')
+ self.assert_index_equal(result, expected)
+ result = idx.to_timestamp(how='E')
+ expected = DatetimeIndex(['2011-02-28', 'NaT', '2011-03-31'], name='idx')
+ self.assert_index_equal(result, expected)
+
def test_as_frame_columns(self):
rng = period_range('1/1/2000', periods=5)
df = DataFrame(randn(10, 5), columns=rng)
@@ -1794,7 +2158,17 @@ def _get_with_delta(delta, freq='A-DEC'):
# invalid axis
assertRaisesRegexp(ValueError, 'axis', df.to_timestamp, axis=2)
- assertRaisesRegexp(ValueError, 'Only mult == 1', df.to_timestamp, '5t', axis=1)
+
+ result1 = df.to_timestamp('5t', axis=1)
+ result2 = df.to_timestamp('t', axis=1)
+ expected = pd.date_range('2001-01-01', '2009-01-01', freq='AS')
+ self.assertTrue(isinstance(result1.columns, DatetimeIndex))
+ self.assertTrue(isinstance(result2.columns, DatetimeIndex))
+ self.assert_numpy_array_equal(result1.columns.asi8, expected.asi8)
+ self.assert_numpy_array_equal(result2.columns.asi8, expected.asi8)
+        # PeriodIndex.to_timestamp always uses 'infer'
+ self.assertEqual(result1.columns.freqstr, 'AS-JAN')
+ self.assertEqual(result2.columns.freqstr, 'AS-JAN')
def test_index_duplicate_periods(self):
# monotonic
@@ -2007,7 +2381,13 @@ def test_asfreq(self):
self.assertEqual(pi7.asfreq('Min', 'S'), pi6)
self.assertRaises(ValueError, pi7.asfreq, 'T', 'foo')
- self.assertRaises(ValueError, pi1.asfreq, '5t')
+ result1 = pi1.asfreq('3M')
+ result2 = pi1.asfreq('M')
+ expected = PeriodIndex(freq='M', start='2001-12', end='2001-12')
+ self.assert_numpy_array_equal(result1.asi8, expected.asi8)
+ self.assertEqual(result1.freqstr, '3M')
+ self.assert_numpy_array_equal(result2.asi8, expected.asi8)
+ self.assertEqual(result2.freqstr, 'M')
def test_asfreq_nat(self):
idx = PeriodIndex(['2011-01', '2011-02', 'NaT', '2011-04'], freq='M')
@@ -2015,6 +2395,22 @@ def test_asfreq_nat(self):
expected = PeriodIndex(['2011Q1', '2011Q1', 'NaT', '2011Q2'], freq='Q')
self.assertTrue(result.equals(expected))
+ def test_asfreq_mult_pi(self):
+ pi = PeriodIndex(['2001-01', '2001-02', 'NaT', '2001-03'], freq='2M')
+
+ for freq in ['D', '3D']:
+ result = pi.asfreq(freq)
+ exp = PeriodIndex(['2001-02-28', '2001-03-31', 'NaT',
+ '2001-04-30'], freq=freq)
+ self.assert_index_equal(result, exp)
+ self.assertEqual(result.freq, exp.freq)
+
+ result = pi.asfreq(freq, how='S')
+ exp = PeriodIndex(['2001-01-01', '2001-02-01', 'NaT',
+ '2001-03-01'], freq=freq)
+ self.assert_index_equal(result, exp)
+ self.assertEqual(result.freq, exp.freq)
+
def test_period_index_length(self):
pi = PeriodIndex(freq='A', start='1/1/2001', end='12/1/2009')
assert_equal(len(pi), 9)
@@ -2120,12 +2516,19 @@ def test_dti_to_period(self):
dti = DatetimeIndex(start='1/1/2005', end='12/1/2005', freq='M')
pi1 = dti.to_period()
pi2 = dti.to_period(freq='D')
+ pi3 = dti.to_period(freq='3D')
self.assertEqual(pi1[0], Period('Jan 2005', freq='M'))
self.assertEqual(pi2[0], Period('1/31/2005', freq='D'))
+ self.assertEqual(pi3[0], Period('1/31/2005', freq='3D'))
self.assertEqual(pi1[-1], Period('Nov 2005', freq='M'))
self.assertEqual(pi2[-1], Period('11/30/2005', freq='D'))
+ self.assertEqual(pi3[-1], Period('11/30/2005', freq='3D'))
+
+ tm.assert_index_equal(pi1, period_range('1/1/2005', '11/1/2005', freq='M'))
+ tm.assert_index_equal(pi2, period_range('1/1/2005', '11/1/2005', freq='M').asfreq('D'))
+ tm.assert_index_equal(pi3, period_range('1/1/2005', '11/1/2005', freq='M').asfreq('3D'))
def test_pindex_slice_index(self):
pi = PeriodIndex(start='1/1/10', end='12/31/12', freq='M')
@@ -2217,7 +2620,6 @@ def test_getitem_seconds(self):
continue
s = Series(np.random.rand(len(idx)), index=idx)
-
assert_series_equal(s['2013/01/01 10:00'], s[3600:3660])
assert_series_equal(s['2013/01/01 9H'], s[:3600])
for d in ['2013/01/01', '2013/01', '2013']:
@@ -2318,35 +2720,35 @@ def test_to_period_monthish(self):
prng = rng.to_period()
self.assertEqual(prng.freq, 'M')
- def test_no_multiples(self):
- self.assertRaises(ValueError, period_range, '1989Q3', periods=10,
- freq='2Q')
-
- self.assertRaises(ValueError, period_range, '1989', periods=10,
- freq='2A')
- self.assertRaises(ValueError, Period, '1989', freq='2A')
-
- # def test_pindex_multiples(self):
- # pi = PeriodIndex(start='1/1/10', end='12/31/12', freq='2M')
- # self.assertEqual(pi[0], Period('1/1/10', '2M'))
- # self.assertEqual(pi[1], Period('3/1/10', '2M'))
-
- # self.assertEqual(pi[0].asfreq('6M'), pi[2].asfreq('6M'))
- # self.assertEqual(pi[0].asfreq('A'), pi[2].asfreq('A'))
-
- # self.assertEqual(pi[0].asfreq('M', how='S'),
- # Period('Jan 2010', '1M'))
- # self.assertEqual(pi[0].asfreq('M', how='E'),
- # Period('Feb 2010', '1M'))
- # self.assertEqual(pi[1].asfreq('M', how='S'),
- # Period('Mar 2010', '1M'))
-
- # i = Period('1/1/2010 12:05:18', '5S')
- # self.assertEqual(i, Period('1/1/2010 12:05:15', '5S'))
-
- # i = Period('1/1/2010 12:05:18', '5S')
- # self.assertEqual(i.asfreq('1S', how='E'),
- # Period('1/1/2010 12:05:19', '1S'))
+ def test_multiples(self):
+ result1 = Period('1989', freq='2A')
+ result2 = Period('1989', freq='A')
+ self.assertEqual(result1.ordinal, result2.ordinal)
+ self.assertEqual(result1.freqstr, '2A-DEC')
+ self.assertEqual(result2.freqstr, 'A-DEC')
+ self.assertEqual(result1.freq, offsets.YearEnd(2))
+ self.assertEqual(result2.freq, offsets.YearEnd())
+
+ self.assertEqual((result1 + 1).ordinal, result1.ordinal + 2)
+ self.assertEqual((result1 - 1).ordinal, result2.ordinal - 2)
+
+ def test_pindex_multiples(self):
+ pi = PeriodIndex(start='1/1/11', end='12/31/11', freq='2M')
+ expected = PeriodIndex(['2011-01', '2011-03', '2011-05', '2011-07',
+ '2011-09', '2011-11'], freq='M')
+ tm.assert_index_equal(pi, expected)
+ self.assertEqual(pi.freq, offsets.MonthEnd(2))
+ self.assertEqual(pi.freqstr, '2M')
+
+ pi = period_range(start='1/1/11', end='12/31/11', freq='2M')
+ tm.assert_index_equal(pi, expected)
+ self.assertEqual(pi.freq, offsets.MonthEnd(2))
+ self.assertEqual(pi.freqstr, '2M')
+
+ pi = period_range(start='1/1/11', periods=6, freq='2M')
+ tm.assert_index_equal(pi, expected)
+ self.assertEqual(pi.freq, offsets.MonthEnd(2))
+ self.assertEqual(pi.freqstr, '2M')
def test_iteration(self):
index = PeriodIndex(start='1/1/10', periods=4, freq='B')
@@ -2412,7 +2814,8 @@ def test_align_series(self):
# it works!
for kind in ['inner', 'outer', 'left', 'right']:
ts.align(ts[::2], join=kind)
- with assertRaisesRegexp(ValueError, 'Only like-indexed'):
+ msg = "Input has different freq=D from PeriodIndex\\(freq=A-DEC\\)"
+ with assertRaisesRegexp(ValueError, msg):
ts + ts.asfreq('D', how="end")
def test_align_frame(self):
@@ -2444,6 +2847,9 @@ def test_union(self):
self.assertRaises(ValueError, index.join, index.to_timestamp())
+ index3 = period_range('1/1/2000', '1/20/2000', freq='2D')
+ self.assertRaises(ValueError, index.join, index3)
+
def test_intersection(self):
index = period_range('1/1/2000', '1/20/2000', freq='D')
@@ -2461,6 +2867,9 @@ def test_intersection(self):
index2 = period_range('1/1/2000', '1/20/2000', freq='W-WED')
self.assertRaises(ValueError, index.intersection, index2)
+ index3 = period_range('1/1/2000', '1/20/2000', freq='2D')
+ self.assertRaises(ValueError, index.intersection, index3)
+
def test_fields(self):
# year, month, day, hour, minute
# second, weekofyear, week, dayofweek, weekday, dayofyear, quarter
@@ -2614,7 +3023,8 @@ def test_pickle_freq(self):
# GH2891
prng = period_range('1/1/2011', '1/1/2012', freq='M')
new_prng = self.round_trip_pickle(prng)
- self.assertEqual(new_prng.freq,'M')
+ self.assertEqual(new_prng.freq, offsets.MonthEnd())
+ self.assertEqual(new_prng.freqstr, 'M')
def test_slice_keep_name(self):
idx = period_range('20010101', periods=10, freq='D', name='bob')
@@ -2669,12 +3079,24 @@ def test_combine_first(self):
tm.assert_series_equal(result, expected)
def test_searchsorted(self):
- pidx = pd.period_range('2014-01-01', periods=10, freq='D')
- self.assertEqual(
- pidx.searchsorted(pd.Period('2014-01-01', freq='D')), 0)
- self.assertRaisesRegexp(
- ValueError, 'Different period frequency: H',
- lambda: pidx.searchsorted(pd.Period('2014-01-01', freq='H')))
+ for freq in ['D', '2D']:
+ pidx = pd.PeriodIndex(['2014-01-01', '2014-01-02', '2014-01-03',
+ '2014-01-04', '2014-01-05'], freq=freq)
+
+ p1 = pd.Period('2014-01-01', freq=freq)
+ self.assertEqual(pidx.searchsorted(p1), 0)
+
+ p2 = pd.Period('2014-01-04', freq=freq)
+ self.assertEqual(pidx.searchsorted(p2), 3)
+
+ msg = "Input has different freq=H from PeriodIndex"
+ with self.assertRaisesRegexp(ValueError, msg):
+ pidx.searchsorted(pd.Period('2014-01-01', freq='H'))
+
+ msg = "Input has different freq=5D from PeriodIndex"
+ with self.assertRaisesRegexp(ValueError, msg):
+ pidx.searchsorted(pd.Period('2014-01-01', freq='5D'))
+
def test_round_trip(self):
@@ -2704,186 +3126,203 @@ def test_add(self):
def test_add_offset(self):
# freq is DateOffset
- p = Period('2011', freq='A')
- self.assertEqual(p + offsets.YearEnd(2), Period('2013', freq='A'))
+ for freq in ['A', '2A', '3A']:
+ p = Period('2011', freq=freq)
+ self.assertEqual(p + offsets.YearEnd(2), Period('2013', freq=freq))
- for o in [offsets.YearBegin(2), offsets.MonthBegin(1), offsets.Minute(),
- np.timedelta64(365, 'D'), timedelta(365)]:
- with tm.assertRaisesRegexp(ValueError, 'Input has different freq from Period'):
- p + o
+ for o in [offsets.YearBegin(2), offsets.MonthBegin(1), offsets.Minute(),
+ np.timedelta64(365, 'D'), timedelta(365)]:
+ with tm.assertRaises(ValueError):
+ p + o
- p = Period('2011-03', freq='M')
- self.assertEqual(p + offsets.MonthEnd(2), Period('2011-05', freq='M'))
- self.assertEqual(p + offsets.MonthEnd(12), Period('2012-03', freq='M'))
+ for freq in ['M', '2M', '3M']:
+ p = Period('2011-03', freq=freq)
+ self.assertEqual(p + offsets.MonthEnd(2), Period('2011-05', freq=freq))
+ self.assertEqual(p + offsets.MonthEnd(12), Period('2012-03', freq=freq))
- for o in [offsets.YearBegin(2), offsets.MonthBegin(1), offsets.Minute(),
- np.timedelta64(365, 'D'), timedelta(365)]:
- with tm.assertRaisesRegexp(ValueError, 'Input has different freq from Period'):
- p + o
+ for o in [offsets.YearBegin(2), offsets.MonthBegin(1), offsets.Minute(),
+ np.timedelta64(365, 'D'), timedelta(365)]:
+ with tm.assertRaises(ValueError):
+ p + o
# freq is Tick
- p = Period('2011-04-01', freq='D')
- self.assertEqual(p + offsets.Day(5), Period('2011-04-06', freq='D'))
- self.assertEqual(p + offsets.Hour(24), Period('2011-04-02', freq='D'))
- self.assertEqual(p + np.timedelta64(2, 'D'), Period('2011-04-03', freq='D'))
- self.assertEqual(p + np.timedelta64(3600 * 24, 's'), Period('2011-04-02', freq='D'))
- self.assertEqual(p + timedelta(-2), Period('2011-03-30', freq='D'))
- self.assertEqual(p + timedelta(hours=48), Period('2011-04-03', freq='D'))
-
- for o in [offsets.YearBegin(2), offsets.MonthBegin(1), offsets.Minute(),
- np.timedelta64(4, 'h'), timedelta(hours=23)]:
- with tm.assertRaisesRegexp(ValueError, 'Input has different freq from Period'):
- p + o
-
- p = Period('2011-04-01 09:00', freq='H')
- self.assertEqual(p + offsets.Day(2), Period('2011-04-03 09:00', freq='H'))
- self.assertEqual(p + offsets.Hour(3), Period('2011-04-01 12:00', freq='H'))
- self.assertEqual(p + np.timedelta64(3, 'h'), Period('2011-04-01 12:00', freq='H'))
- self.assertEqual(p + np.timedelta64(3600, 's'), Period('2011-04-01 10:00', freq='H'))
- self.assertEqual(p + timedelta(minutes=120), Period('2011-04-01 11:00', freq='H'))
- self.assertEqual(p + timedelta(days=4, minutes=180), Period('2011-04-05 12:00', freq='H'))
-
- for o in [offsets.YearBegin(2), offsets.MonthBegin(1), offsets.Minute(),
- np.timedelta64(3200, 's'), timedelta(hours=23, minutes=30)]:
- with tm.assertRaisesRegexp(ValueError, 'Input has different freq from Period'):
- p + o
+ for freq in ['D', '2D', '3D']:
+ p = Period('2011-04-01', freq=freq)
+ self.assertEqual(p + offsets.Day(5), Period('2011-04-06', freq=freq))
+ self.assertEqual(p + offsets.Hour(24), Period('2011-04-02', freq=freq))
+ self.assertEqual(p + np.timedelta64(2, 'D'), Period('2011-04-03', freq=freq))
+ self.assertEqual(p + np.timedelta64(3600 * 24, 's'), Period('2011-04-02', freq=freq))
+ self.assertEqual(p + timedelta(-2), Period('2011-03-30', freq=freq))
+ self.assertEqual(p + timedelta(hours=48), Period('2011-04-03', freq=freq))
+
+ for o in [offsets.YearBegin(2), offsets.MonthBegin(1), offsets.Minute(),
+ np.timedelta64(4, 'h'), timedelta(hours=23)]:
+ with tm.assertRaises(ValueError):
+ p + o
+
+ for freq in ['H', '2H', '3H']:
+ p = Period('2011-04-01 09:00', freq=freq)
+ self.assertEqual(p + offsets.Day(2), Period('2011-04-03 09:00', freq=freq))
+ self.assertEqual(p + offsets.Hour(3), Period('2011-04-01 12:00', freq=freq))
+ self.assertEqual(p + np.timedelta64(3, 'h'), Period('2011-04-01 12:00', freq=freq))
+ self.assertEqual(p + np.timedelta64(3600, 's'), Period('2011-04-01 10:00', freq=freq))
+ self.assertEqual(p + timedelta(minutes=120), Period('2011-04-01 11:00', freq=freq))
+ self.assertEqual(p + timedelta(days=4, minutes=180), Period('2011-04-05 12:00', freq=freq))
+
+ for o in [offsets.YearBegin(2), offsets.MonthBegin(1), offsets.Minute(),
+ np.timedelta64(3200, 's'), timedelta(hours=23, minutes=30)]:
+ with tm.assertRaises(ValueError):
+ p + o
def test_add_offset_nat(self):
# freq is DateOffset
- p = Period('NaT', freq='A')
- for o in [offsets.YearEnd(2)]:
- self.assertEqual((p + o).ordinal, tslib.iNaT)
-
- for o in [offsets.YearBegin(2), offsets.MonthBegin(1), offsets.Minute(),
- np.timedelta64(365, 'D'), timedelta(365)]:
- with tm.assertRaises(ValueError):
- p + o
-
- p = Period('NaT', freq='M')
- for o in [offsets.MonthEnd(2), offsets.MonthEnd(12)]:
- self.assertEqual((p + o).ordinal, tslib.iNaT)
-
- for o in [offsets.YearBegin(2), offsets.MonthBegin(1), offsets.Minute(),
- np.timedelta64(365, 'D'), timedelta(365)]:
- with tm.assertRaisesRegexp(ValueError, 'Input has different freq from Period'):
- p + o
+ for freq in ['A', '2A', '3A']:
+ p = Period('NaT', freq=freq)
+ for o in [offsets.YearEnd(2)]:
+ self.assertEqual((p + o).ordinal, tslib.iNaT)
+
+ for o in [offsets.YearBegin(2), offsets.MonthBegin(1), offsets.Minute(),
+ np.timedelta64(365, 'D'), timedelta(365)]:
+ with tm.assertRaises(ValueError):
+ p + o
+
+ for freq in ['M', '2M', '3M']:
+ p = Period('NaT', freq=freq)
+ for o in [offsets.MonthEnd(2), offsets.MonthEnd(12)]:
+ self.assertEqual((p + o).ordinal, tslib.iNaT)
+
+ for o in [offsets.YearBegin(2), offsets.MonthBegin(1), offsets.Minute(),
+ np.timedelta64(365, 'D'), timedelta(365)]:
+ with tm.assertRaises(ValueError):
+ p + o
# freq is Tick
- p = Period('NaT', freq='D')
- for o in [offsets.Day(5), offsets.Hour(24), np.timedelta64(2, 'D'),
- np.timedelta64(3600 * 24, 's'), timedelta(-2), timedelta(hours=48)]:
- self.assertEqual((p + o).ordinal, tslib.iNaT)
-
- for o in [offsets.YearBegin(2), offsets.MonthBegin(1), offsets.Minute(),
- np.timedelta64(4, 'h'), timedelta(hours=23)]:
- with tm.assertRaisesRegexp(ValueError, 'Input has different freq from Period'):
- p + o
-
- p = Period('NaT', freq='H')
- for o in [offsets.Day(2), offsets.Hour(3), np.timedelta64(3, 'h'),
- np.timedelta64(3600, 's'), timedelta(minutes=120),
- timedelta(days=4, minutes=180)]:
- self.assertEqual((p + o).ordinal, tslib.iNaT)
-
- for o in [offsets.YearBegin(2), offsets.MonthBegin(1), offsets.Minute(),
- np.timedelta64(3200, 's'), timedelta(hours=23, minutes=30)]:
- with tm.assertRaisesRegexp(ValueError, 'Input has different freq from Period'):
- p + o
+ for freq in ['D', '2D', '3D']:
+ p = Period('NaT', freq=freq)
+ for o in [offsets.Day(5), offsets.Hour(24), np.timedelta64(2, 'D'),
+ np.timedelta64(3600 * 24, 's'), timedelta(-2), timedelta(hours=48)]:
+ self.assertEqual((p + o).ordinal, tslib.iNaT)
+
+ for o in [offsets.YearBegin(2), offsets.MonthBegin(1), offsets.Minute(),
+ np.timedelta64(4, 'h'), timedelta(hours=23)]:
+ with tm.assertRaises(ValueError):
+ p + o
+
+ for freq in ['H', '2H', '3H']:
+ p = Period('NaT', freq=freq)
+ for o in [offsets.Day(2), offsets.Hour(3), np.timedelta64(3, 'h'),
+ np.timedelta64(3600, 's'), timedelta(minutes=120),
+ timedelta(days=4, minutes=180)]:
+ self.assertEqual((p + o).ordinal, tslib.iNaT)
+
+ for o in [offsets.YearBegin(2), offsets.MonthBegin(1), offsets.Minute(),
+ np.timedelta64(3200, 's'), timedelta(hours=23, minutes=30)]:
+ with tm.assertRaises(ValueError):
+ p + o
def test_sub_offset(self):
# freq is DateOffset
- p = Period('2011', freq='A')
- self.assertEqual(p - offsets.YearEnd(2), Period('2009', freq='A'))
+ for freq in ['A', '2A', '3A']:
+ p = Period('2011', freq=freq)
+ self.assertEqual(p - offsets.YearEnd(2), Period('2009', freq=freq))
- for o in [offsets.YearBegin(2), offsets.MonthBegin(1), offsets.Minute(),
- np.timedelta64(365, 'D'), timedelta(365)]:
- with tm.assertRaises(ValueError):
- p - o
+ for o in [offsets.YearBegin(2), offsets.MonthBegin(1), offsets.Minute(),
+ np.timedelta64(365, 'D'), timedelta(365)]:
+ with tm.assertRaises(ValueError):
+ p - o
- p = Period('2011-03', freq='M')
- self.assertEqual(p - offsets.MonthEnd(2), Period('2011-01', freq='M'))
- self.assertEqual(p - offsets.MonthEnd(12), Period('2010-03', freq='M'))
+ for freq in ['M', '2M', '3M']:
+ p = Period('2011-03', freq=freq)
+ self.assertEqual(p - offsets.MonthEnd(2), Period('2011-01', freq=freq))
+ self.assertEqual(p - offsets.MonthEnd(12), Period('2010-03', freq=freq))
- for o in [offsets.YearBegin(2), offsets.MonthBegin(1), offsets.Minute(),
- np.timedelta64(365, 'D'), timedelta(365)]:
- with tm.assertRaises(ValueError):
- p - o
+ for o in [offsets.YearBegin(2), offsets.MonthBegin(1), offsets.Minute(),
+ np.timedelta64(365, 'D'), timedelta(365)]:
+ with tm.assertRaises(ValueError):
+ p - o
# freq is Tick
- p = Period('2011-04-01', freq='D')
- self.assertEqual(p - offsets.Day(5), Period('2011-03-27', freq='D'))
- self.assertEqual(p - offsets.Hour(24), Period('2011-03-31', freq='D'))
- self.assertEqual(p - np.timedelta64(2, 'D'), Period('2011-03-30', freq='D'))
- self.assertEqual(p - np.timedelta64(3600 * 24, 's'), Period('2011-03-31', freq='D'))
- self.assertEqual(p - timedelta(-2), Period('2011-04-03', freq='D'))
- self.assertEqual(p - timedelta(hours=48), Period('2011-03-30', freq='D'))
-
- for o in [offsets.YearBegin(2), offsets.MonthBegin(1), offsets.Minute(),
- np.timedelta64(4, 'h'), timedelta(hours=23)]:
- with tm.assertRaises(ValueError):
- p - o
-
- p = Period('2011-04-01 09:00', freq='H')
- self.assertEqual(p - offsets.Day(2), Period('2011-03-30 09:00', freq='H'))
- self.assertEqual(p - offsets.Hour(3), Period('2011-04-01 06:00', freq='H'))
- self.assertEqual(p - np.timedelta64(3, 'h'), Period('2011-04-01 06:00', freq='H'))
- self.assertEqual(p - np.timedelta64(3600, 's'), Period('2011-04-01 08:00', freq='H'))
- self.assertEqual(p - timedelta(minutes=120), Period('2011-04-01 07:00', freq='H'))
- self.assertEqual(p - timedelta(days=4, minutes=180), Period('2011-03-28 06:00', freq='H'))
-
- for o in [offsets.YearBegin(2), offsets.MonthBegin(1), offsets.Minute(),
- np.timedelta64(3200, 's'), timedelta(hours=23, minutes=30)]:
- with tm.assertRaises(ValueError):
- p - o
+ for freq in ['D', '2D', '3D']:
+ p = Period('2011-04-01', freq=freq)
+ self.assertEqual(p - offsets.Day(5), Period('2011-03-27', freq=freq))
+ self.assertEqual(p - offsets.Hour(24), Period('2011-03-31', freq=freq))
+ self.assertEqual(p - np.timedelta64(2, 'D'), Period('2011-03-30', freq=freq))
+ self.assertEqual(p - np.timedelta64(3600 * 24, 's'), Period('2011-03-31', freq=freq))
+ self.assertEqual(p - timedelta(-2), Period('2011-04-03', freq=freq))
+ self.assertEqual(p - timedelta(hours=48), Period('2011-03-30', freq=freq))
+
+ for o in [offsets.YearBegin(2), offsets.MonthBegin(1), offsets.Minute(),
+ np.timedelta64(4, 'h'), timedelta(hours=23)]:
+ with tm.assertRaises(ValueError):
+ p - o
+
+ for freq in ['H', '2H', '3H']:
+ p = Period('2011-04-01 09:00', freq=freq)
+ self.assertEqual(p - offsets.Day(2), Period('2011-03-30 09:00', freq=freq))
+ self.assertEqual(p - offsets.Hour(3), Period('2011-04-01 06:00', freq=freq))
+ self.assertEqual(p - np.timedelta64(3, 'h'), Period('2011-04-01 06:00', freq=freq))
+ self.assertEqual(p - np.timedelta64(3600, 's'), Period('2011-04-01 08:00', freq=freq))
+ self.assertEqual(p - timedelta(minutes=120), Period('2011-04-01 07:00', freq=freq))
+ self.assertEqual(p - timedelta(days=4, minutes=180), Period('2011-03-28 06:00', freq=freq))
+
+ for o in [offsets.YearBegin(2), offsets.MonthBegin(1), offsets.Minute(),
+ np.timedelta64(3200, 's'), timedelta(hours=23, minutes=30)]:
+ with tm.assertRaises(ValueError):
+ p - o
def test_sub_offset_nat(self):
# freq is DateOffset
- p = Period('NaT', freq='A')
- for o in [offsets.YearEnd(2)]:
- self.assertEqual((p - o).ordinal, tslib.iNaT)
-
- for o in [offsets.YearBegin(2), offsets.MonthBegin(1), offsets.Minute(),
- np.timedelta64(365, 'D'), timedelta(365)]:
- with tm.assertRaises(ValueError):
- p - o
-
- p = Period('NaT', freq='M')
- for o in [offsets.MonthEnd(2), offsets.MonthEnd(12)]:
- self.assertEqual((p - o).ordinal, tslib.iNaT)
-
- for o in [offsets.YearBegin(2), offsets.MonthBegin(1), offsets.Minute(),
- np.timedelta64(365, 'D'), timedelta(365)]:
- with tm.assertRaises(ValueError):
- p - o
+ for freq in ['A', '2A', '3A']:
+ p = Period('NaT', freq=freq)
+ for o in [offsets.YearEnd(2)]:
+ self.assertEqual((p - o).ordinal, tslib.iNaT)
+
+ for o in [offsets.YearBegin(2), offsets.MonthBegin(1), offsets.Minute(),
+ np.timedelta64(365, 'D'), timedelta(365)]:
+ with tm.assertRaises(ValueError):
+ p - o
+
+ for freq in ['M', '2M', '3M']:
+ p = Period('NaT', freq=freq)
+ for o in [offsets.MonthEnd(2), offsets.MonthEnd(12)]:
+ self.assertEqual((p - o).ordinal, tslib.iNaT)
+
+ for o in [offsets.YearBegin(2), offsets.MonthBegin(1), offsets.Minute(),
+ np.timedelta64(365, 'D'), timedelta(365)]:
+ with tm.assertRaises(ValueError):
+ p - o
# freq is Tick
- p = Period('NaT', freq='D')
- for o in [offsets.Day(5), offsets.Hour(24), np.timedelta64(2, 'D'),
- np.timedelta64(3600 * 24, 's'), timedelta(-2), timedelta(hours=48)]:
- self.assertEqual((p - o).ordinal, tslib.iNaT)
-
- for o in [offsets.YearBegin(2), offsets.MonthBegin(1), offsets.Minute(),
- np.timedelta64(4, 'h'), timedelta(hours=23)]:
- with tm.assertRaises(ValueError):
- p - o
-
- p = Period('NaT', freq='H')
- for o in [offsets.Day(2), offsets.Hour(3), np.timedelta64(3, 'h'),
- np.timedelta64(3600, 's'), timedelta(minutes=120),
- timedelta(days=4, minutes=180)]:
- self.assertEqual((p - o).ordinal, tslib.iNaT)
-
- for o in [offsets.YearBegin(2), offsets.MonthBegin(1), offsets.Minute(),
- np.timedelta64(3200, 's'), timedelta(hours=23, minutes=30)]:
- with tm.assertRaises(ValueError):
- p - o
+ for freq in ['D', '2D', '3D']:
+ p = Period('NaT', freq=freq)
+ for o in [offsets.Day(5), offsets.Hour(24), np.timedelta64(2, 'D'),
+ np.timedelta64(3600 * 24, 's'), timedelta(-2), timedelta(hours=48)]:
+ self.assertEqual((p - o).ordinal, tslib.iNaT)
+
+ for o in [offsets.YearBegin(2), offsets.MonthBegin(1), offsets.Minute(),
+ np.timedelta64(4, 'h'), timedelta(hours=23)]:
+ with tm.assertRaises(ValueError):
+ p - o
+
+ for freq in ['H', '2H', '3H']:
+ p = Period('NaT', freq=freq)
+ for o in [offsets.Day(2), offsets.Hour(3), np.timedelta64(3, 'h'),
+ np.timedelta64(3600, 's'), timedelta(minutes=120),
+ timedelta(days=4, minutes=180)]:
+ self.assertEqual((p - o).ordinal, tslib.iNaT)
+
+ for o in [offsets.YearBegin(2), offsets.MonthBegin(1), offsets.Minute(),
+ np.timedelta64(3200, 's'), timedelta(hours=23, minutes=30)]:
+ with tm.assertRaises(ValueError):
+ p - o
def test_nat_ops(self):
- p = Period('NaT', freq='M')
- self.assertEqual((p + 1).ordinal, tslib.iNaT)
- self.assertEqual((p - 1).ordinal, tslib.iNaT)
- self.assertEqual((p - Period('2011-01', freq='M')).ordinal, tslib.iNaT)
- self.assertEqual((Period('2011-01', freq='M') - p).ordinal, tslib.iNaT)
+ for freq in ['M', '2M', '3M']:
+ p = Period('NaT', freq=freq)
+ self.assertEqual((p + 1).ordinal, tslib.iNaT)
+ self.assertEqual((p - 1).ordinal, tslib.iNaT)
+ self.assertEqual((p - Period('2011-01', freq=freq)).ordinal, tslib.iNaT)
+ self.assertEqual((Period('2011-01', freq=freq) - p).ordinal, tslib.iNaT)
def test_pi_ops_nat(self):
idx = PeriodIndex(['2011-01', '2011-02', 'NaT', '2011-04'], freq='M', name='idx')
@@ -3042,27 +3481,112 @@ def test_period_nat_comp(self):
self.assertEqual(left <= right, False)
self.assertEqual(left >= right, False)
- def test_pi_nat_comp(self):
- idx1 = PeriodIndex(['2011-01', '2011-02', 'NaT', '2011-05'], freq='M')
+ def test_pi_pi_comp(self):
+
+ for freq in ['M', '2M', '3M']:
+ base = PeriodIndex(['2011-01', '2011-02',
+ '2011-03', '2011-04'], freq=freq)
+ p = Period('2011-02', freq=freq)
+
+ exp = np.array([False, True, False, False])
+ self.assert_numpy_array_equal(base == p, exp)
+
+ exp = np.array([True, False, True, True])
+ self.assert_numpy_array_equal(base != p, exp)
+
+ exp = np.array([False, False, True, True])
+ self.assert_numpy_array_equal(base > p, exp)
+
+ exp = np.array([True, False, False, False])
+ self.assert_numpy_array_equal(base < p, exp)
+
+ exp = np.array([False, True, True, True])
+ self.assert_numpy_array_equal(base >= p, exp)
- result = idx1 > Period('2011-02', freq='M')
- self.assert_numpy_array_equal(result, np.array([False, False, False, True]))
+ exp = np.array([True, True, False, False])
+ self.assert_numpy_array_equal(base <= p, exp)
- result = idx1 == Period('NaT', freq='M')
- self.assert_numpy_array_equal(result, np.array([False, False, False, False]))
+ idx = PeriodIndex(['2011-02', '2011-01', '2011-03', '2011-05'], freq=freq)
- result = idx1 != Period('NaT', freq='M')
- self.assert_numpy_array_equal(result, np.array([True, True, True, True]))
+ exp = np.array([False, False, True, False])
+ self.assert_numpy_array_equal(base == idx, exp)
- idx2 = PeriodIndex(['2011-02', '2011-01', '2011-04', 'NaT'], freq='M')
- result = idx1 < idx2
- self.assert_numpy_array_equal(result, np.array([True, False, False, False]))
+ exp = np.array([True, True, False, True])
+ self.assert_numpy_array_equal(base != idx, exp)
- result = idx1 == idx1
- self.assert_numpy_array_equal(result, np.array([True, True, False, True]))
+ exp = np.array([False, True, False, False])
+ self.assert_numpy_array_equal(base > idx, exp)
- result = idx1 != idx1
- self.assert_numpy_array_equal(result, np.array([False, False, True, False]))
+ exp = np.array([True, False, False, True])
+ self.assert_numpy_array_equal(base < idx, exp)
+
+ exp = np.array([False, True, True, False])
+ self.assert_numpy_array_equal(base >= idx, exp)
+
+ exp = np.array([True, False, True, True])
+ self.assert_numpy_array_equal(base <= idx, exp)
+
+ # different base freq
+ msg = "Input has different freq=A-DEC from PeriodIndex"
+ with tm.assertRaisesRegexp(ValueError, msg):
+ base <= Period('2011', freq='A')
+
+ with tm.assertRaisesRegexp(ValueError, msg):
+ idx = PeriodIndex(['2011', '2012', '2013', '2014'], freq='A')
+ base <= idx
+
+ # different mult
+ msg = "Input has different freq=4M from PeriodIndex"
+ with tm.assertRaisesRegexp(ValueError, msg):
+ base <= Period('2011', freq='4M')
+
+ with tm.assertRaisesRegexp(ValueError, msg):
+ idx = PeriodIndex(['2011', '2012', '2013', '2014'], freq='4M')
+ base <= idx
+
+ def test_pi_nat_comp(self):
+ for freq in ['M', '2M', '3M']:
+ idx1 = PeriodIndex(['2011-01', '2011-02', 'NaT', '2011-05'], freq=freq)
+
+ result = idx1 > Period('2011-02', freq=freq)
+ exp = np.array([False, False, False, True])
+ self.assert_numpy_array_equal(result, exp)
+
+ result = idx1 == Period('NaT', freq=freq)
+ exp = np.array([False, False, False, False])
+ self.assert_numpy_array_equal(result, exp)
+
+ result = idx1 != Period('NaT', freq=freq)
+ exp = np.array([True, True, True, True])
+ self.assert_numpy_array_equal(result, exp)
+
+ idx2 = PeriodIndex(['2011-02', '2011-01', '2011-04', 'NaT'], freq=freq)
+ result = idx1 < idx2
+ exp = np.array([True, False, False, False])
+ self.assert_numpy_array_equal(result, exp)
+
+ result = idx1 == idx2
+ exp = np.array([False, False, False, False])
+ self.assert_numpy_array_equal(result, exp)
+
+ result = idx1 != idx2
+ exp = np.array([True, True, True, True])
+ self.assert_numpy_array_equal(result, exp)
+
+ result = idx1 == idx1
+ exp = np.array([True, True, False, True])
+ self.assert_numpy_array_equal(result, exp)
+
+ result = idx1 != idx1
+ exp = np.array([False, False, True, False])
+ self.assert_numpy_array_equal(result, exp)
+
+ diff = PeriodIndex(['2011-02', '2011-01', '2011-04', 'NaT'], freq='4M')
+ msg = "Input has different freq=4M from PeriodIndex"
+ with tm.assertRaisesRegexp(ValueError, msg):
+ idx1 > diff
+ with tm.assertRaisesRegexp(ValueError, msg):
+ idx1 == diff
if __name__ == '__main__':
diff --git a/pandas/tseries/tests/test_plotting.py b/pandas/tseries/tests/test_plotting.py
index 08a4056c1fce2..d9b31c0a1d620 100644
--- a/pandas/tseries/tests/test_plotting.py
+++ b/pandas/tseries/tests/test_plotting.py
@@ -9,7 +9,7 @@
from pandas import Index, Series, DataFrame
from pandas.tseries.index import date_range, bdate_range
-from pandas.tseries.offsets import DateOffset
+from pandas.tseries.offsets import DateOffset, Week
from pandas.tseries.period import period_range, Period, PeriodIndex
from pandas.tseries.resample import DatetimeIndex
@@ -758,7 +758,7 @@ def test_to_weekly_resampling(self):
high.plot()
ax = low.plot()
for l in ax.get_lines():
- self.assertTrue(PeriodIndex(data=l.get_xdata()).freq.startswith('W'))
+ self.assertEqual(PeriodIndex(data=l.get_xdata()).freq, idxh.freq)
# tsplot
from pandas.tseries.plotting import tsplot
@@ -767,7 +767,7 @@ def test_to_weekly_resampling(self):
tsplot(high, plt.Axes.plot)
lines = tsplot(low, plt.Axes.plot)
for l in lines:
- self.assertTrue(PeriodIndex(data=l.get_xdata()).freq.startswith('W'))
+ self.assertEqual(PeriodIndex(data=l.get_xdata()).freq, idxh.freq)
@slow
def test_from_weekly_resampling(self):
@@ -782,7 +782,7 @@ def test_from_weekly_resampling(self):
expected_l = np.array([1514, 1519, 1523, 1527, 1531, 1536, 1540, 1544, 1549,
1553, 1558, 1562])
for l in ax.get_lines():
- self.assertTrue(PeriodIndex(data=l.get_xdata()).freq.startswith('W'))
+ self.assertEqual(PeriodIndex(data=l.get_xdata()).freq, idxh.freq)
xdata = l.get_xdata(orig=False)
if len(xdata) == 12: # idxl lines
self.assert_numpy_array_equal(xdata, expected_l)
@@ -796,9 +796,8 @@ def test_from_weekly_resampling(self):
tsplot(low, plt.Axes.plot)
lines = tsplot(high, plt.Axes.plot)
-
for l in lines:
- self.assertTrue(PeriodIndex(data=l.get_xdata()).freq.startswith('W'))
+ self.assertEqual(PeriodIndex(data=l.get_xdata()).freq, idxh.freq)
xdata = l.get_xdata(orig=False)
if len(xdata) == 12: # idxl lines
self.assert_numpy_array_equal(xdata, expected_l)
@@ -825,7 +824,7 @@ def test_from_resampling_area_line_mixed(self):
expected_y = np.zeros(len(expected_x))
for i in range(3):
l = ax.lines[i]
- self.assertTrue(PeriodIndex(data=l.get_xdata()).freq.startswith('W'))
+ self.assertEqual(PeriodIndex(l.get_xdata()).freq, idxh.freq)
self.assert_numpy_array_equal(l.get_xdata(orig=False), expected_x)
# check stacked values are correct
expected_y += low[i].values
@@ -836,7 +835,7 @@ def test_from_resampling_area_line_mixed(self):
expected_y = np.zeros(len(expected_x))
for i in range(3):
l = ax.lines[3 + i]
- self.assertTrue(PeriodIndex(data=l.get_xdata()).freq.startswith('W'))
+ self.assertEqual(PeriodIndex(data=l.get_xdata()).freq, idxh.freq)
self.assert_numpy_array_equal(l.get_xdata(orig=False), expected_x)
expected_y += high[i].values
self.assert_numpy_array_equal(l.get_ydata(orig=False), expected_y)
@@ -851,7 +850,7 @@ def test_from_resampling_area_line_mixed(self):
expected_y = np.zeros(len(expected_x))
for i in range(3):
l = ax.lines[i]
- self.assertTrue(PeriodIndex(data=l.get_xdata()).freq.startswith('W'))
+ self.assertEqual(PeriodIndex(data=l.get_xdata()).freq, idxh.freq)
self.assert_numpy_array_equal(l.get_xdata(orig=False), expected_x)
expected_y += high[i].values
self.assert_numpy_array_equal(l.get_ydata(orig=False), expected_y)
@@ -862,7 +861,7 @@ def test_from_resampling_area_line_mixed(self):
expected_y = np.zeros(len(expected_x))
for i in range(3):
l = ax.lines[3 + i]
- self.assertTrue(PeriodIndex(data=l.get_xdata()).freq.startswith('W'))
+ self.assertEqual(PeriodIndex(data=l.get_xdata()).freq, idxh.freq)
self.assert_numpy_array_equal(l.get_xdata(orig=False), expected_x)
expected_y += low[i].values
self.assert_numpy_array_equal(l.get_ydata(orig=False), expected_y)
diff --git a/pandas/tseries/tools.py b/pandas/tseries/tools.py
index efd1ff9ba34fd..521679f21dc93 100644
--- a/pandas/tseries/tools.py
+++ b/pandas/tseries/tools.py
@@ -449,6 +449,10 @@ def parse_time_string(arg, freq=None, dayfirst=None, yearfirst=None):
if not isinstance(arg, compat.string_types):
return arg
+ from pandas.tseries.offsets import DateOffset
+ if isinstance(freq, DateOffset):
+ freq = freq.rule_code
+
if dayfirst is None:
dayfirst = get_option("display.date_dayfirst")
if yearfirst is None:
diff --git a/pandas/tslib.pyx b/pandas/tslib.pyx
index 77ac362181a2b..a914eb992d88f 100644
--- a/pandas/tslib.pyx
+++ b/pandas/tslib.pyx
@@ -1799,6 +1799,19 @@ _MONTH_ALIASES = dict((k + 1, v) for k, v in enumerate(_MONTHS))
cpdef object _get_rule_month(object source, object default='DEC'):
+ """
+ Return starting month of given freq, default is December.
+
+ Example
+ -------
+ >>> _get_rule_month('D')
+ 'DEC'
+
+ >>> _get_rule_month('A-JAN')
+ 'JAN'
+ """
+ if hasattr(source, 'freqstr'):
+ source = source.freqstr
source = source.upper()
if '-' not in source:
return default
| Closes #7811.
- [x] Change `Period.freq` and `PeriodIndex.freq` to store offsets.
- [x] Add `freqstr` to `Period` so `PeriodIndex` can use `DatetimeIndexOpsMixin`'s logic
- [x] Logic and tests for pickles created in previous versions
- [x] Perform shift/arithmetic considering freq's mult.
- [x] Test that all offsets have accessible `n` properties (#10350)
- [x] Explicit tests for `asfreq` and `to_timestamp` using freq with mult.
- [x] Ops with different base freq must be prohibited. Change freq comparison to base comparison.
- [x] Fix partial string slicing bug (some tests are commented out because of this)
- [x] Fix order bug (#10295, removing temp logic for `PeriodIndex`)
- ~~Decide a policy for legacy freq aliases, like "WK"~~ (handled in #10878)
- [x] Update doc.
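The `_get_rule_month` docstring added in the diff above describes extracting the anchor month from a frequency string. A minimal plain-Python sketch of that behavior (not the actual pandas/Cython code; the function name here is illustrative):

```python
def get_rule_month(freq, default="DEC"):
    """Return the starting month encoded in a frequency string.

    Mirrors the documented behavior of tslib._get_rule_month: 'D' has
    no month anchor so the default applies, while 'A-JAN' is anchored
    to January.  Simplified sketch, not the pandas implementation.
    """
    # An offset object would expose .freqstr; plain strings pass through.
    freq = getattr(freq, "freqstr", freq).upper()
    if "-" not in freq:
        return default
    return freq.split("-")[1]
```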
| https://api.github.com/repos/pandas-dev/pandas/pulls/7832 | 2014-07-24T13:47:39Z | 2015-09-03T14:04:18Z | 2015-09-03T14:04:18Z | 2015-09-03T14:14:34Z |
CI: fix typos in readme | diff --git a/ci/README.txt b/ci/README.txt
index f69fc832fde85..bb71dc25d6093 100644
--- a/ci/README.txt
+++ b/ci/README.txt
@@ -1,15 +1,15 @@
-Travis is a ci service that's well-integrated with github.
-The following ypes of breakage should be detected
-by travis builds:
+Travis is a ci service that's well-integrated with GitHub.
+The following types of breakage should be detected
+by Travis builds:
-1) Failing tests on any supported version of python.
+1) Failing tests on any supported version of Python.
2) Pandas should install and the tests should run if no optional deps are installed.
That also means tests which rely on optional deps need to raise SkipTest()
if the dep is missing.
3) unicode related fails when running under exotic locales.
We tried running the vbench suite for a while, but with varying load
-on travis machines, that wasn't useful.
+on Travis machines, that wasn't useful.
Travis currently (4/2013) has a 5-job concurrency limit. Exceeding it
basically doubles the total runtime for a commit through travis, and
| This readme file had some problems, so I cleaned it up a bit.
| https://api.github.com/repos/pandas-dev/pandas/pulls/7831 | 2014-07-24T13:33:18Z | 2014-07-24T17:28:15Z | 2014-07-24T17:28:15Z | 2014-07-24T17:28:19Z |
BUG: Bug in passing a DatetimeIndex with a timezone that was not being retained in Frame construction (GH7822) | diff --git a/doc/source/v0.15.0.txt b/doc/source/v0.15.0.txt
index 5e0af498557f2..e8daf41764a70 100644
--- a/doc/source/v0.15.0.txt
+++ b/doc/source/v0.15.0.txt
@@ -58,6 +58,27 @@ API changes
rolling_min(s, window=10, min_periods=5)
+- Bug in passing a ``DatetimeIndex`` with a timezone that was not being retained in DataFrame construction from a dict (:issue:`7822`)
+
+ In prior versions this would drop the timezone.
+
+ .. ipython:: python
+
+ i = date_range('1/1/2011', periods=3, freq='10s', tz = 'US/Eastern')
+ i
+ df = DataFrame( {'a' : i } )
+ df
+ df.dtypes
+
+ This behavior is unchanged.
+
+ .. ipython:: python
+
+ df = DataFrame( )
+ df['a'] = i
+ df
+ df.dtypes
+
.. _whatsnew_0150.cat:
Categoricals in Series/DataFrame
diff --git a/pandas/core/frame.py b/pandas/core/frame.py
index 7b005867a404f..636dedfbeb7b7 100644
--- a/pandas/core/frame.py
+++ b/pandas/core/frame.py
@@ -2146,19 +2146,13 @@ def reindexer(value):
value = value.copy()
elif (isinstance(value, Index) or _is_sequence(value)):
- if len(value) != len(self.index):
- raise ValueError('Length of values does not match length of '
- 'index')
-
+ from pandas.core.series import _sanitize_index
+ value = _sanitize_index(value, self.index, copy=False)
if not isinstance(value, (np.ndarray, Index)):
if isinstance(value, list) and len(value) > 0:
value = com._possibly_convert_platform(value)
else:
value = com._asarray_tuplesafe(value)
- elif isinstance(value, PeriodIndex):
- value = value.asobject
- elif isinstance(value, DatetimeIndex):
- value = value._to_embed(keep_tz=True).copy()
elif value.ndim == 2:
value = value.copy().T
else:
diff --git a/pandas/core/series.py b/pandas/core/series.py
index 9abc8f22009b3..502c01ce6d1d1 100644
--- a/pandas/core/series.py
+++ b/pandas/core/series.py
@@ -2431,8 +2431,26 @@ def remove_na(series):
return series[notnull(_values_from_object(series))]
+def _sanitize_index(data, index, copy=False):
+ """ sanitize an index type to return an ndarray of the underlying, pass thru a non-Index """
+
+ if len(data) != len(index):
+ raise ValueError('Length of values does not match length of '
+ 'index')
+
+ if isinstance(data, PeriodIndex):
+ data = data.asobject
+ elif isinstance(data, DatetimeIndex):
+ data = data._to_embed(keep_tz=True)
+ if copy:
+ data = data.copy()
+
+ return data
+
def _sanitize_array(data, index, dtype=None, copy=False,
raise_cast_failure=False):
+ """ sanitize input data to an ndarray, copy if specified, coerce to the dtype if specified """
+
if dtype is not None:
dtype = np.dtype(dtype)
@@ -2482,11 +2500,13 @@ def _try_cast(arr, take_fast_path):
raise TypeError('Cannot cast datetime64 to %s' % dtype)
else:
subarr = _try_cast(data, True)
- else:
+ elif isinstance(data, Index):
# don't coerce Index types
# e.g. indexes can have different conversions (so don't fast path them)
# GH 6140
- subarr = _try_cast(data, not isinstance(data, Index))
+ subarr = _sanitize_index(data, index, copy=True)
+ else:
+ subarr = _try_cast(data, True)
if copy:
subarr = data.copy()
diff --git a/pandas/tests/test_frame.py b/pandas/tests/test_frame.py
index c4783bc49f0ce..0dd729d58f174 100644
--- a/pandas/tests/test_frame.py
+++ b/pandas/tests/test_frame.py
@@ -3599,6 +3599,23 @@ def test_constructor_with_datetimes(self):
self.assertEqual(df.iat[0,0],dt)
assert_series_equal(df.dtypes,Series({'End Date' : np.dtype('object') }))
+ # GH 7822
+        # preserve an index with a tz on dict construction
+ i = date_range('1/1/2011', periods=5, freq='10s', tz = 'US/Eastern')
+
+ expected = DataFrame( {'a' : i.to_series(keep_tz=True).reset_index(drop=True) })
+ df = DataFrame()
+ df['a'] = i
+ assert_frame_equal(df, expected)
+ df = DataFrame( {'a' : i } )
+ assert_frame_equal(df, expected)
+
+ # multiples
+ i_no_tz = date_range('1/1/2011', periods=5, freq='10s')
+ df = DataFrame( {'a' : i, 'b' : i_no_tz } )
+ expected = DataFrame( {'a' : i.to_series(keep_tz=True).reset_index(drop=True), 'b': i_no_tz })
+ assert_frame_equal(df, expected)
+
def test_constructor_for_list_with_dtypes(self):
intname = np.dtype(np.int_).name
floatname = np.dtype(np.float_).name
diff --git a/pandas/tseries/tests/test_timezones.py b/pandas/tseries/tests/test_timezones.py
index 531724cdb6837..21f915cb50e21 100644
--- a/pandas/tseries/tests/test_timezones.py
+++ b/pandas/tseries/tests/test_timezones.py
@@ -598,10 +598,13 @@ def test_to_datetime_tzlocal(self):
def test_frame_no_datetime64_dtype(self):
+ # after 7822
+ # these retain the timezones on dict construction
+
dr = date_range('2011/1/1', '2012/1/1', freq='W-FRI')
dr_tz = dr.tz_localize(self.tzstr('US/Eastern'))
e = DataFrame({'A': 'foo', 'B': dr_tz}, index=dr)
- self.assertEqual(e['B'].dtype, 'M8[ns]')
+ self.assertEqual(e['B'].dtype, 'O')
# GH 2810 (with timezones)
datetimes_naive = [ ts.to_pydatetime() for ts in dr ]
@@ -610,7 +613,7 @@ def test_frame_no_datetime64_dtype(self):
'datetimes_naive': datetimes_naive,
'datetimes_with_tz' : datetimes_with_tz })
result = df.get_dtype_counts()
- expected = Series({ 'datetime64[ns]' : 3, 'object' : 1 })
+ expected = Series({ 'datetime64[ns]' : 2, 'object' : 2 })
tm.assert_series_equal(result, expected)
def test_hongkong_tz_convert(self):
| closes #7822
| https://api.github.com/repos/pandas-dev/pandas/pulls/7823 | 2014-07-23T15:45:55Z | 2014-07-23T17:01:23Z | 2014-07-23T17:01:23Z | 2014-07-23T17:01:23Z |
BUG: Fixed failure in StataReader when reading variable labels in 117 | diff --git a/doc/source/v0.15.0.txt b/doc/source/v0.15.0.txt
index 06c93541a7783..2322af4752e2e 100644
--- a/doc/source/v0.15.0.txt
+++ b/doc/source/v0.15.0.txt
@@ -212,7 +212,7 @@ Bug Fixes
- Bug in ``DataFrame.plot`` with ``subplots=True`` may draw unnecessary minor xticks and yticks (:issue:`7801`)
-
+- Bug in ``StataReader`` which did not read variable labels in 117 files due to difference between Stata documentation and implementation (:issue:`7816`)
diff --git a/pandas/io/stata.py b/pandas/io/stata.py
index 48a5f5ee6c994..3458a95ac096d 100644
--- a/pandas/io/stata.py
+++ b/pandas/io/stata.py
@@ -520,8 +520,15 @@ def _read_header(self):
self.byteorder + 'q', self.path_or_buf.read(8))[0] + 9
seek_value_label_names = struct.unpack(
self.byteorder + 'q', self.path_or_buf.read(8))[0] + 19
- seek_variable_labels = struct.unpack(
- self.byteorder + 'q', self.path_or_buf.read(8))[0] + 17
+ # Stata 117 data files do not follow the described format. This is
+ # a work around that uses the previous label, 33 bytes for each
+ # variable, 20 for the closing tag and 17 for the opening tag
+ self.path_or_buf.read(8) # <variable_lables>, throw away
+ seek_variable_labels = seek_value_label_names + (33*self.nvar) + 20 + 17
+ # Below is the original, correct code (per Stata sta format doc,
+ # although this is not followed in actual 117 dtas)
+ #seek_variable_labels = struct.unpack(
+ # self.byteorder + 'q', self.path_or_buf.read(8))[0] + 17
self.path_or_buf.read(8) # <characteristics>
self.data_location = struct.unpack(
self.byteorder + 'q', self.path_or_buf.read(8))[0] + 6
diff --git a/pandas/io/tests/data/stata7_115.dta b/pandas/io/tests/data/stata7_115.dta
new file mode 100644
index 0000000000000..133713b201ba8
Binary files /dev/null and b/pandas/io/tests/data/stata7_115.dta differ
diff --git a/pandas/io/tests/data/stata7_117.dta b/pandas/io/tests/data/stata7_117.dta
new file mode 100644
index 0000000000000..c001478fc902d
Binary files /dev/null and b/pandas/io/tests/data/stata7_117.dta differ
diff --git a/pandas/io/tests/test_stata.py b/pandas/io/tests/test_stata.py
index 435226bc4313f..5271604235922 100644
--- a/pandas/io/tests/test_stata.py
+++ b/pandas/io/tests/test_stata.py
@@ -68,6 +68,9 @@ def setUp(self):
self.dta15_115 = os.path.join(self.dirpath, 'stata6_115.dta')
self.dta15_117 = os.path.join(self.dirpath, 'stata6_117.dta')
+ self.dta16_115 = os.path.join(self.dirpath, 'stata7_115.dta')
+ self.dta16_117 = os.path.join(self.dirpath, 'stata7_117.dta')
+
def read_dta(self, file):
return read_stata(file, convert_dates=True)
@@ -199,7 +202,7 @@ def test_read_dta4(self):
'labeled_with_missings', 'float_labelled'])
# these are all categoricals
- expected = pd.concat([ Series(pd.Categorical(value)) for col, value in expected.iteritems() ],axis=1)
+ expected = pd.concat([ Series(pd.Categorical(value)) for col, value in compat.iteritems(expected)],axis=1)
tm.assert_frame_equal(parsed_113, expected)
tm.assert_frame_equal(parsed_114, expected)
@@ -551,6 +554,18 @@ def test_bool_uint(self):
written_and_read_again = written_and_read_again.set_index('index')
tm.assert_frame_equal(written_and_read_again, expected)
+ def test_variable_labels(self):
+ sr_115 = StataReader(self.dta16_115).variable_labels()
+ sr_117 = StataReader(self.dta16_117).variable_labels()
+ keys = ('var1', 'var2', 'var3')
+ labels = ('label1', 'label2', 'label3')
+ for k,v in compat.iteritems(sr_115):
+ self.assertTrue(k in sr_117)
+ self.assertTrue(v == sr_117[k])
+ self.assertTrue(k in keys)
+ self.assertTrue(v in labels)
+
+
if __name__ == '__main__':
nose.runmodule(argv=[__file__, '-vvs', '-x', '--pdb', '--pdb-failure'],
exit=False)
| Stata's implementation does not match the online dta file format description.
The solution used here is to directly compute the offset rather than reading
it from the dta file. If Stata fixes their implementation, the original code
can be restored.
closes #7816
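The workaround in the patch above replaces the unreliable stored pointer with a computed offset. A sketch of that arithmetic (helper name is illustrative, constants taken from the diff: 33 bytes per variable-name entry, 20 bytes for the closing tag, 17 for the opening tag):

```python
def variable_labels_offset(seek_value_label_names, nvar):
    """Compute where the variable-labels section starts in a Stata 117
    file, per the workaround above: rather than trusting the pointer
    stored in the file, derive the offset from the previous section.

    33 bytes per variable-name entry, 20 bytes for the closing
    </value_label_names> tag, 17 bytes for the opening <variable_labels> tag.
    """
    return seek_value_label_names + 33 * nvar + 20 + 17
```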
| https://api.github.com/repos/pandas-dev/pandas/pulls/7818 | 2014-07-22T17:24:49Z | 2014-07-23T13:40:44Z | 2014-07-23T13:40:44Z | 2014-08-20T15:32:51Z |
TST: add tests for GH 6572 | diff --git a/pandas/tseries/tests/test_timeseries.py b/pandas/tseries/tests/test_timeseries.py
index 1614261542733..b6761426edc5d 100644
--- a/pandas/tseries/tests/test_timeseries.py
+++ b/pandas/tseries/tests/test_timeseries.py
@@ -2181,6 +2181,29 @@ def test_constructor_coverage(self):
end='2011-01-01', freq='B')
self.assertRaises(ValueError, DatetimeIndex, periods=10, freq='D')
+ def test_constructor_datetime64_tzformat(self):
+ # GH 6572
+ tm._skip_if_no_pytz()
+ tm._skip_if_no_dateutil()
+ from dateutil.tz import tzoffset
+ for freq in ['AS', 'W-SUN']:
+ idx = date_range('2013-01-01T00:00:00-05:00', '2016-01-01T23:59:59-05:00', freq=freq)
+ expected = date_range('2013-01-01T00:00:00', '2016-01-01T23:59:59',
+ freq=freq, tz=tzoffset(None, -18000))
+ tm.assert_index_equal(idx, expected)
+ # Unable to use `US/Eastern` because of DST
+ expected_i8 = date_range('2013-01-01T00:00:00', '2016-01-01T23:59:59',
+ freq=freq, tz='America/Lima')
+ self.assert_numpy_array_equal(idx.asi8, expected_i8.asi8)
+
+ idx = date_range('2013-01-01T00:00:00+09:00', '2016-01-01T23:59:59+09:00', freq=freq)
+ expected = date_range('2013-01-01T00:00:00', '2016-01-01T23:59:59',
+ freq=freq, tz=tzoffset(None, 32400))
+ tm.assert_index_equal(idx, expected)
+ expected_i8 = date_range('2013-01-01T00:00:00', '2016-01-01T23:59:59',
+ freq=freq, tz='Asia/Tokyo')
+ self.assert_numpy_array_equal(idx.asi8, expected_i8.asi8)
+
def test_constructor_name(self):
idx = DatetimeIndex(start='2000-01-01', periods=1, freq='A',
name='TEST')
| Closes #6572. It seems to be fixed by #7465 as a side effect.
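The tests added here check that a timestamp string carrying a numeric UTC offset is parsed into a fixed-offset timezone (dateutil's `tzoffset(None, -18000)`). The same idea can be sketched with the standard library alone (not pandas code):

```python
from datetime import datetime, timezone, timedelta

# Parse a timestamp with a fixed -05:00 offset, analogous to the
# '2013-01-01T00:00:00-05:00' strings used in the test above.
ts = datetime.fromisoformat("2013-01-01T00:00:00-05:00")

# The parsed value carries a fixed UTC offset of -5 hours,
# equivalent to dateutil's tzoffset(None, -18000) seconds.
assert ts.utcoffset() == timedelta(hours=-5)

# Converting to UTC shifts the wall clock forward by five hours.
assert ts.astimezone(timezone.utc).hour == 5
```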
| https://api.github.com/repos/pandas-dev/pandas/pulls/7810 | 2014-07-20T14:52:20Z | 2014-07-21T11:43:55Z | 2014-07-21T11:43:55Z | 2014-07-23T11:08:16Z |
ENH/CLN: add HistPlot class inheriting MPLPlot | diff --git a/doc/source/v0.15.0.txt b/doc/source/v0.15.0.txt
index e842b73664e6c..06dba0979c7eb 100644
--- a/doc/source/v0.15.0.txt
+++ b/doc/source/v0.15.0.txt
@@ -168,6 +168,8 @@ previously results in ``Exception`` or ``TypeError`` (:issue:`7812`)
- ``Timestamp.__repr__`` displays ``dateutil.tz.tzoffset`` info (:issue:`7907`)
+- Histogram from ``DataFrame.plot`` with ``kind='hist'`` (:issue:`7809`), See :ref:`the docs<visualization.hist>`.
+
.. _whatsnew_0150.dt:
.dt accessor
diff --git a/doc/source/visualization.rst b/doc/source/visualization.rst
index 69e04483cb47d..40b5d7c1599c1 100644
--- a/doc/source/visualization.rst
+++ b/doc/source/visualization.rst
@@ -123,6 +123,7 @@ a handful of values for plots other than the default Line plot.
These include:
* :ref:`'bar' <visualization.barplot>` or :ref:`'barh' <visualization.barplot>` for bar plots
+* :ref:`'hist' <visualization.hist>` for histogram
* :ref:`'kde' <visualization.kde>` or ``'density'`` for density plots
* :ref:`'area' <visualization.area_plot>` for area plots
* :ref:`'scatter' <visualization.scatter_matrix>` for scatter plots
@@ -205,6 +206,46 @@ To get horizontal bar plots, pass ``kind='barh'``:
Histograms
~~~~~~~~~~
+
+.. versionadded:: 0.15.0
+
+Histogram can be drawn specifying ``kind='hist'``.
+
+.. ipython:: python
+
+ df4 = DataFrame({'a': randn(1000) + 1, 'b': randn(1000),
+ 'c': randn(1000) - 1}, columns=['a', 'b', 'c'])
+
+ plt.figure();
+
+ @savefig hist_new.png
+ df4.plot(kind='hist', alpha=0.5)
+
+Histogram can be stacked by ``stacked=True``. Bin size can be changed by ``bins`` keyword.
+
+.. ipython:: python
+
+ plt.figure();
+
+ @savefig hist_new_stacked.png
+ df4.plot(kind='hist', stacked=True, bins=20)
+
+You can pass other keywords supported by matplotlib ``hist``. For example, horizontal and cumulative histograms can be drawn by ``orientation='horizontal'`` and ``cumulative='True'``.
+
+.. ipython:: python
+
+ plt.figure();
+
+ @savefig hist_new_kwargs.png
+ df4['a'].plot(kind='hist', orientation='horizontal', cumulative=True)
+
+
+See the :meth:`hist <matplotlib.axes.Axes.hist>` method and the
+`matplotlib hist documentation <http://matplotlib.org/api/pyplot_api.html#matplotlib.pyplot.hist>`__ for more.
+
+
+The previous interface ``DataFrame.hist`` to plot histogram still can be used.
+
.. ipython:: python
plt.figure();
diff --git a/pandas/tests/test_graphics.py b/pandas/tests/test_graphics.py
index 8dbcb8c542fb3..b3a92263370e8 100644
--- a/pandas/tests/test_graphics.py
+++ b/pandas/tests/test_graphics.py
@@ -452,7 +452,7 @@ def test_plot(self):
_check_plot_works(self.ts.plot, kind='area', stacked=False)
_check_plot_works(self.iseries.plot)
- for kind in ['line', 'bar', 'barh', 'kde']:
+ for kind in ['line', 'bar', 'barh', 'kde', 'hist']:
if not _ok_for_gaussian_kde(kind):
continue
_check_plot_works(self.series[:5].plot, kind=kind)
@@ -616,7 +616,13 @@ def test_pie_series(self):
self._check_text_labels(ax.texts, series.index)
@slow
- def test_hist(self):
+ def test_hist_df_kwargs(self):
+ df = DataFrame(np.random.randn(10, 2))
+ ax = df.plot(kind='hist', bins=5)
+ self.assertEqual(len(ax.patches), 10)
+
+ @slow
+ def test_hist_legacy(self):
_check_plot_works(self.ts.hist)
_check_plot_works(self.ts.hist, grid=False)
_check_plot_works(self.ts.hist, figsize=(8, 10))
@@ -637,7 +643,7 @@ def test_hist(self):
self.ts.hist(by=self.ts.index, figure=fig)
@slow
- def test_hist_bins(self):
+ def test_hist_bins_legacy(self):
df = DataFrame(np.random.randn(10, 2))
ax = df.hist(bins=2)[0][0]
self.assertEqual(len(ax.patches), 2)
@@ -701,13 +707,25 @@ def test_plot_fails_when_ax_differs_from_figure(self):
self.ts.hist(ax=ax1, figure=fig2)
@slow
- def test_kde(self):
+ def test_hist_kde(self):
+ ax = self.ts.plot(kind='hist', logy=True)
+ self._check_ax_scales(ax, yaxis='log')
+ xlabels = ax.get_xticklabels()
+ # ticks are values, thus ticklabels are blank
+ self._check_text_labels(xlabels, [''] * len(xlabels))
+ ylabels = ax.get_yticklabels()
+ self._check_text_labels(ylabels, [''] * len(ylabels))
+
tm._skip_if_no_scipy()
_skip_if_no_scipy_gaussian_kde()
_check_plot_works(self.ts.plot, kind='kde')
_check_plot_works(self.ts.plot, kind='density')
ax = self.ts.plot(kind='kde', logy=True)
self._check_ax_scales(ax, yaxis='log')
+ xlabels = ax.get_xticklabels()
+ self._check_text_labels(xlabels, [''] * len(xlabels))
+ ylabels = ax.get_yticklabels()
+ self._check_text_labels(ylabels, [''] * len(ylabels))
@slow
def test_kde_kwargs(self):
@@ -718,9 +736,29 @@ def test_kde_kwargs(self):
_check_plot_works(self.ts.plot, kind='density', bw_method=.5, ind=linspace(-100,100,20))
ax = self.ts.plot(kind='kde', logy=True, bw_method=.5, ind=linspace(-100,100,20))
self._check_ax_scales(ax, yaxis='log')
+ self._check_text_labels(ax.yaxis.get_label(), 'Density')
@slow
- def test_kde_color(self):
+ def test_hist_kwargs(self):
+ ax = self.ts.plot(kind='hist', bins=5)
+ self.assertEqual(len(ax.patches), 5)
+ self._check_text_labels(ax.yaxis.get_label(), 'Degree')
+ tm.close()
+
+ ax = self.ts.plot(kind='hist', orientation='horizontal')
+ self._check_text_labels(ax.xaxis.get_label(), 'Degree')
+ tm.close()
+
+ ax = self.ts.plot(kind='hist', align='left', stacked=True)
+ tm.close()
+
+ @slow
+ def test_hist_kde_color(self):
+ ax = self.ts.plot(kind='hist', logy=True, bins=10, color='b')
+ self._check_ax_scales(ax, yaxis='log')
+ self.assertEqual(len(ax.patches), 10)
+ self._check_colors(ax.patches, facecolors=['b'] * 10)
+
tm._skip_if_no_scipy()
_skip_if_no_scipy_gaussian_kde()
ax = self.ts.plot(kind='kde', logy=True, color='r')
@@ -1611,7 +1649,7 @@ def test_boxplot_return_type(self):
self._check_box_return_type(result, 'both')
@slow
- def test_kde(self):
+ def test_kde_df(self):
tm._skip_if_no_scipy()
_skip_if_no_scipy_gaussian_kde()
df = DataFrame(randn(100, 4))
@@ -1630,7 +1668,122 @@ def test_kde(self):
self._check_ax_scales(axes, yaxis='log')
@slow
- def test_hist(self):
+ def test_hist_df(self):
+ df = DataFrame(randn(100, 4))
+ series = df[0]
+
+ ax = _check_plot_works(df.plot, kind='hist')
+ expected = [com.pprint_thing(c) for c in df.columns]
+ self._check_legend_labels(ax, labels=expected)
+
+ axes = _check_plot_works(df.plot, kind='hist', subplots=True, logy=True)
+ self._check_axes_shape(axes, axes_num=4, layout=(4, 1))
+ self._check_ax_scales(axes, yaxis='log')
+
+ axes = series.plot(kind='hist', rot=40)
+ self._check_ticks_props(axes, xrot=40, yrot=0)
+ tm.close()
+
+ ax = series.plot(kind='hist', normed=True, cumulative=True, bins=4)
+ # height of last bin (index 5) must be 1.0
+ self.assertAlmostEqual(ax.get_children()[5].get_height(), 1.0)
+ tm.close()
+
+ ax = series.plot(kind='hist', cumulative=True, bins=4)
+ self.assertAlmostEqual(ax.get_children()[5].get_height(), 100.0)
+ tm.close()
+
+ # if horizontal, yticklabels are rotated
+ axes = df.plot(kind='hist', rot=50, fontsize=8, orientation='horizontal')
+ self._check_ticks_props(axes, xrot=0, yrot=50, ylabelsize=8)
+
+ def _check_box_coord(self, patches, expected_y=None, expected_h=None,
+ expected_x=None, expected_w=None):
+ result_y = np.array([p.get_y() for p in patches])
+ result_height = np.array([p.get_height() for p in patches])
+ result_x = np.array([p.get_x() for p in patches])
+ result_width = np.array([p.get_width() for p in patches])
+
+ if expected_y is not None:
+ self.assert_numpy_array_equal(result_y, expected_y)
+ if expected_h is not None:
+ self.assert_numpy_array_equal(result_height, expected_h)
+ if expected_x is not None:
+ self.assert_numpy_array_equal(result_x, expected_x)
+ if expected_w is not None:
+ self.assert_numpy_array_equal(result_width, expected_w)
+
+ @slow
+ def test_hist_df_coord(self):
+ normal_df = DataFrame({'A': np.repeat(np.array([1, 2, 3, 4, 5]),
+ np.array([10, 9, 8, 7, 6])),
+ 'B': np.repeat(np.array([1, 2, 3, 4, 5]),
+ np.array([8, 8, 8, 8, 8])),
+ 'C': np.repeat(np.array([1, 2, 3, 4, 5]),
+ np.array([6, 7, 8, 9, 10]))},
+ columns=['A', 'B', 'C'])
+
+ nan_df = DataFrame({'A': np.repeat(np.array([np.nan, 1, 2, 3, 4, 5]),
+ np.array([3, 10, 9, 8, 7, 6])),
+ 'B': np.repeat(np.array([1, np.nan, 2, 3, 4, 5]),
+ np.array([8, 3, 8, 8, 8, 8])),
+ 'C': np.repeat(np.array([1, 2, 3, np.nan, 4, 5]),
+ np.array([6, 7, 8, 3, 9, 10]))},
+ columns=['A', 'B', 'C'])
+
+ for df in [normal_df, nan_df]:
+ ax = df.plot(kind='hist', bins=5)
+ self._check_box_coord(ax.patches[:5], expected_y=np.array([0, 0, 0, 0, 0]),
+ expected_h=np.array([10, 9, 8, 7, 6]))
+ self._check_box_coord(ax.patches[5:10], expected_y=np.array([0, 0, 0, 0, 0]),
+ expected_h=np.array([8, 8, 8, 8, 8]))
+ self._check_box_coord(ax.patches[10:], expected_y=np.array([0, 0, 0, 0, 0]),
+ expected_h=np.array([6, 7, 8, 9, 10]))
+
+ ax = df.plot(kind='hist', bins=5, stacked=True)
+ self._check_box_coord(ax.patches[:5], expected_y=np.array([0, 0, 0, 0, 0]),
+ expected_h=np.array([10, 9, 8, 7, 6]))
+ self._check_box_coord(ax.patches[5:10], expected_y=np.array([10, 9, 8, 7, 6]),
+ expected_h=np.array([8, 8, 8, 8, 8]))
+ self._check_box_coord(ax.patches[10:], expected_y=np.array([18, 17, 16, 15, 14]),
+ expected_h=np.array([6, 7, 8, 9, 10]))
+
+ axes = df.plot(kind='hist', bins=5, stacked=True, subplots=True)
+ self._check_box_coord(axes[0].patches, expected_y=np.array([0, 0, 0, 0, 0]),
+ expected_h=np.array([10, 9, 8, 7, 6]))
+ self._check_box_coord(axes[1].patches, expected_y=np.array([0, 0, 0, 0, 0]),
+ expected_h=np.array([8, 8, 8, 8, 8]))
+ self._check_box_coord(axes[2].patches, expected_y=np.array([0, 0, 0, 0, 0]),
+ expected_h=np.array([6, 7, 8, 9, 10]))
+
+ # horizontal
+ ax = df.plot(kind='hist', bins=5, orientation='horizontal')
+ self._check_box_coord(ax.patches[:5], expected_x=np.array([0, 0, 0, 0, 0]),
+ expected_w=np.array([10, 9, 8, 7, 6]))
+ self._check_box_coord(ax.patches[5:10], expected_x=np.array([0, 0, 0, 0, 0]),
+ expected_w=np.array([8, 8, 8, 8, 8]))
+ self._check_box_coord(ax.patches[10:], expected_x=np.array([0, 0, 0, 0, 0]),
+ expected_w=np.array([6, 7, 8, 9, 10]))
+
+ ax = df.plot(kind='hist', bins=5, stacked=True, orientation='horizontal')
+ self._check_box_coord(ax.patches[:5], expected_x=np.array([0, 0, 0, 0, 0]),
+ expected_w=np.array([10, 9, 8, 7, 6]))
+ self._check_box_coord(ax.patches[5:10], expected_x=np.array([10, 9, 8, 7, 6]),
+ expected_w=np.array([8, 8, 8, 8, 8]))
+ self._check_box_coord(ax.patches[10:], expected_x=np.array([18, 17, 16, 15, 14]),
+ expected_w=np.array([6, 7, 8, 9, 10]))
+
+ axes = df.plot(kind='hist', bins=5, stacked=True,
+ subplots=True, orientation='horizontal')
+ self._check_box_coord(axes[0].patches, expected_x=np.array([0, 0, 0, 0, 0]),
+ expected_w=np.array([10, 9, 8, 7, 6]))
+ self._check_box_coord(axes[1].patches, expected_x=np.array([0, 0, 0, 0, 0]),
+ expected_w=np.array([8, 8, 8, 8, 8]))
+ self._check_box_coord(axes[2].patches, expected_x=np.array([0, 0, 0, 0, 0]),
+ expected_w=np.array([6, 7, 8, 9, 10]))
+
+ @slow
+ def test_hist_df_legacy(self):
_check_plot_works(self.hist_df.hist)
# make sure layout is handled
@@ -1849,7 +2002,7 @@ def test_plot_int_columns(self):
@slow
def test_df_legend_labels(self):
- kinds = 'line', 'bar', 'barh', 'kde', 'area'
+ kinds = ['line', 'bar', 'barh', 'kde', 'area', 'hist']
df = DataFrame(rand(3, 3), columns=['a', 'b', 'c'])
df2 = DataFrame(rand(3, 3), columns=['d', 'e', 'f'])
df3 = DataFrame(rand(3, 3), columns=['g', 'h', 'i'])
@@ -1927,7 +2080,7 @@ def test_legend_name(self):
@slow
def test_no_legend(self):
- kinds = 'line', 'bar', 'barh', 'kde', 'area'
+ kinds = ['line', 'bar', 'barh', 'kde', 'area', 'hist']
df = DataFrame(rand(3, 3), columns=['a', 'b', 'c'])
for kind in kinds:
@@ -2019,6 +2172,56 @@ def test_area_colors(self):
poly = [o for o in ax.get_children() if isinstance(o, PolyCollection)]
self._check_colors(poly, facecolors=rgba_colors)
+ @slow
+ def test_hist_colors(self):
+ default_colors = self.plt.rcParams.get('axes.color_cycle')
+
+ df = DataFrame(randn(5, 5))
+ ax = df.plot(kind='hist')
+ self._check_colors(ax.patches[::10], facecolors=default_colors[:5])
+ tm.close()
+
+ custom_colors = 'rgcby'
+ ax = df.plot(kind='hist', color=custom_colors)
+ self._check_colors(ax.patches[::10], facecolors=custom_colors)
+ tm.close()
+
+ from matplotlib import cm
+ # Test str -> colormap functionality
+ ax = df.plot(kind='hist', colormap='jet')
+ rgba_colors = lmap(cm.jet, np.linspace(0, 1, 5))
+ self._check_colors(ax.patches[::10], facecolors=rgba_colors)
+ tm.close()
+
+ # Test colormap functionality
+ ax = df.plot(kind='hist', colormap=cm.jet)
+ rgba_colors = lmap(cm.jet, np.linspace(0, 1, 5))
+ self._check_colors(ax.patches[::10], facecolors=rgba_colors)
+ tm.close()
+
+ ax = df.ix[:, [0]].plot(kind='hist', color='DodgerBlue')
+ self._check_colors([ax.patches[0]], facecolors=['DodgerBlue'])
+
+ @slow
+ def test_kde_colors(self):
+ from matplotlib import cm
+
+ custom_colors = 'rgcby'
+ df = DataFrame(rand(5, 5))
+
+ ax = df.plot(kind='kde', color=custom_colors)
+ self._check_colors(ax.get_lines(), linecolors=custom_colors)
+ tm.close()
+
+ ax = df.plot(kind='kde', colormap='jet')
+ rgba_colors = lmap(cm.jet, np.linspace(0, 1, len(df)))
+ self._check_colors(ax.get_lines(), linecolors=rgba_colors)
+ tm.close()
+
+ ax = df.plot(kind='kde', colormap=cm.jet)
+ rgba_colors = lmap(cm.jet, np.linspace(0, 1, len(df)))
+ self._check_colors(ax.get_lines(), linecolors=rgba_colors)
+
def test_default_color_cycle(self):
import matplotlib.pyplot as plt
plt.rcParams['axes.color_cycle'] = list('rgbk')
diff --git a/pandas/tools/plotting.py b/pandas/tools/plotting.py
index 5d85b68234f96..7d0eaea5b36d6 100644
--- a/pandas/tools/plotting.py
+++ b/pandas/tools/plotting.py
@@ -1359,58 +1359,6 @@ def _get_errorbars(self, label=None, index=None, xerr=True, yerr=True):
return errors
-class KdePlot(MPLPlot):
- orientation = 'vertical'
-
- def __init__(self, data, bw_method=None, ind=None, **kwargs):
- MPLPlot.__init__(self, data, **kwargs)
- self.bw_method=bw_method
- self.ind=ind
-
- def _make_plot(self):
- from scipy.stats import gaussian_kde
- from scipy import __version__ as spv
- from distutils.version import LooseVersion
- plotf = self.plt.Axes.plot
- colors = self._get_colors()
- for i, (label, y) in enumerate(self._iter_data()):
- ax = self._get_ax(i)
- style = self._get_style(i, label)
-
- label = com.pprint_thing(label)
-
- if LooseVersion(spv) >= '0.11.0':
- gkde = gaussian_kde(y, bw_method=self.bw_method)
- else:
- gkde = gaussian_kde(y)
- if self.bw_method is not None:
- msg = ('bw_method was added in Scipy 0.11.0.' +
- ' Scipy version in use is %s.' % spv)
- warnings.warn(msg)
-
- sample_range = max(y) - min(y)
-
- if self.ind is None:
- ind = np.linspace(min(y) - 0.5 * sample_range,
- max(y) + 0.5 * sample_range, 1000)
- else:
- ind = self.ind
-
- ax.set_ylabel("Density")
-
- y = gkde.evaluate(ind)
- kwds = self.kwds.copy()
- kwds['label'] = label
- self._maybe_add_color(colors, kwds, style, i)
- if style is None:
- args = (ax, ind, y)
- else:
- args = (ax, ind, y, style)
-
- newlines = plotf(*args, **kwds)
- self._add_legend_handle(newlines[0], label)
-
-
class ScatterPlot(MPLPlot):
def __init__(self, data, x, y, **kwargs):
MPLPlot.__init__(self, data, **kwargs)
@@ -1903,6 +1851,119 @@ def orientation(self):
raise NotImplementedError(self.kind)
+class HistPlot(LinePlot):
+
+ def __init__(self, data, bins=10, bottom=0, **kwargs):
+ self.bins = bins # use mpl default
+ self.bottom = bottom
+ # Do not call LinePlot.__init__ which may fill nan
+ MPLPlot.__init__(self, data, **kwargs)
+
+ def _args_adjust(self):
+ if com.is_integer(self.bins):
+ # create common bin edge
+ values = np.ravel(self.data.values)
+ values = values[~com.isnull(values)]
+
+ hist, self.bins = np.histogram(values, bins=self.bins,
+ range=self.kwds.get('range', None),
+ weights=self.kwds.get('weights', None))
+
+ if com.is_list_like(self.bottom):
+ self.bottom = np.array(self.bottom)
+
+ def _get_plot_function(self):
+ def plotf(ax, y, style=None, column_num=None, **kwds):
+ if column_num == 0:
+ self._initialize_prior(len(self.bins) - 1)
+ y = y[~com.isnull(y)]
+ bottom = self._pos_prior + self.bottom
+ # ignore style
+ n, bins, patches = self.plt.Axes.hist(ax, y, bins=self.bins,
+ bottom=bottom, **kwds)
+ self._update_prior(n)
+ return patches
+ return plotf
+
+ def _make_plot(self):
+ plotf = self._get_plot_function()
+ colors = self._get_colors()
+ for i, (label, y) in enumerate(self._iter_data()):
+ ax = self._get_ax(i)
+ style = self._get_style(i, label)
+ label = com.pprint_thing(label)
+
+ kwds = self.kwds.copy()
+ kwds['label'] = label
+ self._maybe_add_color(colors, kwds, style, i)
+
+ if style is not None:
+ kwds['style'] = style
+
+ artists = plotf(ax, y, column_num=i, **kwds)
+ self._add_legend_handle(artists[0], label)
+
+ def _post_plot_logic(self):
+ if self.orientation == 'horizontal':
+ for ax in self.axes:
+ ax.set_xlabel('Degree')
+ else:
+ for ax in self.axes:
+ ax.set_ylabel('Degree')
+
+ @property
+ def orientation(self):
+ if self.kwds.get('orientation', None) == 'horizontal':
+ return 'horizontal'
+ else:
+ return 'vertical'
+
+
+class KdePlot(HistPlot):
+ orientation = 'vertical'
+
+ def __init__(self, data, bw_method=None, ind=None, **kwargs):
+ MPLPlot.__init__(self, data, **kwargs)
+ self.bw_method = bw_method
+ self.ind = ind
+
+ def _args_adjust(self):
+ pass
+
+ def _get_ind(self, y):
+ if self.ind is None:
+ sample_range = max(y) - min(y)
+ ind = np.linspace(min(y) - 0.5 * sample_range,
+ max(y) + 0.5 * sample_range, 1000)
+ else:
+ ind = self.ind
+ return ind
+
+ def _get_plot_function(self):
+ from scipy.stats import gaussian_kde
+ from scipy import __version__ as spv
+ f = MPLPlot._get_plot_function(self)
+ def plotf(ax, y, style=None, column_num=None, **kwds):
+ if LooseVersion(spv) >= '0.11.0':
+ gkde = gaussian_kde(y, bw_method=self.bw_method)
+ else:
+ gkde = gaussian_kde(y)
+ if self.bw_method is not None:
+ msg = ('bw_method was added in Scipy 0.11.0.' +
+ ' Scipy version in use is %s.' % spv)
+ warnings.warn(msg)
+
+ ind = self._get_ind(y)
+ y = gkde.evaluate(ind)
+ lines = f(ax, ind, y, style=style, **kwds)
+ return lines
+ return plotf
+
+ def _post_plot_logic(self):
+ for ax in self.axes:
+ ax.set_ylabel('Density')
+
+
class PiePlot(MPLPlot):
def __init__(self, data, kind=None, **kwargs):
@@ -1964,11 +2025,8 @@ class BoxPlot(MPLPlot):
pass
-class HistPlot(MPLPlot):
- pass
-
# kinds supported by both dataframe and series
-_common_kinds = ['line', 'bar', 'barh', 'kde', 'density', 'area']
+_common_kinds = ['line', 'bar', 'barh', 'kde', 'density', 'area', 'hist']
# kinds supported by dataframe
_dataframe_kinds = ['scatter', 'hexbin']
# kinds supported only by series or dataframe single column
@@ -1976,7 +2034,7 @@ class HistPlot(MPLPlot):
_all_kinds = _common_kinds + _dataframe_kinds + _series_kinds
_plot_klass = {'line': LinePlot, 'bar': BarPlot, 'barh': BarPlot,
- 'kde': KdePlot,
+ 'kde': KdePlot, 'hist': HistPlot,
'scatter': ScatterPlot, 'hexbin': HexBinPlot,
'area': AreaPlot, 'pie': PiePlot}
@@ -2023,10 +2081,11 @@ def plot_frame(frame=None, x=None, y=None, subplots=False, sharex=True,
ax : matplotlib axis object, default None
style : list or dict
matplotlib line style per column
- kind : {'line', 'bar', 'barh', 'kde', 'density', 'area', scatter', 'hexbin'}
+ kind : {'line', 'bar', 'barh', 'hist', 'kde', 'density', 'area', 'scatter', 'hexbin'}
line : line plot
bar : vertical bar plot
barh : horizontal bar plot
+ hist : histogram
kde/density : Kernel Density Estimation plot
area : area plot
scatter : scatter plot
@@ -2170,10 +2229,11 @@ def plot_series(series, label=None, kind='line', use_index=True, rot=None,
Parameters
----------
label : label argument to provide to plot
- kind : {'line', 'bar', 'barh', 'kde', 'density', 'area'}
+ kind : {'line', 'bar', 'barh', 'hist', 'kde', 'density', 'area'}
line : line plot
bar : vertical bar plot
barh : horizontal bar plot
+ hist : histogram
kde/density : Kernel Density Estimation plot
area : area plot
use_index : boolean, default True
| Because `hist` and `boxplot` are implemented separately from the normal `plot` machinery, there are some inconsistencies between these functions. It looks better to include them in the `MPLPlot` framework.
Maybe `scatter` and `hist` can be deprecated in 0.15 if `MPLPlot` can offer a better `GroupBy` plot (planned in a separate PR).
### Example
This allows using `kind='hist'` in `DataFrame.plot` and `Series.plot`. (No changes to `DataFrame.hist`.)
```
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
df = pd.DataFrame(np.random.randn(1000, 5))
df.plot(kind='hist', subplots=True)
```
(screenshot: per-column histogram subplots)
```
df.plot(kind='hist', stacked=True)
```
(screenshot: stacked histogram)
### Remaining Items
- [x] Add a release note in API section detailing this change/enhancement
- [x] Modify doc
- [x] Add tests (both histogram and kde)
- [x] Add support for `rot` and `fontsize` (depending on `orientation` kw) (rely on #7844)
- [x] Add tests for xticklabels and yticklabels
- [x] Add tests for colors
- [x] Handling nan
- [x] Check `stacked=True` can be supported (`DataFrame.hist` doesn't support it though..).
- [x] Add tests for stacking
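The "Handling nan" item above can be checked without plotting. A minimal sketch (my illustration, not the PR's code) of the per-column null-dropping that `kind='hist'` performs before binning:

```python
import numpy as np
import pandas as pd

s = pd.Series([1.0, 2.0, np.nan, 2.5, np.nan, 3.0])

# kind='hist' bins only the non-null values, so NaN handling is
# equivalent to dropping nulls before calling np.histogram
counts, _ = np.histogram(s.dropna().values, bins=4)
print(int(counts.sum()))  # 4 -- both NaNs are excluded
```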
| https://api.github.com/repos/pandas-dev/pandas/pulls/7809 | 2014-07-20T14:35:02Z | 2014-08-11T15:30:26Z | 2014-08-11T15:30:26Z | 2014-08-12T02:00:00Z |
Docs: Panel.dropna now exists, update docs accordingly. | diff --git a/doc/source/missing_data.rst b/doc/source/missing_data.rst
index 4f4e11e39ae48..d3024daaa59c9 100644
--- a/doc/source/missing_data.rst
+++ b/doc/source/missing_data.rst
@@ -267,9 +267,8 @@ data. To do this, use the **dropna** method:
df.dropna(axis=1)
df['one'].dropna()
-**dropna** is presently only implemented for Series and DataFrame, but will be
-eventually added to Panel. Series.dropna is a simpler method as it only has one
-axis to consider. DataFrame.dropna has considerably more options, which can be
+Series.dropna is a simpler method as it only has one axis to consider.
+DataFrame.dropna has considerably more options than Series.dropna, which can be
examined :ref:`in the API <api.dataframe.missing>`.
.. _missing_data.interpolate:
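A quick sketch of the options gap the revised text describes — `DataFrame.dropna` has knobs like `how` that `Series.dropna` doesn't need (illustrative values only):

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({'one': [1.0, np.nan, 3.0],
                   'two': [np.nan, np.nan, 6.0]})

print(len(df.dropna()))           # 1 -- default drops rows with any null
print(len(df.dropna(how='all')))  # 2 -- only the all-null row is removed
print(len(df['one'].dropna()))    # 2 -- Series.dropna has only one axis to consider
```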
| https://api.github.com/repos/pandas-dev/pandas/pulls/7806 | 2014-07-19T13:57:47Z | 2014-07-23T21:30:04Z | 2014-07-23T21:30:04Z | 2014-07-23T21:30:04Z | |
Docs: Be more specific about inf/-inf no longer being treated as nulls. | diff --git a/doc/source/missing_data.rst b/doc/source/missing_data.rst
index b0319c01b2737..4f4e11e39ae48 100644
--- a/doc/source/missing_data.rst
+++ b/doc/source/missing_data.rst
@@ -68,7 +68,7 @@ detect this value with data of different types: floating point, integer,
boolean, and general object. In many cases, however, the Python ``None`` will
arise and we wish to also consider that "missing" or "null".
-Until recently, for legacy reasons ``inf`` and ``-inf`` were also
+Prior to version v0.10.0 ``inf`` and ``-inf`` were also
considered to be "null" in computations. This is no longer the case by
default; use the ``mode.use_inf_as_null`` option to recover it.
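A minimal check of the post-v0.10.0 default described above:

```python
import numpy as np
import pandas as pd

s = pd.Series([1.0, np.inf, -np.inf, np.nan])
# only NaN counts as null by default; inf/-inf do not
print(int(s.isnull().sum()))  # 1
```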
| https://api.github.com/repos/pandas-dev/pandas/pulls/7805 | 2014-07-19T12:01:20Z | 2014-07-19T13:04:21Z | 2014-07-19T13:04:21Z | 2014-07-19T14:01:15Z | |
DOC: added nanosecond frequencies to doc | diff --git a/doc/source/timeseries.rst b/doc/source/timeseries.rst
index cbfb20c6f9d7d..05fd82b2f448d 100644
--- a/doc/source/timeseries.rst
+++ b/doc/source/timeseries.rst
@@ -700,6 +700,7 @@ frequencies. We will refer to these aliases as *offset aliases*
"S", "secondly frequency"
"L", "milliseonds"
"U", "microseconds"
+ "N", "nanoseconds"
Combining Aliases
~~~~~~~~~~~~~~~~~
| https://api.github.com/repos/pandas-dev/pandas/pulls/7804 | 2014-07-19T11:00:15Z | 2014-07-19T12:06:49Z | 2014-07-19T12:06:49Z | 2014-07-19T14:01:25Z | |
BUG: is_superperiod and is_subperiod cannot handle higher freq than S | diff --git a/doc/source/v0.15.0.txt b/doc/source/v0.15.0.txt
index ca24eb3f910ed..ccea9de8bcbcf 100644
--- a/doc/source/v0.15.0.txt
+++ b/doc/source/v0.15.0.txt
@@ -192,6 +192,7 @@ Bug Fixes
+- Bug in ``is_superperiod`` and ``is_subperiod`` cannot handle higher frequencies than ``S`` (:issue:`7760`, :issue:`7772`, :issue:`7803`)
diff --git a/pandas/tseries/frequencies.py b/pandas/tseries/frequencies.py
index fe61e5f0acd9b..073f6e13047e9 100644
--- a/pandas/tseries/frequencies.py
+++ b/pandas/tseries/frequencies.py
@@ -929,25 +929,31 @@ def is_subperiod(source, target):
if _is_quarterly(source):
return _quarter_months_conform(_get_rule_month(source),
_get_rule_month(target))
- return source in ['D', 'C', 'B', 'M', 'H', 'T', 'S']
+ return source in ['D', 'C', 'B', 'M', 'H', 'T', 'S', 'L', 'U', 'N']
elif _is_quarterly(target):
- return source in ['D', 'C', 'B', 'M', 'H', 'T', 'S']
+ return source in ['D', 'C', 'B', 'M', 'H', 'T', 'S', 'L', 'U', 'N']
elif target == 'M':
- return source in ['D', 'C', 'B', 'H', 'T', 'S']
+ return source in ['D', 'C', 'B', 'H', 'T', 'S', 'L', 'U', 'N']
elif _is_weekly(target):
- return source in [target, 'D', 'C', 'B', 'H', 'T', 'S']
+ return source in [target, 'D', 'C', 'B', 'H', 'T', 'S', 'L', 'U', 'N']
elif target == 'B':
- return source in ['B', 'H', 'T', 'S']
+ return source in ['B', 'H', 'T', 'S', 'L', 'U', 'N']
elif target == 'C':
- return source in ['C', 'H', 'T', 'S']
+ return source in ['C', 'H', 'T', 'S', 'L', 'U', 'N']
elif target == 'D':
- return source in ['D', 'H', 'T', 'S']
+ return source in ['D', 'H', 'T', 'S', 'L', 'U', 'N']
elif target == 'H':
- return source in ['H', 'T', 'S']
+ return source in ['H', 'T', 'S', 'L', 'U', 'N']
elif target == 'T':
- return source in ['T', 'S']
+ return source in ['T', 'S', 'L', 'U', 'N']
elif target == 'S':
- return source in ['S']
+ return source in ['S', 'L', 'U', 'N']
+ elif target == 'L':
+ return source in ['L', 'U', 'N']
+ elif target == 'U':
+ return source in ['U', 'N']
+ elif target == 'N':
+ return source in ['N']
def is_superperiod(source, target):
@@ -982,25 +988,31 @@ def is_superperiod(source, target):
smonth = _get_rule_month(source)
tmonth = _get_rule_month(target)
return _quarter_months_conform(smonth, tmonth)
- return target in ['D', 'C', 'B', 'M', 'H', 'T', 'S']
+ return target in ['D', 'C', 'B', 'M', 'H', 'T', 'S', 'L', 'U', 'N']
elif _is_quarterly(source):
- return target in ['D', 'C', 'B', 'M', 'H', 'T', 'S']
+ return target in ['D', 'C', 'B', 'M', 'H', 'T', 'S', 'L', 'U', 'N']
elif source == 'M':
- return target in ['D', 'C', 'B', 'H', 'T', 'S']
+ return target in ['D', 'C', 'B', 'H', 'T', 'S', 'L', 'U', 'N']
elif _is_weekly(source):
- return target in [source, 'D', 'C', 'B', 'H', 'T', 'S']
+ return target in [source, 'D', 'C', 'B', 'H', 'T', 'S', 'L', 'U', 'N']
elif source == 'B':
- return target in ['D', 'C', 'B', 'H', 'T', 'S']
+ return target in ['D', 'C', 'B', 'H', 'T', 'S', 'L', 'U', 'N']
elif source == 'C':
- return target in ['D', 'C', 'B', 'H', 'T', 'S']
+ return target in ['D', 'C', 'B', 'H', 'T', 'S', 'L', 'U', 'N']
elif source == 'D':
- return target in ['D', 'C', 'B', 'H', 'T', 'S']
+ return target in ['D', 'C', 'B', 'H', 'T', 'S', 'L', 'U', 'N']
elif source == 'H':
- return target in ['H', 'T', 'S']
+ return target in ['H', 'T', 'S', 'L', 'U', 'N']
elif source == 'T':
- return target in ['T', 'S']
+ return target in ['T', 'S', 'L', 'U', 'N']
elif source == 'S':
- return target in ['S']
+ return target in ['S', 'L', 'U', 'N']
+ elif source == 'L':
+ return target in ['L', 'U', 'N']
+ elif source == 'U':
+ return target in ['U', 'N']
+ elif source == 'N':
+ return target in ['N']
def _get_rule_month(source, default='DEC'):
diff --git a/pandas/tseries/tests/test_frequencies.py b/pandas/tseries/tests/test_frequencies.py
index 37371b5828c8c..10a8286f4bec9 100644
--- a/pandas/tseries/tests/test_frequencies.py
+++ b/pandas/tseries/tests/test_frequencies.py
@@ -342,6 +342,16 @@ def test_is_superperiod_subperiod():
assert(fmod.is_superperiod(offsets.Hour(), offsets.Minute()))
assert(fmod.is_subperiod(offsets.Minute(), offsets.Hour()))
+ assert(fmod.is_superperiod(offsets.Second(), offsets.Milli()))
+ assert(fmod.is_subperiod(offsets.Milli(), offsets.Second()))
+
+ assert(fmod.is_superperiod(offsets.Milli(), offsets.Micro()))
+ assert(fmod.is_subperiod(offsets.Micro(), offsets.Milli()))
+
+ assert(fmod.is_superperiod(offsets.Micro(), offsets.Nano()))
+ assert(fmod.is_subperiod(offsets.Nano(), offsets.Micro()))
+
+
if __name__ == '__main__':
import nose
nose.runmodule(argv=[__file__, '-vvs', '-x', '--pdb', '--pdb-failure'],
diff --git a/pandas/tseries/tests/test_plotting.py b/pandas/tseries/tests/test_plotting.py
index 0bdba3751b6fd..5742b8e9bfaae 100644
--- a/pandas/tseries/tests/test_plotting.py
+++ b/pandas/tseries/tests/test_plotting.py
@@ -707,6 +707,28 @@ def test_from_weekly_resampling(self):
for l in ax.get_lines():
self.assertTrue(PeriodIndex(data=l.get_xdata()).freq.startswith('W'))
+ @slow
+ def test_mixed_freq_second_millisecond(self):
+ # GH 7772, GH 7760
+ idxh = date_range('2014-07-01 09:00', freq='S', periods=50)
+ idxl = date_range('2014-07-01 09:00', freq='100L', periods=500)
+ high = Series(np.random.randn(len(idxh)), idxh)
+ low = Series(np.random.randn(len(idxl)), idxl)
+ # high to low
+ high.plot()
+ ax = low.plot()
+ self.assertEqual(len(ax.get_lines()), 2)
+ for l in ax.get_lines():
+ self.assertEqual(PeriodIndex(data=l.get_xdata()).freq, 'L')
+ tm.close()
+
+ # low to high
+ low.plot()
+ ax = high.plot()
+ self.assertEqual(len(ax.get_lines()), 2)
+ for l in ax.get_lines():
+ self.assertEqual(PeriodIndex(data=l.get_xdata()).freq, 'L')
+
@slow
def test_irreg_dtypes(self):
# date
| Closes #7772, Closes #7760.
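The extended sub/super-period hierarchy can be exercised directly, mirroring the new tests (a usage sketch; `fmod` is just a local alias for the frequencies module):

```python
from pandas.tseries import frequencies as fmod, offsets

# sub-second frequencies now participate in the hierarchy
print(fmod.is_subperiod(offsets.Milli(), offsets.Second()))   # True
print(fmod.is_superperiod(offsets.Micro(), offsets.Nano()))   # True
print(fmod.is_subperiod(offsets.Second(), offsets.Milli()))   # False
```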
| https://api.github.com/repos/pandas-dev/pandas/pulls/7803 | 2014-07-19T10:50:15Z | 2014-07-19T14:11:35Z | 2014-07-19T14:11:35Z | 2014-07-19T15:21:58Z |
BUG: reset_index with MultiIndex contains PeriodIndex raises ValueError | diff --git a/doc/source/v0.15.0.txt b/doc/source/v0.15.0.txt
index ca24eb3f910ed..3ce193763779b 100644
--- a/doc/source/v0.15.0.txt
+++ b/doc/source/v0.15.0.txt
@@ -193,6 +193,8 @@ Bug Fixes
+- Bug in ``DataFrame.reset_index`` which has ``MultiIndex`` contains ``PeriodIndex`` or ``DatetimeIndex`` with tz raises ``ValueError`` (:issue:`7746`, :issue:`7793`)
+
diff --git a/pandas/core/frame.py b/pandas/core/frame.py
index 17bef8dd28cf4..4f558dda756dd 100644
--- a/pandas/core/frame.py
+++ b/pandas/core/frame.py
@@ -2452,13 +2452,13 @@ def _maybe_casted_values(index, labels=None):
if values.dtype == np.object_:
values = lib.maybe_convert_objects(values)
- # if we have the labels, extract the values with a mask
- if labels is not None:
- mask = labels == -1
- values = values.take(labels)
- if mask.any():
- values, changed = com._maybe_upcast_putmask(values,
- mask, np.nan)
+ # if we have the labels, extract the values with a mask
+ if labels is not None:
+ mask = labels == -1
+ values = values.take(labels)
+ if mask.any():
+ values, changed = com._maybe_upcast_putmask(values,
+ mask, np.nan)
return values
new_index = np.arange(len(new_obj))
diff --git a/pandas/tests/test_multilevel.py b/pandas/tests/test_multilevel.py
index c0ca5451ef1d2..d8e17c4d1d290 100644
--- a/pandas/tests/test_multilevel.py
+++ b/pandas/tests/test_multilevel.py
@@ -2118,6 +2118,33 @@ def test_reset_index_datetime(self):
expected['idx3'] = expected['idx3'].apply(lambda d: pd.Timestamp(d, tz='Europe/Paris'))
assert_frame_equal(df.reset_index(), expected)
+ # GH 7793
+ idx = pd.MultiIndex.from_product([['a','b'], pd.date_range('20130101', periods=3, tz=tz)])
+ df = pd.DataFrame(np.arange(6).reshape(6,1), columns=['a'], index=idx)
+
+ expected = pd.DataFrame({'level_0': 'a a a b b b'.split(),
+ 'level_1': [datetime.datetime(2013, 1, 1),
+ datetime.datetime(2013, 1, 2),
+ datetime.datetime(2013, 1, 3)] * 2,
+ 'a': np.arange(6, dtype='int64')},
+ columns=['level_0', 'level_1', 'a'])
+ expected['level_1'] = expected['level_1'].apply(lambda d: pd.Timestamp(d, offset='D', tz=tz))
+ assert_frame_equal(df.reset_index(), expected)
+
+ def test_reset_index_period(self):
+ # GH 7746
+ idx = pd.MultiIndex.from_product([pd.period_range('20130101', periods=3, freq='M'),
+ ['a','b','c']], names=['month', 'feature'])
+
+ df = pd.DataFrame(np.arange(9).reshape(-1,1), index=idx, columns=['a'])
+ expected = pd.DataFrame({'month': [pd.Period('2013-01', freq='M')] * 3 +
+ [pd.Period('2013-02', freq='M')] * 3 +
+ [pd.Period('2013-03', freq='M')] * 3,
+ 'feature': ['a', 'b', 'c'] * 3,
+ 'a': np.arange(9, dtype='int64')},
+ columns=['month', 'feature', 'a'])
+ assert_frame_equal(df.reset_index(), expected)
+
def test_set_index_period(self):
# GH 6631
df = DataFrame(np.random.random(6))
| Closes #7746, Closes #7793.
Sorry, this was caused by #7533. The `ValueError` was raised when a `PeriodIndex` or tz-aware `DatetimeIndex` level has fewer unique values than the `MultiIndex` length.
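A minimal sketch of the previously-failing case, mirroring the new test:

```python
import numpy as np
import pandas as pd

idx = pd.MultiIndex.from_product(
    [pd.period_range('2013-01', periods=3, freq='M'), ['a', 'b', 'c']],
    names=['month', 'feature'])
df = pd.DataFrame(np.arange(9).reshape(-1, 1), index=idx, columns=['a'])

# the level has only 3 unique periods but the index has 9 rows --
# exactly the condition that raised ValueError before this fix
res = df.reset_index()
print(list(res.columns))  # ['month', 'feature', 'a']
```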
| https://api.github.com/repos/pandas-dev/pandas/pulls/7802 | 2014-07-19T09:10:42Z | 2014-07-19T13:39:59Z | 2014-07-19T13:39:59Z | 2014-07-19T15:24:03Z |
BUG: timeseries subplots may display unnecessary minor ticklabels | diff --git a/doc/source/v0.15.0.txt b/doc/source/v0.15.0.txt
index 5e3f97944c243..5edc337a1c6a5 100644
--- a/doc/source/v0.15.0.txt
+++ b/doc/source/v0.15.0.txt
@@ -199,6 +199,7 @@ Bug Fixes
+- Bug in ``DataFrame.plot`` with ``subplots=True`` may draw unnecessary minor xticks and yticks (:issue:`7801`)
diff --git a/pandas/tests/test_graphics.py b/pandas/tests/test_graphics.py
index 00045e88ba2f0..f9ae058c065e3 100644
--- a/pandas/tests/test_graphics.py
+++ b/pandas/tests/test_graphics.py
@@ -126,11 +126,14 @@ def _check_visible(self, collections, visible=True):
Parameters
----------
- collections : list-like
- list or collection of target artist
+ collections : matplotlib Artist or its list-like
+ target Artist or its list or collection
visible : bool
expected visibility
"""
+ from matplotlib.collections import Collection
+ if not isinstance(collections, Collection) and not com.is_list_like(collections):
+ collections = [collections]
for patch in collections:
self.assertEqual(patch.get_visible(), visible)
@@ -861,9 +864,12 @@ def test_plot(self):
axes = _check_plot_works(df.plot, subplots=True, title='blah')
self._check_axes_shape(axes, axes_num=3, layout=(3, 1))
for ax in axes[:2]:
+ self._check_visible(ax.xaxis) # xaxis must be visible for grid
self._check_visible(ax.get_xticklabels(), visible=False)
+ self._check_visible(ax.get_xticklabels(minor=True), visible=False)
self._check_visible([ax.xaxis.get_label()], visible=False)
for ax in [axes[2]]:
+ self._check_visible(ax.xaxis)
self._check_visible(ax.get_xticklabels())
self._check_visible([ax.xaxis.get_label()])
@@ -1017,21 +1023,61 @@ def test_subplots(self):
self._check_legend_labels(ax, labels=[com.pprint_thing(column)])
for ax in axes[:-2]:
+ self._check_visible(ax.xaxis) # xaxis must be visible for grid
self._check_visible(ax.get_xticklabels(), visible=False)
+ self._check_visible(ax.get_xticklabels(minor=True), visible=False)
+ self._check_visible(ax.xaxis.get_label(), visible=False)
self._check_visible(ax.get_yticklabels())
+ self._check_visible(axes[-1].xaxis)
self._check_visible(axes[-1].get_xticklabels())
+ self._check_visible(axes[-1].get_xticklabels(minor=True))
+ self._check_visible(axes[-1].xaxis.get_label())
self._check_visible(axes[-1].get_yticklabels())
axes = df.plot(kind=kind, subplots=True, sharex=False)
for ax in axes:
+ self._check_visible(ax.xaxis)
self._check_visible(ax.get_xticklabels())
+ self._check_visible(ax.get_xticklabels(minor=True))
+ self._check_visible(ax.xaxis.get_label())
self._check_visible(ax.get_yticklabels())
axes = df.plot(kind=kind, subplots=True, legend=False)
for ax in axes:
self.assertTrue(ax.get_legend() is None)
+ @slow
+ def test_subplots_timeseries(self):
+ idx = date_range(start='2014-07-01', freq='M', periods=10)
+ df = DataFrame(np.random.rand(10, 3), index=idx)
+
+ for kind in ['line', 'area']:
+ axes = df.plot(kind=kind, subplots=True, sharex=True)
+ self._check_axes_shape(axes, axes_num=3, layout=(3, 1))
+
+ for ax in axes[:-2]:
+ # GH 7801
+ self._check_visible(ax.xaxis) # xaxis must be visible for grid
+ self._check_visible(ax.get_xticklabels(), visible=False)
+ self._check_visible(ax.get_xticklabels(minor=True), visible=False)
+ self._check_visible(ax.xaxis.get_label(), visible=False)
+ self._check_visible(ax.get_yticklabels())
+
+ self._check_visible(axes[-1].xaxis)
+ self._check_visible(axes[-1].get_xticklabels())
+ self._check_visible(axes[-1].get_xticklabels(minor=True))
+ self._check_visible(axes[-1].xaxis.get_label())
+ self._check_visible(axes[-1].get_yticklabels())
+
+ axes = df.plot(kind=kind, subplots=True, sharex=False)
+ for ax in axes:
+ self._check_visible(ax.xaxis)
+ self._check_visible(ax.get_xticklabels())
+ self._check_visible(ax.get_xticklabels(minor=True))
+ self._check_visible(ax.xaxis.get_label())
+ self._check_visible(ax.get_yticklabels())
+
def test_negative_log(self):
df = - DataFrame(rand(6, 4),
index=list(string.ascii_letters[:6]),
diff --git a/pandas/tools/plotting.py b/pandas/tools/plotting.py
index d3ea809b79b76..3570b605c714e 100644
--- a/pandas/tools/plotting.py
+++ b/pandas/tools/plotting.py
@@ -2972,16 +2972,35 @@ def _subplots(nrows=1, ncols=1, naxes=None, sharex=False, sharey=False, squeeze=
axarr[i] = ax
if nplots > 1:
+
if sharex and nrows > 1:
for ax in axarr[:naxes][:-ncols]: # only bottom row
for label in ax.get_xticklabels():
label.set_visible(False)
+ try:
+ # set_visible will not be effective if
+ # minor axis has NullLocator and NullFormattor (default)
+ import matplotlib.ticker as ticker
+ ax.xaxis.set_minor_locator(ticker.AutoLocator())
+ ax.xaxis.set_minor_formatter(ticker.FormatStrFormatter(''))
+ for label in ax.get_xticklabels(minor=True):
+ label.set_visible(False)
+ except Exception: # pragma no cover
+ pass
ax.xaxis.get_label().set_visible(False)
if sharey and ncols > 1:
for i, ax in enumerate(axarr):
if (i % ncols) != 0: # only first column
for label in ax.get_yticklabels():
label.set_visible(False)
+ try:
+ import matplotlib.ticker as ticker
+ ax.yaxis.set_minor_locator(ticker.AutoLocator())
+ ax.yaxis.set_minor_formatter(ticker.FormatStrFormatter(''))
+ for label in ax.get_yticklabels(minor=True):
+ label.set_visible(False)
+ except Exception: # pragma no cover
+ pass
ax.yaxis.get_label().set_visible(False)
if naxes != nplots:
| Related to #7457. The fix was incomplete because it only hides major ticklabels, not minor ticklabels. This causes incorrect results in time-series plots.
(Another problem is that the `rot` default is not applied to minor ticks; I'll check that separately.)
```
df = pd.DataFrame(np.random.randn(10, 4), index=pd.date_range(start='2014-07-01', freq='M', periods=10))
df.plot(subplots=True)
```
### Current result:
Minor ticklabels of top 3 axes are not hidden.
(screenshot: minor ticklabels still visible on the top 3 axes)
### Result after fix:
(screenshot: only the bottom axis shows ticklabels)
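The core of the fix can be sketched in plain matplotlib (a minimal sketch, not the patch itself; the Agg backend is assumed so it runs headless):

```python
import matplotlib
matplotlib.use('Agg')  # headless backend for this sketch
import matplotlib.pyplot as plt
import matplotlib.ticker as ticker

fig, (ax1, ax2) = plt.subplots(2, 1, sharex=True)
ax1.plot(range(10))
ax2.plot(range(10))

# set_visible has no effect while the minor axis still has the default
# NullLocator/NullFormatter, so install real ones first -- the approach
# taken in the patch above
ax1.xaxis.set_minor_locator(ticker.AutoLocator())
ax1.xaxis.set_minor_formatter(ticker.FormatStrFormatter(''))
for label in ax1.get_xticklabels(minor=True):
    label.set_visible(False)

hidden = all(not l.get_visible() for l in ax1.get_xticklabels(minor=True))
print(hidden)  # True
```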
| https://api.github.com/repos/pandas-dev/pandas/pulls/7801 | 2014-07-19T08:57:16Z | 2014-07-22T11:38:59Z | 2014-07-22T11:38:59Z | 2014-07-23T11:08:36Z |
Correct docs structure in indexing docs. | diff --git a/doc/source/indexing.rst b/doc/source/indexing.rst
index 495d97f340d31..837e3b386f3d0 100644
--- a/doc/source/indexing.rst
+++ b/doc/source/indexing.rst
@@ -2170,7 +2170,7 @@ add an index after you've already done so. There are a couple of different
ways.
Add an index using DataFrame columns
-------------------------------------
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
.. _indexing.set_index:
@@ -2213,7 +2213,7 @@ the index in-place (without creating a new object):
data
Remove / reset the index, ``reset_index``
-------------------------------------------
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
As a convenience, there is a new function on DataFrame called ``reset_index``
which transfers the index values into the DataFrame's columns and sets a simple
@@ -2244,7 +2244,7 @@ discards the index, instead of putting index values in the DataFrame's columns.
deprecated.
Adding an ad hoc index
-----------------------
+~~~~~~~~~~~~~~~~~~~~~~
If you create an index yourself, you can just assign it to the ``index`` field:
| https://api.github.com/repos/pandas-dev/pandas/pulls/7800 | 2014-07-19T07:55:11Z | 2014-07-19T12:07:37Z | 2014-07-19T12:07:37Z | 2014-07-19T12:07:41Z | |
dropna method added to Index. | diff --git a/pandas/core/index.py b/pandas/core/index.py
index 6927d5a732440..98fafe41ced1b 100644
--- a/pandas/core/index.py
+++ b/pandas/core/index.py
@@ -215,6 +215,16 @@ def is_(self, other):
# use something other than None to be clearer
return self._id is getattr(other, '_id', Ellipsis)
+ def dropna(self):
+ """
+ Return Index without null values
+
+ Returns
+ -------
+ dropped : Index
+ """
+ return self[~isnull(self.values)]
+
def _reset_identity(self):
"""Initializes or resets ``_id`` attribute with new object"""
self._id = _Identity()
diff --git a/pandas/tests/test_index.py b/pandas/tests/test_index.py
index a8486beb57042..7ebbac4541c8e 100644
--- a/pandas/tests/test_index.py
+++ b/pandas/tests/test_index.py
@@ -905,6 +905,13 @@ def test_nan_first_take_datetime(self):
exp = Index([idx[-1], idx[0], idx[1]])
tm.assert_index_equal(res, exp)
+ def test_dropna(self):
+ idx = Index([np.nan, 'a', np.nan, np.nan, 'b', 'c', np.nan],
+ name='idx')
+ expected = Index(['a', 'b', 'c'], name='idx')
+ result = idx.dropna()
+ tm.assert_index_equal(result, expected)
+
class TestFloat64Index(tm.TestCase):
_multiprocess_can_split_ = True
@@ -1049,6 +1056,13 @@ def test_astype_from_object(self):
tm.assert_equal(result.dtype, expected.dtype)
tm.assert_index_equal(result, expected)
+ def test_dropna(self):
+ idx = Index([np.nan, 1.0, np.nan, np.nan, 2.0, 3.0, np.nan],
+ name='idx')
+ expected = Index([1.0, 2.0, 3.0], name='idx')
+ result = idx.dropna()
+ tm.assert_index_equal(result, expected)
+
class TestInt64Index(tm.TestCase):
_multiprocess_can_split_ = True
@@ -1474,6 +1488,12 @@ def test_slice_keep_name(self):
idx = Int64Index([1, 2], name='asdf')
self.assertEqual(idx.name, idx[1:].name)
+ def test_dropna_does_nothing(self):
+ idx = Index([1, 2, 3], name='idx')
+ expected = Index([1, 2, 3], name='idx')
+ result = idx.dropna()
+ tm.assert_index_equal(result, expected)
+
class TestMultiIndex(tm.TestCase):
_multiprocess_can_split_ = True
@@ -2824,6 +2844,12 @@ def test_level_setting_resets_attributes(self):
# if this fails, probably didn't reset the cache correctly.
assert not ind.is_monotonic
+ def test_dropna_does_nothing(self):
+ idx = MultiIndex.from_tuples([('bar', 'two')])
+ expected = idx
+ result = idx.dropna()
+ tm.assert_index_equal(result, expected)
+
def test_get_combined_index():
from pandas.core.index import _get_combined_index
| ref pydata/pandas#6194
ref pydata/pandas#7784
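A usage sketch of the proposed method, mirroring the tests above:

```python
import numpy as np
import pandas as pd

idx = pd.Index([np.nan, 'a', np.nan, np.nan, 'b', 'c', np.nan], name='idx')
print(list(idx.dropna()))  # ['a', 'b', 'c']
print(idx.dropna().name)   # idx -- the name is preserved
```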
| https://api.github.com/repos/pandas-dev/pandas/pulls/7799 | 2014-07-19T00:03:14Z | 2014-07-19T00:04:05Z | null | 2014-07-19T00:04:05Z |
BUG: tslib.tz_convert and tslib.tz_convert_single may output different result in DST | diff --git a/doc/source/v0.15.0.txt b/doc/source/v0.15.0.txt
index dc8ed4c9f5aac..109ed8b286c22 100644
--- a/doc/source/v0.15.0.txt
+++ b/doc/source/v0.15.0.txt
@@ -233,8 +233,8 @@ Enhancements
-
-
+- Bug in ``tslib.tz_convert`` and ``tslib.tz_convert_single`` may return different results (:issue:`7798`)
+- Bug in ``DatetimeIndex.intersection`` of non-overlapping timestamps with tz raises ``IndexError`` (:issue:`7880`)
diff --git a/pandas/tests/test_frame.py b/pandas/tests/test_frame.py
index 1b5baf1bfe9da..88a86da27daf9 100644
--- a/pandas/tests/test_frame.py
+++ b/pandas/tests/test_frame.py
@@ -12636,23 +12636,6 @@ def test_consolidate_datetime64(self):
assert_array_equal(df.starting.values, ser_starting.index.values)
assert_array_equal(df.ending.values, ser_ending.index.values)
- def test_tslib_tz_convert_trans_pos_plus_1__bug(self):
- # Regression test for tslib.tz_convert(vals, tz1, tz2).
- # See https://github.com/pydata/pandas/issues/4496 for details.
- idx = pd.date_range(datetime(2011, 3, 26, 23), datetime(2011, 3, 27, 1), freq='1min')
- idx = idx.tz_localize('UTC')
- idx = idx.tz_convert('Europe/Moscow')
-
- test_vector = pd.Series([3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3,
- 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3, 3,
- 3, 3, 3, 3, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4,
- 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4,
- 4, 4, 4, 4, 4, 4, 4, 4, 5], dtype=int)
-
- hours = idx.hour
-
- np.testing.assert_equal(hours, test_vector.values)
-
def _check_bool_op(self, name, alternative, frame=None, has_skipna=True,
has_bool_only=False):
if frame is None:
diff --git a/pandas/tseries/frequencies.py b/pandas/tseries/frequencies.py
index 9bbcc781ca9d6..edc7b075da6f8 100644
--- a/pandas/tseries/frequencies.py
+++ b/pandas/tseries/frequencies.py
@@ -683,25 +683,6 @@ def infer_freq(index, warn=True):
_ONE_HOUR = 60 * _ONE_MINUTE
_ONE_DAY = 24 * _ONE_HOUR
-def _tz_convert_with_transitions(values, to_tz, from_tz):
- """
- convert i8 values from the specificed timezone to the to_tz zone, taking
- into account DST transitions
- """
-
- # vectorization is slow, so tests if we can do this via the faster tz_convert
- f = lambda x: tslib.tz_convert_single(x, to_tz, from_tz)
-
- if len(values) > 2:
- first_slow, last_slow = f(values[0]),f(values[-1])
-
- first_fast, last_fast = tslib.tz_convert(np.array([values[0],values[-1]],dtype='i8'),to_tz,from_tz)
-
- # don't cross a DST, so ok
- if first_fast == first_slow and last_fast == last_slow:
- return tslib.tz_convert(values,to_tz,from_tz)
-
- return np.vectorize(f)(values)
class _FrequencyInferer(object):
"""
@@ -713,7 +694,7 @@ def __init__(self, index, warn=True):
self.values = np.asarray(index).view('i8')
if index.tz is not None:
- self.values = _tz_convert_with_transitions(self.values,'UTC',index.tz)
+ self.values = tslib.tz_convert(self.values, 'UTC', index.tz)
self.warn = warn
diff --git a/pandas/tseries/index.py b/pandas/tseries/index.py
index 518bb4180ec89..5f7c93d38653a 100644
--- a/pandas/tseries/index.py
+++ b/pandas/tseries/index.py
@@ -14,7 +14,7 @@
from pandas.compat import u
from pandas.tseries.frequencies import (
infer_freq, to_offset, get_period_alias,
- Resolution, _tz_convert_with_transitions)
+ Resolution)
from pandas.core.base import DatetimeIndexOpsMixin
from pandas.tseries.offsets import DateOffset, generate_range, Tick, CDay
from pandas.tseries.tools import parse_time_string, normalize_date
@@ -1569,7 +1569,7 @@ def insert(self, loc, item):
new_dates = np.concatenate((self[:loc].asi8, [item.view(np.int64)],
self[loc:].asi8))
if self.tz is not None:
- new_dates = _tz_convert_with_transitions(new_dates,'UTC',self.tz)
+ new_dates = tslib.tz_convert(new_dates, 'UTC', self.tz)
return DatetimeIndex(new_dates, name=self.name, freq=freq, tz=self.tz)
except (AttributeError, TypeError):
@@ -1606,7 +1606,7 @@ def delete(self, loc):
freq = self.freq
if self.tz is not None:
- new_dates = _tz_convert_with_transitions(new_dates, 'UTC', self.tz)
+ new_dates = tslib.tz_convert(new_dates, 'UTC', self.tz)
return DatetimeIndex(new_dates, name=self.name, freq=freq, tz=self.tz)
def _view_like(self, ndarray):
diff --git a/pandas/tseries/tests/test_timeseries.py b/pandas/tseries/tests/test_timeseries.py
index 9d5f45735feb5..c54c133dd2afe 100644
--- a/pandas/tseries/tests/test_timeseries.py
+++ b/pandas/tseries/tests/test_timeseries.py
@@ -3203,8 +3203,8 @@ def test_union(self):
def test_intersection(self):
# GH 4690 (with tz)
- for tz in [None, 'Asia/Tokyo']:
- rng = date_range('6/1/2000', '6/30/2000', freq='D', name='idx')
+ for tz in [None, 'Asia/Tokyo', 'US/Eastern', 'dateutil/US/Pacific']:
+ base = date_range('6/1/2000', '6/30/2000', freq='D', name='idx')
# if target has the same name, it is preserved
rng2 = date_range('5/15/2000', '6/20/2000', freq='D', name='idx')
@@ -3214,16 +3214,18 @@ def test_intersection(self):
rng3 = date_range('5/15/2000', '6/20/2000', freq='D', name='other')
expected3 = date_range('6/1/2000', '6/20/2000', freq='D', name=None)
- result2 = rng.intersection(rng2)
- result3 = rng.intersection(rng3)
- for (result, expected) in [(result2, expected2), (result3, expected3)]:
+ rng4 = date_range('7/1/2000', '7/31/2000', freq='D', name='idx')
+ expected4 = DatetimeIndex([], name='idx')
+
+ for (rng, expected) in [(rng2, expected2), (rng3, expected3), (rng4, expected4)]:
+ result = base.intersection(rng)
self.assertTrue(result.equals(expected))
self.assertEqual(result.name, expected.name)
self.assertEqual(result.freq, expected.freq)
self.assertEqual(result.tz, expected.tz)
# non-monotonic
- rng = DatetimeIndex(['2011-01-05', '2011-01-04', '2011-01-02', '2011-01-03'],
+ base = DatetimeIndex(['2011-01-05', '2011-01-04', '2011-01-02', '2011-01-03'],
tz=tz, name='idx')
rng2 = DatetimeIndex(['2011-01-04', '2011-01-02', '2011-02-02', '2011-02-03'],
@@ -3234,10 +3236,12 @@ def test_intersection(self):
tz=tz, name='other')
expected3 = DatetimeIndex(['2011-01-04', '2011-01-02'], tz=tz, name=None)
- result2 = rng.intersection(rng2)
- result3 = rng.intersection(rng3)
- for (result, expected) in [(result2, expected2), (result3, expected3)]:
- print(result, expected)
+ # GH 7880
+ rng4 = date_range('7/1/2000', '7/31/2000', freq='D', tz=tz, name='idx')
+ expected4 = DatetimeIndex([], tz=tz, name='idx')
+
+ for (rng, expected) in [(rng2, expected2), (rng3, expected3), (rng4, expected4)]:
+ result = base.intersection(rng)
self.assertTrue(result.equals(expected))
self.assertEqual(result.name, expected.name)
self.assertIsNone(result.freq)
diff --git a/pandas/tseries/tests/test_timezones.py b/pandas/tseries/tests/test_timezones.py
index 21f915cb50e21..ab969f13289ac 100644
--- a/pandas/tseries/tests/test_timezones.py
+++ b/pandas/tseries/tests/test_timezones.py
@@ -787,6 +787,64 @@ def test_utc_with_system_utc(self):
# check that the time hasn't changed.
self.assertEqual(ts, ts.tz_convert(dateutil.tz.tzutc()))
+ def test_tslib_tz_convert_trans_pos_plus_1__bug(self):
+ # Regression test for tslib.tz_convert(vals, tz1, tz2).
+ # See https://github.com/pydata/pandas/issues/4496 for details.
+ for freq, n in [('H', 1), ('T', 60), ('S', 3600)]:
+ idx = date_range(datetime(2011, 3, 26, 23), datetime(2011, 3, 27, 1), freq=freq)
+ idx = idx.tz_localize('UTC')
+ idx = idx.tz_convert('Europe/Moscow')
+
+ expected = np.repeat(np.array([3, 4, 5]), np.array([n, n, 1]))
+ self.assert_numpy_array_equal(idx.hour, expected)
+
+ def test_tslib_tz_convert_dst(self):
+ for freq, n in [('H', 1), ('T', 60), ('S', 3600)]:
+ # Start DST
+ idx = date_range('2014-03-08 23:00', '2014-03-09 09:00', freq=freq, tz='UTC')
+ idx = idx.tz_convert('US/Eastern')
+ expected = np.repeat(np.array([18, 19, 20, 21, 22, 23, 0, 1, 3, 4, 5]),
+ np.array([n, n, n, n, n, n, n, n, n, n, 1]))
+ self.assert_numpy_array_equal(idx.hour, expected)
+
+ idx = date_range('2014-03-08 18:00', '2014-03-09 05:00', freq=freq, tz='US/Eastern')
+ idx = idx.tz_convert('UTC')
+ expected = np.repeat(np.array([23, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9]),
+ np.array([n, n, n, n, n, n, n, n, n, n, 1]))
+ self.assert_numpy_array_equal(idx.hour, expected)
+
+ # End DST
+ idx = date_range('2014-11-01 23:00', '2014-11-02 09:00', freq=freq, tz='UTC')
+ idx = idx.tz_convert('US/Eastern')
+ expected = np.repeat(np.array([19, 20, 21, 22, 23, 0, 1, 1, 2, 3, 4]),
+ np.array([n, n, n, n, n, n, n, n, n, n, 1]))
+ self.assert_numpy_array_equal(idx.hour, expected)
+
+ idx = date_range('2014-11-01 18:00', '2014-11-02 05:00', freq=freq, tz='US/Eastern')
+ idx = idx.tz_convert('UTC')
+ expected = np.repeat(np.array([22, 23, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10]),
+ np.array([n, n, n, n, n, n, n, n, n, n, n, n, 1]))
+ self.assert_numpy_array_equal(idx.hour, expected)
+
+ # daily
+ # Start DST
+ idx = date_range('2014-03-08 00:00', '2014-03-09 00:00', freq='D', tz='UTC')
+ idx = idx.tz_convert('US/Eastern')
+ self.assert_numpy_array_equal(idx.hour, np.array([19, 19]))
+
+ idx = date_range('2014-03-08 00:00', '2014-03-09 00:00', freq='D', tz='US/Eastern')
+ idx = idx.tz_convert('UTC')
+ self.assert_numpy_array_equal(idx.hour, np.array([5, 5]))
+
+ # End DST
+ idx = date_range('2014-11-01 00:00', '2014-11-02 00:00', freq='D', tz='UTC')
+ idx = idx.tz_convert('US/Eastern')
+ self.assert_numpy_array_equal(idx.hour, np.array([20, 20]))
+
+ idx = date_range('2014-11-01 00:00', '2014-11-02 000:00', freq='D', tz='US/Eastern')
+ idx = idx.tz_convert('UTC')
+ self.assert_numpy_array_equal(idx.hour, np.array([4, 4]))
+
class TestTimeZoneCacheKey(tm.TestCase):
def test_cache_keys_are_distinct_for_pytz_vs_dateutil(self):
diff --git a/pandas/tseries/tests/test_tslib.py b/pandas/tseries/tests/test_tslib.py
index a47d6a178f8b2..79eaa97d50322 100644
--- a/pandas/tseries/tests/test_tslib.py
+++ b/pandas/tseries/tests/test_tslib.py
@@ -425,6 +425,44 @@ def test_period_ordinal_business_day(self):
# Tuesday
self.assertEqual(11418, period_ordinal(2013, 10, 8, 0, 0, 0, 0, 0, get_freq('B')))
+ def test_tslib_tz_convert(self):
+ def compare_utc_to_local(tz_didx, utc_didx):
+ f = lambda x: tslib.tz_convert_single(x, 'UTC', tz_didx.tz)
+ result = tslib.tz_convert(tz_didx.asi8, 'UTC', tz_didx.tz)
+ result_single = np.vectorize(f)(tz_didx.asi8)
+ self.assert_numpy_array_equal(result, result_single)
+
+ def compare_local_to_utc(tz_didx, utc_didx):
+ f = lambda x: tslib.tz_convert_single(x, tz_didx.tz, 'UTC')
+ result = tslib.tz_convert(utc_didx.asi8, tz_didx.tz, 'UTC')
+ result_single = np.vectorize(f)(utc_didx.asi8)
+ self.assert_numpy_array_equal(result, result_single)
+
+ for tz in ['UTC', 'Asia/Tokyo', 'US/Eastern', 'Europe/Moscow']:
+ # US: 2014-03-09 - 2014-11-11
+ # MOSCOW: 2014-10-26 / 2014-12-31
+ tz_didx = date_range('2014-03-01', '2015-01-10', freq='H', tz=tz)
+ utc_didx = date_range('2014-03-01', '2015-01-10', freq='H')
+ compare_utc_to_local(tz_didx, utc_didx)
+ # local tz to UTC can be differ in hourly (or higher) freqs because of DST
+ compare_local_to_utc(tz_didx, utc_didx)
+
+ tz_didx = date_range('2000-01-01', '2020-01-01', freq='D', tz=tz)
+ utc_didx = date_range('2000-01-01', '2020-01-01', freq='D')
+ compare_utc_to_local(tz_didx, utc_didx)
+ compare_local_to_utc(tz_didx, utc_didx)
+
+ tz_didx = date_range('2000-01-01', '2100-01-01', freq='A', tz=tz)
+ utc_didx = date_range('2000-01-01', '2100-01-01', freq='A')
+ compare_utc_to_local(tz_didx, utc_didx)
+ compare_local_to_utc(tz_didx, utc_didx)
+
+ # Check empty array
+ result = tslib.tz_convert(np.array([], dtype=np.int64),
+ tslib.maybe_get_tz('US/Eastern'),
+ tslib.maybe_get_tz('Asia/Tokyo'))
+ self.assert_numpy_array_equal(result, np.array([], dtype=np.int64))
+
class TestTimestampOps(tm.TestCase):
def test_timestamp_and_datetime(self):
self.assertEqual((Timestamp(datetime.datetime(2013, 10, 13)) - datetime.datetime(2013, 10, 12)).days, 1)
diff --git a/pandas/tslib.pyx b/pandas/tslib.pyx
index dc9f3fa258985..b8342baae16bd 100644
--- a/pandas/tslib.pyx
+++ b/pandas/tslib.pyx
@@ -1907,10 +1907,14 @@ def tz_convert(ndarray[int64_t] vals, object tz1, object tz2):
Py_ssize_t i, pos, n = len(vals)
int64_t v, offset
pandas_datetimestruct dts
+ Py_ssize_t trans_len
if not have_pytz:
import pytz
+ if len(vals) == 0:
+ return np.array([], dtype=np.int64)
+
# Convert to UTC
if _get_zone(tz1) != 'UTC':
@@ -1927,6 +1931,7 @@ def tz_convert(ndarray[int64_t] vals, object tz1, object tz2):
else:
deltas = _get_deltas(tz1)
trans = _get_transitions(tz1)
+ trans_len = len(trans)
pos = trans.searchsorted(vals[0]) - 1
if pos < 0:
raise ValueError('First time before start of DST info')
@@ -1934,7 +1939,7 @@ def tz_convert(ndarray[int64_t] vals, object tz1, object tz2):
offset = deltas[pos]
for i in range(n):
v = vals[i]
- if v >= [pos + 1]:
+ while pos + 1 < trans_len and v >= trans[pos + 1]:
pos += 1
offset = deltas[pos]
utc_dates[i] = v - offset
@@ -1957,29 +1962,23 @@ def tz_convert(ndarray[int64_t] vals, object tz1, object tz2):
# Convert UTC to other timezone
trans = _get_transitions(tz2)
+ trans_len = len(trans)
deltas = _get_deltas(tz2)
- pos = trans.searchsorted(utc_dates[0])
- if pos == 0:
+ pos = trans.searchsorted(utc_dates[0]) - 1
+ if pos < 0:
raise ValueError('First time before start of DST info')
- elif pos == len(trans):
- return utc_dates + deltas[-1]
# TODO: this assumed sortedness :/
- pos -= 1
-
offset = deltas[pos]
- cdef Py_ssize_t trans_len = len(trans)
-
for i in range(n):
v = utc_dates[i]
if vals[i] == NPY_NAT:
result[i] = vals[i]
else:
- if (pos + 1) < trans_len and v >= trans[pos + 1]:
+ while pos + 1 < trans_len and v >= trans[pos + 1]:
pos += 1
offset = deltas[pos]
result[i] = v + offset
-
return result
def tz_convert_single(int64_t val, object tz1, object tz2):
@@ -2005,7 +2004,7 @@ def tz_convert_single(int64_t val, object tz1, object tz2):
elif _get_zone(tz1) != 'UTC':
deltas = _get_deltas(tz1)
trans = _get_transitions(tz1)
- pos = trans.searchsorted(val) - 1
+ pos = trans.searchsorted(val, side='right') - 1
if pos < 0:
raise ValueError('First time before start of DST info')
offset = deltas[pos]
@@ -2024,7 +2023,7 @@ def tz_convert_single(int64_t val, object tz1, object tz2):
# Convert UTC to other timezone
trans = _get_transitions(tz2)
deltas = _get_deltas(tz2)
- pos = trans.searchsorted(utc_date) - 1
+ pos = trans.searchsorted(utc_date, side='right') - 1
if pos < 0:
raise ValueError('First time before start of DST info')
| These functions may return different results around DST transitions. There seem to be two problems:
- `tslib.tz_convert` handles DST by advancing `pos` (and with it `deltas`) one step at a time. If the input contains time gaps spanning more than two DST transitions, the result will be incorrect.
- `tslib.tz_convert_single` returns an incorrect result if the input falls exactly on a DST edge.
```
import pandas as pd
import numpy as np
import datetime
import pytz
idx = pd.date_range('2014-03-01', '2015-01-10', freq='H')
f = lambda x: pd.tslib.tz_convert_single(x, pd.tslib.maybe_get_tz('US/Eastern'), 'UTC')
result = pd.tslib.tz_convert(idx.asi8, pd.tslib.maybe_get_tz('US/Eastern'), 'UTC')
result_single = np.vectorize(f)(idx.asi8)
result[result != result_single]
# [1394370000000000000 1394373600000000000 1394377200000000000 ...,
# 1414918800000000000 1414922400000000000 1414926000000000000]
```
#### Note
Additionally, it was modified to also close #7880.
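The essence of the fix is replacing the single `if` advance through the transition table with a `while` loop, so inputs that jump over several DST transitions still land on the correct offset. A minimal pure-Python sketch of that loop (the names `trans`/`deltas` mirror the Cython code, `bisect` stands in for `searchsorted`; this is a simplification with no NaT handling or real timezone data):

```python
import bisect

def apply_offsets(vals, trans, deltas):
    # `trans` holds sorted transition instants; `deltas[i]` is the UTC
    # offset in effect from trans[i] onwards.
    pos = bisect.bisect_right(trans, vals[0]) - 1
    if pos < 0:
        raise ValueError('First time before start of DST info')
    offset = deltas[pos]
    out = []
    for v in vals:
        # a while-loop (not a single `if`) so a gap spanning several
        # transitions still advances `pos` far enough
        while pos + 1 < len(trans) and v >= trans[pos + 1]:
            pos += 1
            offset = deltas[pos]
        out.append(v + offset)
    return out

print(apply_offsets([5, 25], trans=[0, 10, 20], deltas=[1, 2, 3]))
# [6, 28] -- the old single `if` would have stopped at deltas[1] for 25
```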
| https://api.github.com/repos/pandas-dev/pandas/pulls/7798 | 2014-07-18T22:36:18Z | 2014-08-03T01:41:58Z | 2014-08-03T01:41:58Z | 2014-08-04T13:18:25Z |
BUG: fix reading pre-0.14.1 pickles of containers with one block and dup items | diff --git a/doc/source/v0.15.0.txt b/doc/source/v0.15.0.txt
index 5e3f97944c243..103ac2a34a49a 100644
--- a/doc/source/v0.15.0.txt
+++ b/doc/source/v0.15.0.txt
@@ -191,6 +191,8 @@ Bug Fixes
- Bug in pickles contains ``DateOffset`` may raise ``AttributeError`` when ``normalize`` attribute is reffered internally (:issue:`7748`)
+- Bug in pickle deserialization that failed for pre-0.14.1 containers with dup items trying to avoid ambiguity
+ when matching block and manager items, when there's only one block there's no ambiguity (:issue:`7794`)
- Bug in ``is_superperiod`` and ``is_subperiod`` cannot handle higher frequencies than ``S`` (:issue:`7760`, :issue:`7772`, :issue:`7803`)
diff --git a/pandas/core/internals.py b/pandas/core/internals.py
index f649baeb16278..cad7b579aa554 100644
--- a/pandas/core/internals.py
+++ b/pandas/core/internals.py
@@ -2271,10 +2271,23 @@ def unpickle_block(values, mgr_locs):
ax_arrays, bvalues, bitems = state[:3]
self.axes = [_ensure_index(ax) for ax in ax_arrays]
+
+ if len(bitems) == 1 and self.axes[0].equals(bitems[0]):
+ # This is a workaround for pre-0.14.1 pickles that didn't
+ # support unpickling multi-block frames/panels with non-unique
+ # columns/items, because given a manager with items ["a", "b",
+ # "a"] there's no way of knowing which block's "a" is where.
+ #
+ # Single-block case can be supported under the assumption that
+ # block items corresponded to manager items 1-to-1.
+ all_mgr_locs = [slice(0, len(bitems[0]))]
+ else:
+ all_mgr_locs = [self.axes[0].get_indexer(blk_items)
+ for blk_items in bitems]
+
self.blocks = tuple(
- unpickle_block(values,
- self.axes[0].get_indexer(items))
- for values, items in zip(bvalues, bitems))
+ unpickle_block(values, mgr_locs)
+ for values, mgr_locs in zip(bvalues, all_mgr_locs))
self._post_setstate()
diff --git a/pandas/io/tests/data/legacy_pickle/0.13.0/0.13.0_x86_64_linux_2.7.8.pickle b/pandas/io/tests/data/legacy_pickle/0.13.0/0.13.0_x86_64_linux_2.7.8.pickle
new file mode 100644
index 0000000000000..3ffecb77ef8c9
Binary files /dev/null and b/pandas/io/tests/data/legacy_pickle/0.13.0/0.13.0_x86_64_linux_2.7.8.pickle differ
diff --git a/pandas/io/tests/data/legacy_pickle/0.14.0/0.14.0_x86_64_linux_2.7.8.pickle b/pandas/io/tests/data/legacy_pickle/0.14.0/0.14.0_x86_64_linux_2.7.8.pickle
new file mode 100644
index 0000000000000..19cbcddc4ded8
Binary files /dev/null and b/pandas/io/tests/data/legacy_pickle/0.14.0/0.14.0_x86_64_linux_2.7.8.pickle differ
diff --git a/pandas/io/tests/data/legacy_pickle/0.14.1/0.14.1_x86_64_linux_2.7.8.pickle b/pandas/io/tests/data/legacy_pickle/0.14.1/0.14.1_x86_64_linux_2.7.8.pickle
new file mode 100644
index 0000000000000..af530fcd3fb39
Binary files /dev/null and b/pandas/io/tests/data/legacy_pickle/0.14.1/0.14.1_x86_64_linux_2.7.8.pickle differ
diff --git a/pandas/io/tests/generate_legacy_pickles.py b/pandas/io/tests/generate_legacy_pickles.py
index 3a0386c7660d4..b20a1e5b60b86 100644
--- a/pandas/io/tests/generate_legacy_pickles.py
+++ b/pandas/io/tests/generate_legacy_pickles.py
@@ -1,6 +1,7 @@
""" self-contained to write legacy pickle files """
from __future__ import print_function
+
def _create_sp_series():
import numpy as np
@@ -53,6 +54,7 @@ def _create_sp_frame():
def create_data():
""" create the pickle data """
+ from distutils.version import LooseVersion
import numpy as np
import pandas
from pandas import (Series,TimeSeries,DataFrame,Panel,
@@ -92,13 +94,23 @@ def create_data():
index=MultiIndex.from_tuples(tuple(zip(*[['bar','bar','baz','baz','baz'],
['one','two','one','two','three']])),
names=['first','second'])),
- dup = DataFrame(np.arange(15).reshape(5, 3).astype(np.float64),
- columns=['A', 'B', 'A']))
+ dup=DataFrame(np.arange(15).reshape(5, 3).astype(np.float64),
+ columns=['A', 'B', 'A']))
panel = dict(float = Panel(dict(ItemA = frame['float'], ItemB = frame['float']+1)),
dup = Panel(np.arange(30).reshape(3, 5, 2).astype(np.float64),
items=['A', 'B', 'A']))
+ if LooseVersion(pandas.__version__) >= '0.14.1':
+ # Pre-0.14.1 versions generated non-unpicklable mixed-type frames and
+ # panels if their columns/items were non-unique.
+ mixed_dup_df = DataFrame(data)
+ mixed_dup_df.columns = list("ABCDA")
+
+ mixed_dup_panel = Panel(dict(ItemA=frame['float'], ItemB=frame['int']))
+ mixed_dup_panel.items = ['ItemA', 'ItemA']
+ frame['mixed_dup'] = mixed_dup_df
+ panel['mixed_dup'] = mixed_dup_panel
return dict( series = series,
frame = frame,
| Series, frames and panels that contain only one block can be unpickled
under the assumption that block items correspond to manager items 1-to-1
(as pointed out in #7329).
I still don't have any darwin/win/32-bit platforms at hand, so I
cannot post those pickles. Also, my Linux machine has a newer Python than
the filenames of the existing pickle test data indicate. I did generate
some data to test the fix; the question is, do you want me to upload it?
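For illustration, the decision the patch makes can be sketched in plain Python, with lists standing in for the real `Index`/`BlockManager` objects (`resolve_block_locs` is a hypothetical name, not pandas API):

```python
def resolve_block_locs(mgr_items, blocks_items):
    # With exactly one block whose items equal the manager items, map
    # positions 1-to-1 -- duplicate labels are then unambiguous.
    if len(blocks_items) == 1 and blocks_items[0] == mgr_items:
        return [list(range(len(mgr_items)))]
    # Otherwise fall back to label lookup, which cannot disambiguate
    # duplicate labels (the pre-0.14.1 pickle limitation).
    return [[mgr_items.index(item) for item in blk] for blk in blocks_items]

print(resolve_block_locs(['a', 'b', 'a'], [['a', 'b', 'a']]))  # [[0, 1, 2]]
print(resolve_block_locs(['a', 'b'], [['b'], ['a']]))          # [[1], [0]]
```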
| https://api.github.com/repos/pandas-dev/pandas/pulls/7794 | 2014-07-18T17:45:02Z | 2014-07-21T11:42:08Z | 2014-07-21T11:42:08Z | 2014-07-29T10:04:36Z |
BUG: read_column did not preserve UTC tzinfo | diff --git a/doc/source/v0.15.0.txt b/doc/source/v0.15.0.txt
index f5926c2d011ee..06c93541a7783 100644
--- a/doc/source/v0.15.0.txt
+++ b/doc/source/v0.15.0.txt
@@ -196,6 +196,7 @@ Bug Fixes
- Bug in Series 0-division with a float and integer operand dtypes (:issue:`7785`)
- Bug in ``Series.astype("unicode")`` not calling ``unicode`` on the values correctly (:issue:`7758`)
- Bug in ``DataFrame.as_matrix()`` with mixed ``datetime64[ns]`` and ``timedelta64[ns]`` dtypes (:issue:`7778`)
+- Bug in ``HDFStore.select_column()`` not preserving UTC timezone info when selecting a DatetimeIndex (:issue:`7777`)
- Bug in pickles contains ``DateOffset`` may raise ``AttributeError`` when ``normalize`` attribute is reffered internally (:issue:`7748`)
diff --git a/pandas/io/pytables.py b/pandas/io/pytables.py
index cecbb407d0bd1..c130ed4fc52ba 100644
--- a/pandas/io/pytables.py
+++ b/pandas/io/pytables.py
@@ -62,6 +62,18 @@ def _ensure_encoding(encoding):
encoding = _default_encoding
return encoding
+def _set_tz(values, tz, preserve_UTC=False):
+ """ set the timezone if values are an Index """
+ if tz is not None and isinstance(values, Index):
+ tz = _ensure_decoded(tz)
+ if values.tz is None:
+ values = values.tz_localize('UTC').tz_convert(tz)
+ if preserve_UTC:
+ if tslib.get_timezone(tz) == 'UTC':
+ values = list(values)
+
+ return values
+
Term = Expr
@@ -1464,11 +1476,7 @@ def convert(self, values, nan_rep, encoding):
kwargs['freq'] = None
self.values = Index(values, **kwargs)
- # set the timezone if indicated
- # we stored in utc, so reverse to local timezone
- if self.tz is not None:
- self.values = self.values.tz_localize(
- 'UTC').tz_convert(_ensure_decoded(self.tz))
+ self.values = _set_tz(self.values, self.tz)
return self
@@ -3443,8 +3451,11 @@ def read_column(self, column, where=None, start=None, stop=None, **kwargs):
# column must be an indexable or a data column
c = getattr(self.table.cols, column)
a.set_info(self.info)
- return Series(a.convert(c[start:stop], nan_rep=self.nan_rep,
- encoding=self.encoding).take_data())
+ return Series(_set_tz(a.convert(c[start:stop],
+ nan_rep=self.nan_rep,
+ encoding=self.encoding
+ ).take_data(),
+ a.tz, True))
raise KeyError("column [%s] not found in the table" % column)
diff --git a/pandas/io/tests/test_pytables.py b/pandas/io/tests/test_pytables.py
index c602e8ff1a888..8d7f007f0bda7 100644
--- a/pandas/io/tests/test_pytables.py
+++ b/pandas/io/tests/test_pytables.py
@@ -4299,6 +4299,38 @@ def test_tseries_indices_frame(self):
self.assertEqual(type(result.index), type(df.index))
self.assertEqual(result.index.freq, df.index.freq)
+ def test_tseries_select_index_column(self):
+ # GH7777
+ # selecting a UTC datetimeindex column did
+ # not preserve UTC tzinfo set before storing
+
+ # check that no tz still works
+ rng = date_range('1/1/2000', '1/30/2000')
+ frame = DataFrame(np.random.randn(len(rng), 4), index=rng)
+
+ with ensure_clean_store(self.path) as store:
+ store.append('frame', frame)
+ result = store.select_column('frame', 'index')
+ self.assertEqual(rng.tz, DatetimeIndex(result.values).tz)
+
+ # check utc
+ rng = date_range('1/1/2000', '1/30/2000', tz='UTC')
+ frame = DataFrame(np.random.randn(len(rng), 4), index=rng)
+
+ with ensure_clean_store(self.path) as store:
+ store.append('frame', frame)
+ result = store.select_column('frame', 'index')
+ self.assertEqual(rng.tz, DatetimeIndex(result.values).tz)
+
+ # double check non-utc
+ rng = date_range('1/1/2000', '1/30/2000', tz='US/Eastern')
+ frame = DataFrame(np.random.randn(len(rng), 4), index=rng)
+
+ with ensure_clean_store(self.path) as store:
+ store.append('frame', frame)
+ result = store.select_column('frame', 'index')
+ self.assertEqual(rng.tz, DatetimeIndex(result.values).tz)
+
def test_unicode_index(self):
unicode_values = [u('\u03c3'), u('\u03c3\u03c3')]
| BUG: Fixes #7777, HDFStore.read_column did not preserve timezone information
when fetching a DatetimeIndex column with tz=UTC
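The gist of the repaired `_set_tz` path, sketched with stdlib datetimes (an assumption-laden illustration only; the real code operates on a pandas `Index` and handles the `preserve_UTC` case for table columns):

```python
from datetime import datetime, timezone

def set_tz(naive_utc_values, tz):
    # HDF5 hands back naive timestamps stored in UTC; reattach the stored
    # timezone instead of silently dropping it -- even when the target
    # timezone is UTC itself, which was the reported bug.
    if tz is None:
        return naive_utc_values
    return [v.replace(tzinfo=timezone.utc).astimezone(tz)
            for v in naive_utc_values]

vals = [datetime(2000, 1, 1), datetime(2000, 1, 30)]
restored = set_tz(vals, timezone.utc)
print(restored[0].tzinfo)  # UTC -- no longer returned as naive
```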
| https://api.github.com/repos/pandas-dev/pandas/pulls/7790 | 2014-07-18T16:00:34Z | 2014-07-22T15:24:22Z | 2014-07-22T15:24:22Z | 2014-07-22T15:26:00Z |
BUG/COMPAT: pickled dtindex with freq raises AttributeError in normalize... | diff --git a/doc/source/v0.15.0.txt b/doc/source/v0.15.0.txt
index eb58f46f0f3fe..226ec28089b60 100644
--- a/doc/source/v0.15.0.txt
+++ b/doc/source/v0.15.0.txt
@@ -189,6 +189,7 @@ Bug Fixes
- Bug in ``DataFrame.as_matrix()`` with mixed ``datetime64[ns]`` and ``timedelta64[ns]`` dtypes (:issue:`7778`)
+- Bug in pickles contains ``DateOffset`` may raise ``AttributeError`` when ``normalize`` attribute is reffered internally (:issue:`7748`)
diff --git a/pandas/io/tests/test_pickle.py b/pandas/io/tests/test_pickle.py
index c52a405fe81ea..07d576ac1c8ae 100644
--- a/pandas/io/tests/test_pickle.py
+++ b/pandas/io/tests/test_pickle.py
@@ -48,9 +48,12 @@ def compare(self, vf):
# py3 compat when reading py2 pickle
try:
data = pandas.read_pickle(vf)
- except (ValueError) as detail:
- # trying to read a py3 pickle in py2
- return
+ except (ValueError) as e:
+ if 'unsupported pickle protocol:' in str(e):
+ # trying to read a py3 pickle in py2
+ return
+ else:
+ raise
for typ, dv in data.items():
for dt, result in dv.items():
@@ -60,6 +63,7 @@ def compare(self, vf):
continue
self.compare_element(typ, result, expected)
+ return data
def read_pickles(self, version):
if not is_little_endian():
@@ -68,7 +72,14 @@ def read_pickles(self, version):
pth = tm.get_data_path('legacy_pickle/{0}'.format(str(version)))
for f in os.listdir(pth):
vf = os.path.join(pth,f)
- self.compare(vf)
+ data = self.compare(vf)
+
+ if data is None:
+ continue
+
+ if 'series' in data:
+ if 'ts' in data['series']:
+ self._validate_timeseries(data['series']['ts'], self.data['series']['ts'])
def test_read_pickles_0_10_1(self):
self.read_pickles('0.10.1')
@@ -82,6 +93,9 @@ def test_read_pickles_0_12_0(self):
def test_read_pickles_0_13_0(self):
self.read_pickles('0.13.0')
+ def test_read_pickles_0_14_0(self):
+ self.read_pickles('0.14.0')
+
def test_round_trip_current(self):
for typ, dv in self.data.items():
@@ -94,6 +108,14 @@ def test_round_trip_current(self):
result = pd.read_pickle(path)
self.compare_element(typ, result, expected)
+ def _validate_timeseries(self, pickled, current):
+ # GH 7748
+ tm.assert_series_equal(pickled, current)
+ self.assertEqual(pickled.index.freq, current.index.freq)
+ self.assertEqual(pickled.index.freq.normalize, False)
+ self.assert_numpy_array_equal(pickled > 0, current > 0)
+
+
if __name__ == '__main__':
import nose
nose.runmodule(argv=[__file__, '-vvs', '-x', '--pdb', '--pdb-failure'],
diff --git a/pandas/tseries/offsets.py b/pandas/tseries/offsets.py
index 57181b43df9f6..8f77f88910a3c 100644
--- a/pandas/tseries/offsets.py
+++ b/pandas/tseries/offsets.py
@@ -130,6 +130,9 @@ def __add__(date):
_cacheable = False
_normalize_cache = True
+ # default for prior pickles
+ normalize = False
+
def __init__(self, n=1, normalize=False, **kwds):
self.n = int(n)
self.normalize = normalize
diff --git a/setup.py b/setup.py
index 3ec992d91bb45..844f5742c0e69 100755
--- a/setup.py
+++ b/setup.py
@@ -578,6 +578,7 @@ def pxd(name):
'tests/data/legacy_pickle/0.11.0/*.pickle',
'tests/data/legacy_pickle/0.12.0/*.pickle',
'tests/data/legacy_pickle/0.13.0/*.pickle',
+ 'tests/data/legacy_pickle/0.14.0/*.pickle',
'tests/data/*.csv',
'tests/data/*.dta',
'tests/data/*.txt',
| Closes #7748.
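The fix works because attribute lookup on an unpickled instance falls back to the class attribute whenever the instance `__dict__` (restored from an old pickle) lacks the key. A toy demonstration of that mechanism (not the real `DateOffset`):

```python
import pickle

class Offset:
    normalize = False  # class-level default for pickles created before
                       # the attribute existed

    def __init__(self, n=1, normalize=False):
        self.n = n
        self.normalize = normalize

o = Offset(2)
del o.__dict__['normalize']        # simulate a pre-fix pickle payload
restored = pickle.loads(pickle.dumps(o))
print(restored.normalize)          # False -- falls back to the class attribute
```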
| https://api.github.com/repos/pandas-dev/pandas/pulls/7789 | 2014-07-18T15:58:00Z | 2014-07-19T12:10:05Z | 2014-07-19T12:10:05Z | 2014-07-19T13:13:34Z |
Raise exception on non-unique column index in to_hdf for fixed format. | diff --git a/doc/source/io.rst b/doc/source/io.rst
index a363d144b2ba1..91ffb5091e927 100644
--- a/doc/source/io.rst
+++ b/doc/source/io.rst
@@ -2311,7 +2311,8 @@ Fixed Format
The examples above show storing using ``put``, which write the HDF5 to ``PyTables`` in a fixed array format, called
the ``fixed`` format. These types of stores are are **not** appendable once written (though you can simply
remove them and rewrite). Nor are they **queryable**; they must be
-retrieved in their entirety. These offer very fast writing and slightly faster reading than ``table`` stores.
+retrieved in their entirety. They also do not support dataframes with non-unique column names.
+The ``fixed`` format stores offer very fast writing and slightly faster reading than ``table`` stores.
This format is specified by default when using ``put`` or ``to_hdf`` or by ``format='fixed'`` or ``format='f'``
.. warning::
diff --git a/doc/source/v0.15.0.txt b/doc/source/v0.15.0.txt
index ca24eb3f910ed..9fbe718b3fc64 100644
--- a/doc/source/v0.15.0.txt
+++ b/doc/source/v0.15.0.txt
@@ -187,6 +187,7 @@ Bug Fixes
- Bug in Series 0-division with a float and integer operand dtypes (:issue:`7785`)
- Bug in ``Series.astype("unicode")`` not calling ``unicode`` on the values correctly (:issue:`7758`)
- Bug in ``DataFrame.as_matrix()`` with mixed ``datetime64[ns]`` and ``timedelta64[ns]`` dtypes (:issue:`7778`)
+- Raise a ``ValueError`` in ``df.to_hdf`` if ``df`` has non-unique columns as the resulting file will be broken (:issue:`7761`)
diff --git a/pandas/io/pytables.py b/pandas/io/pytables.py
index 0e6c41a25bbe5..cecbb407d0bd1 100644
--- a/pandas/io/pytables.py
+++ b/pandas/io/pytables.py
@@ -2680,6 +2680,9 @@ def write(self, obj, **kwargs):
self.attrs.ndim = data.ndim
for i, ax in enumerate(data.axes):
+ if i == 0:
+ if not ax.is_unique:
+ raise ValueError("Columns index has to be unique for fixed format")
self.write_index('axis%d' % i, ax)
# Supporting mixed-type DataFrame objects...nontrivial
diff --git a/pandas/io/tests/test_pytables.py b/pandas/io/tests/test_pytables.py
index 6a944284035c8..c602e8ff1a888 100644
--- a/pandas/io/tests/test_pytables.py
+++ b/pandas/io/tests/test_pytables.py
@@ -4370,6 +4370,17 @@ def test_categorical(self):
# FIXME: TypeError: cannot pass a where specification when reading from a Fixed format store. this store must be selected in its entirety
#result = store.select('df', where = ['index>2'])
#tm.assert_frame_equal(df[df.index>2],result)
+
+ def test_duplicate_column_name(self):
+ df = DataFrame(columns=["a", "a"], data=[[0, 0]])
+
+ with ensure_clean_path(self.path) as path:
+ self.assertRaises(ValueError, df.to_hdf, path, 'df', format='fixed')
+
+ df.to_hdf(path, 'df', format='table')
+ other = read_hdf(path, 'df')
+ tm.assert_frame_equal(df, other)
+
def _test_sort(obj):
if isinstance(obj, DataFrame):
| Fixes #7761.
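The guard itself is simple; a standalone sketch of the axis-0 check (hypothetical helper name, not pandas API):

```python
def ensure_unique_columns(columns):
    # Fixed-format HDF5 stores labels and values separately, so duplicate
    # column labels cannot be reconstructed -- refuse them up front.
    if len(set(columns)) != len(columns):
        raise ValueError("Columns index has to be unique for fixed format")

ensure_unique_columns(['a', 'b'])       # fine
try:
    ensure_unique_columns(['a', 'a'])
except ValueError as e:
    print(e)                            # Columns index has to be unique ...
```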
| https://api.github.com/repos/pandas-dev/pandas/pulls/7788 | 2014-07-18T15:11:18Z | 2014-07-21T11:39:29Z | 2014-07-21T11:39:29Z | 2014-08-18T18:40:23Z |
BUG: Bug in Series 0-division with a float and integer operand dtypes (GH7785) | diff --git a/doc/source/v0.15.0.txt b/doc/source/v0.15.0.txt
index eb58f46f0f3fe..ca24eb3f910ed 100644
--- a/doc/source/v0.15.0.txt
+++ b/doc/source/v0.15.0.txt
@@ -184,7 +184,7 @@ There are no experimental changes in 0.15.0
Bug Fixes
~~~~~~~~~
-
+- Bug in Series 0-division with a float and integer operand dtypes (:issue:`7785`)
- Bug in ``Series.astype("unicode")`` not calling ``unicode`` on the values correctly (:issue:`7758`)
- Bug in ``DataFrame.as_matrix()`` with mixed ``datetime64[ns]`` and ``timedelta64[ns]`` dtypes (:issue:`7778`)
diff --git a/pandas/core/common.py b/pandas/core/common.py
index 1a57c9c33ba7c..04c5140d6a59b 100644
--- a/pandas/core/common.py
+++ b/pandas/core/common.py
@@ -1328,7 +1328,7 @@ def _fill_zeros(result, x, y, name, fill):
# correctly
# GH 6178
if np.isinf(fill):
- np.putmask(result,signs<0 & mask, -fill)
+ np.putmask(result,(signs<0) & mask, -fill)
result = result.reshape(shape)
diff --git a/pandas/tests/test_series.py b/pandas/tests/test_series.py
index fda0abe07050d..e56da6c6522a5 100644
--- a/pandas/tests/test_series.py
+++ b/pandas/tests/test_series.py
@@ -2362,6 +2362,17 @@ def test_div(self):
expected = Series([np.nan,np.inf,-np.inf])
assert_series_equal(result, expected)
+ # float/integer issue
+ # GH 7785
+ p = DataFrame({'first': (1,0), 'second': (-0.01,-0.02)})
+ expected = Series([-0.01,-np.inf])
+
+ result = p['second'].div(p['first'])
+ assert_series_equal(result, expected)
+
+ result = p['second'] / p['first']
+ assert_series_equal(result, expected)
+
def test_operators(self):
def _check_op(series, other, op, pos_only=False):
@@ -4865,12 +4876,12 @@ def test_astype_unicode(self):
test_series = [
Series([digits * 10, tm.rands(63), tm.rands(64), tm.rands(1000)]),
Series([u"データーサイエンス、お前はもう死んでいる"]),
-
+
]
-
+
former_encoding = None
if not compat.PY3:
- # in python we can force the default encoding
+ # in python we can force the default encoding
# for this test
former_encoding = sys.getdefaultencoding()
reload(sys)
| closes #7785
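The one-character fix is about operator precedence: `&` binds more tightly than `<`, so `signs < 0 & mask` parses as `signs < (0 & mask)` and the mask is silently ignored. A minimal demonstration with plain integers (the same precedence rules apply to NumPy arrays):

```python
x = -1   # stands in for a negative entry of `signs`
m = 0    # stands in for a False entry of `mask`

buggy = x < 0 & m     # parsed as x < (0 & m)  ->  -1 < 0  ->  True
fixed = (x < 0) & m   # True & 0 -> 0: the mask is actually applied

print(buggy, fixed)   # True 0
```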
| https://api.github.com/repos/pandas-dev/pandas/pulls/7786 | 2014-07-18T14:01:14Z | 2014-07-18T14:32:28Z | 2014-07-18T14:32:28Z | 2014-07-18T14:32:29Z |
Performance improvements for nunique method. | diff --git a/pandas/core/base.py b/pandas/core/base.py
index beffbfb2923db..b3fed959a8522 100644
--- a/pandas/core/base.py
+++ b/pandas/core/base.py
@@ -288,21 +288,29 @@ def value_counts(self, normalize=False, sort=True, ascending=False,
tz=getattr(self, 'tz', None))
return result
- def unique(self):
+ def unique(self, dropna=False):
"""
Return array of unique values in the object. Significantly faster than
numpy.unique. Includes NA values.
+ Parameters
+ ----------
+ dropna : boolean, default False
+ Don't include NaN in the result.
+
Returns
-------
uniques : ndarray
"""
- from pandas.core.nanops import unique1d
- values = self.values
- if hasattr(values,'unique'):
- return values.unique()
-
- return unique1d(values)
+ if dropna:
+ return self.dropna().unique()
+ else:
+ if hasattr(self.values, 'unique'):
+ # Categorical Series not supported by unique1d
+ return self.values.unique()
+ else:
+ from pandas.core.nanops import unique1d
+ return unique1d(self.values)
def nunique(self, dropna=True):
"""
@@ -319,7 +327,7 @@ def nunique(self, dropna=True):
-------
nunique : int
"""
- return len(self.value_counts(dropna=dropna))
+ return len(self.unique(dropna=dropna))
def factorize(self, sort=False, na_sentinel=-1):
"""
diff --git a/pandas/core/categorical.py b/pandas/core/categorical.py
index f9ed6c2fecc3c..9db21cdc03afd 100644
--- a/pandas/core/categorical.py
+++ b/pandas/core/categorical.py
@@ -869,7 +869,7 @@ def mode(self):
fastpath=True)
return result
- def unique(self):
+ def unique(self, **kwargs):
"""
Return the unique values.
diff --git a/pandas/core/index.py b/pandas/core/index.py
index 263e6db8c486a..e1593eb2b75b2 100644
--- a/pandas/core/index.py
+++ b/pandas/core/index.py
@@ -215,6 +215,16 @@ def is_(self, other):
# use something other than None to be clearer
return self._id is getattr(other, '_id', Ellipsis)
+ def dropna(self):
+ """
+ Return Index without null values
+
+ Returns
+ -------
+ dropped : Index
+ """
+ return self[~isnull(self.values)]
+
def _reset_identity(self):
"""Initializes or resets ``_id`` attribute with new object"""
self._id = _Identity()
diff --git a/pandas/tests/test_index.py b/pandas/tests/test_index.py
index 8b1f6ce3e7f45..5e3a0201236fd 100644
--- a/pandas/tests/test_index.py
+++ b/pandas/tests/test_index.py
@@ -907,6 +907,13 @@ def test_nan_first_take_datetime(self):
exp = Index([idx[-1], idx[0], idx[1]])
tm.assert_index_equal(res, exp)
+ def test_dropna(self):
+ idx = Index([np.nan, 'a', np.nan, np.nan, 'b', 'c', np.nan],
+ name='idx')
+ expected = Index(['a', 'b', 'c'], name='idx')
+ result = idx.dropna()
+ tm.assert_index_equal(result, expected)
+
class TestFloat64Index(tm.TestCase):
_multiprocess_can_split_ = True
@@ -1051,6 +1058,12 @@ def test_astype_from_object(self):
tm.assert_equal(result.dtype, expected.dtype)
tm.assert_index_equal(result, expected)
+ def test_dropna(self):
+ idx = Float64Index([np.nan, 1.0, np.nan, np.nan, 2.0, 3.0, np.nan])
+ expected = Float64Index([1.0, 2.0, 3.0])
+ result = idx.dropna()
+ tm.assert_index_equal(result, expected)
+
class TestInt64Index(tm.TestCase):
_multiprocess_can_split_ = True
@@ -1476,6 +1489,12 @@ def test_slice_keep_name(self):
idx = Int64Index([1, 2], name='asdf')
self.assertEqual(idx.name, idx[1:].name)
+ def test_dropna_does_nothing(self):
+ idx = Int64Index([1, 2, 3], name='idx')
+ expected = Int64Index([1, 2, 3], name='idx')
+ result = idx.dropna()
+ tm.assert_index_equal(result, expected)
+
class TestMultiIndex(tm.TestCase):
_multiprocess_can_split_ = True
@@ -2948,6 +2967,12 @@ def test_level_setting_resets_attributes(self):
# if this fails, probably didn't reset the cache correctly.
assert not ind.is_monotonic
+ def test_dropna_does_nothing(self):
+ idx = MultiIndex.from_tuples([('bar', 'two')])
+ expected = idx
+ result = idx.dropna()
+ tm.assert_index_equal(result, expected)
+
def test_get_combined_index():
from pandas.core.index import _get_combined_index
diff --git a/pandas/tseries/index.py b/pandas/tseries/index.py
index 518bb4180ec89..cf586d609b513 100644
--- a/pandas/tseries/index.py
+++ b/pandas/tseries/index.py
@@ -852,6 +852,23 @@ def take(self, indices, axis=0):
return self[maybe_slice]
return super(DatetimeIndex, self).take(indices, axis)
+ def unique(self, dropna=False):
+ """
+ Index.unique with handling for DatetimeIndex metadata
+
+ Parameters
+ ----------
+ dropna : boolean, default False
+ Don't include NaN in the result.
+
+ Returns
+ -------
+ result : DatetimeIndex
+ """
+ result = Int64Index.unique(self, dropna=dropna)
+ return DatetimeIndex._simple_new(result, tz=self.tz,
+ name=self.name)
+
def union(self, other):
"""
Specialized union for DatetimeIndex objects. If combine
diff --git a/vb_suite/series_methods.py b/vb_suite/series_methods.py
index 1659340cfe050..88f47e9515a63 100644
--- a/vb_suite/series_methods.py
+++ b/vb_suite/series_methods.py
@@ -27,3 +27,11 @@
's2.nsmallest(3, take_last=False)',
setup,
start_date=datetime(2014, 1, 25))
+
+series_nunique1 = Benchmark('s1.nunique()',
+ setup,
+ start_date=datetime(2014, 1, 25))
+
+series_nunique2 = Benchmark('s2.nunique()',
+ setup,
+ start_date=datetime(2014, 1, 25))
diff --git a/vb_suite/suite.py b/vb_suite/suite.py
index be9aa03801641..ff6ef904a1d81 100644
--- a/vb_suite/suite.py
+++ b/vb_suite/suite.py
@@ -23,6 +23,7 @@
'plotting',
'reindex',
'replace',
+ 'series_methods',
'sparse',
'strings',
'reshape',
| https://api.github.com/repos/pandas-dev/pandas/pulls/7784 | 2014-07-18T13:09:30Z | 2015-01-25T23:34:09Z | null | 2015-01-25T23:34:09Z | |
Docs: Categorical docs fixups. | diff --git a/doc/source/categorical.rst b/doc/source/categorical.rst
index 87b59dc735969..c758dde16837b 100644
--- a/doc/source/categorical.rst
+++ b/doc/source/categorical.rst
@@ -104,7 +104,7 @@ By using some special functions:
.. note::
- I contrast to R's `factor` function, there is currently no way to assign/change labels at
+ In contrast to R's `factor` function, there is currently no way to assign/change labels at
creation time. Use `levels` to change the levels after creation time.
To get back to the original Series or `numpy` array, use ``Series.astype(original_dtype)`` or
@@ -397,7 +397,7 @@ that only values already in the levels can be assigned.
Getting
~~~~~~~
-If the slicing operation returns either a `DataFrame` or a a column of type `Series`,
+If the slicing operation returns either a `DataFrame` or a column of type `Series`,
the ``category`` dtype is preserved.
.. ipython:: python
@@ -509,7 +509,7 @@ The same applies to ``df.append(df)``.
Getting Data In/Out
-------------------
-Writing data (`Series`, `Frames`) to a HDF store and reading it in entirety works. Querying the hdf
+Writing data (`Series`, `Frames`) to a HDF store and reading it in entirety works. Querying the HDF
store does not yet work.
.. ipython:: python
@@ -539,8 +539,8 @@ store does not yet work.
pass
-Writing to a csv file will convert the data, effectively removing any information about the
-`Categorical` (levels and ordering). So if you read back the csv file you have to convert the
+Writing to a CSV file will convert the data, effectively removing any information about the
+`Categorical` (levels and ordering). So if you read back the CSV file you have to convert the
relevant columns back to `category` and assign the right levels and level ordering.
.. ipython:: python
@@ -756,4 +756,4 @@ Future compatibility
~~~~~~~~~~~~~~~~~~~~
As `Categorical` is not a native `numpy` dtype, the implementation details of
-`Series.cat` can change if such a `numpy` dtype is implemented.
\ No newline at end of file
+`Series.cat` can change if such a `numpy` dtype is implemented.
| https://api.github.com/repos/pandas-dev/pandas/pulls/7783 | 2014-07-18T11:51:53Z | 2014-07-18T12:02:30Z | 2014-07-18T12:02:30Z | 2014-07-18T12:02:33Z | |
BUG: Prevent config paths to contain python keywords | diff --git a/pandas/core/config.py b/pandas/core/config.py
index 3e8d76500d128..60dc1d7d0341e 100644
--- a/pandas/core/config.py
+++ b/pandas/core/config.py
@@ -445,7 +445,7 @@ def register_option(key, defval, doc='', validator=None, cb=None):
for k in path:
if not bool(re.match('^' + tokenize.Name + '$', k)):
raise ValueError("%s is not a valid identifier" % k)
- if keyword.iskeyword(key):
+ if keyword.iskeyword(k):
raise ValueError("%s is a python keyword" % k)
cursor = _global_config
diff --git a/pandas/tests/test_config.py b/pandas/tests/test_config.py
index e60c9d5bd0fdf..dc5e9a67bdb65 100644
--- a/pandas/tests/test_config.py
+++ b/pandas/tests/test_config.py
@@ -59,6 +59,7 @@ def test_register_option(self):
# no python keywords
self.assertRaises(ValueError, self.cf.register_option, 'for', 0)
+ self.assertRaises(ValueError, self.cf.register_option, 'a.for.b', 0)
# must be valid identifier (ensure attribute access works)
self.assertRaises(ValueError, self.cf.register_option,
'Oh my Goddess!', 0)
| https://api.github.com/repos/pandas-dev/pandas/pulls/7781 | 2014-07-18T09:42:40Z | 2014-09-19T12:22:30Z | 2014-09-19T12:22:30Z | 2014-09-19T16:11:44Z | |
API: support `c` and `colormap` args for DataFrame.plot with kind='scatter' | diff --git a/doc/source/v0.15.0.txt b/doc/source/v0.15.0.txt
index bfd484b363dd2..2871d2f628659 100644
--- a/doc/source/v0.15.0.txt
+++ b/doc/source/v0.15.0.txt
@@ -435,6 +435,8 @@ Enhancements
- Added ``layout`` keyword to ``DataFrame.plot`` (:issue:`6667`)
- Allow to pass multiple axes to ``DataFrame.plot``, ``hist`` and ``boxplot`` (:issue:`5353`, :issue:`6970`, :issue:`7069`)
+- Added support for ``c``, ``colormap`` and ``colorbar`` arguments for
+ ``DataFrame.plot`` with ``kind='scatter'`` (:issue:`7780`)
- ``PeriodIndex`` supports ``resolution`` as the same as ``DatetimeIndex`` (:issue:`7708`)
diff --git a/doc/source/visualization.rst b/doc/source/visualization.rst
index 1cce55cd53e11..d845ae38f05c2 100644
--- a/doc/source/visualization.rst
+++ b/doc/source/visualization.rst
@@ -521,6 +521,14 @@ It is recommended to specify ``color`` and ``label`` keywords to distinguish eac
df.plot(kind='scatter', x='c', y='d',
color='DarkGreen', label='Group 2', ax=ax);
+The keyword ``c`` may be given as the name of a column to provide colors for
+each point:
+
+.. ipython:: python
+
+ @savefig scatter_plot_colored.png
+ df.plot(kind='scatter', x='a', y='b', c='c', s=50);
+
You can pass other keywords supported by matplotlib ``scatter``.
Below example shows a bubble chart using a dataframe column values as bubble size.
diff --git a/pandas/tests/test_graphics.py b/pandas/tests/test_graphics.py
index 131edf499ff18..3211998b42300 100644
--- a/pandas/tests/test_graphics.py
+++ b/pandas/tests/test_graphics.py
@@ -1497,6 +1497,34 @@ def test_plot_scatter(self):
axes = df.plot(x='x', y='y', kind='scatter', subplots=True)
self._check_axes_shape(axes, axes_num=1, layout=(1, 1))
+ @slow
+ def test_plot_scatter_with_c(self):
+ df = DataFrame(randn(6, 4),
+ index=list(string.ascii_letters[:6]),
+ columns=['x', 'y', 'z', 'four'])
+
+ axes = [df.plot(kind='scatter', x='x', y='y', c='z'),
+ df.plot(kind='scatter', x=0, y=1, c=2)]
+ for ax in axes:
+ # default to RdBu
+ self.assertEqual(ax.collections[0].cmap.name, 'RdBu')
+ # n.b. there appears to be no public method to get the colorbar
+ # label
+ self.assertEqual(ax.collections[0].colorbar._label, 'z')
+
+ cm = 'cubehelix'
+ ax = df.plot(kind='scatter', x='x', y='y', c='z', colormap=cm)
+ self.assertEqual(ax.collections[0].cmap.name, cm)
+
+ # verify turning off colorbar works
+ ax = df.plot(kind='scatter', x='x', y='y', c='z', colorbar=False)
+ self.assertIs(ax.collections[0].colorbar, None)
+
+ # verify that we can still plot a solid color
+ ax = df.plot(x=0, y=1, c='red', kind='scatter')
+ self.assertIs(ax.collections[0].colorbar, None)
+ self._check_colors(ax.collections, facecolors=['r'])
+
@slow
def test_plot_bar(self):
df = DataFrame(randn(6, 4),
diff --git a/pandas/tools/plotting.py b/pandas/tools/plotting.py
index 56316ac726c8a..7a68da3ad14f2 100644
--- a/pandas/tools/plotting.py
+++ b/pandas/tools/plotting.py
@@ -1368,32 +1368,55 @@ def _get_errorbars(self, label=None, index=None, xerr=True, yerr=True):
class ScatterPlot(MPLPlot):
_layout_type = 'single'
- def __init__(self, data, x, y, **kwargs):
+ def __init__(self, data, x, y, c=None, **kwargs):
MPLPlot.__init__(self, data, **kwargs)
- self.kwds.setdefault('c', self.plt.rcParams['patch.facecolor'])
if x is None or y is None:
raise ValueError( 'scatter requires and x and y column')
if com.is_integer(x) and not self.data.columns.holds_integer():
x = self.data.columns[x]
if com.is_integer(y) and not self.data.columns.holds_integer():
y = self.data.columns[y]
+ if com.is_integer(c) and not self.data.columns.holds_integer():
+ c = self.data.columns[c]
self.x = x
self.y = y
+ self.c = c
@property
def nseries(self):
return 1
def _make_plot(self):
- x, y, data = self.x, self.y, self.data
+ import matplotlib.pyplot as plt
+
+ x, y, c, data = self.x, self.y, self.c, self.data
ax = self.axes[0]
+ # plot a colorbar only if a colormap is provided or necessary
+ cb = self.kwds.pop('colorbar', self.colormap or c in self.data.columns)
+
+ # pandas uses colormap, matplotlib uses cmap.
+ cmap = self.colormap or 'RdBu'
+ cmap = plt.cm.get_cmap(cmap)
+
+ if c is None:
+ c_values = self.plt.rcParams['patch.facecolor']
+ elif c in self.data.columns:
+ c_values = self.data[c].values
+ else:
+ c_values = c
+
if self.legend and hasattr(self, 'label'):
label = self.label
else:
label = None
- scatter = ax.scatter(data[x].values, data[y].values, label=label,
- **self.kwds)
+ scatter = ax.scatter(data[x].values, data[y].values, c=c_values,
+ label=label, cmap=cmap, **self.kwds)
+ if cb:
+ img = ax.collections[0]
+ cb_label = c if c in self.data.columns else ''
+ self.fig.colorbar(img, ax=ax, label=cb_label)
+
self._add_legend_handle(scatter, label)
errors_x = self._get_errorbars(label=x, index=0, yerr=False)
@@ -2259,6 +2282,8 @@ def plot_frame(frame=None, x=None, y=None, subplots=False, sharex=True,
colormap : str or matplotlib colormap object, default None
Colormap to select colors from. If string, load colormap with that name
from matplotlib.
+ colorbar : boolean, optional
+ If True, plot colorbar (only relevant for 'scatter' and 'hexbin' plots)
position : float
Specify relative alignments for bar plot layout.
From 0 (left/bottom-end) to 1 (right/top-end). Default is 0.5 (center)
@@ -2285,6 +2310,9 @@ def plot_frame(frame=None, x=None, y=None, subplots=False, sharex=True,
`C` specifies the value at each `(x, y)` point and `reduce_C_function`
is a function of one argument that reduces all the values in a bin to
a single number (e.g. `mean`, `max`, `sum`, `std`).
+
+ If `kind`='scatter' and the argument `c` is the name of a dataframe column,
+ the values of that column are used to color each point.
"""
kind = _get_standard_kind(kind.lower().strip())
| `matplotlib.pyplot.scatter` supports the argument `c` for setting the color of
each point. This patch lets you easily set it by giving a column name (currently
you need to supply an ndarray to make it work, since pandas isn't aware of it):
```
df.plot('x', 'y', c='z', kind='scatter')
```
vs
```
df.plot('x', 'y', c=df['z'].values, kind='scatter')
```
While I was at it, I noticed that `kind='scatter'` did not support the `colormap`
argument that some of the other methods support (notably `kind='hexbin'`). So
I added it, too.
This change should be almost entirely backwards compatible, unless folks are naming
columns in their data frame after valid matplotlib colors and using the same color name
for the `c` argument.
A colorbar will also be added automatically if relevant.
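A minimal sketch of the new usage (the column names `a`, `b`, `c` and the `cubehelix` colormap are arbitrary choices for illustration, not part of the patch):

```python
import matplotlib
matplotlib.use("Agg")  # headless backend so the example runs without a display
import numpy as np
import pandas as pd

df = pd.DataFrame(np.random.randn(10, 3), columns=["a", "b", "c"])

# 'c' names a column, so each point is colored by df['c'] and a colorbar
# is drawn; 'colormap' picks which matplotlib colormap to use
ax = df.plot(kind="scatter", x="a", y="b", c="c", colormap="cubehelix")
```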
| https://api.github.com/repos/pandas-dev/pandas/pulls/7780 | 2014-07-18T05:52:42Z | 2014-09-11T19:48:42Z | 2014-09-11T19:48:42Z | 2014-09-17T22:27:52Z |
BUG: unwanted conversions of timedelta dtypes when in a mixed datetimelike frame (GH7778) | diff --git a/doc/source/v0.15.0.txt b/doc/source/v0.15.0.txt
index 128ddbd4a9ec3..eb58f46f0f3fe 100644
--- a/doc/source/v0.15.0.txt
+++ b/doc/source/v0.15.0.txt
@@ -186,7 +186,7 @@ Bug Fixes
~~~~~~~~~
- Bug in ``Series.astype("unicode")`` not calling ``unicode`` on the values correctly (:issue:`7758`)
-
+- Bug in ``DataFrame.as_matrix()`` with mixed ``datetime64[ns]`` and ``timedelta64[ns]`` dtypes (:issue:`7778`)
diff --git a/pandas/core/frame.py b/pandas/core/frame.py
index a461dd0e247f2..17bef8dd28cf4 100644
--- a/pandas/core/frame.py
+++ b/pandas/core/frame.py
@@ -3539,6 +3539,7 @@ def _apply_standard(self, func, axis, ignore_failures=False, reduce=True):
except Exception:
pass
+ dtype = object if self._is_mixed_type else None
if axis == 0:
series_gen = (self.icol(i) for i in range(len(self.columns)))
res_index = self.columns
@@ -3547,7 +3548,7 @@ def _apply_standard(self, func, axis, ignore_failures=False, reduce=True):
res_index = self.index
res_columns = self.columns
values = self.values
- series_gen = (Series.from_array(arr, index=res_columns, name=name)
+ series_gen = (Series.from_array(arr, index=res_columns, name=name, dtype=dtype)
for i, (arr, name) in
enumerate(zip(values, res_index)))
else: # pragma : no cover
diff --git a/pandas/core/internals.py b/pandas/core/internals.py
index 2bd318ec2430f..f649baeb16278 100644
--- a/pandas/core/internals.py
+++ b/pandas/core/internals.py
@@ -25,7 +25,7 @@
from pandas.util.decorators import cache_readonly
from pandas.tslib import Timestamp
-from pandas import compat
+from pandas import compat, _np_version_under1p7
from pandas.compat import range, map, zip, u
from pandas.tseries.timedeltas import _coerce_scalar_to_timedelta_type
@@ -1290,6 +1290,16 @@ def to_native_types(self, slicer=None, na_rep=None, **kwargs):
return rvalues.tolist()
+ def get_values(self, dtype=None):
+ # return object dtypes as datetime.timedeltas
+ if dtype == object:
+ if _np_version_under1p7:
+ return self.values.astype('object')
+ return lib.map_infer(self.values.ravel(),
+ lambda x: timedelta(microseconds=x.item()/1000)
+ ).reshape(self.values.shape)
+ return self.values
+
class BoolBlock(NumericBlock):
__slots__ = ()
is_bool = True
@@ -2595,7 +2605,7 @@ def as_matrix(self, items=None):
else:
mgr = self
- if self._is_single_block:
+ if self._is_single_block or not self.is_mixed_type:
return mgr.blocks[0].get_values()
else:
return mgr._interleave()
@@ -3647,9 +3657,11 @@ def _lcd_dtype(l):
has_non_numeric = have_dt64 or have_td64 or have_cat
if (have_object or
- (have_bool and have_numeric) or
+ (have_bool and (have_numeric or have_dt64 or have_td64)) or
(have_numeric and has_non_numeric) or
- have_cat):
+ have_cat or
+ have_dt64 or
+ have_td64):
return np.dtype(object)
elif have_bool:
return np.dtype(bool)
@@ -3670,10 +3682,6 @@ def _lcd_dtype(l):
return np.dtype('int%s' % (lcd.itemsize * 8 * 2))
return lcd
- elif have_dt64 and not have_float and not have_complex:
- return np.dtype('M8[ns]')
- elif have_td64 and not have_float and not have_complex:
- return np.dtype('m8[ns]')
elif have_complex:
return np.dtype('c16')
else:
diff --git a/pandas/core/series.py b/pandas/core/series.py
index eff558d875c4a..9abc8f22009b3 100644
--- a/pandas/core/series.py
+++ b/pandas/core/series.py
@@ -237,14 +237,14 @@ def __init__(self, data=None, index=None, dtype=None, name=None,
self._set_axis(0, index, fastpath=True)
@classmethod
- def from_array(cls, arr, index=None, name=None, copy=False,
+ def from_array(cls, arr, index=None, name=None, dtype=None, copy=False,
fastpath=False):
# return a sparse series here
if isinstance(arr, ABCSparseArray):
from pandas.sparse.series import SparseSeries
cls = SparseSeries
- return cls(arr, index=index, name=name, copy=copy, fastpath=fastpath)
+ return cls(arr, index=index, name=name, dtype=dtype, copy=copy, fastpath=fastpath)
@property
def _constructor(self):
diff --git a/pandas/tests/test_frame.py b/pandas/tests/test_frame.py
index 2e1bbc88e36ff..df00edc46eed2 100644
--- a/pandas/tests/test_frame.py
+++ b/pandas/tests/test_frame.py
@@ -9635,6 +9635,15 @@ def test_apply(self):
[[1, 2, 3], [4, 5, 6], [7, 8, 9]], index=['a', 'a', 'c'])
self.assertRaises(ValueError, df.apply, lambda x: x, 2)
+ def test_apply_mixed_datetimelike(self):
+ tm._skip_if_not_numpy17_friendly()
+
+ # mixed datetimelike
+ # GH 7778
+ df = DataFrame({ 'A' : date_range('20130101',periods=3), 'B' : pd.to_timedelta(np.arange(3),unit='s') })
+ result = df.apply(lambda x: x, axis=1)
+ assert_frame_equal(result, df)
+
def test_apply_empty(self):
# empty
applied = self.empty.apply(np.sqrt)
diff --git a/pandas/tests/test_internals.py b/pandas/tests/test_internals.py
index 8a9010084fd99..36dbced6eda8c 100644
--- a/pandas/tests/test_internals.py
+++ b/pandas/tests/test_internals.py
@@ -3,7 +3,7 @@
import nose
import numpy as np
-from pandas import Index, MultiIndex, DataFrame, Series
+from pandas import Index, MultiIndex, DataFrame, Series, Categorical
from pandas.compat import OrderedDict, lrange
from pandas.sparse.array import SparseArray
from pandas.core.internals import *
@@ -41,9 +41,11 @@ def create_block(typestr, placement, item_shape=None, num_offset=0):
* complex, c16, c8
* bool
* object, string, O
- * datetime, dt
+ * datetime, dt, M8[ns]
+ * timedelta, td, m8[ns]
* sparse (SparseArray with fill_value=0.0)
* sparse_na (SparseArray with fill_value=np.nan)
+ * category, category2
"""
placement = BlockPlacement(placement)
@@ -67,8 +69,14 @@ def create_block(typestr, placement, item_shape=None, num_offset=0):
shape)
elif typestr in ('bool'):
values = np.ones(shape, dtype=np.bool_)
- elif typestr in ('datetime', 'dt'):
+ elif typestr in ('datetime', 'dt', 'M8[ns]'):
values = (mat * 1e9).astype('M8[ns]')
+ elif typestr in ('timedelta', 'td', 'm8[ns]'):
+ values = (mat * 1).astype('m8[ns]')
+ elif typestr in ('category'):
+ values = Categorical([1,1,2,2,3,3,3,3,4,4])
+ elif typestr in ('category2'):
+ values = Categorical(['a','a','a','a','b','b','c','c','c','d'])
elif typestr in ('sparse', 'sparse_na'):
# FIXME: doesn't support num_rows != 10
assert shape[-1] == 10
@@ -556,7 +564,54 @@ def _compare(old_mgr, new_mgr):
self.assertEqual(new_mgr.get('h').dtype, np.float16)
def test_interleave(self):
- pass
+
+
+ # self
+ for dtype in ['f8','i8','object','bool','complex','M8[ns]','m8[ns]']:
+ mgr = create_mgr('a: {0}'.format(dtype))
+ self.assertEqual(mgr.as_matrix().dtype,dtype)
+ mgr = create_mgr('a: {0}; b: {0}'.format(dtype))
+ self.assertEqual(mgr.as_matrix().dtype,dtype)
+
+ # will be converted according the actual dtype of the underlying
+ mgr = create_mgr('a: category')
+ self.assertEqual(mgr.as_matrix().dtype,'i8')
+ mgr = create_mgr('a: category; b: category')
+ self.assertEqual(mgr.as_matrix().dtype,'i8'),
+ mgr = create_mgr('a: category; b: category2')
+ self.assertEqual(mgr.as_matrix().dtype,'object')
+ mgr = create_mgr('a: category2')
+ self.assertEqual(mgr.as_matrix().dtype,'object')
+ mgr = create_mgr('a: category2; b: category2')
+ self.assertEqual(mgr.as_matrix().dtype,'object')
+
+ # combinations
+ mgr = create_mgr('a: f8')
+ self.assertEqual(mgr.as_matrix().dtype,'f8')
+ mgr = create_mgr('a: f8; b: i8')
+ self.assertEqual(mgr.as_matrix().dtype,'f8')
+ mgr = create_mgr('a: f4; b: i8')
+ self.assertEqual(mgr.as_matrix().dtype,'f4')
+ mgr = create_mgr('a: f4; b: i8; d: object')
+ self.assertEqual(mgr.as_matrix().dtype,'object')
+ mgr = create_mgr('a: bool; b: i8')
+ self.assertEqual(mgr.as_matrix().dtype,'object')
+ mgr = create_mgr('a: complex')
+ self.assertEqual(mgr.as_matrix().dtype,'complex')
+ mgr = create_mgr('a: f8; b: category')
+ self.assertEqual(mgr.as_matrix().dtype,'object')
+ mgr = create_mgr('a: M8[ns]; b: category')
+ self.assertEqual(mgr.as_matrix().dtype,'object')
+ mgr = create_mgr('a: M8[ns]; b: bool')
+ self.assertEqual(mgr.as_matrix().dtype,'object')
+ mgr = create_mgr('a: M8[ns]; b: i8')
+ self.assertEqual(mgr.as_matrix().dtype,'object')
+ mgr = create_mgr('a: m8[ns]; b: bool')
+ self.assertEqual(mgr.as_matrix().dtype,'object')
+ mgr = create_mgr('a: m8[ns]; b: i8')
+ self.assertEqual(mgr.as_matrix().dtype,'object')
+ mgr = create_mgr('a: M8[ns]; b: m8[ns]')
+ self.assertEqual(mgr.as_matrix().dtype,'object')
def test_interleave_non_unique_cols(self):
df = DataFrame([
| closes #7778
TST: tests for internals/as_matrix() for all dtypes (including categoricals)
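The fixed behavior can be sketched as follows (a minimal reconstruction of the reported case, not the PR's own test suite):

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({
    "A": pd.date_range("2013-01-01", periods=3),
    "B": pd.to_timedelta(np.arange(3), unit="s"),
})

# Interleaving mixed datetime64[ns]/timedelta64[ns] columns must fall
# back to object dtype instead of coercing one column into the other
values = df.values

# Row-wise apply should round-trip the frame unchanged
roundtrip = df.apply(lambda x: x, axis=1)
```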
| https://api.github.com/repos/pandas-dev/pandas/pulls/7779 | 2014-07-17T23:34:17Z | 2014-07-18T01:00:30Z | 2014-07-18T01:00:30Z | 2014-07-18T01:00:30Z |
BUG: Fix for passing multiple ints as levels in DataFrame.stack() (#7660) | diff --git a/doc/source/reshaping.rst b/doc/source/reshaping.rst
index db68c0eb224e2..ab9018da4c41a 100644
--- a/doc/source/reshaping.rst
+++ b/doc/source/reshaping.rst
@@ -160,10 +160,34 @@ the level numbers:
stacked.unstack('second')
+.. _reshaping.stack_multiple:
+
You may also stack or unstack more than one level at a time by passing a list
of levels, in which case the end result is as if each level in the list were
processed individually.
+.. ipython:: python
+
+ columns = MultiIndex.from_tuples([
+ ('A', 'cat', 'long'), ('B', 'cat', 'long'),
+ ('A', 'dog', 'short'), ('B', 'dog', 'short')
+ ],
+ names=['exp', 'animal', 'hair_length']
+ )
+ df = DataFrame(randn(4, 4), columns=columns)
+ df
+
+ df.stack(level=['animal', 'hair_length'])
+
+The list of levels can contain either level names or level numbers (but
+not a mixture of the two).
+
+.. ipython:: python
+
+ # df.stack(level=['animal', 'hair_length'])
+ # from above is equivalent to:
+ df.stack(level=[1, 2])
+
These functions are intelligent about handling missing data and do not expect
each subgroup within the hierarchical index to have the same set of labels.
They also can handle the index being unsorted (but you can make it sorted by
diff --git a/doc/source/v0.15.0.txt b/doc/source/v0.15.0.txt
index 5e3f97944c243..aa57004a70e29 100644
--- a/doc/source/v0.15.0.txt
+++ b/doc/source/v0.15.0.txt
@@ -30,6 +30,11 @@ users upgrade to this version.
API changes
~~~~~~~~~~~
+- Passing multiple levels to `DataFrame.stack()` will now work when multiple level
+ numbers are passed (:issue:`7660`), and will raise a ``ValueError`` when the
+ levels aren't all level names or all level numbers. See
+ :ref:`Reshaping by stacking and unstacking <reshaping.stack_multiple>`.
+
.. _whatsnew_0150.cat:
Categoricals in Series/DataFrame
diff --git a/pandas/core/frame.py b/pandas/core/frame.py
index 4f558dda756dd..04fe9e8d35359 100644
--- a/pandas/core/frame.py
+++ b/pandas/core/frame.py
@@ -3311,13 +3311,10 @@ def stack(self, level=-1, dropna=True):
-------
stacked : DataFrame or Series
"""
- from pandas.core.reshape import stack
+ from pandas.core.reshape import stack, stack_multiple
if isinstance(level, (tuple, list)):
- result = self
- for lev in level:
- result = stack(result, lev, dropna=dropna)
- return result
+ return stack_multiple(self, level, dropna=dropna)
else:
return stack(self, level, dropna=dropna)
diff --git a/pandas/core/index.py b/pandas/core/index.py
index 6927d5a732440..81602d5240a08 100644
--- a/pandas/core/index.py
+++ b/pandas/core/index.py
@@ -2490,6 +2490,12 @@ def _get_level_number(self, level):
raise KeyError('Level %s not found' % str(level))
elif level < 0:
level += self.nlevels
+ if level < 0:
+ orig_level = level - self.nlevels
+ raise IndexError(
+ 'Too many levels: Index has only %d levels, '
+ '%d is not a valid level number' % (self.nlevels, orig_level)
+ )
# Note: levels are zero-based
elif level >= self.nlevels:
raise IndexError('Too many levels: Index has only %d levels, '
diff --git a/pandas/core/reshape.py b/pandas/core/reshape.py
index 43784e15ab163..b014ede6e65a8 100644
--- a/pandas/core/reshape.py
+++ b/pandas/core/reshape.py
@@ -513,9 +513,7 @@ def stack(frame, level=-1, dropna=True):
"names are not unique.".format(level))
raise ValueError(msg)
- if isinstance(level, int) and level < 0:
- level += frame.columns.nlevels
-
+ # Will also convert negative level numbers and check if out of bounds.
level = frame.columns._get_level_number(level)
if isinstance(frame.columns, MultiIndex):
@@ -547,6 +545,45 @@ def stack(frame, level=-1, dropna=True):
return Series(new_values, index=new_index)
+def stack_multiple(frame, level, dropna=True):
+ # If all passed levels match up to column names, no
+ # ambiguity about what to do
+ if all(lev in frame.columns.names for lev in level):
+ result = frame
+ for lev in level:
+ result = stack(result, lev, dropna=dropna)
+
+ # Otherwise, level numbers may change as each successive level is stacked
+ elif all(isinstance(lev, int) for lev in level):
+ # As each stack is done, the level numbers decrease, so we need
+ # to account for that when level is a sequence of ints
+ result = frame
+ # _get_level_number() checks level numbers are in range and converts
+ # negative numbers to positive
+ level = [frame.columns._get_level_number(lev) for lev in level]
+
+ # Can't iterate directly through level as we might need to change
+ # values as we go
+ for index in range(len(level)):
+ lev = level[index]
+ result = stack(result, lev, dropna=dropna)
+ # Decrement all level numbers greater than current, as these
+ # have now shifted down by one
+ updated_level = []
+ for other in level:
+ if other > lev:
+ updated_level.append(other - 1)
+ else:
+ updated_level.append(other)
+ level = updated_level
+
+ else:
+ raise ValueError("level should contain all level names or all level numbers, "
+ "not a mixture of the two.")
+
+ return result
+
+
def _stack_multi_columns(frame, level=-1, dropna=True):
this = frame.copy()
diff --git a/pandas/tests/test_frame.py b/pandas/tests/test_frame.py
index df00edc46eed2..c4783bc49f0ce 100644
--- a/pandas/tests/test_frame.py
+++ b/pandas/tests/test_frame.py
@@ -11725,6 +11725,29 @@ def test_stack_unstack(self):
assert_frame_equal(unstacked_cols.T, self.frame)
assert_frame_equal(unstacked_cols_df['bar'].T, self.frame)
+ def test_stack_ints(self):
+ df = DataFrame(
+ np.random.randn(30, 27),
+ columns=MultiIndex.from_tuples(
+ list(itertools.product(range(3), repeat=3))
+ )
+ )
+ assert_frame_equal(
+ df.stack(level=[1, 2]),
+ df.stack(level=1).stack(level=1)
+ )
+ assert_frame_equal(
+ df.stack(level=[-2, -1]),
+ df.stack(level=1).stack(level=1)
+ )
+
+ df_named = df.copy()
+ df_named.columns.set_names(range(3), inplace=True)
+ assert_frame_equal(
+ df_named.stack(level=[1, 2]),
+ df_named.stack(level=1).stack(level=1)
+ )
+
def test_unstack_bool(self):
df = DataFrame([False, False],
index=MultiIndex.from_arrays([['a', 'b'], ['c', 'l']]),
diff --git a/pandas/tests/test_multilevel.py b/pandas/tests/test_multilevel.py
index d8e17c4d1d290..5c0e500b243c9 100644
--- a/pandas/tests/test_multilevel.py
+++ b/pandas/tests/test_multilevel.py
@@ -834,6 +834,12 @@ def test_count_level_corner(self):
columns=df.columns).fillna(0).astype(np.int64)
assert_frame_equal(result, expected)
+ def test_get_level_number_out_of_bounds(self):
+ with assertRaisesRegexp(IndexError, "Too many levels"):
+ self.frame.index._get_level_number(2)
+ with assertRaisesRegexp(IndexError, "not a valid level number"):
+ self.frame.index._get_level_number(-3)
+
def test_unstack(self):
# just check that it works for now
unstacked = self.ymd.unstack()
@@ -1005,6 +1011,22 @@ def test_stack_unstack_multiple(self):
expected = self.ymd.unstack(2).unstack(1).dropna(axis=1, how='all')
assert_frame_equal(unstacked, expected.ix[:, unstacked.columns])
+ def test_stack_names_and_numbers(self):
+ unstacked = self.ymd.unstack(['year', 'month'])
+
+ # Can't use mixture of names and numbers to stack
+ with assertRaisesRegexp(ValueError, "level should contain"):
+ unstacked.stack([0, 'month'])
+
+ def test_stack_multiple_out_of_bounds(self):
+ # nlevels == 3
+ unstacked = self.ymd.unstack(['year', 'month'])
+
+ with assertRaisesRegexp(IndexError, "Too many levels"):
+ unstacked.stack([2, 3])
+ with assertRaisesRegexp(IndexError, "not a valid level number"):
+ unstacked.stack([-4, -3])
+
def test_unstack_period_series(self):
# GH 4342
idx1 = pd.PeriodIndex(['2013-01', '2013-01', '2013-02', '2013-02',
| closes #7660
The specific case that came up in #7660 (and originally in #7653) seems easy enough to fix, so I've covered that in the PR. I feel like this raises a few other potential problems though, e.g.:
- If the list of level numbers is not in order, is there any sensible way to deal with them? I've sorted the level list in my fix, as that seems like the most straightforward way of making sure that when you do each stack, it's only the level numbers higher than the current that are affected. This might produce undesired results though, so maybe we should just raise a `ValueError` if the level numbers aren't sorted?
- I'm not sure how to extend this to deal with negative level numbers.
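For reference, the invariant the fix targets can be sketched like this (the column layout is an arbitrary illustration, not the PR's test case):

```python
import numpy as np
import pandas as pd

columns = pd.MultiIndex.from_tuples(
    [("A", "x"), ("A", "y"), ("B", "x"), ("B", "y")],
    names=["upper", "lower"],
)
df = pd.DataFrame(np.arange(8).reshape(2, 4), columns=columns)

# Stacking two levels by number should match stacking them one at a
# time, with level numbers re-interpreted after each successive stack
stacked_together = df.stack(level=[0, 1])
stacked_stepwise = df.stack(level=0).stack(level=0)
```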
| https://api.github.com/repos/pandas-dev/pandas/pulls/7770 | 2014-07-16T23:51:25Z | 2014-07-21T11:42:51Z | 2014-07-21T11:42:51Z | 2014-07-21T11:43:11Z |
FIX: Documentation for 0.14.1 change log | diff --git a/doc/source/v0.14.1.txt b/doc/source/v0.14.1.txt
index b3be2936c20b5..9e19161847327 100644
--- a/doc/source/v0.14.1.txt
+++ b/doc/source/v0.14.1.txt
@@ -239,8 +239,8 @@ Bug Fixes
- Bug where ``nanops._has_infs`` doesn't work with many dtypes
(:issue:`7357`)
- Bug in ``StataReader.data`` where reading a 0-observation dta failed (:issue:`7369`)
-- Bug in when reading Stata 13 (117) files containing fixed width strings (:issue:`7360`)
-- Bug in when writing Stata files where the encoding was ignored (:issue:`7286`)
+- Bug in ``StataReader`` when reading Stata 13 (117) files containing fixed width strings (:issue:`7360`)
+- Bug in ``StataWriter`` where encoding was ignored (:issue:`7286`)
- Bug in ``DatetimeIndex`` comparison doesn't handle ``NaT`` properly (:issue:`7529`)
- Bug in passing input with ``tzinfo`` to some offsets ``apply``, ``rollforward`` or ``rollback`` resets ``tzinfo`` or raises ``ValueError`` (:issue:`7465`)
- Bug in ``DatetimeIndex.to_period``, ``PeriodIndex.asobject``, ``PeriodIndex.to_timestamp`` doesn't preserve ``name`` (:issue:`7485`)
| Change log is missing some key words on changes to StataReader and StataWriter.
| https://api.github.com/repos/pandas-dev/pandas/pulls/7769 | 2014-07-16T19:03:36Z | 2014-07-28T20:41:48Z | 2014-07-28T20:41:48Z | 2014-08-20T15:32:51Z |
Categorical fixups | diff --git a/doc/source/10min.rst b/doc/source/10min.rst
index 985f112979a7e..6424b82779f0f 100644
--- a/doc/source/10min.rst
+++ b/doc/source/10min.rst
@@ -66,7 +66,8 @@ Creating a ``DataFrame`` by passing a dict of objects that can be converted to s
'B' : pd.Timestamp('20130102'),
'C' : pd.Series(1,index=list(range(4)),dtype='float32'),
'D' : np.array([3] * 4,dtype='int32'),
- 'E' : 'foo' })
+ 'E' : pd.Categorical(["test","train","test","train"]),
+ 'F' : 'foo' })
df2
Having specific :ref:`dtypes <basics.dtypes>`
@@ -635,6 +636,32 @@ the quarter end:
ts.index = (prng.asfreq('M', 'e') + 1).asfreq('H', 's') + 9
ts.head()
+Categoricals
+------------
+
+Since version 0.15, pandas can include categorical data in a `DataFrame`. For full docs, see the
+:ref:`Categorical introduction <categorical>` and the :ref:`API documentation <api.categorical>` .
+
+.. ipython:: python
+
+ df = pd.DataFrame({"id":[1,2,3,4,5,6], "raw_grade":['a', 'b', 'b', 'a', 'a', 'e']})
+
+ # convert the raw grades to a categorical
+ df["grade"] = pd.Categorical(df["raw_grade"])
+
+ # Alternative: df["grade"] = df["raw_grade"].astype("category")
+ df["grade"]
+
+ # Rename the levels
+ df["grade"].cat.levels = ["very good", "good", "very bad"]
+
+ # Reorder the levels and simultaneously add the missing levels
+ df["grade"].cat.reorder_levels(["very bad", "bad", "medium", "good", "very good"])
+ df["grade"]
+ df.sort("grade")
+ df.groupby("grade").size()
+
+
Plotting
--------
diff --git a/doc/source/api.rst b/doc/source/api.rst
index ec6e2aff870c6..a6a04af610ee0 100644
--- a/doc/source/api.rst
+++ b/doc/source/api.rst
@@ -528,11 +528,17 @@ and has the following usable methods and properties (all available as
:toctree: generated/
Categorical
- Categorical.from_codes
Categorical.levels
Categorical.ordered
Categorical.reorder_levels
Categorical.remove_unused_levels
+
+The following methods are considered API when using ``Categorical`` directly:
+
+.. autosummary::
+ :toctree: generated/
+
+ Categorical.from_codes
Categorical.min
Categorical.max
Categorical.mode
@@ -547,7 +553,7 @@ the Categorical back to a numpy array, so levels and order information is not pr
Categorical.__array__
To create compatibility with `pandas.Series` and `numpy` arrays, the following (non-API) methods
-are also introduced.
+are also introduced and available when ``Categorical`` is used directly.
.. autosummary::
:toctree: generated/
@@ -564,7 +570,6 @@ are also introduced.
Categorical.argsort
Categorical.fillna
-
Plotting
~~~~~~~~
.. currentmodule:: pandas
diff --git a/doc/source/categorical.rst b/doc/source/categorical.rst
index c08351eb87a79..831093228b5d6 100644
--- a/doc/source/categorical.rst
+++ b/doc/source/categorical.rst
@@ -90,6 +90,7 @@ By using some special functions:
df['group'] = pd.cut(df.value, range(0, 105, 10), right=False, labels=labels)
df.head(10)
+See :ref:`documentation <reshaping.tile.cut>` for :func:`~pandas.cut`.
`Categoricals` have a specific ``category`` :ref:`dtype <basics.dtypes>`:
@@ -331,6 +332,45 @@ Operations
The following operations are possible with categorical data:
+Comparing `Categoricals` with other objects is possible in two cases:
+ * comparing a `Categorical` to another `Categorical`, when `level` and `ordered` is the same or
+ * comparing a `Categorical` to a scalar.
+All other comparisons will raise a TypeError.
+
+.. ipython:: python
+
+ cat = pd.Series(pd.Categorical([1,2,3], levels=[3,2,1]))
+ cat_base = pd.Series(pd.Categorical([2,2,2], levels=[3,2,1]))
+ cat_base2 = pd.Series(pd.Categorical([2,2,2]))
+
+ cat > cat_base
+
+ # This doesn't work because the levels are not the same
+ try:
+ cat > cat_base2
+ except TypeError as e:
+ print("TypeError: " + str(e))
+
+ cat > 2
+
+.. note::
+
+ Comparisons with `Series`, `np.array` or a `Categorical` with different levels or ordering
+ will raise a `TypeError` because custom level ordering would result in two valid results:
+ one taking the ordering into account and one without. If you want to compare a `Categorical`
+ with such a type, you need to be explicit and convert the `Categorical` to values:
+
+.. ipython:: python
+
+ base = np.array([1,2,3])
+
+ try:
+ cat > base
+ except TypeError as e:
+ print("TypeError: " + str(e))
+
+ np.asarray(cat) > base
+
Getting the minimum and maximum, if the categorical is ordered:
.. ipython:: python
@@ -509,7 +549,8 @@ The same applies to ``df.append(df)``.
Getting Data In/Out
-------------------
-Writing data (`Series`, `Frames`) to a HDF store that contains a ``category`` dtype will currently raise ``NotImplementedError``.
+Writing data (`Series`, `Frames`) to a HDF store that contains a ``category`` dtype will currently
+raise ``NotImplementedError``.
Writing to a CSV file will convert the data, effectively removing any information about the
`Categorical` (levels and ordering). So if you read back the CSV file you have to convert the
@@ -579,7 +620,7 @@ object and not as a low level `numpy` array dtype. This leads to some problems.
try:
np.dtype("category")
except TypeError as e:
- print("TypeError: " + str(e))
+ print("TypeError: " + str(e))
dtype = pd.Categorical(["a"]).dtype
try:
diff --git a/doc/source/reshaping.rst b/doc/source/reshaping.rst
index 92a35d0276e22..3d40be37dbbb3 100644
--- a/doc/source/reshaping.rst
+++ b/doc/source/reshaping.rst
@@ -503,3 +503,10 @@ handling of NaN:
pd.factorize(x, sort=True)
np.unique(x, return_inverse=True)[::-1]
+
+.. note::
+ If you just want to handle one column as a categorical variable (like R's factor),
+ you can use ``df["cat_col"] = pd.Categorical(df["col"])`` or
+ ``df["cat_col"] = df["col"].astype("category")``. For full docs on :class:`~pandas.Categorical`,
+ see the :ref:`Categorical introduction <categorical>` and the
+ :ref:`API documentation <api.categorical>`. This feature was introduced in version 0.15.
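The note above points at treating a column as an R-style factor; the underlying idea is the value-to-integer-code mapping that `pd.factorize` performs. A minimal pure-Python sketch of that mapping (an illustrative toy, not the pandas implementation):

```python
def factorize(values):
    """Map each value to an integer code, returning the codes plus the
    list of unique levels in first-appearance order (toy pd.factorize)."""
    levels, codes, seen = [], [], {}
    for v in values:
        if v not in seen:
            seen[v] = len(levels)   # next unseen value gets the next code
            levels.append(v)
        codes.append(seen[v])
    return codes, levels

print(factorize(["b", "a", "b", "c"]))  # → ([0, 1, 0, 2], ['b', 'a', 'c'])
```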
diff --git a/doc/source/v0.15.0.txt b/doc/source/v0.15.0.txt
index b91c306e9b193..c7a9aa5c3630b 100644
--- a/doc/source/v0.15.0.txt
+++ b/doc/source/v0.15.0.txt
@@ -225,7 +225,8 @@ Categoricals in Series/DataFrame
methods to manipulate. Thanks to Jan Schultz for much of this API/implementation. (:issue:`3943`, :issue:`5313`, :issue:`5314`,
:issue:`7444`, :issue:`7839`, :issue:`7848`, :issue:`7864`, :issue:`7914`).
-For full docs, see the :ref:`Categorical introduction <categorical>` and the :ref:`API documentation <api.categorical>`.
+For full docs, see the :ref:`Categorical introduction <categorical>` and the
+:ref:`API documentation <api.categorical>`.
.. ipython:: python
diff --git a/pandas/core/categorical.py b/pandas/core/categorical.py
index c9674aea4a715..91713ab3bc576 100644
--- a/pandas/core/categorical.py
+++ b/pandas/core/categorical.py
@@ -2,12 +2,13 @@
import numpy as np
from warnings import warn
+import types
from pandas import compat
from pandas.compat import u
-from pandas.core.algorithms import factorize, unique
-from pandas.core.base import PandasObject
+from pandas.core.algorithms import factorize
+from pandas.core.base import PandasObject, PandasDelegate
from pandas.core.index import Index, _ensure_index
from pandas.core.indexing import _is_null_slice
from pandas.tseries.period import PeriodIndex
@@ -18,16 +19,36 @@
def _cat_compare_op(op):
def f(self, other):
- if isinstance(other, (Categorical, np.ndarray)):
- values = np.asarray(self)
- f = getattr(values, op)
- return f(np.asarray(other))
- else:
+ # On python2, you can usually compare any type to any type, and Categoricals can be
+ # seen as a custom type, but having different results depending on whether the levels are
+ # the same or not is kind of insane, so be a bit stricter here and use the python3 idea
+ # of comparing only things of equal type.
+ if not self.ordered:
+ if op in ['__lt__', '__gt__','__le__','__ge__']:
+ raise TypeError("Unordered Categoricals can only compare equality or not")
+ if isinstance(other, Categorical):
+ # Two Categoricals can only be compared if the levels are the same
+ if (len(self.levels) != len(other.levels)) or not ((self.levels == other.levels).all()):
+ raise TypeError("Categoricals can only be compared if 'levels' are the same")
+ if not (self.ordered == other.ordered):
+ raise TypeError("Categoricals can only be compared if 'ordered' is the same")
+ na_mask = (self._codes == -1) | (other._codes == -1)
+ f = getattr(self._codes, op)
+ ret = f(other._codes)
+ if na_mask.any():
+ # In other Series, this leads to False, so do that here too
+ ret[na_mask] = False
+ return ret
+ elif np.isscalar(other):
if other in self.levels:
i = self.levels.get_loc(other)
return getattr(self._codes, op)(i)
else:
return np.repeat(False, len(self))
+ else:
+ msg = "Cannot compare a Categorical for op {op} with type {typ}. If you want to \n" \
+ "compare values, use 'np.asarray(cat) <op> other'."
+ raise TypeError(msg.format(op=op,typ=type(other)))
f.__name__ = op
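The comparison hunk above operates on the integer codes rather than on decoded values, so a custom level ordering is respected and NaN (code -1) never compares True. A minimal pure-Python sketch of that idea (the helper name is illustrative, not the pandas implementation):

```python
import operator

def compare_codes(codes_a, codes_b, op):
    """Compare two categoricals that share the same ordered levels.

    Codes are positions into the shared levels list; -1 marks NaN.
    Comparing codes directly respects the custom level ordering, and
    any position where either side is NaN yields False.
    """
    out = []
    for a, b in zip(codes_a, codes_b):
        if a == -1 or b == -1:
            out.append(False)        # NaN never compares True
        else:
            out.append(op(a, b))
    return out

# levels ["c", "b", "a"]: "a" (code 2) ranks above "b" (code 1)
print(compare_codes([2, 1, -1], [1, 1, 0], operator.gt))  # → [True, False, False]
```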
@@ -109,9 +130,9 @@ class Categorical(PandasObject):
Attributes
----------
- levels : ndarray
+ levels : Index
The levels of this categorical
- codes : Index
+ codes : ndarray
The codes (integer positions, which point to the levels) of this categorical, read only
ordered : boolean
Whether or not this Categorical is ordered
@@ -171,6 +192,9 @@ class Categorical(PandasObject):
Categorical.max
"""
+ # For comparisons, so that numpy uses our implementation of the compare ops, which raise
+ __array_priority__ = 1000
+
def __init__(self, values, levels=None, ordered=None, name=None, fastpath=False, compat=False):
if fastpath:
@@ -208,8 +232,25 @@ def __init__(self, values, levels=None, ordered=None, name=None, fastpath=False,
# under certain versions of numpy as well
inferred = com._possibly_infer_to_datetimelike(values)
if not isinstance(inferred, np.ndarray):
+
+ # Input sanitation...
+ if com._is_sequence(values) or isinstance(values, types.GeneratorType):
+ # isnull doesn't work with generators/xrange, so convert all to lists
+ # TODO: prevent allocating the array/list twice by converting directly
+ values = list(values)
+ elif np.isscalar(values):
+ values = [values]
+
from pandas.core.series import _sanitize_array
- values = _sanitize_array(values, None)
+ # On lists with NaNs, int values will be converted to float. Use "object" dtype
+ # to prevent this. In the end objects will be cast to int/... in the level
+ # assignment step.
+ # tuples are list_like, but com.isnull(<tuple>) will return a single bool,
+ # which then raises an AttributeError: 'bool' object has no attribute 'any'
+ has_null = (com.is_list_like(values) and not isinstance(values, tuple)
+ and com.isnull(values).any())
+ dtype = 'object' if has_null else None
+ values = _sanitize_array(values, None, dtype=dtype)
if levels is None:
try:
@@ -277,7 +318,7 @@ def from_array(cls, data):
return Categorical(data)
@classmethod
- def from_codes(cls, codes, levels, ordered=True, name=None):
+ def from_codes(cls, codes, levels, ordered=False, name=None):
"""
Make a Categorical type from codes and levels arrays.
@@ -294,7 +335,7 @@ def from_codes(cls, codes, levels, ordered=True, name=None):
The levels for the categorical. Items need to be unique.
ordered : boolean, optional
Whether or not this categorical is treated as an ordered categorical. If not given,
- the resulting categorical will be ordered.
+ the resulting categorical will be unordered.
name : str, optional
Name for the Categorical variable.
"""
@@ -429,9 +470,16 @@ def __array__(self, dtype=None):
Returns
-------
values : numpy array
- A numpy array of the same dtype as categorical.levels.dtype
+ A numpy array of either the specified dtype or, if dtype==None (default), the same
+ dtype as categorical.levels.dtype
"""
- return com.take_1d(self.levels.values, self._codes)
+ ret = com.take_1d(self.levels.values, self._codes)
+ if dtype and dtype != self.levels.dtype:
+ return np.asarray(ret, dtype)
+ return ret
+
+ def astype(self, dtype, order='K', casting='unsafe', subok=True, copy=True):
+ return np.asarray(self, dtype)
@property
def T(self):
@@ -503,10 +551,27 @@ def order(self, inplace=False, ascending=True, na_position='last', **kwargs):
if na_position not in ['last','first']:
raise ValueError('invalid na_position: {!r}'.format(na_position))
- codes = np.sort(self._codes.copy())
+ codes = np.sort(self._codes)
if not ascending:
codes = codes[::-1]
+ # NaN handling
+ na_mask = (codes==-1)
+ if na_mask.any():
+ n_nans = len(codes[na_mask])
+ if na_position=="first" and not ascending:
+ # in this case sort to the front
+ new_codes = codes.copy()
+ new_codes[0:n_nans] = -1
+ new_codes[n_nans:] = codes[~na_mask]
+ codes = new_codes
+ elif na_position=="last" and not ascending:
+ # ... and to the end
+ new_codes = codes.copy()
+ pos = len(codes)-n_nans
+ new_codes[0:pos] = codes[~na_mask]
+ new_codes[pos:] = -1
+ codes = new_codes
if inplace:
self._codes = codes
return
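The `order()` hunk above sorts the codes and then shuffles the -1 (NaN) codes to the requested end. A pure-Python sketch of that NaN-position handling (illustrative only, not the pandas implementation):

```python
def sort_codes(codes, ascending=True, na_position="last"):
    """Sort categorical codes, keeping -1 (NaN) at the requested end."""
    nans = [c for c in codes if c == -1]
    rest = sorted(c for c in codes if c != -1)
    if not ascending:
        rest.reverse()
    # NaNs go to one end regardless of sort direction
    return nans + rest if na_position == "first" else rest + nans

print(sort_codes([0, 2, 1, -1], ascending=False, na_position="first"))  # → [-1, 2, 1, 0]
```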
@@ -760,7 +825,8 @@ def __setitem__(self, key, value):
rvalue = value if com.is_list_like(value) else [value]
to_add = Index(rvalue)-self.levels
- if len(to_add):
+ # no assignments of values not in levels, but it's always ok to set something to np.nan
+ if len(to_add) and not com.isnull(to_add).all():
raise ValueError("cannot setitem on a Categorical with a new level,"
" set the levels first")
@@ -786,6 +852,13 @@ def __setitem__(self, key, value):
key = self._codes[key]
lindexer = self.levels.get_indexer(rvalue)
+
+ # float levels do currently return -1 for np.nan, even if np.nan is included in the index
+ # "repair" this here
+ if com.isnull(rvalue).any() and com.isnull(self.levels).any():
+ nan_pos = np.where(com.isnull(self.levels))
+ lindexer[lindexer == -1] = nan_pos
+
self._codes[key] = lindexer
#### reduction ops ####
@@ -916,16 +989,59 @@ def describe(self):
'values' : self._codes }
).groupby('codes').count()
- counts.index = self.levels.take(counts.index)
- counts = counts.reindex(self.levels)
freqs = counts / float(counts.sum())
from pandas.tools.merge import concat
result = concat([counts,freqs],axis=1)
- result.index.name = 'levels'
result.columns = ['counts','freqs']
+
+ # fill in the real levels
+ check = result.index == -1
+ if check.any():
+ # Sort -1 (=NaN) to the last position
+ index = np.arange(0, len(self.levels)+1)
+ index[-1] = -1
+ result = result.reindex(index)
+ # build new index
+ levels = np.arange(0,len(self.levels)+1 ,dtype=object)
+ levels[:-1] = self.levels
+ levels[-1] = np.nan
+ result.index = levels.take(result.index)
+ else:
+ result.index = self.levels.take(result.index)
+ result = result.reindex(self.levels)
+ result.index.name = 'levels'
+
return result
+##### The Series.cat accessor #####
+
+class CategoricalProperties(PandasDelegate):
+ """
+ This is a delegator class that passes through limited property access
+ """
+
+ def __init__(self, values, index):
+ self.categorical = values
+ self.index = index
+
+ def _delegate_property_get(self, name):
+ return getattr(self.categorical, name)
+
+ def _delegate_property_set(self, name, new_values):
+ return setattr(self.categorical, name, new_values)
+
+ def _delegate_method(self, name, *args, **kwargs):
+ method = getattr(self.categorical, name)
+ return method(*args, **kwargs)
+
+CategoricalProperties._add_delegate_accessors(delegate=Categorical,
+ accessors=["levels", "codes", "ordered"],
+ typ='property')
+CategoricalProperties._add_delegate_accessors(delegate=Categorical,
+ accessors=["reorder_levels", "remove_unused_levels"],
+ typ='method')
+
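`CategoricalProperties` above uses `PandasDelegate` to forward selected properties and methods to the wrapped `Categorical` rather than subclassing it. The general pattern can be sketched in plain Python (class names are illustrative, not the pandas code):

```python
class Accessor:
    """Minimal sketch of the delegate pattern: forward attribute access
    to a wrapped object instead of subclassing it."""
    def __init__(self, wrapped):
        self._wrapped = wrapped

    @classmethod
    def add_property(cls, name):
        # install a property on the accessor that reads/writes the wrapped object
        def fget(self):
            return getattr(self._wrapped, name)
        def fset(self, value):
            setattr(self._wrapped, name, value)
        setattr(cls, name, property(fget, fset))

class Payload:
    levels = ["a", "b"]

Accessor.add_property("levels")
acc = Accessor(Payload())
print(acc.levels)  # → ['a', 'b']
```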
##### utility routines #####
def _get_codes_for_values(values, levels):
diff --git a/pandas/core/common.py b/pandas/core/common.py
index bc4c95ed3323e..9e04e38b9c4e2 100644
--- a/pandas/core/common.py
+++ b/pandas/core/common.py
@@ -275,7 +275,9 @@ def _isnull_ndarraylike(obj):
values = getattr(obj, 'values', obj)
dtype = values.dtype
- if dtype.kind in ('O', 'S', 'U'):
+ if is_categorical_dtype(values):
+ result = _isnull_categorical(values)
+ elif dtype.kind in ('O', 'S', 'U'):
# Working around NumPy ticket 1542
shape = values.shape
@@ -285,7 +287,6 @@ def _isnull_ndarraylike(obj):
result = np.empty(shape, dtype=bool)
vec = lib.isnullobj(values.ravel())
result[...] = vec.reshape(shape)
-
elif dtype in _DATELIKE_DTYPES:
# this is the NaT pattern
result = values.view('i8') == tslib.iNaT
@@ -299,6 +300,14 @@ def _isnull_ndarraylike(obj):
return result
+def _isnull_categorical(obj):
+ ret = obj._codes == -1
+ # String/object and float levels can hold np.nan
+ if obj.levels.dtype.kind in ('S', 'O', 'f'):
+ if np.nan in obj.levels:
+ nan_pos = np.where(isnull(obj.levels))[0]
+ ret = ret | (obj._codes == nan_pos)
+ return ret
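`_isnull_categorical` treats a value as null when its code is -1 *or* when the code points at a level that is itself NaN (NaN can legitimately appear in the levels). A self-contained, numpy-free sketch of that rule (illustrative only):

```python
def isnull_codes(codes, levels):
    """A categorical value is null if its code is -1, or if it points
    at a level that is itself NaN."""
    nan_positions = {i for i, lev in enumerate(levels)
                     if isinstance(lev, float) and lev != lev}  # NaN != NaN
    return [c == -1 or c in nan_positions for c in codes]

print(isnull_codes([0, 2, -1], ["a", "b", float("nan")]))  # → [False, True, True]
```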
def _isnull_ndarraylike_old(obj):
values = getattr(obj, 'values', obj)
diff --git a/pandas/core/format.py b/pandas/core/format.py
index 8f749d07296a7..0539d803a42a4 100644
--- a/pandas/core/format.py
+++ b/pandas/core/format.py
@@ -177,7 +177,7 @@ def _get_footer(self):
# level infos are added to the end and in a new line, like it is done for Categoricals
# Only added when we request a name
if self.name and com.is_categorical_dtype(self.series.dtype):
- level_info = self.series.cat._repr_level_info()
+ level_info = self.series.values._repr_level_info()
if footer:
footer += "\n"
footer += level_info
diff --git a/pandas/core/ops.py b/pandas/core/ops.py
index 9f29570af6f4f..de3b8d857617f 100644
--- a/pandas/core/ops.py
+++ b/pandas/core/ops.py
@@ -524,6 +524,10 @@ def _comp_method_SERIES(op, name, str_rep, masker=False):
code duplication.
"""
def na_op(x, y):
+ if com.is_categorical_dtype(x) != com.is_categorical_dtype(y):
+ msg = "Cannot compare a Categorical for op {op} with type {typ}. If you want to \n" \
+ "compare values, use 'series <op> np.asarray(cat)'."
+ raise TypeError(msg.format(op=op,typ=type(y)))
if x.dtype == np.object_:
if isinstance(y, list):
y = lib.list_to_object_array(y)
@@ -555,11 +559,16 @@ def wrapper(self, other):
index=self.index, name=name)
elif isinstance(other, pd.DataFrame): # pragma: no cover
return NotImplemented
- elif isinstance(other, (pa.Array, pd.Series, pd.Index)):
+ elif isinstance(other, (pa.Array, pd.Index)):
if len(self) != len(other):
raise ValueError('Lengths must match to compare')
return self._constructor(na_op(self.values, np.asarray(other)),
index=self.index).__finalize__(self)
+ elif isinstance(other, pd.Categorical):
+ if not com.is_categorical_dtype(self):
+ msg = "Cannot compare a Categorical for op {op} with Series of dtype {typ}.\n"\
+ "If you want to compare values, use 'series <op> np.asarray(other)'."
+ raise TypeError(msg.format(op=op,typ=self.dtype))
else:
mask = isnull(self)
diff --git a/pandas/core/series.py b/pandas/core/series.py
index 5a490992c478c..ef6bdf99915b1 100644
--- a/pandas/core/series.py
+++ b/pandas/core/series.py
@@ -900,7 +900,7 @@ def _repr_footer(self):
# Categorical
if com.is_categorical_dtype(self.dtype):
- level_info = self.cat._repr_level_info()
+ level_info = self.values._repr_level_info()
return u('%sLength: %d, dtype: %s\n%s') % (namestr,
len(self),
str(self.dtype.name),
@@ -2415,11 +2415,12 @@ def dt(self):
#------------------------------------------------------------------------------
# Categorical methods
- @property
+ @cache_readonly
def cat(self):
+ from pandas.core.categorical import CategoricalProperties
if not com.is_categorical_dtype(self.dtype):
raise TypeError("Can only use .cat accessor with a 'category' dtype")
- return self.values
+ return CategoricalProperties(self.values, self.index)
Series._setup_axes(['index'], info_axis=0, stat_axis=0,
aliases={'rows': 0})
diff --git a/pandas/tests/test_categorical.py b/pandas/tests/test_categorical.py
index 421e05f5a3bc7..dbfea95bb58c8 100644
--- a/pandas/tests/test_categorical.py
+++ b/pandas/tests/test_categorical.py
@@ -111,6 +111,50 @@ def test_constructor(self):
cat = pd.Categorical([1,2,3,np.nan], levels=[1,2,3])
self.assertTrue(com.is_integer_dtype(cat.levels))
+ # https://github.com/pydata/pandas/issues/3678
+ cat = pd.Categorical([np.nan,1, 2, 3])
+ self.assertTrue(com.is_integer_dtype(cat.levels))
+
+ # this should result in floats
+ cat = pd.Categorical([np.nan, 1, 2., 3 ])
+ self.assertTrue(com.is_float_dtype(cat.levels))
+
+ cat = pd.Categorical([np.nan, 1., 2., 3. ])
+ self.assertTrue(com.is_float_dtype(cat.levels))
+
+ # corner cases
+ cat = pd.Categorical([1])
+ self.assertTrue(len(cat.levels) == 1)
+ self.assertTrue(cat.levels[0] == 1)
+ self.assertTrue(len(cat.codes) == 1)
+ self.assertTrue(cat.codes[0] == 0)
+
+ cat = pd.Categorical(["a"])
+ self.assertTrue(len(cat.levels) == 1)
+ self.assertTrue(cat.levels[0] == "a")
+ self.assertTrue(len(cat.codes) == 1)
+ self.assertTrue(cat.codes[0] == 0)
+
+ # Scalars should be converted to lists
+ cat = pd.Categorical(1)
+ self.assertTrue(len(cat.levels) == 1)
+ self.assertTrue(cat.levels[0] == 1)
+ self.assertTrue(len(cat.codes) == 1)
+ self.assertTrue(cat.codes[0] == 0)
+
+
+ def test_constructor_with_generator(self):
+ # This was raising an Error in isnull(single_val).any() because isnull returned a scalar
+ # for a generator
+
+ a = (x for x in [1,2])
+ cat = Categorical(a)
+
+ # This actually uses an xrange, which is a sequence instead of a generator
+ from pandas.core.index import MultiIndex
+ MultiIndex.from_product([range(5), ['a', 'b', 'c']])
+
+
def test_from_codes(self):
# too few levels
@@ -134,7 +178,7 @@ def f():
self.assertRaises(ValueError, f)
- exp = Categorical(["a","b","c"])
+ exp = Categorical(["a","b","c"], ordered=False)
res = Categorical.from_codes([0,1,2], ["a","b","c"])
self.assertTrue(exp.equals(res))
@@ -179,6 +223,62 @@ def test_comparisons(self):
expected = np.repeat(False, len(self.factor))
self.assert_numpy_array_equal(result, expected)
+ # comparisons with categoricals
+ cat_rev = pd.Categorical(["a","b","c"], levels=["c","b","a"])
+ cat_rev_base = pd.Categorical(["b","b","b"], levels=["c","b","a"])
+ cat = pd.Categorical(["a","b","c"])
+ cat_base = pd.Categorical(["b","b","b"], levels=cat.levels)
+
+ # comparisons need to take level ordering into account
+ res_rev = cat_rev > cat_rev_base
+ exp_rev = np.array([True, False, False])
+ self.assert_numpy_array_equal(res_rev, exp_rev)
+
+ res_rev = cat_rev < cat_rev_base
+ exp_rev = np.array([False, False, True])
+ self.assert_numpy_array_equal(res_rev, exp_rev)
+
+ res = cat > cat_base
+ exp = np.array([False, False, True])
+ self.assert_numpy_array_equal(res, exp)
+
+ # Only categories with same levels can be compared
+ def f():
+ cat > cat_rev
+ self.assertRaises(TypeError, f)
+
+ cat_rev_base2 = pd.Categorical(["b","b","b"], levels=["c","b","a","d"])
+ def f():
+ cat_rev > cat_rev_base2
+ self.assertRaises(TypeError, f)
+
+ # Only categories with same ordering information can be compared
+ cat_unordered = cat.copy()
+ cat_unordered.ordered = False
+ self.assertFalse((cat > cat).any())
+ def f():
+ cat > cat_unordered
+ self.assertRaises(TypeError, f)
+
+ # comparison (in both directions) with Series will raise
+ s = Series(["b","b","b"])
+ self.assertRaises(TypeError, lambda: cat > s)
+ self.assertRaises(TypeError, lambda: cat_rev > s)
+ self.assertRaises(TypeError, lambda: s < cat)
+ self.assertRaises(TypeError, lambda: s < cat_rev)
+
+ # comparison with numpy.array will raise in both directions, but only on newer
+ # numpy versions
+ a = np.array(["b","b","b"])
+ self.assertRaises(TypeError, lambda: cat > a)
+ self.assertRaises(TypeError, lambda: cat_rev > a)
+
+ # The following work via '__array_priority__ = 1000'
+ # but only on numpy > 1.6.1?
+ tm._skip_if_not_numpy17_friendly()
+ self.assertRaises(TypeError, lambda: a < cat)
+ self.assertRaises(TypeError, lambda: a < cat_rev)
+
def test_na_flags_int_levels(self):
# #1457
@@ -205,6 +305,16 @@ def test_describe(self):
).set_index('levels')
tm.assert_frame_equal(desc, expected)
+ # check unused levels
+ cat = self.factor.copy()
+ cat.levels = ["a","b","c","d"]
+ desc = cat.describe()
+ expected = DataFrame.from_dict(dict(counts=[3, 2, 3, np.nan],
+ freqs=[3/8., 2/8., 3/8., np.nan],
+ levels=['a', 'b', 'c', 'd'])
+ ).set_index('levels')
+ tm.assert_frame_equal(desc, expected)
+
# check an integer one
desc = Categorical([1,2,3,1,2,3,3,2,1,1,1]).describe()
expected = DataFrame.from_dict(dict(counts=[5, 3, 3],
@@ -214,6 +324,47 @@ def test_describe(self):
).set_index('levels')
tm.assert_frame_equal(desc, expected)
+ # https://github.com/pydata/pandas/issues/3678
+ # describe should work with NaN
+ cat = pd.Categorical([np.nan,1, 2, 2])
+ desc = cat.describe()
+ expected = DataFrame.from_dict(dict(counts=[1, 2, 1],
+ freqs=[1/4., 2/4., 1/4.],
+ levels=[1,2,np.nan]
+ )
+ ).set_index('levels')
+ tm.assert_frame_equal(desc, expected)
+
+ # having NaN as level and as "not available" should also print two NaNs in describe!
+ cat = pd.Categorical([np.nan,1, 2, 2])
+ cat.levels = [1,2,np.nan]
+ desc = cat.describe()
+ expected = DataFrame.from_dict(dict(counts=[1, 2, np.nan, 1],
+ freqs=[1/4., 2/4., np.nan, 1/4.],
+ levels=[1,2,np.nan,np.nan]
+ )
+ ).set_index('levels')
+ tm.assert_frame_equal(desc, expected)
+
+ # empty levels show up as NA
+ cat = Categorical(["a","b","b","b"], levels=['a','b','c'], ordered=True)
+ result = cat.describe()
+
+ expected = DataFrame([[1,0.25],[3,0.75],[np.nan,np.nan]],
+ columns=['counts','freqs'],
+ index=Index(['a','b','c'],name='levels'))
+ tm.assert_frame_equal(result,expected)
+
+ # NA as a level
+ cat = pd.Categorical(["a","c","c",np.nan], levels=["b","a","c",np.nan] )
+ result = cat.describe()
+
+ expected = DataFrame([[np.nan, np.nan],[1,0.25],[2,0.5], [1,0.25]],
+ columns=['counts','freqs'],
+ index=Index(['b','a','c',np.nan],name='levels'))
+ tm.assert_frame_equal(result,expected)
+
+
def test_print(self):
expected = [" a", " b", " b", " a", " a", " c", " c", " c",
"Levels (3, object): [a < b < c]"]
@@ -496,6 +647,44 @@ def test_slicing_directly(self):
self.assert_numpy_array_equal(sliced._codes, expected._codes)
tm.assert_index_equal(sliced.levels, expected.levels)
+ def test_set_item_nan(self):
+ cat = pd.Categorical([1,2,3])
+ exp = pd.Categorical([1,np.nan,3], levels=[1,2,3])
+ cat[1] = np.nan
+ self.assertTrue(cat.equals(exp))
+
+ # if nan in levels, the proper code should be set!
+ cat = pd.Categorical([1,2,3, np.nan], levels=[1,2,3])
+ cat.levels = [1,2,3, np.nan]
+ cat[1] = np.nan
+ exp = np.array([0,3,2,-1])
+ self.assert_numpy_array_equal(cat.codes, exp)
+
+ cat = pd.Categorical([1,2,3, np.nan], levels=[1,2,3])
+ cat.levels = [1,2,3, np.nan]
+ cat[1:3] = np.nan
+ exp = np.array([0,3,3,-1])
+ self.assert_numpy_array_equal(cat.codes, exp)
+
+ cat = pd.Categorical([1,2,3, np.nan], levels=[1,2,3])
+ cat.levels = [1,2,3, np.nan]
+ cat[1:3] = [np.nan, 1]
+ exp = np.array([0,3,0,-1])
+ self.assert_numpy_array_equal(cat.codes, exp)
+
+ cat = pd.Categorical([1,2,3, np.nan], levels=[1,2,3])
+ cat.levels = [1,2,3, np.nan]
+ cat[1:3] = [np.nan, np.nan]
+ exp = np.array([0,3,3,-1])
+ self.assert_numpy_array_equal(cat.codes, exp)
+
+ cat = pd.Categorical([1,2,3, np.nan], levels=[1,2,3])
+ cat.levels = [1,2,3, np.nan]
+ cat[pd.isnull(cat)] = np.nan
+ exp = np.array([0,1,2,3])
+ self.assert_numpy_array_equal(cat.codes, exp)
+
+
class TestCategoricalAsBlock(tm.TestCase):
_multiprocess_can_split_ = True
@@ -616,7 +805,7 @@ def test_sideeffects_free(self):
# so this WILL change values
cat = Categorical(["a","b","c","a"])
s = pd.Series(cat)
- self.assertTrue(s.cat is cat)
+ self.assertTrue(s.values is cat)
s.cat.levels = [1,2,3]
exp_s = np.array([1,2,3,1])
self.assert_numpy_array_equal(s.__array__(), exp_s)
@@ -632,20 +821,20 @@ def test_nan_handling(self):
# Nans are represented as -1 in labels
s = Series(Categorical(["a","b",np.nan,"a"]))
self.assert_numpy_array_equal(s.cat.levels, np.array(["a","b"]))
- self.assert_numpy_array_equal(s.cat._codes, np.array([0,1,-1,0]))
+ self.assert_numpy_array_equal(s.cat.codes, np.array([0,1,-1,0]))
# If levels have nan included, the label should point to that instead
s2 = Series(Categorical(["a","b",np.nan,"a"], levels=["a","b",np.nan]))
self.assert_numpy_array_equal(s2.cat.levels,
np.array(["a","b",np.nan], dtype=np.object_))
- self.assert_numpy_array_equal(s2.cat._codes, np.array([0,1,2,0]))
+ self.assert_numpy_array_equal(s2.cat.codes, np.array([0,1,2,0]))
# Changing levels should also make the replaced level np.nan
s3 = Series(Categorical(["a","b","c","a"]))
s3.cat.levels = ["a","b",np.nan]
self.assert_numpy_array_equal(s3.cat.levels,
np.array(["a","b",np.nan], dtype=np.object_))
- self.assert_numpy_array_equal(s3.cat._codes, np.array([0,1,2,0]))
+ self.assert_numpy_array_equal(s3.cat.codes, np.array([0,1,2,0]))
def test_sequence_like(self):
@@ -655,8 +844,8 @@ def test_sequence_like(self):
df['grade'] = Categorical(df['raw_grade'])
# basic sequencing testing
- result = list(df.grade.cat)
- expected = np.array(df.grade.cat).tolist()
+ result = list(df.grade.values)
+ expected = np.array(df.grade.values).tolist()
tm.assert_almost_equal(result,expected)
# iteration
@@ -698,7 +887,7 @@ def test_series_delegations(self):
exp_values = np.array(["a","b","c","a"])
s.cat.reorder_levels(["c","b","a"])
self.assert_numpy_array_equal(s.cat.levels, exp_levels)
- self.assert_numpy_array_equal(s.cat.__array__(), exp_values)
+ self.assert_numpy_array_equal(s.values.__array__(), exp_values)
self.assert_numpy_array_equal(s.__array__(), exp_values)
# remove unused levels
@@ -707,7 +896,7 @@ def test_series_delegations(self):
exp_values = np.array(["a","b","b","a"])
s.cat.remove_unused_levels()
self.assert_numpy_array_equal(s.cat.levels, exp_levels)
- self.assert_numpy_array_equal(s.cat.__array__(), exp_values)
+ self.assert_numpy_array_equal(s.values.__array__(), exp_values)
self.assert_numpy_array_equal(s.__array__(), exp_values)
# This method is likely to be confused, so test that it raises an error on wrong inputs:
@@ -766,31 +955,16 @@ def test_describe(self):
result = self.cat.describe()
self.assertEquals(len(result.columns),1)
- # empty levels show up as NA
- s = Series(Categorical(["a","b","b","b"], levels=['a','b','c'], ordered=True))
- result = s.cat.describe()
- expected = DataFrame([[1,0.25],[3,0.75],[np.nan,np.nan]],
- columns=['counts','freqs'],
- index=Index(['a','b','c'],name='levels'))
- tm.assert_frame_equal(result,expected)
+ # In a frame, describe() for the cat should be the same as for string arrays (count, unique,
+ # top, freq)
+ cat = Categorical(["a","b","b","b"], levels=['a','b','c'], ordered=True)
+ s = Series(cat)
result = s.describe()
expected = Series([4,2,"b",3],index=['count','unique','top', 'freq'])
tm.assert_series_equal(result,expected)
- # NA as a level
- cat = pd.Categorical(["a","c","c",np.nan], levels=["b","a","c",np.nan] )
- result = cat.describe()
-
- expected = DataFrame([[np.nan, np.nan],[1,0.25],[2,0.5], [1,0.25]],
- columns=['counts','freqs'],
- index=Index(['b','a','c',np.nan],name='levels'))
- tm.assert_frame_equal(result,expected)
-
-
- # In a frame, describe() for the cat should be the same as for string arrays (count, unique,
- # top, freq)
cat = pd.Series(pd.Categorical(["a","b","c","c"]))
df3 = pd.DataFrame({"cat":cat, "s":["a","b","c","c"]})
res = df3.describe()
@@ -970,7 +1144,7 @@ def test_sort(self):
# Cats must be sorted in a dataframe
res = df.sort(columns=["string"], ascending=False)
exp = np.array(["d", "c", "b", "a"])
- self.assert_numpy_array_equal(res["sort"].cat.__array__(), exp)
+ self.assert_numpy_array_equal(res["sort"].values.__array__(), exp)
self.assertEqual(res["sort"].dtype, "category")
res = df.sort(columns=["sort"], ascending=False)
@@ -1013,17 +1187,29 @@ def f():
res = cat.order(ascending=False, na_position='last')
exp_val = np.array(["d","c","b","a", np.nan],dtype=object)
exp_levels = np.array(["a","b","c","d"],dtype=object)
- # FIXME: IndexError: Out of bounds on buffer access (axis 0)
- #self.assert_numpy_array_equal(res.__array__(), exp_val)
- #self.assert_numpy_array_equal(res.levels, exp_levels)
+ self.assert_numpy_array_equal(res.__array__(), exp_val)
+ self.assert_numpy_array_equal(res.levels, exp_levels)
+
+ cat = Categorical(["a","c","b","d", np.nan], ordered=True)
+ res = cat.order(ascending=False, na_position='first')
+ exp_val = np.array([np.nan, "d","c","b","a"],dtype=object)
+ exp_levels = np.array(["a","b","c","d"],dtype=object)
+ self.assert_numpy_array_equal(res.__array__(), exp_val)
+ self.assert_numpy_array_equal(res.levels, exp_levels)
cat = Categorical(["a","c","b","d", np.nan], ordered=True)
res = cat.order(ascending=False, na_position='first')
exp_val = np.array([np.nan, "d","c","b","a"],dtype=object)
exp_levels = np.array(["a","b","c","d"],dtype=object)
- # FIXME: IndexError: Out of bounds on buffer access (axis 0)
- #self.assert_numpy_array_equal(res.__array__(), exp_val)
- #self.assert_numpy_array_equal(res.levels, exp_levels)
+ self.assert_numpy_array_equal(res.__array__(), exp_val)
+ self.assert_numpy_array_equal(res.levels, exp_levels)
+
+ cat = Categorical(["a","c","b","d", np.nan], ordered=True)
+ res = cat.order(ascending=False, na_position='last')
+ exp_val = np.array(["d","c","b","a",np.nan],dtype=object)
+ exp_levels = np.array(["a","b","c","d"],dtype=object)
+ self.assert_numpy_array_equal(res.__array__(), exp_val)
+ self.assert_numpy_array_equal(res.levels, exp_levels)
def test_slicing(self):
cat = Series(Categorical([1,2,3,4]))
@@ -1473,6 +1659,63 @@ def f():
df.loc[2:3,"b"] = pd.Categorical(["b","b"], levels=["a","b"])
tm.assert_frame_equal(df, exp)
+ # ensure that one can set something to np.nan
+ s = Series(Categorical([1,2,3]))
+ exp = Series(Categorical([1,np.nan,3]))
+ s[1] = np.nan
+ tm.assert_series_equal(s, exp)
+
+ def test_comparisons(self):
+ tests_data = [(list("abc"), list("cba"), list("bbb")),
+ ([1,2,3], [3,2,1], [2,2,2])]
+ for data , reverse, base in tests_data:
+ cat_rev = pd.Series(pd.Categorical(data, levels=reverse))
+ cat_rev_base = pd.Series(pd.Categorical(base, levels=reverse))
+ cat = pd.Series(pd.Categorical(data))
+ cat_base = pd.Series(pd.Categorical(base, levels=cat.cat.levels))
+ s = Series(base)
+ a = np.array(base)
+
+ # comparisons need to take level ordering into account
+ res_rev = cat_rev > cat_rev_base
+ exp_rev = Series([True, False, False])
+ tm.assert_series_equal(res_rev, exp_rev)
+
+ res_rev = cat_rev < cat_rev_base
+ exp_rev = Series([False, False, True])
+ tm.assert_series_equal(res_rev, exp_rev)
+
+ res = cat > cat_base
+ exp = Series([False, False, True])
+ tm.assert_series_equal(res, exp)
+
+ # Only categories with same levels can be compared
+ def f():
+ cat > cat_rev
+ self.assertRaises(TypeError, f)
+
+ # categorical cannot be compared to Series or numpy array, and also not the other way
+ # around
+ self.assertRaises(TypeError, lambda: cat > s)
+ self.assertRaises(TypeError, lambda: cat_rev > s)
+ self.assertRaises(TypeError, lambda: cat > a)
+ self.assertRaises(TypeError, lambda: cat_rev > a)
+
+ self.assertRaises(TypeError, lambda: s < cat)
+ self.assertRaises(TypeError, lambda: s < cat_rev)
+
+ self.assertRaises(TypeError, lambda: a < cat)
+ self.assertRaises(TypeError, lambda: a < cat_rev)
+
+ # Categoricals can be compared to scalar values
+ res = cat_rev > base[0]
+ tm.assert_series_equal(res, exp)
+
+ # And test NaN handling...
+ cat = pd.Series(pd.Categorical(["a","b","c", np.nan]))
+ exp = Series([True, True, True, False])
+ res = (cat == cat)
+ tm.assert_series_equal(res, exp)
def test_concat(self):
cat = pd.Categorical(["a","b"], levels=["a","b"])
| Some fixups for Categoricals.
Fixes: #3678
- [x] Maybe change Series.cat after the discussion in https://github.com/pydata/pandas/issues/7207 ?
- [ ] remove Series.cat from tab completion if Series is not of dtype category
- [x] fix for the "FIXME" in unittests
- [x] Look at problems in docs (-> hdf support)
- [x] Fixup Comparison thingies...
| https://api.github.com/repos/pandas-dev/pandas/pulls/7768 | 2014-07-16T18:42:14Z | 2014-08-12T16:18:56Z | null | 2014-08-12T23:45:39Z |
Docs: MultiIndex support is hardly bleeding edge, remove docs warnings. | diff --git a/doc/source/indexing.rst b/doc/source/indexing.rst
index 9c73c679f726a..ed5bfd0ba4804 100644
--- a/doc/source/indexing.rst
+++ b/doc/source/indexing.rst
@@ -1638,15 +1638,6 @@ analysis.
See the :ref:`cookbook<cookbook.multi_index>` for some advanced strategies
-.. note::
-
- Given that hierarchical indexing is so new to the library, it is definitely
- "bleeding-edge" functionality but is certainly suitable for production. But,
- there may inevitably be some minor API changes as more use cases are
- explored and any weaknesses in the design / implementation are identified.
- pandas aims to be "eminently usable" so any feedback about new
- functionality like this is extremely helpful.
-
Creating a MultiIndex (hierarchical index) object
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
| https://api.github.com/repos/pandas-dev/pandas/pulls/7767 | 2014-07-16T07:25:44Z | 2014-07-16T09:59:30Z | 2014-07-16T09:59:30Z | 2014-07-16T09:59:39Z | |
BUG: rolling_* functions should not shrink window | diff --git a/doc/source/v0.15.0.txt b/doc/source/v0.15.0.txt
index 06c93541a7783..a269645b841b0 100644
--- a/doc/source/v0.15.0.txt
+++ b/doc/source/v0.15.0.txt
@@ -37,6 +37,30 @@ API changes
- Raise a ``ValueError`` in ``df.to_hdf`` with 'fixed' format, if ``df`` has non-unique columns as the resulting file will be broken (:issue:`7761`)
+- :func:`rolling_min`, :func:`rolling_max`, :func:`rolling_cov`, and :func:`rolling_corr`
+ now return objects with all ``NaN``s when ``len(arg) < min_periods <= window``
+ (like all other rolling functions do) rather than producing an error message. (:issue:`7766`)
+ For example, this is the old behavior:
+ .. ipython:: python
+ In [14]: s = Series([10, 11, 12, 13])
+
+ In [15]: rolling_min(s, window=10, min_periods=5)
+ ---------------------------------------------------------------------------
+ ValueError Traceback (most recent call last)
+ <ipython-input-15-f622819d7987> in <module>()
+ ----> 1 rolling_min(s, window=10, min_periods=5)
+ ...
+ ValueError: min_periods (5) must be <= window (4)
+ whereas this is the new behavior:
+ .. ipython:: python
+ In [16]: rolling_min(s, window=10, min_periods=5)
+ Out[16]:
+ 0 NaN
+ 1 NaN
+ 2 NaN
+ 3 NaN
+ dtype: float64
+
.. _whatsnew_0150.cat:
Categoricals in Series/DataFrame
diff --git a/pandas/algos.pyx b/pandas/algos.pyx
index 2a07272acd0e8..d993447fc7408 100644
--- a/pandas/algos.pyx
+++ b/pandas/algos.pyx
@@ -1551,8 +1551,6 @@ def roll_max2(ndarray[float64_t] a, int window, int minp):
minp = _check_minp(window, minp, n0)
- window = min(window, n0)
-
ring = <pairs*>stdlib.malloc(window * sizeof(pairs))
end = ring + window
last = ring
@@ -1650,8 +1648,6 @@ def roll_min2(np.ndarray[np.float64_t, ndim=1] a, int window, int minp):
raise ValueError('Invalid min_periods size %d greater than window %d'
% (minp, window))
- window = min(window, n0)
-
minp = _check_minp(window, minp, n0)
ring = <pairs*>stdlib.malloc(window * sizeof(pairs))
diff --git a/pandas/stats/moments.py b/pandas/stats/moments.py
index e5d96ee6b8f0f..5a405a5b74f7b 100644
--- a/pandas/stats/moments.py
+++ b/pandas/stats/moments.py
@@ -211,9 +211,8 @@ def rolling_cov(arg1, arg2=None, window=None, min_periods=None, freq=None,
arg2 = _conv_timerule(arg2, freq, how)
def _get_cov(X, Y):
- adj_window = min(window, len(X), len(Y))
- mean = lambda x: rolling_mean(x, adj_window, min_periods, center=center)
- count = rolling_count(X + Y, adj_window, center=center)
+ mean = lambda x: rolling_mean(x, window, min_periods, center=center)
+ count = rolling_count(X + Y, window, center=center)
bias_adj = count / (count - 1)
return (mean(X * Y) - mean(X) * mean(Y)) * bias_adj
rs = _flex_binary_moment(arg1, arg2, _get_cov, pairwise=bool(pairwise))
@@ -236,12 +235,11 @@ def rolling_corr(arg1, arg2=None, window=None, min_periods=None, freq=None,
arg2 = _conv_timerule(arg2, freq, how)
def _get_corr(a, b):
- adj_window = min(window, len(a), len(b))
- num = rolling_cov(a, b, adj_window, min_periods, freq=freq,
+ num = rolling_cov(a, b, window, min_periods, freq=freq,
center=center)
- den = (rolling_std(a, adj_window, min_periods, freq=freq,
+ den = (rolling_std(a, window, min_periods, freq=freq,
center=center) *
- rolling_std(b, adj_window, min_periods, freq=freq,
+ rolling_std(b, window, min_periods, freq=freq,
center=center))
return num / den
diff --git a/pandas/stats/tests/test_moments.py b/pandas/stats/tests/test_moments.py
index 8f20a4d421045..9c8e958055191 100644
--- a/pandas/stats/tests/test_moments.py
+++ b/pandas/stats/tests/test_moments.py
@@ -6,9 +6,9 @@
from numpy.random import randn
import numpy as np
-from pandas import Series, DataFrame, bdate_range, isnull, notnull
+from pandas import Series, DataFrame, Panel, bdate_range, isnull, notnull
from pandas.util.testing import (
- assert_almost_equal, assert_series_equal, assert_frame_equal
+ assert_almost_equal, assert_series_equal, assert_frame_equal, assert_panel_equal
)
import pandas.core.datetools as datetools
import pandas.stats.moments as mom
@@ -841,6 +841,46 @@ def test_rolling_corr_diff_length(self):
result = mom.rolling_corr(s1, s2a, window=3, min_periods=2)
assert_series_equal(result, expected)
+ def test_rolling_functions_window_non_shrinkage(self):
+ # GH 7764
+ s = Series(range(4))
+ s_expected = Series(np.nan, index=s.index)
+ df = DataFrame([[1,5], [3, 2], [3,9], [-1,0]], columns=['A','B'])
+ df_expected = DataFrame(np.nan, index=df.index, columns=df.columns)
+ df_expected_panel = Panel(items=df.index, major_axis=df.columns, minor_axis=df.columns)
+
+ functions = [lambda x: mom.rolling_cov(x, x, pairwise=False, window=10, min_periods=5),
+ lambda x: mom.rolling_corr(x, x, pairwise=False, window=10, min_periods=5),
+ lambda x: mom.rolling_max(x, window=10, min_periods=5),
+ lambda x: mom.rolling_min(x, window=10, min_periods=5),
+ lambda x: mom.rolling_sum(x, window=10, min_periods=5),
+ lambda x: mom.rolling_mean(x, window=10, min_periods=5),
+ lambda x: mom.rolling_std(x, window=10, min_periods=5),
+ lambda x: mom.rolling_var(x, window=10, min_periods=5),
+ lambda x: mom.rolling_skew(x, window=10, min_periods=5),
+ lambda x: mom.rolling_kurt(x, window=10, min_periods=5),
+ lambda x: mom.rolling_quantile(x, quantile=0.5, window=10, min_periods=5),
+ lambda x: mom.rolling_median(x, window=10, min_periods=5),
+ lambda x: mom.rolling_apply(x, func=sum, window=10, min_periods=5),
+ lambda x: mom.rolling_window(x, win_type='boxcar', window=10, min_periods=5),
+ ]
+ for f in functions:
+ s_result = f(s)
+ assert_series_equal(s_result, s_expected)
+
+ df_result = f(df)
+ assert_frame_equal(df_result, df_expected)
+
+ functions = [lambda x: mom.rolling_cov(x, x, pairwise=True, window=10, min_periods=5),
+ lambda x: mom.rolling_corr(x, x, pairwise=True, window=10, min_periods=5),
+ # rolling_corr_pairwise is deprecated, so the following line should be deleted
+ # when rolling_corr_pairwise is removed.
+ lambda x: mom.rolling_corr_pairwise(x, x, window=10, min_periods=5),
+ ]
+ for f in functions:
+ df_result_panel = f(df)
+ assert_panel_equal(df_result_panel, df_expected_panel)
+
def test_expanding_cov_pairwise_diff_length(self):
# GH 7512
df1 = DataFrame([[1,5], [3, 2], [3,9]], columns=['A','B'])
| Closes https://github.com/pydata/pandas/issues/7764.
| https://api.github.com/repos/pandas-dev/pandas/pulls/7766 | 2014-07-16T01:29:45Z | 2014-07-23T12:17:01Z | null | 2014-09-10T00:12:30Z |
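The new behavior this PR introduces can be sketched with the modern rolling API (the module-level `rolling_min` and friends were later replaced by the `.rolling()` accessor; this is a sketch of the semantics, not the PR's code):

```python
import pandas as pd

s = pd.Series([10, 11, 12, 13])

# With only 4 observations, min_periods=5 can never be satisfied, so every
# window yields NaN instead of raising "min_periods must be <= window".
result = s.rolling(window=10, min_periods=5).min()
print(result.isna().all())  # True
```

The key point is that `len(arg) < min_periods <= window` now produces an all-`NaN` result of the same length as the input, consistent with the other rolling functions.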
Closes #7758 - astype(unicode) returning unicode. | diff --git a/doc/source/v0.15.0.txt b/doc/source/v0.15.0.txt
index f305d088e996f..933cc92fa5992 100644
--- a/doc/source/v0.15.0.txt
+++ b/doc/source/v0.15.0.txt
@@ -184,8 +184,7 @@ There are no experimental changes in 0.15.0
Bug Fixes
~~~~~~~~~
-
-
+- Bug in Series.astype("unicode") actually calling str (:issue:`7758`)
diff --git a/pandas/core/common.py b/pandas/core/common.py
index 53f7415ac8ef6..1a57c9c33ba7c 100644
--- a/pandas/core/common.py
+++ b/pandas/core/common.py
@@ -2492,6 +2492,9 @@ def _astype_nansafe(arr, dtype, copy=True):
elif arr.dtype == np.object_ and np.issubdtype(dtype.type, np.integer):
# work around NumPy brokenness, #1987
return lib.astype_intsafe(arr.ravel(), dtype).reshape(arr.shape)
+ elif issubclass(dtype.type, compat.text_type):
+ # in Py3 that's str, in Py2 that's unicode
+ return lib.astype_unicode(arr.ravel()).reshape(arr.shape)
elif issubclass(dtype.type, compat.string_types):
return lib.astype_str(arr.ravel()).reshape(arr.shape)
diff --git a/pandas/lib.pyx b/pandas/lib.pyx
index 7690cc4819dd5..373320393bff2 100644
--- a/pandas/lib.pyx
+++ b/pandas/lib.pyx
@@ -781,6 +781,16 @@ def astype_intsafe(ndarray[object] arr, new_dtype):
return result
+cpdef ndarray[object] astype_unicode(ndarray arr):
+ cdef:
+ Py_ssize_t i, n = arr.size
+ ndarray[object] result = np.empty(n, dtype=object)
+
+ for i in range(n):
+ util.set_value_at(result, i, unicode(arr[i]))
+
+ return result
+
cpdef ndarray[object] astype_str(ndarray arr):
cdef:
Py_ssize_t i, n = arr.size
diff --git a/pandas/tests/test_series.py b/pandas/tests/test_series.py
index 1ae6ceb7ae2b4..fda0abe07050d 100644
--- a/pandas/tests/test_series.py
+++ b/pandas/tests/test_series.py
@@ -1,3 +1,4 @@
+# coding=utf-8
# pylint: disable-msg=E1101,W0612
import sys
@@ -4851,13 +4852,41 @@ def test_astype_str(self):
s1 = Series([digits * 10, tm.rands(63), tm.rands(64),
tm.rands(1000)])
s2 = Series([digits * 10, tm.rands(63), tm.rands(64), nan, 1.0])
- types = (compat.text_type,) + (np.str_, np.unicode_)
+ types = (compat.text_type, np.str_)
for typ in types:
for s in (s1, s2):
res = s.astype(typ)
expec = s.map(compat.text_type)
assert_series_equal(res, expec)
+ def test_astype_unicode(self):
+ # a bit of magic is required to set the default encoding to utf-8
+ digits = string.digits
+ test_series = [
+ Series([digits * 10, tm.rands(63), tm.rands(64), tm.rands(1000)]),
+ Series([u"データーサイエンス、お前はもう死んでいる"]),
+
+ ]
+
+ former_encoding = None
+ if not compat.PY3:
+ # in Python 2 we can force the default encoding
+ # for this test
+ former_encoding = sys.getdefaultencoding()
+ reload(sys)
+ sys.setdefaultencoding("utf-8")
+ if sys.getdefaultencoding() == "utf-8":
+ test_series.append(Series([u"野菜食べないとやばい".encode("utf-8")]))
+ for s in test_series:
+ res = s.astype("unicode")
+ expec = s.map(compat.text_type)
+ assert_series_equal(res, expec)
+ # restore the former encoding
+ if former_encoding is not None and former_encoding != "utf-8":
+ reload(sys)
+ sys.setdefaultencoding(former_encoding)
+
+
def test_map(self):
index, data = tm.getMixedTypeDict()
| I didn't have to rely on `infer_dtype` as suggested by @jreback in #7758,
as I delegated all the dirty work to numpy:
it just calls `numpy.unicode` on all the values.
Please have a critical look at this pull request before merging,
as I am unfamiliar with both pandas and Cython and may have
misunderstood the way pandas works.
Unit tests are attached and seem to work fine on `python 2.7.2` and `python 3.3.0`.
closes #7758
| https://api.github.com/repos/pandas-dev/pandas/pulls/7765 | 2014-07-15T21:42:06Z | 2014-07-16T11:57:48Z | null | 2014-07-16T21:49:37Z |
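The element-wise conversion this PR fixes can be sketched in current pandas, where `compat.text_type` is simply `str` on Python 3 (an illustrative sketch, not the patched Cython path itself):

```python
import numpy as np
import pandas as pd

s = pd.Series(["abc", np.nan, 1.0])

# astype with the text type converts every element via str() on Python 3
# (the PR's fix ensures unicode() is used on Python 2 instead of str())
res = s.astype(str)
print(res.tolist())  # ['abc', 'nan', '1.0']
```

Every resulting element is a text object, which is what the `test_astype_unicode` test above verifies via `s.map(compat.text_type)`.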
docs: rewrite .iloc accessing beyond ends. | diff --git a/doc/source/indexing.rst b/doc/source/indexing.rst
index 9c73c679f726a..8f4cb1e1e6a68 100644
--- a/doc/source/indexing.rst
+++ b/doc/source/indexing.rst
@@ -415,20 +415,29 @@ For getting a cross section using an integer position (equiv to ``df.xs(1)``)
df1.iloc[1]
-There is one significant departure from standard python/numpy slicing semantics.
-python/numpy allow slicing past the end of an array without an associated error.
+Out of range slice indexes are handled gracefully just as in Python/Numpy.
.. ipython:: python
# these are allowed in python/numpy.
+ # Only works in Pandas starting from v0.14.0.
x = list('abcdef')
+ x
x[4:10]
x[8:10]
+ s = Series(x)
+ s
+ s.iloc[4:10]
+ s.iloc[8:10]
-- as of v0.14.0, ``iloc`` will now accept out-of-bounds indexers for slices, e.g. a value that exceeds the length of the object being
- indexed. These will be excluded. This will make pandas conform more with pandas/numpy indexing of out-of-bounds
- values. A single indexer / list of indexers that is out-of-bounds will still raise
- ``IndexError`` (:issue:`6296`, :issue:`6299`). This could result in an empty axis (e.g. an empty DataFrame being returned)
+.. note::
+
+ Prior to v0.14.0, ``iloc`` would not accept out of bounds indexers for
+ slices, e.g. a value that exceeds the length of the object being indexed.
+
+
+Note that this could result in an empty axis (e.g. an empty DataFrame being
+returned)
.. ipython:: python
@@ -438,7 +447,9 @@ python/numpy allow slicing past the end of an array without an associated error.
dfl.iloc[:,1:3]
dfl.iloc[4:6]
-These are out-of-bounds selections
+A single indexer that is out of bounds will raise an ``IndexError``.
+A list of indexers where any element is out of bounds will raise an
+``IndexError``
.. code-block:: python
| Let's actually talk about what the current behaviour is, not what the
behaviour used to be.
| https://api.github.com/repos/pandas-dev/pandas/pulls/7756 | 2014-07-15T02:31:45Z | 2014-07-15T12:57:21Z | 2014-07-15T12:57:21Z | 2014-07-15T12:57:21Z |
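The slicing semantics the rewritten docs describe can be demonstrated directly (a sketch against current pandas, which kept the v0.14.0 behavior):

```python
import pandas as pd

s = pd.Series(list("abcdef"))

# Out-of-range *slices* are truncated gracefully, as in Python/NumPy
print(s.iloc[4:10].tolist())  # ['e', 'f']
print(s.iloc[8:10].tolist())  # []

# A *single* out-of-bounds indexer still raises IndexError
try:
    s.iloc[8]
except IndexError as e:
    print("IndexError:", e)
```

This is the distinction the docs change draws: slices past the end yield a (possibly empty) result, while scalar or list indexers out of bounds raise.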
DOC: Fix typo conncection -> connection | diff --git a/pandas/io/sql.py b/pandas/io/sql.py
index 23ca80d771df9..1ee5c55c0ae06 100644
--- a/pandas/io/sql.py
+++ b/pandas/io/sql.py
@@ -264,7 +264,7 @@ def read_sql_table(table_name, con, index_col=None, coerce_float=True,
table_name : string
Name of SQL table in database
con : SQLAlchemy engine
- Sqlite DBAPI conncection mode not supported
+ Sqlite DBAPI connection mode not supported
index_col : string, optional
Column to set as index
coerce_float : boolean, default True
| null | https://api.github.com/repos/pandas-dev/pandas/pulls/7747 | 2014-07-14T04:53:52Z | 2014-07-14T07:39:19Z | 2014-07-14T07:39:19Z | 2014-07-14T07:39:24Z |
Docs fixes | diff --git a/doc/README.rst b/doc/README.rst
index 1a105a7a65a81..660a3b7232891 100644
--- a/doc/README.rst
+++ b/doc/README.rst
@@ -33,8 +33,8 @@ Some other important things to know about the docs:
itself and the docs in this folder ``pandas/doc/``.
The docstrings provide a clear explanation of the usage of the individual
- functions, while the documentation in this filder consists of tutorial-like
- overviews per topic together with some other information (whatsnew,
+ functions, while the documentation in this folder consists of tutorial-like
+ overviews per topic together with some other information (what's new,
installation, etc).
- The docstrings follow the **Numpy Docstring Standard** which is used widely
@@ -56,7 +56,7 @@ Some other important things to know about the docs:
x = 2
x**3
- will be renderd as
+ will be rendered as
::
@@ -66,7 +66,7 @@ Some other important things to know about the docs:
Out[2]: 8
This means that almost all code examples in the docs are always run (and the
- ouptut saved) during the doc build. This way, they will always be up to date,
+ output saved) during the doc build. This way, they will always be up to date,
but it makes the doc building a bit more complex.
@@ -135,12 +135,12 @@ If you want to do a full clean build, do::
Staring with 0.13.1 you can tell ``make.py`` to compile only a single section
of the docs, greatly reducing the turn-around time for checking your changes.
-You will be prompted to delete unrequired `.rst` files, since the last commited
-version can always be restored from git.
+You will be prompted to delete `.rst` files that aren't required, since the
+last committed version can always be restored from git.
::
- #omit autosummary and api section
+ #omit autosummary and API section
python make.py clean
python make.py --no-api
diff --git a/doc/source/10min.rst b/doc/source/10min.rst
index a9a97ee56813c..2111bb2d72dcb 100644
--- a/doc/source/10min.rst
+++ b/doc/source/10min.rst
@@ -260,7 +260,7 @@ For slicing columns explicitly
df.iloc[:,1:3]
-For getting a value explicity
+For getting a value explicitly
.. ipython:: python
diff --git a/doc/source/basics.rst b/doc/source/basics.rst
index 4d67616c5cd60..a503367c13427 100644
--- a/doc/source/basics.rst
+++ b/doc/source/basics.rst
@@ -346,7 +346,7 @@ General DataFrame Combine
The ``combine_first`` method above calls the more general DataFrame method
``combine``. This method takes another DataFrame and a combiner function,
aligns the input DataFrame and then passes the combiner function pairs of
-Series (ie, columns whose names are the same).
+Series (i.e., columns whose names are the same).
So, for instance, to reproduce ``combine_first`` as above:
@@ -1461,7 +1461,7 @@ from the current type (say ``int`` to ``float``)
df3.dtypes
The ``values`` attribute on a DataFrame return the *lower-common-denominator* of the dtypes, meaning
-the dtype that can accommodate **ALL** of the types in the resulting homogenous dtyped numpy array. This can
+the dtype that can accommodate **ALL** of the types in the resulting homogeneous dtyped numpy array. This can
force some *upcasting*.
.. ipython:: python
diff --git a/doc/source/cookbook.rst b/doc/source/cookbook.rst
index 844112312cdce..fd68427a86951 100644
--- a/doc/source/cookbook.rst
+++ b/doc/source/cookbook.rst
@@ -499,7 +499,7 @@ The :ref:`HDFStores <io.hdf5>` docs
`Merging on-disk tables with millions of rows
<http://stackoverflow.com/questions/14614512/merging-two-tables-with-millions-of-rows-in-python/14617925#14617925>`__
-Deduplicating a large store by chunks, essentially a recursive reduction operation. Shows a function for taking in data from
+De-duplicating a large store by chunks, essentially a recursive reduction operation. Shows a function for taking in data from
csv file and creating a store by chunks, with date parsing as well.
`See here
<http://stackoverflow.com/questions/16110252/need-to-compare-very-large-files-around-1-5gb-in-python/16110391#16110391>`__
diff --git a/doc/source/dsintro.rst b/doc/source/dsintro.rst
index 7c43a03e68013..928de285982cf 100644
--- a/doc/source/dsintro.rst
+++ b/doc/source/dsintro.rst
@@ -118,7 +118,7 @@ provided. The value will be repeated to match the length of **index**
Series is ndarray-like
~~~~~~~~~~~~~~~~~~~~~~
-``Series`` acts very similary to a ``ndarray``, and is a valid argument to most NumPy functions.
+``Series`` acts very similarly to a ``ndarray``, and is a valid argument to most NumPy functions.
However, things like slicing also slice the index.
.. ipython :: python
@@ -474,7 +474,7 @@ DataFrame:
For a more exhaustive treatment of more sophisticated label-based indexing and
slicing, see the :ref:`section on indexing <indexing>`. We will address the
-fundamentals of reindexing / conforming to new sets of lables in the
+fundamentals of reindexing / conforming to new sets of labels in the
:ref:`section on reindexing <basics.reindexing>`.
Data alignment and arithmetic
@@ -892,7 +892,7 @@ Slicing
~~~~~~~
Slicing works in a similar manner to a Panel. ``[]`` slices the first dimension.
-``.ix`` allows you to slice abitrarily and get back lower dimensional objects
+``.ix`` allows you to slice arbitrarily and get back lower dimensional objects
.. ipython:: python
diff --git a/doc/source/enhancingperf.rst b/doc/source/enhancingperf.rst
index 00c76632ce17b..e6b735173110b 100644
--- a/doc/source/enhancingperf.rst
+++ b/doc/source/enhancingperf.rst
@@ -553,7 +553,7 @@ standard Python.
:func:`pandas.eval` Parsers
~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-There are two different parsers and and two different engines you can use as
+There are two different parsers and two different engines you can use as
the backend.
The default ``'pandas'`` parser allows a more intuitive syntax for expressing
diff --git a/doc/source/faq.rst b/doc/source/faq.rst
index 81bebab46dac9..a613d53218ce2 100644
--- a/doc/source/faq.rst
+++ b/doc/source/faq.rst
@@ -144,7 +144,7 @@ Frequency conversion
Frequency conversion is implemented using the ``resample`` method on TimeSeries
and DataFrame objects (multiple time series). ``resample`` also works on panels
-(3D). Here is some code that resamples daily data to montly:
+(3D). Here is some code that resamples daily data to monthly:
.. ipython:: python
diff --git a/doc/source/gotchas.rst b/doc/source/gotchas.rst
index 0078ffb506cc9..438e2f79c5ff3 100644
--- a/doc/source/gotchas.rst
+++ b/doc/source/gotchas.rst
@@ -183,7 +183,7 @@ Why not make NumPy like R?
~~~~~~~~~~~~~~~~~~~~~~~~~~
Many people have suggested that NumPy should simply emulate the ``NA`` support
-present in the more domain-specific statistical programming langauge `R
+present in the more domain-specific statistical programming language `R
<http://r-project.org>`__. Part of the reason is the NumPy type hierarchy:
.. csv-table::
@@ -500,7 +500,7 @@ parse HTML tables in the top-level pandas io function ``read_html``.
molasses. However consider the fact that many tables on the web are not
big enough for the parsing algorithm runtime to matter. It is more
likely that the bottleneck will be in the process of reading the raw
- text from the url over the web, i.e., IO (input-output). For very large
+ text from the URL over the web, i.e., IO (input-output). For very large
tables, this might not be true.
**Issues with using** |Anaconda|_
diff --git a/doc/source/groupby.rst b/doc/source/groupby.rst
index 22f1414c4f2b0..eaccbfddc1f86 100644
--- a/doc/source/groupby.rst
+++ b/doc/source/groupby.rst
@@ -969,7 +969,7 @@ Regroup columns of a DataFrame according to their sum, and sum the aggregated on
df.groupby(df.sum(), axis=1).sum()
-Returning a Series to propogate names
+Returning a Series to propagate names
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Group DataFrame columns, compute a set of metrics and return a named Series.
diff --git a/doc/source/indexing.rst b/doc/source/indexing.rst
index 84736d4989f6f..9c73c679f726a 100644
--- a/doc/source/indexing.rst
+++ b/doc/source/indexing.rst
@@ -88,10 +88,10 @@ of multi-axis indexing.
See more at :ref:`Selection by Position <indexing.integer>`
- ``.ix`` supports mixed integer and label based access. It is primarily label
- based, but will fallback to integer positional access. ``.ix`` is the most
+ based, but will fall back to integer positional access. ``.ix`` is the most
general and will support any of the inputs to ``.loc`` and ``.iloc``, as well
as support for floating point label schemes. ``.ix`` is especially useful
- when dealing with mixed positional and label based hierarchial indexes.
+ when dealing with mixed positional and label based hierarchical indexes.
As using integer slices with ``.ix`` have different behavior depending on
whether the slice is interpreted as position based or label based, it's
usually better to be explicit and use ``.iloc`` or ``.loc``.
@@ -230,7 +230,7 @@ new column.
- The ``Series/Panel`` accesses are available starting in 0.13.0.
If you are using the IPython environment, you may also use tab-completion to
-see these accessable attributes.
+see these accessible attributes.
Slicing ranges
--------------
@@ -328,7 +328,7 @@ For getting values with a boolean array
df1.loc['a']>0
df1.loc[:,df1.loc['a']>0]
-For getting a value explicity (equiv to deprecated ``df.get_value('a','A')``)
+For getting a value explicitly (equiv to deprecated ``df.get_value('a','A')``)
.. ipython:: python
@@ -415,7 +415,7 @@ For getting a cross section using an integer position (equiv to ``df.xs(1)``)
df1.iloc[1]
-There is one signficant departure from standard python/numpy slicing semantics.
+There is one significant departure from standard python/numpy slicing semantics.
python/numpy allow slicing past the end of an array without an associated error.
.. ipython:: python
@@ -494,7 +494,7 @@ out what you're asking for. If you only want to access a scalar value, the
fastest way is to use the ``at`` and ``iat`` methods, which are implemented on
all of the data structures.
-Similary to ``loc``, ``at`` provides **label** based scalar lookups, while, ``iat`` provides **integer** based lookups analagously to ``iloc``
+Similarly to ``loc``, ``at`` provides **label** based scalar lookups, while, ``iat`` provides **integer** based lookups analogously to ``iloc``
.. ipython:: python
@@ -643,7 +643,7 @@ To return a Series of the same shape as the original
s.where(s > 0)
-Selecting values from a DataFrame with a boolean critierion now also preserves
+Selecting values from a DataFrame with a boolean criterion now also preserves
input data shape. ``where`` is used under the hood as the implementation.
Equivalent is ``df.where(df < 0)``
@@ -690,7 +690,7 @@ without creating a copy:
**alignment**
Furthermore, ``where`` aligns the input boolean condition (ndarray or DataFrame),
-such that partial selection with setting is possible. This is analagous to
+such that partial selection with setting is possible. This is analogous to
partial setting via ``.ix`` (but on the contents rather than the axis labels)
.. ipython:: python
@@ -756,7 +756,7 @@ between the values of columns ``a`` and ``c``. For example:
# query
df.query('(a < b) & (b < c)')
-Do the same thing but fallback on a named index if there is no column
+Do the same thing but fall back on a named index if there is no column
with the name ``a``.
.. ipython:: python
@@ -899,7 +899,7 @@ The ``in`` and ``not in`` operators
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
:meth:`~pandas.DataFrame.query` also supports special use of Python's ``in`` and
-``not in`` comparison operators, providing a succint syntax for calling the
+``not in`` comparison operators, providing a succinct syntax for calling the
``isin`` method of a ``Series`` or ``DataFrame``.
.. ipython:: python
@@ -1416,7 +1416,7 @@ faster, and allows one to index *both* axes if so desired.
Why does the assignment when using chained indexing fail!
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-So, why does this show the ``SettingWithCopy`` warning / and possibly not work when you do chained indexing and assignement:
+So, why does this show the ``SettingWithCopy`` warning / and possibly not work when you do chained indexing and assignment:
.. code-block:: python
@@ -2149,7 +2149,7 @@ metadata, like the index ``name`` (or, for ``MultiIndex``, ``levels`` and
You can use the ``rename``, ``set_names``, ``set_levels``, and ``set_labels``
to set these attributes directly. They default to returning a copy; however,
-you can specify ``inplace=True`` to have the data change inplace.
+you can specify ``inplace=True`` to have the data change in place.
.. ipython:: python
diff --git a/doc/source/io.rst b/doc/source/io.rst
index cfa97ca0f3fef..fa6ab646a47c8 100644
--- a/doc/source/io.rst
+++ b/doc/source/io.rst
@@ -29,7 +29,7 @@
IO Tools (Text, CSV, HDF5, ...)
*******************************
-The pandas I/O api is a set of top level ``reader`` functions accessed like ``pd.read_csv()`` that generally return a ``pandas``
+The pandas I/O API is a set of top level ``reader`` functions accessed like ``pd.read_csv()`` that generally return a ``pandas``
object.
* :ref:`read_csv<io.read_csv_table>`
@@ -78,8 +78,8 @@ for some advanced strategies
They can take a number of arguments:
- - ``filepath_or_buffer``: Either a string path to a file, url
- (including http, ftp, and s3 locations), or any object with a ``read``
+ - ``filepath_or_buffer``: Either a string path to a file, URL
+ (including http, ftp, and S3 locations), or any object with a ``read``
method (such as an open file or ``StringIO``).
- ``sep`` or ``delimiter``: A delimiter / separator to split fields
on. `read_csv` is capable of inferring the delimiter automatically in some
@@ -511,7 +511,7 @@ data columns:
Date Parsing Functions
~~~~~~~~~~~~~~~~~~~~~~
Finally, the parser allows you can specify a custom ``date_parser`` function to
-take full advantage of the flexiblity of the date parsing API:
+take full advantage of the flexibility of the date parsing API:
.. ipython:: python
@@ -964,7 +964,7 @@ Reading columns with a ``MultiIndex``
By specifying list of row locations for the ``header`` argument, you
can read in a ``MultiIndex`` for the columns. Specifying non-consecutive
-rows will skip the interveaning rows. In order to have the pre-0.13 behavior
+rows will skip the intervening rows. In order to have the pre-0.13 behavior
of tupleizing columns, specify ``tupleize_cols=True``.
.. ipython:: python
@@ -1038,7 +1038,7 @@ rather than reading the entire file into memory, such as the following:
table
-By specifiying a ``chunksize`` to ``read_csv`` or ``read_table``, the return
+By specifying a ``chunksize`` to ``read_csv`` or ``read_table``, the return
value will be an iterable object of type ``TextFileReader``:
.. ipython:: python
@@ -1100,7 +1100,7 @@ function takes a number of arguments. Only the first is required.
used. (A sequence should be given if the DataFrame uses MultiIndex).
- ``mode`` : Python write mode, default 'w'
- ``encoding``: a string representing the encoding to use if the contents are
- non-ascii, for python versions prior to 3
+ non-ASCII, for python versions prior to 3
- ``line_terminator``: Character sequence denoting line end (default '\\n')
- ``quoting``: Set quoting rules as in csv module (default csv.QUOTE_MINIMAL)
- ``quotechar``: Character used to quote fields (default '"')
@@ -1184,7 +1184,7 @@ with optional parameters:
- ``double_precision`` : The number of decimal places to use when encoding floating point values, default 10.
- ``force_ascii`` : force encoded string to be ASCII, default True.
- ``date_unit`` : The time unit to encode to, governs timestamp and ISO8601 precision. One of 's', 'ms', 'us' or 'ns' for seconds, milliseconds, microseconds and nanoseconds respectively. Default 'ms'.
-- ``default_handler`` : The handler to call if an object cannot otherwise be converted to a suitable format for JSON. Takes a single argument, which is the object to convert, and returns a serialisable object.
+- ``default_handler`` : The handler to call if an object cannot otherwise be converted to a suitable format for JSON. Takes a single argument, which is the object to convert, and returns a serializable object.
Note ``NaN``'s, ``NaT``'s and ``None`` will be converted to ``null`` and ``datetime`` objects will be converted based on the ``date_format`` and ``date_unit`` parameters.
@@ -1208,7 +1208,7 @@ file / string. Consider the following DataFrame and Series:
sjo = Series(dict(x=15, y=16, z=17), name='D')
sjo
-**Column oriented** (the default for ``DataFrame``) serialises the data as
+**Column oriented** (the default for ``DataFrame``) serializes the data as
nested JSON objects with column labels acting as the primary index:
.. ipython:: python
@@ -1224,7 +1224,7 @@ but the index labels are now primary:
dfjo.to_json(orient="index")
sjo.to_json(orient="index")
-**Record oriented** serialises the data to a JSON array of column -> value records,
+**Record oriented** serializes the data to a JSON array of column -> value records,
index labels are not included. This is useful for passing DataFrame data to plotting
libraries, for example the JavaScript library d3.js:
@@ -1233,7 +1233,7 @@ libraries, for example the JavaScript library d3.js:
dfjo.to_json(orient="records")
sjo.to_json(orient="records")
-**Value oriented** is a bare-bones option which serialises to nested JSON arrays of
+**Value oriented** is a bare-bones option which serializes to nested JSON arrays of
values only, column and index labels are not included:
.. ipython:: python
@@ -1241,7 +1241,7 @@ values only, column and index labels are not included:
dfjo.to_json(orient="values")
# Not available for Series
-**Split oriented** serialises to a JSON object containing separate entries for
+**Split oriented** serializes to a JSON object containing separate entries for
values, index and columns. Name is also included for ``Series``:
.. ipython:: python
@@ -1252,13 +1252,13 @@ values, index and columns. Name is also included for ``Series``:
.. note::
Any orient option that encodes to a JSON object will not preserve the ordering of
- index and column labels during round-trip serialisation. If you wish to preserve
+ index and column labels during round-trip serialization. If you wish to preserve
label ordering use the `split` option as it uses ordered containers.
Date Handling
+++++++++++++
-Writing in iso date format
+Writing in ISO date format
.. ipython:: python
@@ -1268,7 +1268,7 @@ Writing in iso date format
json = dfd.to_json(date_format='iso')
json
-Writing in iso date format, with microseconds
+Writing in ISO date format, with microseconds
.. ipython:: python
@@ -1297,17 +1297,17 @@ Writing to a file, with a date index and a date column
Fallback Behavior
+++++++++++++++++
-If the JSON serialiser cannot handle the container contents directly it will fallback in the following manner:
+If the JSON serializer cannot handle the container contents directly it will fall back in the following manner:
- if a ``toDict`` method is defined by the unrecognised object then that
- will be called and its returned ``dict`` will be JSON serialised.
+ will be called and its returned ``dict`` will be JSON serialized.
- if a ``default_handler`` has been passed to ``to_json`` that will
be called to convert the object.
- otherwise an attempt is made to convert the object to a ``dict`` by
parsing its contents. However if the object is complex this will often fail
with an ``OverflowError``.
-Your best bet when encountering ``OverflowError`` during serialisation
+Your best bet when encountering ``OverflowError`` during serialization
is to specify a ``default_handler``. For example ``timedelta`` can cause
problems:
@@ -1346,10 +1346,10 @@ Reading JSON
Reading a JSON string to pandas object can take a number of parameters.
The parser will try to parse a ``DataFrame`` if ``typ`` is not supplied or
-is ``None``. To explicity force ``Series`` parsing, pass ``typ=series``
+is ``None``. To explicitly force ``Series`` parsing, pass ``typ=series``
- ``filepath_or_buffer`` : a **VALID** JSON string or file handle / StringIO. The string could be
- a URL. Valid URL schemes include http, ftp, s3, and file. For file URLs, a host
+ a URL. Valid URL schemes include http, ftp, S3, and file. For file URLs, a host
is expected. For instance, a local file could be
file ://localhost/path/to/table.json
- ``typ`` : type of object to recover (series or frame), default 'frame'
@@ -1377,8 +1377,8 @@ is ``None``. To explicity force ``Series`` parsing, pass ``typ=series``
- ``dtype`` : if True, infer dtypes, if a dict of column to dtype, then use those, if False, then don't infer dtypes at all, default is True, apply only to the data
- ``convert_axes`` : boolean, try to convert the axes to the proper dtypes, default is True
-- ``convert_dates`` : a list of columns to parse for dates; If True, then try to parse datelike columns, default is True
-- ``keep_default_dates`` : boolean, default True. If parsing dates, then parse the default datelike columns
+- ``convert_dates`` : a list of columns to parse for dates; If True, then try to parse date-like columns, default is True
+- ``keep_default_dates`` : boolean, default True. If parsing dates, then parse the default date-like columns
- ``numpy`` : direct decoding to numpy arrays. default is False;
Supports numeric data only, although labels may be non-numeric. Also note that the JSON ordering **MUST** be the same for each term if ``numpy=True``
- ``precise_float`` : boolean, default ``False``. Set to enable usage of higher precision (strtod) function when decoding string to double values. Default (``False``) is to use fast but less precise builtin functionality
@@ -1387,7 +1387,7 @@ is ``None``. To explicity force ``Series`` parsing, pass ``typ=series``
then pass one of 's', 'ms', 'us' or 'ns' to force timestamp precision to
seconds, milliseconds, microseconds or nanoseconds respectively.
-The parser will raise one of ``ValueError/TypeError/AssertionError`` if the JSON is not parsable.
+The parser will raise one of ``ValueError/TypeError/AssertionError`` if the JSON is not parseable.
If a non-default ``orient`` was used when encoding to JSON be sure to pass the same
option here so that decoding produces sensible results, see `Orient Options`_ for an
@@ -1438,7 +1438,7 @@ Specify dtypes for conversion:
pd.read_json('test.json', dtype={'A' : 'float32', 'bools' : 'int8'}).dtypes
-Preserve string indicies:
+Preserve string indices:
.. ipython:: python
@@ -1480,7 +1480,7 @@ The Numpy Parameter
This supports numeric data only. Index and columns labels may be non-numeric, e.g. strings, dates etc.
If ``numpy=True`` is passed to ``read_json`` an attempt will be made to sniff
-an appropriate dtype during deserialisation and to subsequently decode directly
+an appropriate dtype during deserialization and to subsequently decode directly
to numpy arrays, bypassing the need for intermediate Python objects.
This can provide speedups if you are deserialising a large amount of numeric
@@ -1502,7 +1502,7 @@ data:
timeit read_json(jsonfloats, numpy=True)
-The speedup is less noticable for smaller datasets:
+The speedup is less noticeable for smaller datasets:
.. ipython:: python
@@ -1586,7 +1586,7 @@ Reading HTML Content
.. versionadded:: 0.12.0
The top-level :func:`~pandas.io.html.read_html` function can accept an HTML
-string/file/url and will parse HTML tables into list of pandas DataFrames.
+string/file/URL and will parse HTML tables into list of pandas DataFrames.
Let's look at a few examples.
.. note::
@@ -2381,7 +2381,7 @@ hierarchical path-name like format (e.g. ``foo/bar/bah``), which will
generate a hierarchy of sub-stores (or ``Groups`` in PyTables
parlance). Keys can be specified with out the leading '/' and are ALWAYS
absolute (e.g. 'foo' refers to '/foo'). Removal operations can remove
-everying in the sub-store and BELOW, so be *careful*.
+everything in the sub-store and BELOW, so be *careful*.
.. ipython:: python
@@ -2516,7 +2516,7 @@ The ``indexers`` are on the left-hand side of the sub-expression:
- ``columns``, ``major_axis``, ``ts``
-The right-hand side of the sub-expression (after a comparsion operator) can be:
+The right-hand side of the sub-expression (after a comparison operator) can be:
- functions that will be evaluated, e.g. ``Timestamp('2012-02-01')``
- strings, e.g. ``"bar"``
@@ -2696,7 +2696,7 @@ be data_columns
# columns are stored separately as ``PyTables`` columns
store.root.df_dc.table
-There is some performance degredation by making lots of columns into
+There is some performance degradation by making lots of columns into
`data columns`, so it is up to the user to designate these. In addition,
you cannot change data columns (nor indexables) after the first
append/put operation (Of course you can simply read in the data and
@@ -2935,7 +2935,7 @@ after the fact.
- ``ptrepack --chunkshape=auto --propindexes --complevel=9 --complib=blosc in.h5 out.h5``
Furthermore ``ptrepack in.h5 out.h5`` will *repack* the file to allow
-you to reuse previously deleted space. Aalternatively, one can simply
+you to reuse previously deleted space. Alternatively, one can simply
remove the file and write again, or use the ``copy`` method.
.. _io.hdf5-notes:
@@ -2996,7 +2996,7 @@ Currently, ``unicode`` and ``datetime`` columns (represented with a
dtype of ``object``), **WILL FAIL**. In addition, even though a column
may look like a ``datetime64[ns]``, if it contains ``np.nan``, this
**WILL FAIL**. You can try to convert datetimelike columns to proper
-``datetime64[ns]`` columns, that possibily contain ``NaT`` to represent
+``datetime64[ns]`` columns, that possibly contain ``NaT`` to represent
invalid values. (Some of these issues have been addressed and these
conversion may not be necessary in future versions of pandas)
@@ -3025,7 +3025,7 @@ may introduce a string for a column **larger** than the column can hold, an Exce
could have a silent truncation of these columns, leading to loss of information). In the future we may relax this and
allow a user-specified truncation to occur.
-Pass ``min_itemsize`` on the first table creation to a-priori specifiy the minimum length of a particular string column.
+Pass ``min_itemsize`` on the first table creation to a-priori specify the minimum length of a particular string column.
``min_itemsize`` can be an integer, or a dict mapping a column name to an integer. You can pass ``values`` as a key to
allow all *indexables* or *data_columns* to have this min_itemsize.
@@ -3070,7 +3070,7 @@ External Compatibility
~~~~~~~~~~~~~~~~~~~~~~
``HDFStore`` write ``table`` format objects in specific formats suitable for
-producing loss-less roundtrips to pandas objects. For external
+producing loss-less round trips to pandas objects. For external
compatibility, ``HDFStore`` can read native ``PyTables`` format
tables. It is possible to write an ``HDFStore`` object that can easily
be imported into ``R`` using the ``rhdf5`` library. Create a table
@@ -3136,7 +3136,7 @@ Performance
generally longer as compared with regular stores. Query times can
be quite fast, especially on an indexed axis.
- You can pass ``chunksize=<int>`` to ``append``, specifying the
- write chunksize (default is 50000). This will signficantly lower
+ write chunksize (default is 50000). This will significantly lower
your memory usage on writing.
- You can pass ``expectedrows=<int>`` to the first ``append``,
to set the TOTAL number of expected rows that ``PyTables`` will
@@ -3304,7 +3304,7 @@ And you can explicitly force columns to be parsed as dates:
pd.read_sql_table('data', engine, parse_dates=['Date'])
-If needed you can explicitly specifiy a format string, or a dict of arguments
+If needed you can explicitly specify a format string, or a dict of arguments
to pass to :func:`pandas.to_datetime`:
.. code-block:: python
@@ -3456,7 +3456,7 @@ response code of Google BigQuery can be successful (200) even if the
append failed. For this reason, if there is a failure to append to the
table, the complete error response from BigQuery is returned which
can be quite long given it provides a status for each row. You may want
-to start with smaller chuncks to test that the size and types of your
+to start with smaller chunks to test that the size and types of your
dataframe match your destination table to make debugging simpler.
.. code-block:: python
@@ -3470,7 +3470,7 @@ The BigQuery SQL query language has some oddities, see `here <https://developers
While BigQuery uses SQL-like syntax, it has some important differences
from traditional databases both in functionality, API limitations (size and
-qunatity of queries or uploads), and how Google charges for use of the service.
+quantity of queries or uploads), and how Google charges for use of the service.
You should refer to Google documentation often as the service seems to
be changing and evolving. BiqQuery is best for analyzing large sets of
data quickly, but it is not a direct replacement for a transactional database.
@@ -3522,7 +3522,7 @@ converting them to a DataFrame which is returned:
Currently the ``index`` is retrieved as a column on read back.
-The parameter ``convert_categoricals`` indicates wheter value labels should be
+The parameter ``convert_categoricals`` indicates whether value labels should be
read and used to create a ``Categorical`` variable from them. Value labels can
also be retrieved by the function ``variable_labels``, which requires data to be
called before (see ``pandas.io.stata.StataReader``).
diff --git a/doc/source/missing_data.rst b/doc/source/missing_data.rst
index 9263eb2cedf9b..b0319c01b2737 100644
--- a/doc/source/missing_data.rst
+++ b/doc/source/missing_data.rst
@@ -548,7 +548,7 @@ will be replaced with a scalar (list of regex -> regex)
All of the regular expression examples can also be passed with the
``to_replace`` argument as the ``regex`` argument. In this case the ``value``
-argument must be passed explicity by name or ``regex`` must be a nested
+argument must be passed explicitly by name or ``regex`` must be a nested
dictionary. The previous example, in this case, would then be
.. ipython:: python
@@ -566,7 +566,7 @@ want to use a regular expression.
Numeric Replacement
~~~~~~~~~~~~~~~~~~~
-Similiar to ``DataFrame.fillna``
+Similar to ``DataFrame.fillna``
.. ipython:: python
:suppress:
diff --git a/doc/source/options.rst b/doc/source/options.rst
index 961797acb00aa..1e8517014bfc5 100644
--- a/doc/source/options.rst
+++ b/doc/source/options.rst
@@ -166,7 +166,7 @@ dataframes to stretch across pages, wrapped over the full column vs row-wise.
pd.reset_option('max_rows')
``display.max_columnwidth`` sets the maximum width of columns. Cells
-of this length or longer will be truncated with an elipsis.
+of this length or longer will be truncated with an ellipsis.
.. ipython:: python
diff --git a/doc/source/overview.rst b/doc/source/overview.rst
index 8e47466385e77..49a788def2854 100644
--- a/doc/source/overview.rst
+++ b/doc/source/overview.rst
@@ -18,7 +18,7 @@ Package overview
* Input/Output tools: loading tabular data from flat files (CSV, delimited,
Excel 2003), and saving and loading pandas objects from the fast and
efficient PyTables/HDF5 format.
- * Memory-efficent "sparse" versions of the standard data structures for storing
+ * Memory-efficient "sparse" versions of the standard data structures for storing
data that is mostly missing or mostly constant (some fixed value)
* Moving window statistics (rolling mean, rolling standard deviation, etc.)
* Static and moving window linear and `panel regression
diff --git a/doc/source/release.rst b/doc/source/release.rst
index e490cb330a497..9dc96219f42d9 100644
--- a/doc/source/release.rst
+++ b/doc/source/release.rst
@@ -301,8 +301,8 @@ Improvements to existing features
limit precision based on the values in the array (:issue:`3401`)
- ``pd.show_versions()`` is now available for convenience when reporting issues.
- perf improvements to Series.str.extract (:issue:`5944`)
-- perf improvments in ``dtypes/ftypes`` methods (:issue:`5968`)
-- perf improvments in indexing with object dtypes (:issue:`5968`)
+- perf improvements in ``dtypes/ftypes`` methods (:issue:`5968`)
+- perf improvements in indexing with object dtypes (:issue:`5968`)
- improved dtype inference for ``timedelta`` like passed to constructors (:issue:`5458`, :issue:`5689`)
- escape special characters when writing to latex (:issue: `5374`)
- perf improvements in ``DataFrame.apply`` (:issue:`6013`)
@@ -329,7 +329,7 @@ Bug Fixes
- Bug in groupby dtype conversion with datetimelike (:issue:`5869`)
- Regression in handling of empty Series as indexers to Series (:issue:`5877`)
- Bug in internal caching, related to (:issue:`5727`)
-- Testing bug in reading json/msgpack from a non-filepath on windows under py3 (:issue:`5874`)
+- Testing bug in reading JSON/msgpack from a non-filepath on windows under py3 (:issue:`5874`)
- Bug when assigning to .ix[tuple(...)] (:issue:`5896`)
- Bug in fully reindexing a Panel (:issue:`5905`)
- Bug in idxmin/max with object dtypes (:issue:`5914`)
@@ -337,7 +337,7 @@ Bug Fixes
- Bug in assigning to chained series with a series via ix (:issue:`5928`)
- Bug in creating an empty DataFrame, copying, then assigning (:issue:`5932`)
- Bug in DataFrame.tail with empty frame (:issue:`5846`)
-- Bug in propogating metadata on ``resample`` (:issue:`5862`)
+- Bug in propagating metadata on ``resample`` (:issue:`5862`)
- Fixed string-representation of ``NaT`` to be "NaT" (:issue:`5708`)
- Fixed string-representation for Timestamp to show nanoseconds if present (:issue:`5912`)
- ``pd.match`` not returning passed sentinel
@@ -638,7 +638,7 @@ API Changes
- support ``timedelta64[ns]`` as a serialization type (:issue:`3577`)
- store `datetime.date` objects as ordinals rather then timetuples to avoid
timezone issues (:issue:`2852`), thanks @tavistmorph and @numpand
- - ``numexpr`` 2.2.2 fixes incompatiblity in PyTables 2.4 (:issue:`4908`)
+ - ``numexpr`` 2.2.2 fixes incompatibility in PyTables 2.4 (:issue:`4908`)
- ``flush`` now accepts an ``fsync`` parameter, which defaults to ``False``
(:issue:`5364`)
- ``unicode`` indices not supported on ``table`` formats (:issue:`5386`)
@@ -649,7 +649,7 @@ API Changes
Options are seconds, milliseconds, microseconds and nanoseconds.
(:issue:`4362`, :issue:`4498`).
- added ``default_handler`` parameter to allow a callable to be passed
- which will be responsible for handling otherwise unserialisable objects.
+   which will be responsible for handling otherwise unserializable objects.
(:issue:`5138`)
- ``Index`` and ``MultiIndex`` changes (:issue:`4039`):
@@ -723,7 +723,7 @@ API Changes
``SparsePanel``, etc.), now support the entire set of arithmetic operators
and arithmetic flex methods (add, sub, mul, etc.). ``SparsePanel`` does not
support ``pow`` or ``mod`` with non-scalars. (:issue:`3765`)
-- Arithemtic func factories are now passed real names (suitable for using
+- Arithmetic func factories are now passed real names (suitable for using
with super) (:issue:`5240`)
- Provide numpy compatibility with 1.7 for a calling convention like
``np.prod(pandas_object)`` as numpy call with additional keyword args
@@ -802,7 +802,7 @@ See :ref:`Internal Refactoring<whatsnew_0130.refactoring>`
- ``swapaxes`` on a ``Panel`` with the same axes specified now return a copy
- support attribute access for setting
- - ``filter`` supports same api as original ``DataFrame`` filter
+ - ``filter`` supports same API as original ``DataFrame`` filter
- ``fillna`` refactored to ``core/generic.py``, while > 3ndim is
``NotImplemented``
@@ -836,7 +836,7 @@ See :ref:`Internal Refactoring<whatsnew_0130.refactoring>`
- added ``ftypes`` method to Series/DataFame, similar to ``dtypes``, but
indicates if the underlying is sparse/dense (as well as the dtype)
- All ``NDFrame`` objects now have a ``_prop_attributes``, which can be used
- to indcated various values to propogate to a new object from an existing
+ to indicate various values to propagate to a new object from an existing
(e.g. name in ``Series`` will follow more automatically now)
- Internal type checking is now done via a suite of generated classes,
allowing ``isinstance(value, klass)`` without having to directly import the
@@ -855,7 +855,7 @@ See :ref:`Internal Refactoring<whatsnew_0130.refactoring>`
elements (:issue:`1903`)
- Refactor ``clip`` methods to core/generic.py (:issue:`4798`)
- Refactor of ``_get_numeric_data/_get_bool_data`` to core/generic.py,
- allowing Series/Panel functionaility
+ allowing Series/Panel functionality
- Refactor of Series arithmetic with time-like objects
(datetime/timedelta/time etc.) into a separate, cleaned up wrapper class.
(:issue:`4613`)
@@ -927,7 +927,7 @@ Bug Fixes
as the docstring says (:issue:`4362`).
- ``as_index`` is no longer ignored when doing groupby apply (:issue:`4648`,
:issue:`3417`)
-- JSON NaT handling fixed, NaTs are now serialised to `null` (:issue:`4498`)
+- JSON NaT handling fixed, NaTs are now serialized to `null` (:issue:`4498`)
- Fixed JSON handling of escapable characters in JSON object keys
(:issue:`4593`)
- Fixed passing ``keep_default_na=False`` when ``na_values=None``
@@ -1086,7 +1086,7 @@ Bug Fixes
- Fix a bug where reshaping a ``Series`` to its own shape raised
``TypeError`` (:issue:`4554`) and other reshaping issues.
- Bug in setting with ``ix/loc`` and a mixed int/string index (:issue:`4544`)
-- Make sure series-series boolean comparions are label based (:issue:`4947`)
+- Make sure series-series boolean comparisons are label based (:issue:`4947`)
- Bug in multi-level indexing with a Timestamp partial indexer
(:issue:`4294`)
- Tests/fix for multi-index construction of an all-nan frame (:issue:`4078`)
@@ -1096,7 +1096,7 @@ Bug Fixes
ordering of returned tables (:issue:`4770`, :issue:`5029`).
- Fixed a bug where :func:`~pandas.read_html` was incorrectly parsing when
passed ``index_col=0`` (:issue:`5066`).
-- Fixed a bug where :func:`~pandas.read_html` was incorrectly infering the
+- Fixed a bug where :func:`~pandas.read_html` was incorrectly inferring the
type of headers (:issue:`5048`).
- Fixed a bug where ``DatetimeIndex`` joins with ``PeriodIndex`` caused a
stack overflow (:issue:`3899`).
@@ -1203,7 +1203,7 @@ New Features
- Added support for writing in ``to_csv`` and reading in ``read_csv``,
multi-index columns. The ``header`` option in ``read_csv`` now accepts a
list of the rows from which to read the index. Added the option,
- ``tupleize_cols`` to provide compatiblity for the pre 0.12 behavior of
+ ``tupleize_cols`` to provide compatibility for the pre 0.12 behavior of
writing and reading multi-index columns via a list of tuples. The default in
0.12 is to write lists of tuples and *not* interpret list of tuples as a
multi-index column.
@@ -1250,7 +1250,7 @@ Improvements to existing features
:issue:`3572`, :issue:`3911`, :issue:`3912`), but they will try to convert object
arrays to numeric arrays if possible so that you can still plot, for example, an
object array with floats. This happens before any drawing takes place which
- elimnates any spurious plots from showing up.
+ eliminates any spurious plots from showing up.
- Added Faq section on repr display options, to help users customize their setup.
- ``where`` operations that result in block splitting are much faster (:issue:`3733`)
- Series and DataFrame hist methods now take a ``figsize`` argument (:issue:`3834`)
@@ -1258,7 +1258,7 @@ Improvements to existing features
operations (:issue:`3877`)
- Add ``unit`` keyword to ``Timestamp`` and ``to_datetime`` to enable passing of
integers or floats that are in an epoch unit of ``D, s, ms, us, ns``, thanks @mtkini (:issue:`3969`)
- (e.g. unix timestamps or epoch ``s``, with fracional seconds allowed) (:issue:`3540`)
+ (e.g. unix timestamps or epoch ``s``, with fractional seconds allowed) (:issue:`3540`)
- DataFrame corr method (spearman) is now cythonized.
- Improved ``network`` test decorator to catch ``IOError`` (and therefore
``URLError`` as well). Added ``with_connectivity_check`` decorator to allow
@@ -1296,7 +1296,7 @@ API Changes
``timedelta64[ns]`` to ``object/int`` (:issue:`3425`)
- The behavior of ``datetime64`` dtypes has changed with respect to certain
so-called reduction operations (:issue:`3726`). The following operations now
- raise a ``TypeError`` when perfomed on a ``Series`` and return an *empty*
+ raise a ``TypeError`` when performed on a ``Series`` and return an *empty*
``Series`` when performed on a ``DataFrame`` similar to performing these
operations on, for example, a ``DataFrame`` of ``slice`` objects:
- sum, prod, mean, std, var, skew, kurt, corr, and cov
@@ -1335,7 +1335,7 @@ API Changes
deprecated
- set FutureWarning to require data_source, and to replace year/month with
expiry date in pandas.io options. This is in preparation to add options
- data from google (:issue:`3822`)
+ data from Google (:issue:`3822`)
- the ``method`` and ``axis`` arguments of ``DataFrame.replace()`` are
deprecated
- Implement ``__nonzero__`` for ``NDFrame`` objects (:issue:`3691`, :issue:`3696`)
@@ -1452,13 +1452,13 @@ Bug Fixes
their first argument (:issue:`3702`)
- Fix file tokenization error with \r delimiter and quoted fields (:issue:`3453`)
- Groupby transform with item-by-item not upcasting correctly (:issue:`3740`)
-- Incorrectly read a HDFStore multi-index Frame witha column specification (:issue:`3748`)
+- Incorrectly read a HDFStore multi-index Frame with a column specification (:issue:`3748`)
- ``read_html`` now correctly skips tests (:issue:`3741`)
- PandasObjects raise TypeError when trying to hash (:issue:`3882`)
- Fix incorrect arguments passed to concat that are not list-like (e.g. concat(df1,df2)) (:issue:`3481`)
- Correctly parse when passed the ``dtype=str`` (or other variable-len string dtypes)
in ``read_csv`` (:issue:`3795`)
-- Fix index name not propogating when using ``loc/ix`` (:issue:`3880`)
+- Fix index name not propagating when using ``loc/ix`` (:issue:`3880`)
- Fix groupby when applying a custom function resulting in a returned DataFrame was
not converting dtypes (:issue:`3911`)
- Fixed a bug where ``DataFrame.replace`` with a compiled regular expression
@@ -1468,7 +1468,7 @@ Bug Fixes
- Indexing with a string with seconds resolution not selecting from a time index (:issue:`3925`)
- csv parsers would loop infinitely if ``iterator=True`` but no ``chunksize`` was
specified (:issue:`3967`), python parser failing with ``chunksize=1``
-- Fix index name not propogating when using ``shift``
+- Fix index name not propagating when using ``shift``
- Fixed dropna=False being ignored with multi-index stack (:issue:`3997`)
- Fixed flattening of columns when renaming MultiIndex columns DataFrame (:issue:`4004`)
- Fix ``Series.clip`` for datetime series. NA/NaN threshold values will now throw ValueError (:issue:`3996`)
@@ -1523,17 +1523,17 @@ New Features
- New documentation section, ``10 Minutes to Pandas``
- New documentation section, ``Cookbook``
-- Allow mixed dtypes (e.g ``float32/float64/int32/int16/int8``) to coexist in DataFrames and propogate in operations
+- Allow mixed dtypes (e.g ``float32/float64/int32/int16/int8``) to coexist in DataFrames and propagate in operations
- Add function to pandas.io.data for retrieving stock index components from Yahoo! finance (:issue:`2795`)
- Support slicing with time objects (:issue:`2681`)
- Added ``.iloc`` attribute, to support strict integer based indexing, analogous to ``.ix`` (:issue:`2922`)
-- Added ``.loc`` attribute, to support strict label based indexing, analagous to ``.ix`` (:issue:`3053`)
+- Added ``.loc`` attribute, to support strict label based indexing, analogous to ``.ix`` (:issue:`3053`)
- Added ``.iat`` attribute, to support fast scalar access via integers (replaces ``iget_value/iset_value``)
- Added ``.at`` attribute, to support fast scalar access via labels (replaces ``get_value/set_value``)
-- Moved functionaility from ``irow,icol,iget_value/iset_value`` to ``.iloc`` indexer (via ``_ixs`` methods in each object)
+- Moved functionality from ``irow,icol,iget_value/iset_value`` to ``.iloc`` indexer (via ``_ixs`` methods in each object)
- Added support for expression evaluation using the ``numexpr`` library
- Added ``convert=boolean`` to ``take`` routines to translate negative indices to positive, defaults to True
-- Added to_series() method to indices, to facilitate the creation of indexeres (:issue:`3275`)
+- Added to_series() method to indices, to facilitate the creation of indexers (:issue:`3275`)
Improvements to existing features
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
@@ -1760,7 +1760,7 @@ Bug Fixes
- Fixed a bug in the legend of plotting.andrews_curves() (:issue:`3278`)
- Produce a series on apply if we only generate a singular series and have
a simple index (:issue:`2893`)
-- Fix Python ascii file parsing when integer falls outside of floating point
+- Fix Python ASCII file parsing when integer falls outside of floating point
spacing (:issue:`3258`)
- fixed pretty priniting of sets (:issue:`3294`)
- Panel() and Panel.from_dict() now respects ordering when give OrderedDict (:issue:`3303`)
@@ -1783,7 +1783,7 @@ pandas 0.10.1
New Features
~~~~~~~~~~~~
-- Add data inferface to World Bank WDI pandas.io.wb (:issue:`2592`)
+- Add data interface to World Bank WDI pandas.io.wb (:issue:`2592`)
API Changes
~~~~~~~~~~~
@@ -1822,7 +1822,7 @@ Improvements to existing features
- added method ``copy`` to copy an existing store (and possibly upgrade)
- show the shape of the data on disk for non-table stores when printing the
store
- - added ability to read PyTables flavor tables (allows compatiblity to
+ - added ability to read PyTables flavor tables (allows compatibility to
other HDF5 systems)
- Add ``logx`` option to DataFrame/Series.plot (:issue:`2327`, :issue:`2565`)
@@ -1837,7 +1837,7 @@ Improvements to existing features
- Add methods ``neg`` and ``inv`` to Series
- Implement ``kind`` option in ``ExcelFile`` to indicate whether it's an XLS
or XLSX file (:issue:`2613`)
-- Documented a fast-path in pd.read_Csv when parsing iso8601 datetime strings
+- Documented a fast-path in pd.read_csv when parsing iso8601 datetime strings
yielding as much as a 20x speedup. (:issue:`5993`)
@@ -1955,7 +1955,7 @@ New Features
Experimental Features
~~~~~~~~~~~~~~~~~~~~~
-- Add support for Panel4D, a named 4 Dimensional stucture
+- Add support for Panel4D, a named 4 Dimensional structure
- Add support for ndpanel factory functions, to create custom,
domain-specific N-Dimensional containers
@@ -2008,7 +2008,7 @@ Improvements to existing features
- Add ``normalize`` option to Series/DataFrame.asfreq (:issue:`2137`)
- SparseSeries and SparseDataFrame construction from empty and scalar
values now no longer create dense ndarrays unnecessarily (:issue:`2322`)
-- ``HDFStore`` now supports hierarchial keys (:issue:`2397`)
+- ``HDFStore`` now supports hierarchical keys (:issue:`2397`)
- Support multiple query selection formats for ``HDFStore tables`` (:issue:`1996`)
- Support ``del store['df']`` syntax to delete HDFStores
- Add multi-dtype support for ``HDFStore tables``
@@ -2077,7 +2077,7 @@ Bug Fixes
- Fix DataFrame row indexing case with MultiIndex (:issue:`2314`)
- Fix to_excel exporting issues with Timestamp objects in index (:issue:`2294`)
- Fixes assigning scalars and array to hierarchical column chunk (:issue:`1803`)
-- Fixed a UnicdeDecodeError with series tidy_repr (:issue:`2225`)
+- Fixed a UnicodeDecodeError with series tidy_repr (:issue:`2225`)
- Fixed issued with duplicate keys in an index (:issue:`2347`, :issue:`2380`)
- Fixed issues re: Hash randomization, default on starting w/ py3.3 (:issue:`2331`)
- Fixed issue with missing attributes after loading a pickled dataframe (:issue:`2431`)
@@ -2783,7 +2783,7 @@ Bug Fixes
(:issue:`1013`)
- DataFrame.plot(logy=True) has no effect (:issue:`1011`).
- Broken arithmetic operations between SparsePanel-Panel (:issue:`1015`)
-- Unicode repr issues in MultiIndex with non-ascii characters (:issue:`1010`)
+- Unicode repr issues in MultiIndex with non-ASCII characters (:issue:`1010`)
- DataFrame.lookup() returns inconsistent results if exact match not present
(:issue:`1001`)
- DataFrame arithmetic operations not treating None as NA (:issue:`992`)
@@ -2794,7 +2794,7 @@ Bug Fixes
- DataFrame.plot(kind='bar') ignores color argument (:issue:`958`)
- Inconsistent Index comparison results (:issue:`948`)
- Improper int dtype DataFrame construction from data with NaN (:issue:`846`)
-- Removes default 'result' name in grouby results (:issue:`995`)
+- Removes default 'result' name in groupby results (:issue:`995`)
- DataFrame.from_records no longer mutate input columns (:issue:`975`)
- Use Index name when grouping by it (:issue:`1313`)
@@ -3866,7 +3866,7 @@ pandas 0.4.1
**Release date:** 9/25/2011
is is primarily a bug fix release but includes some new features and
-provements
+improvements
New Features
~~~~~~~~~~~~
diff --git a/doc/source/rplot.rst b/doc/source/rplot.rst
index cdecee39d8d1e..46b57cea2d9ed 100644
--- a/doc/source/rplot.rst
+++ b/doc/source/rplot.rst
@@ -99,7 +99,7 @@ The plot above shows that it is possible to have two or more plots for the same
@savefig rplot4_tips.png
plot.render(plt.gcf())
-Above is a similar plot but with 2D kernel desnity estimation plot superimposed.
+Above is a similar plot but with 2D kernel density estimation plot superimposed.
.. ipython:: python
diff --git a/doc/source/timeseries.rst b/doc/source/timeseries.rst
index 76bc796beced8..cbfb20c6f9d7d 100644
--- a/doc/source/timeseries.rst
+++ b/doc/source/timeseries.rst
@@ -379,9 +379,9 @@ We are stopping on the included end-point as its part of the index
Datetime Indexing
~~~~~~~~~~~~~~~~~
-Indexing a ``DateTimeIndex`` with a partial string depends on the "accuracy" of the period, in other words how specific the interval is in relation to the frequency of the index. In contrast, indexing with datetime objects is exact, because the objects have exact meaning. These also follow the sematics of *including both endpoints*.
+Indexing a ``DateTimeIndex`` with a partial string depends on the "accuracy" of the period, in other words how specific the interval is in relation to the frequency of the index. In contrast, indexing with datetime objects is exact, because the objects have exact meaning. These also follow the semantics of *including both endpoints*.
-These ``datetime`` objects are specific ``hours, minutes,`` and ``seconds`` even though they were not explicity specified (they are ``0``).
+These ``datetime`` objects are specific ``hours, minutes,`` and ``seconds`` even though they were not explicitly specified (they are ``0``).
.. ipython:: python
@@ -1460,7 +1460,7 @@ Series of timedeltas with ``NaT`` values are supported
y = s - s.shift()
y
-Elements can be set to ``NaT`` using ``np.nan`` analagously to datetimes
+Elements can be set to ``NaT`` using ``np.nan`` analogously to datetimes
.. ipython:: python
diff --git a/doc/source/visualization.rst b/doc/source/visualization.rst
index 630e40c4ebfa2..69e04483cb47d 100644
--- a/doc/source/visualization.rst
+++ b/doc/source/visualization.rst
@@ -317,7 +317,7 @@ The return type of ``boxplot`` depends on two keyword arguments: ``by`` and ``re
When ``by`` is ``None``:
* if ``return_type`` is ``'dict'``, a dictionary containing the :class:`matplotlib Lines <matplotlib.lines.Line2D>` is returned. The keys are "boxes", "caps", "fliers", "medians", and "whiskers".
- This is the deafult.
+ This is the default.
* if ``return_type`` is ``'axes'``, a :class:`matplotlib Axes <matplotlib.axes.Axes>` containing the boxplot is returned.
* if ``return_type`` is ``'both'`` a namedtuple containging the :class:`matplotlib Axes <matplotlib.axes.Axes>`
and :class:`matplotlib Lines <matplotlib.lines.Line2D>` is returned
@@ -763,7 +763,7 @@ layout and formatting of the returned plot:
plt.figure(); ts.plot(style='k--', label='Series');
For each kind of plot (e.g. `line`, `bar`, `scatter`) any additional arguments
-keywords are passed alogn to the corresponding matplotlib function
+keywords are passed along to the corresponding matplotlib function
(:meth:`ax.plot() <matplotlib.axes.Axes.plot>`,
:meth:`ax.bar() <matplotlib.axes.Axes.bar>`,
:meth:`ax.scatter() <matplotlib.axes.Axes.scatter>`). These can be used
| https://api.github.com/repos/pandas-dev/pandas/pulls/7745 | 2014-07-13T11:23:38Z | 2014-07-13T12:37:47Z | 2014-07-13T12:37:47Z | 2014-07-13T12:37:47Z | |
Update common.py | diff --git a/pandas/core/common.py b/pandas/core/common.py
index bb7f43511e905..9c20d1a62d9a4 100644
--- a/pandas/core/common.py
+++ b/pandas/core/common.py
@@ -1435,7 +1435,8 @@ def _interp_limit(invalid, limit):
"""mask off values that won't be filled since they exceed the limit"""
all_nans = np.where(invalid)[0]
violate = [invalid[x:x + limit + 1] for x in all_nans]
- violate = np.array([x.all() & (x.size > limit) for x in violate])
+ violate = np.array([x.all() & (x.size > limit) for x in violate],
+ dtype=bool)
return all_nans[violate] + limit
xvalues = getattr(xvalues, 'values', xvalues)
| Force the `violate` array to be of type bool. When no NaN values are found, the array defaults to an empty array of type float, so you get an error: ``IndexError: arrays used as indices must be of integer (or boolean) type``.
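The failure mode is reproducible outside pandas with a minimal numpy sketch (variable names mirror the patch, but this is an illustration, not the pandas code itself):

```python
import numpy as np

# When np.where() finds no True entries, the downstream comprehension is
# empty, and np.array([]) defaults to dtype float64.
all_nans = np.where(np.array([False, False, False]))[0]  # empty int array
violate = np.array([])                                   # dtype float64!

try:
    all_nans[violate]  # float array used as an index
except IndexError as exc:
    print(exc)  # arrays used as indices must be of integer (or boolean) type

# Forcing dtype=bool, as in the patch, turns the empty case into a
# harmless empty boolean mask instead of an invalid float index.
violate = np.array([], dtype=bool)
print(all_nans[violate])  # empty result, no error
```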
| https://api.github.com/repos/pandas-dev/pandas/pulls/7743 | 2014-07-13T09:07:45Z | 2014-08-05T17:10:19Z | null | 2014-08-05T17:10:19Z |
spell fix: seperated -> separated | diff --git a/doc/source/basics.rst b/doc/source/basics.rst
index 4d67616c5cd60..14942c6f0f194 100644
--- a/doc/source/basics.rst
+++ b/doc/source/basics.rst
@@ -1253,11 +1253,11 @@ Methods like ``match``, ``contains``, ``startswith``, and ``endswith`` take
``upper``,Equivalent to ``str.upper``
-Getting indicator variables from seperated strings
+Getting indicator variables from separated strings
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
You can extract dummy variables from string columns.
-For example if they are seperated by a ``'|'``:
+For example if they are separated by a ``'|'``:
.. ipython:: python
| https://api.github.com/repos/pandas-dev/pandas/pulls/7742 | 2014-07-13T08:51:19Z | 2014-07-13T11:47:37Z | 2014-07-13T11:47:37Z | 2014-07-13T12:13:22Z | |
ENH/BUG: DatetimeIndex and PeriodIndex in-place ops behaves incorrectly | diff --git a/doc/source/v0.15.0.txt b/doc/source/v0.15.0.txt
index 06c93541a7783..086c24246918d 100644
--- a/doc/source/v0.15.0.txt
+++ b/doc/source/v0.15.0.txt
@@ -198,6 +198,10 @@ Bug Fixes
- Bug in ``DataFrame.as_matrix()`` with mixed ``datetime64[ns]`` and ``timedelta64[ns]`` dtypes (:issue:`7778`)
- Bug in ``HDFStore.select_column()`` not preserving UTC timezone info when selecting a DatetimeIndex (:issue:`7777`)
+- Bug in ``DatetimeIndex`` and ``PeriodIndex`` in-place addition and subtraction cause different result from normal one (:issue:`6527`)
+- Bug in adding and subtracting ``PeriodIndex`` with ``PeriodIndex`` raise ``TypeError`` (:issue:`7741`)
+- Bug in ``combine_first`` with ``PeriodIndex`` data raises ``TypeError`` (:issue:`3367`)
+
- Bug in pickles contains ``DateOffset`` may raise ``AttributeError`` when ``normalize`` attribute is reffered internally (:issue:`7748`)
diff --git a/pandas/core/base.py b/pandas/core/base.py
index 4035627b98458..243e34e35784a 100644
--- a/pandas/core/base.py
+++ b/pandas/core/base.py
@@ -1,6 +1,8 @@
"""
Base and utility classes for pandas objects.
"""
+import datetime
+
from pandas import compat
import numpy as np
from pandas.core import common as com
@@ -511,4 +513,34 @@ def resolution(self):
from pandas.tseries.frequencies import get_reso_string
return get_reso_string(self._resolution)
+ def __add__(self, other):
+ from pandas.core.index import Index
+ from pandas.tseries.offsets import DateOffset
+ if isinstance(other, Index):
+ return self.union(other)
+ elif isinstance(other, (DateOffset, datetime.timedelta, np.timedelta64)):
+ return self._add_delta(other)
+ elif com.is_integer(other):
+ return self.shift(other)
+ else: # pragma: no cover
+ return NotImplemented
+
+ def __sub__(self, other):
+ from pandas.core.index import Index
+ from pandas.tseries.offsets import DateOffset
+ if isinstance(other, Index):
+ return self.diff(other)
+ elif isinstance(other, (DateOffset, datetime.timedelta, np.timedelta64)):
+ return self._add_delta(-other)
+ elif com.is_integer(other):
+ return self.shift(-other)
+ else: # pragma: no cover
+ return NotImplemented
+
+ __iadd__ = __add__
+ __isub__ = __sub__
+
+ def _add_delta(self, other):
+ return NotImplemented
+
diff --git a/pandas/tests/test_base.py b/pandas/tests/test_base.py
index 761d79a288df3..1b7db1451f6cf 100644
--- a/pandas/tests/test_base.py
+++ b/pandas/tests/test_base.py
@@ -481,6 +481,8 @@ def test_factorize(self):
class TestDatetimeIndexOps(Ops):
_allowed = '_allow_datetime_index_ops'
+ tz = [None, 'UTC', 'Asia/Tokyo', 'US/Eastern',
+ 'dateutil/Asia/Singapore', 'dateutil/US/Pacific']
def setUp(self):
super(TestDatetimeIndexOps, self).setUp()
@@ -545,7 +547,7 @@ def test_asobject_tolist(self):
self.assertEqual(idx.tolist(), expected_list)
def test_minmax(self):
- for tz in [None, 'Asia/Tokyo', 'US/Eastern']:
+ for tz in self.tz:
# monotonic
idx1 = pd.DatetimeIndex([pd.NaT, '2011-01-01', '2011-01-02',
'2011-01-03'], tz=tz)
@@ -613,6 +615,100 @@ def test_resolution(self):
idx = pd.date_range(start='2013-04-01', periods=30, freq=freq, tz=tz)
self.assertEqual(idx.resolution, expected)
+ def test_add_iadd(self):
+ for tz in self.tz:
+ # union
+ rng1 = pd.date_range('1/1/2000', freq='D', periods=5, tz=tz)
+ other1 = pd.date_range('1/6/2000', freq='D', periods=5, tz=tz)
+ expected1 = pd.date_range('1/1/2000', freq='D', periods=10, tz=tz)
+
+ rng2 = pd.date_range('1/1/2000', freq='D', periods=5, tz=tz)
+ other2 = pd.date_range('1/4/2000', freq='D', periods=5, tz=tz)
+ expected2 = pd.date_range('1/1/2000', freq='D', periods=8, tz=tz)
+
+ rng3 = pd.date_range('1/1/2000', freq='D', periods=5, tz=tz)
+ other3 = pd.DatetimeIndex([], tz=tz)
+ expected3 = pd.date_range('1/1/2000', freq='D', periods=5, tz=tz)
+
+ for rng, other, expected in [(rng1, other1, expected1), (rng2, other2, expected2),
+ (rng3, other3, expected3)]:
+ result_add = rng + other
+ result_union = rng.union(other)
+
+ tm.assert_index_equal(result_add, expected)
+ tm.assert_index_equal(result_union, expected)
+ rng += other
+ tm.assert_index_equal(rng, expected)
+
+ # offset
+ if _np_version_under1p7:
+ offsets = [pd.offsets.Hour(2), timedelta(hours=2)]
+ else:
+ offsets = [pd.offsets.Hour(2), timedelta(hours=2), np.timedelta64(2, 'h')]
+
+ for delta in offsets:
+ rng = pd.date_range('2000-01-01', '2000-02-01', tz=tz)
+ result = rng + delta
+ expected = pd.date_range('2000-01-01 02:00', '2000-02-01 02:00', tz=tz)
+ tm.assert_index_equal(result, expected)
+ rng += delta
+ tm.assert_index_equal(rng, expected)
+
+ # int
+ rng = pd.date_range('2000-01-01 09:00', freq='H', periods=10, tz=tz)
+ result = rng + 1
+ expected = pd.date_range('2000-01-01 10:00', freq='H', periods=10, tz=tz)
+ tm.assert_index_equal(result, expected)
+ rng += 1
+ tm.assert_index_equal(rng, expected)
+
+ def test_sub_isub(self):
+ for tz in self.tz:
+ # diff
+ rng1 = pd.date_range('1/1/2000', freq='D', periods=5, tz=tz)
+ other1 = pd.date_range('1/6/2000', freq='D', periods=5, tz=tz)
+ expected1 = pd.date_range('1/1/2000', freq='D', periods=5, tz=tz)
+
+ rng2 = pd.date_range('1/1/2000', freq='D', periods=5, tz=tz)
+ other2 = pd.date_range('1/4/2000', freq='D', periods=5, tz=tz)
+ expected2 = pd.date_range('1/1/2000', freq='D', periods=3, tz=tz)
+
+ rng3 = pd.date_range('1/1/2000', freq='D', periods=5, tz=tz)
+ other3 = pd.DatetimeIndex([], tz=tz)
+ expected3 = pd.date_range('1/1/2000', freq='D', periods=5, tz=tz)
+
+ for rng, other, expected in [(rng1, other1, expected1), (rng2, other2, expected2),
+ (rng3, other3, expected3)]:
+ result_add = rng - other
+ result_union = rng.diff(other)
+
+ tm.assert_index_equal(result_add, expected)
+ tm.assert_index_equal(result_union, expected)
+ rng -= other
+ tm.assert_index_equal(rng, expected)
+
+ # offset
+ if _np_version_under1p7:
+ offsets = [pd.offsets.Hour(2), timedelta(hours=2)]
+ else:
+ offsets = [pd.offsets.Hour(2), timedelta(hours=2), np.timedelta64(2, 'h')]
+
+ for delta in offsets:
+ rng = pd.date_range('2000-01-01', '2000-02-01', tz=tz)
+ result = rng - delta
+ expected = pd.date_range('1999-12-31 22:00', '2000-01-31 22:00', tz=tz)
+ tm.assert_index_equal(result, expected)
+ rng -= delta
+ tm.assert_index_equal(rng, expected)
+
+ # int
+ rng = pd.date_range('2000-01-01 09:00', freq='H', periods=10, tz=tz)
+ result = rng - 1
+ expected = pd.date_range('2000-01-01 08:00', freq='H', periods=10, tz=tz)
+ tm.assert_index_equal(result, expected)
+ rng -= 1
+ tm.assert_index_equal(rng, expected)
+
class TestPeriodIndexOps(Ops):
_allowed = '_allow_period_index_ops'
@@ -745,6 +841,133 @@ def test_resolution(self):
idx = pd.period_range(start='2013-04-01', periods=30, freq=freq)
self.assertEqual(idx.resolution, expected)
+ def test_add_iadd(self):
+ # union
+ rng1 = pd.period_range('1/1/2000', freq='D', periods=5)
+ other1 = pd.period_range('1/6/2000', freq='D', periods=5)
+ expected1 = pd.period_range('1/1/2000', freq='D', periods=10)
+
+ rng2 = pd.period_range('1/1/2000', freq='D', periods=5)
+ other2 = pd.period_range('1/4/2000', freq='D', periods=5)
+ expected2 = pd.period_range('1/1/2000', freq='D', periods=8)
+
+ rng3 = pd.period_range('1/1/2000', freq='D', periods=5)
+ other3 = pd.PeriodIndex([], freq='D')
+ expected3 = pd.period_range('1/1/2000', freq='D', periods=5)
+
+ rng4 = pd.period_range('2000-01-01 09:00', freq='H', periods=5)
+ other4 = pd.period_range('2000-01-02 09:00', freq='H', periods=5)
+ expected4 = pd.PeriodIndex(['2000-01-01 09:00', '2000-01-01 10:00',
+ '2000-01-01 11:00', '2000-01-01 12:00',
+ '2000-01-01 13:00', '2000-01-02 09:00',
+ '2000-01-02 10:00', '2000-01-02 11:00',
+ '2000-01-02 12:00', '2000-01-02 13:00'],
+ freq='H')
+
+ rng5 = pd.PeriodIndex(['2000-01-01 09:01', '2000-01-01 09:03',
+ '2000-01-01 09:05'], freq='T')
+ other5 = pd.PeriodIndex(['2000-01-01 09:01', '2000-01-01 09:05'
+ '2000-01-01 09:08'], freq='T')
+ expected5 = pd.PeriodIndex(['2000-01-01 09:01', '2000-01-01 09:03',
+ '2000-01-01 09:05', '2000-01-01 09:08'],
+ freq='T')
+
+ rng6 = pd.period_range('2000-01-01', freq='M', periods=7)
+ other6 = pd.period_range('2000-04-01', freq='M', periods=7)
+ expected6 = pd.period_range('2000-01-01', freq='M', periods=10)
+
+ rng7 = pd.period_range('2003-01-01', freq='A', periods=5)
+ other7 = pd.period_range('1998-01-01', freq='A', periods=8)
+ expected7 = pd.period_range('1998-01-01', freq='A', periods=10)
+
+ for rng, other, expected in [(rng1, other1, expected1), (rng2, other2, expected2),
+ (rng3, other3, expected3), (rng4, other4, expected4),
+ (rng5, other5, expected5), (rng6, other6, expected6),
+ (rng7, other7, expected7)]:
+
+ result_add = rng + other
+ result_union = rng.union(other)
+
+ tm.assert_index_equal(result_add, expected)
+ tm.assert_index_equal(result_union, expected)
+ # GH 6527
+ rng += other
+ tm.assert_index_equal(rng, expected)
+
+ # offset
+ for delta in [pd.offsets.Hour(2), timedelta(hours=2)]:
+ rng = pd.period_range('2000-01-01', '2000-02-01')
+ with tm.assertRaisesRegexp(TypeError, 'unsupported operand type\(s\)'):
+ result = rng + delta
+ with tm.assertRaisesRegexp(TypeError, 'unsupported operand type\(s\)'):
+ rng += delta
+
+ # int
+ rng = pd.period_range('2000-01-01 09:00', freq='H', periods=10)
+ result = rng + 1
+ expected = pd.period_range('2000-01-01 10:00', freq='H', periods=10)
+ tm.assert_index_equal(result, expected)
+ rng += 1
+ tm.assert_index_equal(rng, expected)
+
+ def test_sub_isub(self):
+ # diff
+ rng1 = pd.period_range('1/1/2000', freq='D', periods=5)
+ other1 = pd.period_range('1/6/2000', freq='D', periods=5)
+ expected1 = pd.period_range('1/1/2000', freq='D', periods=5)
+
+ rng2 = pd.period_range('1/1/2000', freq='D', periods=5)
+ other2 = pd.period_range('1/4/2000', freq='D', periods=5)
+ expected2 = pd.period_range('1/1/2000', freq='D', periods=3)
+
+ rng3 = pd.period_range('1/1/2000', freq='D', periods=5)
+ other3 = pd.PeriodIndex([], freq='D')
+ expected3 = pd.period_range('1/1/2000', freq='D', periods=5)
+
+ rng4 = pd.period_range('2000-01-01 09:00', freq='H', periods=5)
+ other4 = pd.period_range('2000-01-02 09:00', freq='H', periods=5)
+ expected4 = rng4
+
+ rng5 = pd.PeriodIndex(['2000-01-01 09:01', '2000-01-01 09:03',
+ '2000-01-01 09:05'], freq='T')
+ other5 = pd.PeriodIndex(['2000-01-01 09:01', '2000-01-01 09:05'], freq='T')
+ expected5 = pd.PeriodIndex(['2000-01-01 09:03'], freq='T')
+
+ rng6 = pd.period_range('2000-01-01', freq='M', periods=7)
+ other6 = pd.period_range('2000-04-01', freq='M', periods=7)
+ expected6 = pd.period_range('2000-01-01', freq='M', periods=3)
+
+ rng7 = pd.period_range('2003-01-01', freq='A', periods=5)
+ other7 = pd.period_range('1998-01-01', freq='A', periods=8)
+ expected7 = pd.period_range('2006-01-01', freq='A', periods=2)
+
+ for rng, other, expected in [(rng1, other1, expected1), (rng2, other2, expected2),
+ (rng3, other3, expected3), (rng4, other4, expected4),
+ (rng5, other5, expected5), (rng6, other6, expected6),
+ (rng7, other7, expected7),]:
+ result_add = rng - other
+ result_union = rng.diff(other)
+
+ tm.assert_index_equal(result_add, expected)
+ tm.assert_index_equal(result_union, expected)
+ rng -= other
+ tm.assert_index_equal(rng, expected)
+
+ # offset
+ for delta in [pd.offsets.Hour(2), timedelta(hours=2)]:
+ with tm.assertRaisesRegexp(TypeError, 'unsupported operand type\(s\)'):
+ result = rng + delta
+ with tm.assertRaisesRegexp(TypeError, 'unsupported operand type\(s\)'):
+ rng += delta
+
+ # int
+ rng = pd.period_range('2000-01-01 09:00', freq='H', periods=10)
+ result = rng - 1
+ expected = pd.period_range('2000-01-01 08:00', freq='H', periods=10)
+ tm.assert_index_equal(result, expected)
+ rng -= 1
+ tm.assert_index_equal(rng, expected)
+
if __name__ == '__main__':
import nose
diff --git a/pandas/tseries/index.py b/pandas/tseries/index.py
index 9423037844e74..2a3c53135a644 100644
--- a/pandas/tseries/index.py
+++ b/pandas/tseries/index.py
@@ -595,30 +595,6 @@ def __setstate__(self, state):
else: # pragma: no cover
np.ndarray.__setstate__(self, state)
- def __add__(self, other):
- if isinstance(other, Index):
- return self.union(other)
- elif isinstance(other, (DateOffset, timedelta)):
- return self._add_delta(other)
- elif isinstance(other, np.timedelta64):
- return self._add_delta(other)
- elif com.is_integer(other):
- return self.shift(other)
- else: # pragma: no cover
- raise TypeError(other)
-
- def __sub__(self, other):
- if isinstance(other, Index):
- return self.diff(other)
- elif isinstance(other, (DateOffset, timedelta)):
- return self._add_delta(-other)
- elif isinstance(other, np.timedelta64):
- return self._add_delta(-other)
- elif com.is_integer(other):
- return self.shift(-other)
- else: # pragma: no cover
- raise TypeError(other)
-
def _add_delta(self, delta):
if isinstance(delta, (Tick, timedelta)):
inc = offsets._delta_to_nanoseconds(delta)
diff --git a/pandas/tseries/period.py b/pandas/tseries/period.py
index 8c4bb2f5adc5e..887bf806dd4e4 100644
--- a/pandas/tseries/period.py
+++ b/pandas/tseries/period.py
@@ -872,19 +872,6 @@ def shift(self, n):
values[mask] = tslib.iNaT
return PeriodIndex(data=values, name=self.name, freq=self.freq)
- def __add__(self, other):
- try:
- return self.shift(other)
- except TypeError:
- # self.values + other raises TypeError for invalid input
- return NotImplemented
-
- def __sub__(self, other):
- try:
- return self.shift(-other)
- except TypeError:
- return NotImplemented
-
@property
def inferred_type(self):
# b/c data is represented as ints make sure we can't have ambiguous
diff --git a/pandas/tseries/tests/test_period.py b/pandas/tseries/tests/test_period.py
index 53375b4d07796..f5f66a49c29d4 100644
--- a/pandas/tseries/tests/test_period.py
+++ b/pandas/tseries/tests/test_period.py
@@ -2450,6 +2450,20 @@ def test_recreate_from_data(self):
idx = PeriodIndex(org.values, freq=o)
self.assertTrue(idx.equals(org))
+ def test_combine_first(self):
+ # GH 3367
+ didx = pd.DatetimeIndex(start='1950-01-31', end='1950-07-31', freq='M')
+ pidx = pd.PeriodIndex(start=pd.Period('1950-1'), end=pd.Period('1950-7'), freq='M')
+ # check to be consistent with DatetimeIndex
+ for idx in [didx, pidx]:
+ a = pd.Series([1, np.nan, np.nan, 4, 5, np.nan, 7], index=idx)
+ b = pd.Series([9, 9, 9, 9, 9, 9, 9], index=idx)
+
+ result = a.combine_first(b)
+ expected = pd.Series([1, 9, 9, 4, 5, 9, 7], index=idx, dtype=np.float64)
+ tm.assert_series_equal(result, expected)
+
+
def _permute(obj):
return obj.take(np.random.permutation(len(obj)))
diff --git a/pandas/tseries/tests/test_timeseries.py b/pandas/tseries/tests/test_timeseries.py
index b6761426edc5d..f2bc66f156c75 100644
--- a/pandas/tseries/tests/test_timeseries.py
+++ b/pandas/tseries/tests/test_timeseries.py
@@ -1235,13 +1235,6 @@ def test_last_subset(self):
result = ts[:0].last('3M')
assert_series_equal(result, ts[:0])
- def test_add_offset(self):
- rng = date_range('1/1/2000', '2/1/2000')
-
- result = rng + offsets.Hour(2)
- expected = date_range('1/1/2000 02:00', '2/1/2000 02:00')
- self.assertTrue(result.equals(expected))
-
def test_format_pre_1900_dates(self):
rng = date_range('1/1/1850', '1/1/1950', freq='A-DEC')
rng.format()
@@ -2314,14 +2307,6 @@ def test_map(self):
exp = [f(x) for x in rng]
self.assert_numpy_array_equal(result, exp)
- def test_add_union(self):
- rng = date_range('1/1/2000', periods=5)
- rng2 = date_range('1/6/2000', periods=5)
-
- result = rng + rng2
- expected = rng.union(rng2)
- self.assertTrue(result.equals(expected))
-
def test_misc_coverage(self):
rng = date_range('1/1/2000', periods=5)
result = rng.groupby(rng.day)
Fixes 2 issues related to `DatetimeIndex` and `PeriodIndex` ops.
- Addition / subtraction between `PeriodIndex` objects raises `TypeError` (Closes #3367).
```
pidx + pidx
# TypeError: unsupported operand type(s) for +: 'PeriodIndex' and 'PeriodIndex'
```
- In-place addition / subtraction doesn't return the same result as normal addition / subtraction. Specifically, `PeriodIndex` in-place operation results in `Int64Index` (Closes #6527)
```
didx = pd.date_range('2011-01-01', freq='D', periods=5)
# This results in a shift (expected)
didx + 1
# <class 'pandas.tseries.index.DatetimeIndex'>
# [2011-01-02, ..., 2011-01-06]
# Length: 5, Freq: D, Timezone: None
# This adds 1 unit (nanosecond) instead (wrong)
didx += 1
didx
# <class 'pandas.tseries.index.DatetimeIndex'>
# [2011-01-01 00:00:00.000000001, ..., 2011-01-05 00:00:00.000000001]
# Length: 5, Freq: None, Timezone: None
```
| https://api.github.com/repos/pandas-dev/pandas/pulls/7741 | 2014-07-13T00:14:25Z | 2014-07-23T21:28:50Z | 2014-07-23T21:28:50Z | 2014-07-25T20:42:29Z |
BUG: _flex_binary_moment() doesn't preserve column order or handle multiple columns with the same label | diff --git a/doc/source/v0.15.0.txt b/doc/source/v0.15.0.txt
index 1c05c01633b15..da96d1e359454 100644
--- a/doc/source/v0.15.0.txt
+++ b/doc/source/v0.15.0.txt
@@ -265,7 +265,7 @@ Bug Fixes
-- Bug in repeated timeseries line and area plot may result in ``ValueError`` or incorrect kind (:issue:`7733`)
+- Bug in repeated timeseries line and area plot may result in ``ValueError`` or incorrect kind (:issue:`7733`)
@@ -278,7 +278,10 @@ Bug Fixes
- Bug in ``DataFrame.plot`` with ``subplots=True`` may draw unnecessary minor xticks and yticks (:issue:`7801`)
- Bug in ``StataReader`` which did not read variable labels in 117 files due to difference between Stata documentation and implementation (:issue:`7816`)
-
+- Bug in ``expanding_cov``, ``expanding_corr``, ``rolling_cov``, ``rolling_cov``, ``ewmcov``, and ``ewmcorr``
+ returning results with columns sorted by name and producing an error for non-unique columns;
+ now handles non-unique columns and returns columns in original order
+ (except for the case of two DataFrames with ``pairwise=False``, where behavior is unchanged) (:issue:`7542`)
diff --git a/pandas/stats/moments.py b/pandas/stats/moments.py
index 6f06255c7262d..a62d8178385cc 100644
--- a/pandas/stats/moments.py
+++ b/pandas/stats/moments.py
@@ -259,38 +259,55 @@ def _flex_binary_moment(arg1, arg2, f, pairwise=False):
isinstance(arg2, (np.ndarray,Series)):
X, Y = _prep_binary(arg1, arg2)
return f(X, Y)
+
elif isinstance(arg1, DataFrame):
+ def dataframe_from_int_dict(data, frame_template):
+ result = DataFrame(data, index=frame_template.index)
+ result.columns = frame_template.columns[result.columns]
+ return result
+
results = {}
if isinstance(arg2, DataFrame):
- X, Y = arg1.align(arg2, join='outer')
if pairwise is False:
- X = X + 0 * Y
- Y = Y + 0 * X
- res_columns = arg1.columns.union(arg2.columns)
- for col in res_columns:
- if col in X and col in Y:
- results[col] = f(X[col], Y[col])
+ if arg1 is arg2:
+ # special case in order to handle duplicate column names
+ for i, col in enumerate(arg1.columns):
+ results[i] = f(arg1.iloc[:, i], arg2.iloc[:, i])
+ return dataframe_from_int_dict(results, arg1)
+ else:
+ if not arg1.columns.is_unique:
+ raise ValueError("'arg1' columns are not unique")
+ if not arg2.columns.is_unique:
+ raise ValueError("'arg2' columns are not unique")
+ X, Y = arg1.align(arg2, join='outer')
+ X = X + 0 * Y
+ Y = Y + 0 * X
+ res_columns = arg1.columns.union(arg2.columns)
+ for col in res_columns:
+ if col in X and col in Y:
+ results[col] = f(X[col], Y[col])
+ return DataFrame(results, index=X.index, columns=res_columns)
elif pairwise is True:
results = defaultdict(dict)
for i, k1 in enumerate(arg1.columns):
for j, k2 in enumerate(arg2.columns):
if j<i and arg2 is arg1:
# Symmetric case
- results[k1][k2] = results[k2][k1]
+ results[i][j] = results[j][i]
else:
- results[k1][k2] = f(*_prep_binary(arg1[k1], arg2[k2]))
- return Panel.from_dict(results).swapaxes('items', 'major')
+ results[i][j] = f(*_prep_binary(arg1.iloc[:, i], arg2.iloc[:, j]))
+ p = Panel.from_dict(results).swapaxes('items', 'major')
+ p.major_axis = arg1.columns[p.major_axis]
+ p.minor_axis = arg2.columns[p.minor_axis]
+ return p
else:
raise ValueError("'pairwise' is not True/False")
else:
- res_columns = arg1.columns
- X, Y = arg1.align(arg2, axis=0, join='outer')
results = {}
+ for i, col in enumerate(arg1.columns):
+ results[i] = f(*_prep_binary(arg1.iloc[:, i], arg2))
+ return dataframe_from_int_dict(results, arg1)
- for col in res_columns:
- results[col] = f(X[col], Y)
-
- return DataFrame(results, index=X.index, columns=res_columns)
else:
return _flex_binary_moment(arg2, arg1, f)
diff --git a/pandas/stats/tests/test_moments.py b/pandas/stats/tests/test_moments.py
index 7124eaf6fb797..4b5bb042e1fc7 100644
--- a/pandas/stats/tests/test_moments.py
+++ b/pandas/stats/tests/test_moments.py
@@ -8,7 +8,7 @@
from pandas import Series, DataFrame, Panel, bdate_range, isnull, notnull
from pandas.util.testing import (
- assert_almost_equal, assert_series_equal, assert_frame_equal, assert_panel_equal
+ assert_almost_equal, assert_series_equal, assert_frame_equal, assert_panel_equal, assert_index_equal
)
import pandas.core.datetools as datetools
import pandas.stats.moments as mom
@@ -970,6 +970,119 @@ def test_expanding_corr_pairwise_diff_length(self):
assert_frame_equal(result2, expected)
assert_frame_equal(result3, expected)
assert_frame_equal(result4, expected)
+
+ def test_pairwise_stats_column_names_order(self):
+ # GH 7738
+ df1s = [DataFrame([[2,4],[1,2],[5,2],[8,1]], columns=[0,1]),
+ DataFrame([[2,4],[1,2],[5,2],[8,1]], columns=[1,0]),
+ DataFrame([[2,4],[1,2],[5,2],[8,1]], columns=[1,1]),
+ DataFrame([[2,4],[1,2],[5,2],[8,1]], columns=['C','C']),
+ DataFrame([[2,4],[1,2],[5,2],[8,1]], columns=[1.,0]),
+ DataFrame([[2,4],[1,2],[5,2],[8,1]], columns=[0.,1]),
+ DataFrame([[2,4],[1,2],[5,2],[8,1]], columns=['C',1]),
+ DataFrame([[2.,4.],[1.,2.],[5.,2.],[8.,1.]], columns=[1,0.]),
+ DataFrame([[2,4.],[1,2.],[5,2.],[8,1.]], columns=[0,1.]),
+ DataFrame([[2,4],[1,2],[5,2],[8,1.]], columns=[1.,'X']),
+ ]
+ df2 = DataFrame([[None,1,1],[None,1,2],[None,3,2],[None,8,1]], columns=['Y','Z','X'])
+ s = Series([1,1,3,8])
+
+ # DataFrame methods (which do not call _flex_binary_moment())
+ for f in [lambda x: x.cov(),
+ lambda x: x.corr(),
+ ]:
+ results = [f(df) for df in df1s]
+ for (df, result) in zip(df1s, results):
+ assert_index_equal(result.index, df.columns)
+ assert_index_equal(result.columns, df.columns)
+ for i, result in enumerate(results):
+ if i > 0:
+ self.assert_numpy_array_equivalent(result, results[0])
+
+ # DataFrame with itself, pairwise=True
+ for f in [lambda x: mom.expanding_cov(x, pairwise=True),
+ lambda x: mom.expanding_corr(x, pairwise=True),
+ lambda x: mom.rolling_cov(x, window=3, pairwise=True),
+ lambda x: mom.rolling_corr(x, window=3, pairwise=True),
+ lambda x: mom.ewmcov(x, com=3, pairwise=True),
+ lambda x: mom.ewmcorr(x, com=3, pairwise=True),
+ ]:
+ results = [f(df) for df in df1s]
+ for (df, result) in zip(df1s, results):
+ assert_index_equal(result.items, df.index)
+ assert_index_equal(result.major_axis, df.columns)
+ assert_index_equal(result.minor_axis, df.columns)
+ for i, result in enumerate(results):
+ if i > 0:
+ self.assert_numpy_array_equivalent(result, results[0])
+
+ # DataFrame with itself, pairwise=False
+ for f in [lambda x: mom.expanding_cov(x, pairwise=False),
+ lambda x: mom.expanding_corr(x, pairwise=False),
+ lambda x: mom.rolling_cov(x, window=3, pairwise=False),
+ lambda x: mom.rolling_corr(x, window=3, pairwise=False),
+ lambda x: mom.ewmcov(x, com=3, pairwise=False),
+ lambda x: mom.ewmcorr(x, com=3, pairwise=False),
+ ]:
+ results = [f(df) for df in df1s]
+ for (df, result) in zip(df1s, results):
+ assert_index_equal(result.index, df.index)
+ assert_index_equal(result.columns, df.columns)
+ for i, result in enumerate(results):
+ if i > 0:
+ self.assert_numpy_array_equivalent(result, results[0])
+
+ # DataFrame with another DataFrame, pairwise=True
+ for f in [lambda x, y: mom.expanding_cov(x, y, pairwise=True),
+ lambda x, y: mom.expanding_corr(x, y, pairwise=True),
+ lambda x, y: mom.rolling_cov(x, y, window=3, pairwise=True),
+ lambda x, y: mom.rolling_corr(x, y, window=3, pairwise=True),
+ lambda x, y: mom.ewmcov(x, y, com=3, pairwise=True),
+ lambda x, y: mom.ewmcorr(x, y, com=3, pairwise=True),
+ ]:
+ results = [f(df, df2) for df in df1s]
+ for (df, result) in zip(df1s, results):
+ assert_index_equal(result.items, df.index)
+ assert_index_equal(result.major_axis, df.columns)
+ assert_index_equal(result.minor_axis, df2.columns)
+ for i, result in enumerate(results):
+ if i > 0:
+ self.assert_numpy_array_equivalent(result, results[0])
+
+ # DataFrame with another DataFrame, pairwise=False
+ for f in [lambda x, y: mom.expanding_cov(x, y, pairwise=False),
+ lambda x, y: mom.expanding_corr(x, y, pairwise=False),
+ lambda x, y: mom.rolling_cov(x, y, window=3, pairwise=False),
+ lambda x, y: mom.rolling_corr(x, y, window=3, pairwise=False),
+ lambda x, y: mom.ewmcov(x, y, com=3, pairwise=False),
+ lambda x, y: mom.ewmcorr(x, y, com=3, pairwise=False),
+ ]:
+ results = [f(df, df2) if df.columns.is_unique else None for df in df1s]
+ for (df, result) in zip(df1s, results):
+ if result is not None:
+ expected_index = df.index.union(df2.index)
+ expected_columns = df.columns.union(df2.columns)
+ assert_index_equal(result.index, expected_index)
+ assert_index_equal(result.columns, expected_columns)
+ else:
+ tm.assertRaisesRegexp(ValueError, "'arg1' columns are not unique", f, df, df2)
+ tm.assertRaisesRegexp(ValueError, "'arg2' columns are not unique", f, df2, df)
+
+ # DataFrame with a Series
+ for f in [lambda x, y: mom.expanding_cov(x, y),
+ lambda x, y: mom.expanding_corr(x, y),
+ lambda x, y: mom.rolling_cov(x, y, window=3),
+ lambda x, y: mom.rolling_corr(x, y, window=3),
+ lambda x, y: mom.ewmcov(x, y, com=3),
+ lambda x, y: mom.ewmcorr(x, y, com=3),
+ ]:
+ results = [f(df, s) for df in df1s] + [f(s, df) for df in df1s]
+ for (df, result) in zip(df1s, results):
+ assert_index_equal(result.index, df.index)
+ assert_index_equal(result.columns, df.columns)
+ for i, result in enumerate(results):
+ if i > 0:
+ self.assert_numpy_array_equivalent(result, results[0])
def test_rolling_skew_edge_cases(self):
| Closes https://github.com/pydata/pandas/issues/7542.
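The ordering guarantee is easiest to check with current pandas (the `pandas.stats.moments` functions patched here were later superseded by the `.rolling()`/`.expanding()` accessors); a sketch with deliberately unsorted integer column labels:

```python
import pandas as pd

# Column labels deliberately not in sorted order
df = pd.DataFrame([[2, 4], [1, 2], [5, 2], [8, 1]], columns=[1, 0])

# Plain pairwise covariance keeps the original column order ...
result = df.cov()
assert list(result.columns) == [1, 0]
assert list(result.index) == [1, 0]

# ... and so does the expanding pairwise version, the modern descendant of
# mom.expanding_cov(df, pairwise=True) (a Panel in 2014, now a DataFrame
# with a MultiIndexed row axis).
pair = df.expanding().cov()
assert list(pair.columns) == [1, 0]
```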
| https://api.github.com/repos/pandas-dev/pandas/pulls/7738 | 2014-07-12T18:57:33Z | 2014-07-25T14:34:07Z | 2014-07-25T14:34:07Z | 2014-09-10T00:12:39Z |
ENH: Use left._constructor on pd.merge | diff --git a/doc/source/merging.rst b/doc/source/merging.rst
index 04fb0b0695f8f..55bbf613b33cf 100644
--- a/doc/source/merging.rst
+++ b/doc/source/merging.rst
@@ -376,6 +376,10 @@ Here's a description of what each argument is for:
can be avoided are somewhat pathological but this option is provided
nonetheless.
+The return type will be the same as ``left``. If ``left`` is a ``DataFrame``
+and ``right`` is a subclass of DataFrame, the return type will still be
+``DataFrame``.
+
``merge`` is a function in the pandas namespace, and it is also available as a
DataFrame instance method, with the calling DataFrame being implicitly
considered the left object in the join.
diff --git a/doc/source/v0.15.0.txt b/doc/source/v0.15.0.txt
index 148cf85d0b5ab..7a9ba2ed6e53d 100644
--- a/doc/source/v0.15.0.txt
+++ b/doc/source/v0.15.0.txt
@@ -166,6 +166,9 @@ previously results in ``Exception`` or ``TypeError`` (:issue:`7812`)
- ``DataFrame.tz_localize`` and ``DataFrame.tz_convert`` now accepts an optional ``level`` argument
for localizing a specific level of a MultiIndex (:issue:`7846`)
+- ``merge``, ``DataFrame.merge``, and ``ordered_merge`` now return the same type
+ as the ``left`` argument. (:issue:`7737`)
+
.. _whatsnew_0150.dt:
.dt accessor
diff --git a/pandas/core/frame.py b/pandas/core/frame.py
index 3979ae76f14c3..352ac52281c54 100644
--- a/pandas/core/frame.py
+++ b/pandas/core/frame.py
@@ -135,6 +135,8 @@
Returns
-------
merged : DataFrame
+ The output type will the be same as 'left', if it is a subclass
+ of DataFrame.
"""
#----------------------------------------------------------------------
diff --git a/pandas/tools/merge.py b/pandas/tools/merge.py
index ee594ef031e82..3a5c191148fe6 100644
--- a/pandas/tools/merge.py
+++ b/pandas/tools/merge.py
@@ -106,6 +106,8 @@ def ordered_merge(left, right, on=None, left_by=None, right_by=None,
Returns
-------
merged : DataFrame
+        The output type will be the same as 'left', if it is a subclass
+ of DataFrame.
"""
def _merger(x, y):
op = _OrderedMerge(x, y, on=on, left_on=left_on, right_on=right_on,
@@ -198,7 +200,8 @@ def get_result(self):
axes=[llabels.append(rlabels), join_index],
concat_axis=0, copy=self.copy)
- result = DataFrame(result_data).__finalize__(self, method='merge')
+ typ = self.left._constructor
+ result = typ(result_data).__finalize__(self, method='merge')
self._maybe_add_join_keys(result, left_indexer, right_indexer)
@@ -520,7 +523,8 @@ def get_result(self):
axes=[llabels.append(rlabels), join_index],
concat_axis=0, copy=self.copy)
- result = DataFrame(result_data)
+ typ = self.left._constructor
+ result = typ(result_data).__finalize__(self, method='ordered_merge')
self._maybe_add_join_keys(result, left_indexer, right_indexer)
diff --git a/pandas/tools/tests/test_merge.py b/pandas/tools/tests/test_merge.py
index df2f270346e20..6985da233ed58 100644
--- a/pandas/tools/tests/test_merge.py
+++ b/pandas/tools/tests/test_merge.py
@@ -781,6 +781,16 @@ def test_merge_nan_right(self):
1: nan}})[['i1', 'i2', 'i1_', 'i3']]
assert_frame_equal(result, expected)
+ def test_merge_type(self):
+ class NotADataFrame(DataFrame):
+ @property
+ def _constructor(self):
+ return NotADataFrame
+
+ nad = NotADataFrame(self.df)
+ result = nad.merge(self.df2, on='key1')
+
+ tm.assert_isinstance(result, NotADataFrame)
def test_append_dtype_coerce(self):
@@ -2154,6 +2164,18 @@ def test_multigroup(self):
result = ordered_merge(left, self.right, on='key', left_by='group')
self.assertTrue(result['group'].notnull().all())
+ def test_merge_type(self):
+ class NotADataFrame(DataFrame):
+ @property
+ def _constructor(self):
+ return NotADataFrame
+
+ nad = NotADataFrame(self.left)
+ result = nad.merge(self.right, on='key')
+
+ tm.assert_isinstance(result, NotADataFrame)
+
+
if __name__ == '__main__':
nose.runmodule(argv=[__file__, '-vvs', '-x', '--pdb', '--pdb-failure'],
exit=False)
| Use the _constructor property when creating the merge result to
preserve the output type.
If a [GeoPandas](http://github.com/geopandas/geopandas) `GeoDataFrame` is merged with a `DataFrame`, the result is hard-coded to always be `DataFrame` [GeoPandas Issue #118](https://github.com/geopandas/geopandas/issues/118). We'd like it to return `GeoDataFrame` in these cases:
```
>>> import geopandas as gpd
>>> import pandas as pd
>>> gdf = gpd.GeoDataFrame(...)
>>> df = pd.DataFrame(...)
>>> merged = pd.merge(gdf, df, on='column')
>>> type(merged)
GeoDataFrame
```
This PR uses `left._constructor` to generate the result type for merge operations.
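
A minimal sketch of the behaviour this PR enables. The `TaggedFrame` subclass here is a hypothetical stand-in for something like `GeoDataFrame`, and the sketch assumes a pandas version that includes this change (>= 0.15):

```python
import pandas as pd

# Hypothetical DataFrame subclass -- stands in for e.g. GeoDataFrame
class TaggedFrame(pd.DataFrame):
    @property
    def _constructor(self):
        # merge looks up left._constructor to build its result
        return TaggedFrame

left = TaggedFrame({'key': [1, 2], 'a': [10, 20]})
right = pd.DataFrame({'key': [1, 2], 'b': [30, 40]})

merged = left.merge(right, on='key')
print(type(merged).__name__)  # TaggedFrame once left._constructor is used
```

Because only `left._constructor` is consulted, merging a plain `DataFrame` (left) with a `TaggedFrame` (right) would still return a plain `DataFrame`.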
| https://api.github.com/repos/pandas-dev/pandas/pulls/7737 | 2014-07-12T16:53:33Z | 2014-08-11T13:04:16Z | 2014-08-11T13:04:16Z | 2014-08-11T13:57:14Z |
ENH: plot functions accept multiple axes and layout kw | diff --git a/doc/source/v0.15.0.txt b/doc/source/v0.15.0.txt
index d15a48535f1eb..bbf665b574409 100644
--- a/doc/source/v0.15.0.txt
+++ b/doc/source/v0.15.0.txt
@@ -303,6 +303,9 @@ Enhancements
~~~~~~~~~~~~
- Added support for bool, uint8, uint16 and uint32 datatypes in ``to_stata`` (:issue:`7097`, :issue:`7365`)
+- Added ``layout`` keyword to ``DataFrame.plot`` (:issue:`6667`)
+- Allow passing multiple axes to ``DataFrame.plot``, ``hist`` and ``boxplot`` (:issue:`5353`, :issue:`6970`, :issue:`7069`)
+
- ``PeriodIndex`` supports ``resolution`` as the same as ``DatetimeIndex`` (:issue:`7708`)
- ``pandas.tseries.holiday`` has added support for additional holidays and ways to observe holidays (:issue:`7070`)
diff --git a/doc/source/visualization.rst b/doc/source/visualization.rst
index 40b5d7c1599c1..e8d3d147479c2 100644
--- a/doc/source/visualization.rst
+++ b/doc/source/visualization.rst
@@ -946,10 +946,41 @@ with the ``subplots`` keyword:
@savefig frame_plot_subplots.png
df.plot(subplots=True, figsize=(6, 6));
-Targeting Different Subplots
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+Using Layout and Targeting Multiple Axes
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
-You can pass an ``ax`` argument to :meth:`Series.plot` to plot on a particular axis:
+The layout of subplots can be specified by the ``layout`` keyword. It accepts a
+``(rows, columns)`` tuple. The ``layout`` keyword can also be used in
+``hist`` and ``boxplot``. If the input is invalid, a ``ValueError`` will be raised.
+
+The number of axes contained by the rows x columns grid specified by ``layout`` must be
+at least the number of required subplots. If the layout can contain more axes than required,
+the blank axes are not drawn.
+
+.. ipython:: python
+
+ @savefig frame_plot_subplots_layout.png
+ df.plot(subplots=True, layout=(2, 3), figsize=(6, 6));
+
+Also, you can pass multiple axes created beforehand as a list-like via the ``ax`` keyword.
+This allows for more complicated layouts.
+The number of passed axes must match the number of subplots being drawn.
+
+When multiple axes are passed via the ``ax`` keyword, the ``layout``, ``sharex`` and ``sharey`` keywords are ignored.
+These must be configured when creating the axes.
+
+.. ipython:: python
+
+ fig, axes = plt.subplots(4, 4, figsize=(6, 6));
+    plt.subplots_adjust(wspace=0.5, hspace=0.5);
+ target1 = [axes[0][0], axes[1][1], axes[2][2], axes[3][3]]
+ target2 = [axes[3][0], axes[2][1], axes[1][2], axes[0][3]]
+
+ df.plot(subplots=True, ax=target1, legend=False);
+ @savefig frame_plot_subplots_multi_ax.png
+ (-df).plot(subplots=True, ax=target2, legend=False);
+
+Another option is passing an ``ax`` argument to :meth:`Series.plot` to plot on a particular axis:
.. ipython:: python
:suppress:
@@ -964,12 +995,12 @@ You can pass an ``ax`` argument to :meth:`Series.plot` to plot on a particular a
.. ipython:: python
fig, axes = plt.subplots(nrows=2, ncols=2)
- df['A'].plot(ax=axes[0,0]); axes[0,0].set_title('A')
- df['B'].plot(ax=axes[0,1]); axes[0,1].set_title('B')
- df['C'].plot(ax=axes[1,0]); axes[1,0].set_title('C')
+ df['A'].plot(ax=axes[0,0]); axes[0,0].set_title('A');
+ df['B'].plot(ax=axes[0,1]); axes[0,1].set_title('B');
+ df['C'].plot(ax=axes[1,0]); axes[1,0].set_title('C');
@savefig series_plot_multi.png
- df['D'].plot(ax=axes[1,1]); axes[1,1].set_title('D')
+ df['D'].plot(ax=axes[1,1]); axes[1,1].set_title('D');
.. ipython:: python
:suppress:
diff --git a/pandas/tests/test_graphics.py b/pandas/tests/test_graphics.py
index b3a92263370e8..1560b78a2f5e0 100644
--- a/pandas/tests/test_graphics.py
+++ b/pandas/tests/test_graphics.py
@@ -670,7 +670,7 @@ def test_hist_layout_with_by(self):
axes = _check_plot_works(df.height.hist, by=df.classroom, layout=(2, 2))
self._check_axes_shape(axes, axes_num=3, layout=(2, 2))
- axes = _check_plot_works(df.height.hist, by=df.category, layout=(4, 2), figsize=(12, 7))
+ axes = df.height.hist(by=df.category, layout=(4, 2), figsize=(12, 7))
self._check_axes_shape(axes, axes_num=4, layout=(4, 2), figsize=(12, 7))
@slow
@@ -1071,6 +1071,7 @@ def test_subplots(self):
for kind in ['bar', 'barh', 'line', 'area']:
axes = df.plot(kind=kind, subplots=True, sharex=True, legend=True)
self._check_axes_shape(axes, axes_num=3, layout=(3, 1))
+ self.assertEqual(axes.shape, (3, ))
for ax, column in zip(axes, df.columns):
self._check_legend_labels(ax, labels=[com.pprint_thing(column)])
@@ -1133,6 +1134,77 @@ def test_subplots_timeseries(self):
self._check_visible(ax.get_yticklabels())
self._check_ticks_props(ax, xlabelsize=7, xrot=45)
+ def test_subplots_layout(self):
+ # GH 6667
+ df = DataFrame(np.random.rand(10, 3),
+ index=list(string.ascii_letters[:10]))
+
+ axes = df.plot(subplots=True, layout=(2, 2))
+ self._check_axes_shape(axes, axes_num=3, layout=(2, 2))
+ self.assertEqual(axes.shape, (2, 2))
+
+ axes = df.plot(subplots=True, layout=(1, 4))
+ self._check_axes_shape(axes, axes_num=3, layout=(1, 4))
+ self.assertEqual(axes.shape, (1, 4))
+
+ with tm.assertRaises(ValueError):
+ axes = df.plot(subplots=True, layout=(1, 1))
+
+ # single column
+ df = DataFrame(np.random.rand(10, 1),
+ index=list(string.ascii_letters[:10]))
+ axes = df.plot(subplots=True)
+ self._check_axes_shape(axes, axes_num=1, layout=(1, 1))
+ self.assertEqual(axes.shape, (1, ))
+
+ axes = df.plot(subplots=True, layout=(3, 3))
+ self._check_axes_shape(axes, axes_num=1, layout=(3, 3))
+ self.assertEqual(axes.shape, (3, 3))
+
+ @slow
+ def test_subplots_multiple_axes(self):
+ # GH 5353, 6970, GH 7069
+ fig, axes = self.plt.subplots(2, 3)
+ df = DataFrame(np.random.rand(10, 3),
+ index=list(string.ascii_letters[:10]))
+
+ returned = df.plot(subplots=True, ax=axes[0])
+ self._check_axes_shape(returned, axes_num=3, layout=(1, 3))
+ self.assertEqual(returned.shape, (3, ))
+ self.assertIs(returned[0].figure, fig)
+ # draw on second row
+ returned = df.plot(subplots=True, ax=axes[1])
+ self._check_axes_shape(returned, axes_num=3, layout=(1, 3))
+ self.assertEqual(returned.shape, (3, ))
+ self.assertIs(returned[0].figure, fig)
+ self._check_axes_shape(axes, axes_num=6, layout=(2, 3))
+ tm.close()
+
+ with tm.assertRaises(ValueError):
+ fig, axes = self.plt.subplots(2, 3)
+ # pass different number of axes from required
+ df.plot(subplots=True, ax=axes)
+
+ # pass 2-dim axes and invalid layout
+        # an invalid layout should not affect the input and return value
+        # (the warning shown is tested in
+        # TestDataFrameGroupByPlots.test_grouped_box_multiple_axes)
+ fig, axes = self.plt.subplots(2, 2)
+ df = DataFrame(np.random.rand(10, 4),
+ index=list(string.ascii_letters[:10]))
+
+ returned = df.plot(subplots=True, ax=axes, layout=(2, 1))
+ self._check_axes_shape(returned, axes_num=4, layout=(2, 2))
+ self.assertEqual(returned.shape, (4, ))
+
+ # single column
+ fig, axes = self.plt.subplots(1, 1)
+ df = DataFrame(np.random.rand(10, 1),
+ index=list(string.ascii_letters[:10]))
+ axes = df.plot(subplots=True, ax=[axes])
+ self._check_axes_shape(axes, axes_num=1, layout=(1, 1))
+ self.assertEqual(axes.shape, (1, ))
+
def test_negative_log(self):
df = - DataFrame(rand(6, 4),
index=list(string.ascii_letters[:6]),
@@ -1718,7 +1790,7 @@ def test_hist_df_coord(self):
normal_df = DataFrame({'A': np.repeat(np.array([1, 2, 3, 4, 5]),
np.array([10, 9, 8, 7, 6])),
'B': np.repeat(np.array([1, 2, 3, 4, 5]),
- np.array([8, 8, 8, 8, 8])),
+ np.array([8, 8, 8, 8, 8])),
'C': np.repeat(np.array([1, 2, 3, 4, 5]),
np.array([6, 7, 8, 9, 10]))},
columns=['A', 'B', 'C'])
@@ -1726,7 +1798,7 @@ def test_hist_df_coord(self):
nan_df = DataFrame({'A': np.repeat(np.array([np.nan, 1, 2, 3, 4, 5]),
np.array([3, 10, 9, 8, 7, 6])),
'B': np.repeat(np.array([1, np.nan, 2, 3, 4, 5]),
- np.array([8, 3, 8, 8, 8, 8])),
+ np.array([8, 3, 8, 8, 8, 8])),
'C': np.repeat(np.array([1, 2, 3, np.nan, 4, 5]),
np.array([6, 7, 8, 3, 9, 10]))},
columns=['A', 'B', 'C'])
@@ -2712,6 +2784,41 @@ def test_grouped_box_layout(self):
return_type='dict')
self._check_axes_shape(self.plt.gcf().axes, axes_num=3, layout=(1, 4))
+ @slow
+ def test_grouped_box_multiple_axes(self):
+ # GH 6970, GH 7069
+ df = self.hist_df
+
+ # check warning to ignore sharex / sharey
+ # this check should be done in the first function which
+ # passes multiple axes to plot, hist or boxplot
+ # location should be changed if other test is added
+ # which has earlier alphabetical order
+ with tm.assert_produces_warning(UserWarning):
+ fig, axes = self.plt.subplots(2, 2)
+ df.groupby('category').boxplot(column='height', return_type='axes', ax=axes)
+ self._check_axes_shape(self.plt.gcf().axes, axes_num=4, layout=(2, 2))
+
+ fig, axes = self.plt.subplots(2, 3)
+ returned = df.boxplot(column=['height', 'weight', 'category'], by='gender',
+ return_type='axes', ax=axes[0])
+ returned = np.array(returned.values())
+ self._check_axes_shape(returned, axes_num=3, layout=(1, 3))
+ self.assert_numpy_array_equal(returned, axes[0])
+ self.assertIs(returned[0].figure, fig)
+ # draw on second row
+ returned = df.groupby('classroom').boxplot(column=['height', 'weight', 'category'],
+ return_type='axes', ax=axes[1])
+ returned = np.array(returned.values())
+ self._check_axes_shape(returned, axes_num=3, layout=(1, 3))
+ self.assert_numpy_array_equal(returned, axes[1])
+ self.assertIs(returned[0].figure, fig)
+
+ with tm.assertRaises(ValueError):
+ fig, axes = self.plt.subplots(2, 3)
+ # pass different number of axes from required
+ axes = df.groupby('classroom').boxplot(ax=axes)
+
@slow
def test_grouped_hist_layout(self):
@@ -2724,12 +2831,12 @@ def test_grouped_hist_layout(self):
axes = _check_plot_works(df.hist, column='height', by=df.gender, layout=(2, 1))
self._check_axes_shape(axes, axes_num=2, layout=(2, 1))
- axes = _check_plot_works(df.hist, column='height', by=df.category, layout=(4, 1))
+ axes = df.hist(column='height', by=df.category, layout=(4, 1))
self._check_axes_shape(axes, axes_num=4, layout=(4, 1))
- axes = _check_plot_works(df.hist, column='height', by=df.category,
- layout=(4, 2), figsize=(12, 8))
+ axes = df.hist(column='height', by=df.category, layout=(4, 2), figsize=(12, 8))
self._check_axes_shape(axes, axes_num=4, layout=(4, 2), figsize=(12, 8))
+ tm.close()
# GH 6769
axes = _check_plot_works(df.hist, column='height', by='classroom', layout=(2, 2))
@@ -2739,13 +2846,32 @@ def test_grouped_hist_layout(self):
axes = _check_plot_works(df.hist, by='classroom')
self._check_axes_shape(axes, axes_num=3, layout=(2, 2))
- axes = _check_plot_works(df.hist, by='gender', layout=(3, 5))
+ axes = df.hist(by='gender', layout=(3, 5))
self._check_axes_shape(axes, axes_num=2, layout=(3, 5))
- axes = _check_plot_works(df.hist, column=['height', 'weight', 'category'])
+ axes = df.hist(column=['height', 'weight', 'category'])
self._check_axes_shape(axes, axes_num=3, layout=(2, 2))
@slow
+ def test_grouped_hist_multiple_axes(self):
+ # GH 6970, GH 7069
+ df = self.hist_df
+
+ fig, axes = self.plt.subplots(2, 3)
+ returned = df.hist(column=['height', 'weight', 'category'], ax=axes[0])
+ self._check_axes_shape(returned, axes_num=3, layout=(1, 3))
+ self.assert_numpy_array_equal(returned, axes[0])
+ self.assertIs(returned[0].figure, fig)
+ returned = df.hist(by='classroom', ax=axes[1])
+ self._check_axes_shape(returned, axes_num=3, layout=(1, 3))
+ self.assert_numpy_array_equal(returned, axes[1])
+ self.assertIs(returned[0].figure, fig)
+
+ with tm.assertRaises(ValueError):
+ fig, axes = self.plt.subplots(2, 3)
+ # pass different number of axes from required
+ axes = df.hist(column='height', ax=axes)
+ @slow
def test_axis_share_x(self):
df = self.hist_df
# GH4089
diff --git a/pandas/tools/plotting.py b/pandas/tools/plotting.py
index 7d0eaea5b36d6..18fc2bead02ec 100644
--- a/pandas/tools/plotting.py
+++ b/pandas/tools/plotting.py
@@ -246,7 +246,8 @@ def scatter_matrix(frame, alpha=0.5, figsize=None, ax=None, grid=False,
df = frame._get_numeric_data()
n = df.columns.size
- fig, axes = _subplots(nrows=n, ncols=n, figsize=figsize, ax=ax,
+ naxes = n * n
+ fig, axes = _subplots(naxes=naxes, figsize=figsize, ax=ax,
squeeze=False)
# no gaps between subplots
@@ -752,6 +753,7 @@ class MPLPlot(object):
data :
"""
+ _layout_type = 'vertical'
_default_rot = 0
orientation = None
@@ -767,7 +769,7 @@ def __init__(self, data, kind=None, by=None, subplots=False, sharex=True,
xticks=None, yticks=None,
sort_columns=False, fontsize=None,
secondary_y=False, colormap=None,
- table=False, **kwds):
+ table=False, layout=None, **kwds):
self.data = data
self.by = by
@@ -780,6 +782,7 @@ def __init__(self, data, kind=None, by=None, subplots=False, sharex=True,
self.sharex = sharex
self.sharey = sharey
self.figsize = figsize
+ self.layout = layout
self.xticks = xticks
self.yticks = yticks
@@ -932,22 +935,22 @@ def _maybe_right_yaxis(self, ax):
def _setup_subplots(self):
if self.subplots:
- nrows, ncols = self._get_layout()
- fig, axes = _subplots(nrows=nrows, ncols=ncols,
+ fig, axes = _subplots(naxes=self.nseries,
sharex=self.sharex, sharey=self.sharey,
- figsize=self.figsize, ax=self.ax)
- if not com.is_list_like(axes):
- axes = np.array([axes])
+ figsize=self.figsize, ax=self.ax,
+ layout=self.layout,
+ layout_type=self._layout_type)
else:
if self.ax is None:
fig = self.plt.figure(figsize=self.figsize)
- ax = fig.add_subplot(111)
+ axes = fig.add_subplot(111)
else:
fig = self.ax.get_figure()
if self.figsize is not None:
fig.set_size_inches(self.figsize)
- ax = self.ax
- axes = [ax]
+ axes = self.ax
+
+ axes = _flatten(axes)
if self.logx or self.loglog:
[a.set_xscale('log') for a in axes]
@@ -957,12 +960,18 @@ def _setup_subplots(self):
self.fig = fig
self.axes = axes
- def _get_layout(self):
- from pandas.core.frame import DataFrame
- if isinstance(self.data, DataFrame):
- return (len(self.data.columns), 1)
+ @property
+ def result(self):
+ """
+ Return result axes
+ """
+ if self.subplots:
+ if self.layout is not None and not com.is_list_like(self.ax):
+ return self.axes.reshape(*self.layout)
+ else:
+ return self.axes
else:
- return (1, 1)
+ return self.axes[0]
def _compute_plot_data(self):
numeric_data = self.data.convert_objects()._get_numeric_data()
@@ -1360,6 +1369,8 @@ def _get_errorbars(self, label=None, index=None, xerr=True, yerr=True):
class ScatterPlot(MPLPlot):
+ _layout_type = 'single'
+
def __init__(self, data, x, y, **kwargs):
MPLPlot.__init__(self, data, **kwargs)
self.kwds.setdefault('c', self.plt.rcParams['patch.facecolor'])
@@ -1372,8 +1383,9 @@ def __init__(self, data, x, y, **kwargs):
self.x = x
self.y = y
- def _get_layout(self):
- return (1, 1)
+ @property
+ def nseries(self):
+ return 1
def _make_plot(self):
x, y, data = self.x, self.y, self.data
@@ -1404,6 +1416,8 @@ def _post_plot_logic(self):
class HexBinPlot(MPLPlot):
+ _layout_type = 'single'
+
def __init__(self, data, x, y, C=None, **kwargs):
MPLPlot.__init__(self, data, **kwargs)
@@ -1421,8 +1435,9 @@ def __init__(self, data, x, y, C=None, **kwargs):
self.y = y
self.C = C
- def _get_layout(self):
- return (1, 1)
+ @property
+ def nseries(self):
+ return 1
def _make_plot(self):
import matplotlib.pyplot as plt
@@ -1966,6 +1981,8 @@ def _post_plot_logic(self):
class PiePlot(MPLPlot):
+ _layout_type = 'horizontal'
+
def __init__(self, data, kind=None, **kwargs):
data = data.fillna(value=0)
if (data < 0).any().any():
@@ -1978,13 +1995,6 @@ def _args_adjust(self):
self.logx = False
self.loglog = False
- def _get_layout(self):
- from pandas import DataFrame
- if isinstance(self.data, DataFrame):
- return (1, len(self.data.columns))
- else:
- return (1, 1)
-
def _validate_color_args(self):
pass
@@ -2044,7 +2054,7 @@ def plot_frame(frame=None, x=None, y=None, subplots=False, sharex=True,
legend=True, rot=None, ax=None, style=None, title=None,
xlim=None, ylim=None, logx=False, logy=False, xticks=None,
yticks=None, kind='line', sort_columns=False, fontsize=None,
- secondary_y=False, **kwds):
+ secondary_y=False, layout=None, **kwds):
"""
Make line, bar, or scatter plots of DataFrame series with the index on the x-axis
@@ -2116,6 +2126,8 @@ def plot_frame(frame=None, x=None, y=None, subplots=False, sharex=True,
position : float
Specify relative alignments for bar plot layout.
From 0 (left/bottom-end) to 1 (right/top-end). Default is 0.5 (center)
+ layout : tuple (optional)
+ (rows, columns) for the layout of the plot
table : boolean, Series or DataFrame, default False
If True, draw a table using the data in the DataFrame and the data will
be transposed to meet matplotlib's default layout.
@@ -2153,7 +2165,7 @@ def plot_frame(frame=None, x=None, y=None, subplots=False, sharex=True,
xlim=xlim, ylim=ylim, title=title, grid=grid,
figsize=figsize, logx=logx, logy=logy,
sort_columns=sort_columns, secondary_y=secondary_y,
- **kwds)
+ layout=layout, **kwds)
elif kind in _series_kinds:
if y is None and subplots is False:
msg = "{0} requires either y column or 'subplots=True'"
@@ -2169,9 +2181,8 @@ def plot_frame(frame=None, x=None, y=None, subplots=False, sharex=True,
fontsize=fontsize, use_index=use_index, sharex=sharex,
sharey=sharey, xticks=xticks, yticks=yticks,
xlim=xlim, ylim=ylim, title=title, grid=grid,
- figsize=figsize,
- sort_columns=sort_columns,
- **kwds)
+ figsize=figsize, layout=layout,
+ sort_columns=sort_columns, **kwds)
else:
if x is not None:
if com.is_integer(x) and not frame.columns.holds_integer():
@@ -2209,14 +2220,11 @@ def plot_frame(frame=None, x=None, y=None, subplots=False, sharex=True,
xticks=xticks, yticks=yticks, xlim=xlim, ylim=ylim,
title=title, grid=grid, figsize=figsize, logx=logx,
logy=logy, sort_columns=sort_columns,
- secondary_y=secondary_y, **kwds)
+ secondary_y=secondary_y, layout=layout, **kwds)
plot_obj.generate()
plot_obj.draw()
- if subplots:
- return plot_obj.axes
- else:
- return plot_obj.axes[0]
+ return plot_obj.result
def plot_series(series, label=None, kind='line', use_index=True, rot=None,
@@ -2311,7 +2319,7 @@ def plot_series(series, label=None, kind='line', use_index=True, rot=None,
plot_obj.draw()
# plot_obj.ax is None if we created the first figure
- return plot_obj.axes[0]
+ return plot_obj.result
_shared_docs['boxplot'] = """
@@ -2551,12 +2559,13 @@ def hist_frame(data, column=None, by=None, grid=True, xlabelsize=None,
data = data._get_numeric_data()
naxes = len(data.columns)
- nrows, ncols = _get_layout(naxes, layout=layout)
- fig, axes = _subplots(nrows=nrows, ncols=ncols, naxes=naxes, ax=ax, squeeze=False,
- sharex=sharex, sharey=sharey, figsize=figsize)
+ fig, axes = _subplots(naxes=naxes, ax=ax, squeeze=False,
+ sharex=sharex, sharey=sharey, figsize=figsize,
+ layout=layout)
+ _axes = _flatten(axes)
for i, col in enumerate(com._try_sort(data.columns)):
- ax = axes[i // ncols, i % ncols]
+ ax = _axes[i]
ax.hist(data[col].dropna().values, bins=bins, **kwds)
ax.set_title(col)
ax.grid(grid)
@@ -2672,7 +2681,7 @@ def plot_group(group, ax):
xrot = xrot or rot
fig, axes = _grouped_plot(plot_group, data, column=column,
- by=by, sharex=sharex, sharey=sharey,
+ by=by, sharex=sharex, sharey=sharey, ax=ax,
figsize=figsize, layout=layout, rot=rot)
_set_ticks_props(axes, xlabelsize=xlabelsize, xrot=xrot,
@@ -2730,9 +2739,9 @@ def boxplot_frame_groupby(grouped, subplots=True, column=None, fontsize=None,
"""
if subplots is True:
naxes = len(grouped)
- nrows, ncols = _get_layout(naxes, layout=layout)
- fig, axes = _subplots(nrows=nrows, ncols=ncols, naxes=naxes, squeeze=False,
- ax=ax, sharex=False, sharey=True, figsize=figsize)
+ fig, axes = _subplots(naxes=naxes, squeeze=False,
+ ax=ax, sharex=False, sharey=True, figsize=figsize,
+ layout=layout)
axes = _flatten(axes)
ret = compat.OrderedDict()
@@ -2773,14 +2782,14 @@ def _grouped_plot(plotf, data, column=None, by=None, numeric_only=True,
grouped = grouped[column]
naxes = len(grouped)
- nrows, ncols = _get_layout(naxes, layout=layout)
- fig, axes = _subplots(nrows=nrows, ncols=ncols, naxes=naxes,
- figsize=figsize, sharex=sharex, sharey=sharey, ax=ax)
+ fig, axes = _subplots(naxes=naxes, figsize=figsize,
+ sharex=sharex, sharey=sharey, ax=ax,
+ layout=layout)
- ravel_axes = _flatten(axes)
+ _axes = _flatten(axes)
for i, (key, group) in enumerate(grouped):
- ax = ravel_axes[i]
+ ax = _axes[i]
if numeric_only and isinstance(group, DataFrame):
group = group._get_numeric_data()
plotf(group, ax, **kwargs)
@@ -2799,16 +2808,14 @@ def _grouped_plot_by_column(plotf, data, columns=None, by=None,
by = [by]
columns = data._get_numeric_data().columns - by
naxes = len(columns)
- nrows, ncols = _get_layout(naxes, layout=layout)
- fig, axes = _subplots(nrows=nrows, ncols=ncols, naxes=naxes,
- sharex=True, sharey=True,
- figsize=figsize, ax=ax)
+ fig, axes = _subplots(naxes=naxes, sharex=True, sharey=True,
+ figsize=figsize, ax=ax, layout=layout)
- ravel_axes = _flatten(axes)
+ _axes = _flatten(axes)
result = compat.OrderedDict()
for i, col in enumerate(columns):
- ax = ravel_axes[i]
+ ax = _axes[i]
gp_col = grouped[col]
keys, values = zip(*gp_col)
re_plotf = plotf(keys, values, ax, **kwargs)
@@ -2869,7 +2876,7 @@ def table(ax, data, rowLabels=None, colLabels=None,
return table
-def _get_layout(nplots, layout=None):
+def _get_layout(nplots, layout=None, layout_type='box'):
if layout is not None:
if not isinstance(layout, (tuple, list)) or len(layout) != 2:
raise ValueError('Layout must be a tuple of (rows, columns)')
@@ -2881,27 +2888,31 @@ def _get_layout(nplots, layout=None):
return layout
- if nplots == 1:
+ if layout_type == 'single':
return (1, 1)
- elif nplots == 2:
- return (1, 2)
- elif nplots < 4:
- return (2, 2)
+ elif layout_type == 'horizontal':
+ return (1, nplots)
+ elif layout_type == 'vertical':
+ return (nplots, 1)
- k = 1
- while k ** 2 < nplots:
- k += 1
-
- if (k - 1) * k >= nplots:
- return k, (k - 1)
- else:
- return k, k
+ layouts = {1: (1, 1), 2: (1, 2), 3: (2, 2), 4: (2, 2)}
+ try:
+ return layouts[nplots]
+ except KeyError:
+ k = 1
+ while k ** 2 < nplots:
+ k += 1
+
+ if (k - 1) * k >= nplots:
+ return k, (k - 1)
+ else:
+ return k, k
-# copied from matplotlib/pyplot.py for compatibility with matplotlib < 1.0
+# copied from matplotlib/pyplot.py and modified for pandas.plotting
-def _subplots(nrows=1, ncols=1, naxes=None, sharex=False, sharey=False, squeeze=True,
- subplot_kw=None, ax=None, **fig_kw):
+def _subplots(naxes=None, sharex=False, sharey=False, squeeze=True,
+ subplot_kw=None, ax=None, layout=None, layout_type='box', **fig_kw):
"""Create a figure with a set of subplots already made.
This utility wrapper makes it convenient to create common layouts of
@@ -2909,12 +2920,6 @@ def _subplots(nrows=1, ncols=1, naxes=None, sharex=False, sharey=False, squeeze=
Keyword arguments:
- nrows : int
- Number of rows of the subplot grid. Defaults to 1.
-
- ncols : int
- Number of columns of the subplot grid. Defaults to 1.
-
naxes : int
Number of required axes. Exceeded axes are set invisible. Default is nrows * ncols.
@@ -2942,11 +2947,17 @@ def _subplots(nrows=1, ncols=1, naxes=None, sharex=False, sharey=False, squeeze=
ax : Matplotlib axis object, optional
+ layout : tuple
+ Number of rows and columns of the subplot grid.
+ If not specified, calculated from naxes and layout_type
+
+    layout_type : {'box', 'horizontal', 'vertical'}, default 'box'
+ Specify how to layout the subplot grid.
+
fig_kw : Other keyword arguments to be passed to the figure() call.
Note that all keywords not recognized above will be
automatically included here.
-
Returns:
fig, ax : tuple
@@ -2975,23 +2986,27 @@ def _subplots(nrows=1, ncols=1, naxes=None, sharex=False, sharey=False, squeeze=
plt.subplots(2, 2, subplot_kw=dict(polar=True))
"""
import matplotlib.pyplot as plt
- from pandas.core.frame import DataFrame
if subplot_kw is None:
subplot_kw = {}
- # Create empty object array to hold all axes. It's easiest to make it 1-d
- # so we can just append subplots upon creation, and then
- nplots = nrows * ncols
-
- if naxes is None:
- naxes = nrows * ncols
- elif nplots < naxes:
- raise ValueError("naxes {0} is larger than layour size defined by nrows * ncols".format(naxes))
-
if ax is None:
fig = plt.figure(**fig_kw)
else:
+ if com.is_list_like(ax):
+ ax = _flatten(ax)
+ if layout is not None:
+ warnings.warn("When passing multiple axes, layout keyword is ignored", UserWarning)
+ if sharex or sharey:
+                warnings.warn("When passing multiple axes, sharex and sharey are ignored. "
+ "These settings must be specified when creating axes", UserWarning)
+ if len(ax) == naxes:
+ fig = ax[0].get_figure()
+ return fig, ax
+ else:
+ raise ValueError("The number of passed axes must be {0}, the same as "
+ "the output plot".format(naxes))
+
fig = ax.get_figure()
# if ax is passed and a number of subplots is 1, return ax as it is
if naxes == 1:
@@ -3004,6 +3019,11 @@ def _subplots(nrows=1, ncols=1, naxes=None, sharex=False, sharey=False, squeeze=
"is being cleared", UserWarning)
fig.clear()
+ nrows, ncols = _get_layout(naxes, layout=layout, layout_type=layout_type)
+ nplots = nrows * ncols
+
+ # Create empty object array to hold all axes. It's easiest to make it 1-d
+ # so we can just append subplots upon creation, and then
axarr = np.empty(nplots, dtype=object)
# Create first subplot separately, so we can share it if requested
@@ -3074,10 +3094,10 @@ def _subplots(nrows=1, ncols=1, naxes=None, sharex=False, sharey=False, squeeze=
def _flatten(axes):
if not com.is_list_like(axes):
- axes = [axes]
+ return np.array([axes])
elif isinstance(axes, (np.ndarray, Index)):
- axes = axes.ravel()
- return axes
+ return axes.ravel()
+ return np.array(axes)
def _get_all_lines(ax):
| - Added `layout` keyword to `plot_frame` (Closes #6667)
- Allow passing multiple axes to `plot_frame`, `hist` and `boxplot` (Closes #5353, Closes #6970, Closes #7069)
```
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import pandas.util.testing as tm
n = 20
gender = tm.choice(['Male', 'Female'], size=n)
classroom = tm.choice(['A', 'B', 'C'], size=n)
df = pd.DataFrame({'gender': gender,
'classroom': classroom,
'height': np.random.normal(66, 4, size=n),
'weight': np.random.normal(161, 32, size=n),
'category': np.random.randint(4, size=n)})
fig, axes = plt.subplots(6, 3, figsize=(6, 7))
df.boxplot(by='category', column=['height', 'weight', 'category'], ax=axes[0])
df.groupby('classroom').boxplot(column=['height', 'weight', 'category'], ax=axes[1])
df.hist(column=['height', 'weight', 'category'], ax=axes[2])
df.hist(by='classroom', ax=axes[3])
df.plot(subplots=True, ax=axes[4], legend=False)
df.plot(kind='pie', subplots=True, ax=axes[5], legend=False)
plt.subplots_adjust(hspace=1, bottom=0.05)
```
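As a smaller sketch, the new `layout` keyword on its own (assuming a pandas build that includes this change, and a non-interactive matplotlib backend):

```python
import matplotlib
matplotlib.use('Agg')  # non-interactive backend for the sketch
import numpy as np
import pandas as pd

df = pd.DataFrame(np.random.rand(10, 3))

# layout=(2, 2) lays the 3 subplots out on a 2x2 grid;
# the unused 4th axis is left blank
axes = df.plot(subplots=True, layout=(2, 2))
print(axes.shape)  # (2, 2)
```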
### Result

| https://api.github.com/repos/pandas-dev/pandas/pulls/7736 | 2014-07-12T12:51:02Z | 2014-08-19T17:09:15Z | 2014-08-19T17:09:14Z | 2014-09-10T12:10:57Z |
BUG: DTI.value_counts doesnt preserve tz | diff --git a/doc/source/v0.15.0.txt b/doc/source/v0.15.0.txt
index 0f430e249f1c4..7e0931ca1b745 100644
--- a/doc/source/v0.15.0.txt
+++ b/doc/source/v0.15.0.txt
@@ -290,6 +290,12 @@ Bug Fixes
+- Bug in ``DatetimeIndex.value_counts`` not preserving ``tz`` (:issue:`7735`)
+- Bug in ``PeriodIndex.value_counts`` resulting in ``Int64Index`` (:issue:`7735`)
+
+
+
+
diff --git a/pandas/core/algorithms.py b/pandas/core/algorithms.py
index cb6f200b259db..4abb6ed10d6a7 100644
--- a/pandas/core/algorithms.py
+++ b/pandas/core/algorithms.py
@@ -197,6 +197,7 @@ def value_counts(values, sort=True, ascending=False, normalize=False,
from pandas.core.series import Series
from pandas.tools.tile import cut
+ is_period = getattr(values, 'inferred_type', None) == 'period'
values = Series(values).values
is_category = com.is_categorical_dtype(values.dtype)
@@ -212,11 +213,8 @@ def value_counts(values, sort=True, ascending=False, normalize=False,
values = cat.codes
dtype = values.dtype
- if com.is_integer_dtype(dtype):
- values = com._ensure_int64(values)
- keys, counts = htable.value_count_int64(values)
- elif issubclass(values.dtype.type, (np.datetime64, np.timedelta64)):
+ if issubclass(values.dtype.type, (np.datetime64, np.timedelta64)) or is_period:
values = values.view(np.int64)
keys, counts = htable.value_count_int64(values)
@@ -227,6 +225,10 @@ def value_counts(values, sort=True, ascending=False, normalize=False,
# convert the keys back to the dtype we came in
keys = keys.astype(dtype)
+ elif com.is_integer_dtype(dtype):
+ values = com._ensure_int64(values)
+ keys, counts = htable.value_count_int64(values)
+
else:
values = com._ensure_object(values)
mask = com.isnull(values)
diff --git a/pandas/core/base.py b/pandas/core/base.py
index 243e34e35784a..d55196b56c784 100644
--- a/pandas/core/base.py
+++ b/pandas/core/base.py
@@ -275,8 +275,18 @@ def value_counts(self, normalize=False, sort=True, ascending=False,
counts : Series
"""
from pandas.core.algorithms import value_counts
- return value_counts(self.values, sort=sort, ascending=ascending,
- normalize=normalize, bins=bins, dropna=dropna)
+ from pandas.tseries.api import DatetimeIndex, PeriodIndex
+ result = value_counts(self, sort=sort, ascending=ascending,
+ normalize=normalize, bins=bins, dropna=dropna)
+
+ if isinstance(self, PeriodIndex):
+ # preserve freq
+ result.index = self._simple_new(result.index.values, self.name,
+ freq=self.freq)
+ elif isinstance(self, DatetimeIndex):
+ result.index = self._simple_new(result.index.values, self.name,
+ tz=getattr(self, 'tz', None))
+ return result
def unique(self):
"""
@@ -542,5 +552,3 @@ def __sub__(self, other):
def _add_delta(self, other):
return NotImplemented
-
-
diff --git a/pandas/tests/test_base.py b/pandas/tests/test_base.py
index 1b7db1451f6cf..494c0ee6b2bec 100644
--- a/pandas/tests/test_base.py
+++ b/pandas/tests/test_base.py
@@ -267,8 +267,9 @@ def test_value_counts_unique_nunique(self):
# skips int64 because it doesn't allow to include nan or None
continue
- if o.values.dtype == 'datetime64[ns]' and _np_version_under1p7:
- # Unable to assign None
+ if ((isinstance(o, Int64Index) and not isinstance(o,
+ (DatetimeIndex, PeriodIndex)))):
+ # skips int64 because it doesn't allow to include nan or None
continue
# special assign to the numpy array
@@ -283,12 +284,8 @@ def test_value_counts_unique_nunique(self):
else:
o = klass(np.repeat(values, range(1, len(o) + 1)))
- if isinstance(o, DatetimeIndex):
- expected_s_na = Series(list(range(10, 2, -1)) + [3], index=values[9:0:-1])
- expected_s = Series(list(range(10, 2, -1)), index=values[9:1:-1])
- else:
- expected_s_na = Series(list(range(10, 2, -1)) +[3], index=values[9:0:-1], dtype='int64')
- expected_s = Series(list(range(10, 2, -1)), index=values[9:1:-1], dtype='int64')
+ expected_s_na = Series(list(range(10, 2, -1)) +[3], index=values[9:0:-1], dtype='int64')
+ expected_s = Series(list(range(10, 2, -1)), index=values[9:1:-1], dtype='int64')
tm.assert_series_equal(o.value_counts(dropna=False), expected_s_na)
tm.assert_series_equal(o.value_counts(), expected_s)
@@ -709,6 +706,28 @@ def test_sub_isub(self):
rng -= 1
tm.assert_index_equal(rng, expected)
+ def test_value_counts(self):
+ # GH 7735
+ for tz in [None, 'UTC', 'Asia/Tokyo', 'US/Eastern']:
+ idx = pd.date_range('2011-01-01 09:00', freq='H', periods=10)
+ # create repeated values, 'n'th element is repeated by n+1 times
+ idx = DatetimeIndex(np.repeat(idx.values, range(1, len(idx) + 1)), tz=tz)
+
+ exp_idx = pd.date_range('2011-01-01 18:00', freq='-1H', periods=10, tz=tz)
+ expected = Series(range(10, 0, -1), index=exp_idx, dtype='int64')
+ tm.assert_series_equal(idx.value_counts(), expected)
+
+ idx = DatetimeIndex(['2013-01-01 09:00', '2013-01-01 09:00', '2013-01-01 09:00',
+ '2013-01-01 08:00', '2013-01-01 08:00', pd.NaT], tz=tz)
+
+ exp_idx = DatetimeIndex(['2013-01-01 09:00', '2013-01-01 08:00'], tz=tz)
+ expected = Series([3, 2], index=exp_idx)
+ tm.assert_series_equal(idx.value_counts(), expected)
+
+ exp_idx = DatetimeIndex(['2013-01-01 09:00', '2013-01-01 08:00', pd.NaT], tz=tz)
+ expected = Series([3, 2, 1], index=exp_idx)
+ tm.assert_series_equal(idx.value_counts(dropna=False), expected)
+
class TestPeriodIndexOps(Ops):
_allowed = '_allow_period_index_ops'
@@ -968,6 +987,30 @@ def test_sub_isub(self):
rng -= 1
tm.assert_index_equal(rng, expected)
+ def test_value_counts(self):
+ # GH 7735
+ idx = pd.period_range('2011-01-01 09:00', freq='H', periods=10)
+ # create repeated values, 'n'th element is repeated by n+1 times
+ idx = PeriodIndex(np.repeat(idx.values, range(1, len(idx) + 1)), freq='H')
+
+ exp_idx = PeriodIndex(['2011-01-01 18:00', '2011-01-01 17:00', '2011-01-01 16:00',
+ '2011-01-01 15:00', '2011-01-01 14:00', '2011-01-01 13:00',
+ '2011-01-01 12:00', '2011-01-01 11:00', '2011-01-01 10:00',
+ '2011-01-01 09:00'], freq='H')
+ expected = Series(range(10, 0, -1), index=exp_idx, dtype='int64')
+ tm.assert_series_equal(idx.value_counts(), expected)
+
+ idx = PeriodIndex(['2013-01-01 09:00', '2013-01-01 09:00', '2013-01-01 09:00',
+ '2013-01-01 08:00', '2013-01-01 08:00', pd.NaT], freq='H')
+
+ exp_idx = PeriodIndex(['2013-01-01 09:00', '2013-01-01 08:00'], freq='H')
+ expected = Series([3, 2], index=exp_idx)
+ tm.assert_series_equal(idx.value_counts(), expected)
+
+ exp_idx = PeriodIndex(['2013-01-01 09:00', '2013-01-01 08:00', pd.NaT], freq='H')
+ expected = Series([3, 2, 1], index=exp_idx)
+ tm.assert_series_equal(idx.value_counts(dropna=False), expected)
+
if __name__ == '__main__':
import nose
| Found 2 problems related to `value_counts`.
- `DatetimeIndex.value_counts` loses tz.
```
didx = pd.date_range('2011-01-01 09:00', freq='H', periods=3, tz='Asia/Tokyo')
print(didx.value_counts())
#2011-01-01 00:00:00 1
#2011-01-01 01:00:00 1
#2011-01-01 02:00:00 1
# dtype: int64
```
- `PeriodIndex.value_counts` results in an `Int64Index` and is unable to drop `NaT`.
```
pidx = pd.PeriodIndex(['2011-01-01 09:00', '2011-01-01 10:00', pd.NaT], freq='H')
print(pidx.value_counts())
# 359410 1
# 359409 1
# -9223372036854775808 1
# dtype: int64
```
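With the patch applied, the timezone survives `value_counts`; a minimal check of the `DatetimeIndex` case (assuming a pandas build containing this fix):

```python
import pandas as pd

idx = pd.DatetimeIndex(["2011-01-01 09:00", "2011-01-01 09:00",
                        "2011-01-01 08:00"], tz="Asia/Tokyo")
counts = idx.value_counts()

# The resulting index keeps its timezone instead of showing raw UTC values
assert str(counts.index.tz) == "Asia/Tokyo"
# The repeated timestamp sorts first with count 2
assert counts.iloc[0] == 2
```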
| https://api.github.com/repos/pandas-dev/pandas/pulls/7735 | 2014-07-12T11:54:36Z | 2014-07-25T21:04:51Z | 2014-07-25T21:04:51Z | 2014-07-26T13:27:25Z |
Specify in docs that join='outer' is the default for align method. | diff --git a/doc/source/basics.rst b/doc/source/basics.rst
index ec8456089f452..4d67616c5cd60 100644
--- a/doc/source/basics.rst
+++ b/doc/source/basics.rst
@@ -869,7 +869,7 @@ Aligning objects with each other with ``align``
The ``align`` method is the fastest way to simultaneously align two objects. It
supports a ``join`` argument (related to :ref:`joining and merging <merging>`):
- - ``join='outer'``: take the union of the indexes
+ - ``join='outer'``: take the union of the indexes (default)
- ``join='left'``: use the calling object's index
- ``join='right'``: use the passed object's index
- ``join='inner'``: intersect the indexes
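The documented default can be verified directly; a tiny illustration of the `join` options (not part of the patch):

```python
import pandas as pd

s1 = pd.Series([1, 2, 3], index=["a", "b", "c"])
s2 = pd.Series([4, 5, 6], index=["b", "c", "d"])

# With no join argument, align uses join='outer': the union of both indexes
left, right = s1.align(s2)
assert list(left.index) == ["a", "b", "c", "d"]

# join='inner' keeps only the shared labels
left, right = s1.align(s2, join="inner")
assert list(left.index) == ["b", "c"]
```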
| https://api.github.com/repos/pandas-dev/pandas/pulls/7734 | 2014-07-12T10:27:01Z | 2014-07-12T11:00:26Z | 2014-07-12T11:00:26Z | 2014-07-12T11:00:48Z | |
BUG: Repeated timeseries plot may result in incorrect kind | diff --git a/doc/source/v0.15.0.txt b/doc/source/v0.15.0.txt
index 06c93541a7783..4eebcd4c000a3 100644
--- a/doc/source/v0.15.0.txt
+++ b/doc/source/v0.15.0.txt
@@ -199,12 +199,18 @@ Bug Fixes
- Bug in ``HDFStore.select_column()`` not preserving UTC timezone info when selecting a DatetimeIndex (:issue:`7777`)
+
- Bug in pickles contains ``DateOffset`` may raise ``AttributeError`` when ``normalize`` attribute is reffered internally (:issue:`7748`)
- Bug in pickle deserialization that failed for pre-0.14.1 containers with dup items trying to avoid ambiguity
when matching block and manager items, when there's only one block there's no ambiguity (:issue:`7794`)
+
+- Bug in repeated timeseries line and area plot may result in ``ValueError`` or incorrect kind (:issue:`7733`)
+
+
+
- Bug in ``is_superperiod`` and ``is_subperiod`` cannot handle higher frequencies than ``S`` (:issue:`7760`, :issue:`7772`, :issue:`7803`)
- Bug in ``DataFrame.reset_index`` which has ``MultiIndex`` contains ``PeriodIndex`` or ``DatetimeIndex`` with tz raises ``ValueError`` (:issue:`7746`, :issue:`7793`)
diff --git a/pandas/tools/plotting.py b/pandas/tools/plotting.py
index 9d6391c58e2d5..ea7f963f79f28 100644
--- a/pandas/tools/plotting.py
+++ b/pandas/tools/plotting.py
@@ -1564,10 +1564,8 @@ def _make_plot(self):
label = com.pprint_thing(label) # .encode('utf-8')
kwds['label'] = label
- y_values = self._get_stacked_values(y, label)
- newlines = plotf(ax, x, y_values, style=style, **kwds)
- self._update_prior(y)
+ newlines = plotf(ax, x, y, style=style, column_num=i, **kwds)
self._add_legend_handle(newlines[0], label, index=i)
lines = _get_all_lines(ax)
@@ -1586,6 +1584,18 @@ def _get_stacked_values(self, y, label):
else:
return y
+ def _get_plot_function(self):
+ f = MPLPlot._get_plot_function(self)
+ def plotf(ax, x, y, style=None, column_num=None, **kwds):
+ # column_num is used to get the target column from protf in line and area plots
+ if column_num == 0:
+ self._initialize_prior(len(self.data))
+ y_values = self._get_stacked_values(y, kwds['label'])
+ lines = f(ax, x, y_values, style=style, **kwds)
+ self._update_prior(y)
+ return lines
+ return plotf
+
def _get_ts_plot_function(self):
from pandas.tseries.plotting import tsplot
plotf = self._get_plot_function()
@@ -1678,11 +1688,13 @@ def _get_plot_function(self):
raise ValueError("Log-y scales are not supported in area plot")
else:
f = MPLPlot._get_plot_function(self)
- def plotf(ax, x, y, style=None, **kwds):
- lines = f(ax, x, y, style=style, **kwds)
+ def plotf(ax, x, y, style=None, column_num=0, **kwds):
+ if column_num == 0:
+ self._initialize_prior(len(self.data))
+ y_values = self._get_stacked_values(y, kwds['label'])
+ lines = f(ax, x, y_values, style=style, **kwds)
- # get data from the line
- # insert fill_between starting point
+ # get data from the line to get coordinates for fill_between
xdata, y_values = lines[0].get_data(orig=False)
if (y >= 0).all():
@@ -1696,6 +1708,7 @@ def plotf(ax, x, y, style=None, **kwds):
kwds['color'] = lines[0].get_color()
self.plt.Axes.fill_between(ax, xdata, start, y_values, **kwds)
+ self._update_prior(y)
return lines
return plotf
diff --git a/pandas/tseries/plotting.py b/pandas/tseries/plotting.py
index 33a14403b0f08..b95553f87ec6b 100644
--- a/pandas/tseries/plotting.py
+++ b/pandas/tseries/plotting.py
@@ -60,8 +60,7 @@ def tsplot(series, plotf, **kwargs):
# how to make sure ax.clear() flows through?
if not hasattr(ax, '_plot_data'):
ax._plot_data = []
- ax._plot_data.append((series, kwargs))
-
+ ax._plot_data.append((series, plotf, kwargs))
lines = plotf(ax, series.index, series.values, **kwargs)
# set date formatter, locators and rescale limits
@@ -118,7 +117,7 @@ def _is_sup(f1, f2):
def _upsample_others(ax, freq, plotf, kwargs):
legend = ax.get_legend()
- lines, labels = _replot_ax(ax, freq, plotf, kwargs)
+ lines, labels = _replot_ax(ax, freq, kwargs)
other_ax = None
if hasattr(ax, 'left_ax'):
@@ -127,7 +126,7 @@ def _upsample_others(ax, freq, plotf, kwargs):
other_ax = ax.right_ax
if other_ax is not None:
- rlines, rlabels = _replot_ax(other_ax, freq, plotf, kwargs)
+ rlines, rlabels = _replot_ax(other_ax, freq, kwargs)
lines.extend(rlines)
labels.extend(rlabels)
@@ -139,7 +138,7 @@ def _upsample_others(ax, freq, plotf, kwargs):
ax.legend(lines, labels, loc='best', title=title)
-def _replot_ax(ax, freq, plotf, kwargs):
+def _replot_ax(ax, freq, kwargs):
data = getattr(ax, '_plot_data', None)
ax._plot_data = []
ax.clear()
@@ -148,7 +147,7 @@ def _replot_ax(ax, freq, plotf, kwargs):
lines = []
labels = []
if data is not None:
- for series, kwds in data:
+ for series, plotf, kwds in data:
series = series.copy()
idx = series.index.asfreq(freq, how='S')
series.index = idx
diff --git a/pandas/tseries/tests/test_plotting.py b/pandas/tseries/tests/test_plotting.py
index 5742b8e9bfaae..b52dca76f2c77 100644
--- a/pandas/tseries/tests/test_plotting.py
+++ b/pandas/tseries/tests/test_plotting.py
@@ -704,9 +704,81 @@ def test_from_weekly_resampling(self):
low = Series(np.random.randn(len(idxl)), idxl)
low.plot()
ax = high.plot()
+
+ expected_h = idxh.to_period().asi8
+ expected_l = np.array([1514, 1519, 1523, 1527, 1531, 1536, 1540, 1544, 1549,
+ 1553, 1558, 1562])
for l in ax.get_lines():
self.assertTrue(PeriodIndex(data=l.get_xdata()).freq.startswith('W'))
+ xdata = l.get_xdata(orig=False)
+ if len(xdata) == 12: # idxl lines
+ self.assert_numpy_array_equal(xdata, expected_l)
+ else:
+ self.assert_numpy_array_equal(xdata, expected_h)
+
+ @slow
+ def test_from_resampling_area_line_mixed(self):
+ idxh = date_range('1/1/1999', periods=52, freq='W')
+ idxl = date_range('1/1/1999', periods=12, freq='M')
+ high = DataFrame(np.random.rand(len(idxh), 3),
+ index=idxh, columns=[0, 1, 2])
+ low = DataFrame(np.random.rand(len(idxl), 3),
+ index=idxl, columns=[0, 1, 2])
+
+ # low to high
+ for kind1, kind2 in [('line', 'area'), ('area', 'line')]:
+ ax = low.plot(kind=kind1, stacked=True)
+ ax = high.plot(kind=kind2, stacked=True, ax=ax)
+
+ # check low dataframe result
+ expected_x = np.array([1514, 1519, 1523, 1527, 1531, 1536, 1540, 1544, 1549,
+ 1553, 1558, 1562])
+ expected_y = np.zeros(len(expected_x))
+ for i in range(3):
+ l = ax.lines[i]
+ self.assertTrue(PeriodIndex(data=l.get_xdata()).freq.startswith('W'))
+ self.assert_numpy_array_equal(l.get_xdata(orig=False), expected_x)
+ # check stacked values are correct
+ expected_y += low[i].values
+ self.assert_numpy_array_equal(l.get_ydata(orig=False), expected_y)
+
+ # check high dataframe result
+ expected_x = idxh.to_period().asi8
+ expected_y = np.zeros(len(expected_x))
+ for i in range(3):
+ l = ax.lines[3 + i]
+ self.assertTrue(PeriodIndex(data=l.get_xdata()).freq.startswith('W'))
+ self.assert_numpy_array_equal(l.get_xdata(orig=False), expected_x)
+ expected_y += high[i].values
+ self.assert_numpy_array_equal(l.get_ydata(orig=False), expected_y)
+
+ # high to low
+ for kind1, kind2 in [('line', 'area'), ('area', 'line')]:
+ ax = high.plot(kind=kind1, stacked=True)
+ ax = low.plot(kind=kind2, stacked=True, ax=ax)
+
+ # check high dataframe result
+ expected_x = idxh.to_period().asi8
+ expected_y = np.zeros(len(expected_x))
+ for i in range(3):
+ l = ax.lines[i]
+ self.assertTrue(PeriodIndex(data=l.get_xdata()).freq.startswith('W'))
+ self.assert_numpy_array_equal(l.get_xdata(orig=False), expected_x)
+ expected_y += high[i].values
+ self.assert_numpy_array_equal(l.get_ydata(orig=False), expected_y)
+
+ # check low dataframe result
+ expected_x = np.array([1514, 1519, 1523, 1527, 1531, 1536, 1540, 1544, 1549,
+ 1553, 1558, 1562])
+ expected_y = np.zeros(len(expected_x))
+ for i in range(3):
+ l = ax.lines[3 + i]
+ self.assertTrue(PeriodIndex(data=l.get_xdata()).freq.startswith('W'))
+ self.assert_numpy_array_equal(l.get_xdata(orig=False), expected_x)
+ expected_y += low[i].values
+ self.assert_numpy_array_equal(l.get_ydata(orig=False), expected_y)
+
@slow
def test_mixed_freq_second_millisecond(self):
# GH 7772, GH 7760
| Must be revisited after #7717.
Repeated line and area plots may produce incorrect results if they require resampling.
```
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
fig, axes = plt.subplots(2, 2, figsize=(7, 5))
np.random.seed(1)
df1 = pd.DataFrame(np.random.rand(5, 2), pd.date_range('2011-01-01', periods=5, freq='D'))
df2 = pd.DataFrame(np.random.rand(2, 2), pd.date_range('2011-01-01', periods=2, freq='M'))
df1.plot(kind='line', ax=axes[0][0], legend=False)
df2.plot(kind='area', ax=axes[0][0], legend=False)
df1.plot(kind='area', ax=axes[1][0], legend=False)
df2.plot(kind='line', ax=axes[1][0], legend=False)
df2.plot(kind='line', ax=axes[0][1], legend=False)
df1.plot(kind='area', ax=axes[0][1], legend=False)
# ValueError: Argument dimensions are incompatible
df2.plot(kind='area', ax=axes[1][1], legend=False)
df1.plot(kind='line', ax=axes[1][1], legend=False)
```
### Result using current master
- line with low freq -> area with high freq results in `ValueError` (top-right axes)
- area with low freq -> line with high freq results in all lines, not area (bottom-right axes)

### Result after fix

| https://api.github.com/repos/pandas-dev/pandas/pulls/7733 | 2014-07-12T00:14:34Z | 2014-07-24T12:59:45Z | 2014-07-24T12:59:45Z | 2014-07-25T20:44:05Z |
BUG: allow get default value upon IndexError, GH #7725 | diff --git a/doc/source/v0.15.0.txt b/doc/source/v0.15.0.txt
index d776848de40d0..116608e5f8817 100644
--- a/doc/source/v0.15.0.txt
+++ b/doc/source/v0.15.0.txt
@@ -152,6 +152,7 @@ There are no experimental changes in 0.15.0
Bug Fixes
~~~~~~~~~
+- Bug in ``get`` where an ``IndexError`` would not cause the default value to be returned (:issue:`7725`)
diff --git a/pandas/core/generic.py b/pandas/core/generic.py
index 59a457229d512..8daad2e76fae0 100644
--- a/pandas/core/generic.py
+++ b/pandas/core/generic.py
@@ -1038,7 +1038,7 @@ def get(self, key, default=None):
"""
try:
return self[key]
- except (KeyError, ValueError):
+ except (KeyError, ValueError, IndexError):
return default
def __getitem__(self, item):
diff --git a/pandas/tests/test_generic.py b/pandas/tests/test_generic.py
index 044d4054755ba..43ac8275aeb45 100644
--- a/pandas/tests/test_generic.py
+++ b/pandas/tests/test_generic.py
@@ -123,6 +123,23 @@ def test_get_numeric_data(self):
# _get_numeric_data is includes _get_bool_data, so can't test for non-inclusion
+ def test_get_default(self):
+
+ # GH 7725
+ d0 = "a", "b", "c", "d"
+ d1 = np.arange(4, dtype='int64')
+ others = "e", 10
+
+ for data, index in ((d0, d1), (d1, d0)):
+ s = Series(data, index=index)
+ for i,d in zip(index, data):
+ self.assertEqual(s.get(i), d)
+ self.assertEqual(s.get(i, d), d)
+ self.assertEqual(s.get(i, "z"), d)
+ for other in others:
+ self.assertEqual(s.get(other, "z"), "z")
+ self.assertEqual(s.get(other, other), other)
+
def test_nonzero(self):
# GH 4633
| Fixes #7725 by adding IndexError to the tuple of caught exceptions.
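A short illustration of the behavior the fix enables, condensed from the test above (assuming a build with this patch applied):

```python
import pandas as pd

s = pd.Series([0, 1, 2, 3], index=["a", "b", "c", "d"])

# A present label returns its value, with or without a default
assert s.get("b") == 1
assert s.get("b", "z") == 1

# An out-of-range integer key used to raise IndexError from the positional
# fallback; get() now swallows it and returns the default instead
assert s.get(10, "z") == "z"
```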
| https://api.github.com/repos/pandas-dev/pandas/pulls/7728 | 2014-07-11T01:17:23Z | 2014-09-08T14:13:22Z | 2014-09-08T14:13:22Z | 2014-09-08T14:13:33Z |
ENH: Add uint and bool support in to_stata | diff --git a/doc/source/io.rst b/doc/source/io.rst
index fa6ab646a47c8..d82af333330db 100644
--- a/doc/source/io.rst
+++ b/doc/source/io.rst
@@ -3504,6 +3504,31 @@ into a .dta file. The format version of this file is always 115 (Stata 12).
df = DataFrame(randn(10, 2), columns=list('AB'))
df.to_stata('stata.dta')
+*Stata* data files have limited data type support; only strings with 244 or
+fewer characters, ``int8``, ``int16``, ``int32`` and ``float64`` can be stored
+in ``.dta`` files. Additionally, *Stata* reserves certain values to represent
+missing data, and when a value is encountered outside of the
+permitted range, the data type is upcast to the next larger size. For
+example, ``int8`` values are restricted to lie between -127 and 100, and so
+variables with values above 100 will trigger a conversion to ``int16``. ``nan``
+values in floating point data types are stored as the basic missing data type
+(``.`` in *Stata*). It is not possible to indicate missing data values for
+integer data types.
+
+The *Stata* writer gracefully handles other data types including ``int64``,
+``bool``, ``uint8``, ``uint16``, ``uint32`` and ``float32`` by upcasting to
+the smallest supported type that can represent the data. For example, data
+with a type of ``uint8`` will be cast to ``int8`` if all values are less than
+100 (the upper bound for non-missing ``int8`` data in *Stata*), or, if values are
+outside of this range, the data is cast to ``int16``.
+
+
+.. warning::
+
+ Conversion from ``int64`` to ``float64`` may result in a loss of precision
+ if ``int64`` values are larger than 2**53.
+
+
.. _io.stata_reader:
Reading from STATA format
diff --git a/doc/source/v0.15.0.txt b/doc/source/v0.15.0.txt
index f305d088e996f..3ccd8c14fbff8 100644
--- a/doc/source/v0.15.0.txt
+++ b/doc/source/v0.15.0.txt
@@ -114,7 +114,7 @@ Known Issues
Enhancements
~~~~~~~~~~~~
-
+- Added support for bool, uint8, uint16 and uint32 datatypes in ``to_stata`` (:issue:`7097`, :issue:`7365`)
diff --git a/pandas/io/stata.py b/pandas/io/stata.py
index ed6b540b890a2..48a5f5ee6c994 100644
--- a/pandas/io/stata.py
+++ b/pandas/io/stata.py
@@ -206,6 +206,7 @@ class InvalidColumnName(Warning):
underscores, no Stata reserved words)
"""
+
def _cast_to_stata_types(data):
"""Checks the dtypes of the columns of a pandas DataFrame for
compatibility with the data types and ranges supported by Stata, and
@@ -218,18 +219,44 @@ def _cast_to_stata_types(data):
Notes
-----
- Numeric columns must be one of int8, int16, int32, float32 or float64, with
- some additional value restrictions on the integer data types. int8 and
- int16 columns are checked for violations of the value restrictions and
+ Numeric columns in Stata must be one of int8, int16, int32, float32 or
+ float64, with some additional value restrictions. int8 and int16 columns
+ are checked for violations of the value restrictions and
upcast if needed. int64 data is not usable in Stata, and so it is
downcast to int32 whenever the value are in the int32 range, and
sidecast to float64 when larger than this range. If the int64 values
are outside of the range of those perfectly representable as float64 values,
a warning is raised.
+
+ bool columns are cast to int8. uint columns are converted to int of the same
+ size if there is no loss in precision, otherwise they are upcast to a larger
+ type. uint64 is currently not supported since it is converted to object in
+ a DataFrame.
"""
ws = ''
+ # original, if small, if large
+ conversion_data = ((np.bool, np.int8, np.int8),
+ (np.uint8, np.int8, np.int16),
+ (np.uint16, np.int16, np.int32),
+ (np.uint32, np.int32, np.int64))
+
for col in data:
dtype = data[col].dtype
+ # Cast from unsupported types to supported types
+ for c_data in conversion_data:
+ if dtype == c_data[0]:
+ if data[col].max() <= np.iinfo(c_data[1]).max:
+ dtype = c_data[1]
+ else:
+ dtype = c_data[2]
+ if c_data[2] == np.float64: # Warn if necessary
+ if data[col].max() >= 2 ** 53:
+ ws = precision_loss_doc % ('uint64', 'float64')
+
+ data[col] = data[col].astype(dtype)
+
+
+ # Check values and upcast if necessary
if dtype == np.int8:
if data[col].max() > 100 or data[col].min() < -127:
data[col] = data[col].astype(np.int16)
@@ -241,7 +268,7 @@ def _cast_to_stata_types(data):
data[col] = data[col].astype(np.int32)
else:
data[col] = data[col].astype(np.float64)
- if data[col].max() <= 2 * 53 or data[col].min() >= -2 ** 53:
+ if data[col].max() >= 2 ** 53 or data[col].min() <= -2 ** 53:
ws = precision_loss_doc % ('int64', 'float64')
if ws:
diff --git a/pandas/io/tests/test_stata.py b/pandas/io/tests/test_stata.py
index 1a2673342df45..435226bc4313f 100644
--- a/pandas/io/tests/test_stata.py
+++ b/pandas/io/tests/test_stata.py
@@ -527,6 +527,29 @@ def test_write_missing_strings(self):
tm.assert_frame_equal(written_and_read_again.set_index('index'),
expected)
+ def test_bool_uint(self):
+ s0 = Series([0, 1, True], dtype=np.bool)
+ s1 = Series([0, 1, 100], dtype=np.uint8)
+ s2 = Series([0, 1, 255], dtype=np.uint8)
+ s3 = Series([0, 1, 2 ** 15 - 100], dtype=np.uint16)
+ s4 = Series([0, 1, 2 ** 16 - 1], dtype=np.uint16)
+ s5 = Series([0, 1, 2 ** 31 - 100], dtype=np.uint32)
+ s6 = Series([0, 1, 2 ** 32 - 1], dtype=np.uint32)
+
+ original = DataFrame({'s0': s0, 's1': s1, 's2': s2, 's3': s3,
+ 's4': s4, 's5': s5, 's6': s6})
+ original.index.name = 'index'
+ expected = original.copy()
+ expected_types = (np.int8, np.int8, np.int16, np.int16, np.int32,
+ np.int32, np.float64)
+ for c, t in zip(expected.columns, expected_types):
+ expected[c] = expected[c].astype(t)
+
+ with tm.ensure_clean() as path:
+ original.to_stata(path)
+ written_and_read_again = self.read_dta(path)
+ written_and_read_again = written_and_read_again.set_index('index')
+ tm.assert_frame_equal(written_and_read_again, expected)
if __name__ == '__main__':
nose.runmodule(argv=[__file__, '-vvs', '-x', '--pdb', '--pdb-failure'],
| Added support for uint (uint8, uint16 and uint32, but not uint64) and bool
datatypes in to_stata.
closes #7097
closes #7365
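The upcast rule can be mimicked standalone; a rough sketch whose dtype table mirrors `conversion_data` in the diff (the helper name is made up for illustration):

```python
import numpy as np

# (original dtype, target if values fit the smaller signed range, target otherwise);
# mirrors the conversion_data table added in pandas/io/stata.py
CONVERSIONS = [(np.bool_, np.int8, np.int8),
               (np.uint8, np.int8, np.int16),
               (np.uint16, np.int16, np.int32),
               (np.uint32, np.int32, np.int64)]

def stata_target_dtype(values):
    # Hypothetical helper: pick the signed dtype the writer would cast to
    for original, small, large in CONVERSIONS:
        if values.dtype == original:
            if values.max() <= np.iinfo(small).max:
                return small
            return large
    return values.dtype.type

assert stata_target_dtype(np.array([0, 1, 100], dtype=np.uint8)) is np.int8
assert stata_target_dtype(np.array([0, 1, 255], dtype=np.uint8)) is np.int16
```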
| https://api.github.com/repos/pandas-dev/pandas/pulls/7726 | 2014-07-10T21:36:07Z | 2014-07-16T00:36:36Z | null | 2014-07-16T18:59:42Z |
DOC: docstring for PeriodIndex | diff --git a/pandas/tseries/period.py b/pandas/tseries/period.py
index ed56bdc827ede..5948fbf8e5fa7 100644
--- a/pandas/tseries/period.py
+++ b/pandas/tseries/period.py
@@ -20,7 +20,7 @@
import pandas.lib as lib
import pandas.tslib as tslib
import pandas.algos as _algos
-from pandas.compat import map, zip, u
+from pandas.compat import zip, u
#---------------
@@ -546,13 +546,13 @@ class PeriodIndex(DatetimeIndexOpsMixin, Int64Index):
end : end value, period-like, optional
If periods is none, generated index will extend to first conforming
period on or just past end argument
- year : int or array, default None
- month : int or array, default None
- quarter : int or array, default None
- day : int or array, default None
- hour : int or array, default None
- minute : int or array, default None
- second : int or array, default None
+ year : int, array, or Series, default None
+ month : int, array, or Series, default None
+ quarter : int, array, or Series, default None
+ day : int, array, or Series, default None
+ hour : int, array, or Series, default None
+ minute : int, array, or Series, default None
+ second : int, array, or Series, default None
tz : object, default None
Timezone for converting datetime64 data to Periods
| null | https://api.github.com/repos/pandas-dev/pandas/pulls/7721 | 2014-07-10T15:47:30Z | 2014-07-10T15:48:55Z | 2014-07-10T15:48:55Z | 2014-07-10T15:48:56Z |
PERF: improve perf of index iteration (GH7683) | diff --git a/doc/source/v0.15.0.txt b/doc/source/v0.15.0.txt
index f305d088e996f..5a348025d0185 100644
--- a/doc/source/v0.15.0.txt
+++ b/doc/source/v0.15.0.txt
@@ -150,6 +150,7 @@ Enhancements
Performance
~~~~~~~~~~~
+- Performance improvements in ``DatetimeIndex.__iter__`` to allow faster iteration (:issue:`7683`)
diff --git a/pandas/core/base.py b/pandas/core/base.py
index 81e13687441de..72fcfbff677ab 100644
--- a/pandas/core/base.py
+++ b/pandas/core/base.py
@@ -390,6 +390,9 @@ def _ops_compat(self, name, op_accessor):
is_year_start = _field_accessor('is_year_start', "Logical indicating if first day of year (defined by frequency)")
is_year_end = _field_accessor('is_year_end', "Logical indicating if last day of year (defined by frequency)")
+ def __iter__(self):
+ return (self._box_func(v) for v in self.asi8)
+
@property
def _box_func(self):
"""
diff --git a/pandas/tseries/index.py b/pandas/tseries/index.py
index 70cd95341611f..dca2947f6a7a6 100644
--- a/pandas/tseries/index.py
+++ b/pandas/tseries/index.py
@@ -1093,6 +1093,27 @@ def __array_finalize__(self, obj):
self.name = getattr(obj, 'name', None)
self._reset_identity()
+ def __iter__(self):
+ """
+ Return an iterator over the boxed values
+
+ Returns
+ -------
+ Timestamps : ndarray
+ """
+
+ # convert in chunks of 10k for efficiency
+ data = self.asi8
+ l = len(self)
+ chunksize = 10000
+ chunks = int(l / chunksize) + 1
+ for i in range(chunks):
+ start_i = i*chunksize
+ end_i = min((i+1)*chunksize,l)
+ converted = tslib.ints_to_pydatetime(data[start_i:end_i], tz=self.tz, offset=self.offset, box=True)
+ for v in converted:
+ yield v
+
def _wrap_union_result(self, other, result):
name = self.name if self.name == other.name else None
if self.tz != other.tz:
@@ -1476,9 +1497,6 @@ def normalize(self):
return DatetimeIndex(new_values, freq='infer', name=self.name,
tz=self.tz)
- def __iter__(self):
- return iter(self.asobject)
-
def searchsorted(self, key, side='left'):
if isinstance(key, np.ndarray):
key = np.array(key, dtype=_NS_DTYPE, copy=False)
diff --git a/pandas/tseries/period.py b/pandas/tseries/period.py
index 5948fbf8e5fa7..8c4bb2f5adc5e 100644
--- a/pandas/tseries/period.py
+++ b/pandas/tseries/period.py
@@ -738,10 +738,6 @@ def astype(self, dtype):
return Index(self.values, dtype)
raise ValueError('Cannot cast PeriodIndex to dtype %s' % dtype)
- def __iter__(self):
- for val in self.values:
- yield Period(ordinal=val, freq=self.freq)
-
def searchsorted(self, key, side='left'):
if isinstance(key, compat.string_types):
key = Period(key, freq=self.freq).ordinal
diff --git a/pandas/tseries/tests/test_timezones.py b/pandas/tseries/tests/test_timezones.py
index 9c374716a84ee..531724cdb6837 100644
--- a/pandas/tseries/tests/test_timezones.py
+++ b/pandas/tseries/tests/test_timezones.py
@@ -1027,7 +1027,6 @@ def test_intersection(self):
def test_timestamp_equality_different_timezones(self):
utc_range = date_range('1/1/2000', periods=20, tz='UTC')
-
eastern_range = utc_range.tz_convert('US/Eastern')
berlin_range = utc_range.tz_convert('Europe/Berlin')
diff --git a/pandas/tslib.pyx b/pandas/tslib.pyx
index 2fd71521b24d5..c06d8a3ba9a05 100644
--- a/pandas/tslib.pyx
+++ b/pandas/tslib.pyx
@@ -74,41 +74,72 @@ try:
except NameError: # py3
basestring = str
-def ints_to_pydatetime(ndarray[int64_t] arr, tz=None):
+cdef inline object create_timestamp_from_ts(int64_t value, pandas_datetimestruct dts, object tz, object offset):
+ cdef _Timestamp ts_base
+ ts_base = _Timestamp.__new__(Timestamp, dts.year, dts.month,
+ dts.day, dts.hour, dts.min,
+ dts.sec, dts.us, tz)
+
+ ts_base.value = value
+ ts_base.offset = offset
+ ts_base.nanosecond = dts.ps / 1000
+
+ return ts_base
+
+cdef inline object create_datetime_from_ts(int64_t value, pandas_datetimestruct dts, object tz, object offset):
+ return datetime(dts.year, dts.month, dts.day, dts.hour,
+ dts.min, dts.sec, dts.us, tz)
+
+def ints_to_pydatetime(ndarray[int64_t] arr, tz=None, offset=None, box=False):
+ # convert an i8 repr to an ndarray of datetimes or Timestamp (if box == True)
+
cdef:
Py_ssize_t i, n = len(arr)
pandas_datetimestruct dts
+ object dt
+ int64_t value
ndarray[object] result = np.empty(n, dtype=object)
+ object (*func_create)(int64_t, pandas_datetimestruct, object, object)
+
+ if box and util.is_string_object(offset):
+ from pandas.tseries.frequencies import to_offset
+ offset = to_offset(offset)
+
+ if box:
+ func_create = create_timestamp_from_ts
+ else:
+ func_create = create_datetime_from_ts
if tz is not None:
if _is_utc(tz):
for i in range(n):
- if arr[i] == iNaT:
- result[i] = np.nan
+ value = arr[i]
+ if value == iNaT:
+ result[i] = NaT
else:
- pandas_datetime_to_datetimestruct(arr[i], PANDAS_FR_ns, &dts)
- result[i] = datetime(dts.year, dts.month, dts.day, dts.hour,
- dts.min, dts.sec, dts.us, tz)
+ pandas_datetime_to_datetimestruct(value, PANDAS_FR_ns, &dts)
+ result[i] = func_create(value, dts, tz, offset)
elif _is_tzlocal(tz) or _is_fixed_offset(tz):
for i in range(n):
- if arr[i] == iNaT:
- result[i] = np.nan
+ value = arr[i]
+ if value == iNaT:
+ result[i] = NaT
else:
- pandas_datetime_to_datetimestruct(arr[i], PANDAS_FR_ns, &dts)
- dt = datetime(dts.year, dts.month, dts.day, dts.hour,
- dts.min, dts.sec, dts.us, tz)
+ pandas_datetime_to_datetimestruct(value, PANDAS_FR_ns, &dts)
+ dt = func_create(value, dts, tz, offset)
result[i] = dt + tz.utcoffset(dt)
else:
trans = _get_transitions(tz)
deltas = _get_deltas(tz)
for i in range(n):
- if arr[i] == iNaT:
- result[i] = np.nan
+ value = arr[i]
+ if value == iNaT:
+ result[i] = NaT
else:
# Adjust datetime64 timestamp, recompute datetimestruct
- pos = trans.searchsorted(arr[i], side='right') - 1
+ pos = trans.searchsorted(value, side='right') - 1
if _treat_tz_as_pytz(tz):
# find right representation of dst etc in pytz timezone
new_tz = tz._tzinfos[tz._transition_info[pos]]
@@ -116,19 +147,17 @@ def ints_to_pydatetime(ndarray[int64_t] arr, tz=None):
# no zone-name change for dateutil tzs - dst etc represented in single object.
new_tz = tz
- pandas_datetime_to_datetimestruct(arr[i] + deltas[pos],
- PANDAS_FR_ns, &dts)
- result[i] = datetime(dts.year, dts.month, dts.day, dts.hour,
- dts.min, dts.sec, dts.us,
- new_tz)
+ pandas_datetime_to_datetimestruct(value + deltas[pos], PANDAS_FR_ns, &dts)
+ result[i] = func_create(value, dts, new_tz, offset)
else:
for i in range(n):
- if arr[i] == iNaT:
- result[i] = np.nan
+
+ value = arr[i]
+ if value == iNaT:
+ result[i] = NaT
else:
- pandas_datetime_to_datetimestruct(arr[i], PANDAS_FR_ns, &dts)
- result[i] = datetime(dts.year, dts.month, dts.day, dts.hour,
- dts.min, dts.sec, dts.us)
+ pandas_datetime_to_datetimestruct(value, PANDAS_FR_ns, &dts)
+ result[i] = func_create(value, dts, None, offset)
return result
@@ -183,6 +212,7 @@ class Timestamp(_Timestamp):
def utcnow(cls):
return cls.now('UTC')
+
def __new__(cls, object ts_input, object offset=None, tz=None, unit=None):
cdef _TSObject ts
cdef _Timestamp ts_base
diff --git a/vb_suite/timeseries.py b/vb_suite/timeseries.py
index 2b63eeaf99550..bb55b88cf1f34 100644
--- a/vb_suite/timeseries.py
+++ b/vb_suite/timeseries.py
@@ -333,3 +333,28 @@ def date_range(start=None, end=None, periods=None, freq=None):
timeseries_is_month_start = Benchmark('rng.is_month_start', setup,
start_date=datetime(2014, 4, 1))
+
+#----------------------------------------------------------------------
+# iterate over DatetimeIndex/PeriodIndex
+setup = common_setup + """
+N = 1000000
+M = 10000
+idx1 = date_range(start='20140101', freq='T', periods=N)
+idx2 = period_range(start='20140101', freq='T', periods=N)
+
+def iter_n(iterable, n=None):
+ i = 0
+ for _ in iterable:
+ i += 1
+ if n is not None and i > n:
+ break
+"""
+
+timeseries_iter_datetimeindex = Benchmark('iter_n(idx1)', setup)
+
+timeseries_iter_periodindex = Benchmark('iter_n(idx2)', setup)
+
+timeseries_iter_datetimeindex_preexit = Benchmark('iter_n(idx1, M)', setup)
+
+timeseries_iter_periodindex_preexit = Benchmark('iter_n(idx2, M)', setup)
+
| closes #7683
`PeriodIndex` creation still happens in Python space, so there is not much improvement there
```
-------------------------------------------------------------------------------
Test name | head[ms] | base[ms] | ratio |
-------------------------------------------------------------------------------
timeseries_iter_datetimeindex_preexit | 12.7254 | 3657.9890 | 0.0035 |
timeseries_iter_datetimeindex | 679.8913 | 3726.6284 | 0.1824 |
timeseries_iter_periodindex_preexit | 69.0370 | 62.8881 | 1.0978 |
timeseries_iter_periodindex | 6941.9633 | 6024.2947 | 1.1523 |
-------------------------------------------------------------------------------
Test name | head[ms] | base[ms] | ratio |
-------------------------------------------------------------------------------
Ratio < 1.0 means the target commit is faster than the baseline.
Seed used: 1234
Target [142718e] : PERF: DatetimeIndex.__iter__ now uses ints_to_pydatetime with boxing
Base [f9493ea] : Merge pull request #7713 from jorisvandenbossche/doc-fixes3
DOC: fix doc build warnings
```
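For context, iterating a `DatetimeIndex` boxes each value into a `Timestamp`; the `*_preexit` benchmarks above measure stopping iteration early, which used to pay for converting the whole index up front. A tiny illustrative sketch (5 elements instead of the benchmark's 1M):

```python
import pandas as pd

# Iterating a DatetimeIndex yields boxed Timestamp objects;
# before this change the whole index was converted to an array
# of Timestamps eagerly, which made early-exit loops slow.
idx = pd.date_range(start='2014-01-01', freq='min', periods=5)

first = next(iter(idx))
print(type(first).__name__)  # Timestamp
```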
| https://api.github.com/repos/pandas-dev/pandas/pulls/7720 | 2014-07-10T14:50:32Z | 2014-07-16T00:29:51Z | 2014-07-16T00:29:51Z | 2014-07-16T00:30:13Z |
SQL: suppress warning for BIGINT with sqlite and sqlalchemy<0.8.2 (GH7433) | diff --git a/pandas/io/sql.py b/pandas/io/sql.py
index 9a479afd86cad..23ca80d771df9 100644
--- a/pandas/io/sql.py
+++ b/pandas/io/sql.py
@@ -29,6 +29,37 @@ class DatabaseError(IOError):
#------------------------------------------------------------------------------
# Helper functions
+_SQLALCHEMY_INSTALLED = None
+
+def _is_sqlalchemy_engine(con):
+ global _SQLALCHEMY_INSTALLED
+ if _SQLALCHEMY_INSTALLED is None:
+ try:
+ import sqlalchemy
+ _SQLALCHEMY_INSTALLED = True
+
+ from distutils.version import LooseVersion
+ ver = LooseVersion(sqlalchemy.__version__)
+ # For sqlalchemy versions < 0.8.2, the BIGINT type is recognized
+ # for a sqlite engine, which results in a warning when trying to
+ # read/write a DataFrame with int64 values. (GH7433)
+ if ver < '0.8.2':
+ from sqlalchemy import BigInteger
+ from sqlalchemy.ext.compiler import compiles
+
+ @compiles(BigInteger, 'sqlite')
+ def compile_big_int_sqlite(type_, compiler, **kw):
+ return 'INTEGER'
+ except ImportError:
+ _SQLALCHEMY_INSTALLED = False
+
+ if _SQLALCHEMY_INSTALLED:
+ import sqlalchemy
+ return isinstance(con, sqlalchemy.engine.Engine)
+ else:
+ return False
+
+
def _convert_params(sql, params):
"""convert sql and params args to DBAPI2.0 compliant format"""
args = [sql]
@@ -76,17 +107,6 @@ def _parse_date_columns(data_frame, parse_dates):
return data_frame
-def _is_sqlalchemy_engine(con):
- try:
- import sqlalchemy
- if isinstance(con, sqlalchemy.engine.Engine):
- return True
- else:
- return False
- except ImportError:
- return False
-
-
def execute(sql, con, cur=None, params=None):
"""
Execute the given SQL query using the provided connection object.
@@ -271,8 +291,10 @@ def read_sql_table(table_name, con, index_col=None, coerce_float=True,
read_sql_query : Read SQL query into a DataFrame.
read_sql
-
"""
+ if not _is_sqlalchemy_engine(con):
+ raise NotImplementedError("read_sql_table only supported for "
+ "SQLAlchemy engines.")
import sqlalchemy
from sqlalchemy.schema import MetaData
meta = MetaData(con)
diff --git a/pandas/io/tests/test_sql.py b/pandas/io/tests/test_sql.py
index 122b80c3f0076..eadcb2c9f1fdb 100644
--- a/pandas/io/tests/test_sql.py
+++ b/pandas/io/tests/test_sql.py
@@ -1079,6 +1079,16 @@ def test_default_date_load(self):
self.assertFalse(issubclass(df.DateCol.dtype.type, np.datetime64),
"DateCol loaded with incorrect type")
+ def test_bigint_warning(self):
+ # test no warning for BIGINT (to support int64) is raised (GH7433)
+ df = DataFrame({'a':[1,2]}, dtype='int64')
+ df.to_sql('test_bigintwarning', self.conn, index=False)
+
+ with warnings.catch_warnings(record=True) as w:
+ warnings.simplefilter("always")
+ sql.read_sql_table('test_bigintwarning', self.conn)
+ self.assertEqual(len(w), 0, "Warning triggered for other table")
+
class TestMySQLAlchemy(_TestSQLAlchemy):
"""
| From discussion here: https://github.com/pydata/pandas/pull/7634#issuecomment-48111148
Due to switching from Integer to BigInteger (to support int64 on some database systems), reading a table from sqlite with integers leads to a warning when you have a sqlalchemy version below 0.8.2.
I know it is very, very late and goes against all reservations about putting in new stuff just before a release, but after some more consideration, I think we should include this (or at least something that fixes it, and I think this does). @jreback, you said not to worry about it (it is just a warning), but the sqlalchemy release that fixes it is only about a year old, and this is likely one of the first things most users will try with the sqlalchemy functions (writing/reading a simple dataframe with some numbers to sqlite), so they should not get a warning they don't understand.
I tested it locally with sqlalchemy 0.7.8 (on Windows), and on travis it is tested with 0.7.1 (the py2.6 build) and there also the warnings disappeared.
What do you think?
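As an aside, the new `test_bigint_warning` relies on the standard `warnings.catch_warnings(record=True)` pattern to assert that nothing fires; a minimal stdlib-only sketch of that pattern (the `collect_warnings` helper is just for illustration, not part of the PR):

```python
import warnings

def collect_warnings(func):
    # Run func while recording (instead of printing) any warnings,
    # mirroring the pattern used in test_bigint_warning.
    with warnings.catch_warnings(record=True) as caught:
        warnings.simplefilter("always")
        result = func()
    return result, caught

quiet, w1 = collect_warnings(lambda: 1 + 1)
noisy, w2 = collect_warnings(
    lambda: warnings.warn("BIGINT not supported") or 42)
print(len(w1), len(w2))  # 0 1
```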
| https://api.github.com/repos/pandas-dev/pandas/pulls/7719 | 2014-07-10T14:24:58Z | 2014-07-10T22:34:56Z | 2014-07-10T22:34:56Z | 2014-07-10T22:35:31Z |
CLN: Simplify LinePlot flow | diff --git a/pandas/tools/plotting.py b/pandas/tools/plotting.py
index d3ea809b79b76..6124da58995d8 100644
--- a/pandas/tools/plotting.py
+++ b/pandas/tools/plotting.py
@@ -755,9 +755,9 @@ class MPLPlot(object):
_default_rot = 0
_pop_attributes = ['label', 'style', 'logy', 'logx', 'loglog',
- 'mark_right']
+ 'mark_right', 'stacked']
_attr_defaults = {'logy': False, 'logx': False, 'loglog': False,
- 'mark_right': True}
+ 'mark_right': True, 'stacked': False}
def __init__(self, data, kind=None, by=None, subplots=False, sharex=True,
sharey=False, use_index=True,
@@ -1080,7 +1080,6 @@ def _make_legend(self):
for ax in self.axes:
ax.legend(loc='best')
-
def _get_ax_legend(self, ax):
leg = ax.get_legend()
other_ax = (getattr(ax, 'right_ax', None) or
@@ -1139,12 +1138,22 @@ def _get_plot_function(self):
Returns the matplotlib plotting function (plot or errorbar) based on
the presence of errorbar keywords.
'''
-
- if all(e is None for e in self.errors.values()):
- plotf = self.plt.Axes.plot
- else:
- plotf = self.plt.Axes.errorbar
-
+ errorbar = any(e is not None for e in self.errors.values())
+ def plotf(ax, x, y, style=None, **kwds):
+ mask = com.isnull(y)
+ if mask.any():
+ y = np.ma.array(y)
+ y = np.ma.masked_where(mask, y)
+
+ if errorbar:
+ return self.plt.Axes.errorbar(ax, x, y, **kwds)
+ else:
+ # prevent style kwarg from going to errorbar, where it is unsupported
+ if style is not None:
+ args = (ax, x, y, style)
+ else:
+ args = (ax, x, y)
+ return self.plt.Axes.plot(*args, **kwds)
return plotf
def _get_index_name(self):
@@ -1472,11 +1481,9 @@ def _post_plot_logic(self):
class LinePlot(MPLPlot):
def __init__(self, data, **kwargs):
- self.stacked = kwargs.pop('stacked', False)
- if self.stacked:
- data = data.fillna(value=0)
-
MPLPlot.__init__(self, data, **kwargs)
+ if self.stacked:
+ self.data = self.data.fillna(value=0)
self.x_compat = plot_params['x_compat']
if 'x_compat' in self.kwds:
self.x_compat = bool(self.kwds.pop('x_compat'))
@@ -1533,56 +1540,39 @@ def _is_ts_plot(self):
return not self.x_compat and self.use_index and self._use_dynamic_x()
def _make_plot(self):
- self._pos_prior = np.zeros(len(self.data))
- self._neg_prior = np.zeros(len(self.data))
+ self._initialize_prior(len(self.data))
if self._is_ts_plot():
data = self._maybe_convert_index(self.data)
- self._make_ts_plot(data)
+ x = data.index # dummy, not used
+ plotf = self._get_ts_plot_function()
+ it = self._iter_data(data=data, keep_index=True)
else:
x = self._get_xticks(convert_period=True)
-
plotf = self._get_plot_function()
- colors = self._get_colors()
-
- for i, (label, y) in enumerate(self._iter_data()):
- ax = self._get_ax(i)
- style = self._get_style(i, label)
- kwds = self.kwds.copy()
- self._maybe_add_color(colors, kwds, style, i)
+ it = self._iter_data()
- errors = self._get_errorbars(label=label, index=i)
- kwds = dict(kwds, **errors)
-
- label = com.pprint_thing(label) # .encode('utf-8')
- kwds['label'] = label
-
- y_values = self._get_stacked_values(y, label)
-
- if not self.stacked:
- mask = com.isnull(y_values)
- if mask.any():
- y_values = np.ma.array(y_values)
- y_values = np.ma.masked_where(mask, y_values)
+ colors = self._get_colors()
+ for i, (label, y) in enumerate(it):
+ ax = self._get_ax(i)
+ style = self._get_style(i, label)
+ kwds = self.kwds.copy()
+ self._maybe_add_color(colors, kwds, style, i)
- # prevent style kwarg from going to errorbar, where it is unsupported
- if style is not None and plotf.__name__ != 'errorbar':
- args = (ax, x, y_values, style)
- else:
- args = (ax, x, y_values)
+ errors = self._get_errorbars(label=label, index=i)
+ kwds = dict(kwds, **errors)
- newlines = plotf(*args, **kwds)
- self._add_legend_handle(newlines[0], label, index=i)
+ label = com.pprint_thing(label) # .encode('utf-8')
+ kwds['label'] = label
+ y_values = self._get_stacked_values(y, label)
- if self.stacked and not self.subplots:
- if (y >= 0).all():
- self._pos_prior += y
- elif (y <= 0).all():
- self._neg_prior += y
+ newlines = plotf(ax, x, y_values, style=style, **kwds)
+ self._update_prior(y)
+ self._add_legend_handle(newlines[0], label, index=i)
- lines = _get_all_lines(ax)
- left, right = _get_xlim(lines)
- ax.set_xlim(left, right)
+ lines = _get_all_lines(ax)
+ left, right = _get_xlim(lines)
+ ax.set_xlim(left, right)
def _get_stacked_values(self, y, label):
if self.stacked:
@@ -1599,46 +1589,26 @@ def _get_stacked_values(self, y, label):
def _get_ts_plot_function(self):
from pandas.tseries.plotting import tsplot
plotf = self._get_plot_function()
-
- def _plot(data, ax, label, style, **kwds):
- # errorbar function does not support style argument
- if plotf.__name__ == 'errorbar':
- lines = tsplot(data, plotf, ax=ax, label=label,
- **kwds)
- return lines
- else:
- lines = tsplot(data, plotf, ax=ax, label=label,
- style=style, **kwds)
- return lines
+ def _plot(ax, x, data, style=None, **kwds):
+ # accept x to be consistent with normal plot func,
+ # x is not passed to tsplot as it uses data.index as x coordinate
+ lines = tsplot(data, plotf, ax=ax, style=style, **kwds)
+ return lines
return _plot
- def _make_ts_plot(self, data, **kwargs):
- colors = self._get_colors()
- plotf = self._get_ts_plot_function()
-
- it = self._iter_data(data=data, keep_index=True)
- for i, (label, y) in enumerate(it):
- ax = self._get_ax(i)
- style = self._get_style(i, label)
- kwds = self.kwds.copy()
-
- self._maybe_add_color(colors, kwds, style, i)
-
- errors = self._get_errorbars(label=label, index=i, xerr=False)
- kwds = dict(kwds, **errors)
-
- label = com.pprint_thing(label)
-
- y_values = self._get_stacked_values(y, label)
-
- newlines = plotf(y_values, ax, label, style, **kwds)
- self._add_legend_handle(newlines[0], label, index=i)
+ def _initialize_prior(self, n):
+ self._pos_prior = np.zeros(n)
+ self._neg_prior = np.zeros(n)
- if self.stacked and not self.subplots:
- if (y >= 0).all():
- self._pos_prior += y
- elif (y <= 0).all():
- self._neg_prior += y
+ def _update_prior(self, y):
+ if self.stacked and not self.subplots:
+ # tsplot resample may changedata length
+ if len(self._pos_prior) != len(y):
+ self._initialize_prior(len(y))
+ if (y >= 0).all():
+ self._pos_prior += y
+ elif (y <= 0).all():
+ self._neg_prior += y
def _maybe_convert_index(self, data):
# tsplot converts automatically, but don't want to convert index
@@ -1707,13 +1677,14 @@ def _get_plot_function(self):
if self.logy or self.loglog:
raise ValueError("Log-y scales are not supported in area plot")
else:
- f = LinePlot._get_plot_function(self)
-
- def plotf(*args, **kwds):
- lines = f(*args, **kwds)
+ f = MPLPlot._get_plot_function(self)
+ def plotf(ax, x, y, style=None, **kwds):
+ lines = f(ax, x, y, style=style, **kwds)
+ # get data from the line
# insert fill_between starting point
- y = args[2]
+ xdata, y_values = lines[0].get_data(orig=False)
+
if (y >= 0).all():
start = self._pos_prior
elif (y <= 0).all():
@@ -1721,16 +1692,10 @@ def plotf(*args, **kwds):
else:
start = np.zeros(len(y))
- # get x data from the line
- # to retrieve x coodinates of tsplot
- xdata = lines[0].get_data()[0]
- # remove style
- args = (args[0], xdata, start, y)
-
if not 'color' in kwds:
kwds['color'] = lines[0].get_color()
- self.plt.Axes.fill_between(*args, **kwds)
+ self.plt.Axes.fill_between(ax, xdata, start, y_values, **kwds)
return lines
return plotf
@@ -1746,15 +1711,6 @@ def _add_legend_handle(self, handle, label, index=None):
def _post_plot_logic(self):
LinePlot._post_plot_logic(self)
- if self._is_ts_plot():
- pass
- else:
- if self.xlim is None:
- for ax in self.axes:
- lines = _get_all_lines(ax)
- left, right = _get_xlim(lines)
- ax.set_xlim(left, right)
-
if self.ylim is None:
if (self.data >= 0).all().all():
for ax in self.axes:
@@ -1769,12 +1725,8 @@ class BarPlot(MPLPlot):
_default_rot = {'bar': 90, 'barh': 0}
def __init__(self, data, **kwargs):
- self.stacked = kwargs.pop('stacked', False)
-
self.bar_width = kwargs.pop('width', 0.5)
-
pos = kwargs.pop('position', 0.5)
-
kwargs.setdefault('align', 'center')
self.tick_pos = np.arange(len(data))
diff --git a/pandas/tseries/plotting.py b/pandas/tseries/plotting.py
index 6031482fd9927..33a14403b0f08 100644
--- a/pandas/tseries/plotting.py
+++ b/pandas/tseries/plotting.py
@@ -18,8 +18,6 @@
from pandas.tseries.converter import (PeriodConverter, TimeSeries_DateLocator,
TimeSeries_DateFormatter)
-from pandas.tools.plotting import _get_all_lines, _get_xlim
-
#----------------------------------------------------------------------
# Plotting functions and monkey patches
@@ -59,25 +57,15 @@ def tsplot(series, plotf, **kwargs):
# Set ax with freq info
_decorate_axes(ax, freq, kwargs)
- # mask missing values
- args = _maybe_mask(series)
-
# how to make sure ax.clear() flows through?
if not hasattr(ax, '_plot_data'):
ax._plot_data = []
ax._plot_data.append((series, kwargs))
- # styles
- style = kwargs.pop('style', None)
- if style is not None:
- args.append(style)
-
- lines = plotf(ax, *args, **kwargs)
+ lines = plotf(ax, series.index, series.values, **kwargs)
# set date formatter, locators and rescale limits
format_dateaxis(ax, ax.freq)
- left, right = _get_xlim(_get_all_lines(ax))
- ax.set_xlim(left, right)
# x and y coord info
ax.format_coord = lambda t, y: ("t = {0} "
@@ -165,8 +153,7 @@ def _replot_ax(ax, freq, plotf, kwargs):
idx = series.index.asfreq(freq, how='S')
series.index = idx
ax._plot_data.append(series)
- args = _maybe_mask(series)
- lines.append(plotf(ax, *args, **kwds)[0])
+ lines.append(plotf(ax, series.index, series.values, **kwds)[0])
labels.append(com.pprint_thing(series.name))
return lines, labels
@@ -184,17 +171,6 @@ def _decorate_axes(ax, freq, kwargs):
ax.date_axis_info = None
-def _maybe_mask(series):
- mask = isnull(series)
- if mask.any():
- masked_array = np.ma.array(series.values)
- masked_array = np.ma.masked_where(mask, masked_array)
- args = [series.index, masked_array]
- else:
- args = [series.index, series.values]
- return args
-
-
def _get_freq(ax, series):
# get frequency from data
freq = getattr(series.index, 'freq', None)
| Related to #7670. Made `LinePlot` use a single plotting flow for both `x_compat` and `tsplot`.
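One piece this consolidates is the NaN masking that used to be duplicated between the two paths (it now lives once inside the shared `plotf` closure); roughly, with `np.isnan` standing in for `com.isnull` in a float-only sketch and `mask_missing` as an illustrative name:

```python
import numpy as np

def mask_missing(y):
    # Mask missing values so matplotlib leaves gaps at NaNs
    # instead of connecting the line across them.
    y = np.asarray(y, dtype=float)
    mask = np.isnan(y)
    if mask.any():
        y = np.ma.masked_where(mask, np.ma.array(y))
    return y

y = mask_missing([1.0, np.nan, 3.0])
print(y.count())  # 2 unmasked points
```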
| https://api.github.com/repos/pandas-dev/pandas/pulls/7717 | 2014-07-10T13:02:54Z | 2014-07-21T13:14:05Z | 2014-07-21T13:14:05Z | 2014-07-23T11:08:48Z |
DOC: clean up 0.14.1 whatsnew file | diff --git a/doc/source/release.rst b/doc/source/release.rst
index d6fbc3a9d8896..fb06dc4d61814 100644
--- a/doc/source/release.rst
+++ b/doc/source/release.rst
@@ -50,9 +50,20 @@ pandas 0.14.1
**Release date:** (July 11, 2014)
-This is a minor release from 0.14.0 and includes a number of API changes, several new features, enhancements, and
+This is a minor release from 0.14.0 and includes a small number of API changes, several new features, enhancements, and
performance improvements along with a large number of bug fixes.
+Highlights include:
+
+- New methods :meth:`~pandas.DataFrame.select_dtypes` to select columns
+ based on the dtype and :meth:`~pandas.Series.sem` to calculate the
+ standard error of the mean.
+- Support for dateutil timezones (see :ref:`docs <timeseries.timezone>`).
+- Support for ignoring full line comments in the :func:`~pandas.read_csv`
+ text parser.
+- New documentation section on :ref:`Options and Settings <options>`.
+- Lots of bug fixes.
+
See the :ref:`v0.14.1 Whatsnew <whatsnew_0141>` overview or the issue tracker on GitHub for an extensive list
of all API changes, enhancements and bugs that have been fixed in 0.14.1.
diff --git a/doc/source/v0.14.1.txt b/doc/source/v0.14.1.txt
index 0e6c98a1a8d23..2b5f8b2dfbb38 100644
--- a/doc/source/v0.14.1.txt
+++ b/doc/source/v0.14.1.txt
@@ -7,16 +7,21 @@ This is a minor release from 0.14.0 and includes a small number of API changes,
enhancements, and performance improvements along with a large number of bug fixes. We recommend that all
users upgrade to this version.
-- New Documentation section on :ref:`Options and Settings <options>`
+- Highlights include:
-- :ref:`Enhancements <whatsnew_0141.enhancements>`
+ - New methods :meth:`~pandas.DataFrame.select_dtypes` to select columns
+ based on the dtype and :meth:`~pandas.Series.sem` to calculate the
+ standard error of the mean.
+ - Support for dateutil timezones (see :ref:`docs <timeseries.timezone>`).
+ - Support for ignoring full line comments in the :func:`~pandas.read_csv`
+ text parser.
+ - New documentation section on :ref:`Options and Settings <options>`.
+ - Lots of bug fixes.
+- :ref:`Enhancements <whatsnew_0141.enhancements>`
- :ref:`API Changes <whatsnew_0141.api>`
-
- :ref:`Performance Improvements <whatsnew_0141.performance>`
-
- :ref:`Experimental Changes <whatsnew_0141.experimental>`
-
- :ref:`Bug Fixes <whatsnew_0141.bug_fixes>`
.. _whatsnew_0141.api:
@@ -24,22 +29,6 @@ users upgrade to this version.
API changes
~~~~~~~~~~~
-- All ``offsets`` suppports ``normalize`` keyword to specify whether ``offsets.apply``, ``rollforward`` and ``rollback`` resets time (hour, minute, etc) or not (default ``False``, preserves time) (:issue:`7156`)
-
-
- .. ipython:: python
-
- import pandas.tseries.offsets as offsets
-
- day = offsets.Day()
- day.apply(Timestamp('2014-01-01 09:00'))
-
- day = offsets.Day(normalize=True)
- day.apply(Timestamp('2014-01-01 09:00'))
-
-- Improved inference of datetime/timedelta with mixed null objects. Regression from 0.13.1 in interpretation of an object Index
- with all null elements (:issue:`7431`)
-
- Openpyxl now raises a ValueError on construction of the openpyxl writer
instead of warning on pandas import (:issue:`7284`).
@@ -47,68 +36,85 @@ API changes
containing ``NaN`` values - now also has ``dtype=object`` instead of
``float`` (:issue:`7242`)
-- ``StringMethods`` now work on empty Series (:issue:`7242`)
- ``Period`` objects no longer raise a ``TypeError`` when compared using ``==``
with another object that *isn't* a ``Period``. Instead
when comparing a ``Period`` with another object using ``==`` if the other
object isn't a ``Period`` ``False`` is returned. (:issue:`7376`)
-- Bug in ``.loc`` performing fallback integer indexing with ``object`` dtype indices (:issue:`7496`)
-- Add back ``#N/A N/A`` as a default NA value in text parsing, (regresion from 0.12) (:issue:`5521`)
-- Raise a ``TypeError`` on inplace-setting with a ``.where`` and a non ``np.nan`` value as this is inconsistent
- with a set-item expression like ``df[mask] = None`` (:issue:`7656`)
-
-.. _whatsnew_0141.prior_deprecations:
-
-Prior Version Deprecations/Changes
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+- Previously, the behaviour on resetting the time or not in
+ ``offsets.apply``, ``rollforward`` and ``rollback`` operations differed
+ between offsets. With the support of the ``normalize`` keyword for all offsets(see
+ below) with a default value of False (preserve time), the behaviour changed for certain
+ offsets (BusinessMonthBegin, MonthEnd, BusinessMonthEnd, CustomBusinessMonthEnd,
+ BusinessYearBegin, LastWeekOfMonth, FY5253Quarter, LastWeekOfMonth, Easter):
-There are no prior version deprecations that are taking effect as of 0.14.1.
+ .. code-block:: python
-.. _whatsnew_0141.deprecations:
+ In [6]: from pandas.tseries import offsets
-Deprecations
-~~~~~~~~~~~~
+ In [7]: d = pd.Timestamp('2014-01-01 09:00')
-There are no deprecations that are taking effect as of 0.14.1.
+ # old behaviour < 0.14.1
+ In [8]: d + offsets.MonthEnd()
+ Out[8]: Timestamp('2014-01-31 00:00:00')
-.. _whatsnew_0141.enhancements:
-
-Enhancements
-~~~~~~~~~~~~
-
-
-
-- Add ``dropna`` argument to ``value_counts`` and ``nunique`` (:issue:`5569`).
-- Add ``NotImplementedError`` for simultaneous use of ``chunksize`` and ``nrows``
- for read_csv() (:issue:`6774`).
+ Starting from 0.14.1 all offsets preserve time by default. The old
+ behaviour can be obtained with ``normalize=True``
-- ``PeriodIndex`` is represented as the same format as ``DatetimeIndex`` (:issue:`7601`)
+ .. ipython:: python
+ :suppress:
+ import pandas.tseries.offsets as offsets
+ d = pd.Timestamp('2014-01-01 09:00')
+ .. ipython:: python
+ # new behaviour
+ d + offsets.MonthEnd()
+ d + offsets.MonthEnd(normalize=True)
+ Note that for the other offsets the default behaviour did not change.
+- Add back ``#N/A N/A`` as a default NA value in text parsing, (regresion from 0.12) (:issue:`5521`)
+- Raise a ``TypeError`` on inplace-setting with a ``.where`` and a non ``np.nan`` value as this is inconsistent
+ with a set-item expression like ``df[mask] = None`` (:issue:`7656`)
+.. _whatsnew_0141.enhancements:
+Enhancements
+~~~~~~~~~~~~
+- Add ``dropna`` argument to ``value_counts`` and ``nunique`` (:issue:`5569`).
- Add :meth:`~pandas.DataFrame.select_dtypes` method to allow selection of
columns based on dtype (:issue:`7316`). See :ref:`the docs <basics.selectdtypes>`.
+- All ``offsets`` suppports the ``normalize`` keyword to specify whether
+ ``offsets.apply``, ``rollforward`` and ``rollback`` resets the time (hour,
+ minute, etc) or not (default ``False``, preserves time) (:issue:`7156`):
+ .. ipython:: python
+ import pandas.tseries.offsets as offsets
+ day = offsets.Day()
+ day.apply(Timestamp('2014-01-01 09:00'))
+
+ day = offsets.Day(normalize=True)
+ day.apply(Timestamp('2014-01-01 09:00'))
+
+- ``PeriodIndex`` is represented as the same format as ``DatetimeIndex`` (:issue:`7601`)
+- ``StringMethods`` now work on empty Series (:issue:`7242`)
- The file parsers ``read_csv`` and ``read_table`` now ignore line comments provided by
the parameter `comment`, which accepts only a single character for the C reader.
In particular, they allow for comments before file data begins (:issue:`2685`)
+- Add ``NotImplementedError`` for simultaneous use of ``chunksize`` and ``nrows``
+ for read_csv() (:issue:`6774`).
- Tests for basic reading of public S3 buckets now exist (:issue:`7281`).
- ``read_html`` now sports an ``encoding`` argument that is passed to the
underlying parser library. You can use this to read non-ascii encoded web
pages (:issue:`7323`).
- ``read_excel`` now supports reading from URLs in the same way
that ``read_csv`` does. (:issue:`6809`)
-
-
- Support for dateutil timezones, which can now be used in the same way as
pytz timezones across pandas. (:issue:`4688`)
@@ -125,16 +131,13 @@ Enhancements
- Add ``nlargest`` and ``nsmallest`` to the ``Series`` ``groupby`` whitelist,
which means you can now use these methods on a ``SeriesGroupBy`` object
(:issue:`7053`).
-
-
-
- All offsets ``apply``, ``rollforward`` and ``rollback`` can now handle ``np.datetime64``, previously results in ``ApplyTypeError`` (:issue:`7452`)
-
- ``Period`` and ``PeriodIndex`` can contain ``NaT`` in its values (:issue:`7485`)
- Support pickling ``Series``, ``DataFrame`` and ``Panel`` objects with
non-unique labels along *item* axis (``index``, ``columns`` and ``items``
respectively) (:issue:`7370`).
-
+- Improved inference of datetime/timedelta with mixed null objects. Regression from 0.13.1 in interpretation of an object Index
+ with all null elements (:issue:`7431`)
.. _whatsnew_0141.performance:
@@ -147,25 +150,20 @@ Performance
- Improvements in `MultiIndex.from_product` for large iterables (:issue:`7627`)
-
.. _whatsnew_0141.experimental:
-
-
-
-
Experimental
~~~~~~~~~~~~
- ``pandas.io.data.Options`` has a new method, ``get_all_data`` method, and now consistently returns a
multi-indexed ``DataFrame``, see :ref:`the docs <remote_data.yahoo_options>`. (:issue:`5602`)
-
- ``io.gbq.read_gbq`` and ``io.gbq.to_gbq`` were refactored to remove the
dependency on the Google ``bq.py`` command line client. This submodule
now uses ``httplib2`` and the Google ``apiclient`` and ``oauth2client`` API client
libraries which should be more stable and, therefore, reliable than
``bq.py``. See :ref:`the docs <io.bigquery>`. (:issue:`6937`).
+
.. _whatsnew_0141.bug_fixes:
Bug Fixes
@@ -185,10 +183,7 @@ Bug Fixes
- Bug in plotting subplots with ``DataFrame.plot``, ``hist`` clears passed ``ax`` even if the number of subplots is one (:issue:`7391`).
- Bug in plotting subplots with ``DataFrame.boxplot`` with ``by`` kw raises ``ValueError`` if the number of subplots exceeds 1 (:issue:`7391`).
- Bug in subplots displays ``ticklabels`` and ``labels`` in different rule (:issue:`5897`)
-
- Bug in ``Panel.apply`` with a multi-index as an axis (:issue:`7469`)
-
-
- Bug in ``DatetimeIndex.insert`` doesn't preserve ``name`` and ``tz`` (:issue:`7299`)
- Bug in ``DatetimeIndex.asobject`` doesn't preserve ``name`` (:issue:`7299`)
- Bug in multi-index slicing with datetimelike ranges (strings and Timestamps), (:issue:`7429`)
@@ -246,49 +241,31 @@ Bug Fixes
- Bug in ``StataReader.data`` where reading a 0-observation dta failed (:issue:`7369`)
- Bug in when reading Stata 13 (117) files containing fixed width strings (:issue:`7360`)
- Bug in when writing Stata files where the encoding was ignored (:issue:`7286`)
-
-
- Bug in ``DatetimeIndex`` comparison doesn't handle ``NaT`` properly (:issue:`7529`)
-
-
- Bug in passing input with ``tzinfo`` to some offsets ``apply``, ``rollforward`` or ``rollback`` resets ``tzinfo`` or raises ``ValueError`` (:issue:`7465`)
- Bug in ``DatetimeIndex.to_period``, ``PeriodIndex.asobject``, ``PeriodIndex.to_timestamp`` doesn't preserve ``name`` (:issue:`7485`)
- Bug in ``DatetimeIndex.to_period`` and ``PeriodIndex.to_timestanp`` handle ``NaT`` incorrectly (:issue:`7228`)
-
- Bug in ``offsets.apply``, ``rollforward`` and ``rollback`` may return normal ``datetime`` (:issue:`7502`)
-
-
- Bug in ``resample`` raises ``ValueError`` when target contains ``NaT`` (:issue:`7227`)
-
- Bug in ``Timestamp.tz_localize`` resets ``nanosecond`` info (:issue:`7534`)
- Bug in ``DatetimeIndex.asobject`` raises ``ValueError`` when it contains ``NaT`` (:issue:`7539`)
- Bug in ``Timestamp.__new__`` doesn't preserve nanosecond properly (:issue:`7610`)
-
- Bug in ``Index.astype(float)`` where it would return an ``object`` dtype
``Index`` (:issue:`7464`).
- Bug in ``DataFrame.reset_index`` loses ``tz`` (:issue:`3950`)
- Bug in ``DatetimeIndex.freqstr`` raises ``AttributeError`` when ``freq`` is ``None`` (:issue:`7606`)
- Bug in ``GroupBy.size`` created by ``TimeGrouper`` raises ``AttributeError`` (:issue:`7453`)
-
- Bug in single column bar plot is misaligned (:issue:`7498`).
-
-
-
- Bug in area plot with tz-aware time series raises ``ValueError`` (:issue:`7471`)
-
- Bug in non-monotonic ``Index.union`` may preserve ``name`` incorrectly (:issue:`7458`)
- Bug in ``DatetimeIndex.intersection`` doesn't preserve timezone (:issue:`4690`)
-
- Bug in ``rolling_var`` where a window larger than the array would raise an error(:issue:`7297`)
-
- Bug with last plotted timeseries dictating ``xlim`` (:issue:`2960`)
- Bug with ``secondary_y`` axis not being considered for timeseries ``xlim`` (:issue:`3490`)
-
- Bug in ``Float64Index`` assignment with a non scalar indexer (:issue:`7586`)
- Bug in ``pandas.core.strings.str_contains`` does not properly match in a case insensitive fashion when ``regex=False`` and ``case=False`` (:issue:`7505`)
-
- Bug in ``expanding_cov``, ``expanding_corr``, ``rolling_cov``, and ``rolling_corr`` for two arguments with mismatched index (:issue:`7512`)
-
- Bug in ``to_sql`` taking the boolean column as text column (:issue:`7678`)
- Bug in grouped `hist` doesn't handle `rot` kw and `sharex` kw properly (:issue:`7234`)
+- Bug in ``.loc`` performing fallback integer indexing with ``object`` dtype indices (:issue:`7496`)
- Bug (regression) in ``PeriodIndex`` constructor when passed ``Series`` objects (:issue:`7701`).
| - removed the sections with no entries
- removed whitespace
- moved some entries from API-changes to Enhancements/Bug fixes when it was not really a backwards-incompatible change or if it was not clear what the relevant change was for users (and for the offsets normalize issue I rewrote it)
Highlights still need to be written for inclusion here and in release.rst
| https://api.github.com/repos/pandas-dev/pandas/pulls/7714 | 2014-07-10T07:57:40Z | 2014-07-10T23:08:37Z | 2014-07-10T23:08:37Z | 2014-07-11T20:38:28Z |
DOC: fix doc build warnings | diff --git a/doc/source/io.rst b/doc/source/io.rst
index 109b7a0a38fc5..cfa97ca0f3fef 100644
--- a/doc/source/io.rst
+++ b/doc/source/io.rst
@@ -1996,7 +1996,7 @@ Excel writer engines
By default, ``pandas`` uses the `XlsxWriter`_ for ``.xlsx`` and `openpyxl`_
for ``.xlsm`` files and `xlwt`_ for ``.xls`` files. If you have multiple
engines installed, you can set the default engine through :ref:`setting the
-config options <basics.working_with_options>` ``io.excel.xlsx.writer`` and
+config options <options>` ``io.excel.xlsx.writer`` and
``io.excel.xls.writer``. pandas will fall back on `openpyxl`_ for ``.xlsx``
files if `Xlsxwriter`_ is not available.
diff --git a/pandas/core/config.py b/pandas/core/config.py
index a16b32d5dd185..3e8d76500d128 100644
--- a/pandas/core/config.py
+++ b/pandas/core/config.py
@@ -640,7 +640,7 @@ def _build_option_description(k):
_get_option(k, True))
if d:
- s += u('\n\t(Deprecated')
+ s += u('\n (Deprecated')
s += (u(', use `%s` instead.') % d.rkey if d.rkey else '')
s += u(')')
diff --git a/pandas/core/frame.py b/pandas/core/frame.py
index 413f3daa52a52..b97cb11906e2f 100644
--- a/pandas/core/frame.py
+++ b/pandas/core/frame.py
@@ -1898,7 +1898,7 @@ def select_dtypes(self, include=None, exclude=None):
* To select strings you must use the ``object`` dtype, but note that
this will return *all* object dtype columns
* See the `numpy dtype hierarchy
- <http://docs.scipy.org/doc/numpy/reference/arrays.scalars.html>`__
+ <http://docs.scipy.org/doc/numpy/reference/arrays.scalars.html>`__
Examples
--------
| This should fix all doc build errors/warnings (apart from the known ones).
| https://api.github.com/repos/pandas-dev/pandas/pulls/7713 | 2014-07-09T21:01:45Z | 2014-07-10T07:17:53Z | 2014-07-10T07:17:53Z | 2014-07-10T07:17:55Z |
BUG: PeriodIndex constructor doesn't work with Series objects | diff --git a/doc/source/v0.14.1.txt b/doc/source/v0.14.1.txt
index 8fde5df6fd75a..0e6c98a1a8d23 100644
--- a/doc/source/v0.14.1.txt
+++ b/doc/source/v0.14.1.txt
@@ -291,4 +291,4 @@ Bug Fixes
- Bug in ``to_sql`` taking the boolean column as text column (:issue:`7678`)
- Bug in grouped `hist` doesn't handle `rot` kw and `sharex` kw properly (:issue:`7234`)
-
+- Bug (regression) in ``PeriodIndex`` constructor when passed ``Series`` objects (:issue:`7701`).
diff --git a/pandas/tseries/period.py b/pandas/tseries/period.py
index cceac61f392a8..ed56bdc827ede 100644
--- a/pandas/tseries/period.py
+++ b/pandas/tseries/period.py
@@ -14,7 +14,7 @@
import pandas.core.common as com
from pandas.core.common import (isnull, _INT64_DTYPE, _maybe_box,
- _values_from_object)
+ _values_from_object, ABCSeries)
from pandas import compat
from pandas.lib import Timestamp
import pandas.lib as lib
@@ -1261,13 +1261,13 @@ def _range_from_fields(year=None, month=None, quarter=None, day=None,
def _make_field_arrays(*fields):
length = None
for x in fields:
- if isinstance(x, (list, np.ndarray)):
+ if isinstance(x, (list, np.ndarray, ABCSeries)):
if length is not None and len(x) != length:
raise ValueError('Mismatched Period array lengths')
elif length is None:
length = len(x)
- arrays = [np.asarray(x) if isinstance(x, (np.ndarray, list))
+ arrays = [np.asarray(x) if isinstance(x, (np.ndarray, list, ABCSeries))
else np.repeat(x, length) for x in fields]
return arrays
diff --git a/pandas/tseries/tests/test_period.py b/pandas/tseries/tests/test_period.py
index 42edb799b4c89..53375b4d07796 100644
--- a/pandas/tseries/tests/test_period.py
+++ b/pandas/tseries/tests/test_period.py
@@ -1281,6 +1281,15 @@ def test_constructor_nat(self):
self.assertRaises(
ValueError, period_range, start='2011-01-01', end='NaT', freq='M')
+ def test_constructor_year_and_quarter(self):
+ year = pd.Series([2001, 2002, 2003])
+ quarter = year - 2000
+ idx = PeriodIndex(year=year, quarter=quarter)
+ strs = ['%dQ%d' % t for t in zip(quarter, year)]
+ lops = list(map(Period, strs))
+ p = PeriodIndex(lops)
+ tm.assert_index_equal(p, idx)
+
def test_is_(self):
create_index = lambda: PeriodIndex(freq='A', start='1/1/2001',
end='12/1/2009')
| closes #7701
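The fix can be exercised directly: passing ``Series`` for the field arguments now yields the same periods as building them from strings. A minimal sketch (the ``FutureWarning`` filter is an assumption for newer pandas, which prefers ``PeriodIndex.from_fields`` for field-based construction):

```python
import warnings

import pandas as pd

# Build a PeriodIndex from year/quarter fields passed as Series --
# the case that regressed in GH7701.
year = pd.Series([2001, 2002, 2003])
quarter = pd.Series([1, 2, 3])
with warnings.catch_warnings():
    warnings.simplefilter("ignore", FutureWarning)
    idx = pd.PeriodIndex(year=year, quarter=quarter)

# The same periods built from strings should compare equal
expected = pd.PeriodIndex([pd.Period(s) for s in ["2001Q1", "2002Q2", "2003Q3"]])
assert idx.equals(expected)
```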
| https://api.github.com/repos/pandas-dev/pandas/pulls/7712 | 2014-07-09T20:48:17Z | 2014-07-10T15:15:02Z | 2014-07-10T15:15:02Z | 2014-07-10T15:44:34Z |
BUG: DatetimeIndex.__iter__ creates a temp array of Timestamp (GH7683) | diff --git a/doc/source/v0.15.0.txt b/doc/source/v0.15.0.txt
index d776848de40d0..e8203a6b74933 100644
--- a/doc/source/v0.15.0.txt
+++ b/doc/source/v0.15.0.txt
@@ -152,6 +152,7 @@ There are no experimental changes in 0.15.0
Bug Fixes
~~~~~~~~~
+- Bug in ``DatetimeIndex``, ``__iter__`` creates a temp array of ``Timestamp`` (:issue:`7683`)
diff --git a/pandas/core/base.py b/pandas/core/base.py
index ce078eb91735d..33beaa58affda 100644
--- a/pandas/core/base.py
+++ b/pandas/core/base.py
@@ -387,6 +387,9 @@ def _ops_compat(self, name, op_accessor):
is_year_start = _field_accessor('is_year_start', "Logical indicating if first day of year (defined by frequency)")
is_year_end = _field_accessor('is_year_end', "Logical indicating if last day of year (defined by frequency)")
+ def __iter__(self):
+ return (self._box_func(v) for v in self.asi8)
+
@property
def _box_func(self):
"""
diff --git a/pandas/tseries/index.py b/pandas/tseries/index.py
index de758c4c8a579..704a6a7d1ddec 100644
--- a/pandas/tseries/index.py
+++ b/pandas/tseries/index.py
@@ -1476,9 +1476,6 @@ def normalize(self):
return DatetimeIndex(new_values, freq='infer', name=self.name,
tz=self.tz)
- def __iter__(self):
- return iter(self.asobject)
-
def searchsorted(self, key, side='left'):
if isinstance(key, np.ndarray):
key = np.array(key, dtype=_NS_DTYPE, copy=False)
diff --git a/pandas/tseries/period.py b/pandas/tseries/period.py
index cceac61f392a8..a94a45a5cc501 100644
--- a/pandas/tseries/period.py
+++ b/pandas/tseries/period.py
@@ -738,10 +738,6 @@ def astype(self, dtype):
return Index(self.values, dtype)
raise ValueError('Cannot cast PeriodIndex to dtype %s' % dtype)
- def __iter__(self):
- for val in self.values:
- yield Period(ordinal=val, freq=self.freq)
-
def searchsorted(self, key, side='left'):
if isinstance(key, compat.string_types):
key = Period(key, freq=self.freq).ordinal
diff --git a/vb_suite/timeseries.py b/vb_suite/timeseries.py
index 2b63eeaf99550..bb55b88cf1f34 100644
--- a/vb_suite/timeseries.py
+++ b/vb_suite/timeseries.py
@@ -333,3 +333,28 @@ def date_range(start=None, end=None, periods=None, freq=None):
timeseries_is_month_start = Benchmark('rng.is_month_start', setup,
start_date=datetime(2014, 4, 1))
+
+#----------------------------------------------------------------------
+# iterate over DatetimeIndex/PeriodIndex
+setup = common_setup + """
+N = 1000000
+M = 10000
+idx1 = date_range(start='20140101', freq='T', periods=N)
+idx2 = period_range(start='20140101', freq='T', periods=N)
+
+def iter_n(iterable, n=None):
+ i = 0
+ for _ in iterable:
+ i += 1
+ if n is not None and i > n:
+ break
+"""
+
+timeseries_iter_datetimeindex = Benchmark('iter_n(idx1)', setup)
+
+timeseries_iter_periodindex = Benchmark('iter_n(idx2)', setup)
+
+timeseries_iter_datetimeindex_preexit = Benchmark('iter_n(idx1, M)', setup)
+
+timeseries_iter_periodindex_preexit = Benchmark('iter_n(idx2, M)', setup)
+
| closes #7683
Move `__iter__` from `DatetimeIndex`/`PeriodIndex` to `DatetimeIndexOpsMixin`
Adding perf tests for change
```
-------------------------------------------------------------------------------
Test name | head[ms] | base[ms] | ratio |
-------------------------------------------------------------------------------
timeseries_iter_datetimeindex_preexit | 35.6683 | 2546.9604 | 0.0140 |
timeseries_iter_periodindex | 5051.6427 | 4353.0200 | 1.1605 |
timeseries_iter_periodindex_preexit | 50.9930 | 43.8580 | 1.1627 |
timeseries_iter_datetimeindex | 3463.0601 | 2596.2270 | 1.3339 |
```
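The behavioral contract the `preexit` benchmarks rely on — iteration yields boxed `Timestamp`/`Period` values one at a time, rather than materializing the whole index up front — can be sketched like this:

```python
import pandas as pd

idx = pd.date_range(start="2014-01-01", freq="min", periods=1_000_000)

# __iter__ returns a lazy generator of boxed values, so pulling the
# first few elements is cheap even for a large index.
it = iter(idx)
first = next(it)
assert isinstance(first, pd.Timestamp)
assert first == pd.Timestamp("2014-01-01 00:00:00")
```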
| https://api.github.com/repos/pandas-dev/pandas/pulls/7709 | 2014-07-09T17:00:31Z | 2014-07-10T15:32:51Z | null | 2014-07-10T15:32:51Z |
API: Add PeriodIndex.resolution | diff --git a/doc/source/v0.15.0.txt b/doc/source/v0.15.0.txt
index eb58f46f0f3fe..e11a3730cd95a 100644
--- a/doc/source/v0.15.0.txt
+++ b/doc/source/v0.15.0.txt
@@ -117,6 +117,8 @@ Enhancements
- Added support for bool, uint8, uint16 and uint32 datatypes in ``to_stata`` (:issue:`7097`, :issue:`7365`)
+- ``PeriodIndex`` supports ``resolution`` as the same as ``DatetimeIndex`` (:issue:`7708`)
+
diff --git a/pandas/core/base.py b/pandas/core/base.py
index 72fcfbff677ab..4035627b98458 100644
--- a/pandas/core/base.py
+++ b/pandas/core/base.py
@@ -498,3 +498,17 @@ def __unicode__(self):
summary += self._format_footer()
return summary
+ @cache_readonly
+ def _resolution(self):
+ from pandas.tseries.frequencies import Resolution
+ return Resolution.get_reso_from_freq(self.freqstr)
+
+ @cache_readonly
+ def resolution(self):
+ """
+ Returns day, hour, minute, second, millisecond or microsecond
+ """
+ from pandas.tseries.frequencies import get_reso_string
+ return get_reso_string(self._resolution)
+
+
diff --git a/pandas/tests/test_base.py b/pandas/tests/test_base.py
index 832671521c815..761d79a288df3 100644
--- a/pandas/tests/test_base.py
+++ b/pandas/tests/test_base.py
@@ -605,6 +605,14 @@ def test_representation(self):
result = getattr(idx, func)()
self.assertEqual(result, expected)
+ def test_resolution(self):
+ for freq, expected in zip(['A', 'Q', 'M', 'D', 'H', 'T', 'S', 'L', 'U'],
+ ['day', 'day', 'day', 'day',
+ 'hour', 'minute', 'second', 'millisecond', 'microsecond']):
+ for tz in [None, 'Asia/Tokyo', 'US/Eastern']:
+ idx = pd.date_range(start='2013-04-01', periods=30, freq=freq, tz=tz)
+ self.assertEqual(idx.resolution, expected)
+
class TestPeriodIndexOps(Ops):
_allowed = '_allow_period_index_ops'
@@ -729,6 +737,14 @@ def test_representation(self):
result = getattr(idx, func)()
self.assertEqual(result, expected)
+ def test_resolution(self):
+ for freq, expected in zip(['A', 'Q', 'M', 'D', 'H', 'T', 'S', 'L', 'U'],
+ ['day', 'day', 'day', 'day',
+ 'hour', 'minute', 'second', 'millisecond', 'microsecond']):
+
+ idx = pd.period_range(start='2013-04-01', periods=30, freq=freq)
+ self.assertEqual(idx.resolution, expected)
+
if __name__ == '__main__':
import nose
diff --git a/pandas/tseries/frequencies.py b/pandas/tseries/frequencies.py
index fe61e5f0acd9b..4beccaa758006 100644
--- a/pandas/tseries/frequencies.py
+++ b/pandas/tseries/frequencies.py
@@ -45,7 +45,9 @@ class Resolution(object):
RESO_HR: 'hour',
RESO_DAY: 'day'}
- _reso_period_map = {
+ _str_reso_map = dict([(v, k) for k, v in compat.iteritems(_reso_str_map)])
+
+ _reso_freq_map = {
'year': 'A',
'quarter': 'Q',
'month': 'M',
@@ -57,13 +59,28 @@ class Resolution(object):
'microsecond': 'U',
'nanosecond': 'N'}
+ _freq_reso_map = dict([(v, k) for k, v in compat.iteritems(_reso_freq_map)])
+
@classmethod
def get_str(cls, reso):
return cls._reso_str_map.get(reso, 'day')
+ @classmethod
+ def get_reso(cls, resostr):
+ return cls._str_reso_map.get(resostr, cls.RESO_DAY)
+
@classmethod
def get_freq(cls, resostr):
- return cls._reso_period_map[resostr]
+ return cls._reso_freq_map[resostr]
+
+ @classmethod
+ def get_str_from_freq(cls, freq):
+ return cls._freq_reso_map.get(freq, 'day')
+
+ @classmethod
+ def get_reso_from_freq(cls, freq):
+ return cls.get_reso(cls.get_str_from_freq(freq))
+
def get_reso_string(reso):
return Resolution.get_str(reso)
@@ -593,7 +610,7 @@ def _period_alias_dictionary():
def _infer_period_group(freqstr):
- return _period_group(Resolution._reso_period_map[freqstr])
+ return _period_group(Resolution._reso_freq_map[freqstr])
def _period_group(freqstr):
diff --git a/pandas/tseries/index.py b/pandas/tseries/index.py
index dca2947f6a7a6..9423037844e74 100644
--- a/pandas/tseries/index.py
+++ b/pandas/tseries/index.py
@@ -1536,14 +1536,6 @@ def is_normalized(self):
"""
return tslib.dates_normalized(self.asi8, self.tz)
- @cache_readonly
- def resolution(self):
- """
- Returns day, hour, minute, second, or microsecond
- """
- reso = self._resolution
- return get_reso_string(reso)
-
@cache_readonly
def _resolution(self):
return tslib.resolution(self.asi8, self.tz)
| Add `resolution` to `PeriodIndex`, matching what `DatetimeIndex` already has.
NOTE: Going to use this to calculate common freq in #7670.
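As a quick sanity check of the intended behavior (a sketch mirroring the PR's tests, not the full suite):

```python
import pandas as pd

# DatetimeIndex already exposed `resolution`; this change gives
# PeriodIndex the same attribute, derived from its freq.
dti = pd.date_range("2013-04-01", periods=3, freq="h")
pi = pd.period_range("2013-04-01", periods=3, freq="D")

assert dti.resolution == "hour"
assert pi.resolution == "day"
```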
| https://api.github.com/repos/pandas-dev/pandas/pulls/7708 | 2014-07-09T15:14:46Z | 2014-07-21T11:49:58Z | 2014-07-21T11:49:58Z | 2014-07-23T11:09:22Z |
BUG: offset normalize option may not work in addition/subtraction | diff --git a/doc/source/timeseries.rst b/doc/source/timeseries.rst
index e9406b7f49245..76bc796beced8 100644
--- a/doc/source/timeseries.rst
+++ b/doc/source/timeseries.rst
@@ -493,7 +493,7 @@ The basic ``DateOffset`` takes the same arguments as
.. ipython:: python
- d = datetime(2008, 8, 18)
+ d = datetime(2008, 8, 18, 9, 0)
d + relativedelta(months=4, days=5)
We could have done the same thing with ``DateOffset``:
@@ -568,10 +568,21 @@ particular day of the week:
.. ipython:: python
+ d
d + Week()
d + Week(weekday=4)
(d + Week(weekday=4)).weekday()
+ d - Week()
+
+``normalize`` option will be effective for addition and subtraction.
+
+.. ipython:: python
+
+ d + Week(normalize=True)
+ d - Week(normalize=True)
+
+
Another example is parameterizing ``YearEnd`` with the specific ending month:
.. ipython:: python
diff --git a/pandas/tseries/offsets.py b/pandas/tseries/offsets.py
index d1fe287bf33be..57181b43df9f6 100644
--- a/pandas/tseries/offsets.py
+++ b/pandas/tseries/offsets.py
@@ -157,7 +157,7 @@ def isAnchored(self):
return (self.n == 1)
def copy(self):
- return self.__class__(self.n, **self.kwds)
+ return self.__class__(self.n, normalize=self.normalize, **self.kwds)
def _should_cache(self):
return self.isAnchored() and self._cacheable
@@ -251,34 +251,34 @@ def __sub__(self, other):
if isinstance(other, datetime):
raise TypeError('Cannot subtract datetime from offset.')
elif type(other) == type(self):
- return self.__class__(self.n - other.n, **self.kwds)
+ return self.__class__(self.n - other.n, normalize=self.normalize, **self.kwds)
else: # pragma: no cover
return NotImplemented
def __rsub__(self, other):
- return self.__class__(-self.n, **self.kwds) + other
+ return self.__class__(-self.n, normalize=self.normalize, **self.kwds) + other
def __mul__(self, someInt):
- return self.__class__(n=someInt * self.n, **self.kwds)
+ return self.__class__(n=someInt * self.n, normalize=self.normalize, **self.kwds)
def __rmul__(self, someInt):
return self.__mul__(someInt)
def __neg__(self):
- return self.__class__(-self.n, **self.kwds)
+ return self.__class__(-self.n, normalize=self.normalize, **self.kwds)
@apply_wraps
def rollback(self, dt):
"""Roll provided date backward to next offset only if not on offset"""
if not self.onOffset(dt):
- dt = dt - self.__class__(1, **self.kwds)
+ dt = dt - self.__class__(1, normalize=self.normalize, **self.kwds)
return dt
@apply_wraps
def rollforward(self, dt):
"""Roll provided date forward to next offset only if not on offset"""
if not self.onOffset(dt):
- dt = dt + self.__class__(1, **self.kwds)
+ dt = dt + self.__class__(1, normalize=self.normalize, **self.kwds)
return dt
def onOffset(self, dt):
diff --git a/pandas/tseries/tests/test_offsets.py b/pandas/tseries/tests/test_offsets.py
index 1ef1bd184bdbc..9febec68bd458 100644
--- a/pandas/tseries/tests/test_offsets.py
+++ b/pandas/tseries/tests/test_offsets.py
@@ -361,6 +361,42 @@ def test_onOffset(self):
date = datetime(dt.year, dt.month, dt.day)
self.assert_(offset_n.onOffset(date))
+ def test_add(self):
+ dt = datetime(2011, 1, 1, 9, 0)
+
+ for offset in self.offset_types:
+ offset_s = self._get_offset(offset)
+ expected = self.expecteds[offset.__name__]
+
+ result_dt = dt + offset_s
+ result_ts = Timestamp(dt) + offset_s
+ for result in [result_dt, result_ts]:
+ self.assertTrue(isinstance(result, Timestamp))
+ self.assertEqual(result, expected)
+
+ tm._skip_if_no_pytz()
+ for tz in self.timezones:
+ expected_localize = expected.tz_localize(tz)
+ result = Timestamp(dt, tz=tz) + offset_s
+ self.assert_(isinstance(result, Timestamp))
+ self.assertEqual(result, expected_localize)
+
+ # normalize=True
+ offset_s = self._get_offset(offset, normalize=True)
+ expected = Timestamp(expected.date())
+
+ result_dt = dt + offset_s
+ result_ts = Timestamp(dt) + offset_s
+ for result in [result_dt, result_ts]:
+ self.assertTrue(isinstance(result, Timestamp))
+ self.assertEqual(result, expected)
+
+ for tz in self.timezones:
+ expected_localize = expected.tz_localize(tz)
+ result = Timestamp(dt, tz=tz) + offset_s
+ self.assert_(isinstance(result, Timestamp))
+ self.assertEqual(result, expected_localize)
+
class TestDateOffset(Base):
_multiprocess_can_split_ = True
diff --git a/pandas/tslib.pyx b/pandas/tslib.pyx
index 70b6b308b6b37..2fd71521b24d5 100644
--- a/pandas/tslib.pyx
+++ b/pandas/tslib.pyx
@@ -753,7 +753,10 @@ cdef class _Timestamp(datetime):
elif isinstance(other, timedelta) or hasattr(other, 'delta'):
nanos = _delta_to_nanoseconds(other)
- return Timestamp(self.value + nanos, tz=self.tzinfo, offset=self.offset)
+ result = Timestamp(self.value + nanos, tz=self.tzinfo, offset=self.offset)
+ if getattr(other, 'normalize', False):
+ result = Timestamp(normalize_date(result))
+ return result
result = datetime.__add__(self, other)
if isinstance(result, datetime):
| Closes problem found in #7375.
@jreback Is this for 0.15 (needs release note)?
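The fixed behavior, sketched on the doc example's timestamp (2008-08-18 is a Monday, so a bare ``Week()`` simply steps 7 days):

```python
import pandas as pd

ts = pd.Timestamp("2008-08-18 09:00")  # a Monday, 9am

# Without normalize, the time-of-day is preserved
assert ts + pd.offsets.Week() == pd.Timestamp("2008-08-25 09:00")

# With normalize=True the result snaps to midnight, for
# subtraction as well as addition (the case being fixed here)
assert ts + pd.offsets.Week(normalize=True) == pd.Timestamp("2008-08-25 00:00")
assert ts - pd.offsets.Week(normalize=True) == pd.Timestamp("2008-08-11 00:00")
```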
| https://api.github.com/repos/pandas-dev/pandas/pulls/7705 | 2014-07-09T14:10:24Z | 2014-07-09T15:48:51Z | 2014-07-09T15:48:51Z | 2014-07-10T11:01:35Z |
BUG: DatetimeIndex.__iter__ creates a temp array of Timestamp (GH7683) | diff --git a/doc/source/v0.14.1.txt b/doc/source/v0.14.1.txt
index 8fde5df6fd75a..886642f25687d 100644
--- a/doc/source/v0.14.1.txt
+++ b/doc/source/v0.14.1.txt
@@ -292,3 +292,4 @@ Bug Fixes
- Bug in ``to_sql`` taking the boolean column as text column (:issue:`7678`)
- Bug in grouped `hist` doesn't handle `rot` kw and `sharex` kw properly (:issue:`7234`)
+- Bug in ``DatetimeIndex``, ``__iter__`` creates a temp array of ``Timestamp`` (:issue:`7683`)
diff --git a/pandas/tseries/index.py b/pandas/tseries/index.py
index de758c4c8a579..38e66233a948f 100644
--- a/pandas/tseries/index.py
+++ b/pandas/tseries/index.py
@@ -1477,7 +1477,7 @@ def normalize(self):
tz=self.tz)
def __iter__(self):
- return iter(self.asobject)
+ return (self._box_func(v) for v in self.asi8)
def searchsorted(self, key, side='left'):
if isinstance(key, np.ndarray):
| closes #7683
| https://api.github.com/repos/pandas-dev/pandas/pulls/7702 | 2014-07-09T08:49:59Z | 2014-07-09T17:00:54Z | null | 2014-07-09T17:09:30Z |
DOC: table keyword missing in the docstring for Series.plot() and DataFr... | diff --git a/pandas/tools/plotting.py b/pandas/tools/plotting.py
index c3189ae98f662..83fbc51787b20 100644
--- a/pandas/tools/plotting.py
+++ b/pandas/tools/plotting.py
@@ -2067,6 +2067,10 @@ def plot_frame(frame=None, x=None, y=None, subplots=False, sharex=True,
position : float
Specify relative alignments for bar plot layout.
From 0 (left/bottom-end) to 1 (right/top-end). Default is 0.5 (center)
+ table : boolean, Series or DataFrame, default False
+ If True, draw a table using the data in the DataFrame and the data will
+ be transposed to meet matplotlib's default layout.
+ If a Series or DataFrame is passed, use passed data to draw a table.
kwds : keywords
Options to pass to matplotlib plotting method
@@ -2210,6 +2214,10 @@ def plot_series(series, label=None, kind='line', use_index=True, rot=None,
position : float
Specify relative alignments for bar plot layout.
From 0 (left/bottom-end) to 1 (right/top-end). Default is 0.5 (center)
+ table : boolean, Series or DataFrame, default False
+ If True, draw a table using the data in the Series and the data will
+ be transposed to meet matplotlib's default layout.
+ If a Series or DataFrame is passed, use passed data to draw a table.
kwds : keywords
Options to pass to matplotlib plotting method
@@ -2795,7 +2803,7 @@ def table(ax, data, rowLabels=None, colLabels=None,
elif isinstance(data, DataFrame):
pass
else:
- raise ValueError('Input data must be dataframe or series')
+ raise ValueError('Input data must be DataFrame or Series')
if rowLabels is None:
rowLabels = data.index
| ...ame.plot()
| https://api.github.com/repos/pandas-dev/pandas/pulls/7698 | 2014-07-08T21:01:42Z | 2014-07-09T06:37:28Z | 2014-07-09T06:37:28Z | 2015-04-25T23:33:31Z |
BUG/PERF: offsets.apply doesnt preserve nanosecond | diff --git a/doc/source/v0.15.0.txt b/doc/source/v0.15.0.txt
index c2d234b5a06c1..7ef8e1fac08d1 100644
--- a/doc/source/v0.15.0.txt
+++ b/doc/source/v0.15.0.txt
@@ -256,6 +256,9 @@ Bug Fixes
- Bug in repeated timeseries line and area plot may result in ``ValueError`` or incorrect kind (:issue:`7733`)
+- Bug in ``offsets.apply``, ``rollforward`` and ``rollback`` may reset nanosecond (:issue:`7697`)
+- Bug in ``offsets.apply``, ``rollforward`` and ``rollback`` may raise ``AttributeError`` if ``Timestamp`` has ``dateutil`` tzinfo (:issue:`7697`)
+
- Bug in ``is_superperiod`` and ``is_subperiod`` cannot handle higher frequencies than ``S`` (:issue:`7760`, :issue:`7772`, :issue:`7803`)
diff --git a/pandas/tseries/offsets.py b/pandas/tseries/offsets.py
index 8f77f88910a3c..d2c9acedcee94 100644
--- a/pandas/tseries/offsets.py
+++ b/pandas/tseries/offsets.py
@@ -22,13 +22,14 @@
'QuarterBegin', 'BQuarterBegin', 'QuarterEnd', 'BQuarterEnd',
'LastWeekOfMonth', 'FY5253Quarter', 'FY5253',
'Week', 'WeekOfMonth', 'Easter',
- 'Hour', 'Minute', 'Second', 'Milli', 'Micro', 'Nano']
+ 'Hour', 'Minute', 'Second', 'Milli', 'Micro', 'Nano',
+ 'DateOffset']
# convert to/from datetime/timestamp to allow invalid Timestamp ranges to pass thru
def as_timestamp(obj):
+ if isinstance(obj, Timestamp):
+ return obj
try:
- if isinstance(obj, Timestamp):
- return obj
return Timestamp(obj)
except (OutOfBoundsDatetime):
pass
@@ -45,22 +46,46 @@ def apply_wraps(func):
def wrapper(self, other):
if other is tslib.NaT:
return tslib.NaT
- if type(other) == date:
- other = datetime(other.year, other.month, other.day)
- if isinstance(other, (np.datetime64, datetime)):
+ elif isinstance(other, (timedelta, Tick, DateOffset)):
+ # timedelta path
+ return func(self, other)
+ elif isinstance(other, (np.datetime64, datetime, date)):
other = as_timestamp(other)
tz = getattr(other, 'tzinfo', None)
- result = func(self, other)
+ nano = getattr(other, 'nanosecond', 0)
- if self.normalize:
- result = tslib.normalize_date(result)
+ try:
+ result = func(self, other)
+
+ if self.normalize:
+ # normalize_date returns normal datetime
+ result = tslib.normalize_date(result)
+ result = Timestamp(result)
- if isinstance(other, Timestamp) and not isinstance(result, Timestamp):
- result = as_timestamp(result)
+ # nanosecond may be deleted depending on offset process
+ if not self.normalize and nano != 0:
+ if not isinstance(self, Nano) and result.nanosecond != nano:
+ if result.tz is not None:
+ # convert to UTC
+ value = tslib.tz_convert_single(result.value, 'UTC', result.tz)
+ else:
+ value = result.value
+ result = Timestamp(value + nano)
+
+ if tz is not None and result.tzinfo is None:
+ result = tslib._localize_pydatetime(result, tz)
+
+ except OutOfBoundsDatetime:
+ result = func(self, as_datetime(other))
+
+ if self.normalize:
+ # normalize_date returns normal datetime
+ result = tslib.normalize_date(result)
+
+ if tz is not None and result.tzinfo is None:
+ result = tslib._localize_pydatetime(result, tz)
- if tz is not None and result.tzinfo is None:
- result = result.tz_localize(tz)
return result
return wrapper
@@ -144,7 +169,6 @@ def __init__(self, n=1, normalize=False, **kwds):
@apply_wraps
def apply(self, other):
- other = as_datetime(other)
if len(self.kwds) > 0:
if self.n > 0:
for i in range(self.n):
@@ -152,9 +176,9 @@ def apply(self, other):
else:
for i in range(-self.n):
other = other - self._offset
- return as_timestamp(other)
+ return other
else:
- return as_timestamp(other + timedelta(self.n))
+ return other + timedelta(self.n)
def isAnchored(self):
return (self.n == 1)
@@ -270,16 +294,16 @@ def __rmul__(self, someInt):
def __neg__(self):
return self.__class__(-self.n, normalize=self.normalize, **self.kwds)
- @apply_wraps
def rollback(self, dt):
"""Roll provided date backward to next offset only if not on offset"""
+ dt = as_timestamp(dt)
if not self.onOffset(dt):
dt = dt - self.__class__(1, normalize=self.normalize, **self.kwds)
return dt
- @apply_wraps
def rollforward(self, dt):
"""Roll provided date forward to next offset only if not on offset"""
+ dt = as_timestamp(dt)
if not self.onOffset(dt):
dt = dt + self.__class__(1, normalize=self.normalize, **self.kwds)
return dt
@@ -452,8 +476,7 @@ def apply(self, other):
if self.offset:
result = result + self.offset
-
- return as_timestamp(result)
+ return result
elif isinstance(other, (timedelta, Tick)):
return BDay(self.n, offset=self.offset + other,
@@ -550,7 +573,6 @@ def apply(self, other):
else:
roll = 'backward'
- # Distinguish input cases to enhance performance
if isinstance(other, datetime):
date_in = other
np_dt = np.datetime64(date_in.date())
@@ -563,8 +585,7 @@ def apply(self, other):
if self.offset:
result = result + self.offset
-
- return as_timestamp(result)
+ return result
elif isinstance(other, (timedelta, Tick)):
return BDay(self.n, offset=self.offset + other,
@@ -613,11 +634,11 @@ def apply(self, other):
n = self.n
_, days_in_month = tslib.monthrange(other.year, other.month)
if other.day != days_in_month:
- other = as_datetime(other) + relativedelta(months=-1, day=31)
+ other = other + relativedelta(months=-1, day=31)
if n <= 0:
n = n + 1
- other = as_datetime(other) + relativedelta(months=n, day=31)
- return as_timestamp(other)
+ other = other + relativedelta(months=n, day=31)
+ return other
def onOffset(self, dt):
if self.normalize and not _is_normalized(dt):
@@ -638,8 +659,7 @@ def apply(self, other):
if other.day > 1 and n <= 0: # then roll forward if n<=0
n += 1
- other = as_datetime(other) + relativedelta(months=n, day=1)
- return as_timestamp(other)
+ return other + relativedelta(months=n, day=1)
def onOffset(self, dt):
if self.normalize and not _is_normalized(dt):
@@ -657,9 +677,7 @@ def isAnchored(self):
@apply_wraps
def apply(self, other):
-
n = self.n
-
wkday, days_in_month = tslib.monthrange(other.year, other.month)
lastBDay = days_in_month - max(((wkday + days_in_month - 1)
% 7) - 4, 0)
@@ -668,11 +686,11 @@ def apply(self, other):
n = n - 1
elif n <= 0 and other.day > lastBDay:
n = n + 1
- other = as_datetime(other) + relativedelta(months=n, day=31)
+ other = other + relativedelta(months=n, day=31)
if other.weekday() > 4:
other = other - BDay()
- return as_timestamp(other)
+ return other
_prefix = 'BM'
@@ -683,7 +701,6 @@ class BusinessMonthBegin(MonthOffset):
@apply_wraps
def apply(self, other):
n = self.n
-
wkday, _ = tslib.monthrange(other.year, other.month)
first = _get_firstbday(wkday)
@@ -691,15 +708,15 @@ def apply(self, other):
# as if rolled forward already
n += 1
elif other.day < first and n > 0:
- other = as_datetime(other) + timedelta(days=first - other.day)
+ other = other + timedelta(days=first - other.day)
n -= 1
- other = as_datetime(other) + relativedelta(months=n)
+ other = other + relativedelta(months=n)
wkday, _ = tslib.monthrange(other.year, other.month)
first = _get_firstbday(wkday)
result = datetime(other.year, other.month, first, other.hour, other.minute,
other.second, other.microsecond)
- return as_timestamp(result)
+ return result
def onOffset(self, dt):
if self.normalize and not _is_normalized(dt):
@@ -746,30 +763,29 @@ def __init__(self, n=1, normalize=False, **kwds):
self.kwds = kwds
self.offset = kwds.get('offset', timedelta(0))
self.weekmask = kwds.get('weekmask', 'Mon Tue Wed Thu Fri')
- self.cbday = CustomBusinessDay(n=self.n, normalize=normalize, **kwds)
- self.m_offset = MonthEnd(normalize=normalize)
+ self.cbday = CustomBusinessDay(n=self.n, **kwds)
+ self.m_offset = MonthEnd()
@apply_wraps
def apply(self,other):
n = self.n
- dt_in = other
# First move to month offset
- cur_mend = self.m_offset.rollforward(dt_in)
+ cur_mend = self.m_offset.rollforward(other)
# Find this custom month offset
cur_cmend = self.cbday.rollback(cur_mend)
-
+
# handle zero case. arbitrarily rollforward
- if n == 0 and dt_in != cur_cmend:
+ if n == 0 and other != cur_cmend:
n += 1
- if dt_in < cur_cmend and n >= 1:
+ if other < cur_cmend and n >= 1:
n -= 1
- elif dt_in > cur_cmend and n <= -1:
+ elif other > cur_cmend and n <= -1:
n += 1
new = cur_mend + n * MonthEnd()
result = self.cbday.rollback(new)
- return as_timestamp(result)
+ return result
class CustomBusinessMonthBegin(BusinessMixin, MonthOffset):
"""
@@ -824,7 +840,7 @@ def apply(self,other):
new = cur_mbegin + n * MonthBegin()
result = self.cbday.rollforward(new)
- return as_timestamp(result)
+ return result
class Week(DateOffset):
"""
@@ -856,23 +872,22 @@ def isAnchored(self):
def apply(self, other):
base = other
if self.weekday is None:
- return as_timestamp(as_datetime(other) + self.n * self._inc)
+ return other + self.n * self._inc
if self.n > 0:
k = self.n
otherDay = other.weekday()
if otherDay != self.weekday:
- other = as_datetime(other) + timedelta((self.weekday - otherDay) % 7)
+ other = other + timedelta((self.weekday - otherDay) % 7)
k = k - 1
- other = as_datetime(other)
+ other = other
for i in range(k):
other = other + self._inc
else:
k = self.n
otherDay = other.weekday()
if otherDay != self.weekday:
- other = as_datetime(other) + timedelta((self.weekday - otherDay) % 7)
- other = as_datetime(other)
+ other = other + timedelta((self.weekday - otherDay) % 7)
for i in range(-k):
other = other - self._inc
@@ -979,20 +994,14 @@ def apply(self, other):
else:
months = self.n + 1
- other = self.getOffsetOfMonth(as_datetime(other) + relativedelta(months=months, day=1))
+ other = self.getOffsetOfMonth(other + relativedelta(months=months, day=1))
other = datetime(other.year, other.month, other.day, base.hour,
base.minute, base.second, base.microsecond)
- if getattr(other, 'tzinfo', None) is not None:
- other = other.tzinfo.localize(other)
return other
def getOffsetOfMonth(self, dt):
w = Week(weekday=self.weekday)
-
- d = datetime(dt.year, dt.month, 1)
- if getattr(dt, 'tzinfo', None) is not None:
- d = dt.tzinfo.localize(d)
-
+ d = datetime(dt.year, dt.month, 1, tzinfo=dt.tzinfo)
d = w.rollforward(d)
for i in range(self.week):
@@ -1003,9 +1012,7 @@ def getOffsetOfMonth(self, dt):
def onOffset(self, dt):
if self.normalize and not _is_normalized(dt):
return False
- d = datetime(dt.year, dt.month, dt.day)
- if getattr(dt, 'tzinfo', None) is not None:
- d = dt.tzinfo.localize(d)
+ d = datetime(dt.year, dt.month, dt.day, tzinfo=dt.tzinfo)
return d == self.getOffsetOfMonth(dt)
@property
@@ -1072,18 +1079,14 @@ def apply(self, other):
else:
months = self.n + 1
- return self.getOffsetOfMonth(as_datetime(other) + relativedelta(months=months, day=1))
+ return self.getOffsetOfMonth(other + relativedelta(months=months, day=1))
def getOffsetOfMonth(self, dt):
m = MonthEnd()
- d = datetime(dt.year, dt.month, 1, dt.hour, dt.minute, dt.second, dt.microsecond)
- if getattr(dt, 'tzinfo', None) is not None:
- d = dt.tzinfo.localize(d)
-
+ d = datetime(dt.year, dt.month, 1, dt.hour, dt.minute,
+ dt.second, dt.microsecond, tzinfo=dt.tzinfo)
eom = m.rollforward(d)
-
w = Week(weekday=self.weekday)
-
return w.rollback(eom)
def onOffset(self, dt):
@@ -1175,13 +1178,11 @@ def apply(self, other):
elif n <= 0 and other.day > lastBDay and monthsToGo == 0:
n = n + 1
- other = as_datetime(other) + relativedelta(months=monthsToGo + 3 * n, day=31)
- if getattr(base, 'tzinfo', None) is not None:
- other = base.tzinfo.localize(other)
+ other = other + relativedelta(months=monthsToGo + 3 * n, day=31)
+ other = tslib._localize_pydatetime(other, base.tzinfo)
if other.weekday() > 4:
other = other - BDay()
-
- return as_timestamp(other)
+ return other
def onOffset(self, dt):
if self.normalize and not _is_normalized(dt):
@@ -1219,8 +1220,6 @@ class BQuarterBegin(QuarterOffset):
@apply_wraps
def apply(self, other):
n = self.n
- other = as_datetime(other)
-
wkday, _ = tslib.monthrange(other.year, other.month)
first = _get_firstbday(wkday)
@@ -1244,9 +1243,7 @@ def apply(self, other):
result = datetime(other.year, other.month, first,
other.hour, other.minute, other.second,
other.microsecond)
- if getattr(other, 'tzinfo', None) is not None:
- result = other.tzinfo.localize(result)
- return as_timestamp(result)
+ return result
class QuarterEnd(QuarterOffset):
@@ -1272,12 +1269,9 @@ def isAnchored(self):
@apply_wraps
def apply(self, other):
n = self.n
- base = other
other = datetime(other.year, other.month, other.day,
other.hour, other.minute, other.second,
other.microsecond)
- other = as_datetime(other)
-
wkday, days_in_month = tslib.monthrange(other.year, other.month)
monthsToGo = 3 - ((other.month - self.startingMonth) % 3)
@@ -1288,9 +1282,7 @@ def apply(self, other):
n = n - 1
other = other + relativedelta(months=monthsToGo + 3 * n, day=31)
- if getattr(base, 'tzinfo', None) is not None:
- other = base.tzinfo.localize(other)
- return as_timestamp(other)
+ return other
def onOffset(self, dt):
if self.normalize and not _is_normalized(dt):
@@ -1311,8 +1303,6 @@ def isAnchored(self):
@apply_wraps
def apply(self, other):
n = self.n
- other = as_datetime(other)
-
wkday, days_in_month = tslib.monthrange(other.year, other.month)
monthsSince = (other.month - self.startingMonth) % 3
@@ -1326,7 +1316,7 @@ def apply(self, other):
n = n + 1
other = other + relativedelta(months=3 * n - monthsSince, day=1)
- return as_timestamp(other)
+ return other
class YearOffset(DateOffset):
@@ -1361,8 +1351,6 @@ class BYearEnd(YearOffset):
@apply_wraps
def apply(self, other):
n = self.n
- other = as_datetime(other)
-
wkday, days_in_month = tslib.monthrange(other.year, self.month)
lastBDay = (days_in_month -
max(((wkday + days_in_month - 1) % 7) - 4, 0))
@@ -1387,7 +1375,7 @@ def apply(self, other):
if result.weekday() > 4:
result = result - BDay()
- return as_timestamp(result)
+ return result
class BYearBegin(YearOffset):
@@ -1399,8 +1387,6 @@ class BYearBegin(YearOffset):
@apply_wraps
def apply(self, other):
n = self.n
- other = as_datetime(other)
-
wkday, days_in_month = tslib.monthrange(other.year, self.month)
first = _get_firstbday(wkday)
@@ -1420,8 +1406,8 @@ def apply(self, other):
other = other + relativedelta(years=years)
wkday, days_in_month = tslib.monthrange(other.year, self.month)
first = _get_firstbday(wkday)
- return as_timestamp(datetime(other.year, self.month, first, other.hour,
- other.minute, other.second, other.microsecond))
+ return datetime(other.year, self.month, first, other.hour,
+ other.minute, other.second, other.microsecond)
class YearEnd(YearOffset):
@@ -1473,8 +1459,7 @@ def _rollf(date):
else:
# n == 0, roll forward
result = _rollf(result)
-
- return as_timestamp(result)
+ return result
def onOffset(self, dt):
if self.normalize and not _is_normalized(dt):
@@ -1490,15 +1475,15 @@ class YearBegin(YearOffset):
@apply_wraps
def apply(self, other):
- def _increment(date):
- year = date.year
+ def _increment(date, n):
+ year = date.year + n - 1
if date.month >= self.month:
year += 1
return datetime(year, self.month, 1, date.hour, date.minute,
date.second, date.microsecond)
- def _decrement(date):
- year = date.year
+ def _decrement(date, n):
+ year = date.year + n + 1
if date.month < self.month or (date.month == self.month and
date.day == 1):
year -= 1
@@ -1507,24 +1492,19 @@ def _decrement(date):
def _rollf(date):
if (date.month != self.month) or date.day > 1:
- date = _increment(date)
+ date = _increment(date, 1)
return date
n = self.n
result = other
if n > 0:
- while n > 0:
- result = _increment(result)
- n -= 1
+ result = _increment(result, n)
elif n < 0:
- while n < 0:
- result = _decrement(result)
- n += 1
+ result = _decrement(result, n)
else:
# n == 0, roll forward
result = _rollf(result)
-
- return as_timestamp(result)
+ return result
def onOffset(self, dt):
if self.normalize and not _is_normalized(dt):
@@ -1624,10 +1604,9 @@ def apply(self, other):
datetime(other.year, self.startingMonth, 1))
next_year = self.get_year_end(
datetime(other.year + 1, self.startingMonth, 1))
- if getattr(other, 'tzinfo', None) is not None:
- prev_year = other.tzinfo.localize(prev_year)
- cur_year = other.tzinfo.localize(cur_year)
- next_year = other.tzinfo.localize(next_year)
+ prev_year = tslib._localize_pydatetime(prev_year, other.tzinfo)
+ cur_year = tslib._localize_pydatetime(cur_year, other.tzinfo)
+ next_year = tslib._localize_pydatetime(next_year, other.tzinfo)
if n > 0:
if other == prev_year:
@@ -1686,9 +1665,7 @@ def get_year_end(self, dt):
return self._get_year_end_last(dt)
def get_target_month_end(self, dt):
- target_month = datetime(dt.year, self.startingMonth, 1)
- if getattr(dt, 'tzinfo', None) is not None:
- target_month = dt.tzinfo.localize(target_month)
+ target_month = datetime(dt.year, self.startingMonth, 1, tzinfo=dt.tzinfo)
next_month_first_of = target_month + relativedelta(months=+1)
return next_month_first_of + relativedelta(days=-1)
@@ -1706,9 +1683,7 @@ def _get_year_end_nearest(self, dt):
return backward
def _get_year_end_last(self, dt):
- current_year = datetime(dt.year, self.startingMonth, 1)
- if getattr(dt, 'tzinfo', None) is not None:
- current_year = dt.tzinfo.localize(current_year)
+ current_year = datetime(dt.year, self.startingMonth, 1, tzinfo=dt.tzinfo)
return current_year + self._offset_lwom
@property
@@ -1822,8 +1797,6 @@ def isAnchored(self):
@apply_wraps
def apply(self, other):
base = other
- other = as_datetime(other)
-
n = self.n
if n > 0:
@@ -1926,8 +1899,7 @@ def __init__(self, n=1, **kwds):
def apply(self, other):
currentEaster = easter(other.year)
currentEaster = datetime(currentEaster.year, currentEaster.month, currentEaster.day)
- if getattr(other, 'tzinfo', None) is not None:
- currentEaster = other.tzinfo.localize(currentEaster)
+ currentEaster = tslib._localize_pydatetime(currentEaster, other.tzinfo)
# NOTE: easter returns a datetime.date so we have to convert to type of other
if self.n >= 0:
@@ -2021,19 +1993,9 @@ def nanos(self):
def apply(self, other):
# Timestamp can handle tz and nano sec, thus no need to use apply_wraps
- if type(other) == date:
- other = datetime(other.year, other.month, other.day)
- elif isinstance(other, (np.datetime64, datetime)):
- other = as_timestamp(other)
-
- if isinstance(other, datetime):
- result = other + self.delta
- if self.normalize:
- # normalize_date returns normal datetime
- result = tslib.normalize_date(result)
- return as_timestamp(result)
-
- elif isinstance(other, timedelta):
+ if isinstance(other, (datetime, np.datetime64, date)):
+ return as_timestamp(other) + self
+ if isinstance(other, timedelta):
return other + self.delta
elif isinstance(other, type(self)):
return type(self)(self.n + other.n)
@@ -2067,16 +2029,7 @@ def _delta_to_tick(delta):
else: # pragma: no cover
return Nano(nanos)
-
-def _delta_to_nanoseconds(delta):
- if isinstance(delta, np.timedelta64):
- return delta.astype('timedelta64[ns]').item()
- elif isinstance(delta, Tick):
- delta = delta.delta
-
- return (delta.days * 24 * 60 * 60 * 1000000
- + delta.seconds * 1000000
- + delta.microseconds) * 1000
+_delta_to_nanoseconds = tslib._delta_to_nanoseconds
class Day(Tick):
diff --git a/pandas/tseries/tests/test_offsets.py b/pandas/tseries/tests/test_offsets.py
index 9febec68bd458..d99cfb254cc48 100644
--- a/pandas/tseries/tests/test_offsets.py
+++ b/pandas/tseries/tests/test_offsets.py
@@ -22,8 +22,8 @@
from pandas.tseries.tools import parse_time_string, _maybe_get_tz
import pandas.tseries.offsets as offsets
-from pandas.tslib import monthrange, OutOfBoundsDatetime, NaT
-from pandas.lib import Timestamp
+from pandas.tslib import NaT, Timestamp
+import pandas.tslib as tslib
from pandas.util.testing import assertRaisesRegexp
import pandas.util.testing as tm
from pandas.tseries.offsets import BusinessMonthEnd, CacheableOffset, \
@@ -39,7 +39,7 @@ def test_monthrange():
import calendar
for y in range(2000, 2013):
for m in range(1, 13):
- assert monthrange(y, m) == calendar.monthrange(y, m)
+ assert tslib.monthrange(y, m) == calendar.monthrange(y, m)
####
@@ -99,6 +99,9 @@ class Base(tm.TestCase):
skip_np_u1p7 = [offsets.CustomBusinessDay, offsets.CDay, offsets.CustomBusinessMonthBegin,
offsets.CustomBusinessMonthEnd, offsets.Nano]
+ timezones = [None, 'UTC', 'Asia/Tokyo', 'US/Eastern',
+ 'dateutil/Asia/Tokyo', 'dateutil/US/Pacific']
+
@property
def offset_types(self):
if _np_version_under1p7:
@@ -118,6 +121,8 @@ def _get_offset(self, klass, value=1, normalize=False):
klass = klass(n=value, week=1, weekday=5, normalize=normalize)
elif klass is Week:
klass = klass(n=value, weekday=5, normalize=normalize)
+ elif klass is DateOffset:
+ klass = klass(days=value, normalize=normalize)
else:
try:
klass = klass(value, normalize=normalize)
@@ -138,7 +143,18 @@ def test_apply_out_of_range(self):
result = Timestamp('20080101') + offset
self.assertIsInstance(result, datetime)
- except (OutOfBoundsDatetime):
+ self.assertIsNone(result.tzinfo)
+
+ tm._skip_if_no_pytz()
+ tm._skip_if_no_dateutil()
+ # Check tz is preserved
+ for tz in self.timezones:
+ t = Timestamp('20080101', tz=tz)
+ result = t + offset
+ self.assertIsInstance(result, datetime)
+ self.assertEqual(t.tzinfo, result.tzinfo)
+
+ except (tslib.OutOfBoundsDatetime):
raise
except (ValueError, KeyError) as e:
raise nose.SkipTest("cannot create out_of_range offset: {0} {1}".format(str(self).split('.')[-1],e))
@@ -152,6 +168,7 @@ def setUp(self):
# are applied to 2011/01/01 09:00 (Saturday)
# used for .apply and .rollforward
self.expecteds = {'Day': Timestamp('2011-01-02 09:00:00'),
+ 'DateOffset': Timestamp('2011-01-02 09:00:00'),
'BusinessDay': Timestamp('2011-01-03 09:00:00'),
'CustomBusinessDay': Timestamp('2011-01-03 09:00:00'),
'CustomBusinessMonthEnd': Timestamp('2011-01-31 09:00:00'),
@@ -181,8 +198,6 @@ def setUp(self):
'Micro': Timestamp('2011-01-01 09:00:00.000001'),
'Nano': Timestamp(np.datetime64('2011-01-01T09:00:00.000000001Z'))}
- self.timezones = ['UTC', 'Asia/Tokyo', 'US/Eastern']
-
def test_return_type(self):
for offset in self.offset_types:
offset = self._get_offset(offset)
@@ -204,37 +219,48 @@ def _check_offsetfunc_works(self, offset, funcname, dt, expected,
func = getattr(offset_s, funcname)
result = func(dt)
- self.assert_(isinstance(result, Timestamp))
+ self.assertTrue(isinstance(result, Timestamp))
self.assertEqual(result, expected)
result = func(Timestamp(dt))
- self.assert_(isinstance(result, Timestamp))
+ self.assertTrue(isinstance(result, Timestamp))
self.assertEqual(result, expected)
+ # test nano second is preserved
+ result = func(Timestamp(dt) + Nano(5))
+ self.assertTrue(isinstance(result, Timestamp))
+ if normalize is False:
+ self.assertEqual(result, expected + Nano(5))
+ else:
+ self.assertEqual(result, expected)
+
if isinstance(dt, np.datetime64):
# test tz when input is datetime or Timestamp
return
tm._skip_if_no_pytz()
- import pytz
+ tm._skip_if_no_dateutil()
+
for tz in self.timezones:
expected_localize = expected.tz_localize(tz)
+ tz_obj = _maybe_get_tz(tz)
+ dt_tz = tslib._localize_pydatetime(dt, tz_obj)
- dt_tz = pytz.timezone(tz).localize(dt)
result = func(dt_tz)
- self.assert_(isinstance(result, Timestamp))
+ self.assertTrue(isinstance(result, Timestamp))
self.assertEqual(result, expected_localize)
result = func(Timestamp(dt, tz=tz))
- self.assert_(isinstance(result, Timestamp))
+ self.assertTrue(isinstance(result, Timestamp))
self.assertEqual(result, expected_localize)
- def _check_nanofunc_works(self, offset, funcname, dt, expected):
- offset = self._get_offset(offset)
- func = getattr(offset, funcname)
-
- t1 = Timestamp(dt)
- self.assertEqual(func(t1), expected)
+ # test nano second is preserved
+ result = func(Timestamp(dt, tz=tz) + Nano(5))
+ self.assertTrue(isinstance(result, Timestamp))
+ if normalize is False:
+ self.assertEqual(result, expected_localize + Nano(5))
+ else:
+ self.assertEqual(result, expected_localize)
def test_apply(self):
sdt = datetime(2011, 1, 1, 9, 0)
@@ -243,21 +269,18 @@ def test_apply(self):
for offset in self.offset_types:
for dt in [sdt, ndt]:
expected = self.expecteds[offset.__name__]
- if offset == Nano:
- self._check_nanofunc_works(offset, 'apply', dt, expected)
- else:
- self._check_offsetfunc_works(offset, 'apply', dt, expected)
+ self._check_offsetfunc_works(offset, 'apply', dt, expected)
- expected = Timestamp(expected.date())
- self._check_offsetfunc_works(offset, 'apply', dt, expected,
- normalize=True)
+ expected = Timestamp(expected.date())
+ self._check_offsetfunc_works(offset, 'apply', dt, expected,
+ normalize=True)
def test_rollforward(self):
expecteds = self.expecteds.copy()
# result will not be changed if the target is on the offset
no_changes = ['Day', 'MonthBegin', 'YearBegin', 'Week', 'Hour', 'Minute',
- 'Second', 'Milli', 'Micro', 'Nano']
+ 'Second', 'Milli', 'Micro', 'Nano', 'DateOffset']
for n in no_changes:
expecteds[n] = Timestamp('2011/01/01 09:00')
@@ -267,6 +290,7 @@ def test_rollforward(self):
norm_expected[k] = Timestamp(norm_expected[k].date())
normalized = {'Day': Timestamp('2011-01-02 00:00:00'),
+ 'DateOffset': Timestamp('2011-01-02 00:00:00'),
'MonthBegin': Timestamp('2011-02-01 00:00:00'),
'YearBegin': Timestamp('2012-01-01 00:00:00'),
'Week': Timestamp('2011-01-08 00:00:00'),
@@ -283,13 +307,10 @@ def test_rollforward(self):
for offset in self.offset_types:
for dt in [sdt, ndt]:
expected = expecteds[offset.__name__]
- if offset == Nano:
- self._check_nanofunc_works(offset, 'rollforward', dt, expected)
- else:
- self._check_offsetfunc_works(offset, 'rollforward', dt, expected)
- expected = norm_expected[offset.__name__]
- self._check_offsetfunc_works(offset, 'rollforward', dt, expected,
- normalize=True)
+ self._check_offsetfunc_works(offset, 'rollforward', dt, expected)
+ expected = norm_expected[offset.__name__]
+ self._check_offsetfunc_works(offset, 'rollforward', dt, expected,
+ normalize=True)
def test_rollback(self):
expecteds = {'BusinessDay': Timestamp('2010-12-31 09:00:00'),
@@ -314,7 +335,7 @@ def test_rollback(self):
# result will not be changed if the target is on the offset
for n in ['Day', 'MonthBegin', 'YearBegin', 'Week', 'Hour', 'Minute',
- 'Second', 'Milli', 'Micro', 'Nano']:
+ 'Second', 'Milli', 'Micro', 'Nano', 'DateOffset']:
expecteds[n] = Timestamp('2011/01/01 09:00')
# but be changed when normalize=True
@@ -323,6 +344,7 @@ def test_rollback(self):
norm_expected[k] = Timestamp(norm_expected[k].date())
normalized = {'Day': Timestamp('2010-12-31 00:00:00'),
+ 'DateOffset': Timestamp('2010-12-31 00:00:00'),
'MonthBegin': Timestamp('2010-12-01 00:00:00'),
'YearBegin': Timestamp('2010-01-01 00:00:00'),
'Week': Timestamp('2010-12-25 00:00:00'),
@@ -339,27 +361,24 @@ def test_rollback(self):
for offset in self.offset_types:
for dt in [sdt, ndt]:
expected = expecteds[offset.__name__]
- if offset == Nano:
- self._check_nanofunc_works(offset, 'rollback', dt, expected)
- else:
- self._check_offsetfunc_works(offset, 'rollback', dt, expected)
+ self._check_offsetfunc_works(offset, 'rollback', dt, expected)
- expected = norm_expected[offset.__name__]
- self._check_offsetfunc_works(offset, 'rollback',
- dt, expected, normalize=True)
+ expected = norm_expected[offset.__name__]
+ self._check_offsetfunc_works(offset, 'rollback',
+ dt, expected, normalize=True)
def test_onOffset(self):
for offset in self.offset_types:
dt = self.expecteds[offset.__name__]
offset_s = self._get_offset(offset)
- self.assert_(offset_s.onOffset(dt))
+ self.assertTrue(offset_s.onOffset(dt))
# when normalize=True, onOffset checks time is 00:00:00
offset_n = self._get_offset(offset, normalize=True)
- self.assert_(not offset_n.onOffset(dt))
+ self.assertFalse(offset_n.onOffset(dt))
date = datetime(dt.year, dt.month, dt.day)
- self.assert_(offset_n.onOffset(date))
+ self.assertTrue(offset_n.onOffset(date))
def test_add(self):
dt = datetime(2011, 1, 1, 9, 0)
@@ -2482,6 +2501,13 @@ def test_offset(self):
datetime(2005, 12, 30): datetime(2006, 1, 1),
datetime(2005, 12, 31): datetime(2006, 1, 1), }))
+ tests.append((YearBegin(3),
+ {datetime(2008, 1, 1): datetime(2011, 1, 1),
+ datetime(2008, 6, 30): datetime(2011, 1, 1),
+ datetime(2008, 12, 31): datetime(2011, 1, 1),
+ datetime(2005, 12, 30): datetime(2008, 1, 1),
+ datetime(2005, 12, 31): datetime(2008, 1, 1), }))
+
tests.append((YearBegin(-1),
{datetime(2007, 1, 1): datetime(2006, 1, 1),
datetime(2007, 1, 15): datetime(2007, 1, 1),
@@ -2509,12 +2535,25 @@ def test_offset(self):
datetime(2007, 12, 15): datetime(2008, 4, 1),
datetime(2012, 1, 31): datetime(2012, 4, 1), }))
+ tests.append((YearBegin(4, month=4),
+ {datetime(2007, 4, 1): datetime(2011, 4, 1),
+ datetime(2007, 4, 15): datetime(2011, 4, 1),
+ datetime(2007, 3, 1): datetime(2010, 4, 1),
+ datetime(2007, 12, 15): datetime(2011, 4, 1),
+ datetime(2012, 1, 31): datetime(2015, 4, 1), }))
+
tests.append((YearBegin(-1, month=4),
{datetime(2007, 4, 1): datetime(2006, 4, 1),
datetime(2007, 3, 1): datetime(2006, 4, 1),
datetime(2007, 12, 15): datetime(2007, 4, 1),
datetime(2012, 1, 31): datetime(2011, 4, 1), }))
+ tests.append((YearBegin(-3, month=4),
+ {datetime(2007, 4, 1): datetime(2004, 4, 1),
+ datetime(2007, 3, 1): datetime(2004, 4, 1),
+ datetime(2007, 12, 15): datetime(2005, 4, 1),
+ datetime(2012, 1, 31): datetime(2009, 4, 1), }))
+
for offset, cases in tests:
for base, expected in compat.iteritems(cases):
assertEq(offset, base, expected)
diff --git a/pandas/tslib.pyx b/pandas/tslib.pyx
index c06d8a3ba9a05..655b92cfe70f3 100644
--- a/pandas/tslib.pyx
+++ b/pandas/tslib.pyx
@@ -1051,6 +1051,26 @@ cdef inline void _localize_tso(_TSObject obj, object tz):
obj.tzinfo = tz
+def _localize_pydatetime(object dt, object tz):
+ '''
+ Take a datetime/Timestamp in UTC and localizes to timezone tz.
+ '''
+ if tz is None:
+ return dt
+ elif isinstance(dt, Timestamp):
+ return dt.tz_localize(tz)
+ elif tz == 'UTC' or tz is UTC:
+ return UTC.localize(dt)
+
+ elif _treat_tz_as_pytz(tz):
+ # datetime.replace may return incorrect result in pytz
+ return tz.localize(dt)
+ elif _treat_tz_as_dateutil(tz):
+ return dt.replace(tzinfo=tz)
+ else:
+ raise ValueError(type(tz), tz)
+
+
def get_timezone(tz):
return _get_zone(tz)
| The main fix is to preserve nanosecond info, which can be lost during `offset.apply`, but it also includes:
- Support for dateutil timezones
- A small performance improvement. Note that v0.14.1 should still take longer than v0.14.0, because the perf test in v0.14.0 didn't perform the timestamp conversion, which was fixed in #7502.
NOTE: This caches `Tick.delta` because it was being calculated 3 times repeatedly, but does it cause any side effects?
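A minimal sketch of the behavior this fix guarantees (run against a pandas build containing the fix; `MonthEnd` and `Nano` are the standard offsets used in the tests above): nanoseconds survive an offset application instead of being truncated.

```python
import pandas as pd
from pandas.tseries.offsets import MonthEnd, Nano

# a Timestamp carrying a nanosecond component
t = pd.Timestamp('2014-01-15 09:00') + Nano(5)

# applying an offset should keep the nanosecond part intact
result = t + MonthEnd()
assert result.nanosecond == 5
```

This mirrors the `func(Timestamp(dt) + Nano(5))` checks added to `_check_offsetfunc_works`.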
### Before
```
-------------------------------------------------------------------------------
Test name | head[ms] | base[ms] | ratio |
-------------------------------------------------------------------------------
timeseries_year_incr | 0.0164 | 0.0103 | 1.5846 |
timeseries_year_apply | 0.0153 | 0.0094 | 1.6356 |
timeseries_day_incr | 0.0187 | 0.0053 | 3.5075 |
timeseries_day_apply | 0.0164 | 0.0033 | 4.9048 |
Target [d0076db] : PERF: Improve index.min and max perf
Base [da0f7ae] : RLS: 0.14.0 final
```
### After the fix
```
-------------------------------------------------------------------------------
Test name | head[ms] | base[ms] | ratio |
-------------------------------------------------------------------------------
timeseries_year_incr | 0.0150 | 0.0087 | 1.7339 |
timeseries_year_apply | 0.0126 | 0.0073 | 1.7283 |
timeseries_day_incr | 0.0130 | 0.0053 | 2.4478 |
timeseries_day_apply | 0.0107 | 0.0033 | 3.2143 |
Target [64dd021] : BUG: offsets.apply doesnt preserve nanosecond
Base [da0f7ae] : RLS: 0.14.0 final
```
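A similar sketch for the timezone side of the change (again assuming a pandas build with the fix): applying an offset to a tz-aware `Timestamp` should return a result in the same timezone, which is what the new `test_apply_out_of_range` assertions check for both pytz- and dateutil-style zones.

```python
import pandas as pd
from pandas.tseries.offsets import QuarterEnd

# a tz-aware Timestamp; 'dateutil/US/Eastern' works as well after this change
t = pd.Timestamp('2014-01-15 09:00', tz='US/Eastern')

# offset.apply preserves the timezone of the input
result = t + QuarterEnd(startingMonth=3)
assert result.tzinfo is not None
assert str(result.tz) == 'US/Eastern'
```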
| https://api.github.com/repos/pandas-dev/pandas/pulls/7697 | 2014-07-08T15:55:25Z | 2014-07-25T15:10:47Z | 2014-07-25T15:10:47Z | 2014-07-25T20:42:15Z |
TST/CLN: centralize numpy < 1.7 skips | diff --git a/pandas/io/tests/test_json/test_pandas.py b/pandas/io/tests/test_json/test_pandas.py
index a6bd94153c3bd..f6f705201bf18 100644
--- a/pandas/io/tests/test_json/test_pandas.py
+++ b/pandas/io/tests/test_json/test_pandas.py
@@ -5,7 +5,7 @@
import numpy as np
import nose
-from pandas import Series, DataFrame, DatetimeIndex, Timestamp, _np_version_under1p7
+from pandas import Series, DataFrame, DatetimeIndex, Timestamp
import pandas as pd
read_json = pd.read_json
@@ -601,8 +601,7 @@ def test_url(self):
self.assertEqual(result[c].dtype, 'datetime64[ns]')
def test_timedelta(self):
- if _np_version_under1p7:
- raise nose.SkipTest("numpy < 1.7")
+ tm._skip_if_not_numpy17_friendly()
from datetime import timedelta
converter = lambda x: pd.to_timedelta(x,unit='ms')
diff --git a/pandas/io/tests/test_pytables.py b/pandas/io/tests/test_pytables.py
index dd30527b1f82d..d0d1b02577f89 100644
--- a/pandas/io/tests/test_pytables.py
+++ b/pandas/io/tests/test_pytables.py
@@ -2061,11 +2061,7 @@ def compare(a,b):
def test_append_with_timezones_dateutil(self):
from datetime import timedelta
-
- try:
- import dateutil
- except ImportError:
- raise nose.SkipTest
+ tm._skip_if_no_dateutil()
# use maybe_get_tz instead of dateutil.tz.gettz to handle the windows filename issues.
from pandas.tslib import maybe_get_tz
@@ -2186,8 +2182,7 @@ def setTZ(tz):
setTZ(orig_tz)
def test_append_with_timedelta(self):
- if _np_version_under1p7:
- raise nose.SkipTest("requires numpy >= 1.7")
+ tm._skip_if_not_numpy17_friendly()
# GH 3577
# append timedelta
diff --git a/pandas/io/tests/test_sql.py b/pandas/io/tests/test_sql.py
index aa69fb964d947..122b80c3f0076 100644
--- a/pandas/io/tests/test_sql.py
+++ b/pandas/io/tests/test_sql.py
@@ -509,8 +509,7 @@ def test_date_and_index(self):
def test_timedelta(self):
# see #6921
- if _np_version_under1p7:
- raise nose.SkipTest("test only valid in numpy >= 1.7")
+ tm._skip_if_not_numpy17_friendly()
df = to_timedelta(Series(['00:00:01', '00:00:03'], name='foo')).to_frame()
with tm.assert_produces_warning(UserWarning):
@@ -659,7 +658,7 @@ def test_not_reflect_all_tables(self):
self.conn.execute(qry)
qry = """CREATE TABLE other_table (x INTEGER, y INTEGER);"""
self.conn.execute(qry)
-
+
with warnings.catch_warnings(record=True) as w:
# Cause all warnings to always be triggered.
warnings.simplefilter("always")
diff --git a/pandas/tests/test_base.py b/pandas/tests/test_base.py
index c2fb7017ee4d6..832671521c815 100644
--- a/pandas/tests/test_base.py
+++ b/pandas/tests/test_base.py
@@ -198,8 +198,7 @@ def setUp(self):
self.not_valid_objs = [ o for o in self.objs if not o._allow_index_ops ]
def test_ops(self):
- if _np_version_under1p7:
- raise nose.SkipTest("test only valid in numpy >= 1.7")
+ tm._skip_if_not_numpy17_friendly()
for op in ['max','min']:
for o in self.objs:
result = getattr(o,op)()
diff --git a/pandas/tests/test_format.py b/pandas/tests/test_format.py
index 884a2c1a1ae8e..5d785df355aa3 100644
--- a/pandas/tests/test_format.py
+++ b/pandas/tests/test_format.py
@@ -80,13 +80,6 @@ def has_expanded_repr(df):
return True
return False
-def skip_if_np_version_under1p7():
- if _np_version_under1p7:
- import nose
-
- raise nose.SkipTest('numpy >= 1.7 required')
-
-
class TestDataFrameFormatting(tm.TestCase):
_multiprocess_can_split_ = True
@@ -2736,7 +2729,7 @@ def test_format(self):
class TestRepr_timedelta64(tm.TestCase):
@classmethod
def setUpClass(cls):
- skip_if_np_version_under1p7()
+ tm._skip_if_not_numpy17_friendly()
def test_legacy(self):
delta_1d = pd.to_timedelta(1, unit='D')
@@ -2784,7 +2777,7 @@ def test_long(self):
class TestTimedelta64Formatter(tm.TestCase):
@classmethod
def setUpClass(cls):
- skip_if_np_version_under1p7()
+ tm._skip_if_not_numpy17_friendly()
def test_mixed(self):
x = pd.to_timedelta(list(range(5)) + [pd.NaT], unit='D')
diff --git a/pandas/tests/test_frame.py b/pandas/tests/test_frame.py
index d3bf3cfe32926..1cada8efb6c6f 100644
--- a/pandas/tests/test_frame.py
+++ b/pandas/tests/test_frame.py
@@ -32,8 +32,7 @@
import pandas.core.format as fmt
import pandas.core.datetools as datetools
from pandas import (DataFrame, Index, Series, notnull, isnull,
- MultiIndex, DatetimeIndex, Timestamp, date_range, read_csv,
- _np_version_under1p7)
+ MultiIndex, DatetimeIndex, Timestamp, date_range, read_csv)
import pandas as pd
from pandas.parser import CParserError
from pandas.util.misc import is_little_endian
@@ -3772,8 +3771,7 @@ def test_operators_timedelta64(self):
self.assertTrue(df['off2'].dtype == 'timedelta64[ns]')
def test_datetimelike_setitem_with_inference(self):
- if _np_version_under1p7:
- raise nose.SkipTest("numpy < 1.7")
+ tm._skip_if_not_numpy17_friendly()
# GH 7592
# assignment of timedeltas with NaT
@@ -13036,6 +13034,7 @@ def test_select_dtypes_exclude_include(self):
tm.assert_frame_equal(r, e)
def test_select_dtypes_not_an_attr_but_still_valid_dtype(self):
+ tm._skip_if_not_numpy17_friendly()
df = DataFrame({'a': list('abc'),
'b': list(range(1, 4)),
'c': np.arange(3, 6).astype('u1'),
diff --git a/pandas/tests/test_generic.py b/pandas/tests/test_generic.py
index 82447635473a3..044d4054755ba 100644
--- a/pandas/tests/test_generic.py
+++ b/pandas/tests/test_generic.py
@@ -7,7 +7,7 @@
import pandas as pd
from pandas import (Index, Series, DataFrame, Panel,
- isnull, notnull,date_range, _np_version_under1p7)
+ isnull, notnull,date_range)
from pandas.core.index import Index, MultiIndex
import pandas.core.common as com
@@ -160,8 +160,7 @@ def f():
self.assertRaises(ValueError, lambda : not obj1)
def test_numpy_1_7_compat_numeric_methods(self):
- if _np_version_under1p7:
- raise nose.SkipTest("numpy < 1.7")
+ tm._skip_if_not_numpy17_friendly()
# GH 4435
# numpy in 1.7 tries to pass addtional arguments to pandas functions
diff --git a/pandas/tests/test_index.py b/pandas/tests/test_index.py
index 23a0f39ef3547..6fb88eb5597a9 100644
--- a/pandas/tests/test_index.py
+++ b/pandas/tests/test_index.py
@@ -32,11 +32,6 @@
from pandas import _np_version_under1p7
-def _skip_if_need_numpy_1_7():
- if _np_version_under1p7:
- raise nose.SkipTest('numpy >= 1.7 required')
-
-
class TestIndex(tm.TestCase):
_multiprocess_can_split_ = True
@@ -340,7 +335,7 @@ def test_asof(self):
tm.assert_isinstance(self.dateIndex.asof(d), Timestamp)
def test_nanosecond_index_access(self):
- _skip_if_need_numpy_1_7()
+ tm._skip_if_not_numpy17_friendly()
s = Series([Timestamp('20130101')]).values.view('i8')[0]
r = DatetimeIndex([s + 50 + i for i in range(100)])
diff --git a/pandas/tests/test_series.py b/pandas/tests/test_series.py
index fae403ebb653d..d08f7e1d547c8 100644
--- a/pandas/tests/test_series.py
+++ b/pandas/tests/test_series.py
@@ -2721,8 +2721,7 @@ def test_timedelta64_operations_with_integers(self):
self.assertRaises(TypeError, sop, s2.values)
def test_timedelta64_conversions(self):
- if _np_version_under1p7:
- raise nose.SkipTest("cannot use 2 argument form of timedelta64 conversions with numpy < 1.7")
+ tm._skip_if_not_numpy17_friendly()
startdate = Series(date_range('2013-01-01', '2013-01-03'))
enddate = Series(date_range('2013-03-01', '2013-03-03'))
@@ -2835,8 +2834,7 @@ def run_ops(ops, get_ser, test_ser):
dt1 + td1
def test_ops_datetimelike_align(self):
- if _np_version_under1p7:
- raise nose.SkipTest("timedelta broken in np < 1.7")
+ tm._skip_if_not_numpy17_friendly()
# GH 7500
# datetimelike ops need to align
@@ -2899,8 +2897,7 @@ def test_timedelta64_functions(self):
assert_series_equal(result, expected)
def test_timedelta_fillna(self):
- if _np_version_under1p7:
- raise nose.SkipTest("timedelta broken in np 1.6.1")
+ tm._skip_if_not_numpy17_friendly()
#GH 3371
s = Series([Timestamp('20130101'), Timestamp('20130101'),
@@ -3107,8 +3104,7 @@ def test_bfill(self):
assert_series_equal(ts.bfill(), ts.fillna(method='bfill'))
def test_sub_of_datetime_from_TimeSeries(self):
- if _np_version_under1p7:
- raise nose.SkipTest("timedelta broken in np 1.6.1")
+ tm._skip_if_not_numpy17_friendly()
from pandas.tseries.timedeltas import _possibly_cast_to_timedelta
from datetime import datetime
diff --git a/pandas/tools/tests/test_merge.py b/pandas/tools/tests/test_merge.py
index f2239bba520e7..4601ad0784562 100644
--- a/pandas/tools/tests/test_merge.py
+++ b/pandas/tools/tests/test_merge.py
@@ -812,8 +812,7 @@ def test_join_append_timedeltas(self):
# timedelta64 issues with join/merge
# GH 5695
- if _np_version_under1p7:
- raise nose.SkipTest("numpy < 1.7")
+ tm._skip_if_not_numpy17_friendly()
d = {'d': dt.datetime(2013, 11, 5, 5, 56), 't': dt.timedelta(0, 22500)}
df = DataFrame(columns=list('dt'))
@@ -2005,9 +2004,7 @@ def test_concat_datetime64_block(self):
def test_concat_timedelta64_block(self):
# not friendly for < 1.7
- if _np_version_under1p7:
- raise nose.SkipTest("numpy < 1.7")
-
+ tm._skip_if_not_numpy17_friendly()
from pandas import to_timedelta
rng = to_timedelta(np.arange(10),unit='s')
diff --git a/pandas/tseries/tests/test_frequencies.py b/pandas/tseries/tests/test_frequencies.py
index 9089ca85ac3bb..37371b5828c8c 100644
--- a/pandas/tseries/tests/test_frequencies.py
+++ b/pandas/tseries/tests/test_frequencies.py
@@ -137,8 +137,7 @@ def test_microsecond(self):
self._check_tick(timedelta(microseconds=1), 'U')
def test_nanosecond(self):
- if _np_version_under1p7:
- raise nose.SkipTest("requires numpy >= 1.7 to run")
+ tm._skip_if_not_numpy17_friendly()
self._check_tick(np.timedelta64(1, 'ns'), 'N')
def _check_tick(self, base_delta, code):
diff --git a/pandas/tseries/tests/test_offsets.py b/pandas/tseries/tests/test_offsets.py
index e51ec45fe1c79..1ef1bd184bdbc 100644
--- a/pandas/tseries/tests/test_offsets.py
+++ b/pandas/tseries/tests/test_offsets.py
@@ -180,7 +180,7 @@ def setUp(self):
'Milli': Timestamp('2011-01-01 09:00:00.001000'),
'Micro': Timestamp('2011-01-01 09:00:00.000001'),
'Nano': Timestamp(np.datetime64('2011-01-01T09:00:00.000000001Z'))}
-
+
self.timezones = ['UTC', 'Asia/Tokyo', 'US/Eastern']
def test_return_type(self):
@@ -2782,8 +2782,8 @@ def test_Microsecond():
def test_NanosecondGeneric():
- if _np_version_under1p7:
- raise nose.SkipTest('numpy >= 1.7 required')
+ tm._skip_if_not_numpy17_friendly()
+
timestamp = Timestamp(datetime(2010, 1, 1))
assert timestamp.nanosecond == 0
@@ -2795,8 +2795,7 @@ def test_NanosecondGeneric():
def test_Nanosecond():
- if _np_version_under1p7:
- raise nose.SkipTest('numpy >= 1.7 required')
+ tm._skip_if_not_numpy17_friendly()
timestamp = Timestamp(datetime(2010, 1, 1))
assertEq(Nano(), timestamp, timestamp + np.timedelta64(1, 'ns'))
diff --git a/pandas/tseries/tests/test_timedeltas.py b/pandas/tseries/tests/test_timedeltas.py
index 8e841632d88d3..9d85c599c840c 100644
--- a/pandas/tseries/tests/test_timedeltas.py
+++ b/pandas/tseries/tests/test_timedeltas.py
@@ -7,7 +7,7 @@
import pandas as pd
from pandas import (Index, Series, DataFrame, Timestamp, isnull, notnull,
- bdate_range, date_range, _np_version_under1p7)
+ bdate_range, date_range)
import pandas.core.common as com
from pandas.compat import StringIO, lrange, range, zip, u, OrderedDict, long
from pandas import compat, to_timedelta, tslib
@@ -15,14 +15,10 @@
from pandas.util.testing import (assert_series_equal,
assert_frame_equal,
assert_almost_equal,
- ensure_clean)
+ ensure_clean,
+ _skip_if_not_numpy17_friendly)
import pandas.util.testing as tm
-def _skip_if_numpy_not_friendly():
- # not friendly for < 1.7
- if _np_version_under1p7:
- raise nose.SkipTest("numpy < 1.7")
-
class TestTimedeltas(tm.TestCase):
_multiprocess_can_split_ = True
@@ -30,7 +26,7 @@ def setUp(self):
pass
def test_numeric_conversions(self):
- _skip_if_numpy_not_friendly()
+ _skip_if_not_numpy17_friendly()
self.assertEqual(ct(0), np.timedelta64(0,'ns'))
self.assertEqual(ct(10), np.timedelta64(10,'ns'))
@@ -42,14 +38,14 @@ def test_numeric_conversions(self):
self.assertEqual(ct(10,unit='d'), np.timedelta64(10,'D').astype('m8[ns]'))
def test_timedelta_conversions(self):
- _skip_if_numpy_not_friendly()
+ _skip_if_not_numpy17_friendly()
self.assertEqual(ct(timedelta(seconds=1)), np.timedelta64(1,'s').astype('m8[ns]'))
self.assertEqual(ct(timedelta(microseconds=1)), np.timedelta64(1,'us').astype('m8[ns]'))
self.assertEqual(ct(timedelta(days=1)), np.timedelta64(1,'D').astype('m8[ns]'))
def test_short_format_converters(self):
- _skip_if_numpy_not_friendly()
+ _skip_if_not_numpy17_friendly()
def conv(v):
return v.astype('m8[ns]')
@@ -97,7 +93,7 @@ def conv(v):
self.assertRaises(ValueError, ct, 'foo')
def test_full_format_converters(self):
- _skip_if_numpy_not_friendly()
+ _skip_if_not_numpy17_friendly()
def conv(v):
return v.astype('m8[ns]')
@@ -120,13 +116,13 @@ def conv(v):
self.assertRaises(ValueError, ct, '- 1days, 00')
def test_nat_converters(self):
- _skip_if_numpy_not_friendly()
+ _skip_if_not_numpy17_friendly()
self.assertEqual(to_timedelta('nat',box=False).astype('int64'), tslib.iNaT)
self.assertEqual(to_timedelta('nan',box=False).astype('int64'), tslib.iNaT)
def test_to_timedelta(self):
- _skip_if_numpy_not_friendly()
+ _skip_if_not_numpy17_friendly()
def conv(v):
return v.astype('m8[ns]')
@@ -235,7 +231,7 @@ def testit(unit, transform):
self.assertRaises(ValueError, lambda : to_timedelta(1,unit='foo'))
def test_to_timedelta_via_apply(self):
- _skip_if_numpy_not_friendly()
+ _skip_if_not_numpy17_friendly()
# GH 5458
expected = Series([np.timedelta64(1,'s')])
@@ -246,7 +242,7 @@ def test_to_timedelta_via_apply(self):
tm.assert_series_equal(result, expected)
def test_timedelta_ops(self):
- _skip_if_numpy_not_friendly()
+ _skip_if_not_numpy17_friendly()
# GH4984
# make sure ops return timedeltas
@@ -275,7 +271,7 @@ def test_timedelta_ops(self):
tm.assert_almost_equal(result, expected)
def test_timedelta_ops_scalar(self):
- _skip_if_numpy_not_friendly()
+ _skip_if_not_numpy17_friendly()
# GH 6808
base = pd.to_datetime('20130101 09:01:12.123456')
@@ -309,7 +305,7 @@ def test_timedelta_ops_scalar(self):
self.assertEqual(result, expected_sub)
def test_to_timedelta_on_missing_values(self):
- _skip_if_numpy_not_friendly()
+ _skip_if_not_numpy17_friendly()
# GH5438
timedelta_NaT = np.timedelta64('NaT')
@@ -328,7 +324,7 @@ def test_to_timedelta_on_missing_values(self):
self.assertEqual(actual.astype('int64'), timedelta_NaT.astype('int64'))
def test_timedelta_ops_with_missing_values(self):
- _skip_if_numpy_not_friendly()
+ _skip_if_not_numpy17_friendly()
# setup
s1 = pd.to_timedelta(Series(['00:00:01']))
@@ -407,7 +403,7 @@ def test_timedelta_ops_with_missing_values(self):
assert_frame_equal(actual, dfn)
def test_apply_to_timedelta(self):
- _skip_if_numpy_not_friendly()
+ _skip_if_not_numpy17_friendly()
timedelta_NaT = pd.to_timedelta('NaT')
diff --git a/pandas/tseries/tests/test_timeseries.py b/pandas/tseries/tests/test_timeseries.py
index f353f08114a2c..1614261542733 100644
--- a/pandas/tseries/tests/test_timeseries.py
+++ b/pandas/tseries/tests/test_timeseries.py
@@ -770,11 +770,13 @@ def test_index_cast_datetime64_other_units(self):
self.assertTrue((idx.values == tslib.cast_to_nanoseconds(arr)).all())
def test_index_astype_datetime64(self):
- idx = Index([datetime(2012, 1, 1)], dtype=object)
-
+ # valid only under 1.7!
if not _np_version_under1p7:
raise nose.SkipTest("test only valid in numpy < 1.7")
+ idx = Index([datetime(2012, 1, 1)], dtype=object)
+ casted = idx.astype(np.dtype('M8[D]'))
+
casted = idx.astype(np.dtype('M8[D]'))
expected = DatetimeIndex(idx.values)
tm.assert_isinstance(casted, DatetimeIndex)
@@ -2680,9 +2682,7 @@ def assert_index_parameters(self, index):
assert index.inferred_freq == '40960N'
def test_ns_index(self):
-
- if _np_version_under1p7:
- raise nose.SkipTest
+ tm._skip_if_not_numpy17_friendly()
nsamples = 400
ns = int(1e9 / 24414)
diff --git a/pandas/tseries/tests/test_tslib.py b/pandas/tseries/tests/test_tslib.py
index 122bb93a878ee..a47d6a178f8b2 100644
--- a/pandas/tseries/tests/test_tslib.py
+++ b/pandas/tseries/tests/test_tslib.py
@@ -264,8 +264,7 @@ def test_parsing_timezone_offsets(self):
class TestTimestampNsOperations(tm.TestCase):
def setUp(self):
- if _np_version_under1p7:
- raise nose.SkipTest('numpy >= 1.7 required')
+ tm._skip_if_not_numpy17_friendly()
self.timestamp = Timestamp(datetime.datetime.utcnow())
def assert_ns_timedelta(self, modified_timestamp, expected_value):
diff --git a/pandas/util/testing.py b/pandas/util/testing.py
index 9c49014a47da7..0d7ea77e96955 100644
--- a/pandas/util/testing.py
+++ b/pandas/util/testing.py
@@ -41,7 +41,8 @@
from pandas.tseries.index import DatetimeIndex
from pandas.tseries.period import PeriodIndex
-from pandas import _testing
+from pandas import _testing, _np_version_under1p7
+
from pandas.io.common import urlopen
@@ -209,6 +210,12 @@ def setUpClass(cls):
cls.setUpClass = setUpClass
return cls
+def _skip_if_not_numpy17_friendly():
+ # not friendly for < 1.7
+ if _np_version_under1p7:
+ import nose
+ raise nose.SkipTest("numpy >= 1.7 is required")
+
def _skip_if_no_scipy():
try:
import scipy.stats
| TST: skip on older numpy for (GH7694)
closes #7694
| https://api.github.com/repos/pandas-dev/pandas/pulls/7696 | 2014-07-08T14:15:02Z | 2014-07-08T15:40:09Z | 2014-07-08T15:40:09Z | 2014-07-08T15:40:09Z |
Fix 7180 autodetect | diff --git a/doc/source/v0.15.0.txt b/doc/source/v0.15.0.txt
index dd71ef1f63d54..d8e87ceaa830c 100644
--- a/doc/source/v0.15.0.txt
+++ b/doc/source/v0.15.0.txt
@@ -786,3 +786,5 @@ Bug Fixes
needed interpolating (:issue:`7173`).
- Bug where ``col_space`` was ignored in ``DataFrame.to_string()`` when ``header=False``
(:issue:`8230`).
+
+- Bug in DataFrame terminal display: Setting max_column/max_rows to zero did not trigger auto-resizing of dfs to fit terminal width/height (:issue:`7180`).
diff --git a/pandas/core/config_init.py b/pandas/core/config_init.py
index c32796cf082d4..8379266533c86 100644
--- a/pandas/core/config_init.py
+++ b/pandas/core/config_init.py
@@ -33,22 +33,30 @@
pc_max_rows_doc = """
: int
- This sets the maximum number of rows pandas should output when printing
- out various output. For example, this value determines whether the repr()
- for a dataframe prints out fully or just a summary repr.
- 'None' value means unlimited.
+ If max_rows is exceeded, switch to truncate view. Depending on
+ `large_repr`, objects are either centrally truncated or printed as
+ a summary view. 'None' value means unlimited.
+
+ In case python/IPython is running in a terminal and `large_repr`
+ equals 'truncate' this can be set to 0 and pandas will auto-detect
+ the height of the terminal and print a truncated object which fits
+ the screen height. The IPython notebook, IPython qtconsole, or
+ IDLE do not run in a terminal and hence it is not possible to do
+ correct auto-detection.
"""
pc_max_cols_doc = """
: int
- max_rows and max_columns are used in __repr__() methods to decide if
- to_string() or info() is used to render an object to a string. In case
- python/IPython is running in a terminal this can be set to 0 and pandas
- will correctly auto-detect the width the terminal and swap to a smaller
- format in case all columns would not fit vertically. The IPython notebook,
- IPython qtconsole, or IDLE do not run in a terminal and hence it is not
- possible to do correct auto-detection.
- 'None' value means unlimited.
+ If max_cols is exceeded, switch to truncate view. Depending on
+ `large_repr`, objects are either centrally truncated or printed as
+ a summary view. 'None' value means unlimited.
+
+ In case python/IPython is running in a terminal and `large_repr`
+ equals 'truncate' this can be set to 0 and pandas will auto-detect
+ the width of the terminal and print a truncated object which fits
+ the screen width. The IPython notebook, IPython qtconsole, or IDLE
+ do not run in a terminal and hence it is not possible to do
+ correct auto-detection.
"""
pc_max_levels_doc = """
diff --git a/pandas/core/format.py b/pandas/core/format.py
index 2658410358000..89fe7b9b9a769 100644
--- a/pandas/core/format.py
+++ b/pandas/core/format.py
@@ -4,16 +4,15 @@
# pylint: disable=W0141
import sys
-import re
from pandas.core.base import PandasObject
-from pandas.core.common import adjoin, isnull, notnull
+from pandas.core.common import adjoin, notnull
from pandas.core.index import Index, MultiIndex, _ensure_index
from pandas import compat
from pandas.compat import(StringIO, lzip, range, map, zip, reduce, u,
OrderedDict)
from pandas.util.terminal import get_terminal_size
-from pandas.core.config import get_option, set_option, reset_option
+from pandas.core.config import get_option, set_option
import pandas.core.common as com
import pandas.lib as lib
from pandas.tslib import iNaT
@@ -22,7 +21,6 @@
import itertools
import csv
-from datetime import time
from pandas.tseries.period import PeriodIndex, DatetimeIndex
@@ -321,30 +319,69 @@ def __init__(self, frame, buf=None, columns=None, col_space=None,
self._chk_truncate()
def _chk_truncate(self):
+ '''
+ Checks whether the frame should be truncated. If so, slices
+ the frame up.
+ '''
from pandas.tools.merge import concat
- truncate_h = self.max_cols and (len(self.columns) > self.max_cols)
- truncate_v = self.max_rows and (len(self.frame) > self.max_rows)
+ # Column of which first element is used to determine width of a dot col
+ self.tr_size_col = -1
# Cut the data to the information actually printed
max_cols = self.max_cols
max_rows = self.max_rows
+
+ if max_cols == 0 or max_rows == 0: # assume we are in the terminal (why else = 0)
+ (w,h) = get_terminal_size()
+ self.w = w
+ self.h = h
+ if self.max_rows == 0:
+ dot_row = 1
+ prompt_row = 1
+ if self.show_dimensions:
+ show_dimension_rows = 3
+ n_add_rows = self.header + dot_row + show_dimension_rows + prompt_row
+ max_rows_adj = self.h - n_add_rows # rows available to fill with actual data
+ self.max_rows_adj = max_rows_adj
+
+ # Format only rows and columns that could potentially fit the screen
+ if max_cols == 0 and len(self.frame.columns) > w:
+ max_cols = w
+ if max_rows == 0 and len(self.frame) > h:
+ max_rows = h
+
+ if not hasattr(self,'max_rows_adj'):
+ self.max_rows_adj = max_rows
+ if not hasattr(self,'max_cols_adj'):
+ self.max_cols_adj = max_cols
+
+ max_cols_adj = self.max_cols_adj
+ max_rows_adj = self.max_rows_adj
+
+ truncate_h = max_cols_adj and (len(self.columns) > max_cols_adj)
+ truncate_v = max_rows_adj and (len(self.frame) > max_rows_adj)
+
frame = self.frame
if truncate_h:
- if max_cols > 1:
- col_num = (max_cols // 2)
- frame = concat( (frame.iloc[:,:col_num],frame.iloc[:,-col_num:]),axis=1 )
- else:
- col_num = max_cols
+ if max_cols_adj == 0:
+ col_num = len(frame.columns)
+ elif max_cols_adj == 1:
frame = frame.iloc[:,:max_cols]
+ col_num = max_cols
+ else:
+ col_num = (max_cols_adj // 2)
+ frame = concat( (frame.iloc[:,:col_num],frame.iloc[:,-col_num:]),axis=1 )
self.tr_col_num = col_num
if truncate_v:
- if max_rows > 1:
- row_num = max_rows // 2
- frame = concat( (frame.iloc[:row_num,:],frame.iloc[-row_num:,:]) )
- else:
+ if max_rows_adj == 0:
+ row_num = len(frame)
+ if max_rows_adj == 1:
row_num = max_rows
frame = frame.iloc[:max_rows,:]
+ else:
+ row_num = max_rows_adj // 2
+ frame = concat( (frame.iloc[:row_num,:],frame.iloc[-row_num:,:]) )
self.tr_row_num = row_num
self.tr_frame = frame
@@ -360,13 +397,12 @@ def _to_str_columns(self):
frame = self.tr_frame
# may include levels names also
- str_index = self._get_formatted_index(frame)
+ str_index = self._get_formatted_index(frame)
str_columns = self._get_formatted_column_labels(frame)
if self.header:
stringified = []
- col_headers = frame.columns
for i, c in enumerate(frame):
cheader = str_columns[i]
max_colwidth = max(self.col_space or 0,
@@ -389,7 +425,6 @@ def _to_str_columns(self):
else:
stringified = []
for i, c in enumerate(frame):
- formatter = self._get_formatter(i)
fmt_values = self._format_col(i)
fmt_values = _make_fixed_width(fmt_values, self.justify,
minimum=(self.col_space or 0))
@@ -406,8 +441,8 @@ def _to_str_columns(self):
if truncate_h:
col_num = self.tr_col_num
- col_width = len(strcols[col_num][0]) # infer from column header
- strcols.insert(col_num + 1, ['...'.center(col_width)] * (len(str_index)))
+ col_width = len(strcols[self.tr_size_col][0]) # infer from column header
+ strcols.insert(self.tr_col_num + 1, ['...'.center(col_width)] * (len(str_index)))
if truncate_v:
n_header_rows = len(str_index) - len(frame)
row_num = self.tr_row_num
@@ -424,19 +459,19 @@ def _to_str_columns(self):
if ix == 0:
dot_str = my_str.ljust(cwidth)
elif is_dot_col:
+ cwidth = len(strcols[self.tr_size_col][0])
dot_str = my_str.center(cwidth)
else:
dot_str = my_str.rjust(cwidth)
strcols[ix].insert(row_num + n_header_rows, dot_str)
-
return strcols
def to_string(self):
"""
Render a DataFrame to a console-friendly tabular output.
"""
-
+ from pandas import Series
frame = self.frame
if len(frame.columns) == 0 or len(frame.index) == 0:
@@ -447,10 +482,40 @@ def to_string(self):
text = info_line
else:
strcols = self._to_str_columns()
- if self.line_width is None:
+ if self.line_width is None: # no need to wrap around just print the whole frame
text = adjoin(1, *strcols)
- else:
+ elif not isinstance(self.max_cols,int) or self.max_cols > 0: # perhaps need to wrap around
text = self._join_multiline(*strcols)
+ else: # max_cols == 0. Try to fit frame to terminal
+ text = adjoin(1, *strcols).split('\n')
+ row_lens = Series(text).apply(len)
+ max_len_col_ix = np.argmax(row_lens)
+ max_len = row_lens[max_len_col_ix]
+ headers = [ele[0] for ele in strcols]
+ # Size of last col determines dot col size. See `self._to_str_columns
+ size_tr_col = len(headers[self.tr_size_col])
+ max_len += size_tr_col # Need to make space for largest row plus truncate (dot) col
+ dif = max_len - self.w
+ adj_dif = dif
+ col_lens = Series([Series(ele).apply(len).max() for ele in strcols])
+ n_cols = len(col_lens)
+ counter = 0
+ while adj_dif > 0 and n_cols > 1:
+ counter += 1
+ mid = int(round(n_cols / 2.))
+ mid_ix = col_lens.index[mid]
+ col_len = col_lens[mid_ix]
+ adj_dif -= ( col_len + 1 ) # adjoin adds one
+ col_lens = col_lens.drop(mid_ix)
+ n_cols = len(col_lens)
+ max_cols_adj = n_cols - self.index # subtract index column
+ self.max_cols_adj = max_cols_adj
+
+ # Call again _chk_truncate to cut frame appropriately
+ # and then generate string representation
+ self._chk_truncate()
+ strcols = self._to_str_columns()
+ text = adjoin(1, *strcols)
self.buf.writelines(text)
@@ -472,8 +537,8 @@ def _join_multiline(self, *strcols):
col_bins = _binify(col_widths, lwidth)
nbins = len(col_bins)
- if self.max_rows and len(self.frame) > self.max_rows:
- nrows = self.max_rows + 1
+ if self.truncate_v:
+ nrows = self.max_rows_adj + 1
else:
nrows = len(self.frame)
@@ -636,6 +701,7 @@ def is_numeric_dtype(dtype):
for x in str_columns:
x.append('')
+ # self.str_columns = str_columns
return str_columns
@property
diff --git a/pandas/tests/test_format.py b/pandas/tests/test_format.py
index d07065aed4b6a..5783d148df75d 100644
--- a/pandas/tests/test_format.py
+++ b/pandas/tests/test_format.py
@@ -280,6 +280,36 @@ def mkframe(n):
com.pprint_thing(df._repr_fits_horizontal_())
self.assertTrue(has_expanded_repr(df))
+ def test_auto_detect(self):
+ term_width, term_height = get_terminal_size()
+ fac = 1.05 # Arbitrary large factor to exceed term widht
+ cols = range(int(term_width * fac))
+ index = range(10)
+ df = DataFrame(index=index, columns=cols)
+ with option_context('mode.sim_interactive', True):
+ with option_context('max_rows',None):
+ with option_context('max_columns',None):
+ # Wrap around with None
+ self.assertTrue(has_expanded_repr(df))
+ with option_context('max_rows',0):
+ with option_context('max_columns',0):
+ # Truncate with auto detection.
+ self.assertTrue(has_horizontally_truncated_repr(df))
+
+ index = range(int(term_height * fac))
+ df = DataFrame(index=index, columns=cols)
+ with option_context('max_rows',0):
+ with option_context('max_columns',None):
+ # Wrap around with None
+ self.assertTrue(has_expanded_repr(df))
+ # Truncate vertically
+ self.assertTrue(has_vertically_truncated_repr(df))
+
+ with option_context('max_rows',None):
+ with option_context('max_columns',0):
+ self.assertTrue(has_horizontally_truncated_repr(df))
+
+
def test_to_string_repr_unicode(self):
buf = StringIO()
| This PR closes #7180
In the terminal:
if max_columns == 0, auto-detect the number of columns that fit;
if max_rows == 0, auto-detect the number of rows that fit.
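Roughly, the auto-detection measures the terminal and budgets the remaining lines for data rows. A minimal standalone sketch of that budgeting (the helper name and reserved-line counts mirror `_chk_truncate` but are illustrative, not pandas API):

```python
import shutil

def budget_rows(term_height, header=1, show_dimensions=True):
    # Budget mirroring _chk_truncate: reserve one line for the '...'
    # truncation row, one for the shell prompt, plus the header and the
    # optional three-line dimensions footer; the rest holds data rows.
    dot_row = 1
    prompt_row = 1
    dim_rows = 3 if show_dimensions else 0
    return term_height - (header + dot_row + dim_rows + prompt_row)

# works even without a tty thanks to the fallback
h = shutil.get_terminal_size(fallback=(80, 24)).lines
print(budget_rows(24))  # 24-line terminal -> 18 data rows
```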
| https://api.github.com/repos/pandas-dev/pandas/pulls/7691 | 2014-07-08T08:48:52Z | 2014-09-18T13:55:33Z | null | 2014-09-18T15:36:47Z |
BUG: Fix conditional for underlying price in io.data.options. | diff --git a/pandas/io/data.py b/pandas/io/data.py
index 13ced745b7b3f..0b1601b143be0 100644
--- a/pandas/io/data.py
+++ b/pandas/io/data.py
@@ -736,9 +736,8 @@ def _get_option_data(self, month, year, expiry, name):
" found".format(table_loc, ntables))
option_data = _parse_options_data(tables[table_loc])
- option_data = self._process_data(option_data)
option_data['Type'] = name[:-1]
- option_data.set_index(['Strike', 'Expiry', 'Type', 'Symbol'], inplace=True)
+ option_data = self._process_data(option_data, name[:-1])
if month == CUR_MONTH and year == CUR_YEAR:
setattr(self, name, option_data)
@@ -859,8 +858,7 @@ def get_near_stock_price(self, above_below=2, call=True, put=False,
month=None, year=None, expiry=None):
"""
***Experimental***
- Cuts the data frame opt_df that is passed in to only take
- options that are near the current stock price.
+ Returns a data frame of options that are near the current stock price.
Parameters
----------
@@ -889,7 +887,6 @@ def get_near_stock_price(self, above_below=2, call=True, put=False,
Note: Format of returned data frame is dependent on Yahoo and may change.
"""
- year, month, expiry = self._try_parse_dates(year, month, expiry)
to_ret = Series({'calls': call, 'puts': put})
to_ret = to_ret[to_ret].index
@@ -897,26 +894,31 @@ def get_near_stock_price(self, above_below=2, call=True, put=False,
data = {}
for nam in to_ret:
- if month:
- m1 = _two_char_month(month)
- name = nam + m1 + str(year)[2:]
+ df = self._get_option_data(month, year, expiry, nam)
+ data[nam] = self.chop_data(df, above_below, self.underlying_price)
+
+ return concat([data[nam] for nam in to_ret]).sortlevel()
+
+ def chop_data(self, df, above_below=2, underlying_price=None):
+ """Returns a data frame only options that are near the current stock price."""
+ if not underlying_price:
try:
- df = getattr(self, name)
+ underlying_price = self.underlying_price
except AttributeError:
- meth_name = 'get_{0}_data'.format(nam[:-1])
- df = getattr(self, meth_name)(expiry=expiry)
+ underlying_price = np.nan
- if self.underlying_price:
- start_index = np.where(df.index.get_level_values('Strike')
- > self.underlying_price)[0][0]
+ if underlying_price is not np.nan:
+ start_index = np.where(df.index.get_level_values('Strike')
+ > underlying_price)[0][0]
- get_range = slice(start_index - above_below,
+ get_range = slice(start_index - above_below,
start_index + above_below + 1)
- chop = df[get_range].dropna(how='all')
- data[nam] = chop
+ df = df[get_range].dropna(how='all')
+
+ return df
+
- return concat([data[nam] for nam in to_ret]).sortlevel()
@staticmethod
def _try_parse_dates(year, month, expiry):
@@ -1048,7 +1050,7 @@ def get_forward_data(self, months, call=True, put=False, near=False,
frame = self.get_near_stock_price(call=call, put=put,
above_below=above_below,
month=m2, year=y2)
- frame = self._process_data(frame)
+ frame = self._process_data(frame, name[:-1])
all_data.append(frame)
@@ -1178,7 +1180,7 @@ def _parse_url(self, url):
return root
- def _process_data(self, frame):
+ def _process_data(self, frame, type):
"""
Adds columns for Expiry, IsNonstandard (ie: deliverable is not 100 shares)
and Tag (the tag indicating what is actually deliverable, None if standard).
@@ -1195,5 +1197,7 @@ def _process_data(self, frame):
frame['Underlying_Price'] = self.underlying_price
frame["Quote_Time"] = self.quote_time
frame.rename(columns={'Open Int': 'Open_Int'}, inplace=True)
+ frame['Type'] = type
+ frame.set_index(['Strike', 'Expiry', 'Type', 'Symbol'], inplace=True)
return frame
diff --git a/pandas/io/tests/test_data.py b/pandas/io/tests/test_data.py
index 8b5a81f050ced..15ebeba941ccd 100644
--- a/pandas/io/tests/test_data.py
+++ b/pandas/io/tests/test_data.py
@@ -250,6 +250,9 @@ def setUpClass(cls):
cls.html2 = os.path.join(cls.dirpath, 'yahoo_options2.html')
cls.root1 = cls.aapl._parse_url(cls.html1)
cls.root2 = cls.aapl._parse_url(cls.html2)
+ cls.tables1 = cls.aapl._parse_option_page_from_yahoo(cls.root1)
+ cls.unprocessed_data1 = web._parse_options_data(cls.tables1[cls.aapl._TABLE_LOC['puts']])
+ cls.data1 = cls.aapl._process_data(cls.unprocessed_data1, 'put')
@classmethod
def tearDownClass(cls):
@@ -324,6 +327,13 @@ def test_sample_page_price_quote_time1(self):
self.assertIsInstance(price, (int, float, complex))
self.assertIsInstance(quote_time, (datetime, Timestamp))
+ def test_chop(self):
+ #regression test for #7625
+ self.aapl.chop_data(self.data1, above_below=2, underlying_price=np.nan)
+ chopped = self.aapl.chop_data(self.data1, above_below=2, underlying_price=300)
+ self.assertIsInstance(chopped, DataFrame)
+ self.assertTrue(len(chopped) > 1)
+
@network
def test_sample_page_price_quote_time2(self):
#Tests the weekday quote time format
@@ -334,10 +344,7 @@ def test_sample_page_price_quote_time2(self):
@network
def test_sample_page_chg_float(self):
#Tests that numeric columns with comma's are appropriately dealt with
- tables = self.aapl._parse_option_page_from_yahoo(self.root1)
- data = web._parse_options_data(tables[self.aapl._TABLE_LOC['puts']])
- option_data = self.aapl._process_data(data)
- self.assertEqual(option_data['Chg'].dtype, 'float64')
+ self.assertEqual(self.data1['Chg'].dtype, 'float64')
class TestOptionsWarnings(tm.TestCase):
| Refactor and regression test.
Fixes #7685
| https://api.github.com/repos/pandas-dev/pandas/pulls/7688 | 2014-07-08T05:31:10Z | 2014-07-08T23:33:47Z | 2014-07-08T23:33:47Z | 2014-07-09T04:38:52Z |
PERF: better perf on min/max on indices not containing NaT for DatetimeIndex/PeriodsIndex | diff --git a/pandas/core/base.py b/pandas/core/base.py
index 1ba5061cd7e9a..585db0f49d8bf 100644
--- a/pandas/core/base.py
+++ b/pandas/core/base.py
@@ -6,7 +6,7 @@
from pandas.core import common as com
import pandas.core.nanops as nanops
import pandas.tslib as tslib
-
+from pandas.util.decorators import cache_readonly
class StringMixin(object):
@@ -392,6 +392,11 @@ def _box_values(self, values):
import pandas.lib as lib
return lib.map_infer(values, self._box_func)
+ @cache_readonly
+ def hasnans(self):
+ """ return if I have any nans; enables various perf speedups """
+ return (self.asi8 == tslib.iNaT).any()
+
@property
def asobject(self):
from pandas.core.index import Index
@@ -408,11 +413,18 @@ def min(self, axis=None):
Overridden ndarray.min to return an object
"""
try:
- mask = self.asi8 == tslib.iNaT
- if mask.any():
+ i8 = self.asi8
+
+ # quick check
+ if len(i8) and self.is_monotonic:
+ if i8[0] != tslib.iNaT:
+ return self._box_func(i8[0])
+
+ if self.hasnans:
+ mask = i8 == tslib.iNaT
min_stamp = self[~mask].asi8.min()
else:
- min_stamp = self.asi8.min()
+ min_stamp = i8.min()
return self._box_func(min_stamp)
except ValueError:
return self._na_value
@@ -422,11 +434,18 @@ def max(self, axis=None):
Overridden ndarray.max to return an object
"""
try:
- mask = self.asi8 == tslib.iNaT
- if mask.any():
+ i8 = self.asi8
+
+ # quick check
+ if len(i8) and self.is_monotonic:
+ if i8[-1] != tslib.iNaT:
+ return self._box_func(i8[-1])
+
+ if self.hasnans:
+ mask = i8 == tslib.iNaT
max_stamp = self[~mask].asi8.max()
else:
- max_stamp = self.asi8.max()
+ max_stamp = i8.max()
return self._box_func(max_stamp)
except ValueError:
return self._na_value
diff --git a/pandas/core/index.py b/pandas/core/index.py
index 51ddacd00af08..262305a335d46 100644
--- a/pandas/core/index.py
+++ b/pandas/core/index.py
@@ -2072,7 +2072,7 @@ def __contains__(self, other):
try:
# if other is a sequence this throws a ValueError
- return np.isnan(other) and self._hasnans
+ return np.isnan(other) and self.hasnans
except ValueError:
try:
return len(other) <= 1 and _try_get_item(other) in self
@@ -2109,7 +2109,7 @@ def _isnan(self):
return np.isnan(self.values)
@cache_readonly
- def _hasnans(self):
+ def hasnans(self):
return self._isnan.any()
@cache_readonly
diff --git a/pandas/lib.pyx b/pandas/lib.pyx
index a064e714e7f89..7690cc4819dd5 100644
--- a/pandas/lib.pyx
+++ b/pandas/lib.pyx
@@ -958,7 +958,7 @@ def is_lexsorted(list list_of_arrays):
@cython.boundscheck(False)
@cython.wraparound(False)
def generate_bins_dt64(ndarray[int64_t] values, ndarray[int64_t] binner,
- object closed='left'):
+ object closed='left', bint hasnans=0):
"""
Int64 (datetime64) version of generic python version in groupby.py
"""
@@ -968,9 +968,9 @@ def generate_bins_dt64(ndarray[int64_t] values, ndarray[int64_t] binner,
int64_t l_bin, r_bin, nat_count
bint right_closed = closed == 'right'
- mask = values == iNaT
nat_count = 0
- if mask.any():
+ if hasnans:
+ mask = values == iNaT
nat_count = np.sum(mask)
values = values[~mask]
diff --git a/pandas/tseries/resample.py b/pandas/tseries/resample.py
index 1ee7664f7bb9a..01aff164d8384 100644
--- a/pandas/tseries/resample.py
+++ b/pandas/tseries/resample.py
@@ -174,7 +174,7 @@ def _get_time_bins(self, ax):
binner, bin_edges = self._adjust_bin_edges(binner, ax_values)
# general version, knowing nothing about relative frequencies
- bins = lib.generate_bins_dt64(ax_values, bin_edges, self.closed)
+ bins = lib.generate_bins_dt64(ax_values, bin_edges, self.closed, hasnans=ax.hasnans)
if self.closed == 'right':
labels = binner
@@ -188,7 +188,7 @@ def _get_time_bins(self, ax):
elif not trimmed:
labels = labels[:-1]
- if (ax_values == tslib.iNaT).any():
+ if ax.hasnans:
binner = binner.insert(0, tslib.NaT)
labels = labels.insert(0, tslib.NaT)
| closes #7633
Performance is now close to what it was in 0.14.0.
The key was to stop recomputing whether an index `hasnans` every time it is needed (it is now cached).
Further, `min`/`max` are optimized when the index is monotonic.
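The shape of the fast path, rewritten as a standalone sketch (an illustration of the idea, not the actual pandas code; `fast_min` is a made-up name):

```python
import numpy as np

NAT = np.iinfo(np.int64).min  # pandas represents NaT as the minimum int64

def fast_min(i8, is_monotonic, hasnans):
    # Fast path: on a monotonic index the minimum is the first element,
    # so no scan is needed unless that element is NaT.
    if len(i8) and is_monotonic and i8[0] != NAT:
        return i8[0]
    # Fallback: mask out NaT only when the (cached) hasnans flag is set,
    # avoiding the per-call `== iNaT` comparison the old code always did.
    return (i8[i8 != NAT] if hasnans else i8).min()

vals = np.array([1, 2, 5], dtype='int64')
print(fast_min(vals, True, False))  # -> 1
```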
```
-------------------------------------------------------------------------------
Test name | head[ms] | base[ms] | ratio |
-------------------------------------------------------------------------------
timeseries_timestamp_downsample_mean | 4.5697 | 7.8034 | 0.5856 |
dataframe_resample_min_string | 1.8380 | 2.5294 | 0.7266 |
dataframe_resample_min_numpy | 1.8580 | 2.5463 | 0.7297 |
dataframe_resample_max_numpy | 1.8887 | 2.5803 | 0.7320 |
dataframe_resample_max_string | 1.9130 | 2.5553 | 0.7486 |
dataframe_resample_mean_numpy | 2.6687 | 3.3340 | 0.8004 |
dataframe_resample_mean_string | 2.7773 | 3.3080 | 0.8396 |
timeseries_period_downsample_mean | 12.2183 | 11.6010 | 1.0532 |
Ratio < 1.0 means the target commit is faster then the baseline.
Seed used: 1234
Target [d2d30c7] : PERF: better perf on min/max on indices not containing NaT for DatetimeIndex/PeriodIndex
Base [e060616] : DOC: minor corrections in v0.14.1
```
| https://api.github.com/repos/pandas-dev/pandas/pulls/7684 | 2014-07-07T18:43:34Z | 2014-07-07T19:24:50Z | 2014-07-07T19:24:50Z | 2014-07-07T19:24:50Z |
TST/COMPAT: numpy master compat with timedelta type coercion | diff --git a/pandas/tseries/tests/test_tslib.py b/pandas/tseries/tests/test_tslib.py
index 82f05a0de4588..122bb93a878ee 100644
--- a/pandas/tseries/tests/test_tslib.py
+++ b/pandas/tseries/tests/test_tslib.py
@@ -98,7 +98,7 @@ def test_tz(self):
self.assertEqual(conv.hour, 19)
def test_barely_oob_dts(self):
- one_us = np.timedelta64(1)
+ one_us = np.timedelta64(1).astype('timedelta64[us]')
# By definition we can't go out of bounds in [ns], so we
# convert the datetime64s to [us] so we can go out of bounds
| https://api.github.com/repos/pandas-dev/pandas/pulls/7681 | 2014-07-07T14:15:46Z | 2014-07-07T15:11:58Z | 2014-07-07T15:11:58Z | 2014-07-07T15:11:58Z | |
FIX: to_sql takes the boolean column as text column | diff --git a/doc/source/v0.14.1.txt b/doc/source/v0.14.1.txt
index 3159bbfc34e7d..6292868dae669 100644
--- a/doc/source/v0.14.1.txt
+++ b/doc/source/v0.14.1.txt
@@ -280,3 +280,5 @@ Bug Fixes
- Bug in ``pandas.core.strings.str_contains`` does not properly match in a case insensitive fashion when ``regex=False`` and ``case=False`` (:issue:`7505`)
- Bug in ``expanding_cov``, ``expanding_corr``, ``rolling_cov``, and ``rolling_corr`` for two arguments with mismatched index (:issue:`7512`)
+
+- Bug in ``to_sql`` taking the boolean column as text column (:issue:`7678`)
\ No newline at end of file
diff --git a/pandas/io/sql.py b/pandas/io/sql.py
index 252d807d1dc3c..9a479afd86cad 100644
--- a/pandas/io/sql.py
+++ b/pandas/io/sql.py
@@ -733,7 +733,7 @@ def _sqlalchemy_type(self, arr_or_dtype):
elif com.is_integer_dtype(arr_or_dtype):
# TODO: Refine integer size.
return BigInteger
- elif com.is_bool(arr_or_dtype):
+ elif com.is_bool_dtype(arr_or_dtype):
return Boolean
return Text
diff --git a/pandas/io/tests/test_sql.py b/pandas/io/tests/test_sql.py
index f3ff84120197a..aa69fb964d947 100644
--- a/pandas/io/tests/test_sql.py
+++ b/pandas/io/tests/test_sql.py
@@ -191,6 +191,26 @@ def _load_test1_data(self):
self.test_frame1 = DataFrame(data, columns=columns)
+ def _load_test2_data(self):
+ df = DataFrame(dict(A=[4, 1, 3, 6],
+ B=['asd', 'gsq', 'ylt', 'jkl'],
+ C=[1.1, 3.1, 6.9, 5.3],
+ D=[False, True, True, False],
+ E=['1990-11-22', '1991-10-26', '1993-11-26', '1995-12-12']))
+ df['E'] = to_datetime(df['E'])
+
+ self.test_frame3 = df
+
+ def _load_test3_data(self):
+ columns = ['index', 'A', 'B']
+ data = [(
+ '2000-01-03 00:00:00', 2 ** 31 - 1, -1.987670),
+ ('2000-01-04 00:00:00', -29, -0.0412318367011),
+ ('2000-01-05 00:00:00', 20000, 0.731167677815),
+ ('2000-01-06 00:00:00', -290867, 1.56762092543)]
+
+ self.test_frame3 = DataFrame(data, columns=columns)
+
def _load_raw_sql(self):
self.drop_table('types_test_data')
self._get_exec().execute(SQL_STRINGS['create_test_types'][self.flavor])
@@ -331,6 +351,8 @@ def setUp(self):
self.conn = self.connect()
self._load_iris_data()
self._load_test1_data()
+ self._load_test2_data()
+ self._load_test3_data()
self._load_raw_sql()
def test_read_sql_iris(self):
@@ -391,6 +413,13 @@ def test_to_sql_append(self):
self.assertEqual(
num_rows, num_entries, "not the same number of rows as entries")
+ def test_to_sql_type_mapping(self):
+ sql.to_sql(self.test_frame3, 'test_frame5',
+ self.conn, flavor='sqlite', index=False)
+ result = sql.read_sql("SELECT * FROM test_frame5", self.conn)
+
+ tm.assert_frame_equal(self.test_frame3, result)
+
def test_to_sql_series(self):
s = Series(np.arange(5, dtype='int64'), name='series')
sql.to_sql(s, "test_series", self.conn, flavor='sqlite', index=False)
@@ -651,35 +680,23 @@ class TestSQLLegacyApi(_TestSQLApi):
def connect(self, database=":memory:"):
return sqlite3.connect(database)
- def _load_test2_data(self):
- columns = ['index', 'A', 'B']
- data = [(
- '2000-01-03 00:00:00', 2 ** 31 - 1, -1.987670),
- ('2000-01-04 00:00:00', -29, -0.0412318367011),
- ('2000-01-05 00:00:00', 20000, 0.731167677815),
- ('2000-01-06 00:00:00', -290867, 1.56762092543)]
-
- self.test_frame2 = DataFrame(data, columns=columns)
-
def test_sql_open_close(self):
# Test if the IO in the database still work if the connection closed
# between the writing and reading (as in many real situations).
- self._load_test2_data()
-
with tm.ensure_clean() as name:
conn = self.connect(name)
- sql.to_sql(self.test_frame2, "test_frame2_legacy", conn,
+ sql.to_sql(self.test_frame3, "test_frame3_legacy", conn,
flavor="sqlite", index=False)
conn.close()
conn = self.connect(name)
- result = sql.read_sql_query("SELECT * FROM test_frame2_legacy;",
+ result = sql.read_sql_query("SELECT * FROM test_frame3_legacy;",
conn)
conn.close()
- tm.assert_frame_equal(self.test_frame2, result)
+ tm.assert_frame_equal(self.test_frame3, result)
def test_read_sql_delegate(self):
iris_frame1 = sql.read_sql_query("SELECT * FROM iris", self.conn)
| In the original code, `com.is_bool(arr_or_dtype)` checks whether `arr_or_dtype` is a boolean value rather than a boolean dtype, so boolean columns fall through to `Text`.
A new function, `is_bool_dtype`, is added to `pandas.core.common` to fix this bug.
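The value-vs-dtype distinction can be sketched with plain NumPy (rough stand-ins for the `pandas.core.common` helpers, not their actual implementations):

```python
import numpy as np

def is_bool(obj):
    # roughly what com.is_bool does: is this *value* a boolean scalar?
    return isinstance(obj, (bool, np.bool_))

def is_bool_dtype(arr_or_dtype):
    # the dtype-level check: does this array (or dtype) hold booleans?
    dtype = getattr(arr_or_dtype, 'dtype', arr_or_dtype)
    return np.issubdtype(np.dtype(dtype), np.bool_)

col = np.array([True, False, True])
print(is_bool(col), is_bool_dtype(col))  # -> False True
```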
| https://api.github.com/repos/pandas-dev/pandas/pulls/7678 | 2014-07-07T09:47:10Z | 2014-07-07T17:46:41Z | 2014-07-07T17:46:41Z | 2014-07-07T20:07:52Z |
Create DOC: Xls Visualization | diff --git a/DOC: Xls Visualization b/DOC: Xls Visualization
new file mode 100644
index 0000000000000..228b1230a4227
--- /dev/null
+++ b/DOC: Xls Visualization
@@ -0,0 +1,47 @@
+
+
+import numpy as np
+import pandas as pd
+from string import *
+import matplotlib.pyplot as plt
+get_ipython().magic(u'matplotlib inline')
+
+
+
+data = pd.ExcelFile('/file/path/troopMarch2005.xls')
+#Data Taken from http://www.heritage.org/research/reports/2004/10/global-us-troop-deployment-1950-2003
+
+data.sheet_names
+
+
+df = data.parse('DATA')
+
+#Bar plot given year
+def Barhplot(column):
+ y_pos = np.arange(len(df[df.columns[column]]))
+ plt.figure(figsize=(10,200))
+ plt.barh(y_pos,df[df.columns[column]],xerr=10000,alpha = 0.4,linewidth = 3)
+ plt.yticks(y_pos,df.Country)
+ plt.xlabel('troops deployed 2005',size = 'xx-large')
+
+ plt.show()
+
+#Scatter plot
+def Warplot(x):
+ years = np.arange(1950,2005)
+ plt.figure(figsize=(10,10))
+ data = df.xs(x)
+ plt.scatter(years,data[2:57])
+ plt.plot(years,data[2:57],'r--')
+ plt.title(df.iloc[x,1],size='xx-large').set_backgroundcolor('y')
+ plt.ylabel('Troops deployed',size='xx-large')
+ plt.xlabel('Years',size='xx-large')
+
+ plt.grid(True)
+ plt.show()
+
+
+
+
+
+
| https://api.github.com/repos/pandas-dev/pandas/pulls/7676 | 2014-07-06T18:29:02Z | 2014-08-05T17:07:24Z | null | 2014-08-05T17:07:24Z | |
TST: skip buggy tests on debian (GH6270, GH7664) | diff --git a/pandas/tests/test_series.py b/pandas/tests/test_series.py
index dece7be5fbbdf..fae403ebb653d 100644
--- a/pandas/tests/test_series.py
+++ b/pandas/tests/test_series.py
@@ -1,5 +1,6 @@
# pylint: disable-msg=E1101,W0612
+import sys
from datetime import datetime, timedelta
import operator
import string
@@ -5541,6 +5542,11 @@ def test_isin_with_i8(self):
#------------------------------------------------------------------------------
# TimeSeries-specific
def test_cummethods_bool(self):
+ # GH 6270
+ # looks like a buggy np.maximum.accumulate for numpy 1.6.1, py 3.2
+ if _np_version_under1p7 and sys.version_info[0] == 3 and sys.version_info[1] == 2:
+ raise nose.SkipTest("failure of GH6270 on numpy < 1.7 and py 3.2")
+
def cummin(x):
return np.minimum.accumulate(x)
diff --git a/pandas/tseries/tests/test_plotting.py b/pandas/tseries/tests/test_plotting.py
index f7aaf3e273b40..0bdba3751b6fd 100644
--- a/pandas/tseries/tests/test_plotting.py
+++ b/pandas/tseries/tests/test_plotting.py
@@ -132,7 +132,10 @@ def check_format_of_first_point(ax, expected_string):
first_line = ax.get_lines()[0]
first_x = first_line.get_xdata()[0].ordinal
first_y = first_line.get_ydata()[0]
- self.assertEqual(expected_string, ax.format_coord(first_x, first_y))
+ try:
+ self.assertEqual(expected_string, ax.format_coord(first_x, first_y))
+ except (ValueError):
+ raise nose.SkipTest("skipping test because issue forming test comparison GH7664")
annual = Series(1, index=date_range('2014-01-01', periods=3, freq='A-DEC'))
check_format_of_first_point(annual.plot(), 't = 2014 y = 1.000000')
| closes #6270
closes #7664
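The version-gated skip pattern used in this diff (there via `nose.SkipTest`) can be sketched with the stdlib `unittest` machinery instead; the test name and the exact interpreter condition below are illustrative, not pandas' own suite:

```python
import sys
import unittest

import numpy as np


class TestCumMethodsBool(unittest.TestCase):
    # Same idea as the gate in the diff: skip on a known-bad interpreter
    # combination instead of letting the buggy accumulate call fail.
    @unittest.skipIf(sys.version_info[:2] == (3, 2),
                     "np.maximum.accumulate is buggy here (GH6270)")
    def test_cummax_bool(self):
        values = np.array([True, False, True, True])
        expected = np.array([True, True, True, True])
        self.assertTrue((np.maximum.accumulate(values) == expected).all())
```

On any interpreter where the condition holds, the test is reported as skipped rather than failed, which is exactly what the diff wants for the Debian builds.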
| https://api.github.com/repos/pandas-dev/pandas/pulls/7675 | 2014-07-06T17:43:50Z | 2014-07-07T16:00:22Z | 2014-07-07T16:00:22Z | 2014-07-07T16:01:01Z |
DOC: remove extra spaces from option descriptions | diff --git a/pandas/core/config.py b/pandas/core/config.py
index 9b74ef0d9d3c0..a16b32d5dd185 100644
--- a/pandas/core/config.py
+++ b/pandas/core/config.py
@@ -334,8 +334,8 @@ def __doc__(self):
Parameters
----------
-pat : str/regex
- If specified only options matching `prefix*` will be reset.
+pat : str/regex
+ If specified only options matching `prefix*` will be reset.
Note: partial matches are supported for convenience, but unless you
use the full option name (e.g. x.y.z.option_name), your code may break
in future versions if new options with similar names are introduced.
@@ -368,7 +368,7 @@ class option_context(object):
Context manager to temporarily set options in the `with` statement context.
You need to invoke as ``option_context(pat, val, [(pat, val), ...])``.
-
+
Examples
--------
@@ -628,20 +628,21 @@ def _build_option_description(k):
o = _get_registered_option(k)
d = _get_deprecated_option(k)
- s = u('%s : ') % k
- if o:
- s += u('[default: %s] [currently: %s]') % (o.defval,
- _get_option(k, True))
+ s = u('%s ') % k
if o.doc:
- s += '\n '.join(o.doc.strip().split('\n'))
+ s += '\n'.join(o.doc.strip().split('\n'))
else:
- s += 'No description available.\n'
+ s += 'No description available.'
+
+ if o:
+ s += u('\n [default: %s] [currently: %s]') % (o.defval,
+ _get_option(k, True))
if d:
s += u('\n\t(Deprecated')
s += (u(', use `%s` instead.') % d.rkey if d.rkey else '')
- s += u(')\n')
+ s += u(')')
s += '\n\n'
return s
| There are already 4 spaces in the description strings in config_init.py, so there is no need to add more.
This caused the descriptions to be longer than 79 characters, and thus to wrap in the terminal. Closes #6838.
In addition, the default and current values were moved to the last line of the description, as proposed by @jseabold.
Example output now is:
```
display.max_colwidth : int
The maximum width in characters of a column in the repr of
a pandas data structure. When the column overflows, a "..."
placeholder is embedded in the output.
[default: 50] [currently: 50]
display.max_info_columns : int
max_info_columns is used in DataFrame.info method to decide if
per column information will be printed.
[default: 100] [currently: 100]
```
Previously, in 0.14, this was:
```
display.max_colwidth : [default: 50] [currently: 50]: int
The maximum width in characters of a column in the repr of
a pandas data structure. When the column overflows, a "..."
placeholder is embedded in the output.
display.max_info_columns : [default: 100] [currently: 100]: int
max_info_columns is used in DataFrame.info method to decide if
per column information will be printed.
```
Before 0.14 it was worse (as reported in #6838), but I already improved it a bit some time ago.
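The reworked layout can be inspected directly; `describe_option` normally prints, but it has long accepted an underscored `_print_desc` flag that returns the text instead (shown here purely for inspection — treat the flag as an internal detail):

```python
import pandas as pd

# Render the description block for one option; with this fix the
# "[default: ...] [currently: ...]" line follows the description text
# instead of being crammed onto the header line.
text = pd.describe_option('display.max_colwidth', _print_desc=False)
print(text)
```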
| https://api.github.com/repos/pandas-dev/pandas/pulls/7674 | 2014-07-06T14:29:18Z | 2014-07-07T07:09:55Z | 2014-07-07T07:09:55Z | 2014-07-07T07:10:04Z |
PERF: improve resample perf | diff --git a/pandas/core/base.py b/pandas/core/base.py
index b06b0856d5909..1ba5061cd7e9a 100644
--- a/pandas/core/base.py
+++ b/pandas/core/base.py
@@ -4,6 +4,9 @@
from pandas import compat
import numpy as np
from pandas.core import common as com
+import pandas.core.nanops as nanops
+import pandas.tslib as tslib
+
class StringMixin(object):
@@ -236,13 +239,11 @@ def _wrap_access_object(self, obj):
def max(self):
""" The maximum value of the object """
- import pandas.core.nanops
- return pandas.core.nanops.nanmax(self.values)
+ return nanops.nanmax(self.values)
def min(self):
""" The minimum value of the object """
- import pandas.core.nanops
- return pandas.core.nanops.nanmin(self.values)
+ return nanops.nanmin(self.values)
def value_counts(self, normalize=False, sort=True, ascending=False,
bins=None, dropna=True):
@@ -406,31 +407,29 @@ def min(self, axis=None):
"""
Overridden ndarray.min to return an object
"""
- import pandas.tslib as tslib
- mask = self.asi8 == tslib.iNaT
- masked = self[~mask]
- if len(masked) == 0:
- return self._na_value
- elif self.is_monotonic:
- return masked[0]
- else:
- min_stamp = masked.asi8.min()
+ try:
+ mask = self.asi8 == tslib.iNaT
+ if mask.any():
+ min_stamp = self[~mask].asi8.min()
+ else:
+ min_stamp = self.asi8.min()
return self._box_func(min_stamp)
+ except ValueError:
+ return self._na_value
def max(self, axis=None):
"""
Overridden ndarray.max to return an object
"""
- import pandas.tslib as tslib
- mask = self.asi8 == tslib.iNaT
- masked = self[~mask]
- if len(masked) == 0:
- return self._na_value
- elif self.is_monotonic:
- return masked[-1]
- else:
- max_stamp = masked.asi8.max()
+ try:
+ mask = self.asi8 == tslib.iNaT
+ if mask.any():
+ max_stamp = self[~mask].asi8.max()
+ else:
+ max_stamp = self.asi8.max()
return self._box_func(max_stamp)
+ except ValueError:
+ return self._na_value
@property
def _formatter_func(self):
diff --git a/pandas/lib.pyx b/pandas/lib.pyx
index 89e681e6f1c90..a064e714e7f89 100644
--- a/pandas/lib.pyx
+++ b/pandas/lib.pyx
@@ -965,12 +965,14 @@ def generate_bins_dt64(ndarray[int64_t] values, ndarray[int64_t] binner,
cdef:
Py_ssize_t lenidx, lenbin, i, j, bc, vc
ndarray[int64_t] bins
- int64_t l_bin, r_bin
+ int64_t l_bin, r_bin, nat_count
bint right_closed = closed == 'right'
mask = values == iNaT
- nat_count = values[mask].size
- values = values[~mask]
+ nat_count = 0
+ if mask.any():
+ nat_count = np.sum(mask)
+ values = values[~mask]
lenidx = len(values)
lenbin = len(binner)
@@ -991,17 +993,22 @@ def generate_bins_dt64(ndarray[int64_t] values, ndarray[int64_t] binner,
bc = 0 # bin count
# linear scan
- for i in range(0, lenbin - 1):
- l_bin = binner[i]
- r_bin = binner[i+1]
-
- # count values in current bin, advance to next bin
- while j < lenidx and (values[j] < r_bin or
- (right_closed and values[j] == r_bin)):
- j += 1
-
- bins[bc] = j
- bc += 1
+ if right_closed:
+ for i in range(0, lenbin - 1):
+ r_bin = binner[i+1]
+ # count values in current bin, advance to next bin
+ while j < lenidx and values[j] <= r_bin:
+ j += 1
+ bins[bc] = j
+ bc += 1
+ else:
+ for i in range(0, lenbin - 1):
+ r_bin = binner[i+1]
+ # count values in current bin, advance to next bin
+ while j < lenidx and values[j] < r_bin:
+ j += 1
+ bins[bc] = j
+ bc += 1
if nat_count > 0:
# shift bins by the number of NaT
diff --git a/pandas/src/generate_code.py b/pandas/src/generate_code.py
index 4098ac06c2da2..842be5a1645bf 100644
--- a/pandas/src/generate_code.py
+++ b/pandas/src/generate_code.py
@@ -1584,7 +1584,7 @@ def group_mean_bin_%(name)s(ndarray[%(dest_type2)s, ndim=2] out,
for i in range(ngroups):
for j in range(K):
count = nobs[i, j]
- if nobs[i, j] == 0:
+ if count == 0:
out[i, j] = nan
else:
out[i, j] = sumx[i, j] / count
diff --git a/pandas/tseries/offsets.py b/pandas/tseries/offsets.py
index bcb68ded6fda7..d1fe287bf33be 100644
--- a/pandas/tseries/offsets.py
+++ b/pandas/tseries/offsets.py
@@ -27,6 +27,8 @@
# convert to/from datetime/timestamp to allow invalid Timestamp ranges to pass thru
def as_timestamp(obj):
try:
+ if isinstance(obj, Timestamp):
+ return obj
return Timestamp(obj)
except (OutOfBoundsDatetime):
pass
@@ -2014,9 +2016,21 @@ def delta(self):
def nanos(self):
return _delta_to_nanoseconds(self.delta)
- @apply_wraps
def apply(self, other):
- if isinstance(other, (datetime, timedelta)):
+ # Timestamp can handle tz and nano sec, thus no need to use apply_wraps
+ if type(other) == date:
+ other = datetime(other.year, other.month, other.day)
+ elif isinstance(other, (np.datetime64, datetime)):
+ other = as_timestamp(other)
+
+ if isinstance(other, datetime):
+ result = other + self.delta
+ if self.normalize:
+ # normalize_date returns normal datetime
+ result = tslib.normalize_date(result)
+ return as_timestamp(result)
+
+ elif isinstance(other, timedelta):
return other + self.delta
elif isinstance(other, type(self)):
return type(self)(self.n + other.n)
diff --git a/pandas/tseries/resample.py b/pandas/tseries/resample.py
index 059a6bfd06719..1ee7664f7bb9a 100644
--- a/pandas/tseries/resample.py
+++ b/pandas/tseries/resample.py
@@ -152,7 +152,8 @@ def _get_time_bins(self, ax):
binner = labels = DatetimeIndex(data=[], freq=self.freq, name=ax.name)
return binner, [], labels
- first, last = _get_range_edges(ax, self.freq, closed=self.closed,
+ first, last = ax.min(), ax.max()
+ first, last = _get_range_edges(first, last, self.freq, closed=self.closed,
base=self.base)
tz = ax.tz
binner = labels = DatetimeIndex(freq=self.freq,
@@ -163,7 +164,7 @@ def _get_time_bins(self, ax):
# a little hack
trimmed = False
- if (len(binner) > 2 and binner[-2] == ax.max() and
+ if (len(binner) > 2 and binner[-2] == last and
self.closed == 'right'):
binner = binner[:-1]
@@ -353,11 +354,10 @@ def _take_new_index(obj, indexer, new_index, axis=0):
raise NotImplementedError
-def _get_range_edges(axis, offset, closed='left', base=0):
+def _get_range_edges(first, last, offset, closed='left', base=0):
if isinstance(offset, compat.string_types):
offset = to_offset(offset)
- first, last = axis.min(), axis.max()
if isinstance(offset, Tick):
day_nanos = _delta_to_nanoseconds(timedelta(1))
# #1165
diff --git a/pandas/tslib.pyx b/pandas/tslib.pyx
index 090b49bde68a6..70b6b308b6b37 100644
--- a/pandas/tslib.pyx
+++ b/pandas/tslib.pyx
@@ -3134,14 +3134,21 @@ def period_asfreq_arr(ndarray[int64_t] arr, int freq1, int freq2, bint end):
else:
relation = START
- for i in range(n):
- if arr[i] == iNaT:
- result[i] = iNaT
- continue
- val = func(arr[i], relation, &finfo)
- if val == INT32_MIN:
- raise ValueError("Unable to convert to desired frequency.")
- result[i] = val
+ mask = arr == iNaT
+ if mask.any(): # NaT process
+ for i in range(n):
+ val = arr[i]
+ if val != iNaT:
+ val = func(val, relation, &finfo)
+ if val == INT32_MIN:
+ raise ValueError("Unable to convert to desired frequency.")
+ result[i] = val
+ else:
+ for i in range(n):
+ val = func(arr[i], relation, &finfo)
+ if val == INT32_MIN:
+ raise ValueError("Unable to convert to desired frequency.")
+ result[i] = val
return result
| Related to #7633. This improves on the result attached to #7633, but it is still more than 1.2 times slower than 0.14.0.
Modified:
- Avoid re-importing modules on every call in `Index.max/min`
- Avoid the duplicated `max` call between `resample/_get_time_bins` and `_get_range_edges`.
- Optimize `lib/generate_bins_dt64` and `tslib/period_asfreq_arr`.
Remaining bottlenecks are `NaT` masking performed in `lib/generate_bins_dt64` and `tslib/period_asfreq_arr`. Is there any better way to do that?
```
-------------------------------------------------------------------------------
Test name | head[ms] | base[ms] | ratio |
-------------------------------------------------------------------------------
dataframe_resample_mean_numpy | 4.9963 | 3.7940 | 1.3169 |
dataframe_resample_mean_string | 5.0424 | 3.8280 | 1.3172 |
dataframe_resample_max_numpy | 4.1796 | 3.0069 | 1.3900 |
dataframe_resample_min_numpy | 4.2127 | 2.9987 | 1.4049 |
dataframe_resample_min_string | 4.1687 | 2.9490 | 1.4136 |
dataframe_resample_max_string | 4.3443 | 2.9283 | 1.4835 |
timeseries_timestamp_downsample_mean | 16.1959 | 8.6366 | 1.8753 |
timeseries_period_downsample_mean | 47.6096 | 19.7030 | 2.4164 |
-------------------------------------------------------------------------------
Ratio < 1.0 means the target commit is faster then the baseline.
Seed used: 1234
Target [54fb875] : PERF: Improve index.min and max perf
Base [da0f7ae] : RLS: 0.14.0 final
```
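The shape of the benchmarked operation (e.g. `timeseries_timestamp_downsample_mean` above) can be reproduced with a minimal timing sketch; sizes and the repeat count here are illustrative, not the official vbench cases:

```python
import timeit

import numpy as np
import pandas as pd

# Three days of minute-frequency data downsampled to daily means.
rng = pd.date_range('2000-01-01', periods=3 * 24 * 60, freq='min')
ts = pd.Series(np.arange(len(rng), dtype='float64'), index=rng)

daily = ts.resample('D').mean()
print(daily)  # three daily bins

elapsed = timeit.timeit(lambda: ts.resample('D').mean(), number=20)
print('20 runs: %.4f s' % elapsed)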
| https://api.github.com/repos/pandas-dev/pandas/pulls/7673 | 2014-07-05T23:49:20Z | 2014-07-07T13:12:20Z | 2014-07-07T13:12:20Z | 2014-07-09T12:37:40Z |
Add some documentation on gotchas related to pytz updates #7620 | diff --git a/doc/source/io.rst b/doc/source/io.rst
index bc58b04de4473..7d16d9309021d 100644
--- a/doc/source/io.rst
+++ b/doc/source/io.rst
@@ -2911,6 +2911,8 @@ Furthermore ``ptrepack in.h5 out.h5`` will *repack* the file to allow
 you to reuse previously deleted space. Alternatively, one can simply
remove the file and write again, or use the ``copy`` method.
+.. _io.hdf5-notes:
+
Notes & Caveats
~~~~~~~~~~~~~~~
@@ -2933,6 +2935,13 @@ Notes & Caveats
``tables``. The sizes of a string based indexing column
(e.g. *columns* or *minor_axis*) are determined as the maximum size
of the elements in that axis or by passing the parameter
+ - Be aware that timezones (e.g., ``pytz.timezone('US/Eastern')``)
+ are not necessarily equal across timezone versions. So if data is
+ localized to a specific timezone in the HDFStore using one version
+ of a timezone library and that data is updated with another version, the data
+ will be converted to UTC since these timezones are not considered
+ equal. Either use the same version of timezone library or use ``tz_convert`` with
+ the updated timezone definition.
.. warning::
diff --git a/doc/source/timeseries.rst b/doc/source/timeseries.rst
index 795bbca673f77..a75e943d7cec0 100644
--- a/doc/source/timeseries.rst
+++ b/doc/source/timeseries.rst
@@ -1342,7 +1342,14 @@ tz-aware data to another time zone:
Be wary of conversions between libraries. For some zones ``pytz`` and ``dateutil`` have different
definitions of the zone. This is more of a problem for unusual timezones than for
- 'standard' zones like ``US/Eastern``.
+ 'standard' zones like ``US/Eastern``.
+
+.. warning::
+
+ Be aware that a timezone definition across versions of timezone libraries may not
+ be considered equal. This may cause problems when working with stored data that
+ is localized using one version and operated on with a different version.
+ See :ref:`here<io.hdf5-notes>` for how to handle such a situation.
Under the hood, all timestamps are stored in UTC. Scalar values from a
``DatetimeIndex`` with a time zone will have their fields (day, hour, minute)
| closes #7620
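The workaround the added docs point to — keeping stored data in UTC and re-attaching the wall-clock zone with whatever timezone database the current environment ships — can be sketched as follows (the zone names and dates are just examples):

```python
import pandas as pd

# Data stored as UTC; convert to a local zone using the *current*
# environment's timezone definitions rather than trusting a pickled
# tzinfo from an older pytz release.
utc = pd.Series(range(3),
                index=pd.date_range('2014-07-01', periods=3,
                                    freq='D', tz='UTC'))
eastern = utc.tz_convert('US/Eastern')
print(eastern.index[0])  # midnight UTC rendered as 20:00 EDT the day before
```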
| https://api.github.com/repos/pandas-dev/pandas/pulls/7672 | 2014-07-05T21:27:39Z | 2014-07-06T14:45:15Z | 2014-07-06T14:45:14Z | 2014-07-06T19:02:51Z |
DOC: remove mention of TimeSeries in docs | diff --git a/doc/source/dsintro.rst b/doc/source/dsintro.rst
index adcf2fca9b4c5..9221f2685d79b 100644
--- a/doc/source/dsintro.rst
+++ b/doc/source/dsintro.rst
@@ -577,10 +577,8 @@ row-wise. For example:
df - df.iloc[0]
-In the special case of working with time series data, if the Series is a
-TimeSeries (which it will be automatically if the index contains datetime
-objects), and the DataFrame index also contains dates, the broadcasting will be
-column-wise:
+In the special case of working with time series data, and the DataFrame index
+also contains dates, the broadcasting will be column-wise:
.. ipython:: python
:okwarning:
diff --git a/doc/source/faq.rst b/doc/source/faq.rst
index 20762e3fc039f..1fc8488e92fde 100644
--- a/doc/source/faq.rst
+++ b/doc/source/faq.rst
@@ -207,9 +207,9 @@ properties. Here are the pandas equivalents:
Frequency conversion
~~~~~~~~~~~~~~~~~~~~
-Frequency conversion is implemented using the ``resample`` method on TimeSeries
-and DataFrame objects (multiple time series). ``resample`` also works on panels
-(3D). Here is some code that resamples daily data to monthly:
+Frequency conversion is implemented using the ``resample`` method on Series
+and DataFrame objects with a DatetimeIndex or PeriodIndex. ``resample`` also
+works on panels (3D). Here is some code that resamples daily data to monthly:
.. ipython:: python
@@ -369,4 +369,3 @@ just a thin layer around the ``QTableView``.
mw = MainWidget()
mw.show()
app.exec_()
-
diff --git a/doc/source/overview.rst b/doc/source/overview.rst
index 49a788def2854..b1addddc2121d 100644
--- a/doc/source/overview.rst
+++ b/doc/source/overview.rst
@@ -9,7 +9,7 @@ Package overview
:mod:`pandas` consists of the following things
* A set of labeled array data structures, the primary of which are
- Series/TimeSeries and DataFrame
+ Series and DataFrame
* Index objects enabling both simple axis indexing and multi-level /
hierarchical axis indexing
* An integrated group by engine for aggregating and transforming data sets
@@ -32,7 +32,6 @@ Data structures at a glance
:widths: 15, 20, 50
1, Series, "1D labeled homogeneously-typed array"
- 1, TimeSeries, "Series with index containing datetimes"
2, DataFrame, "General 2D labeled, size-mutable tabular structure with
potentially heterogeneously-typed columns"
3, Panel, "General 3D labeled, also size-mutable array"
diff --git a/doc/source/timeseries.rst b/doc/source/timeseries.rst
index b69b523d9c908..ce1035e91391a 100644
--- a/doc/source/timeseries.rst
+++ b/doc/source/timeseries.rst
@@ -1008,7 +1008,7 @@ Time series-related instance methods
Shifting / lagging
~~~~~~~~~~~~~~~~~~
-One may want to *shift* or *lag* the values in a TimeSeries back and forward in
+One may want to *shift* or *lag* the values in a time series back and forward in
time. The method for this is ``shift``, which is available on all of the pandas
objects.
@@ -1026,7 +1026,7 @@ The shift method accepts an ``freq`` argument which can accept a
ts.shift(5, freq='BM')
Rather than changing the alignment of the data and the index, ``DataFrame`` and
-``TimeSeries`` objects also have a ``tshift`` convenience method that changes
+``Series`` objects also have a ``tshift`` convenience method that changes
all the dates in the index by a specified number of offsets:
.. ipython:: python
@@ -1569,7 +1569,7 @@ time zones using ``tz_convert``:
rng_berlin[5]
rng_eastern[5].tz_convert('Europe/Berlin')
-Localization of Timestamps functions just like DatetimeIndex and TimeSeries:
+Localization of Timestamps functions just like DatetimeIndex and Series:
.. ipython:: python
@@ -1577,8 +1577,8 @@ Localization of Timestamps functions just like DatetimeIndex and TimeSeries:
rng[5].tz_localize('Asia/Shanghai')
-Operations between TimeSeries in different time zones will yield UTC
-TimeSeries, aligning the data on the UTC timestamps:
+Operations between Series in different time zones will yield UTC
+Series, aligning the data on the UTC timestamps:
.. ipython:: python
| As a Series with a DatetimeIndex is no longer presented as a `TimeSeries`, I think we should also no longer mention it in the docs as a 'separate object', so I removed the last few mentions.
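The daily-to-monthly resampling the rewritten faq text describes works on any Series with a `DatetimeIndex` — no separate `TimeSeries` type involved. A quick sketch (data and frequency alias chosen here for illustration):

```python
import numpy as np
import pandas as pd

# 90 daily observations: Jan (31) + Feb (28) + Mar (31) of 2014
ts = pd.Series(np.arange(90.),
               index=pd.date_range('2014-01-01', periods=90))
monthly = ts.resample('MS').mean()  # 'MS' = month-start frequency
print(monthly)  # three monthly means
```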
| https://api.github.com/repos/pandas-dev/pandas/pulls/7671 | 2014-07-05T14:17:52Z | 2015-05-15T08:10:13Z | 2015-05-15T08:10:13Z | 2015-06-02T19:26:59Z |
(WIP) BUG/CLN: Better timeseries plotting / refactoring tsplot | diff --git a/pandas/tools/plotting.py b/pandas/tools/plotting.py
index 1433ce65b3021..ee27c81a27acb 100644
--- a/pandas/tools/plotting.py
+++ b/pandas/tools/plotting.py
@@ -1665,21 +1665,35 @@ def __init__(self, data, **kwargs):
def _is_ts_plot(self):
# this is slightly deceptive
- return not self.x_compat and self.use_index and self._use_dynamic_x()
-
- def _use_dynamic_x(self):
- from pandas.tseries.plotting import _use_dynamic_x
- return _use_dynamic_x(self._get_ax(0), self.data)
+ from pandas.tseries.plotting import _use_dynamic_x, _get_freq
+ ax = self._get_ax(0)
+ freq, ax_freq = _get_freq(ax, self.data)
+ dynamic_x = _use_dynamic_x(ax, self.data)
+ return (not self.x_compat and self.use_index and
+ dynamic_x and freq is not None)
def _make_plot(self):
if self._is_ts_plot():
+ print('tsplot-path!!')
from pandas.tseries.plotting import _maybe_convert_index
data = _maybe_convert_index(self._get_ax(0), self.data)
+ from pandas.tseries.plotting import _maybe_resample
+ for ax in self.axes:
+ # resample data and replot if required
+ kwds = self.kwds.copy()
+ data = _maybe_resample(data, ax, kwds)
+
x = data.index # dummy, not used
plotf = self._ts_plot
it = self._iter_data(data=data, keep_index=True)
else:
+ print('xcompat-path!!')
+ from pandas.tseries.plotting import _replot_x_compat
+ for ax in self.axes:
+ # if ax holds _plot_data, replot them on the x_compat scale
+ _replot_x_compat(ax)
+
x = self._get_xticks(convert_period=True)
plotf = self._plot
it = self._iter_data()
@@ -1723,24 +1737,15 @@ def _plot(cls, ax, x, y, style=None, column_num=None,
@classmethod
def _ts_plot(cls, ax, x, data, style=None, **kwds):
- from pandas.tseries.plotting import (_maybe_resample,
- _decorate_axes,
- format_dateaxis)
+ from pandas.tseries.plotting import _maybe_resample, format_dateaxis
# accept x to be consistent with normal plot func,
# x is not passed to tsplot as it uses data.index as x coordinate
# column_num must be in kwds for stacking purpose
- freq, data = _maybe_resample(data, ax, kwds)
- # Set ax with freq info
- _decorate_axes(ax, freq, kwds)
- # digging deeper
- if hasattr(ax, 'left_ax'):
- _decorate_axes(ax.left_ax, freq, kwds)
- if hasattr(ax, 'right_ax'):
- _decorate_axes(ax.right_ax, freq, kwds)
+ data = _maybe_resample(data, ax, kwds)
ax._plot_data.append((data, cls._kind, kwds))
-
lines = cls._plot(ax, data.index, data.values, style=style, **kwds)
+
# set date formatter, locators and rescale limits
format_dateaxis(ax, ax.freq)
return lines
@@ -1790,14 +1795,9 @@ def _update_stacker(cls, ax, stacking_id, values):
ax._stacker_neg_prior[stacking_id] += values
def _post_plot_logic(self, ax, data):
- condition = (not self._use_dynamic_x() and
- data.index.is_all_dates and
- not self.subplots or
- (self.subplots and self.sharex))
-
index_name = self._get_index_name()
- if condition:
+ if not self._is_ts_plot():
# irregular TS rotated 30 deg. by default
# probably a better place to check / set this.
if not self._rot_set:
diff --git a/pandas/tseries/plotting.py b/pandas/tseries/plotting.py
index fe64af67af0ed..072c6aa3a4362 100644
--- a/pandas/tseries/plotting.py
+++ b/pandas/tseries/plotting.py
@@ -9,6 +9,8 @@
from matplotlib import pylab
from pandas.tseries.period import Period
+import numpy as np
+
from pandas.tseries.offsets import DateOffset
import pandas.tseries.frequencies as frequencies
from pandas.tseries.index import DatetimeIndex
@@ -41,10 +43,7 @@ def tsplot(series, plotf, ax=None, **kwargs):
import matplotlib.pyplot as plt
ax = plt.gca()
- freq, series = _maybe_resample(series, ax, kwargs)
-
- # Set ax with freq info
- _decorate_axes(ax, freq, kwargs)
+ series = _maybe_resample(series, ax, kwargs)
ax._plot_data.append((series, plotf, kwargs))
lines = plotf(ax, series.index._mpl_repr(), series.values, **kwargs)
@@ -52,7 +51,6 @@ def tsplot(series, plotf, ax=None, **kwargs):
format_dateaxis(ax, ax.freq)
return lines
-
def _maybe_resample(series, ax, kwargs):
# resample against axes freq if necessary
freq, ax_freq = _get_freq(ax, series)
@@ -75,11 +73,20 @@ def _maybe_resample(series, ax, kwargs):
series = getattr(series.resample(ax_freq), how)().dropna()
freq = ax_freq
elif frequencies.is_subperiod(freq, ax_freq) or _is_sub(freq, ax_freq):
- _upsample_others(ax, freq, kwargs)
+ _upsample_others(ax, freq)
ax_freq = freq
else: # pragma: no cover
raise ValueError('Incompatible frequency conversion')
- return freq, series
+
+ # Set ax with freq info
+ _decorate_axes(ax, freq)
+ # digging deeper
+ if hasattr(ax, 'left_ax'):
+ _decorate_axes(ax.left_ax, freq)
+ elif hasattr(ax, 'right_ax'):
+ _decorate_axes(ax.right_ax, freq)
+
+ return series
def _is_sub(f1, f2):
@@ -92,61 +99,84 @@ def _is_sup(f1, f2):
(f2.startswith('W') and frequencies.is_superperiod(f1, 'D')))
-def _upsample_others(ax, freq, kwargs):
- legend = ax.get_legend()
- lines, labels = _replot_ax(ax, freq, kwargs)
- _replot_ax(ax, freq, kwargs)
+def _get_plot_func(plotf):
+ """ get actual function when plotf is specified with str """
+ # for tsplot
+ if isinstance(plotf, compat.string_types):
+ from pandas.tools.plotting import _plot_klass
+ plotf = _plot_klass[plotf]._plot
+ return plotf
+
+
+def _upsample_others(ax, freq):
- other_ax = None
+ def _replot(ax):
+ data = getattr(ax, '_plot_data', None)
+ if data is None:
+ return
+
+ # preserve legend
+ leg = ax.get_legend()
+ handles, labels = ax.get_legend_handles_labels()
+
+ ax._plot_data = []
+ ax.clear()
+ _decorate_axes(ax, freq)
+
+ for series, plotf, kwds in data:
+ series = series.copy()
+ idx = series.index.asfreq(freq, how='s')
+ series.index = idx
+ ax._plot_data.append((series, plotf, kwds))
+
+ plotf = _get_plot_func(plotf)
+ plotf(ax, series.index._mpl_repr(), series.values, **kwds)
+
+
+ if leg is not None:
+ ax.legend(handles, labels, title=leg.get_title().get_text())
+
+ _replot(ax)
if hasattr(ax, 'left_ax'):
- other_ax = ax.left_ax
- if hasattr(ax, 'right_ax'):
- other_ax = ax.right_ax
+ _replot(ax.left_ax)
+ elif hasattr(ax, 'right_ax'):
+ _replot(ax.right_ax)
- if other_ax is not None:
- rlines, rlabels = _replot_ax(other_ax, freq, kwargs)
- lines.extend(rlines)
- labels.extend(rlabels)
- if (legend is not None and kwargs.get('legend', True) and
- len(lines) > 0):
- title = legend.get_title().get_text()
- if title == 'None':
- title = None
- ax.legend(lines, labels, loc='best', title=title)
+def _replot_x_compat(ax):
+ def _replot(ax):
+ data = getattr(ax, '_plot_data', None)
+ if data is None:
+ return
-def _replot_ax(ax, freq, kwargs):
- data = getattr(ax, '_plot_data', None)
+ # preserve legend
+ leg = ax.get_legend()
+ handles, labels = ax.get_legend_handles_labels()
- # clear current axes and data
- ax._plot_data = []
- ax.clear()
+ ax._plot_data = None
+ ax.clear()
- _decorate_axes(ax, freq, kwargs)
+ _decorate_axes(ax, None)
- lines = []
- labels = []
- if data is not None:
for series, plotf, kwds in data:
- series = series.copy()
- idx = series.index.asfreq(freq, how='S')
+ idx = series.index.to_timestamp(how='s')
series.index = idx
- ax._plot_data.append((series, plotf, kwds))
- # for tsplot
- if isinstance(plotf, compat.string_types):
- from pandas.tools.plotting import _plot_klass
- plotf = _plot_klass[plotf]._plot
+ plotf = _get_plot_func(plotf)
+ plotf(ax, series.index._mpl_repr(), series, **kwds)
- lines.append(plotf(ax, series.index._mpl_repr(),
- series.values, **kwds)[0])
- labels.append(pprint_thing(series.name))
+ if leg is not None:
+ ax.legend(handles, labels, title=leg.get_title().get_text())
- return lines, labels
+ _replot(ax)
+ if hasattr(ax, 'left_ax'):
+ _replot(ax.left_ax)
+ elif hasattr(ax, 'right_ax'):
+ _replot(ax.right_ax)
-def _decorate_axes(ax, freq, kwargs):
+def _decorate_axes(ax, freq):
"""Initialize axes for time-series plotting"""
if not hasattr(ax, '_plot_data'):
ax._plot_data = []
@@ -154,19 +184,27 @@ def _decorate_axes(ax, freq, kwargs):
ax.freq = freq
xaxis = ax.get_xaxis()
xaxis.freq = freq
- if not hasattr(ax, 'legendlabels'):
- ax.legendlabels = [kwargs.get('label', None)]
- else:
- ax.legendlabels.append(kwargs.get('label', None))
ax.view_interval = None
ax.date_axis_info = None
-def _get_freq(ax, series):
+def _get_index_freq(data):
+ freq = getattr(data.index, 'freq', None)
+ if freq is None:
+ freq = getattr(data.index, 'inferred_freq', None)
+ if freq == 'B':
+ weekdays = np.unique(data.index.dayofweek)
+ if (5 in weekdays) or (6 in weekdays):
+ freq = None
+ return freq
+
+
+def _get_freq(ax, data):
# get frequency from data
- freq = getattr(series.index, 'freq', None)
+ freq = getattr(data.index, 'freq', None)
+
if freq is None:
- freq = getattr(series.index, 'inferred_freq', None)
+ freq = getattr(data.index, 'inferred_freq', None)
ax_freq = getattr(ax, 'freq', None)
if ax_freq is None:
@@ -175,17 +213,17 @@ def _get_freq(ax, series):
elif hasattr(ax, 'right_ax'):
ax_freq = getattr(ax.right_ax, 'freq', None)
- # use axes freq if no data freq
- if freq is None:
- freq = ax_freq
+ if freq is not None:
+ # get the period frequency
+ if isinstance(freq, DateOffset):
+ freq = freq.rule_code
+ else:
+ freq = frequencies.get_base_alias(freq)
- # get the period frequency
- if isinstance(freq, DateOffset):
- freq = freq.rule_code
- else:
- freq = frequencies.get_base_alias(freq)
+ if freq is None:
+ raise ValueError('Could not get frequency alias for plotting')
+ freq = frequencies.get_period_alias(freq)
- freq = frequencies.get_period_alias(freq)
return freq, ax_freq
diff --git a/pandas/tseries/tests/test_plotting.py b/pandas/tseries/tests/test_plotting.py
index 4a06a5500094a..baadabd49aea4 100644
--- a/pandas/tseries/tests/test_plotting.py
+++ b/pandas/tseries/tests/test_plotting.py
@@ -90,19 +90,16 @@ def test_nonnumeric_exclude(self):
@slow
def test_tsplot(self):
- from pandas.tseries.plotting import tsplot
import matplotlib.pyplot as plt
ax = plt.gca()
ts = tm.makeTimeSeries()
- f = lambda *args, **kwds: tsplot(s, plt.Axes.plot, *args, **kwds)
-
for s in self.period_ser:
- _check_plot_works(f, s.index.freq, ax=ax, series=s)
+ _check_plot_works(s.plot, ax=ax)
for s in self.datetime_ser:
- _check_plot_works(f, s.index.freq.rule_code, ax=ax, series=s)
+ _check_plot_works(s.plot, ax=ax)
for s in self.period_ser:
_check_plot_works(s.plot, ax=ax)
@@ -640,27 +637,35 @@ def test_secondary_bar_frame(self):
self.assertEqual(axes[2].get_yaxis().get_ticks_position(), 'right')
def test_mixed_freq_regular_first(self):
- import matplotlib.pyplot as plt # noqa
s1 = tm.makeTimeSeries()
s2 = s1[[0, 5, 10, 11, 12, 13, 14, 15]]
+ self.assertIsNone(s2.index.freq)
- # it works!
- s1.plot()
-
- ax2 = s2.plot(style='g')
- lines = ax2.get_lines()
- idx1 = PeriodIndex(lines[0].get_xdata())
- idx2 = PeriodIndex(lines[1].get_xdata())
+ # the result has PeriodIndex axis
+ ax1 = s1.plot()
+ lines1 = ax1.get_lines()
+ idx1 = PeriodIndex(lines1[0].get_xdata())
self.assertTrue(idx1.equals(s1.index.to_period('B')))
- self.assertTrue(idx2.equals(s2.index.to_period('B')))
- left, right = ax2.get_xlim()
+ left, right = ax1.get_xlim()
pidx = s1.index.to_period()
self.assertEqual(left, pidx[0].ordinal)
self.assertEqual(right, pidx[-1].ordinal)
+ # because s2 doesn't have freq, the result has x_compat axis
+ ax2 = s2.plot(style='g')
+ lines2 = ax2.get_lines()
+
+ exp = s1.index.to_pydatetime()
+ tm.assert_numpy_array_equal(lines2[0].get_xdata(), exp)
+ tm.assert_numpy_array_equal(lines2[1].get_xdata(),
+ s2.index.to_pydatetime())
+ left, right = ax2.get_xlim()
+ from matplotlib.dates import date2num
+ self.assertEqual(left, date2num(exp[0]))
+ self.assertEqual(right, date2num(exp[-1]))
+
@slow
def test_mixed_freq_irregular_first(self):
- import matplotlib.pyplot as plt # noqa
s1 = tm.makeTimeSeries()
s2 = s1[[0, 5, 10, 11, 12, 13, 14, 15]]
s2.plot(style='g')
@@ -672,23 +677,34 @@ def test_mixed_freq_irregular_first(self):
x2 = lines[1].get_xdata()
tm.assert_numpy_array_equal(x2, s1.index.asobject.values)
- def test_mixed_freq_regular_first_df(self):
+ def test_aaamixed_freq_regular_first_df(self):
# GH 9852
- import matplotlib.pyplot as plt # noqa
s1 = tm.makeTimeSeries().to_frame()
s2 = s1.iloc[[0, 5, 10, 11, 12, 13, 14, 15], :]
- ax = s1.plot()
- ax2 = s2.plot(style='g', ax=ax)
- lines = ax2.get_lines()
- idx1 = PeriodIndex(lines[0].get_xdata())
- idx2 = PeriodIndex(lines[1].get_xdata())
+
+ # the result has PeriodIndex axis
+ ax1 = s1.plot()
+ lines1 = ax1.get_lines()
+ idx1 = PeriodIndex(lines1[0].get_xdata())
self.assertTrue(idx1.equals(s1.index.to_period('B')))
- self.assertTrue(idx2.equals(s2.index.to_period('B')))
- left, right = ax2.get_xlim()
+ left, right = ax1.get_xlim()
pidx = s1.index.to_period()
self.assertEqual(left, pidx[0].ordinal)
self.assertEqual(right, pidx[-1].ordinal)
+ # because s2 doesn't have freq, the result has x_compat axis
+ ax2 = s2.plot(style='g', ax=ax1)
+ lines2 = ax2.get_lines()
+
+ exp = s1.index.to_pydatetime()
+ tm.assert_numpy_array_equal(lines2[0].get_xdata(), exp)
+ tm.assert_numpy_array_equal(lines2[1].get_xdata(),
+ s2.index.to_pydatetime())
+ left, right = ax2.get_xlim()
+ from matplotlib.dates import date2num
+ self.assertEqual(left, date2num(exp[0]))
+ self.assertEqual(right, date2num(exp[-1]))
+
@slow
def test_mixed_freq_irregular_first_df(self):
# GH 9852
| Must be revisited after #7717.
#6608 seems to be solved by the following fixes.
- [ ] `PeriodIndex` should support the same freqs as `DatetimeIndex` (Related to #7222, maybe solved by #5148)
- [ ] Better logic to find a common divisor frequency (try to use `tsplot` as much as possible)
- [x] When plotting with `x_compat` to an `ax` which already holds `tsplot` lines, the `tsplot` lines must be redrawn on `x_compat` coordinates.
- [ ] Check whether `to_timestamp(how='e')` always reverts a `PeriodIndex` back to the original `DatetimeIndex`
- [x] If the target `ax` already holds `x_compat` lines, a continuous plot should be drawn on `x_compat` coordinates (the current version already has logic for this, but it fails once `ax` has acquired a `freq` property).
Other refactoring:
- [x] Do not re-plot every row in `tsplot`
- [x] Simplify `LinePlot` flow (separated as #7717)
- [x] Store plot_func for line and area mixed time-series plot (separated as #7733)
Result using the current PR (with #6608 modified slightly to show the legend).
```
s1 = pd.Series([1, 2, 3], index=[datetime.datetime(1995, 12, 31),
datetime.datetime(2000, 12, 31),
datetime.datetime(2005, 12, 31)], name='idx1')
s2 = pd.Series([1, 2, 3], index=[datetime.datetime(1997, 12, 31),
datetime.datetime(2003, 12, 31),
datetime.datetime(2008, 12, 31)], name='idx2')
ax = s1.plot(legend=True)
ax = s2.plot(legend=True)
s1.plot(ax=ax, legend=True)
```

One question: is `tsplot` considered a public function? If so, I'll prepare a separate function.
| https://api.github.com/repos/pandas-dev/pandas/pulls/7670 | 2014-07-05T13:08:45Z | 2017-03-20T13:48:33Z | null | 2017-03-20T13:48:34Z |
Update docs to use display.width instead of deprecated line_width. | diff --git a/doc/source/dsintro.rst b/doc/source/dsintro.rst
index 44bff4e5a8885..7c43a03e68013 100644
--- a/doc/source/dsintro.rst
+++ b/doc/source/dsintro.rst
@@ -630,19 +630,19 @@ default:
DataFrame(randn(3, 12))
-You can change how much to print on a single row by setting the ``line_width``
+You can change how much to print on a single row by setting the ``display.width``
option:
.. ipython:: python
- set_option('line_width', 40) # default is 80
+ set_option('display.width', 40) # default is 80
DataFrame(randn(3, 12))
.. ipython:: python
:suppress:
- reset_option('line_width')
+ reset_option('display.width')
You can also disable this feature via the ``expand_frame_repr`` option.
This will print the table in one block.
| https://api.github.com/repos/pandas-dev/pandas/pulls/7669 | 2014-07-05T07:52:42Z | 2014-07-05T09:05:37Z | 2014-07-05T09:05:37Z | 2014-07-05T09:05:37Z | |
BUG: windows failure on GH7667 | diff --git a/pandas/core/indexing.py b/pandas/core/indexing.py
index 280c4073b0f94..d387cb647d8c2 100644
--- a/pandas/core/indexing.py
+++ b/pandas/core/indexing.py
@@ -348,6 +348,9 @@ def _setitem_with_indexer(self, indexer, value):
"with a different length than the value"
)
+ # make sure we have an ndarray
+ value = getattr(value,'values',value).ravel()
+
# we can directly set the series here
# as we select a slice indexer on the mi
idx = index._convert_slice_indexer(idx)
| https://api.github.com/repos/pandas-dev/pandas/pulls/7668 | 2014-07-04T17:23:44Z | 2014-07-04T17:23:48Z | 2014-07-04T17:23:48Z | 2014-07-22T18:40:51Z | |
BUG: Bug in multi-index slice setting, related GH3738 | diff --git a/pandas/core/index.py b/pandas/core/index.py
index 525d17c7612a7..51ddacd00af08 100644
--- a/pandas/core/index.py
+++ b/pandas/core/index.py
@@ -523,6 +523,10 @@ def _convert_slice_indexer_getitem(self, key, is_index_slice=False):
def _convert_slice_indexer(self, key, typ=None):
""" convert a slice indexer. disallow floats in the start/stop/step """
+ # if we are not a slice, then we are done
+ if not isinstance(key, slice):
+ return key
+
# validate iloc
if typ == 'iloc':
@@ -2008,6 +2012,11 @@ def _convert_scalar_indexer(self, key, typ=None):
def _convert_slice_indexer(self, key, typ=None):
""" convert a slice indexer, by definition these are labels
unless we are iloc """
+
+ # if we are not a slice, then we are done
+ if not isinstance(key, slice):
+ return key
+
if typ == 'iloc':
return super(Float64Index, self)._convert_slice_indexer(key,
typ=typ)
diff --git a/pandas/tests/test_indexing.py b/pandas/tests/test_indexing.py
index 1a4da63a135a2..0e962800fef08 100644
--- a/pandas/tests/test_indexing.py
+++ b/pandas/tests/test_indexing.py
@@ -1883,6 +1883,29 @@ def f():
df.loc['bar'] *= 2
self.assertRaises(TypeError, f)
+ # from SO
+ #http://stackoverflow.com/questions/24572040/pandas-access-the-level-of-multiindex-for-inplace-operation
+ df_orig = DataFrame.from_dict({'price': {
+ ('DE', 'Coal', 'Stock'): 2,
+ ('DE', 'Gas', 'Stock'): 4,
+ ('DE', 'Elec', 'Demand'): 1,
+ ('FR', 'Gas', 'Stock'): 5,
+ ('FR', 'Solar', 'SupIm'): 0,
+ ('FR', 'Wind', 'SupIm'): 0}})
+ df_orig.index = MultiIndex.from_tuples(df_orig.index, names=['Sit', 'Com', 'Type'])
+
+ expected = df_orig.copy()
+ expected.iloc[[0,2,3]] *= 2
+
+ idx = pd.IndexSlice
+ df = df_orig.copy()
+ df.loc[idx[:,:,'Stock'],:] *= 2
+ assert_frame_equal(df, expected)
+
+ df = df_orig.copy()
+ df.loc[idx[:,:,'Stock'],'price'] *= 2
+ assert_frame_equal(df, expected)
+
def test_getitem_multiindex(self):
# GH 5725
| https://api.github.com/repos/pandas-dev/pandas/pulls/7667 | 2014-07-04T15:41:25Z | 2014-07-04T16:10:17Z | 2014-07-04T16:10:16Z | 2014-07-04T16:10:17Z | |
TST/CLN: Refactor io.data.options class to improve testing | diff --git a/pandas/io/data.py b/pandas/io/data.py
index 67a841a27f992..13ced745b7b3f 100644
--- a/pandas/io/data.py
+++ b/pandas/io/data.py
@@ -661,31 +661,35 @@ def get_options_data(self, month=None, year=None, expiry=None):
_OPTIONS_BASE_URL = 'http://finance.yahoo.com/q/op?s={sym}'
- def _get_option_tables(self, month, year, expiry):
+ def _get_option_tables(self, expiry):
+ root = self._get_option_page_from_yahoo(expiry)
+ tables = self._parse_option_page_from_yahoo(root)
+ m1 = _two_char_month(expiry.month)
+ table_name = '_tables' + m1 + str(expiry.year)[-2:]
+ setattr(self, table_name, tables)
+ return tables
- year, month, expiry = self._try_parse_dates(year, month, expiry)
+ def _get_option_page_from_yahoo(self, expiry):
url = self._OPTIONS_BASE_URL.format(sym=self.symbol)
- if month and year: # try to get specified month from yahoo finance
- m1 = _two_char_month(month)
+ m1 = _two_char_month(expiry.month)
- # if this month use other url
- if month == CUR_MONTH and year == CUR_YEAR:
- url += '+Options'
- else:
- url += '&m={year}-{m1}'.format(year=year, m1=m1)
- else: # Default to current month
+ # if this month use other url
+ if expiry.month == CUR_MONTH and expiry.year == CUR_YEAR:
url += '+Options'
+ else:
+ url += '&m={year}-{m1}'.format(year=expiry.year, m1=m1)
root = self._parse_url(url)
+ return root
+
+ def _parse_option_page_from_yahoo(self, root):
+
tables = root.xpath('.//table')
ntables = len(tables)
if ntables == 0:
- raise RemoteDataError("No tables found at {0!r}".format(url))
-
- table_name = '_tables' + m1 + str(year)[-2:]
- setattr(self, table_name, tables)
+ raise RemoteDataError("No tables found")
try:
self.underlying_price, self.quote_time = self._get_underlying_price(root)
@@ -723,7 +727,7 @@ def _get_option_data(self, month, year, expiry, name):
try:
tables = getattr(self, table_name)
except AttributeError:
- tables = self._get_option_tables(month, year, expiry)
+ tables = self._get_option_tables(expiry)
ntables = len(tables)
table_loc = self._TABLE_LOC[name]
@@ -903,13 +907,14 @@ def get_near_stock_price(self, above_below=2, call=True, put=False,
meth_name = 'get_{0}_data'.format(nam[:-1])
df = getattr(self, meth_name)(expiry=expiry)
- start_index = np.where(df.index.get_level_values('Strike')
+ if self.underlying_price:
+ start_index = np.where(df.index.get_level_values('Strike')
> self.underlying_price)[0][0]
- get_range = slice(start_index - above_below,
+ get_range = slice(start_index - above_below,
start_index + above_below + 1)
- chop = df[get_range].dropna(how='all')
- data[nam] = chop
+ chop = df[get_range].dropna(how='all')
+ data[nam] = chop
return concat([data[nam] for nam in to_ret]).sortlevel()
@@ -948,6 +953,8 @@ def _try_parse_dates(year, month, expiry):
year = CUR_YEAR
month = CUR_MONTH
expiry = dt.date(year, month, 1)
+ else:
+ expiry = dt.date(year, month, 1)
return year, month, expiry
@@ -1127,7 +1134,11 @@ def _get_expiry_months(self):
url = 'http://finance.yahoo.com/q/op?s={sym}'.format(sym=self.symbol)
root = self._parse_url(url)
- links = root.xpath('.//*[@id="yfncsumtab"]')[0].xpath('.//a')
+ try:
+ links = root.xpath('.//*[@id="yfncsumtab"]')[0].xpath('.//a')
+ except IndexError:
+ return RemoteDataError('Expiry months not available')
+
month_gen = (element.attrib['href'].split('=')[-1]
for element in links
if '/q/op?s=' in element.attrib['href']
diff --git a/pandas/io/tests/test_data.py b/pandas/io/tests/test_data.py
index 5d2a8ef08c95b..8b5a81f050ced 100644
--- a/pandas/io/tests/test_data.py
+++ b/pandas/io/tests/test_data.py
@@ -334,7 +334,7 @@ def test_sample_page_price_quote_time2(self):
@network
def test_sample_page_chg_float(self):
#Tests that numeric columns with comma's are appropriately dealt with
- tables = self.root1.xpath('.//table')
+ tables = self.aapl._parse_option_page_from_yahoo(self.root1)
data = web._parse_options_data(tables[self.aapl._TABLE_LOC['puts']])
option_data = self.aapl._process_data(data)
self.assertEqual(option_data['Chg'].dtype, 'float64')
| Removed most references to month and year, passing the expiry date around instead.
Also guarded the expiry-month link lookup with a `RemoteDataError`.
Fixes #7648
| https://api.github.com/repos/pandas-dev/pandas/pulls/7665 | 2014-07-04T04:58:46Z | 2014-07-07T11:09:41Z | 2014-07-07T11:09:41Z | 2014-07-07T11:09:46Z |
FIX: Scalar timedelta NaT results raise | diff --git a/pandas/core/nanops.py b/pandas/core/nanops.py
index aa6140383a27a..b38bc9142f7f5 100644
--- a/pandas/core/nanops.py
+++ b/pandas/core/nanops.py
@@ -230,7 +230,9 @@ def _wrap_results(result, dtype):
from pandas import Series
# coerce float to results
- if is_float(result):
+ if isnull(result):
+ result = tslib.NaT
+ elif is_float(result):
result = int(result)
result = Series([result], dtype='timedelta64[ns]')
else:
diff --git a/pandas/tests/test_nanops.py b/pandas/tests/test_nanops.py
index 3e8a5fecbb579..483a07264482c 100644
--- a/pandas/tests/test_nanops.py
+++ b/pandas/tests/test_nanops.py
@@ -5,6 +5,7 @@
import numpy as np
from pandas.core.common import isnull
+from pandas.tslib import iNaT
import pandas.core.nanops as nanops
import pandas.util.testing as tm
@@ -30,6 +31,8 @@ def setUp(self):
self.arr_shape).astype('m8[ns]')
self.arr_nan = np.tile(np.nan, self.arr_shape)
+ self.arr_datenat = np.tile(iNaT, self.arr_shape).astype('M8[ns]')
+ self.arr_tdeltanat = np.tile(iNaT, self.arr_shape).astype('m8[ns]')
self.arr_float_nan = np.vstack([self.arr_float, self.arr_nan])
self.arr_float1_nan = np.vstack([self.arr_float1, self.arr_nan])
self.arr_nan_float1 = np.vstack([self.arr_nan, self.arr_float1])
@@ -244,13 +247,19 @@ def check_funs(self, testfunc, targfunc,
else:
self.check_fun(testfunc, targfunc, 'arr_date', **kwargs)
objs += [self.arr_date.astype('O')]
+ if allow_all_nan:
+ self.check_fun(testfunc, targfunc, 'arr_datenat',
+ **kwargs)
try:
targfunc(self.arr_tdelta)
except TypeError:
pass
else:
self.check_fun(testfunc, targfunc, 'arr_tdelta', **kwargs)
- objs += [self.arr_tdelta.astype('O')]
+ objs += [self.arr_date.astype('O')]
+ if allow_all_nan:
+ self.check_fun(testfunc, targfunc, 'arr_tdeltanat',
+ **kwargs)
if allow_obj:
self.arr_obj = np.vstack(objs)
@@ -291,18 +300,15 @@ def test_nanall(self):
allow_all_nan=False, allow_str=False, allow_date=False)
def test_nansum(self):
- self.check_funs(nanops.nansum, np.sum,
- allow_str=False, allow_date=False)
+ self.check_funs(nanops.nansum, np.sum, allow_str=False)
def test_nanmean(self):
- self.check_funs(nanops.nanmean, np.mean,
- allow_complex=False, allow_obj=False,
- allow_str=False, allow_date=False)
+ self.check_funs(nanops.nanmean, np.mean, allow_complex=False,
+ allow_obj=False, allow_str=False)
def test_nanmedian(self):
- self.check_funs(nanops.nanmedian, np.median,
- allow_complex=False, allow_str=False, allow_date=False,
- allow_obj='convert')
+ self.check_funs(nanops.nanmedian, np.median, allow_complex=False,
+ allow_str=False, allow_obj='convert')
def test_nanvar(self):
self.check_funs_ddof(nanops.nanvar, np.var,
@@ -349,7 +355,6 @@ def test_nanargmin(self):
func = partial(self._argminmax_wrap, func=np.argmin)
if tm.sys.version_info[0:2] == (2, 6):
self.check_funs(nanops.nanargmin, func,
- allow_date=False,
allow_str=False, allow_obj=False)
else:
self.check_funs(nanops.nanargmin, func,
| Currently, coercion of scalar results from `float` to `timedelta64[ns]` passes through `int`, which raises when attempting to coerce `NaN` to `NaT`.
To reproduce:
``` python
import pandas as pd
import numpy as np
pd.Series([np.timedelta64('NaT')]).sum()
# TypeError: reduction operation 'sum' not allowed for this dtype
```
| https://api.github.com/repos/pandas-dev/pandas/pulls/7661 | 2014-07-03T21:36:46Z | 2014-09-14T14:20:10Z | null | 2014-09-14T14:20:10Z |
API: disallow inplace setting with where and a non-np.nan value (GH7656) | diff --git a/doc/source/v0.14.1.txt b/doc/source/v0.14.1.txt
index 850e7e13db2ff..8ede5f32dded6 100644
--- a/doc/source/v0.14.1.txt
+++ b/doc/source/v0.14.1.txt
@@ -24,11 +24,6 @@ users upgrade to this version.
API changes
~~~~~~~~~~~
-
-
-
-
-
- All ``offsets`` suppports ``normalize`` keyword to specify whether ``offsets.apply``, ``rollforward`` and ``rollback`` resets time (hour, minute, etc) or not (default ``False``, preserves time) (:issue:`7156`)
@@ -60,6 +55,8 @@ API changes
- Bug in ``.loc`` performing fallback integer indexing with ``object`` dtype indices (:issue:`7496`)
- Add back ``#N/A N/A`` as a default NA value in text parsing, (regresion from 0.12) (:issue:`5521`)
+- Raise a ``TypeError`` on inplace-setting with a ``.where`` and a non ``np.nan`` value as this is inconsistent
+ with a set-item expression like ``df[mask] = None`` (:issue:`7656`)
.. _whatsnew_0141.prior_deprecations:
diff --git a/pandas/core/frame.py b/pandas/core/frame.py
index 049d3b6a8578c..da9fb44f80b09 100644
--- a/pandas/core/frame.py
+++ b/pandas/core/frame.py
@@ -679,8 +679,8 @@ def to_gbq(self, destination_table, project_id=None, chunksize=10000,
the defined table schema and column types. For simplicity, this method
uses the Google BigQuery streaming API. The to_gbq method chunks data
into a default chunk size of 10,000. Failures return the complete error
- response which can be quite long depending on the size of the insert.
- There are several important limitations of the Google streaming API
+ response which can be quite long depending on the size of the insert.
+ There are several important limitations of the Google streaming API
which are detailed at:
https://developers.google.com/bigquery/streaming-data-into-bigquery.
@@ -1925,11 +1925,7 @@ def _setitem_frame(self, key, value):
if key.values.dtype != np.bool_:
raise TypeError('Must pass DataFrame with boolean values only')
- if self._is_mixed_type:
- if not self._is_numeric_mixed_type:
- raise TypeError(
- 'Cannot do boolean setting on mixed-type frame')
-
+ self._check_inplace_setting(value)
self._check_setitem_copy()
self.where(-key, value, inplace=True)
diff --git a/pandas/core/generic.py b/pandas/core/generic.py
index 756de479a471a..c88aced3de8a2 100644
--- a/pandas/core/generic.py
+++ b/pandas/core/generic.py
@@ -1910,6 +1910,24 @@ def _is_datelike_mixed_type(self):
f = lambda: self._data.is_datelike_mixed_type
return self._protect_consolidate(f)
+ def _check_inplace_setting(self, value):
+ """ check whether we allow in-place setting with this type of value """
+
+ if self._is_mixed_type:
+ if not self._is_numeric_mixed_type:
+
+ # allow an actual np.nan thru
+ try:
+ if np.isnan(value):
+ return True
+ except:
+ pass
+
+ raise TypeError(
+ 'Cannot do inplace boolean setting on mixed-types with a non np.nan value')
+
+ return True
+
def _protect_consolidate(self, f):
blocks_before = len(self._data.blocks)
result = f()
@@ -3214,6 +3232,8 @@ def where(self, cond, other=np.nan, inplace=False, axis=None, level=None,
if inplace:
# we may have different type blocks come out of putmask, so
# reconstruct the block manager
+
+ self._check_inplace_setting(other)
new_data = self._data.putmask(mask=cond, new=other, align=axis is None,
inplace=True)
self._update_inplace(new_data)
diff --git a/pandas/tests/test_frame.py b/pandas/tests/test_frame.py
index 7368fcf8dac26..d7f8d235d4229 100644
--- a/pandas/tests/test_frame.py
+++ b/pandas/tests/test_frame.py
@@ -9242,6 +9242,12 @@ def test_where_none(self):
expected = DataFrame({'series': Series([0,1,2,3,4,5,6,7,np.nan,np.nan]) })
assert_frame_equal(df, expected)
+ # GH 7656
+ df = DataFrame([{'A': 1, 'B': np.nan, 'C': 'Test'}, {'A': np.nan, 'B': 'Test', 'C': np.nan}])
+ expected = df.where(~isnull(df), None)
+ with tm.assertRaisesRegexp(TypeError, 'boolean setting on mixed-type'):
+ df.where(~isnull(df), None, inplace=True)
+
def test_where_align(self):
def create():
| closes #7656
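The `np.nan` pass-through in `_check_inplace_setting` can be sketched in isolation. This is a simplified stand-in (using `math.isnan` instead of numpy, and returning a bool instead of raising) just to show the shape of the check:

```python
import math

def allows_inplace_value(value):
    # Simplified sketch of the nan check in _check_inplace_setting:
    # an actual float NaN is let through for mixed-type frames; any
    # other value (e.g. None or a string) is rejected — the real
    # method raises a TypeError in that case.
    try:
        return math.isnan(value)
    except TypeError:
        return False
```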
| https://api.github.com/repos/pandas-dev/pandas/pulls/7657 | 2014-07-03T15:40:07Z | 2014-07-03T16:26:12Z | 2014-07-03T16:26:12Z | 2023-05-18T15:27:18Z |
PERF: fix perf issue in tz conversions w/o affecting DST transitions | diff --git a/pandas/tseries/frequencies.py b/pandas/tseries/frequencies.py
index 09ff6578160f8..441a5e8a99c78 100644
--- a/pandas/tseries/frequencies.py
+++ b/pandas/tseries/frequencies.py
@@ -599,7 +599,7 @@ def _period_group(freqstr):
def _period_str_to_code(freqstr):
# hack
freqstr = _rule_aliases.get(freqstr, freqstr)
-
+
if freqstr not in _dont_uppercase:
freqstr = _rule_aliases.get(freqstr.lower(), freqstr)
@@ -659,6 +659,25 @@ def infer_freq(index, warn=True):
_ONE_HOUR = 60 * _ONE_MINUTE
_ONE_DAY = 24 * _ONE_HOUR
+def _tz_convert_with_transitions(values, to_tz, from_tz):
+ """
+ convert i8 values from the specificed timezone to the to_tz zone, taking
+ into account DST transitions
+ """
+
+ # vectorization is slow, so tests if we can do this via the faster tz_convert
+ f = lambda x: tslib.tz_convert_single(x, to_tz, from_tz)
+
+ if len(values) > 2:
+ first_slow, last_slow = f(values[0]),f(values[-1])
+
+ first_fast, last_fast = tslib.tz_convert(np.array([values[0],values[-1]],dtype='i8'),to_tz,from_tz)
+
+ # don't cross a DST, so ok
+ if first_fast == first_slow and last_fast == last_slow:
+ return tslib.tz_convert(values,to_tz,from_tz)
+
+ return np.vectorize(f)(values)
class _FrequencyInferer(object):
"""
@@ -670,10 +689,7 @@ def __init__(self, index, warn=True):
self.values = np.asarray(index).view('i8')
if index.tz is not None:
- f = lambda x: tslib.tz_convert_single(x, 'UTC', index.tz)
- self.values = np.vectorize(f)(self.values)
- # This cant work, because of DST
- # self.values = tslib.tz_convert(self.values, 'UTC', index.tz)
+ self.values = _tz_convert_with_transitions(self.values,'UTC',index.tz)
self.warn = warn
diff --git a/pandas/tseries/index.py b/pandas/tseries/index.py
index 9473b10876600..d022911fe2909 100644
--- a/pandas/tseries/index.py
+++ b/pandas/tseries/index.py
@@ -14,7 +14,7 @@
from pandas.compat import u
from pandas.tseries.frequencies import (
infer_freq, to_offset, get_period_alias,
- Resolution, get_reso_string)
+ Resolution, get_reso_string, _tz_convert_with_transitions)
from pandas.core.base import DatetimeIndexOpsMixin
from pandas.tseries.offsets import DateOffset, generate_range, Tick, CDay
from pandas.tseries.tools import parse_time_string, normalize_date
@@ -1376,7 +1376,10 @@ def __getitem__(self, key):
else:
if com._is_bool_indexer(key):
key = np.asarray(key)
- key = lib.maybe_booleans_to_slice(key.view(np.uint8))
+ if key.all():
+ key = slice(0,None,None)
+ else:
+ key = lib.maybe_booleans_to_slice(key.view(np.uint8))
new_offset = None
if isinstance(key, slice):
@@ -1588,9 +1591,7 @@ def insert(self, loc, item):
new_dates = np.concatenate((self[:loc].asi8, [item.view(np.int64)],
self[loc:].asi8))
if self.tz is not None:
- f = lambda x: tslib.tz_convert_single(x, 'UTC', self.tz)
- new_dates = np.vectorize(f)(new_dates)
- # new_dates = tslib.tz_convert(new_dates, 'UTC', self.tz)
+ new_dates = _tz_convert_with_transitions(new_dates,'UTC',self.tz)
return DatetimeIndex(new_dates, name=self.name, freq=freq, tz=self.tz)
except (AttributeError, TypeError):
| ```
-------------------------------------------------------------------------------
Test name | head[ms] | base[ms] | ratio |
-------------------------------------------------------------------------------
datetimeindex_normalize | 3.3297 | 83.0923 | 0.0401 |
Ratio < 1.0 means the target commit is faster then the baseline.
Seed used: 1234
Target [fc88541] : PERF: allow slice indexers to be computed faster
Base [160419e] : TST: fixes for 2.6 comparisons
```
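The speedup comes from the endpoint check in `_tz_convert_with_transitions` above: if the fast vectorized conversion agrees with the slow per-element one at both ends of the array, no DST transition was crossed and the fast path is safe for the whole array. A timezone-free sketch of that dispatch logic (hypothetical helper name, plain lists standing in for i8 arrays):

```python
def convert_with_endpoint_check(values, slow, fast):
    # `slow` converts one value correctly; `fast` converts a whole
    # list but may be wrong across a DST transition. Agreement at
    # both endpoints is taken as evidence no transition is crossed.
    if len(values) > 2:
        first_slow, last_slow = slow(values[0]), slow(values[-1])
        first_fast, last_fast = fast([values[0], values[-1]])
        if first_fast == first_slow and last_fast == last_slow:
            return fast(values)
    # fall back to the correct-but-slow elementwise conversion
    return [slow(v) for v in values]

# demo with trivial "conversions": elementwise vs. vectorized doubling
double_one = lambda x: x * 2
double_all = lambda xs: [x * 2 for x in xs]
result = convert_with_endpoint_check([1, 2, 3, 4], double_one, double_all)
```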
| https://api.github.com/repos/pandas-dev/pandas/pulls/7652 | 2014-07-02T18:02:53Z | 2014-07-02T18:38:23Z | 2014-07-02T18:38:23Z | 2014-07-02T18:38:23Z |
REGR: Add back #N/A N/A as a default NA value (regresion from 0.12) (GH5521) | diff --git a/doc/source/v0.14.1.txt b/doc/source/v0.14.1.txt
index 42041cceeb81b..9392fd299b674 100644
--- a/doc/source/v0.14.1.txt
+++ b/doc/source/v0.14.1.txt
@@ -59,6 +59,7 @@ API changes
object isn't a ``Period`` ``False`` is returned. (:issue:`7376`)
- Bug in ``.loc`` performing fallback integer indexing with ``object`` dtype indices (:issue:`7496`)
+- Add back ``#N/A N/A`` as a default NA value in text parsing, (regresion from 0.12) (:issue:`5521`)
.. _whatsnew_0141.prior_deprecations:
diff --git a/pandas/io/parsers.py b/pandas/io/parsers.py
index 3f7a8ce9b2788..0dcbdb86b9069 100644
--- a/pandas/io/parsers.py
+++ b/pandas/io/parsers.py
@@ -487,7 +487,7 @@ def read_fwf(filepath_or_buffer, colspecs='infer', widths=None, **kwds):
# no longer excluding inf representations
# '1.#INF','-1.#INF', '1.#INF000000',
_NA_VALUES = set([
- '-1.#IND', '1.#QNAN', '1.#IND', '-1.#QNAN', '#N/A', 'N/A', 'NA', '#NA',
+ '-1.#IND', '1.#QNAN', '1.#IND', '-1.#QNAN', '#N/A N/A', '#N/A', 'N/A', 'NA', '#NA',
'NULL', 'NaN', '-NaN', 'nan', '-nan', ''
])
diff --git a/pandas/io/tests/test_parsers.py b/pandas/io/tests/test_parsers.py
index b9c7621c19ab0..ab9a6f58119c2 100644
--- a/pandas/io/tests/test_parsers.py
+++ b/pandas/io/tests/test_parsers.py
@@ -706,7 +706,7 @@ def test_non_string_na_values(self):
def test_default_na_values(self):
_NA_VALUES = set(['-1.#IND', '1.#QNAN', '1.#IND', '-1.#QNAN',
'#N/A','N/A', 'NA', '#NA', 'NULL', 'NaN',
- 'nan', '-NaN', '-nan', ''])
+ 'nan', '-NaN', '-nan', '#N/A N/A',''])
assert_array_equal (_NA_VALUES, parsers._NA_VALUES)
nv = len(_NA_VALUES)
def f(i, v):
| closes #5521
| https://api.github.com/repos/pandas-dev/pandas/pulls/7639 | 2014-07-01T15:57:47Z | 2014-07-01T16:33:07Z | 2014-07-01T16:33:07Z | 2014-07-01T16:33:07Z |
BUG: Bug in Series.get with a boolean accessor (GH7407) | diff --git a/doc/source/v0.14.1.txt b/doc/source/v0.14.1.txt
index 42041cceeb81b..5a731ae3dcacf 100644
--- a/doc/source/v0.14.1.txt
+++ b/doc/source/v0.14.1.txt
@@ -173,7 +173,7 @@ Bug Fixes
- Bug in timeops with non-aligned Series (:issue:`7500`)
- Bug in timedelta inference when assigning an incomplete Series (:issue:`7592`)
- Bug in groupby ``.nth`` with a Series and integer-like column name (:issue:`7559`)
-
+- Bug in ``Series.get`` with a boolean accessor (:issue:`7407`)
- Bug in ``value_counts`` where ``NaT`` did not qualify as missing (``NaN``) (:issue:`7423`)
- Bug in ``to_timedelta`` that accepted invalid units and misinterpreted 'm/h' (:issue:`7611`, :issue: `6423`)
diff --git a/pandas/core/index.py b/pandas/core/index.py
index 4d7e14c9e026f..525d17c7612a7 100644
--- a/pandas/core/index.py
+++ b/pandas/core/index.py
@@ -1191,7 +1191,7 @@ def get_value(self, series, key):
try:
return self._engine.get_value(s, k)
except KeyError as e1:
- if len(self) > 0 and self.inferred_type == 'integer':
+ if len(self) > 0 and self.inferred_type in ['integer','boolean']:
raise
try:
diff --git a/pandas/tests/test_series.py b/pandas/tests/test_series.py
index 093954f1d8c1d..f4f8495b1dafd 100644
--- a/pandas/tests/test_series.py
+++ b/pandas/tests/test_series.py
@@ -123,6 +123,20 @@ def test_get(self):
expected = 43
self.assertEqual(result,expected)
+ # GH 7407
+ # with a boolean accessor
+ df = pd.DataFrame({'i':[0]*3, 'b':[False]*3})
+ vc = df.i.value_counts()
+ result = vc.get(99,default='Missing')
+ self.assertEquals(result,'Missing')
+
+ vc = df.b.value_counts()
+ result = vc.get(False,default='Missing')
+ self.assertEquals(result,3)
+
+ result = vc.get(True,default='Missing')
+ self.assertEquals(result,'Missing')
+
def test_delitem(self):
# GH 5542
| closes #7407
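The fixed behavior can be exercised with `Series.get` directly, mirroring the test added in the diff (assuming a pandas version that includes this fix):

```python
import pandas as pd

# value_counts on a boolean column yields a boolean-typed index
vc = pd.Series([False, False, False]).value_counts()

# a key present in the boolean index returns its count ...
assert vc.get(False) == 3
# ... while a missing key falls back to the default instead of raising
assert vc.get(True, default='Missing') == 'Missing'
```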
| https://api.github.com/repos/pandas-dev/pandas/pulls/7638 | 2014-07-01T15:39:11Z | 2014-07-01T16:02:12Z | 2014-07-01T16:02:12Z | 2014-07-01T16:02:12Z |