| title | diff | body | url | created_at | closed_at | merged_at | updated_at |
|---|---|---|---|---|---|---|---|
DOC: Fix name in interpolate plot. | diff --git a/doc/source/missing_data.rst b/doc/source/missing_data.rst
index e7966aa71486c..3e2183b072ac6 100644
--- a/doc/source/missing_data.rst
+++ b/doc/source/missing_data.rst
@@ -375,11 +375,11 @@ Compare several methods:
np.random.seed(2)
ser = Series(np.arange(1, 10.1, .25)**2 + np.random.randn(37))
- bad = np.array([4, 13, 14, 15, 16, 17, 18, 20, 29, 34, 35, 36])
+ bad = np.array([4, 13, 14, 15, 16, 17, 18, 20, 29])
ser[bad] = np.nan
methods = ['linear', 'quadratic', 'cubic']
- df = DataFrame({m: s.interpolate(method=m) for m in methods})
+ df = DataFrame({m: ser.interpolate(method=m) for m in methods})
@savefig compare_interpolations.png
df.plot()
| https://api.github.com/repos/pandas-dev/pandas/pulls/5165 | 2013-10-09T23:03:25Z | 2013-10-09T23:39:21Z | 2013-10-09T23:39:21Z | 2017-04-05T02:05:41Z | |
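The interpolation snippet this PR fixes can be run standalone; a minimal sketch (assuming a recent pandas, with the plotting step omitted — note the non-linear methods `'quadratic'`/`'cubic'` additionally require scipy):

```python
import numpy as np
import pandas as pd

# A noisy series with blocks of missing values, as in the docs snippet.
np.random.seed(2)
ser = pd.Series(np.arange(1, 10.1, .25) ** 2 + np.random.randn(37))
bad = np.array([4, 13, 14, 15, 16, 17, 18, 20, 29])
ser[bad] = np.nan

# Linear interpolation fills the gaps; 'quadratic' and 'cubic' work the
# same way (this was the `s` vs `ser` typo the diff corrects) but need scipy.
filled = ser.interpolate(method='linear')
print(int(filled.isnull().sum()))  # 0 -- every gap has been filled
```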
ENH: add to_frame method to Series | diff --git a/doc/source/api.rst b/doc/source/api.rst
index 8772b858de2cc..dfb562bcc3298 100644
--- a/doc/source/api.rst
+++ b/doc/source/api.rst
@@ -438,6 +438,7 @@ Serialization / IO / Conversion
Series.to_pickle
Series.to_csv
Series.to_dict
+ Series.to_frame
Series.to_sparse
Series.to_string
Series.to_clipboard
diff --git a/doc/source/v0.13.0.txt b/doc/source/v0.13.0.txt
index c6a4c280ca4bb..946a45b7dbe7c 100644
--- a/doc/source/v0.13.0.txt
+++ b/doc/source/v0.13.0.txt
@@ -492,6 +492,8 @@ Enhancements
- DatetimeIndex is now in the API documentation, see :ref:`here<api.datetimeindex>`
- :meth:`~pandas.io.json.json_normalize` is a new method to allow you to create a flat table
from semi-structured JSON data. :ref:`See the docs<io.json_normalize>` (:issue:`1067`)
+ - ``Series`` now supports ``to_frame`` method to convert it to a single-column DataFrame
+ (:issue:`5164`)
.. _whatsnew_0130.experimental:
diff --git a/pandas/core/series.py b/pandas/core/series.py
index d185939d6abc9..34b3634996f4b 100644
--- a/pandas/core/series.py
+++ b/pandas/core/series.py
@@ -827,12 +827,7 @@ def reset_index(self, level=None, drop=False, name=None, inplace=False):
raise TypeError('Cannot reset_index inplace on a Series '
'to create a DataFrame')
else:
- from pandas.core.frame import DataFrame
- if name is None:
- df = DataFrame(self)
- else:
- df = DataFrame({name: self})
-
+ df = self.to_frame(name)
return df.reset_index(level=level, drop=drop)
def __unicode__(self):
@@ -1028,6 +1023,27 @@ def to_dict(self):
"""
return dict(compat.iteritems(self))
+ def to_frame(self, name=None):
+ """
+ Convert Series to DataFrame
+
+ Parameters
+ ----------
+ name : object, default None
+ The passed name should substitute for the series name (if it has one).
+
+ Returns
+ -------
+ data_frame : DataFrame
+ """
+ from pandas.core.frame import DataFrame
+ if name is None:
+ df = DataFrame(self)
+ else:
+ df = DataFrame({name: self})
+
+ return df
+
def to_sparse(self, kind='block', fill_value=None):
"""
Convert Series to SparseSeries
diff --git a/pandas/tests/test_series.py b/pandas/tests/test_series.py
index 8abc068fd6d24..22d8d79ecc82f 100644
--- a/pandas/tests/test_series.py
+++ b/pandas/tests/test_series.py
@@ -30,6 +30,7 @@
from pandas import compat
from pandas.util.testing import (assert_series_equal,
assert_almost_equal,
+ assert_frame_equal,
ensure_clean)
import pandas.util.testing as tm
@@ -3606,6 +3607,21 @@ def test_tolist(self):
rs = s.tolist()
self.assertEqual(self.ts.index[0], rs[0])
+ def test_to_frame(self):
+ self.ts.name = None
+ rs = self.ts.to_frame()
+ xp = pd.DataFrame(self.ts.values, index=self.ts.index)
+ assert_frame_equal(rs, xp)
+
+ self.ts.name = 'testname'
+ rs = self.ts.to_frame()
+ xp = pd.DataFrame(dict(testname=self.ts.values), index=self.ts.index)
+ assert_frame_equal(rs, xp)
+
+ rs = self.ts.to_frame(name='testdifferent')
+ xp = pd.DataFrame(dict(testdifferent=self.ts.values), index=self.ts.index)
+ assert_frame_equal(rs, xp)
+
def test_to_dict(self):
self.assert_(np.array_equal(Series(self.ts.to_dict()), self.ts))
| Adding a method to Series to promote it to a DataFrame. I've wished this existed so that when a method returns a series, and you could easily promote it to a DataFrame so that it's easier to add columns to it.
| https://api.github.com/repos/pandas-dev/pandas/pulls/5164 | 2013-10-09T20:02:07Z | 2013-10-10T13:00:50Z | 2013-10-10T13:00:50Z | 2014-06-13T06:38:18Z |
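The `to_frame` behavior added here (and still present in current pandas) in a quick sketch:

```python
import pandas as pd

s = pd.Series([1, 2, 3], name='x')

# A named Series keeps its name as the single column label...
df = s.to_frame()
print(df.columns.tolist())  # ['x']

# ...and an explicitly passed name overrides it, matching the
# `name` parameter documented in the diff.
df2 = s.to_frame(name='y')
print(df2.columns.tolist())  # ['y']
```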
DOC: add DataFrame et al class docstring to api docs | diff --git a/doc/source/_templates/autosummary/class.rst b/doc/source/_templates/autosummary/class.rst
new file mode 100644
index 0000000000000..e9af7e8df8bab
--- /dev/null
+++ b/doc/source/_templates/autosummary/class.rst
@@ -0,0 +1,33 @@
+{% extends "!autosummary/class.rst" %}
+
+{% block methods %}
+{% if methods %}
+
+..
+ HACK -- the point here is that we don't want this to appear in the output, but the autosummary should still generate the pages.
+ .. autosummary::
+ :toctree:
+ {% for item in all_methods %}
+ {%- if not item.startswith('_') or item in ['__call__'] %}
+ {{ name }}.{{ item }}
+ {%- endif -%}
+ {%- endfor %}
+
+{% endif %}
+{% endblock %}
+
+{% block attributes %}
+{% if attributes %}
+
+..
+ HACK -- the point here is that we don't want this to appear in the output, but the autosummary should still generate the pages.
+ .. autosummary::
+ :toctree:
+ {% for item in all_attributes %}
+ {%- if not item.startswith('_') %}
+ {{ name }}.{{ item }}
+ {%- endif -%}
+ {%- endfor %}
+
+{% endif %}
+{% endblock %}
diff --git a/doc/source/api.rst b/doc/source/api.rst
index 2e817e9d19c3f..46d77d0dcceb7 100644
--- a/doc/source/api.rst
+++ b/doc/source/api.rst
@@ -206,6 +206,13 @@ Exponentially-weighted moving window functions
Series
------
+Constructor
+~~~~~~~~~~~
+.. autosummary::
+ :toctree: generated/
+
+ Series
+
Attributes and underlying data
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
**Axes**
@@ -219,13 +226,11 @@ Attributes and underlying data
Series.isnull
Series.notnull
-Conversion / Constructors
-~~~~~~~~~~~~~~~~~~~~~~~~~
-
+Conversion
+~~~~~~~~~~
.. autosummary::
:toctree: generated/
- Series.__init__
Series.astype
Series.copy
Series.isnull
@@ -418,6 +423,13 @@ Serialization / IO / Conversion
DataFrame
---------
+Constructor
+~~~~~~~~~~~
+.. autosummary::
+ :toctree: generated/
+
+ DataFrame
+
Attributes and underlying data
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
**Axes**
@@ -436,12 +448,11 @@ Attributes and underlying data
DataFrame.ndim
DataFrame.shape
-Conversion / Constructors
-~~~~~~~~~~~~~~~~~~~~~~~~~
+Conversion
+~~~~~~~~~~
.. autosummary::
:toctree: generated/
- DataFrame.__init__
DataFrame.astype
DataFrame.convert_objects
DataFrame.copy
@@ -665,6 +676,13 @@ Serialization / IO / Conversion
Panel
------
+Constructor
+~~~~~~~~~~~
+.. autosummary::
+ :toctree: generated/
+
+ Panel
+
Attributes and underlying data
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
**Axes**
@@ -681,12 +699,11 @@ Attributes and underlying data
Panel.ndim
Panel.shape
-Conversion / Constructors
-~~~~~~~~~~~~~~~~~~~~~~~~~
+Conversion
+~~~~~~~~~~
.. autosummary::
:toctree: generated/
- Panel.__init__
Panel.astype
Panel.copy
Panel.isnull
@@ -853,6 +870,11 @@ Index
**Many of these methods or variants thereof are available on the objects that contain an index (Series/Dataframe)
and those should most likely be used before calling these methods directly.**
+.. autosummary::
+ :toctree: generated/
+
+ Index
+
Modifying and Computations
~~~~~~~~~~~~~~~~~~~~~~~~~~
.. autosummary::
@@ -934,6 +956,11 @@ Properties
DatetimeIndex
-------------
+.. autosummary::
+ :toctree: generated/
+
+ DatetimeIndex
+
Time/Date Components
~~~~~~~~~~~~~~~~~~~~
* **year**
diff --git a/doc/source/conf.py b/doc/source/conf.py
index 99d1703b9ca34..a500289b27ab1 100644
--- a/doc/source/conf.py
+++ b/doc/source/conf.py
@@ -51,7 +51,7 @@
]
# Add any paths that contain templates here, relative to this directory.
-templates_path = ['_templates', '_templates/autosummary']
+templates_path = ['_templates']
# The suffix of source filenames.
source_suffix = '.rst'
diff --git a/doc/sphinxext/docscrape.py b/doc/sphinxext/docscrape.py
index 9a8ac59b32714..a8323c2c74361 100755
--- a/doc/sphinxext/docscrape.py
+++ b/doc/sphinxext/docscrape.py
@@ -8,7 +8,7 @@
import pydoc
from StringIO import StringIO
from warnings import warn
-
+import collections
class Reader(object):
@@ -473,6 +473,8 @@ def __str__(self):
class ClassDoc(NumpyDocString):
+ extra_public_methods = ['__call__']
+
def __init__(self, cls, doc=None, modulename='', func_doc=FunctionDoc,
config={}):
if not inspect.isclass(cls) and cls is not None:
@@ -502,12 +504,16 @@ def __init__(self, cls, doc=None, modulename='', func_doc=FunctionDoc,
def methods(self):
if self._cls is None:
return []
- return [name for name, func in inspect.getmembers(self._cls)
- if not name.startswith('_') and callable(func)]
+ return [name for name,func in inspect.getmembers(self._cls)
+ if ((not name.startswith('_')
+ or name in self.extra_public_methods)
+ and isinstance(func, collections.Callable))]
@property
def properties(self):
if self._cls is None:
return []
- return [name for name, func in inspect.getmembers(self._cls)
- if not name.startswith('_') and func is None]
+ return [name for name,func in inspect.getmembers(self._cls)
+ if not name.startswith('_') and
+ (func is None or isinstance(func, property) or
+ inspect.isgetsetdescriptor(func))]
\ No newline at end of file
| closes #4790
I copied the approach of numpy: extending the autosummary class template to automatically generate all method stub files:
- https://github.com/numpy/numpy/blob/master/doc/source/_templates/autosummary/class.rst
- example output: http://docs.scipy.org/doc/numpy/reference/generated/numpy.ndarray.html
I added `DataFrame`, `Series`, `Panel`, `Index` and `DatetimeIndex`. Others? (eg `Timestamp`) Or too much?
The only issue at the moment is that normally also a list of attributes should be included, which is not the case (only a list of methods)
| https://api.github.com/repos/pandas-dev/pandas/pulls/5160 | 2013-10-08T20:54:57Z | 2013-10-14T18:18:38Z | 2013-10-14T18:18:38Z | 2014-06-18T10:12:24Z |
API/PERF: restore auto-boxing on isnull/notnull for Series (w/o copy) preserves perf gains | diff --git a/pandas/core/common.py b/pandas/core/common.py
index 009de0395f361..a417e00af5d3e 100644
--- a/pandas/core/common.py
+++ b/pandas/core/common.py
@@ -202,6 +202,11 @@ def _isnull_ndarraylike(obj):
else:
result = np.isnan(values)
+ # box
+ if isinstance(obj, ABCSeries):
+ from pandas import Series
+ result = Series(result, index=obj.index, name=obj.name, copy=False)
+
return result
@@ -226,6 +231,11 @@ def _isnull_ndarraylike_old(obj):
else:
result = -np.isfinite(values)
+ # box
+ if isinstance(obj, ABCSeries):
+ from pandas import Series
+ result = Series(result, index=obj.index, name=obj.name, copy=False)
+
return result
diff --git a/pandas/tests/test_common.py b/pandas/tests/test_common.py
index 5efc22ef927d5..68195fb3d6ec5 100644
--- a/pandas/tests/test_common.py
+++ b/pandas/tests/test_common.py
@@ -66,7 +66,7 @@ def test_notnull():
with cf.option_context("mode.use_inf_as_null", False):
for s in [tm.makeFloatSeries(),tm.makeStringSeries(),
tm.makeObjectSeries(),tm.makeTimeSeries(),tm.makePeriodSeries()]:
- assert(isinstance(isnull(s), np.ndarray))
+ assert(isinstance(isnull(s), Series))
def test_isnull():
assert not isnull(1.)
@@ -77,7 +77,7 @@ def test_isnull():
for s in [tm.makeFloatSeries(),tm.makeStringSeries(),
tm.makeObjectSeries(),tm.makeTimeSeries(),tm.makePeriodSeries()]:
- assert(isinstance(isnull(s), np.ndarray))
+ assert(isinstance(isnull(s), Series))
# call on DataFrame
df = DataFrame(np.random.randn(10, 5))
| restore API before #5154
boxing via isnull/notnull is the same as 0.12
(aside from now `isnull/notnull` are now available on `NDFrame` objs as well)
| https://api.github.com/repos/pandas-dev/pandas/pulls/5159 | 2013-10-08T17:05:16Z | 2013-10-08T17:26:45Z | 2013-10-08T17:26:45Z | 2014-07-16T08:33:46Z |
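The restored boxing means `isnull`/`notnull` on a Series return a Series aligned with the input's index — behavior that current pandas still has:

```python
import numpy as np
import pandas as pd

s = pd.Series([1.0, np.nan, 3.0], index=['a', 'b', 'c'])

mask = pd.isnull(s)
print(type(mask).__name__)  # 'Series' -- boxed, sharing the input's index

# Because the result is index-aligned, it can drive boolean selection:
print(s[pd.notnull(s)].index.tolist())  # ['a', 'c']
```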
DOC: Added filtering example to groupby doc | diff --git a/doc/source/groupby.rst b/doc/source/groupby.rst
index 723aee64fd0d9..7b769eeccbe68 100644
--- a/doc/source/groupby.rst
+++ b/doc/source/groupby.rst
@@ -571,6 +571,13 @@ with NaNs.
dff.groupby('B').filter(lambda x: len(x) > 2, dropna=False)
+For dataframes with multiple columns, filters should explicitly specify a column as the filter criterion.
+
+.. ipython:: python
+
+ dff['C'] = np.arange(8)
+ dff.groupby('B').filter(lambda x: len(x['C']) > 2)
+
.. _groupby.dispatch:
Dispatching to instance methods
@@ -650,7 +657,7 @@ The dimension of the returned result can also change:
.. ipython:: python
def f(x):
- return Series([ x, x**2 ], index = ['x', 'x^s'])
+ return Series([ x, x**2 ], index = ['x', 'x^s'])
s = Series(np.random.rand(5))
s
s.apply(f)
diff --git a/doc/source/release.rst b/doc/source/release.rst
index a1434c0bc927f..f008109f9de8e 100644
--- a/doc/source/release.rst
+++ b/doc/source/release.rst
@@ -576,6 +576,7 @@ Bug Fixes
function (:issue:`5150`).
- Fix a bug with ``NDFrame.replace()`` which made replacement appear as
though it was (incorrectly) using regular expressions (:issue:`5143`).
+ - Better error message from ``to_datetime`` for unparseable input (:issue:`4928`)
pandas 0.12.0
-------------
diff --git a/pandas/tseries/tests/test_timeseries.py b/pandas/tseries/tests/test_timeseries.py
index cda84a99a95db..473ea21da1585 100644
--- a/pandas/tseries/tests/test_timeseries.py
+++ b/pandas/tseries/tests/test_timeseries.py
@@ -931,6 +931,16 @@ def test_to_datetime_types(self):
### expected = to_datetime('2012')
### self.assert_(result == expected)
+ def test_to_datetime_unprocessable_input(self):
+ # GH 4928
+ self.assert_(
+ np.array_equal(
+ to_datetime([1, '1']),
+ np.array([1, '1'], dtype='O')
+ )
+ )
+ self.assertRaises(TypeError, to_datetime, [1, '1'], errors='raise')
+
def test_to_datetime_other_datetime64_units(self):
# 5/25/2012
scalar = np.int64(1337904000000000).view('M8[us]')
diff --git a/pandas/tslib.pyx b/pandas/tslib.pyx
index ff3284b72aecb..3dcfa3621895e 100644
--- a/pandas/tslib.pyx
+++ b/pandas/tslib.pyx
@@ -1055,7 +1055,7 @@ def array_to_datetime(ndarray[object] values, raise_=False, dayfirst=False,
val = values[i]
if util._checknull(val):
oresult[i] = val
- else:
+ elif util.is_string_object(val):
if len(val) == 0:
# TODO: ??
oresult[i] = 'NaT'
@@ -1069,6 +1069,10 @@ def array_to_datetime(ndarray[object] values, raise_=False, dayfirst=False,
raise
return values
# oresult[i] = val
+ else:
+ if raise_:
+ raise
+ return values
return oresult
| closes https://github.com/pydata/pandas/issues/4630
| https://api.github.com/repos/pandas-dev/pandas/pulls/5158 | 2013-10-08T14:30:31Z | 2013-10-08T14:51:04Z | null | 2013-10-08T14:51:18Z |
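The groupby-filter example added to the docs, runnable as-is (same `dff` as the docs snippet):

```python
import numpy as np
import pandas as pd

dff = pd.DataFrame({'A': np.arange(8), 'B': list('aabbbbcc')})

# Only groups of 'B' with more than two rows survive; here that is
# just the 'b' group (4 rows vs 2 each for 'a' and 'c').
out = dff.groupby('B').filter(lambda x: len(x) > 2)
print(sorted(out['B'].unique()))  # ['b']
```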
BUG: Fix to_datetime() uncaught error with unparseable inputs #4928 | diff --git a/doc/source/release.rst b/doc/source/release.rst
index 85d9be1295e29..b3668ed80096e 100644
--- a/doc/source/release.rst
+++ b/doc/source/release.rst
@@ -575,6 +575,7 @@ Bug Fixes
function (:issue:`5150`).
- Fix a bug with ``NDFrame.replace()`` which made replacement appear as
though it was (incorrectly) using regular expressions (:issue:`5143`).
+ - Better error message from ``to_datetime`` for unparseable input (:issue:`4928`)
pandas 0.12.0
-------------
diff --git a/pandas/tseries/tests/test_timeseries.py b/pandas/tseries/tests/test_timeseries.py
index cda84a99a95db..473ea21da1585 100644
--- a/pandas/tseries/tests/test_timeseries.py
+++ b/pandas/tseries/tests/test_timeseries.py
@@ -931,6 +931,16 @@ def test_to_datetime_types(self):
### expected = to_datetime('2012')
### self.assert_(result == expected)
+ def test_to_datetime_unprocessable_input(self):
+ # GH 4928
+ self.assert_(
+ np.array_equal(
+ to_datetime([1, '1']),
+ np.array([1, '1'], dtype='O')
+ )
+ )
+ self.assertRaises(TypeError, to_datetime, [1, '1'], errors='raise')
+
def test_to_datetime_other_datetime64_units(self):
# 5/25/2012
scalar = np.int64(1337904000000000).view('M8[us]')
diff --git a/pandas/tslib.pyx b/pandas/tslib.pyx
index ff3284b72aecb..3dcfa3621895e 100644
--- a/pandas/tslib.pyx
+++ b/pandas/tslib.pyx
@@ -1055,7 +1055,7 @@ def array_to_datetime(ndarray[object] values, raise_=False, dayfirst=False,
val = values[i]
if util._checknull(val):
oresult[i] = val
- else:
+ elif util.is_string_object(val):
if len(val) == 0:
# TODO: ??
oresult[i] = 'NaT'
@@ -1069,6 +1069,10 @@ def array_to_datetime(ndarray[object] values, raise_=False, dayfirst=False,
raise
return values
# oresult[i] = val
+ else:
+ if raise_:
+ raise
+ return values
return oresult
| closes #4928
| https://api.github.com/repos/pandas-dev/pandas/pulls/5157 | 2013-10-08T14:17:38Z | 2013-10-08T14:30:56Z | 2013-10-08T14:30:56Z | 2014-06-17T13:24:48Z |
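A sketch of the behavior the fix targets, using modern pandas spelling — the `errors='raise'` / return-the-input modes tested in the diff have shifted across versions, so `errors='coerce'` is used here as the predictable way to handle unparseable entries:

```python
import pandas as pd

# Mixed, partly unparseable input: with errors='coerce' the bad
# entries become NaT instead of raising a confusing error.
res = pd.to_datetime(['2013-10-08', 'not a date'], errors='coerce')
print(pd.isna(res).tolist())  # [False, True]
```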
PERF: remove auto-boxing on isnull/notnull | diff --git a/doc/source/api.rst b/doc/source/api.rst
index d062512ac3a5b..8772b858de2cc 100644
--- a/doc/source/api.rst
+++ b/doc/source/api.rst
@@ -258,6 +258,8 @@ Conversion / Constructors
Series.__init__
Series.astype
Series.copy
+ Series.isnull
+ Series.notnull
Indexing, iteration
~~~~~~~~~~~~~~~~~~~
@@ -472,6 +474,8 @@ Conversion / Constructors
DataFrame.astype
DataFrame.convert_objects
DataFrame.copy
+ DataFrame.isnull
+ DataFrame.notnull
Indexing, iteration
~~~~~~~~~~~~~~~~~~~
@@ -714,6 +718,8 @@ Conversion / Constructors
Panel.__init__
Panel.astype
Panel.copy
+ Panel.isnull
+ Panel.notnull
Getting and setting
~~~~~~~~~~~~~~~~~~~
@@ -976,7 +982,7 @@ Time/Date Components
* **week**: Same as weekofyear
* **dayofweek**: (0=Monday, 6=Sunday)
* **weekday**: (0=Monday, 6=Sunday)
- * **dayofyear**
+ * **dayofyear**
* **quarter**
* **date**: Returns date component of Timestamps
@@ -990,7 +996,7 @@ Selecting
DatetimeIndex.indexer_at_time
DatetimeIndex.indexer_between_time
-
+
Time-specific operations
~~~~~~~~~~~~~~~~~~~~~~~~
diff --git a/doc/source/release.rst b/doc/source/release.rst
index 85d9be1295e29..a1434c0bc927f 100644
--- a/doc/source/release.rst
+++ b/doc/source/release.rst
@@ -323,6 +323,7 @@ See :ref:`Internal Refactoring<whatsnew_0130.refactoring>`
- ``filter`` (also added axis argument to selectively filter on a different axis)
- ``reindex,reindex_axis,take``
- ``truncate`` (moved to become part of ``NDFrame``)
+ - ``isnull/notnull`` now available on ``NDFrame`` objects
- These are API changes which make ``Panel`` more consistent with ``DataFrame``
diff --git a/pandas/core/common.py b/pandas/core/common.py
index 33df305a721a6..009de0395f361 100644
--- a/pandas/core/common.py
+++ b/pandas/core/common.py
@@ -182,7 +182,7 @@ def _use_inf_as_null(key):
def _isnull_ndarraylike(obj):
- values = obj
+ values = getattr(obj,'values',obj)
dtype = values.dtype
if dtype.kind in ('O', 'S', 'U'):
@@ -198,22 +198,15 @@ def _isnull_ndarraylike(obj):
elif dtype in _DATELIKE_DTYPES:
# this is the NaT pattern
- v = getattr(values, 'asi8', None)
- if v is None:
- v = values.view('i8')
- result = v == tslib.iNaT
+ result = values.view('i8') == tslib.iNaT
else:
- result = np.isnan(obj)
-
- if isinstance(obj, ABCSeries):
- from pandas import Series
- result = Series(result, index=obj.index, copy=False)
+ result = np.isnan(values)
return result
def _isnull_ndarraylike_old(obj):
- values = obj
+ values = getattr(obj,'values',obj)
dtype = values.dtype
if dtype.kind in ('O', 'S', 'U'):
@@ -229,16 +222,9 @@ def _isnull_ndarraylike_old(obj):
elif dtype in _DATELIKE_DTYPES:
# this is the NaT pattern
- v = getattr(values, 'asi8', None)
- if v is None:
- v = values.view('i8')
- result = v == tslib.iNaT
+ result = values.view('i8') == tslib.iNaT
else:
- result = -np.isfinite(obj)
-
- if isinstance(obj, ABCSeries):
- from pandas import Series
- result = Series(result, index=obj.index, copy=False)
+ result = -np.isfinite(values)
return result
diff --git a/pandas/core/generic.py b/pandas/core/generic.py
index bb47709532523..9dadeb4ef6e97 100644
--- a/pandas/core/generic.py
+++ b/pandas/core/generic.py
@@ -2019,6 +2019,18 @@ def interpolate(self, to_replace, method='pad', axis=0, inplace=False,
#----------------------------------------------------------------------
# Action Methods
+ def isnull(self):
+ """
+ Return a boolean same-sized object indicating if the values are null
+ """
+ return self.__class__(isnull(self),**self._construct_axes_dict())._propogate_attributes(self)
+
+ def notnull(self):
+ """
+ Return a boolean same-sized object indicating if the values are not null
+ """
+ return self.__class__(notnull(self),**self._construct_axes_dict())._propogate_attributes(self)
+
def clip(self, lower=None, upper=None, out=None):
"""
Trim values at input threshold(s)
diff --git a/pandas/core/ops.py b/pandas/core/ops.py
index a10f3582bfe45..17d524978158b 100644
--- a/pandas/core/ops.py
+++ b/pandas/core/ops.py
@@ -539,7 +539,7 @@ def wrapper(self, other):
# mask out the invalids
if mask.any():
- res[mask.values] = masker
+ res[mask] = masker
return res
return wrapper
diff --git a/pandas/core/series.py b/pandas/core/series.py
index ddbb67cc0c323..d185939d6abc9 100644
--- a/pandas/core/series.py
+++ b/pandas/core/series.py
@@ -2174,9 +2174,6 @@ def dropna(self):
valid = lambda self: self.dropna()
- isnull = isnull
- notnull = notnull
-
def first_valid_index(self):
"""
Return label for first non-NA/null value
diff --git a/pandas/tests/test_common.py b/pandas/tests/test_common.py
index 740e3d0821cd7..5efc22ef927d5 100644
--- a/pandas/tests/test_common.py
+++ b/pandas/tests/test_common.py
@@ -64,11 +64,9 @@ def test_notnull():
assert result.sum() == 2
with cf.option_context("mode.use_inf_as_null", False):
- float_series = Series(np.random.randn(5))
- obj_series = Series(np.random.randn(5), dtype=object)
- assert(isinstance(notnull(float_series), Series))
- assert(isinstance(notnull(obj_series), Series))
-
+ for s in [tm.makeFloatSeries(),tm.makeStringSeries(),
+ tm.makeObjectSeries(),tm.makeTimeSeries(),tm.makePeriodSeries()]:
+ assert(isinstance(isnull(s), np.ndarray))
def test_isnull():
assert not isnull(1.)
@@ -77,10 +75,9 @@ def test_isnull():
assert not isnull(np.inf)
assert not isnull(-np.inf)
- float_series = Series(np.random.randn(5))
- obj_series = Series(np.random.randn(5), dtype=object)
- assert(isinstance(isnull(float_series), Series))
- assert(isinstance(isnull(obj_series), Series))
+ for s in [tm.makeFloatSeries(),tm.makeStringSeries(),
+ tm.makeObjectSeries(),tm.makeTimeSeries(),tm.makePeriodSeries()]:
+ assert(isinstance(isnull(s), np.ndarray))
# call on DataFrame
df = DataFrame(np.random.randn(10, 5))
diff --git a/pandas/tests/test_frame.py b/pandas/tests/test_frame.py
index 2e386a7e2816a..64a45d344f2a9 100644
--- a/pandas/tests/test_frame.py
+++ b/pandas/tests/test_frame.py
@@ -1546,7 +1546,7 @@ def test_set_value_resize(self):
res = self.frame.copy()
res3 = res.set_value('foobar', 'baz', 5)
self.assert_(com.is_float_dtype(res3['baz']))
- self.assert_(isnull(res3['baz'].drop(['foobar'])).values.all())
+ self.assert_(isnull(res3['baz'].drop(['foobar'])).all())
self.assertRaises(ValueError, res3.set_value, 'foobar', 'baz', 'sam')
def test_set_value_with_index_dtype_change(self):
diff --git a/pandas/tests/test_series.py b/pandas/tests/test_series.py
index 7f3ea130259dc..8abc068fd6d24 100644
--- a/pandas/tests/test_series.py
+++ b/pandas/tests/test_series.py
@@ -3667,17 +3667,17 @@ def test_valid(self):
def test_isnull(self):
ser = Series([0, 5.4, 3, nan, -0.001])
- assert_series_equal(
- ser.isnull(), Series([False, False, False, True, False]))
+ np.array_equal(
+ ser.isnull(), Series([False, False, False, True, False]).values)
ser = Series(["hi", "", nan])
- assert_series_equal(ser.isnull(), Series([False, False, True]))
+ np.array_equal(ser.isnull(), Series([False, False, True]).values)
def test_notnull(self):
ser = Series([0, 5.4, 3, nan, -0.001])
- assert_series_equal(
- ser.notnull(), Series([True, True, True, False, True]))
+ np.array_equal(
+ ser.notnull(), Series([True, True, True, False, True]).values)
ser = Series(["hi", "", nan])
- assert_series_equal(ser.notnull(), Series([True, True, False]))
+ np.array_equal(ser.notnull(), Series([True, True, False]).values)
def test_shift(self):
shifted = self.ts.shift(1)
| - API/CLN: provide isnull/notnull on NDFrame objects
- PERF: remove auto-boxing on isnull/notnull
related #5135
| https://api.github.com/repos/pandas-dev/pandas/pulls/5154 | 2013-10-08T13:04:26Z | 2013-10-08T13:14:12Z | 2013-10-08T13:14:12Z | 2014-07-16T08:33:41Z |
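The `values.view('i8') == tslib.iNaT` check the patch keeps can be illustrated with plain NumPy; `iNaT` is the int64 minimum, which both pandas and NumPy use as the NaT sentinel:

```python
import numpy as np

values = np.array(['2013-10-08', 'NaT', '2013-10-09'],
                  dtype='datetime64[ns]')

# View the datetime64 data as raw int64 and compare against the
# NaT sentinel -- this is the "NaT pattern" in _isnull_ndarraylike.
iNaT = np.iinfo(np.int64).min
mask = values.view('i8') == iNaT
print(mask.tolist())  # [False, True, False]
```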
TST/CI/BUG: fix borked call to read_html testing code | diff --git a/pandas/io/tests/test_html.py b/pandas/io/tests/test_html.py
index 9b0fb1cacfb65..71567fe2e599a 100644
--- a/pandas/io/tests/test_html.py
+++ b/pandas/io/tests/test_html.py
@@ -86,13 +86,14 @@ def test_bs4_version_fails():
class TestReadHtml(unittest.TestCase):
+ @classmethod
+ def setUpClass(cls):
+ _skip_if_none_of(('bs4', 'html5lib'))
+
def read_html(self, *args, **kwargs):
kwargs['flavor'] = kwargs.get('flavor', self.flavor)
return read_html(*args, **kwargs)
- def try_skip(self):
- _skip_if_none_of(('bs4', 'html5lib'))
-
def setup_data(self):
self.spam_data = os.path.join(DATA_PATH, 'spam.html')
self.banklist_data = os.path.join(DATA_PATH, 'banklist.html')
@@ -101,7 +102,6 @@ def setup_flavor(self):
self.flavor = 'bs4'
def setUp(self):
- self.try_skip()
self.setup_data()
self.setup_flavor()
@@ -569,32 +569,29 @@ def test_different_number_of_rows(self):
def test_parse_dates_list(self):
df = DataFrame({'date': date_range('1/1/2001', periods=10)})
expected = df.to_html()
- res = read_html(expected, parse_dates=[0], index_col=0)
+ res = self.read_html(expected, parse_dates=[0], index_col=0)
tm.assert_frame_equal(df, res[0])
def test_parse_dates_combine(self):
raw_dates = Series(date_range('1/1/2001', periods=10))
df = DataFrame({'date': raw_dates.map(lambda x: str(x.date())),
'time': raw_dates.map(lambda x: str(x.time()))})
- res = read_html(df.to_html(), parse_dates={'datetime': [1, 2]},
- index_col=1)
+ res = self.read_html(df.to_html(), parse_dates={'datetime': [1, 2]},
+ index_col=1)
newdf = DataFrame({'datetime': raw_dates})
tm.assert_frame_equal(newdf, res[0])
class TestReadHtmlLxml(unittest.TestCase):
- def setUp(self):
- self.try_skip()
+ @classmethod
+ def setUpClass(cls):
+ _skip_if_no('lxml')
def read_html(self, *args, **kwargs):
self.flavor = ['lxml']
- self.try_skip()
kwargs['flavor'] = kwargs.get('flavor', self.flavor)
return read_html(*args, **kwargs)
- def try_skip(self):
- _skip_if_no('lxml')
-
def test_data_fail(self):
from lxml.etree import XMLSyntaxError
spam_data = os.path.join(DATA_PATH, 'spam.html')
@@ -616,8 +613,22 @@ def test_works_on_valid_markup(self):
def test_fallback_success(self):
_skip_if_none_of(('bs4', 'html5lib'))
banklist_data = os.path.join(DATA_PATH, 'banklist.html')
- self.read_html(banklist_data, '.*Water.*', flavor=['lxml',
- 'html5lib'])
+ self.read_html(banklist_data, '.*Water.*', flavor=['lxml', 'html5lib'])
+
+ def test_parse_dates_list(self):
+ df = DataFrame({'date': date_range('1/1/2001', periods=10)})
+ expected = df.to_html()
+ res = self.read_html(expected, parse_dates=[0], index_col=0)
+ tm.assert_frame_equal(df, res[0])
+
+ def test_parse_dates_combine(self):
+ raw_dates = Series(date_range('1/1/2001', periods=10))
+ df = DataFrame({'date': raw_dates.map(lambda x: str(x.date())),
+ 'time': raw_dates.map(lambda x: str(x.time()))})
+ res = self.read_html(df.to_html(), parse_dates={'datetime': [1, 2]},
+ index_col=1)
+ newdf = DataFrame({'datetime': raw_dates})
+ tm.assert_frame_equal(newdf, res[0])
def test_invalid_flavor():
| closes #5150.
| https://api.github.com/repos/pandas-dev/pandas/pulls/5151 | 2013-10-08T03:12:45Z | 2013-10-08T03:41:58Z | 2013-10-08T03:41:58Z | 2014-06-27T22:14:09Z |
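The class-level skip pattern the patch switches to, in a self-contained sketch (the `_skip_if_no` helper and test class here are illustrative stand-ins, not pandas code): a `SkipTest` raised in `setUpClass` skips the whole class once, instead of re-checking dependencies in `setUp` before every test method.

```python
import unittest

def _skip_if_no(name):
    # Hypothetical helper mirroring the one in pandas' html tests.
    try:
        __import__(name)
    except ImportError:
        raise unittest.SkipTest('%s not installed' % name)

class TestNeedsJson(unittest.TestCase):
    @classmethod
    def setUpClass(cls):
        _skip_if_no('json')  # stdlib, so this never actually skips here

    def test_trivial(self):
        self.assertTrue(True)
```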
TST: use default utf8 encoding when passing encoding to parser as Bytes (GH5141) | diff --git a/pandas/io/tests/test_parsers.py b/pandas/io/tests/test_parsers.py
index ada6ffdc34257..cf0c01c8dff50 100644
--- a/pandas/io/tests/test_parsers.py
+++ b/pandas/io/tests/test_parsers.py
@@ -2147,7 +2147,7 @@ def test_fwf_compression(self):
def test_BytesIO_input(self):
if not compat.PY3:
raise nose.SkipTest("Bytes-related test - only needs to work on Python 3")
- result = pd.read_fwf(BytesIO("שלום\nשלום".encode('utf8')), widths=[2,2])
+ result = pd.read_fwf(BytesIO("שלום\nשלום".encode('utf8')), widths=[2,2], encoding='utf8')
 expected = pd.DataFrame([["של", "ום"]], columns=["של", "ום"])
 tm.assert_frame_equal(result, expected)
 data = BytesIO("שלום::1234\n562::123".encode('cp1255'))
@@ -2319,9 +2319,9 @@ def test_variable_width_unicode(self):
של ום
'''.strip('\r\n')
expected = pd.read_fwf(BytesIO(test.encode('utf8')),
- colspecs=[(0, 4), (5, 9)], header=None)
+ colspecs=[(0, 4), (5, 9)], header=None, encoding='utf8')
tm.assert_frame_equal(expected, read_fwf(BytesIO(test.encode('utf8')),
- header=None))
+ header=None, encoding='utf8'))
class TestCParserHighMemory(ParserTests, unittest.TestCase):
| closes #5141
| https://api.github.com/repos/pandas-dev/pandas/pulls/5149 | 2013-10-08T01:50:52Z | 2013-10-08T02:04:28Z | 2013-10-08T02:04:28Z | 2014-07-16T08:33:38Z |
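The point of the fix — when handing the parser raw bytes, pass the encoding explicitly rather than relying on a platform default — in a minimal sketch (shown with `read_csv`, which takes `encoding` the same way as `read_fwf`):

```python
from io import BytesIO
import pandas as pd

# Non-ASCII bytes decode correctly only when the encoding is explicit.
data = "name,café\n1,2".encode('utf8')
df = pd.read_csv(BytesIO(data), encoding='utf8')
print(df.columns.tolist())  # ['name', 'café']
```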
ENH: Cleanup backend for Offsets and Period | diff --git a/doc/source/release.rst b/doc/source/release.rst
index 121cfb92b0eb2..e16a12ce0bc91 100644
--- a/doc/source/release.rst
+++ b/doc/source/release.rst
@@ -312,6 +312,11 @@ Improvements to existing features
in item handling (:issue:`6745`, :issue:`6988`).
- Improve performance in certain reindexing operations by optimizing ``take_2d`` (:issue:`6749`)
- Arrays of strings can be wrapped to a specified width (``str.wrap``) (:issue:`6999`)
+- Constructor for ``Period`` now takes full set of possible ``Offset`` objects for ``freq``
+ parameter. (:issue:`4878`)
+- Extends the number of ``Period``s supported by allowing for Python defined ``Period``s (:issue:`5148`)
+- Added ``inferred_freq_offset`` as property on ``DatetimeIndex`` to provide the actual
+ Offset object rather than the string representation (:issue:`5082`).
.. _release.bug_fixes-0.14.0:
@@ -459,6 +464,7 @@ Bug Fixes
- Bug in timeseries-with-frequency plot cursor display (:issue:`5453`)
- Bug surfaced in groupby.plot when using a ``Float64Index`` (:issue:`7025`)
- Stopped tests from failing if options data isn't able to be downloaded from Yahoo (:issue:`7034`)
+- Bug in not correctly treating 'QS', 'BQS', 'BQ', 'Y' as frequency aliases (:issue:`5028`).
pandas 0.13.1
-------------
diff --git a/doc/source/v0.14.0.txt b/doc/source/v0.14.0.txt
index f19c1210b6a37..fc561a1f99387 100644
--- a/doc/source/v0.14.0.txt
+++ b/doc/source/v0.14.0.txt
@@ -234,7 +234,7 @@ API changes
covs[df.index[-1]]
- ``Series.iteritems()`` is now lazy (returns an iterator rather than a list). This was the documented behavior prior to 0.14. (:issue:`6760`)
-
+- ``pd.infer_freq`` and ``DatetimeIndex.inferred_freq`` now return a DateOffset subclass rather than a string. (:issue:`5082`)
- Added ``nunique`` and ``value_counts`` functions to ``Index`` for counting unique elements. (:issue:`6734`)
- ``stack`` and ``unstack`` now raise a ``ValueError`` when the ``level`` keyword refers
to a non-unique item in the ``Index`` (previously raised a ``KeyError``).
@@ -554,6 +554,9 @@ Enhancements
values='Quantity', aggfunc=np.sum)
- str.wrap implemented (:issue:`6999`)
+- Constructor for ``Period`` now takes full set of possible ``Offset`` objects for ``freq``
+ parameter. (:issue:`4878`)
+- Extends the number of ``Period``s supported by allowing for Python defined ``Period``s (:issue:`5148`)
.. _whatsnew_0140.performance:
diff --git a/pandas/core/series.py b/pandas/core/series.py
index 74f038b2bad23..d3f8b4c4ed831 100644
--- a/pandas/core/series.py
+++ b/pandas/core/series.py
@@ -2361,7 +2361,7 @@ def to_period(self, freq=None, copy=True):
new_values = new_values.copy()
if freq is None:
- freq = self.index.freqstr or self.index.inferred_freq
+ freq = self.index.freq or self.index.inferred_freq
new_index = self.index.to_period(freq=freq)
return self._constructor(new_values,
index=new_index).__finalize__(self)
diff --git a/pandas/tseries/frequencies.py b/pandas/tseries/frequencies.py
index e3c933e116987..12a8ac4844552 100644
--- a/pandas/tseries/frequencies.py
+++ b/pandas/tseries/frequencies.py
@@ -71,7 +71,7 @@ def get_freq(freq):
return freq
-def get_freq_code(freqstr):
+def get_freq_code(freqstr, as_periodstr=False):
"""
Parameters
@@ -81,7 +81,13 @@ def get_freq_code(freqstr):
-------
"""
if isinstance(freqstr, DateOffset):
- freqstr = (get_offset_name(freqstr), freqstr.n)
+ freqstr_raw = get_offset_name(freqstr)
+
+ # if possible, convert to the canonical period string
+ if as_periodstr:
+ freqstr_raw = get_period_alias(freqstr_raw)
+
+ freqstr = (freqstr_raw, freqstr.n)
if isinstance(freqstr, tuple):
if (com.is_integer(freqstr[0]) and
@@ -113,7 +119,7 @@ def _get_freq_str(base, mult=1):
code = _reverse_period_code_map.get(base)
if mult == 1:
return code
- return str(mult) + code
+ return "%s%s" % (mult, code)
#----------------------------------------------------------------------
@@ -157,6 +163,7 @@ def _get_freq_str(base, mult=1):
'H': 'H',
'Q': 'Q',
'A': 'A',
+ 'Y': 'A',
'W': 'W',
'M': 'M'
}
@@ -202,6 +209,9 @@ def get_period_alias(offset_str):
'Q@FEB': 'BQ-FEB',
'Q@MAR': 'BQ-MAR',
'Q': 'Q-DEC',
+ 'QS': 'QS-JAN',
+ 'BQ': 'BQ-DEC',
+ 'BQS': 'BQS-JAN',
'A': 'A-DEC', # YearEnd(month=12),
'AS': 'AS-JAN', # YearBegin(month=1),
@@ -387,19 +397,44 @@ def get_legacy_offset_name(offset):
name = offset.name
return _legacy_reverse_map.get(name, name)
-def get_standard_freq(freq):
+def get_standard_freq(freq, as_periodstr=False):
"""
- Return the standardized frequency string
+ Return the standardized frequency string.
+ as_periodstr=True returns the string representing the period rather than
+ the frequency. These can differ: for example, MonthBegin and MonthEnd are
+ two distinct frequencies, but they define the same period.
+
+ >>> get_standard_freq(pandas.tseries.offsets.MonthBegin(), as_periodstr=False)
+ 'MS'
+ >>> get_standard_freq(pandas.tseries.offsets.MonthEnd(), as_periodstr=False)
+ 'M'
+ >>> get_standard_freq(pandas.tseries.offsets.MonthBegin(), as_periodstr=True)
+ 'M'
+ >>> get_standard_freq(pandas.tseries.offsets.MonthEnd(), as_periodstr=True)
+ 'M'
"""
if freq is None:
return None
- if isinstance(freq, DateOffset):
- return get_offset_name(freq)
+ code, stride = get_freq_code(freq, as_periodstr=as_periodstr)
- code, stride = get_freq_code(freq)
return _get_freq_str(code, stride)
+def _get_standard_period_freq_impl(freq):
+ return get_standard_freq(freq, as_periodstr=True)
+
+def get_standard_period_freq(freq):
+ if isinstance(freq, DateOffset):
+ return freq.periodstr
+
+ return _get_standard_period_freq_impl(freq)
+
+def _assert_mult_1(mult):
+ if mult != 1:
+ # TODO: Better error message - this is slightly confusing
+ raise ValueError('Only mult == 1 supported')
+
#----------------------------------------------------------------------
# Period codes
@@ -629,7 +664,7 @@ def infer_freq(index, warn=True):
Returns
-------
- freq : string or None
+ freq : DateOffset object or None
None if no discernible frequency
TypeError if the index is not datetime-like
"""
@@ -650,7 +685,28 @@ def infer_freq(index, warn=True):
index = pd.DatetimeIndex(index)
inferer = _FrequencyInferer(index, warn=warn)
- return inferer.get_freq()
+ return to_offset(inferer.get_freq())
+
+
+def infer_freqstr(index, warn=True):
+ """
+ Infer the most likely frequency given the input index. If the frequency is
+ uncertain, a warning will be printed
+
+ Parameters
+ ----------
+ index : DatetimeIndex
+ if passed a Series will use the values of the series (NOT THE INDEX)
+ warn : boolean, default True
+
+ Returns
+ -------
+ freq : string or None
+ None if no discernible frequency
+ TypeError if the index is not datetime-like
+ """
+ return infer_freq(index, warn).freqstr
+
_ONE_MICRO = long(1000)
_ONE_MILLI = _ONE_MICRO * 1000
@@ -887,9 +943,11 @@ def is_subperiod(source, target):
-------
is_subperiod : boolean
"""
+ source_raw = source
if isinstance(source, offsets.DateOffset):
source = source.rule_code
+ target_raw = target
if isinstance(target, offsets.DateOffset):
target = target.rule_code
@@ -918,6 +976,12 @@ def is_subperiod(source, target):
return source in ['T', 'S']
elif target == 'S':
return source in ['S']
+ elif isinstance(source_raw, offsets._NonCythonPeriod):
+ return source_raw.is_subperiod(target_raw)
+ elif isinstance(target_raw, offsets._NonCythonPeriod):
+ return target_raw.is_superperiod(source_raw)
+ else:
+ return False
def is_superperiod(source, target):
@@ -936,9 +1000,11 @@ def is_superperiod(source, target):
-------
is_superperiod : boolean
"""
+ source_raw = source
if isinstance(source, offsets.DateOffset):
source = source.rule_code
+ target_raw = target
if isinstance(target, offsets.DateOffset):
target = target.rule_code
@@ -971,6 +1037,12 @@ def is_superperiod(source, target):
return target in ['T', 'S']
elif source == 'S':
return target in ['S']
+ elif isinstance(source_raw, offsets._NonCythonPeriod):
+ return source_raw.is_superperiod(target_raw)
+ elif isinstance(target_raw, offsets._NonCythonPeriod):
+ return target_raw.is_subperiod(source_raw)
+ else:
+ return False
def _get_rule_month(source, default='DEC'):
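The new fallback branches in ``is_subperiod``/``is_superperiod`` above implement a simple double-dispatch: if either argument is a Python-defined period, the question is delegated to that object. A self-contained sketch of the pattern (the ``CustomPeriod`` class here is made up for illustration, standing in for ``_NonCythonPeriod`` subclasses):

```python
class CustomPeriod:
    """Illustrative stand-in for a Python-defined period offset."""
    def __init__(self, contains):
        self._contains = set(contains)  # rule codes this period subsumes

    def is_superperiod(self, target):
        return target in self._contains

    def is_subperiod(self, target):
        return False  # nothing subsumes this period in the sketch

def is_subperiod(source, target):
    # The real function consults the built-in rule-code table first (elided);
    # only then does it fall back to delegating to the custom object.
    if isinstance(source, CustomPeriod):
        return source.is_subperiod(target)
    if isinstance(target, CustomPeriod):
        return target.is_superperiod(source)
    return False

fiscal_q = CustomPeriod(contains={'D', 'W-SAT'})
print(is_subperiod('D', fiscal_q))   # delegated to fiscal_q.is_superperiod
print(is_subperiod(fiscal_q, 'D'))
```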
diff --git a/pandas/tseries/index.py b/pandas/tseries/index.py
index a2e01c8110261..5345fc6f8abcf 100644
--- a/pandas/tseries/index.py
+++ b/pandas/tseries/index.py
@@ -14,7 +14,7 @@
from pandas.compat import u
from pandas.tseries.frequencies import (
infer_freq, to_offset, get_period_alias,
- Resolution, get_reso_string, get_offset)
+ Resolution, get_reso_string, get_offset, infer_freqstr)
from pandas.tseries.offsets import DateOffset, generate_range, Tick, CDay
from pandas.tseries.tools import parse_time_string, normalize_date
from pandas.util.decorators import cache_readonly
@@ -792,8 +792,8 @@ def to_period(self, freq=None):
msg = "You must pass a freq argument as current index has none."
raise ValueError(msg)
- if freq is None:
- freq = get_period_alias(self.freqstr)
+ if freq is None:  # No reason to convert to str; keep whatever freq is
+ freq = self.freq
return PeriodIndex(self.values, freq=freq, tz=self.tz)
@@ -1427,6 +1427,13 @@ def inferred_freq(self):
except ValueError:
return None
+ @cache_readonly
+ def inferred_freqstr(self):
+ try:
+ return infer_freqstr(self)
+ except ValueError:
+ return None
+
@property
def freqstr(self):
""" return the frequency object as a string if its set, otherwise None """
diff --git a/pandas/tseries/offsets.py b/pandas/tseries/offsets.py
index 1b8b82235cf08..67950587b9026 100644
--- a/pandas/tseries/offsets.py
+++ b/pandas/tseries/offsets.py
@@ -3,7 +3,7 @@
from pandas import compat
import numpy as np
-from pandas.tseries.tools import to_datetime
+from pandas.tseries.tools import to_datetime, _try_parse_qtr_time_string
# import after tools, dateutil check
from dateutil.relativedelta import relativedelta, weekday
@@ -60,6 +60,54 @@ class CacheableOffset(object):
_cacheable = True
+class _NonCythonPeriod(object):
+ """
+ This class represents the base class for Offsets for which Period logic is
+ not implemented in Cython. This allows fully Python defined Offsets with
+ Period support.
+ All subclasses are expected to implement get_start_dt, get_end_dt,
+ period_format, get_period_ordinal, is_superperiod and is_subperiod.
+ """
+
+ def get_start_dt(self, ordinal):
+ raise NotImplementedError("get_start_dt")
+
+ def get_end_dt(self, ordinal):
+ raise NotImplementedError("get_end_dt")
+
+ def period_format(self, ordinal, fmt=None):
+ raise NotImplementedError("period_format")
+
+ def get_period_ordinal(self, dt):
+ raise NotImplementedError("get_period_ordinal")
+
+ def dt64arr_to_periodarr(self, data, tz):
+ f = np.vectorize(lambda x: self.get_period_ordinal(Timestamp(x)))
+ return f(data.view('i8'))
+
+ def period_asfreq_arr(self, values, freq, end):
+ from pandas.tseries.period import Period
+ f = np.vectorize(lambda x:
+ Period(value=self.period_asfreq_value(x, end), freq=freq).ordinal)
+ return f(values.view('i8'))
+
+ def period_fromfreq_arr(self, values, freq_int_from, end):
+ from pandas.tseries.period import _change_period_freq
+ offset = 0 if end else 1
+ f = np.vectorize(lambda x:
+ _change_period_freq(x, freq_int_from, self).ordinal - offset)
+ return f(values.view('i8'))
+
+ def period_asfreq_value(self, ordinal, end):
+ return self.get_end_dt(ordinal) if end else self.get_start_dt(ordinal)
+
+ def is_superperiod(self, target):
+ raise NotImplementedError("is_superperiod")
+
+ def is_subperiod(self, target):
+ raise NotImplementedError("is_subperiod")
+
+
class DateOffset(object):
"""
Standard kind of date increment used for a date range.
@@ -295,6 +343,19 @@ def freqstr(self):
return fstr
+ @property
+ def periodstr(self):
+ """
+ The string representation for the Period defined by this offset.
+ This may differ from freqstr, which describes the frequency: for example,
+ MonthEnd and MonthBegin are distinct frequencies but define the same period.
+ """
+ from pandas.tseries.frequencies import _get_standard_period_freq_impl
+ return _get_standard_period_freq_impl(self)
+
+ def parse_time_string(self, arg):
+ return None
+
class SingleConstructorOffset(DateOffset):
@classmethod
@@ -1654,14 +1715,14 @@ def get_rule_code_suffix(self):
_int_to_weekday[self.weekday])
@classmethod
- def _parse_suffix(cls, varion_code, startingMonth_code, weekday_code):
- if varion_code == "N":
+ def _parse_suffix(cls, variation_code, startingMonth_code, weekday_code):
+ if variation_code == "N":
variation = "nearest"
- elif varion_code == "L":
+ elif variation_code == "L":
variation = "last"
else:
raise ValueError(
- "Unable to parse varion_code: %s" % (varion_code,))
+ "Unable to parse variation_code: %s" % (variation_code,))
startingMonth = _month_to_int[startingMonth_code]
weekday = _weekday_to_int[weekday_code]
@@ -1677,7 +1738,7 @@ def _from_name(cls, *args):
return cls(**cls._parse_suffix(*args))
-class FY5253Quarter(DateOffset):
+class FY5253Quarter(_NonCythonPeriod, DateOffset):
"""
DateOffset increments between business quarter dates
for 52-53 week fiscal year (also known as a 4-4-5 calendar).
@@ -1828,6 +1889,85 @@ def rule_code(self):
def _from_name(cls, *args):
return cls(**dict(FY5253._parse_suffix(*args[:-1]),
qtr_with_extra_week=int(args[-1])))
+
+ def _get_ordinal_from_y_q(self, fy, fq):
+ """Take zero indexed fq"""
+ return fy * 4 + fq
+
+ def get_period_ordinal(self, dt):
+ year_end = self._offset.get_year_end(dt)
+ year_end_year = year_end.year
+
+ if dt <= year_end:
+ if year_end.month < self._offset.startingMonth:
+ year_end_year -= 1
+ fy = year_end_year
+ else:
+ fy = year_end_year + 1
+ year_end = year_end + self._offset
+
+ fq = 4
+ while dt <= year_end:
+ year_end = year_end - self
+ fq -= 1
+
+ return self._get_ordinal_from_y_q(fy, fq)
+
+ @property
+ def periodstr(self):
+ return self.rule_code
+
+ def period_format(self, ordinal, fmt=None):
+ fy = ordinal // 4
+ fq = (ordinal % 4) + 1
+
+ return "%dQ%d" % (fy, fq)
+
+ def parse_time_string(self, arg):
+ qtr_parsed = _try_parse_qtr_time_string(arg)
+ if qtr_parsed is None:
+ return None
+ else:
+ fy, fq = qtr_parsed
+ return self.get_end_dt(self._get_ordinal_from_y_q(fy, fq - 1))
+
+ def get_start_dt(self, ordinal):
+ fy = ordinal // 4
+ fq = (ordinal % 4) + 1
+
+ year_end = self._offset.get_year_end(datetime(fy, 1, 1))
+ countdown = 4 - fq + 1
+ while countdown:
+ countdown -= 1
+ year_end = year_end - self
+
+ return year_end + relativedelta(days=1)
+
+ def get_end_dt(self, ordinal):
+ fy = ordinal // 4
+ fq = (ordinal % 4) + 1
+
+ year_end = self._offset.get_year_end(datetime(fy, 1, 1))
+ countdown = 4 - fq
+ while countdown:
+ countdown -= 1
+ year_end = year_end - self
+
+ return year_end
+
+ def is_superperiod(self, target):
+ if not isinstance(target, DateOffset):
+ from pandas.tseries.frequencies import get_offset
+ target = get_offset(target)
+
+ if type(target) == Week:
+ return target.weekday == self._offset.weekday
+ elif type(target) == Day:
+ return True
+ return False  # explicitly fall through to False for other offsets
+
+ def is_subperiod(self, target):
+ # TODO: return True for FY5253 once FY5253 implements the period methods
+ return False
class Easter(DateOffset):
'''
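The ordinal scheme used by ``FY5253Quarter`` above packs the fiscal year and a zero-indexed quarter into one integer via ``fy * 4 + fq``, and ``period_format`` renders it as ``"%dQ%d"``. The round trip is easy to check in isolation:

```python
def quarter_to_ordinal(fy, fq0):
    """Encode fiscal year and zero-indexed quarter as a single ordinal."""
    return fy * 4 + fq0

def ordinal_to_quarter(ordinal):
    """Decode an ordinal back to (fiscal year, one-indexed quarter)."""
    return ordinal // 4, (ordinal % 4) + 1

ordinal = quarter_to_ordinal(2013, 3)   # Q4 of fiscal 2013, zero-indexed
print(ordinal)                          # 8055
fy, fq = ordinal_to_quarter(ordinal)
print("%dQ%d" % (fy, fq))               # 2013Q4, as in period_format
```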
diff --git a/pandas/tseries/period.py b/pandas/tseries/period.py
index 6d9e32433cd1e..3411303188a7d 100644
--- a/pandas/tseries/period.py
+++ b/pandas/tseries/period.py
@@ -6,7 +6,8 @@
from pandas.core.base import PandasObject
from pandas.tseries.frequencies import (get_freq_code as _gfc,
- _month_numbers, FreqGroup)
+ _month_numbers, FreqGroup,
+ _assert_mult_1)
from pandas.tseries.index import DatetimeIndex, Int64Index, Index
from pandas.tseries.tools import parse_time_string
import pandas.tseries.frequencies as _freq_mod
@@ -20,6 +21,8 @@
import pandas.tslib as tslib
import pandas.algos as _algos
from pandas.compat import map, zip, u
+from pandas.tseries.offsets import DateOffset, _NonCythonPeriod
+from pandas.util.decorators import cache_readonly
#---------------
@@ -27,7 +30,7 @@
def _period_field_accessor(name, alias):
def f(self):
- base, mult = _gfc(self.freq)
+ base, _ = _gfc(self.freq)
return tslib.get_period_field(alias, self.ordinal, base)
f.__name__ = name
return property(f)
@@ -35,11 +38,21 @@ def f(self):
def _field_accessor(name, alias):
def f(self):
- base, mult = _gfc(self.freq)
+ base, _ = _gfc(self.freq)
return tslib.get_period_field_arr(alias, self.values, base)
f.__name__ = name
return property(f)
+def _check_freq_mult(freq):
+ if isinstance(freq, DateOffset):
+ mult = freq.n
+ else:
+ _, mult = _gfc(freq, as_periodstr=True)
+
+ _assert_mult_1(mult)
+
+def _change_period_freq(ordinal_from, freq_int_from, freq_to):
+ return Period(Timestamp(tslib.period_ordinal_to_dt64(ordinal_from, freq=freq_int_from)), freq=freq_to)
class Period(PandasObject):
"""
@@ -70,8 +83,6 @@ def __init__(self, value=None, freq=None, ordinal=None,
# periods such as A, Q, etc. Every five minutes would be, e.g.,
# ('T', 5) but may be passed in as a string like '5T'
- self.freq = None
-
# ordinal is the period offset from the gregorian proleptic epoch
self.ordinal = None
@@ -94,7 +105,9 @@ def __init__(self, value=None, freq=None, ordinal=None,
elif isinstance(value, Period):
other = value
- if freq is None or _gfc(freq) == _gfc(other.freq):
+ if freq is None \
+ or freq == other.freq \
+ or _gfc(freq, as_periodstr=True) == _gfc(other.freq, as_periodstr=True):  # TODO: use freqstr?
self.ordinal = other.ordinal
freq = other.freq
else:
@@ -118,22 +131,35 @@ def __init__(self, value=None, freq=None, ordinal=None,
else:
msg = "Value must be Period, string, integer, or datetime"
raise ValueError(msg)
+
+ _check_freq_mult(freq)
+
+ # TODO: Fix this
+ if not isinstance(freq, DateOffset):
+ freq = _freq_mod._get_freq_str(_gfc(freq)[0])
- base, mult = _gfc(freq)
- if mult != 1:
- # TODO: Better error message - this is slightly confusing
- raise ValueError('Only mult == 1 supported')
+ self.freq = freq
if self.ordinal is None:
- self.ordinal = tslib.period_ordinal(dt.year, dt.month, dt.day,
- dt.hour, dt.minute, dt.second, dt.microsecond, 0,
- base)
+ if isinstance(freq, _NonCythonPeriod):
+ self.ordinal = freq.get_period_ordinal(dt)
+ else:
+ base, _ = _gfc(freq, as_periodstr=True)
- self.freq = _freq_mod._get_freq_str(base)
+ self.ordinal = tslib.period_ordinal(dt.year, dt.month, dt.day,
+ dt.hour, dt.minute, dt.second, dt.microsecond, 0,
+ base)
+
+ @cache_readonly
+ def freqstr(self):
+ return _freq_mod.get_standard_period_freq(self.freq)
+
+ def _same_freq(self, other):
+ return other.freq == self.freq or other.freqstr == self.freqstr
def __eq__(self, other):
if isinstance(other, Period):
- if other.freq != self.freq:
+ if not self._same_freq(other):
raise ValueError("Cannot compare non-conforming periods")
return (self.ordinal == other.ordinal
and _gfc(self.freq) == _gfc(other.freq))
@@ -197,16 +223,23 @@ def asfreq(self, freq, how='E'):
resampled : Period
"""
how = _validate_end_alias(how)
- base1, mult1 = _gfc(self.freq)
- base2, mult2 = _gfc(freq)
+ _check_freq_mult(freq)
+ end = how == 'E'
- if mult2 != 1:
- raise ValueError('Only mult == 1 supported')
+ if isinstance(self.freq, _NonCythonPeriod):
+ value = self.freq.period_asfreq_value(self.ordinal, end)
+ return Period(value=value, freq=freq)
+ elif isinstance(freq, _NonCythonPeriod):
+ freq_int, _ = _gfc(self.freq)
+ return _change_period_freq(ordinal_from=self.ordinal, freq_int_from=freq_int, freq_to=freq)
+ else:
- end = how == 'E'
- new_ordinal = tslib.period_asfreq(self.ordinal, base1, base2, end)
+ base1, _ = _gfc(self.freq)
+ base2, _ = _gfc(freq)
+
+ new_ordinal = tslib.period_asfreq(self.ordinal, base1, base2, end)
- return Period(ordinal=new_ordinal, freq=base2)
+ return Period(ordinal=new_ordinal, freq=base2)
@property
def start_time(self):
@@ -264,17 +297,22 @@ def to_timestamp(self, freq=None, how='start', tz=None):
@classmethod
def now(cls, freq=None):
return Period(datetime.now(), freq=freq)
+
+ def __get_formatted(self, fmt=None):
+ if isinstance(self.freq, _NonCythonPeriod):
+ return self.freq.period_format(self.ordinal, fmt=fmt)
+
+ base, mult = _gfc(self.freq, as_periodstr=True)
+ return tslib.period_format(self.ordinal, base, fmt=fmt)
def __repr__(self):
- base, mult = _gfc(self.freq)
- formatted = tslib.period_format(self.ordinal, base)
- freqstr = _freq_mod._reverse_period_code_map[base]
-
+ formatted = self.__get_formatted()
+
if not compat.PY3:
encoding = com.get_option("display.encoding")
formatted = formatted.encode(encoding)
- return "Period('%s', '%s')" % (formatted, freqstr)
+ return "Period('%s', '%s')" % (formatted, self.freqstr)
def __unicode__(self):
"""
@@ -283,9 +321,9 @@ def __unicode__(self):
Invoked by unicode(df) in py2 only. Yields a Unicode String in both
py2/py3.
"""
- base, mult = _gfc(self.freq)
- formatted = tslib.period_format(self.ordinal, base)
- value = ("%s" % formatted)
+
+ formatted = self.__get_formatted()
+ value = compat.text_type(formatted)
return value
def strftime(self, fmt):
@@ -425,8 +463,7 @@ def strftime(self, fmt):
>>> a.strftime('%b. %d, %Y was a %A')
'Jan. 01, 2001 was a Monday'
"""
- base, mult = _gfc(self.freq)
- return tslib.period_format(self.ordinal, base, fmt)
+ return self.__get_formatted(fmt)
def _get_date_and_freq(value, freq):
@@ -471,11 +508,15 @@ def dt64arr_to_periodarr(data, freq, tz):
if data.dtype != np.dtype('M8[ns]'):
raise ValueError('Wrong dtype: %s' % data.dtype)
- base, mult = _gfc(freq)
- return tslib.dt64arr_to_periodarr(data.view('i8'), base, tz)
+ if isinstance(freq, _NonCythonPeriod):
+ return freq.dt64arr_to_periodarr(data, tz)
+ else:
+ base, _ = _gfc(freq, as_periodstr=True)
+ return tslib.dt64arr_to_periodarr(data.view('i8'), base, tz)
# --- Period index sketch
+
def _period_index_cmp(opname):
"""
Wrap comparison operations to convert datetime-like to datetime64
@@ -483,12 +524,12 @@ def _period_index_cmp(opname):
def wrapper(self, other):
if isinstance(other, Period):
func = getattr(self.values, opname)
- if other.freq != self.freq:
+ if not other._same_freq(self):
raise AssertionError("Frequencies must be equal")
result = func(other.ordinal)
elif isinstance(other, PeriodIndex):
- if other.freq != self.freq:
+ if not other._same_freq(self):
raise AssertionError("Frequencies must be equal")
return getattr(self.values, opname)(other.values)
else:
@@ -523,7 +564,7 @@ class PeriodIndex(Int64Index):
dtype : NumPy dtype (default: i8)
copy : bool
Make a copy of input ndarray
- freq : string or period object, optional
+ freq : string or DateOffset object, optional
One of pandas period strings or corresponding objects
start : starting value, period-like, optional
If data is None, used as the start point in generating regular
@@ -565,7 +606,7 @@ def __new__(cls, data=None, ordinal=None, freq=None, start=None, end=None,
quarter=None, day=None, hour=None, minute=None, second=None,
tz=None):
- freq = _freq_mod.get_standard_freq(freq)
+ freq_orig = freq
if periods is not None:
if com.is_float(periods):
@@ -580,17 +621,26 @@ def __new__(cls, data=None, ordinal=None, freq=None, start=None, end=None,
else:
fields = [year, month, quarter, day, hour, minute, second]
data, freq = cls._generate_range(start, end, periods,
- freq, fields)
+ freq_orig, fields)
else:
- ordinal, freq = cls._from_arraylike(data, freq, tz)
+ ordinal, freq = cls._from_arraylike(data, freq_orig, tz)
data = np.array(ordinal, dtype=np.int64, copy=False)
subarr = data.view(cls)
subarr.name = name
- subarr.freq = freq
+
+ # If freq_orig was initially None, fall back to freq
+ subarr.freq = freq_orig if freq_orig is not None else freq
return subarr
+ @cache_readonly
+ def freqstr(self):
+ return _freq_mod.get_standard_period_freq(self.freq)
+
+ def _same_freq(self, other):
+ return other.freq == self.freq or other.freqstr == self.freqstr
+
@classmethod
def _generate_range(cls, start, end, periods, freq, fields):
field_count = com._count_not_none(*fields)
@@ -681,7 +731,8 @@ def __contains__(self, key):
return key.ordinal in self._engine
def _box_values(self, values):
- f = lambda x: Period(ordinal=x, freq=self.freq)
+ freq = self.freq
+ f = lambda x: Period(ordinal=x, freq=freq)
return lib.map_infer(values, f)
def asof_locs(self, where, mask):
@@ -748,27 +799,33 @@ def factorize(self):
uniques = PeriodIndex(ordinal=uniques, freq=self.freq)
return labels, uniques
- @property
- def freqstr(self):
- return self.freq
-
def asfreq(self, freq=None, how='E'):
how = _validate_end_alias(how)
+ _check_freq_mult(freq)
- freq = _freq_mod.get_standard_freq(freq)
+ freq_orig = freq
- base1, mult1 = _gfc(self.freq)
- base2, mult2 = _gfc(freq)
+ end = how == 'E'
- if mult2 != 1:
- raise ValueError('Only mult == 1 supported')
+ if isinstance(self.freq, _NonCythonPeriod):
+ new_data = self.freq.period_asfreq_arr(
+ self.values, freq_orig, end)
+ freq = _freq_mod.get_standard_freq(freq)
+ elif isinstance(freq_orig, _NonCythonPeriod):
+ freq = freq_orig.periodstr
+ freq_int_from, _ = _gfc(self.freq)
+ new_data = freq_orig.period_fromfreq_arr(
+ self.values, freq_int_from, end)
+ else:
+ freq = _freq_mod.get_standard_freq(freq)
+ base1, _ = _gfc(self.freq)
+ base2, _ = _gfc(freq)
- end = how == 'E'
- new_data = tslib.period_asfreq_arr(self.values, base1, base2, end)
+ new_data = tslib.period_asfreq_arr(self.values, base1, base2, end)
result = new_data.view(PeriodIndex)
result.name = self.name
- result.freq = freq
+ result.freq = freq_orig
return result
def to_datetime(self, dayfirst=False):
@@ -1079,7 +1136,7 @@ def __array_finalize__(self, obj):
def __repr__(self):
output = com.pprint_thing(self.__class__) + '\n'
- output += 'freq: %s\n' % self.freq
+ output += 'freq: %s\n' % self.freqstr
n = len(self)
if n == 1:
output += '[%s]\n' % (self[0])
@@ -1096,7 +1153,7 @@ def __unicode__(self):
prefix = '' if compat.PY3 else 'u'
mapper = "{0}'{{0}}'".format(prefix)
output += '[{0}]'.format(', '.join(map(mapper.format, self)))
- output += ", freq='{0}'".format(self.freq)
+ output += ", freq='{0}'".format(self.freqstr)
output += ')'
return output
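The ``_check_freq_mult`` helper introduced above enforces the long-standing restriction that periods only support a multiple of 1 (``'T'`` is fine, ``'5T'`` is not). A standalone sketch of that guard, with a toy parser standing in for ``get_freq_code`` (the parser here is illustrative, not the real one):

```python
import re

def parse_freqstr(freqstr):
    """Toy stand-in for get_freq_code: split '5T' into ('T', 5)."""
    m = re.match(r'^(\d*)([A-Za-z\-]+)$', freqstr)
    if m is None:
        raise ValueError('invalid frequency: %s' % freqstr)
    mult = int(m.group(1)) if m.group(1) else 1
    return m.group(2), mult

def check_freq_mult(freqstr):
    """Raise unless the frequency multiple is exactly 1."""
    _, mult = parse_freqstr(freqstr)
    if mult != 1:
        raise ValueError('Only mult == 1 supported')

check_freq_mult('T')       # ok
check_freq_mult('W-SAT')   # ok: anchored strings have an implicit mult of 1
try:
    check_freq_mult('5T')
except ValueError as exc:
    print(exc)
```

This also explains the ``test_period_weeklies`` case above: ``'1w'`` parses to a multiple of 1 and is therefore accepted, while any larger multiple raises.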
diff --git a/pandas/tseries/resample.py b/pandas/tseries/resample.py
index dd72a5245e7b2..5ac2f4308ed46 100644
--- a/pandas/tseries/resample.py
+++ b/pandas/tseries/resample.py
@@ -295,6 +295,7 @@ def _resample_timestamps(self):
def _resample_periods(self):
# assumes set_grouper(obj) already called
axlabels = self.ax
+ source_freq = axlabels.freq
obj = self.obj
if len(axlabels) == 0:
@@ -309,7 +310,7 @@ def _resample_periods(self):
# Start vs. end of period
memb = axlabels.asfreq(self.freq, how=self.convention)
- if is_subperiod(axlabels.freq, self.freq) or self.how is not None:
+ if is_subperiod(source_freq, self.freq) or self.how is not None:
# Downsampling
rng = np.arange(memb.values[0], memb.values[-1] + 1)
bins = memb.searchsorted(rng, side='right')
@@ -317,7 +318,7 @@ def _resample_periods(self):
grouped = obj.groupby(grouper, axis=self.axis)
return grouped.aggregate(self._agg_method)
- elif is_superperiod(axlabels.freq, self.freq):
+ elif is_superperiod(source_freq, self.freq):
# Get the fill indexer
indexer = memb.get_indexer(new_index, method=self.fill_method,
limit=self.limit)
@@ -325,7 +326,7 @@ def _resample_periods(self):
else:
raise ValueError('Frequency %s cannot be resampled to %s'
- % (axlabels.freq, self.freq))
+ % (source_freq, self.freq))
def _take_new_index(obj, indexer, new_index, axis=0):
diff --git a/pandas/tseries/tests/test_frequencies.py b/pandas/tseries/tests/test_frequencies.py
index 896f469f934c6..40ae8f7dc7a11 100644
--- a/pandas/tseries/tests/test_frequencies.py
+++ b/pandas/tseries/tests/test_frequencies.py
@@ -327,6 +327,11 @@ def test_is_superperiod_subperiod():
assert(fmod.is_superperiod(offsets.Hour(), offsets.Minute()))
assert(fmod.is_subperiod(offsets.Minute(), offsets.Hour()))
+
+def test_get_period_alias_yearly():
+ assert fmod.get_period_alias('Y') == fmod.get_period_alias('A')
+
+
if __name__ == '__main__':
import nose
nose.runmodule(argv=[__file__, '-vvs', '-x', '--pdb', '--pdb-failure'],
diff --git a/pandas/tseries/tests/test_offsets.py b/pandas/tseries/tests/test_offsets.py
index 86635271eb9c1..ecafbfdf3cf22 100644
--- a/pandas/tseries/tests/test_offsets.py
+++ b/pandas/tseries/tests/test_offsets.py
@@ -17,9 +17,9 @@
WeekOfMonth, format, ole2datetime, QuarterEnd, to_datetime, normalize_date,
get_offset, get_offset_name, get_standard_freq)
-from pandas.tseries.frequencies import _offset_map
+from pandas.tseries.frequencies import _offset_map, cday
from pandas.tseries.index import _to_m8, DatetimeIndex, _daterange_cache
-from pandas.tseries.tools import parse_time_string
+from pandas.tseries.tools import parse_time_string, DateParseError
import pandas.tseries.offsets as offsets
from pandas.tslib import monthrange, OutOfBoundsDatetime, NaT
@@ -1650,6 +1650,7 @@ def test_onOffset(self):
offset_n = FY5253(weekday=WeekDay.TUE, startingMonth=12,
variation="nearest")
+
tests = [
# From Wikipedia (see: http://en.wikipedia.org/wiki/4%E2%80%934%E2%80%935_calendar#Saturday_nearest_the_end_of_month)
# 2006-09-02 2006 September 2
@@ -1700,6 +1701,7 @@ def test_onOffset(self):
(offset_n, datetime(2012, 12, 31), False),
(offset_n, datetime(2013, 1, 1), True),
(offset_n, datetime(2013, 1, 2), False),
+
]
for offset, date, expected in tests:
@@ -1716,6 +1718,7 @@ def test_apply(self):
datetime(2011, 1, 2), datetime(2012, 1, 1),
datetime(2012, 12, 30)]
+
DEC_SAT = FY5253(n=-1, startingMonth=12, weekday=5, variation="nearest")
tests = [
@@ -1932,6 +1935,7 @@ def test_onOffset(self):
(offset_n, datetime(2012, 12, 31), False),
(offset_n, datetime(2013, 1, 1), True),
(offset_n, datetime(2013, 1, 2), False)
+
]
for offset, date, expected in tests:
@@ -2626,6 +2630,7 @@ def test_get_offset_name(self):
self.assertEqual(get_offset_name(makeFY5253LastOfMonthQuarter(weekday=1, startingMonth=3, qtr_with_extra_week=4)),"REQ-L-MAR-TUE-4")
self.assertEqual(get_offset_name(makeFY5253NearestEndMonthQuarter(weekday=1, startingMonth=3, qtr_with_extra_week=3)), "REQ-N-MAR-TUE-3")
+
def test_get_offset():
assertRaisesRegexp(ValueError, "rule.*GIBBERISH", get_offset, 'gibberish')
assertRaisesRegexp(ValueError, "rule.*QS-JAN-B", get_offset, 'QS-JAN-B')
@@ -2648,12 +2653,18 @@ def test_get_offset():
(name, expected, offset))
-def test_parse_time_string():
- (date, parsed, reso) = parse_time_string('4Q1984')
- (date_lower, parsed_lower, reso_lower) = parse_time_string('4q1984')
- assert date == date_lower
- assert parsed == parsed_lower
- assert reso == reso_lower
+class TestParseTimeString(tm.TestCase):
+ def test_case_sensitivity(self):
+ (date, parsed, reso) = parse_time_string('4Q1984')
+ (date_lower, parsed_lower, reso_lower) = parse_time_string('4q1984')
+
+ self.assertEqual(date, date_lower)
+ self.assertEqual(parsed, parsed_lower)
+ self.assertEqual(reso, reso_lower)
+
+ def test_invalid_string(self):
+ self.assertRaises(DateParseError,
+ parse_time_string, '2013Q1', freq="INVLD-L-DEC-SAT")
def test_get_standard_freq():
@@ -2714,6 +2725,37 @@ def test_rule_code(self):
self.assertEqual(alias, get_offset(alias).rule_code)
self.assertEqual(alias, (get_offset(alias) * 5).rule_code)
+ def test_offset_map(self):
+ #GH5028
+ for name, offset in compat.iteritems(_offset_map):
+ if name == 'C' and cday is None:
+ continue
+ self.assertEqual(name, None if offset is None else offset.rule_code)
+
+ def test_many_to_one_mapping(self):
+ #GH5028
+ offsets = [
+ QuarterBegin(startingMonth=1),
+ BQuarterBegin(startingMonth=1),
+ BQuarterEnd(startingMonth=12),
+ ]
+
+ for offset in offsets:
+ self.assertEqual(get_offset_name(offset), offset.rule_code)
+
+ def test_aliased_offset_equality(self):
+ self.assertEqual(get_offset("Q"), get_offset("Q"))
+ self.assertEqual(get_offset("Q"), get_offset("Q-DEC"))
+ self.assertEqual(get_offset("QS"), get_offset("QS-JAN"))
+ self.assertEqual(get_offset("BQ"), get_offset("BQ-DEC"))
+ self.assertEqual(get_offset("BQS"), get_offset("BQS-JAN"))
+
+ def test_aliased_offset_repr_equality(self):
+ self.assertEqual(repr(get_offset("Q")), repr(get_offset("Q")))
+ self.assertEqual(repr(get_offset("Q")), repr(get_offset("Q-DEC")))
+ self.assertEqual(repr(get_offset("QS")), repr(get_offset("QS-JAN")))
+ self.assertEqual(repr(get_offset("BQ")), repr(get_offset("BQ-DEC")))
+ self.assertEqual(repr(get_offset("BQS")), repr(get_offset("BQS-JAN")))
def test_apply_ticks():
result = offsets.Hour(3).apply(offsets.Hour(4))
@@ -2814,7 +2856,7 @@ def test_str_for_named_is_name(self):
names += ['WOM-' + week + day for week in ('1', '2', '3', '4')
for day in days]
#singletons
- names += ['S', 'T', 'U', 'BM', 'BMS', 'BQ', 'QS'] # No 'Q'
+ names += ['S', 'T', 'U', 'BM', 'BMS']  # No 'Q', 'BQ', 'QS', 'BQS'
_offset_map.clear()
for name in names:
offset = get_offset(name)
diff --git a/pandas/tseries/tests/test_period.py b/pandas/tseries/tests/test_period.py
index a6326794c1b12..cb6d75ffe0d70 100644
--- a/pandas/tseries/tests/test_period.py
+++ b/pandas/tseries/tests/test_period.py
@@ -14,7 +14,7 @@
from pandas.tseries.frequencies import MONTHS, DAYS, _period_code_map
from pandas.tseries.period import Period, PeriodIndex, period_range
from pandas.tseries.index import DatetimeIndex, date_range, Index
-from pandas.tseries.tools import to_datetime
+from pandas.tseries.tools import to_datetime, _try_parse_qtr_time_string
import pandas.tseries.period as pmod
import pandas.core.datetools as datetools
@@ -29,6 +29,7 @@
import pandas.util.testing as tm
from pandas import compat
from numpy.testing import assert_array_equal
+from pandas.tseries.offsets import FY5253Quarter, WeekDay, Week, Day
class TestPeriodProperties(tm.TestCase):
@@ -1698,6 +1699,11 @@ def test_ts_repr(self):
expected = "<class 'pandas.tseries.period.PeriodIndex'>\nfreq: Q-DEC\n[2013Q1, ..., 2013Q3]\nlength: 3"
assert_equal(repr(val), expected)
+ def test_period_weeklies(self):
+ p1 = Period('2006-12-31', 'W')
+ p2 = Period('2006-12-31', '1w')
+ assert_equal(p1.freq, p2.freq)
+
def test_period_index_unicode(self):
pi = PeriodIndex(freq='A', start='1/1/2001', end='12/1/2009')
assert_equal(len(pi), 9)
@@ -1861,21 +1867,21 @@ def test_to_period_quarterlyish(self):
for off in offsets:
rng = date_range('01-Jan-2012', periods=8, freq=off)
prng = rng.to_period()
- self.assertEqual(prng.freq, 'Q-DEC')
+ self.assertEqual(prng.freqstr, 'Q-DEC')
def test_to_period_annualish(self):
offsets = ['BA', 'AS', 'BAS']
for off in offsets:
rng = date_range('01-Jan-2012', periods=8, freq=off)
prng = rng.to_period()
- self.assertEqual(prng.freq, 'A-DEC')
+ self.assertEqual(prng.freqstr, 'A-DEC')
def test_to_period_monthish(self):
offsets = ['MS', 'EOM', 'BM']
for off in offsets:
rng = date_range('01-Jan-2012', periods=8, freq=off)
prng = rng.to_period()
- self.assertEqual(prng.freq, 'M')
+ self.assertEqual(prng.freqstr, 'M')
def test_no_multiples(self):
self.assertRaises(ValueError, period_range, '1989Q3', periods=10,
@@ -2169,12 +2175,45 @@ def test_pickle_freq(self):
import pickle
prng = period_range('1/1/2011', '1/1/2012', freq='M')
new_prng = pickle.loads(pickle.dumps(prng))
- self.assertEqual(new_prng.freq,'M')
+ self.assertEqual(new_prng.freq, 'M')
def test_slice_keep_name(self):
idx = period_range('20010101', periods=10, freq='D', name='bob')
self.assertEqual(idx.name, idx[1:].name)
+ def test_period_range_alias(self):
+ self.assertTrue(
+ pd.date_range('1/1/2012', periods=4,
+ freq=pd.offsets.MonthEnd()).to_period().identical(
+ pd.period_range('1/1/2012', periods=4,
+ freq=pd.offsets.MonthEnd())))
+
+ # GH 4878
+ self.assertTrue(
+ pd.date_range('1/1/2012', periods=4,
+ freq=pd.offsets.BusinessMonthEnd()).to_period().identical(
+ pd.period_range('1/1/2012', periods=4,
+ freq=pd.offsets.BusinessMonthEnd())))
+
+ def test_period_range_alias2(self):
+ self.assertTrue(
+ pd.Series(range(4),
+ index=pd.date_range('1/1/2012', periods=4,
+ freq=pd.offsets.MonthEnd())).to_period().index.identical(
+ pd.Series(range(4),
+ index=pd.date_range('1/1/2012', periods=4,
+ freq=pd.offsets.MonthEnd()).to_period()).index))
+
+ # GH 4878
+ self.assertTrue(
+ pd.Series(range(4),
+ index=pd.date_range('1/1/2012', periods=4,
+ freq=pd.offsets.BusinessMonthEnd())
+ ).to_period().index.identical(
+ pd.Series(range(4),
+ index=pd.date_range('1/1/2012', periods=4,
+ freq=pd.offsets.BusinessMonthEnd()).to_period()).index))
+
def _permute(obj):
return obj.take(np.random.permutation(len(obj)))
@@ -2313,6 +2352,294 @@ def test_sort(self):
self.assertEqual(sorted(periods), correctPeriods)
+class TestFY5253QuarterPeriods(tm.TestCase):
+ def test_get_period_ordinal(self):
+ offset = FY5253Quarter(weekday=WeekDay.SAT, startingMonth=12,
+ variation="last", qtr_with_extra_week=4)
+
+ self.assertEqual(offset.get_period_ordinal(
+ datetime(2013, 10, 27)), 2013 * 4 + 3)
+ self.assertEqual(offset.get_period_ordinal(
+ datetime(2013, 12, 28)), 2013 * 4 + 3)
+ self.assertEqual(offset.get_period_ordinal(
+ datetime(2013, 12, 29)), 2014 * 4 + 0)
+
+ offset_n = FY5253Quarter(weekday=WeekDay.TUE, startingMonth=12,
+ variation="nearest", qtr_with_extra_week=4)
+
+ self.assertEqual(offset_n.get_period_ordinal(datetime(2013, 1, 2)),
+ offset_n.get_period_ordinal(datetime(2013, 1, 30)))
+
+ self.assertEqual(offset_n.get_period_ordinal(datetime(2013, 1, 1)) + 1,
+ offset_n.get_period_ordinal(datetime(2013, 1, 2)))
+
+ def test_period_format(self):
+ offset = FY5253Quarter(weekday=WeekDay.SAT, startingMonth=12,
+ variation="last", qtr_with_extra_week=4)
+
+ self.assertEqual(offset.period_format(2013 * 4 + 3), "2013Q4")
+ self.assertEqual(offset.period_format(2014 * 4 + 0), "2014Q1")
+
+ def test_get_end_dt(self):
+ offset = FY5253Quarter(weekday=WeekDay.SAT, startingMonth=12,
+ variation="last", qtr_with_extra_week=4)
+
+ self.assertEqual(offset.get_end_dt(
+ offset.get_period_ordinal(datetime(2013, 12, 27))),
+ datetime(2013, 12, 28))
+ self.assertEqual(offset.get_end_dt(
+ offset.get_period_ordinal(datetime(2013, 12, 28))),
+ datetime(2013, 12, 28))
+ self.assertEqual(offset.get_end_dt(
+ offset.get_period_ordinal(datetime(2013, 12, 28))),
+ datetime(2013, 12, 28))
+ self.assertEqual(offset.get_end_dt(
+ offset.get_period_ordinal(datetime(2013, 12, 29))),
+ datetime(2014, 3, 29))
+
+ def test_get_start_dt(self):
+ offset = FY5253Quarter(weekday=WeekDay.SAT, startingMonth=12,
+ variation="last", qtr_with_extra_week=4)
+
+ self.assertEqual(offset.get_start_dt(
+ offset.get_period_ordinal(datetime(2013, 9, 29))),
+ datetime(2013, 9, 29))
+ self.assertEqual(offset.get_start_dt(
+ offset.get_period_ordinal(datetime(2013, 12, 27))),
+ datetime(2013, 9, 29))
+ self.assertEqual(offset.get_start_dt(
+ offset.get_period_ordinal(datetime(2013, 12, 28))),
+ datetime(2013, 9, 29))
+ self.assertEqual(offset.get_start_dt(
+ offset.get_period_ordinal(datetime(2013, 12, 28))),
+ datetime(2013, 9, 29))
+ self.assertEqual(offset.get_start_dt(
+ offset.get_period_ordinal(datetime(2013, 12, 29))),
+ datetime(2013, 12, 29))
+
+ def test_period_str(self):
+ offset = FY5253Quarter(weekday=WeekDay.SAT, startingMonth=12,
+ variation="last", qtr_with_extra_week=4)
+
+ self.assertEqual(str(Period("2013-12-27", freq=offset)), "2013Q4")
+ self.assertEqual(str(Period("2013-12-28", freq=offset)), "2013Q4")
+ self.assertEqual(str(Period("2013-12-29", freq=offset)), "2014Q1")
+ self.assertEqual(str(Period("2013-9-29", freq=offset)), "2013Q4")
+ self.assertEqual(str(Period("2013-9-28", freq=offset)), "2013Q3")
+
+ offset_n = FY5253Quarter(weekday=WeekDay.TUE, startingMonth=12,
+ variation="nearest", qtr_with_extra_week=4)
+ self.assertEqual(str(Period("2013-01-01", freq=offset_n)), "2012Q4")
+ self.assertEqual(str(Period("2013-01-03", freq=offset_n)), "2013Q1")
+ self.assertEqual(str(Period("2013-01-02", freq=offset_n)), "2013Q1")
+
+ offset_sun = FY5253Quarter(weekday=WeekDay.SUN, startingMonth=12,
+ variation="nearest", qtr_with_extra_week=4)
+ self.assertEqual(str(Period("2011-1-2", freq=offset_sun)), "2010Q4")
+ self.assertEqual(str(Period("2011-1-3", freq=offset_sun)), "2011Q1")
+ self.assertEqual(str(Period("2011-4-3", freq=offset_sun)), "2011Q1")
+ self.assertEqual(str(Period("2011-4-4", freq=offset_sun)), "2011Q2")
+ self.assertEqual(str(Period("2011-7-3", freq=offset_sun)), "2011Q2")
+ self.assertEqual(str(Period("2011-7-4", freq=offset_sun)), "2011Q3")
+ self.assertEqual(str(Period("2003-9-28", freq=offset_sun)), "2003Q3")
+ self.assertEqual(str(Period("2003-9-29", freq=offset_sun)), "2003Q4")
+ self.assertEqual(str(Period("2004-9-26", freq=offset_sun)), "2004Q3")
+ self.assertEqual(str(Period("2004-9-27", freq=offset_sun)), "2004Q4")
+ self.assertEqual(str(Period("2005-1-2", freq=offset_sun)), "2004Q4")
+ self.assertEqual(str(Period("2005-1-3", freq=offset_sun)), "2005Q1")
+
+ def test_period_str_parsing(self):
+ offset = FY5253Quarter(weekday=WeekDay.SAT, startingMonth=12,
+ variation="last", qtr_with_extra_week=4)
+
+ self.assertEqual(_try_parse_qtr_time_string("2013Q4"), (2013, 4))
+ self.assertEqual(_try_parse_qtr_time_string("2013q4"), (2013, 4))
+ self.assertEqual(_try_parse_qtr_time_string("13Q4"), (2013, 4))
+ self.assertEqual(_try_parse_qtr_time_string("1Q14"), (2014, 1))
+
+ self.assertEqual(
+ str(Period(offset.parse_time_string("2013Q4"),
+ freq=offset)), "2013Q4")
+
+ self.assertEqual(offset.get_period_ordinal(
+ offset.parse_time_string("2013Q4")), 2013 * 4 + 3)
+
+ self.assertEqual(offset.period_format(
+ offset.get_period_ordinal(
+ offset.parse_time_string("2013Q4"))), "2013Q4")
+
+ def test_period_asfreq1(self):
+ offset = FY5253Quarter(weekday=WeekDay.SAT, startingMonth=12,
+ variation="last", qtr_with_extra_week=4)
+ period = Period("2013-12-27", freq=offset)
+
+ week_offset = Week(weekday=WeekDay.SAT)
+ self.assertEqual(str(period.asfreq(freq=week_offset, how="E")),
+ "2013-12-22/2013-12-28")
+ self.assertEqual(str(period.asfreq(freq=week_offset, how="S")),
+ "2013-09-29/2013-10-05")
+
+ day = Day()
+ self.assertEqual(str(period.asfreq(freq=day, how="E")),
+ "2013-12-28")
+
+ self.assertEqual(str(period.asfreq(freq=day, how="S")),
+ "2013-09-29")
+
+ def test_period_asfreq2(self):
+ qtr_offset = FY5253Quarter(weekday=WeekDay.SAT, startingMonth=12,
+ variation="last", qtr_with_extra_week=4)
+ week_offset = Week(weekday=WeekDay.SAT)
+
+ period = Period("2013-12-22/2013-12-28", freq=week_offset)
+
+ self.assertEqual(str(period.asfreq(freq=qtr_offset, how="E")),
+ "2013Q4")
+ self.assertEqual(str(period.asfreq(freq=qtr_offset, how="S")),
+ "2013Q4")
+
+ def test_period_range(self):
+ offset = FY5253Quarter(weekday=WeekDay.SAT, startingMonth=12,
+ variation="last", qtr_with_extra_week=4)
+
+# prange = period_range('2013Q1', periods=2, freq=offset)
+ prange = period_range(datetime(2013, 1, 15), periods=2, freq=offset)
+
+ self.assertEqual(len(prange), 2)
+ self.assertEqual(prange.freq, offset.periodstr)
+ self.assertEqual(str(prange[0]), '2013Q1')
+ self.assertEqual(str(prange[1]), '2013Q2')
+
+ def test_period_range_from_ts(self):
+ offset = FY5253Quarter(weekday=WeekDay.SAT, startingMonth=12,
+ variation="last", qtr_with_extra_week=4)
+
+ drange = date_range(datetime(2013, 1, 15), periods=2, freq=offset)
+ prange = drange.to_period()
+
+ self.assertEqual(len(prange), 2)
+ self.assertEqual(prange.freq, offset.periodstr)
+ self.assertEqual(str(prange[0]), '2013Q1')
+ self.assertEqual(str(prange[1]), '2013Q2')
+
+ def test_periodindex_asfreq(self):
+ offset = FY5253Quarter(weekday=WeekDay.SAT, startingMonth=12,
+ variation="last", qtr_with_extra_week=4)
+
+ prange = period_range(datetime(2013, 1, 15), periods=2, freq=offset)
+
+ week_offset = Week(weekday=WeekDay.SAT)
+
+ week_end = prange.asfreq(freq=week_offset, how="E")
+ self.assertEqual(len(week_end), 2)
+ self.assertEqual(week_end.freq, week_offset.periodstr)
+ self.assertEqual(str(week_end[0]), '2013-03-24/2013-03-30')
+ self.assertEqual(str(week_end[1]), '2013-06-23/2013-06-29')
+
+ week_start = prange.asfreq(freq=week_offset, how="S")
+ self.assertEqual(len(week_start), 2)
+ self.assertEqual(week_start.freq, week_offset.periodstr)
+ self.assertEqual(str(week_start[0]), '2012-12-30/2013-01-05')
+ self.assertEqual(str(week_start[1]), '2013-03-31/2013-04-06')
+
+ day = Day()
+ day_end = prange.asfreq(freq=day, how="E")
+ self.assertEqual(len(day_end), 2)
+ self.assertEqual(day_end.freq, day.periodstr)
+ self.assertEqual(str(day_end[0]), '2013-03-30')
+ self.assertEqual(str(day_end[1]), '2013-06-29')
+
+ day_start = prange.asfreq(freq=day, how="S")
+ self.assertEqual(len(day_start), 2)
+ self.assertEqual(day_start.freq, day.periodstr)
+ self.assertEqual(str(day_start[0]), '2012-12-30')
+ self.assertEqual(str(day_start[1]), '2013-03-31')
+
+ def test_resample_to_weekly(self):
+ offset = FY5253Quarter(weekday=WeekDay.SAT, startingMonth=12,
+ variation="last", qtr_with_extra_week=4)
+
+ prange = period_range(datetime(2013, 1, 15), periods=2, freq=offset)
+
+ df = DataFrame({"A": [1, 2]}, index=prange)
+ resampled = df.resample(Week(weekday=WeekDay.SAT), fill_method="ffill")
+ self.assertEquals(len(resampled), 2 * 13)
+ self.assertEquals(str(resampled.index[0]), '2012-12-30/2013-01-05')
+ self.assertEquals(str(resampled.index[-1]), '2013-06-23/2013-06-29')
+
+ tm.assert_frame_equal(resampled,
+ df.resample("W-SAT", fill_method="ffill"))
+
+ assertRaisesRegexp(ValueError,
+ "cannot be resampled to",
+ df.resample,
+ "W-MON", fill_method="ffill")
+
+ def test_resample_to_daily(self):
+ offset = FY5253Quarter(weekday=WeekDay.SAT, startingMonth=12,
+ variation="last", qtr_with_extra_week=4)
+
+ prange = period_range(datetime(2013, 1, 15), periods=2, freq=offset)
+
+ df = DataFrame({"A": [1, 2]}, index=prange)
+ resampled = df.resample(Day(), fill_method="ffill")
+ self.assertEquals(len(resampled), 2 * 7 * 13)
+ self.assertEquals(str(resampled.index[0]), '2012-12-30')
+ self.assertEquals(str(resampled.index[-1]), '2013-06-29')
+
+ tm.assert_frame_equal(resampled,
+ df.resample("D", fill_method="ffill"))
+
+ def test_resample_from_weekly(self):
+ offset_fyq = FY5253Quarter(weekday=WeekDay.SAT, startingMonth=12,
+ variation="last", qtr_with_extra_week=4)
+ freq_week = Week(weekday=WeekDay.SAT)
+
+ prange = period_range(datetime(2013, 1, 5),
+ periods=2 * 13,
+ freq=freq_week)
+
+ df = DataFrame({"A": [1] * 13 + [2] * 13}, index=prange)
+ resampled = df.resample(offset_fyq, fill_method="mean")
+
+ self.assertEquals(len(resampled), 2)
+ self.assertEquals(str(resampled.index[0]), '2013Q1')
+ self.assertEquals(str(resampled.index[-1]), '2013Q2')
+ self.assertEquals(resampled["A"][0], 1)
+ self.assertEquals(resampled["A"]["2013Q1"], 1)
+ self.assertEquals(resampled["A"][1], 2)
+ self.assertEquals(resampled["A"]["2013Q2"], 2)
+
+ offset_fyq2 = FY5253Quarter(weekday=WeekDay.MON, startingMonth=12,
+ variation="last", qtr_with_extra_week=4)
+
+ assertRaisesRegexp(ValueError,
+ "cannot be resampled to",
+ df.resample, offset_fyq2, fill_method="ffill")
+
+ def test_resample_from_daily(self):
+ offset_fyq = FY5253Quarter(weekday=WeekDay.SAT, startingMonth=12,
+ variation="last", qtr_with_extra_week=4)
+
+ prange = period_range(datetime(2012, 12, 30),
+ periods=2 * 7 * 13,
+ freq=Day())
+
+ df = DataFrame({"A": [1] * 13 * 7 + [2] * 13 * 7}, index=prange)
+ resampled = df.resample(offset_fyq, fill_method="mean")
+
+ self.assertEquals(len(resampled), 2)
+ self.assertEquals(str(resampled.index[0]), '2013Q1')
+ self.assertEquals(str(resampled.index[-1]), '2013Q2')
+ self.assertEquals(resampled["A"][0], 1)
+ self.assertEquals(resampled["A"][1], 2)
+
+ def test_freq_to_period(self):
+ r = pd.date_range('01-Jan-2012', periods=8, freq='QS')
+ x = r.to_period()
+ self.assert_("freq='Q-DEC'" in str(x))
+ self.assert_("freq: Q-DEC" in repr(x))
+
if __name__ == '__main__':
import nose
nose.runmodule(argv=[__file__, '-vvs', '-x', '--pdb', '--pdb-failure'],
diff --git a/pandas/tseries/tests/test_plotting.py b/pandas/tseries/tests/test_plotting.py
index 5d1e4b67041f7..d07af679c0d47 100644
--- a/pandas/tseries/tests/test_plotting.py
+++ b/pandas/tseries/tests/test_plotting.py
@@ -13,6 +13,7 @@
from pandas.tseries.offsets import DateOffset
from pandas.tseries.period import period_range, Period, PeriodIndex
from pandas.tseries.resample import DatetimeIndex
+from pandas.tseries.frequencies import get_period_alias
from pandas.util.testing import assert_series_equal, ensure_clean
import pandas.util.testing as tm
@@ -97,7 +98,7 @@ def test_tsplot(self):
f = lambda *args, **kwds: tsplot(s, plt.Axes.plot, *args, **kwds)
for s in self.period_ser:
- _check_plot_works(f, s.index.freq, ax=ax, series=s)
+ _check_plot_works(f, s.index.freq, ax=ax, series=s, is_period=True)
for s in self.datetime_ser:
_check_plot_works(f, s.index.freq.rule_code, ax=ax, series=s)
@@ -149,7 +150,7 @@ def check_format_of_first_point(ax, expected_string):
@slow
def test_line_plot_period_series(self):
for s in self.period_ser:
- _check_plot_works(s.plot, s.index.freq)
+ _check_plot_works(s.plot, s.index.freq, is_period=True)
@slow
def test_line_plot_datetime_series(self):
@@ -159,7 +160,7 @@ def test_line_plot_datetime_series(self):
@slow
def test_line_plot_period_frame(self):
for df in self.period_df:
- _check_plot_works(df.plot, df.index.freq)
+ _check_plot_works(df.plot, df.index.freq, is_period=True)
@slow
def test_line_plot_datetime_frame(self):
@@ -676,7 +677,7 @@ def test_mixed_freq_lf_first(self):
low.plot()
ax = high.plot()
for l in ax.get_lines():
- self.assertEqual(PeriodIndex(data=l.get_xdata()).freq, 'T')
+ self.assertEqual(PeriodIndex(data=l.get_xdata()).freqstr, 'T')
def test_mixed_freq_irreg_period(self):
ts = tm.makeTimeSeries()
@@ -695,7 +696,7 @@ def test_to_weekly_resampling(self):
high.plot()
ax = low.plot()
for l in ax.get_lines():
- self.assert_(PeriodIndex(data=l.get_xdata()).freq.startswith('W'))
+ self.assert_(PeriodIndex(data=l.get_xdata()).freqstr.startswith('W'))
@slow
def test_from_weekly_resampling(self):
@@ -706,7 +707,7 @@ def test_from_weekly_resampling(self):
low.plot()
ax = high.plot()
for l in ax.get_lines():
- self.assert_(PeriodIndex(data=l.get_xdata()).freq.startswith('W'))
+ self.assert_(PeriodIndex(data=l.get_xdata()).freqstr.startswith('W'))
@slow
def test_irreg_dtypes(self):
@@ -924,7 +925,7 @@ def test_mpl_nopandas(self):
line2.get_xydata()[:, 0])
-def _check_plot_works(f, freq=None, series=None, *args, **kwargs):
+def _check_plot_works(f, freq=None, series=None, is_period=False, *args, **kwargs):
import matplotlib.pyplot as plt
fig = plt.gcf()
@@ -944,10 +945,16 @@ def _check_plot_works(f, freq=None, series=None, *args, **kwargs):
if isinstance(dfreq, DateOffset):
dfreq = dfreq.rule_code
if orig_axfreq is None:
- assert ax.freq == dfreq
+ if is_period:
+ assert get_period_alias(ax.freq) == get_period_alias(dfreq)
+ else:
+ assert ax.freq == dfreq
if freq is not None and orig_axfreq is None:
- assert ax.freq == freq
+ if is_period:
+ assert get_period_alias(ax.freq) == get_period_alias(freq)
+ else:
+ assert ax.freq == freq
ax = fig.add_subplot(212)
try:
diff --git a/pandas/tseries/tools.py b/pandas/tseries/tools.py
index d01ad56165880..28c9d80e1c6d4 100644
--- a/pandas/tseries/tools.py
+++ b/pandas/tseries/tools.py
@@ -319,9 +319,11 @@ def _convert_listlike(arg, box, format):
return _convert_listlike(np.array([ arg ]), box, format)[0]
+
class DateParseError(ValueError):
pass
+
def _attempt_YYYYMMDD(arg):
""" try to parse the YYYYMMDD/%Y%m%d format, try to deal with NaT-like,
arg is a passed in as an object dtype, but could really be ints/strings with nan-like/or floats (e.g. with nan) """
@@ -369,6 +371,34 @@ def calc_with_mask(carg,mask):
has_time = re.compile('(.+)([\s]|T)+(.+)')
+def _try_parse_qtr_time_string(arg):
+ arg = arg.upper()
+
+ add_century = False
+ if len(arg) == 4:
+ add_century = True
+ qpats = [(qpat1, 1), (qpat2, 0)]
+ else:
+ qpats = [(qpat1full, 1), (qpat2full, 0)]
+
+ for pat, yfirst in qpats:
+ qparse = pat.match(arg)
+ if qparse is not None:
+ if yfirst:
+ yi, qi = 1, 2
+ else:
+ yi, qi = 2, 1
+ q = int(qparse.group(yi))
+ y_str = qparse.group(qi)
+ y = int(y_str)
+ if add_century:
+ y += 2000
+
+ return y, q
+
+ return None
+
+
def parse_time_string(arg, freq=None, dayfirst=None, yearfirst=None):
"""
Try hard to parse datetime string, leveraging dateutil plus some extra
@@ -389,15 +419,19 @@ def parse_time_string(arg, freq=None, dayfirst=None, yearfirst=None):
datetime, datetime/dateutil.parser._result, str
"""
from pandas.core.config import get_option
+ from pandas.tseries.frequencies import (_get_rule_month, _month_numbers)
from pandas.tseries.offsets import DateOffset
- from pandas.tseries.frequencies import (_get_rule_month, _month_numbers,
- _get_freq_str)
if not isinstance(arg, compat.string_types):
return arg
arg = arg.upper()
+ if isinstance(freq, DateOffset):
+ parsed_dt = freq.parse_time_string(arg)
+ if parsed_dt is not None:
+ return parsed_dt, parsed_dt, freq.name
+
default = datetime(1, 1, 1).replace(hour=0, minute=0,
second=0, microsecond=0)
@@ -408,37 +442,26 @@ def parse_time_string(arg, freq=None, dayfirst=None, yearfirst=None):
ret = default.replace(year=int(m.group(1)))
return ret, ret, 'year'
- add_century = False
- if len(arg) == 4:
- add_century = True
- qpats = [(qpat1, 1), (qpat2, 0)]
- else:
- qpats = [(qpat1full, 1), (qpat2full, 0)]
-
- for pat, yfirst in qpats:
- qparse = pat.match(arg)
- if qparse is not None:
- if yfirst:
- yi, qi = 1, 2
- else:
- yi, qi = 2, 1
- q = int(qparse.group(yi))
- y_str = qparse.group(qi)
- y = int(y_str)
- if add_century:
- y += 2000
-
- if freq is not None:
- # hack attack, #1228
- mnum = _month_numbers[_get_rule_month(freq)] + 1
- month = (mnum + (q - 1) * 3) % 12 + 1
- if month > mnum:
- y -= 1
- else:
- month = (q - 1) * 3 + 1
-
- ret = default.replace(year=y, month=month)
- return ret, ret, 'quarter'
+ qtr_parsed = _try_parse_qtr_time_string(arg)
+ if qtr_parsed is not None:
+ y, q = qtr_parsed
+
+ if freq is not None:
+ # hack attack, #1228
+ month_name = _get_rule_month(freq)
+ try:
+ mnum = _month_numbers[month_name] + 1
+ except KeyError:
+ raise DateParseError(
+ "Do not understand freq: %s" % freq)
+ month = (mnum + (q - 1) * 3) % 12 + 1
+ if month > mnum:
+ y -= 1
+ else:
+ month = (q - 1) * 3 + 1
+
+ ret = default.replace(year=y, month=month)
+ return ret, ret, 'quarter'
is_mo_str = freq is not None and freq == 'M'
is_mo_off = getattr(freq, 'rule_code', None) == 'M'
| This PR is a work in progress.
Fixes #5306 - remove unused imports
Fixes #5082 - add `inferred_freq_offset`
Fixes #5028 - fix issues with `_offset_map`
Fixes #4878 - the `freq` parameter of the `Period` constructor now supports the full set of offsets
Fixes #5418 - raise `DateParseError` rather than an incorrect `KeyError` for invalid frequencies
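
The PR factors the "2013Q4" / "1Q14"-style quarter-string parsing out of `parse_time_string` into `_try_parse_qtr_time_string` (see the hunk in `pandas/tseries/tools.py` above). Below is a minimal standalone sketch of that parsing logic; the regex and function names here are illustrative, not pandas' own:

```python
import re

# Year-first ("2013Q4", "13Q4") and quarter-first ("4Q2013", "1Q14") forms.
YEAR_FIRST = re.compile(r'^(\d{2}|\d{4})Q([1-4])$')
QTR_FIRST = re.compile(r'^([1-4])Q(\d{2}|\d{4})$')

def parse_quarter_string(arg):
    """Return (year, quarter) for strings like '2013Q4', or None."""
    arg = arg.upper()
    m = YEAR_FIRST.match(arg)
    if m is not None:
        y_str, q_str = m.group(1), m.group(2)
    else:
        m = QTR_FIRST.match(arg)
        if m is None:
            return None
        q_str, y_str = m.group(1), m.group(2)
    year, quarter = int(y_str), int(q_str)
    if year < 100:        # two-digit year: assume the 2000s, as the PR does
        year += 2000
    return year, quarter
```

This mirrors the expectations pinned down in `test_period_str_parsing`: `"2013Q4"` and `"2013q4"` both parse to `(2013, 4)`, and `"1Q14"` parses to `(2014, 1)`.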
| https://api.github.com/repos/pandas-dev/pandas/pulls/5148 | 2013-10-08T00:39:32Z | 2014-07-28T14:59:20Z | null | 2014-10-23T17:15:44Z |
BUG: allow tuples in recursive call to replace | diff --git a/doc/source/release.rst b/doc/source/release.rst
index 848bd1035fadc..85d9be1295e29 100644
--- a/doc/source/release.rst
+++ b/doc/source/release.rst
@@ -573,6 +573,8 @@ Bug Fixes
- Fix bound checking for Timestamp() with datetime64 input (:issue:`4065`)
- Fix a bug where ``TestReadHtml`` wasn't calling the correct ``read_html()``
function (:issue:`5150`).
+ - Fix a bug with ``NDFrame.replace()`` which made replacement appear as
+ though it was (incorrectly) using regular expressions (:issue:`5143`).
pandas 0.12.0
-------------
diff --git a/pandas/core/common.py b/pandas/core/common.py
index 108b82eaf9056..33df305a721a6 100644
--- a/pandas/core/common.py
+++ b/pandas/core/common.py
@@ -7,13 +7,8 @@
import numbers
import codecs
import csv
-import sys
import types
-from datetime import timedelta
-
-from distutils.version import LooseVersion
-
from numpy.lib.format import read_array, write_array
import numpy as np
@@ -21,9 +16,7 @@
import pandas.lib as lib
import pandas.tslib as tslib
from pandas import compat
-from pandas.compat import (StringIO, BytesIO, range, long, u, zip, map,
- string_types)
-from datetime import timedelta
+from pandas.compat import StringIO, BytesIO, range, long, u, zip, map
from pandas.core.config import get_option
from pandas.core import array as pa
@@ -36,6 +29,7 @@ class PandasError(Exception):
class AmbiguousIndexError(PandasError, KeyError):
pass
+
_POSSIBLY_CAST_DTYPES = set([np.dtype(t)
for t in ['M8[ns]', 'm8[ns]', 'O', 'int8',
'uint8', 'int16', 'uint16', 'int32',
@@ -101,6 +95,7 @@ class to receive bound method
else:
setattr(cls, name, func)
+
def isnull(obj):
"""Detect missing values (NaN in numeric arrays, None/NaN in object arrays)
@@ -772,6 +767,7 @@ def diff(arr, n, axis=0):
return out_arr
+
def _coerce_to_dtypes(result, dtypes):
""" given a dtypes and a result set, coerce the result elements to the dtypes """
if len(result) != len(dtypes):
@@ -800,6 +796,7 @@ def conv(r,dtype):
return np.array([ conv(r,dtype) for r, dtype in zip(result,dtypes) ])
+
def _infer_dtype_from_scalar(val):
""" interpret the dtype from a scalar, upcast floats and ints
return the new value and the dtype """
@@ -986,6 +983,7 @@ def changeit():
return result, False
+
def _maybe_upcast(values, fill_value=np.nan, dtype=None, copy=False):
""" provide explicty type promotion and coercion
@@ -1166,6 +1164,7 @@ def pad_1d(values, limit=None, mask=None):
_method(values, mask, limit=limit)
return values
+
def backfill_1d(values, limit=None, mask=None):
dtype = values.dtype.name
@@ -1190,6 +1189,7 @@ def backfill_1d(values, limit=None, mask=None):
_method(values, mask, limit=limit)
return values
+
def pad_2d(values, limit=None, mask=None):
dtype = values.dtype.name
@@ -1218,6 +1218,7 @@ def pad_2d(values, limit=None, mask=None):
pass
return values
+
def backfill_2d(values, limit=None, mask=None):
dtype = values.dtype.name
@@ -1246,6 +1247,7 @@ def backfill_2d(values, limit=None, mask=None):
pass
return values
+
def interpolate_2d(values, method='pad', axis=0, limit=None, fill_value=None):
""" perform an actual interpolation of values, values will be make 2-d if needed
fills inplace, returns the result """
@@ -1371,6 +1373,7 @@ def _possibly_convert_platform(values):
return values
+
def _possibly_cast_to_datetime(value, dtype, coerce=False):
""" try to cast the array/value to a datetimelike dtype, converting float nan to iNaT """
@@ -1787,6 +1790,7 @@ def is_datetime64_dtype(arr_or_dtype):
tipo = arr_or_dtype.dtype.type
return issubclass(tipo, np.datetime64)
+
def is_datetime64_ns_dtype(arr_or_dtype):
if isinstance(arr_or_dtype, np.dtype):
tipo = arr_or_dtype
@@ -1796,6 +1800,7 @@ def is_datetime64_ns_dtype(arr_or_dtype):
tipo = arr_or_dtype.dtype
return tipo == _NS_DTYPE
+
def is_timedelta64_dtype(arr_or_dtype):
if isinstance(arr_or_dtype, np.dtype):
tipo = arr_or_dtype.type
@@ -1851,6 +1856,7 @@ def _is_sequence(x):
except (TypeError, AttributeError):
return False
+
_ensure_float64 = algos.ensure_float64
_ensure_float32 = algos.ensure_float32
_ensure_int64 = algos.ensure_int64
@@ -1987,6 +1993,7 @@ def _get_handle(path, mode, encoding=None, compression=None):
return f
+
if compat.PY3: # pragma: no cover
def UnicodeReader(f, dialect=csv.excel, encoding="utf-8", **kwds):
# ignore encoding
diff --git a/pandas/core/frame.py b/pandas/core/frame.py
index 1bbaeffff77bc..daaf9d9966635 100644
--- a/pandas/core/frame.py
+++ b/pandas/core/frame.py
@@ -12,7 +12,6 @@
# pylint: disable=E1101,E1103
# pylint: disable=W0212,W0231,W0703,W0622
-import operator
import sys
import collections
import warnings
@@ -25,7 +24,7 @@
from pandas.core.common import (isnull, notnull, PandasError, _try_sort,
_default_index, _maybe_upcast, _is_sequence,
_infer_dtype_from_scalar, _values_from_object,
- _coerce_to_dtypes, _DATELIKE_DTYPES, is_list_like)
+ _DATELIKE_DTYPES, is_list_like)
from pandas.core.generic import NDFrame, _shared_docs
from pandas.core.index import Index, MultiIndex, _ensure_index
from pandas.core.indexing import (_maybe_droplevels,
@@ -48,7 +47,6 @@
from pandas.tseries.index import DatetimeIndex
import pandas.core.algorithms as algos
-import pandas.core.datetools as datetools
import pandas.core.common as com
import pandas.core.format as fmt
import pandas.core.nanops as nanops
@@ -4292,6 +4290,7 @@ def combineMult(self, other):
"""
return self.mul(other, fill_value=1.)
+
DataFrame._setup_axes(
['index', 'columns'], info_axis=1, stat_axis=0, axes_are_reversed=True)
DataFrame._add_numeric_operations()
@@ -4552,6 +4551,7 @@ def _masked_rec_array_to_mgr(data, index, columns, dtype, copy):
mgr = mgr.copy()
return mgr
+
def _reorder_arrays(arrays, arr_columns, columns):
# reorder according to the columns
if columns is not None and len(columns) and arr_columns is not None and len(arr_columns):
@@ -4562,6 +4562,7 @@ def _reorder_arrays(arrays, arr_columns, columns):
arrays = [arrays[i] for i in indexer]
return arrays, arr_columns
+
def _list_to_arrays(data, columns, coerce_float=False, dtype=None):
if len(data) > 0 and isinstance(data[0], tuple):
content = list(lib.to_object_array_tuples(data).T)
diff --git a/pandas/core/generic.py b/pandas/core/generic.py
index d8a03cef16c9e..bb47709532523 100644
--- a/pandas/core/generic.py
+++ b/pandas/core/generic.py
@@ -18,9 +18,7 @@
from pandas import compat, _np_version_under1p7
from pandas.compat import map, zip, lrange, string_types, isidentifier
from pandas.core.common import (isnull, notnull, is_list_like,
- _values_from_object,
- _infer_dtype_from_scalar, _maybe_promote,
- ABCSeries)
+ _values_from_object, _maybe_promote, ABCSeries)
import pandas.core.nanops as nanops
from pandas.util.decorators import Appender, Substitution
@@ -36,6 +34,7 @@
def is_dictlike(x):
return isinstance(x, (dict, com.ABCSeries))
+
def _single_replace(self, to_replace, method, inplace, limit):
orig_dtype = self.dtype
result = self if inplace else self.copy()
@@ -1844,7 +1843,7 @@ def replace(self, to_replace=None, value=None, inplace=False, limit=None,
self._consolidate_inplace()
if value is None:
- if isinstance(to_replace, list):
+ if isinstance(to_replace, (tuple, list)):
return _single_replace(self, to_replace, method, inplace,
limit)
@@ -1856,7 +1855,7 @@ def replace(self, to_replace=None, value=None, inplace=False, limit=None,
to_replace = regex
regex = True
- items = to_replace.items()
+ items = list(compat.iteritems(to_replace))
keys, values = zip(*items)
are_mappings = [is_dictlike(v) for v in values]
@@ -1899,7 +1898,7 @@ def replace(self, to_replace=None, value=None, inplace=False, limit=None,
regex=regex)
# {'A': NA} -> 0
- elif not isinstance(value, (list, np.ndarray)):
+ elif not com.is_list_like(value):
new_data = self._data
for k, src in compat.iteritems(to_replace):
if k in self:
@@ -1911,9 +1910,8 @@ def replace(self, to_replace=None, value=None, inplace=False, limit=None,
raise TypeError('Fill value must be scalar, dict, or '
'Series')
- elif isinstance(to_replace, (list, np.ndarray)):
- # [NA, ''] -> [0, 'missing']
- if isinstance(value, (list, np.ndarray)):
+ elif com.is_list_like(to_replace): # [NA, ''] -> [0, 'missing']
+ if com.is_list_like(value):
if len(to_replace) != len(value):
raise ValueError('Replacement lists must match '
'in length. Expecting %d got %d ' %
@@ -1928,11 +1926,13 @@ def replace(self, to_replace=None, value=None, inplace=False, limit=None,
inplace=inplace, regex=regex)
elif to_replace is None:
if not (com.is_re_compilable(regex) or
- isinstance(regex, (list, np.ndarray)) or is_dictlike(regex)):
+ com.is_list_like(regex) or
+ is_dictlike(regex)):
raise TypeError("'regex' must be a string or a compiled "
"regular expression or a list or dict of "
"strings or regular expressions, you "
- "passed a {0}".format(type(regex)))
+ "passed a"
+ " {0!r}".format(type(regex).__name__))
return self.replace(regex, value, inplace=inplace, limit=limit,
regex=True)
else:
@@ -1948,12 +1948,13 @@ def replace(self, to_replace=None, value=None, inplace=False, limit=None,
inplace=inplace,
regex=regex)
- elif not isinstance(value, (list, np.ndarray)): # NA -> 0
+ elif not com.is_list_like(value): # NA -> 0
new_data = self._data.replace(to_replace, value,
inplace=inplace, regex=regex)
else:
- raise TypeError('Invalid "to_replace" type: '
- '{0}'.format(type(to_replace))) # pragma: no cover
+ msg = ('Invalid "to_replace" type: '
+ '{0!r}').format(type(to_replace).__name__)
+ raise TypeError(msg) # pragma: no cover
new_data = new_data.convert(copy=not inplace, convert_numeric=False)
diff --git a/pandas/core/internals.py b/pandas/core/internals.py
index 9abcdd8ea4780..070745d73b307 100644
--- a/pandas/core/internals.py
+++ b/pandas/core/internals.py
@@ -992,6 +992,7 @@ class NumericBlock(Block):
is_numeric = True
_can_hold_na = True
+
class FloatBlock(NumericBlock):
is_float = True
_downcast_dtype = 'int64'
@@ -1064,6 +1065,7 @@ def _try_cast(self, element):
def should_store(self, value):
return com.is_integer_dtype(value) and value.dtype == self.dtype
+
class TimeDeltaBlock(IntBlock):
is_timedelta = True
_can_hold_na = True
@@ -1130,6 +1132,7 @@ def to_native_types(self, slicer=None, na_rep=None, **kwargs):
for val in values.ravel()[imask]], dtype=object)
return rvalues.tolist()
+
class BoolBlock(NumericBlock):
is_bool = True
_can_hold_na = False
@@ -1677,6 +1680,7 @@ def split_block_at(self, item):
def _try_cast_result(self, result, dtype=None):
return result
+
def make_block(values, items, ref_items, klass=None, ndim=None, dtype=None, fastpath=False, placement=None):
if klass is None:
diff --git a/pandas/tests/test_frame.py b/pandas/tests/test_frame.py
index 6f8031538e520..2e386a7e2816a 100644
--- a/pandas/tests/test_frame.py
+++ b/pandas/tests/test_frame.py
@@ -17,10 +17,10 @@
map, zip, range, long, lrange, lmap, lzip,
OrderedDict, cPickle as pickle, u, StringIO
)
-from pandas import compat, _np_version_under1p7
+from pandas import compat
from numpy import random, nan
-from numpy.random import randn, rand
+from numpy.random import randn
import numpy as np
import numpy.ma as ma
from numpy.testing import assert_array_equal
@@ -47,9 +47,6 @@
ensure_clean)
from pandas.core.indexing import IndexingError
from pandas.core.common import PandasError
-from pandas.compat import OrderedDict
-from pandas.computation.expr import Expr
-import pandas.computation as comp
import pandas.util.testing as tm
import pandas.lib as lib
@@ -2367,7 +2364,6 @@ def test_insert_error_msmgs(self):
with assertRaisesRegexp(TypeError, msg):
df['gr'] = df.groupby(['b', 'c']).count()
-
def test_constructor_subclass_dict(self):
# Test for passing dict subclass to constructor
data = {'col1': tm.TestSubDict((x, 10.0 * x) for x in range(10)),
@@ -2498,7 +2494,6 @@ def test_constructor_ndarray(self):
frame = DataFrame(['foo', 'bar'], index=[0, 1], columns=['A'])
self.assertEqual(len(frame), 2)
-
def test_constructor_maskedarray(self):
self._check_basic_constructor(ma.masked_all)
@@ -3052,7 +3047,6 @@ def test_constructor_column_duplicates(self):
[('a', [8]), ('a', [5]), ('b', [6])],
columns=['b', 'a', 'a'])
-
def test_column_dups_operations(self):
def check(result, expected=None):
@@ -6845,7 +6839,7 @@ def test_replace_inplace(self):
self.tsframe['A'][-5:] = nan
tsframe = self.tsframe.copy()
- res = tsframe.replace(nan, 0, inplace=True)
+ tsframe.replace(nan, 0, inplace=True)
assert_frame_equal(tsframe, self.tsframe.fillna(0))
self.assertRaises(TypeError, self.tsframe.replace, nan, inplace=True)
@@ -7618,6 +7612,46 @@ def test_replace_input_formats(self):
def test_replace_limit(self):
pass
+ def test_replace_dict_no_regex(self):
+ answer = Series({0: 'Strongly Agree', 1: 'Agree', 2: 'Neutral', 3:
+ 'Disagree', 4: 'Strongly Disagree'})
+ weights = {'Agree': 4, 'Disagree': 2, 'Neutral': 3, 'Strongly Agree':
+ 5, 'Strongly Disagree': 1}
+ expected = Series({0: 5, 1: 4, 2: 3, 3: 2, 4: 1})
+ result = answer.replace(weights)
+ tm.assert_series_equal(result, expected)
+
+ def test_replace_series_no_regex(self):
+ answer = Series({0: 'Strongly Agree', 1: 'Agree', 2: 'Neutral', 3:
+ 'Disagree', 4: 'Strongly Disagree'})
+ weights = Series({'Agree': 4, 'Disagree': 2, 'Neutral': 3,
+ 'Strongly Agree': 5, 'Strongly Disagree': 1})
+ expected = Series({0: 5, 1: 4, 2: 3, 3: 2, 4: 1})
+ result = answer.replace(weights)
+ tm.assert_series_equal(result, expected)
+
+ def test_replace_dict_tuple_list_ordering_remains_the_same(self):
+ df = DataFrame(dict(A=[nan, 1]))
+ res1 = df.replace(to_replace={nan: 0, 1: -1e8})
+ res2 = df.replace(to_replace=(1, nan), value=[-1e8, 0])
+ res3 = df.replace(to_replace=[1, nan], value=[-1e8, 0])
+
+ expected = DataFrame({'A': [0, -1e8]})
+ tm.assert_frame_equal(res1, res2)
+ tm.assert_frame_equal(res2, res3)
+ tm.assert_frame_equal(res3, expected)
+
+ def test_replace_doesnt_replace_with_no_regex(self):
+ from pandas.compat import StringIO
+ raw = """fol T_opp T_Dir T_Enh
+ 0 1 0 0 vo
+ 1 2 vr 0 0
+ 2 2 0 0 0
+ 3 3 0 bt 0"""
+ df = read_csv(StringIO(raw), sep=r'\s+')
+ res = df.replace({'\D': 1})
+ tm.assert_frame_equal(df, res)
+
def test_combine_multiple_frames_dtypes(self):
from pandas import concat
@@ -8713,7 +8747,6 @@ def test_apply_ignore_failures(self):
expected = self.mixed_frame._get_numeric_data().apply(np.mean)
assert_series_equal(result, expected)
-
def test_apply_mixed_dtype_corner(self):
df = DataFrame({'A': ['foo'],
'B': [1.]})
| This avoids dict replacement keys seemingly being passed as regular expressions
closes #5143.
| https://api.github.com/repos/pandas-dev/pandas/pulls/5145 | 2013-10-07T23:24:10Z | 2013-10-08T04:35:21Z | 2013-10-08T04:35:21Z | 2014-06-24T07:44:28Z |
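A minimal sketch (not part of the PR itself) of the behaviour the new tests assert: when a dict is passed to `replace`, its keys are matched as literal values, not as regular expressions.

```python
import pandas as pd

# Map categorical answers to numeric weights; the dict keys ("Agree", etc.)
# are matched literally, never compiled as regexes.
answers = pd.Series(["Agree", "Neutral", "Disagree"])
weights = {"Agree": 4, "Neutral": 3, "Disagree": 2}
result = answers.replace(weights)
# result.tolist() == [4, 3, 2]
```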
TST: fillna(values) equiv of replace(np.nan,values) | diff --git a/pandas/tests/test_frame.py b/pandas/tests/test_frame.py
index 8a40c359d99a5..6f8031538e520 100644
--- a/pandas/tests/test_frame.py
+++ b/pandas/tests/test_frame.py
@@ -3744,31 +3744,31 @@ def tuple_generator(length):
for i in range(length):
letters = 'ABCDEFGHIJKLMNOPQRSTUVWXYZ'
yield (i, letters[i % len(letters)], i/length)
-
+
columns_names = ['Integer', 'String', 'Float']
columns = [[i[j] for i in tuple_generator(10)] for j in range(len(columns_names))]
data = {'Integer': columns[0], 'String': columns[1], 'Float': columns[2]}
expected = DataFrame(data, columns=columns_names)
-
+
generator = tuple_generator(10)
result = DataFrame.from_records(generator, columns=columns_names)
assert_frame_equal(result, expected)
-
+
def test_from_records_lists_generator(self):
def list_generator(length):
for i in range(length):
letters = 'ABCDEFGHIJKLMNOPQRSTUVWXYZ'
yield [i, letters[i % len(letters)], i/length]
-
+
columns_names = ['Integer', 'String', 'Float']
columns = [[i[j] for i in list_generator(10)] for j in range(len(columns_names))]
data = {'Integer': columns[0], 'String': columns[1], 'Float': columns[2]}
expected = DataFrame(data, columns=columns_names)
-
+
generator = list_generator(10)
result = DataFrame.from_records(generator, columns=columns_names)
assert_frame_equal(result, expected)
-
+
def test_from_records_columns_not_modified(self):
tuples = [(1, 2, 3),
(1, 2, 3),
@@ -6745,6 +6745,13 @@ def test_fillna_dtype_conversion(self):
expected = DataFrame('nan',index=lrange(3),columns=['A','B'])
assert_frame_equal(result, expected)
+ # equiv of replace
+ df = DataFrame(dict(A = [1,np.nan], B = [1.,2.]))
+ for v in ['',1,np.nan,1.0]:
+ expected = df.replace(np.nan,v)
+ result = df.fillna(v)
+ assert_frame_equal(result, expected)
+
def test_ffill(self):
self.tsframe['A'][:5] = nan
self.tsframe['A'][-5:] = nan
| closes #5136 (just tests), as not a bug
| https://api.github.com/repos/pandas-dev/pandas/pulls/5139 | 2013-10-07T12:48:49Z | 2013-10-07T13:00:51Z | 2013-10-07T13:00:51Z | 2014-06-26T11:25:23Z |
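A minimal sketch of the equivalence the added test checks: for a frame with missing values, `fillna(value)` should give the same result as `replace(np.nan, value)`.

```python
import numpy as np
import pandas as pd

# Both calls should fill the single NaN in column A with 0 and leave B alone.
df = pd.DataFrame({"A": [1.0, np.nan], "B": [1.0, 2.0]})
filled = df.fillna(0)
replaced = df.replace(np.nan, 0)
```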
ENH: json default_handler param | diff --git a/doc/source/io.rst b/doc/source/io.rst
index 0fabfa7077a95..9a893fb18cc8e 100644
--- a/doc/source/io.rst
+++ b/doc/source/io.rst
@@ -1054,8 +1054,9 @@ with optional parameters:
- ``double_precision`` : The number of decimal places to use when encoding floating point values, default 10.
- ``force_ascii`` : force encoded string to be ASCII, default True.
- ``date_unit`` : The time unit to encode to, governs timestamp and ISO8601 precision. One of 's', 'ms', 'us' or 'ns' for seconds, milliseconds, microseconds and nanoseconds respectively. Default 'ms'.
+- ``default_handler`` : The handler to call if an object cannot otherwise be converted to a suitable format for JSON. Takes a single argument, which is the object to convert, and returns a serialisable object.
-Note NaN's, NaT's and None will be converted to null and datetime objects will be converted based on the date_format and date_unit parameters.
+Note ``NaN``'s, ``NaT``'s and ``None`` will be converted to ``null`` and ``datetime`` objects will be converted based on the ``date_format`` and ``date_unit`` parameters.
.. ipython:: python
@@ -1098,6 +1099,48 @@ Writing to a file, with a date index and a date column
dfj2.to_json('test.json')
open('test.json').read()
+If the JSON serialiser cannot handle the container contents directly it will fallback in the following manner:
+
+- if a ``toDict`` method is defined by the unrecognised object then that
+ will be called and its returned ``dict`` will be JSON serialised.
+- if a ``default_handler`` has been passed to ``to_json`` that will
+ be called to convert the object.
+- otherwise an attempt is made to convert the object to a ``dict`` by
+ parsing its contents. However if the object is complex this will often fail
+ with an ``OverflowError``.
+
+Your best bet when encountering ``OverflowError`` during serialisation
+is to specify a ``default_handler``. For example ``timedelta`` can cause
+problems:
+
+.. ipython:: python
+ :suppress:
+
+ from datetime import timedelta
+ dftd = DataFrame([timedelta(23), timedelta(seconds=5), 42])
+
+.. code-block:: ipython
+
+ In [141]: from datetime import timedelta
+
+ In [142]: dftd = DataFrame([timedelta(23), timedelta(seconds=5), 42])
+
+ In [143]: dftd.to_json()
+
+ ---------------------------------------------------------------------------
+ OverflowError Traceback (most recent call last)
+ OverflowError: Maximum recursion level reached
+
+which can be dealt with by specifying a simple ``default_handler``:
+
+.. ipython:: python
+
+ dftd.to_json(default_handler=str)
+
+ def my_handler(obj):
+ return obj.total_seconds()
+ dftd.to_json(default_handler=my_handler)
+
Reading JSON
~~~~~~~~~~~~
diff --git a/doc/source/release.rst b/doc/source/release.rst
index 3e072da164ab2..661e55f21e3ee 100644
--- a/doc/source/release.rst
+++ b/doc/source/release.rst
@@ -233,6 +233,8 @@ API Changes
- added ``date_unit`` parameter to specify resolution of timestamps. Options
are seconds, milliseconds, microseconds and nanoseconds. (:issue:`4362`, :issue:`4498`).
+ - added ``default_handler`` parameter to allow a callable to be passed which will be
+ responsible for handling otherwise unserialisable objects.
- ``Index`` and ``MultiIndex`` changes (:issue:`4039`):
diff --git a/pandas/core/generic.py b/pandas/core/generic.py
index 5ac9d12de8a9a..d8a03cef16c9e 100644
--- a/pandas/core/generic.py
+++ b/pandas/core/generic.py
@@ -707,7 +707,8 @@ def __setstate__(self, state):
# I/O Methods
def to_json(self, path_or_buf=None, orient=None, date_format='epoch',
- double_precision=10, force_ascii=True, date_unit='ms'):
+ double_precision=10, force_ascii=True, date_unit='ms',
+ default_handler=None):
"""
Convert the object to a JSON string.
@@ -728,18 +729,21 @@ def to_json(self, path_or_buf=None, orient=None, date_format='epoch',
* DataFrame
- default is 'columns'
- - allowed values are: {'split','records','index','columns','values'}
+ - allowed values are:
+ {'split','records','index','columns','values'}
* The format of the JSON string
- - split : dict like {index -> [index], columns -> [columns], data -> [values]}
- - records : list like [{column -> value}, ... , {column -> value}]
+ - split : dict like
+ {index -> [index], columns -> [columns], data -> [values]}
+ - records : list like
+ [{column -> value}, ... , {column -> value}]
- index : dict like {index -> {column -> value}}
- columns : dict like {column -> {index -> value}}
- values : just the values array
- date_format : type of date conversion (epoch = epoch milliseconds, iso = ISO8601)
- default is epoch
+ date_format : type of date conversion, epoch or iso
+ epoch = epoch milliseconds, iso = ISO8601, default is epoch
double_precision : The number of decimal places to use when encoding
floating point values, default 10.
force_ascii : force encoded string to be ASCII, default True.
@@ -747,6 +751,10 @@ def to_json(self, path_or_buf=None, orient=None, date_format='epoch',
The time unit to encode to, governs timestamp and ISO8601
precision. One of 's', 'ms', 'us', 'ns' for second, millisecond,
microsecond, and nanosecond respectively.
+ default_handler : callable, default None
+ Handler to call if object cannot otherwise be converted to a
+ suitable format for JSON. Should receive a single argument which is
+ the object to convert and return a serialisable object.
Returns
-------
@@ -761,7 +769,8 @@ def to_json(self, path_or_buf=None, orient=None, date_format='epoch',
date_format=date_format,
double_precision=double_precision,
force_ascii=force_ascii,
- date_unit=date_unit)
+ date_unit=date_unit,
+ default_handler=default_handler)
def to_hdf(self, path_or_buf, key, **kwargs):
""" activate the HDFStore
diff --git a/pandas/io/json.py b/pandas/io/json.py
index 497831f597681..c81064d1c0516 100644
--- a/pandas/io/json.py
+++ b/pandas/io/json.py
@@ -17,19 +17,21 @@
dumps = _json.dumps
### interface to/from ###
+
def to_json(path_or_buf, obj, orient=None, date_format='epoch',
- double_precision=10, force_ascii=True, date_unit='ms'):
+ double_precision=10, force_ascii=True, date_unit='ms',
+ default_handler=None):
if isinstance(obj, Series):
s = SeriesWriter(
obj, orient=orient, date_format=date_format,
double_precision=double_precision, ensure_ascii=force_ascii,
- date_unit=date_unit).write()
+ date_unit=date_unit, default_handler=default_handler).write()
elif isinstance(obj, DataFrame):
s = FrameWriter(
obj, orient=orient, date_format=date_format,
double_precision=double_precision, ensure_ascii=force_ascii,
- date_unit=date_unit).write()
+ date_unit=date_unit, default_handler=default_handler).write()
else:
raise NotImplementedError
@@ -45,7 +47,7 @@ def to_json(path_or_buf, obj, orient=None, date_format='epoch',
class Writer(object):
def __init__(self, obj, orient, date_format, double_precision,
- ensure_ascii, date_unit):
+ ensure_ascii, date_unit, default_handler=None):
self.obj = obj
if orient is None:
@@ -56,6 +58,7 @@ def __init__(self, obj, orient, date_format, double_precision,
self.double_precision = double_precision
self.ensure_ascii = ensure_ascii
self.date_unit = date_unit
+ self.default_handler = default_handler
self.is_copy = False
self._format_axes()
@@ -70,7 +73,9 @@ def write(self):
double_precision=self.double_precision,
ensure_ascii=self.ensure_ascii,
date_unit=self.date_unit,
- iso_dates=self.date_format == 'iso')
+ iso_dates=self.date_format == 'iso',
+ default_handler=self.default_handler)
+
class SeriesWriter(Writer):
_default_orient = 'index'
@@ -121,13 +126,17 @@ def read_json(path_or_buf=None, orient=None, typ='frame', dtype=True,
- default is ``'columns'``
- allowed values are: {'split','records','index','columns','values'}
- - The DataFrame index must be unique for orients 'index' and 'columns'.
- - The DataFrame columns must be unique for orients 'index', 'columns', and 'records'.
+ - The DataFrame index must be unique for orients 'index' and
+ 'columns'.
+ - The DataFrame columns must be unique for orients 'index',
+ 'columns', and 'records'.
* The format of the JSON string
- - split : dict like ``{index -> [index], columns -> [columns], data -> [values]}``
- - records : list like ``[{column -> value}, ... , {column -> value}]``
+ - split : dict like
+ ``{index -> [index], columns -> [columns], data -> [values]}``
+ - records : list like
+ ``[{column -> value}, ... , {column -> value}]``
- index : dict like ``{index -> {column -> value}}``
- columns : dict like ``{column -> {index -> value}}``
- values : just the values array
@@ -384,7 +393,6 @@ class SeriesParser(Parser):
_default_orient = 'index'
_split_keys = ('name', 'index', 'data')
-
def _parse_no_numpy(self):
json = self.json
@@ -542,7 +550,7 @@ def is_ok(col):
#----------------------------------------------------------------------
# JSON normalization routines
-def nested_to_record(ds,prefix="",level=0):
+def nested_to_record(ds, prefix="", level=0):
"""a simplified json_normalize
converts a nested dict into a flat dict ("record"), unlike json_normalize,
@@ -557,7 +565,8 @@ def nested_to_record(ds,prefix="",level=0):
d - dict or list of dicts, matching `ds`
Example:
- IN[52]: nested_to_record(dict(flat1=1,dict1=dict(c=1,d=2),nested=dict(e=dict(c=1,d=2),d=2)))
+ IN[52]: nested_to_record(dict(flat1=1,dict1=dict(c=1,d=2),
+ nested=dict(e=dict(c=1,d=2),d=2)))
Out[52]:
{'dict1.c': 1,
'dict1.d': 2,
@@ -567,7 +576,7 @@ def nested_to_record(ds,prefix="",level=0):
'nested.e.d': 2}
"""
singleton = False
- if isinstance(ds,dict):
+ if isinstance(ds, dict):
ds = [ds]
singleton = True
@@ -575,23 +584,23 @@ def nested_to_record(ds,prefix="",level=0):
for d in ds:
new_d = copy.deepcopy(d)
- for k,v in d.items():
+ for k, v in d.items():
# each key gets renamed with prefix
if level == 0:
newkey = str(k)
else:
- newkey = prefix+'.'+ str(k)
+ newkey = prefix + '.' + str(k)
# only dicts gets recurse-flattend
# only at level>1 do we rename the rest of the keys
- if not isinstance(v,dict):
- if level!=0: # so we skip copying for top level, common case
+ if not isinstance(v, dict):
+ if level != 0: # so we skip copying for top level, common case
v = new_d.pop(k)
- new_d[newkey]= v
+ new_d[newkey] = v
continue
else:
v = new_d.pop(k)
- new_d.update(nested_to_record(v,newkey,level+1))
+ new_d.update(nested_to_record(v, newkey, level+1))
new_ds.append(new_d)
if singleton:
@@ -663,13 +672,14 @@ def _pull_field(js, spec):
data = [data]
if record_path is None:
- if any([isinstance(x,dict) for x in compat.itervalues(data[0])]):
+ if any([isinstance(x, dict) for x in compat.itervalues(data[0])]):
# naive normalization, this is idempotent for flat records
# and potentially will inflate the data considerably for
# deeply nested structures:
# {VeryLong: { b: 1,c:2}} -> {VeryLong.b:1 ,VeryLong.c:@}
#
- # TODO: handle record value which are lists, at least error reasonabley
+ # TODO: handle record value which are lists, at least error
+ # reasonably
data = nested_to_record(data)
return DataFrame(data)
elif not isinstance(record_path, list):
diff --git a/pandas/io/tests/test_json/test_pandas.py b/pandas/io/tests/test_json/test_pandas.py
index dea7f2b079cef..8c7d89641bdd4 100644
--- a/pandas/io/tests/test_json/test_pandas.py
+++ b/pandas/io/tests/test_json/test_pandas.py
@@ -575,3 +575,16 @@ def test_url(self):
url = 'http://search.twitter.com/search.json?q=pandas%20python'
result = read_json(url)
+
+ def test_default_handler(self):
+ from datetime import timedelta
+ frame = DataFrame([timedelta(23), timedelta(seconds=5)])
+ self.assertRaises(OverflowError, frame.to_json)
+ expected = DataFrame([str(timedelta(23)), str(timedelta(seconds=5))])
+ assert_frame_equal(
+ expected, pd.read_json(frame.to_json(default_handler=str)))
+
+ def my_handler_raises(obj):
+ raise TypeError
+ self.assertRaises(
+ TypeError, frame.to_json, default_handler=my_handler_raises)
diff --git a/pandas/io/tests/test_json/test_ujson.py b/pandas/io/tests/test_json/test_ujson.py
index 13ccf0bbd1742..4eb5b94ccf091 100644
--- a/pandas/io/tests/test_json/test_ujson.py
+++ b/pandas/io/tests/test_json/test_ujson.py
@@ -836,6 +836,51 @@ def toDict(self):
dec = ujson.decode(output)
self.assertEquals(dec, d)
+ def test_defaultHandler(self):
+
+ class _TestObject(object):
+
+ def __init__(self, val):
+ self.val = val
+
+ @property
+ def recursive_attr(self):
+ return _TestObject("recursive_attr")
+
+ def __str__(self):
+ return str(self.val)
+
+ self.assertRaises(OverflowError, ujson.encode, _TestObject("foo"))
+ self.assertEquals('"foo"', ujson.encode(_TestObject("foo"),
+ default_handler=str))
+
+ def my_handler(obj):
+ return "foobar"
+ self.assertEquals('"foobar"', ujson.encode(_TestObject("foo"),
+ default_handler=my_handler))
+
+ def my_handler_raises(obj):
+ raise TypeError("I raise for anything")
+ with tm.assertRaisesRegexp(TypeError, "I raise for anything"):
+ ujson.encode(_TestObject("foo"), default_handler=my_handler_raises)
+
+ def my_int_handler(obj):
+ return 42
+ self.assertEquals(
+ 42, ujson.decode(ujson.encode(_TestObject("foo"),
+ default_handler=my_int_handler)))
+
+ def my_obj_handler(obj):
+ return datetime.datetime(2013, 2, 3)
+ self.assertEquals(
+ ujson.decode(ujson.encode(datetime.datetime(2013, 2, 3))),
+ ujson.decode(ujson.encode(_TestObject("foo"),
+ default_handler=my_obj_handler)))
+
+ l = [_TestObject("foo"), _TestObject("bar")]
+ self.assertEquals(json.loads(json.dumps(l, default=str)),
+ ujson.decode(ujson.encode(l, default_handler=str)))
+
class NumpyJSONTests(TestCase):
diff --git a/pandas/src/ujson/python/objToJSON.c b/pandas/src/ujson/python/objToJSON.c
index aefddd7e47bcb..50010f4e7641a 100644
--- a/pandas/src/ujson/python/objToJSON.c
+++ b/pandas/src/ujson/python/objToJSON.c
@@ -120,6 +120,8 @@ typedef struct __PyObjectEncoder
// output format style for pandas data types
int outputFormat;
int originalOutputFormat;
+
+ PyObject *defaultHandler;
} PyObjectEncoder;
#define GET_TC(__ptrtc) ((TypeContext *)((__ptrtc)->prv))
@@ -256,6 +258,7 @@ static void *PandasDateTimeStructToJSON(pandas_datetimestruct *dts, JSONTypeCont
{
PRINTMARK();
PyErr_SetString(PyExc_ValueError, "Could not convert datetime value to string");
+ ((JSONObjectEncoder*) tc->encoder)->errorMsg = "";
PyObject_Free(GET_TC(tc)->cStr);
return NULL;
}
@@ -1160,7 +1163,7 @@ char** NpyArr_encodeLabels(PyArrayObject* labels, JSONObjectEncoder* enc, npy_in
void Object_beginTypeContext (JSOBJ _obj, JSONTypeContext *tc)
{
- PyObject *obj, *exc, *toDictFunc;
+ PyObject *obj, *exc, *toDictFunc, *defaultObj;
TypeContext *pc;
PyObjectEncoder *enc;
double val;
@@ -1630,6 +1633,23 @@ void Object_beginTypeContext (JSOBJ _obj, JSONTypeContext *tc)
PyErr_Clear();
+ if (enc->defaultHandler)
+ {
+ PRINTMARK();
+ defaultObj = PyObject_CallFunctionObjArgs(enc->defaultHandler, obj, NULL);
+ if (defaultObj == NULL || PyErr_Occurred())
+ {
+ if (!PyErr_Occurred())
+ {
+ PyErr_SetString(PyExc_TypeError, "Failed to execute default handler");
+ }
+ goto INVALID;
+ }
+ encode (defaultObj, enc, NULL, 0);
+ Py_DECREF(defaultObj);
+ goto INVALID;
+ }
+
PRINTMARK();
tc->type = JT_OBJECT;
pc->iterBegin = Dir_iterBegin;
@@ -1716,7 +1736,7 @@ char *Object_iterGetName(JSOBJ obj, JSONTypeContext *tc, size_t *outLen)
PyObject* objToJSON(PyObject* self, PyObject *args, PyObject *kwargs)
{
- static char *kwlist[] = { "obj", "ensure_ascii", "double_precision", "encode_html_chars", "orient", "date_unit", "iso_dates", NULL};
+ static char *kwlist[] = { "obj", "ensure_ascii", "double_precision", "encode_html_chars", "orient", "date_unit", "iso_dates", "default_handler", NULL};
char buffer[65536];
char *ret;
@@ -1728,6 +1748,7 @@ PyObject* objToJSON(PyObject* self, PyObject *args, PyObject *kwargs)
char *sOrient = NULL;
char *sdateFormat = NULL;
PyObject *oisoDates = 0;
+ PyObject *odefHandler = 0;
PyObjectEncoder pyEncoder =
{
@@ -1759,10 +1780,11 @@ PyObject* objToJSON(PyObject* self, PyObject *args, PyObject *kwargs)
pyEncoder.datetimeIso = 0;
pyEncoder.datetimeUnit = PANDAS_FR_ms;
pyEncoder.outputFormat = COLUMNS;
+ pyEncoder.defaultHandler = 0;
PRINTMARK();
- if (!PyArg_ParseTupleAndKeywords(args, kwargs, "O|OiOssO", kwlist, &oinput, &oensureAscii, &idoublePrecision, &oencodeHTMLChars, &sOrient, &sdateFormat, &oisoDates))
+ if (!PyArg_ParseTupleAndKeywords(args, kwargs, "O|OiOssOO", kwlist, &oinput, &oensureAscii, &idoublePrecision, &oencodeHTMLChars, &sOrient, &sdateFormat, &oisoDates, &odefHandler))
{
return NULL;
}
@@ -1851,6 +1873,16 @@ PyObject* objToJSON(PyObject* self, PyObject *args, PyObject *kwargs)
}
+ if (odefHandler != NULL && odefHandler != Py_None)
+ {
+ if (!PyCallable_Check(odefHandler))
+ {
+ PyErr_SetString (PyExc_TypeError, "Default handler is not callable");
+ return NULL;
+ }
+ pyEncoder.defaultHandler = odefHandler;
+ }
+
pyEncoder.originalOutputFormat = pyEncoder.outputFormat;
PRINTMARK();
ret = JSON_EncodeObject (oinput, encoder, buffer, sizeof (buffer));
| This adds a `default_handler` param to `to_json`, which if passed must be a callable which takes one argument (the object to convert) and returns a serialisable object. The `default_handler` will only be used if an object cannot otherwise be serialised.
Note right now the JSON serialiser has direct handling for:
- Python basic types `bool`, `int`, `long`, `float`.
- Python containers `dict`, `list`, `tuple` and `set`.
- Python byte and unicode strings.
- Python decimal.
- Python `datetime` and `date`.
- Python `None`.
- Numpy arrays.
- Numpy scalars
- `NaN` and `NaT`
- Pandas `Index`, `Series`, `DataFrame` and `Timestamp`
- any subclasses of the above.
If an object is not recognised the fallback behaviour with this PR would be:
1. if a `toDict` method is defined by the unrecognised object that
will be called and its returned `dict` will be JSON serialised.
2. if a `default_handler` has been passed to `to_json` that will
be called to convert the object.
3. otherwise an attempt is made to convert the object to a `dict` by
iterating over its attributes.
| https://api.github.com/repos/pandas-dev/pandas/pulls/5138 | 2013-10-07T06:02:37Z | 2013-10-07T12:36:33Z | 2013-10-07T12:36:33Z | 2014-06-15T16:16:36Z |
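A hypothetical illustration of the fallback order described above — the `Point` class and the lambda handler are invented for this example. The serialiser does not recognise `Point` and it defines no `toDict`, so `default_handler` is invoked for each value.

```python
import json
import pandas as pd

class Point(object):
    # an arbitrary object the JSON serialiser has no direct handling for
    def __init__(self, x, y):
        self.x, self.y = x, y

df = pd.DataFrame({"p": [Point(1, 2)]})
# without default_handler this would fall through to attribute iteration;
# with it, each Point is converted by the handler before encoding
out = df.to_json(default_handler=lambda o: {"x": o.x, "y": o.y})
parsed = json.loads(out)  # {"p": {"0": {"x": 1, "y": 2}}}
```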
ENH: Json support for datetime.time | diff --git a/doc/source/release.rst b/doc/source/release.rst
index 3e072da164ab2..e6453e7d7411d 100644
--- a/doc/source/release.rst
+++ b/doc/source/release.rst
@@ -97,6 +97,7 @@ Improvements to existing features
overlapping color and style arguments (:issue:`4402`)
- Significant table writing performance improvements in ``HDFStore``
- JSON date serialisation now performed in low-level C code.
+ - JSON support for encoding datetime.time
- Add ``drop_level`` argument to xs (:issue:`4180`)
- Can now resample a DataFrame with ohlc (:issue:`2320`)
- ``Index.copy()`` and ``MultiIndex.copy()`` now accept keyword arguments to
diff --git a/pandas/io/tests/test_json/test_ujson.py b/pandas/io/tests/test_json/test_ujson.py
index 13ccf0bbd1742..632a712af874a 100644
--- a/pandas/io/tests/test_json/test_ujson.py
+++ b/pandas/io/tests/test_json/test_ujson.py
@@ -22,6 +22,7 @@
from numpy.testing import (assert_array_equal,
assert_array_almost_equal_nulp,
assert_approx_equal)
+import pytz
from pandas import DataFrame, Series, Index, NaT, DatetimeIndex
import pandas.util.testing as tm
@@ -356,6 +357,17 @@ def test_encodeDateConversion(self):
self.assertEquals(int(expected), json.loads(output))
self.assertEquals(int(expected), ujson.decode(output))
+ def test_encodeTimeConversion(self):
+ tests = [
+ datetime.time(),
+ datetime.time(1, 2, 3),
+ datetime.time(10, 12, 15, 343243),
+ datetime.time(10, 12, 15, 343243, pytz.utc)]
+ for test in tests:
+ output = ujson.encode(test)
+ expected = '"%s"' % test.isoformat()
+ self.assertEquals(expected, output)
+
def test_nat(self):
input = NaT
assert ujson.encode(input) == 'null', "Expected null"
diff --git a/pandas/src/ujson/python/objToJSON.c b/pandas/src/ujson/python/objToJSON.c
index aefddd7e47bcb..18e4950a8c1b5 100644
--- a/pandas/src/ujson/python/objToJSON.c
+++ b/pandas/src/ujson/python/objToJSON.c
@@ -309,6 +309,30 @@ static void *NpyDatetime64ToJSON(JSOBJ _obj, JSONTypeContext *tc, void *outValue
return PandasDateTimeStructToJSON(&dts, tc, outValue, _outLen);
}
+static void *PyTimeToJSON(JSOBJ _obj, JSONTypeContext *tc, void *outValue, size_t *outLen)
+{
+ PyObject *obj = (PyObject *) _obj;
+ PyObject *str;
+ PyObject *tmp;
+
+ str = PyObject_CallMethod(obj, "isoformat", NULL);
+ if (str == NULL) {
+ PRINTMARK();
+ PyErr_SetString(PyExc_ValueError, "Failed to convert time");
+ return NULL;
+ }
+ if (PyUnicode_Check(str))
+ {
+ tmp = str;
+ str = PyUnicode_AsUTF8String(str);
+ Py_DECREF(tmp);
+ }
+ outValue = (void *) PyString_AS_STRING (str);
+ *outLen = strlen ((char *) outValue);
+ Py_DECREF(str);
+ return outValue;
+}
+
//=============================================================================
// Numpy array iteration functions
//=============================================================================
@@ -1361,6 +1385,13 @@ void Object_beginTypeContext (JSOBJ _obj, JSONTypeContext *tc)
return;
}
else
+ if (PyTime_Check(obj))
+ {
+ PRINTMARK();
+ pc->PyTypeToJSON = PyTimeToJSON; tc->type = JT_UTF8;
+ return;
+ }
+ else
if (obj == Py_None)
{
PRINTMARK();
| Adds `datetime.time` support to JSON serialiser. Note:
- times are encoded to strings and will be decoded to strings.
- `date_unit` and `date_format` params have no effect on `time` serialisation, they are always iso formatted.
Fixes #4873
| https://api.github.com/repos/pandas-dev/pandas/pulls/5137 | 2013-10-07T03:03:53Z | 2013-10-07T12:27:06Z | 2013-10-07T12:27:05Z | 2014-06-12T23:21:59Z |
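A minimal sketch of the behaviour described above: `datetime.time` values are encoded as ISO-format strings, independent of the `date_format` and `date_unit` parameters.

```python
import datetime
import json
import pandas as pd

# time objects are serialised via isoformat(), e.g. "01:02:03"
s = pd.Series([datetime.time(1, 2, 3), datetime.time(10, 12, 15)])
out = s.to_json()
parsed = json.loads(out)  # {"0": "01:02:03", "1": "10:12:15"}
```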
BUG: Treat a list/ndarray identically for iloc indexing with list-like (GH5006) | diff --git a/doc/source/release.rst b/doc/source/release.rst
index 8488d03f97cbd..3e072da164ab2 100644
--- a/doc/source/release.rst
+++ b/doc/source/release.rst
@@ -297,7 +297,6 @@ API Changes
(:issue:`4501`)
- Support non-unique axes in a Panel via indexing operations (:issue:`4960`)
-
Internal Refactoring
~~~~~~~~~~~~~~~~~~~~
@@ -566,7 +565,7 @@ Bug Fixes
- Provide automatic conversion of ``object`` dtypes on fillna, related (:issue:`5103`)
- Fixed a bug where default options were being overwritten in the option
parser cleaning (:issue:`5121`).
-
+ - Treat a list/ndarray identically for ``iloc`` indexing with list-like (:issue:`5006`)
pandas 0.12.0
-------------
diff --git a/pandas/core/indexing.py b/pandas/core/indexing.py
index 7502b3898d7fb..69114166b3406 100644
--- a/pandas/core/indexing.py
+++ b/pandas/core/indexing.py
@@ -1092,7 +1092,9 @@ def _getitem_axis(self, key, axis=0):
else:
if _is_list_like(key):
- pass
+
+ # force an actual list
+ key = list(key)
else:
key = self._convert_scalar_indexer(key, axis)
diff --git a/pandas/tests/test_indexing.py b/pandas/tests/test_indexing.py
index 67c87277647c8..6292c5874772f 100644
--- a/pandas/tests/test_indexing.py
+++ b/pandas/tests/test_indexing.py
@@ -333,8 +333,15 @@ def test_iloc_getitem_list_int(self):
# list of ints
self.check_result('list int', 'iloc', [0,1,2], 'ix', { 0 : [0,2,4], 1 : [0,3,6], 2: [0,4,8] }, typs = ['ints'])
+ self.check_result('list int', 'iloc', [2], 'ix', { 0 : [4], 1 : [6], 2: [8] }, typs = ['ints'])
self.check_result('list int', 'iloc', [0,1,2], 'indexer', [0,1,2], typs = ['labels','mixed','ts','floats','empty'], fails = IndexError)
+ # array of ints
+ # (GH5006), make sure that a single indexer is returning the correct type
+ self.check_result('array int', 'iloc', np.array([0,1,2]), 'ix', { 0 : [0,2,4], 1 : [0,3,6], 2: [0,4,8] }, typs = ['ints'])
+ self.check_result('array int', 'iloc', np.array([2]), 'ix', { 0 : [4], 1 : [6], 2: [8] }, typs = ['ints'])
+ self.check_result('array int', 'iloc', np.array([0,1,2]), 'indexer', [0,1,2], typs = ['labels','mixed','ts','floats','empty'], fails = IndexError)
+
def test_iloc_getitem_dups(self):
# no dups in panel (bug?)
| closes #5006
| https://api.github.com/repos/pandas-dev/pandas/pulls/5134 | 2013-10-06T23:55:10Z | 2013-10-07T00:10:08Z | 2013-10-07T00:10:08Z | 2014-06-18T14:51:20Z |
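A minimal sketch of the fix asserted by the new tests: a single-element list and a single-element ndarray passed to `iloc` both return a one-row Series, never a scalar.

```python
import numpy as np
import pandas as pd

s = pd.Series([0, 4, 8])
# list and ndarray indexers must behave identically
from_list = s.iloc[[2]]
from_array = s.iloc[np.array([2])]
```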
BUG: set_names should not change is_ relationship | diff --git a/pandas/core/index.py b/pandas/core/index.py
index c39364d58e205..8e98cc6fb25bb 100644
--- a/pandas/core/index.py
+++ b/pandas/core/index.py
@@ -335,7 +335,6 @@ def set_names(self, names, inplace=False):
raise TypeError("Must pass list-like as `names`.")
if inplace:
idx = self
- idx._reset_identity()
else:
idx = self._shallow_copy()
idx._set_names(names)
diff --git a/pandas/tests/test_index.py b/pandas/tests/test_index.py
index a568b72a9ace6..5404b30af8d1c 100644
--- a/pandas/tests/test_index.py
+++ b/pandas/tests/test_index.py
@@ -1887,6 +1887,9 @@ def test_is_(self):
self.assertTrue(mi2.is_(mi))
self.assertTrue(mi.is_(mi2))
self.assertTrue(mi.is_(mi.set_names(["C", "D"])))
+ mi2 = mi.view()
+ mi2.set_names(["E", "F"], inplace=True)
+ self.assertTrue(mi.is_(mi2))
# levels are inherent properties, they change identity
mi3 = mi2.set_levels([lrange(10), lrange(10)])
self.assertFalse(mi3.is_(mi2))
| an inplace `set_names` shouldn't change the index's identity (my bad)
| https://api.github.com/repos/pandas-dev/pandas/pulls/5132 | 2013-10-06T22:25:20Z | 2013-10-06T23:24:16Z | 2013-10-06T23:24:16Z | 2014-07-16T08:33:25Z |
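A minimal sketch of `set_names` behaviour — the fix above additionally keeps the `is_()` identity relationship intact for the `inplace=True` path, which is not exercised here. Non-inplace renaming returns a renamed copy and leaves the original's names untouched.

```python
import pandas as pd

mi = pd.MultiIndex.from_arrays([[1, 2], [3, 4]], names=["A", "B"])
# returns a shallow copy with new names; mi itself is unchanged
renamed = mi.set_names(["E", "F"])
```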
ENH: Added lxml-liberal html parsing flavor | diff --git a/doc/source/release.rst b/doc/source/release.rst
index 8488d03f97cbd..c1a369ffa3ae4 100644
--- a/doc/source/release.rst
+++ b/doc/source/release.rst
@@ -172,7 +172,8 @@ Improvements to existing features
- :meth:`~pandas.io.json.json_normalize` is a new method to allow you to create a flat table
from semi-structured JSON data. :ref:`See the docs<io.json_normalize>` (:issue:`1067`)
- ``DataFrame.from_records()`` will now accept generators (:issue:`4910`)
-
+ - Added ``lxml-liberal`` html parsing flavor (:issue:`5130`)
+
API Changes
~~~~~~~~~~~
diff --git a/pandas/io/html.py b/pandas/io/html.py
index 96bedbf390af6..f0cc5de7b2a32 100644
--- a/pandas/io/html.py
+++ b/pandas/io/html.py
@@ -8,6 +8,7 @@
import numbers
import collections
import warnings
+from functools import partial
from distutils.version import LooseVersion
@@ -165,13 +166,12 @@ class _HtmlFrameParser(object):
See each method's respective documentation for details on their
functionality.
"""
- def __init__(self, io, match, attrs):
- self.io = io
+ def __init__(self, match, attrs):
self.match = match
self.attrs = attrs
- def parse_tables(self):
- tables = self._parse_tables(self._build_doc(), self.match, self.attrs)
+ def parse_tables(self, io):
+ tables = self._parse_tables(self._build_doc(io), self.match, self.attrs)
return (self._build_table(table) for table in tables)
def _parse_raw_data(self, rows):
@@ -314,7 +314,7 @@ def _parse_tfoot(self, table):
"""
raise NotImplementedError
- def _build_doc(self):
+ def _build_doc(self, io):
"""Return a tree-like object that can be used to iterate over the DOM.
Returns
@@ -414,15 +414,15 @@ def _parse_tables(self, doc, match, attrs):
match.pattern)
return result
- def _setup_build_doc(self):
- raw_text = _read(self.io)
+ def _setup_build_doc(self, io):
+ raw_text = _read(io)
if not raw_text:
- raise ValueError('No text parsed from document: %s' % self.io)
+ raise ValueError('No text parsed from document: %s' % io)
return raw_text
- def _build_doc(self):
+ def _build_doc(self, io):
from bs4 import BeautifulSoup
- return BeautifulSoup(self._setup_build_doc(), features='html5lib')
+ return BeautifulSoup(self._setup_build_doc(io), features='html5lib')
def _build_xpath_expr(attrs):
@@ -469,6 +469,8 @@ class _LxmlFrameParser(_HtmlFrameParser):
:class:`_HtmlFrameParser`.
"""
def __init__(self, *args, **kwargs):
+ self.strict = kwargs.pop('strict', True)
+
super(_LxmlFrameParser, self).__init__(*args, **kwargs)
def _text_getter(self, obj):
@@ -500,7 +502,7 @@ def _parse_tables(self, doc, match, kwargs):
raise ValueError("No tables found matching regex %r" % pattern)
return tables
- def _build_doc(self):
+ def _build_doc(self, io):
"""
Raises
------
@@ -519,11 +521,11 @@ def _build_doc(self):
from lxml.html import parse, fromstring, HTMLParser
from lxml.etree import XMLSyntaxError
- parser = HTMLParser(recover=False)
+ parser = HTMLParser(recover=not self.strict)
try:
# try to parse the input in the simplest way
- r = parse(self.io, parser=parser)
+ r = parse(io, parser=parser)
try:
r = r.getroot()
@@ -531,8 +533,8 @@ def _build_doc(self):
pass
except (UnicodeDecodeError, IOError):
# if the input is a blob of html goop
- if not _is_url(self.io):
- r = fromstring(self.io, parser=parser)
+ if not _is_url(io):
+ r = fromstring(io, parser=parser)
try:
r = r.getroot()
@@ -540,7 +542,7 @@ def _build_doc(self):
pass
else:
# not a url
- scheme = parse_url(self.io).scheme
+ scheme = parse_url(io).scheme
if scheme not in _valid_schemes:
# lxml can't parse it
msg = ('%r is not a valid url scheme, valid schemes are '
@@ -572,7 +574,7 @@ def _parse_raw_tfoot(self, table):
expr = './/tfoot//th'
return [_remove_whitespace(x.text_content()) for x in
table.xpath(expr)]
-
+
def _expand_elements(body):
lens = Series(lmap(len, body))
@@ -611,7 +613,8 @@ def _data_to_frame(data, header, index_col, skiprows, infer_types,
_valid_parsers = {'lxml': _LxmlFrameParser, None: _LxmlFrameParser,
'html5lib': _BeautifulSoupHtml5LibFrameParser,
- 'bs4': _BeautifulSoupHtml5LibFrameParser}
+ 'bs4': _BeautifulSoupHtml5LibFrameParser,
+ 'lxml-liberal': partial(_LxmlFrameParser, strict=False),}
def _parser_dispatch(flavor):
@@ -696,10 +699,10 @@ def _parse(flavor, io, match, header, index_col, skiprows, infer_types,
retained = None
for flav in flavor:
parser = _parser_dispatch(flav)
- p = parser(io, compiled_match, attrs)
+ p = parser(compiled_match, attrs)
try:
- tables = p.parse_tables()
+ tables = p.parse_tables(io)
except Exception as caught:
retained = caught
else:
@@ -737,6 +740,9 @@ def read_html(io, match='.+', flavor=None, header=None, index_col=None,
each other, they are both there for backwards compatibility. The
default of ``None`` tries to use ``lxml`` to parse and if that fails it
falls back on ``bs4`` + ``html5lib``.
+ ``lxml-liberal`` - uses lxml parser but allows errors
+ to pass silently and then returns what it can from the parsed tables
+ that lxml is able to find.
header : int or list-like or None, optional
The row (or list of rows for a :class:`~pandas.MultiIndex`) to use to
@@ -816,6 +822,24 @@ def read_html(io, match='.+', flavor=None, header=None, index_col=None,
This function will *always* return a list of :class:`DataFrame` *or*
it will fail, e.g., it will *not* return an empty list.
+
+ lxml-liberal tries hard to parse through broken XML.
+ It lets libxml2 try its best to return a valid HTML tree
+ with all content it can manage to parse.
+ It will not raise an exception on parser errors.
+ You should use libxml2 version 2.6.21 or newer
+ to take advantage of this feature.
+
+ The support for parsing broken HTML depends entirely on libxml2's
+ recovery algorithm.
+ It is not the fault of lxml if you find documents that
+ are so heavily broken that the parser cannot handle them.
+ There is also no guarantee that the resulting tree will
+ contain all data from the original document.
+ The parser may have to drop seriously broken parts when
+ struggling to keep parsing.
+ Especially misplaced meta tags can suffer from this,
+ which may lead to encoding problems.
Examples
--------
diff --git a/pandas/io/tests/test_html.py b/pandas/io/tests/test_html.py
index 9b0fb1cacfb65..7415e33c1ece0 100644
--- a/pandas/io/tests/test_html.py
+++ b/pandas/io/tests/test_html.py
@@ -606,6 +606,33 @@ def test_data_fail(self):
with tm.assertRaises(XMLSyntaxError):
self.read_html(banklist_data, flavor=['lxml'])
+ def test_lxml_liberal(self):
+ banklist_data = os.path.join(DATA_PATH, 'banklist.html')
+
+ dfs = self.read_html(banklist_data, flavor=['lxml-liberal'])
+ for df in dfs:
+ tm.assert_isinstance(df, DataFrame)
+ self.assertFalse(df.empty)
+
+ @slow
+ def test_lxml_liberal2(self):
+ _skip_if_no('bs4')
+ banklist_data = os.path.join(DATA_PATH, 'banklist.html')
+
+ dfs_lxml = self.read_html(banklist_data, flavor=['lxml-liberal'])
+ dfs_bs4 = self.read_html(banklist_data, flavor=['bs4'])
+
+ if len(dfs_lxml) != len(dfs_bs4):
+ return
+
+ for df_lxml,df_bs4 in zip(dfs_lxml, dfs_bs4):
+ try:
+ tm.assert_frame_equal(df_lxml,df_bs4)
+ except AssertionError:
+ return
+
+ self.fail()
+
def test_works_on_valid_markup(self):
filename = os.path.join(DATA_PATH, 'valid_markup.html')
dfs = self.read_html(filename, index_col=0, flavor=['lxml'])
| Closes #5130
| https://api.github.com/repos/pandas-dev/pandas/pulls/5131 | 2013-10-06T20:51:31Z | 2013-11-01T00:23:43Z | null | 2014-07-21T06:35:36Z |
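The core of the patch above is the flavor dispatch table: rather than adding a new parser class, it maps `'lxml-liberal'` to `functools.partial(_LxmlFrameParser, strict=False)`. A minimal stand-alone sketch of that registration pattern (the class and registry names here are illustrative stand-ins, not pandas internals):

```python
from functools import partial

class LxmlFrameParser:
    """Toy stand-in for pandas' _LxmlFrameParser."""
    def __init__(self, match, attrs, strict=True):
        self.match = match
        self.attrs = attrs
        self.strict = strict

# One registry entry per flavor; partial() bakes in the non-default
# keyword so dispatch code can call every entry the same way.
_valid_parsers = {
    'lxml': LxmlFrameParser,
    'lxml-liberal': partial(LxmlFrameParser, strict=False),
}

def parser_dispatch(flavor):
    try:
        return _valid_parsers[flavor]
    except KeyError:
        raise ValueError('%r is not a valid flavor' % flavor)

strict_parser = parser_dispatch('lxml')('.+', None)
liberal_parser = parser_dispatch('lxml-liberal')('.+', None)
```

The same dispatch call site then serves both flavors; only the baked-in `strict` keyword differs, which is what lets the liberal variant reach `HTMLParser(recover=True)`.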
ENH: Support for PyExcelerate as an Excel writer engine. | diff --git a/.travis.yml b/.travis.yml
index 387dec1ed2658..f46d9c4735ca4 100644
--- a/.travis.yml
+++ b/.travis.yml
@@ -41,6 +41,8 @@ before_install:
install:
- echo "Waldo2"
- ci/install.sh
+ # Temp testing measure while waiting for PyPi release.
+ - pip install git+git://github.com/kz26/PyExcelerate.git
before_script:
- mysql -e 'create database pandas_nosetest;'
diff --git a/doc/source/install.rst b/doc/source/install.rst
index 532c90b83ebb0..4beea77c716ed 100644
--- a/doc/source/install.rst
+++ b/doc/source/install.rst
@@ -101,8 +101,8 @@ Optional Dependencies
* `openpyxl <http://packages.python.org/openpyxl/>`__, `xlrd/xlwt <http://www.python-excel.org/>`__
* openpyxl version 1.6.1 or higher
* Needed for Excel I/O
- * `XlsxWriter <https://pypi.python.org/pypi/XlsxWriter>`__
- * Alternative Excel writer.
+ * `XlsxWriter <http://pypi.python.org/pypi/XlsxWriter>`__, `PyExcelerate <http://pypi.python.org/pypi/PyExcelerate>`__
+ * Alternative Excel writers.
* `boto <https://pypi.python.org/pypi/boto>`__: necessary for Amazon S3
access.
* One of `PyQt4
diff --git a/doc/source/release.rst b/doc/source/release.rst
index 8488d03f97cbd..a5c55eaea9e2b 100644
--- a/doc/source/release.rst
+++ b/doc/source/release.rst
@@ -136,6 +136,8 @@ Improvements to existing features
- Added XlsxWriter as an optional ``ExcelWriter`` engine. This is about 5x
faster than the default openpyxl xlsx writer and is equivalent in speed
to the xlwt xls writer module. (:issue:`4542`)
+ - Added PyExcelerate as an optional ``ExcelWriter`` engine. This is about
+ 14x faster than the default openpyxl xlsx writer.
- allow DataFrame constructor to accept more list-like objects, e.g. list of
``collections.Sequence`` and ``array.Array`` objects (:issue:`3783`,
:issue:`4297`, :issue:`4851`), thanks @lgautier
diff --git a/pandas/io/excel.py b/pandas/io/excel.py
index 6b83fada19001..3beafed094778 100644
--- a/pandas/io/excel.py
+++ b/pandas/io/excel.py
@@ -745,3 +745,55 @@ def _convert_to_style(self, style_dict, num_format_str=None):
return xl_format
register_writer(_XlsxWriter)
+
+
+class _PyExcelerate(ExcelWriter):
+ engine = 'pyexcelerate'
+ supported_extensions = ('.xlsx',)
+
+ def __init__(self, path, **engine_kwargs):
+ # Use the pyexcelerate module as the Excel writer.
+ import pyexcelerate
+
+ super(_PyExcelerate, self).__init__(path, **engine_kwargs)
+
+ self.book = pyexcelerate.Workbook(path, **engine_kwargs)
+
+ def save(self):
+ """
+ Save workbook to disk.
+ """
+ return self.book.save(self.path)
+
+ def write_cells(self, cells, sheet_name=None, startrow=0, startcol=0):
+ # Write the frame cells using pyexcelerate.
+
+ sheet_name = self._get_sheet_name(sheet_name)
+
+ if sheet_name in self.sheets:
+ wks = self.sheets[sheet_name]
+ else:
+ wks = self.book.new_sheet(sheet_name)
+ self.sheets[sheet_name] = wks
+
+ for cell in cells:
+ val = _conv_value(cell.val)
+
+ if isinstance(cell.val, datetime.date):
+ val = datetime.datetime.fromordinal(val.toordinal())
+
+ if cell.mergestart is not None and cell.mergeend is not None:
+# wks.merge_range(startrow + cell.row,
+# startrow + cell.mergestart,
+# startcol + cell.col,
+# startcol + cell.mergeend,
+# val, style)
+ pass
+ else:
+ wks.set_cell_value(1 + startrow + cell.row,
+ 1 + startcol + cell.col,
+ val)
+
+
+register_writer(_PyExcelerate)
+
diff --git a/pandas/io/tests/test_excel.py b/pandas/io/tests/test_excel.py
index 38b3ee192ab7a..b04647eb43ae0 100644
--- a/pandas/io/tests/test_excel.py
+++ b/pandas/io/tests/test_excel.py
@@ -52,6 +52,13 @@ def _skip_if_no_xlsxwriter():
raise nose.SkipTest('xlsxwriter not installed, skipping')
+def _skip_if_no_pyexcelerate():
+ try:
+ import pyexcelerate # NOQA
+ except ImportError:
+ raise nose.SkipTest('pyexcelerate not installed, skipping')
+
+
def _skip_if_no_excelsuite():
_skip_if_no_xlrd()
_skip_if_no_xlwt()
@@ -953,6 +960,51 @@ def test_roundtrip_indexlabels(self):
self.assertAlmostEqual(frame.index.names, recons.index.names)
+class PyExcelerateTests(ExcelWriterBase, unittest.TestCase):
+ ext = 'xlsx'
+ engine_name = 'pyexcelerate'
+ check_skip = staticmethod(_skip_if_no_pyexcelerate)
+
+ # Override test from the Superclass to use assertAlmostEqual on the
+ # floating point values read back in from the output PyExcelerate file.
+ def test_roundtrip_indexlabels(self):
+ _skip_if_no_xlrd()
+ ext = self.ext
+ path = '__tmp_to_excel_from_excel_indexlabels__.' + ext
+
+ with ensure_clean(path) as path:
+
+ self.frame['A'][:5] = nan
+
+ self.frame.to_excel(path, 'test1')
+ self.frame.to_excel(path, 'test1', cols=['A', 'B'])
+ self.frame.to_excel(path, 'test1', header=False)
+ self.frame.to_excel(path, 'test1', index=False)
+
+ # test index_label
+ frame = (DataFrame(np.random.randn(10, 2)) >= 0)
+ frame.to_excel(path, 'test1', index_label=['test'])
+ reader = ExcelFile(path)
+ recons = reader.parse('test1', index_col=0).astype(np.int64)
+ frame.index.names = ['test']
+ self.assertEqual(frame.index.names, recons.index.names)
+
+ frame = (DataFrame(np.random.randn(10, 2)) >= 0)
+ frame.to_excel(
+ path, 'test1', index_label=['test', 'dummy', 'dummy2'])
+ reader = ExcelFile(path)
+ recons = reader.parse('test1', index_col=0).astype(np.int64)
+ frame.index.names = ['test']
+ self.assertEqual(frame.index.names, recons.index.names)
+
+ frame = (DataFrame(np.random.randn(10, 2)) >= 0)
+ frame.to_excel(path, 'test1', index_label='test')
+ reader = ExcelFile(path)
+ recons = reader.parse('test1', index_col=0).astype(np.int64)
+ frame.index.names = ['test']
+ self.assertAlmostEqual(frame.index.names, recons.index.names)
+
+
class ExcelWriterEngineTests(unittest.TestCase):
def test_ExcelWriter_dispatch(self):
with tm.assertRaisesRegexp(ValueError, 'No engine'):
@@ -966,11 +1018,11 @@ def test_ExcelWriter_dispatch(self):
writer = ExcelWriter('apple.xls')
tm.assert_isinstance(writer, _XlwtWriter)
-
def test_register_writer(self):
# some awkward mocking to test out dispatch and such actually works
called_save = []
called_write_cells = []
+
class DummyClass(ExcelWriter):
called_save = False
called_write_cells = False
@@ -998,7 +1050,6 @@ def check_called(func):
func = lambda: df.to_excel('something.test')
check_called(func)
check_called(lambda: panel.to_excel('something.test'))
- from pandas import set_option, get_option
val = get_option('io.excel.xlsx.writer')
set_option('io.excel.xlsx.writer', 'dummy')
check_called(lambda: df.to_excel('something.xlsx'))
diff --git a/pandas/tests/test_panel.py b/pandas/tests/test_panel.py
index 07b33266d88a1..bf0a37aad67cc 100644
--- a/pandas/tests/test_panel.py
+++ b/pandas/tests/test_panel.py
@@ -1595,6 +1595,26 @@ def test_to_excel_xlsxwriter(self):
recdf = reader.parse(str(item), index_col=0)
assert_frame_equal(df, recdf)
+ def test_to_excel_pyexcelerate(self):
+ try:
+ import xlrd
+ import pyexcelerate
+ from pandas.io.excel import ExcelFile
+ except ImportError:
+ raise nose.SkipTest("Requires xlrd and pyexcelerate. Skipping.")
+
+ path = '__tmp__.xlsx'
+ with ensure_clean(path) as path:
+ self.panel.to_excel(path, engine='pyexcelerate')
+ try:
+ reader = ExcelFile(path)
+ except ImportError as e:
+ raise nose.SkipTest("cannot write excel file: %s" % e)
+
+ for item, df in compat.iteritems(self.panel):
+ recdf = reader.parse(str(item), index_col=0)
+ assert_frame_equal(df, recdf)
+
def test_dropna(self):
p = Panel(np.random.randn(4, 5, 6), major_axis=list('abcde'))
p.ix[:, ['b', 'd'], 0] = np.nan
| This is an initial patch to add PyExcelerate as an Excel writer engine.
**It isn't suitable for merge** yet since there is an issue with date handling in PyExcelerate that I am working to resolve with the authors. I just want this to appear on the pandas timeline for now so I can refer to it in other PRs.
1. The requirements files are missing since the PR will require a PyExcelerate update and release. Also, the `print_versions.py` entry.
2. There is also a failing test due to versioning.
3. There is currently no merged cell support.
I'll fix all these before the final PR request.
John
| https://api.github.com/repos/pandas-dev/pandas/pulls/5128 | 2013-10-06T19:42:15Z | 2013-12-22T21:02:16Z | null | 2014-06-29T09:44:06Z |
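One detail in the `write_cells` hunk above is normalizing `datetime.date` values to `datetime.datetime` via an ordinal round-trip before handing them to the writer. A small stand-alone sketch of that conversion (the helper name is illustrative, not part of the patch):

```python
import datetime

def normalize_excel_date(val):
    """Promote a plain date to a midnight datetime, as the
    write_cells hunk does before writing the cell value."""
    # datetime.datetime subclasses datetime.date, so check it first.
    if isinstance(val, datetime.datetime):
        return val
    if isinstance(val, datetime.date):
        return datetime.datetime.fromordinal(val.toordinal())
    return val

d = datetime.date(2013, 10, 6)
converted = normalize_excel_date(d)
```

The ordinal round-trip keeps the calendar date and sets the time component to midnight, which is what xlsx writers generally expect for date-only cells.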
add test for indexing with integer arrays | diff --git a/pandas/tests/test_indexing.py b/pandas/tests/test_indexing.py
index 67c87277647c8..574d1a4ddb0c3 100644
--- a/pandas/tests/test_indexing.py
+++ b/pandas/tests/test_indexing.py
@@ -15,7 +15,8 @@
from pandas.core.api import (DataFrame, Index, Series, Panel, notnull, isnull,
MultiIndex, DatetimeIndex, Float64Index, Timestamp)
from pandas.util.testing import (assert_almost_equal, assert_series_equal,
- assert_frame_equal, assert_panel_equal)
+ assert_frame_equal, assert_panel_equal,
+ assert_isinstance)
from pandas import compat, concat
import pandas.util.testing as tm
@@ -1841,6 +1842,16 @@ def check_slicing_positional(index):
#self.assertRaises(TypeError, lambda : s.iloc[2.0:5.0])
#self.assertRaises(TypeError, lambda : s.iloc[2:5.0])
+ def test_array_indexing(self):
+ """test that array indexing returns a sequence by calling len()"""
+ column = Series(np.arange(10))
+ indices = np.arange(5, 10)
+ assert_isinstance(column.iloc[indices], Series)
+ indices = np.array([5], dtype = int)
+ assert_isinstance(column.iloc[indices], Series)
+ indices = np.array([], dtype = int)
+ assert_isinstance(column.iloc[indices], Series)
+
if __name__ == '__main__':
import nose
diff --git a/pandas/util/testing.py b/pandas/util/testing.py
index 946a4d94b6045..9394c41627458 100644
--- a/pandas/util/testing.py
+++ b/pandas/util/testing.py
@@ -167,7 +167,7 @@ def assert_isinstance(obj, class_type_or_tuple):
"""asserts that obj is an instance of class_type_or_tuple"""
assert isinstance(obj, class_type_or_tuple), (
"Expected object to be of type %r, found %r instead" % (
- type(obj), class_type_or_tuple))
+ class_type_or_tuple, type(obj)))
def assert_equal(a, b, msg=""):
"""asserts that a equals b, like nose's assert_equal, but allows custom message to start.
| I had a try at solving issue #5006, but it does not look as trivial to me as @jreback suggested. ;-)
I added a test for indexing with int-typed arrays containing single indices, which currently fails. I spent some time reading test_indexing, because my hunch is that it could be better integrated with the existing tests (e.g. using check_result), but anyhow - any test is better than none, I guess.
The failing test goes through `_iLocIndexer._getitem_axis()` -> `_NDFrameIndexer._get_loc()` -> `Series._ixs()` -> `index.get_value_at()` -> `util.get_value_at()`, but I don't see the point where that should've been different right now.
Sorry, but I lack time for solving this at the moment.
| https://api.github.com/repos/pandas-dev/pandas/pulls/5127 | 2013-10-06T19:39:58Z | 2013-10-06T23:55:33Z | null | 2014-06-23T23:20:34Z |
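The behaviour the new test pins down is that positional indexing with an integer ndarray — including a length-one or empty one — should return a Series, never a scalar. A quick illustration (assuming a working pandas install):

```python
import numpy as np
import pandas as pd

column = pd.Series(np.arange(10))

# A multi-element indexer, a single-element array, and an empty
# integer array should all come back as Series objects.
multi = column.iloc[np.arange(5, 10)]
single = column.iloc[np.array([5], dtype=int)]
empty = column.iloc[np.array([], dtype=int)]
```

Only a bare integer like `column.iloc[5]` should yield a scalar; any array-shaped indexer preserves the Series container, so `len()` works on the result.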
BUG: Fix segfault on isnull(MultiIndex) | diff --git a/doc/source/release.rst b/doc/source/release.rst
index 23f3533500849..3892d5b77a08e 100644
--- a/doc/source/release.rst
+++ b/doc/source/release.rst
@@ -584,6 +584,8 @@ Bug Fixes
- Made sure different locales are tested on travis-ci (:issue:`4918`). Also
adds a couple of utilities for getting locales and setting locales with a
context manager.
+ - Fixed segfault on ``isnull(MultiIndex)`` (now raises an error instead)
+ (:issue:`5123`, :issue:`5125`)
pandas 0.12.0
-------------
diff --git a/pandas/core/common.py b/pandas/core/common.py
index c87bea2abc2c2..c34dfedc7130c 100644
--- a/pandas/core/common.py
+++ b/pandas/core/common.py
@@ -12,6 +12,7 @@
from numpy.lib.format import read_array, write_array
import numpy as np
+import pandas as pd
import pandas.algos as algos
import pandas.lib as lib
import pandas.tslib as tslib
@@ -116,8 +117,10 @@ def isnull(obj):
def _isnull_new(obj):
if lib.isscalar(obj):
return lib.checknull(obj)
-
- if isinstance(obj, (ABCSeries, np.ndarray)):
+ # hack (for now) because MI registers as ndarray
+ elif isinstance(obj, pd.MultiIndex):
+ raise NotImplementedError("isnull is not defined for MultiIndex")
+ elif isinstance(obj, (ABCSeries, np.ndarray)):
return _isnull_ndarraylike(obj)
elif isinstance(obj, ABCGeneric):
return obj.apply(isnull)
@@ -141,8 +144,10 @@ def _isnull_old(obj):
'''
if lib.isscalar(obj):
return lib.checknull_old(obj)
-
- if isinstance(obj, (ABCSeries, np.ndarray)):
+ # hack (for now) because MI registers as ndarray
+ elif isinstance(obj, pd.MultiIndex):
+ raise NotImplementedError("isnull is not defined for MultiIndex")
+ elif isinstance(obj, (ABCSeries, np.ndarray)):
return _isnull_ndarraylike_old(obj)
elif isinstance(obj, ABCGeneric):
return obj.apply(_isnull_old)
diff --git a/pandas/tests/test_index.py b/pandas/tests/test_index.py
index 7e801c0a202db..11538ae8b3ab8 100644
--- a/pandas/tests/test_index.py
+++ b/pandas/tests/test_index.py
@@ -2314,6 +2314,13 @@ def test_slice_keep_name(self):
names=['x', 'y'])
self.assertEqual(x[1:].names, x.names)
+ def test_isnull_behavior(self):
+ # should not segfault GH5123
+ # NOTE: if MI representation changes, may make sense to allow
+ # isnull(MI)
+ with tm.assertRaises(NotImplementedError):
+ pd.isnull(self.index)
+
def test_get_combined_index():
from pandas.core.index import _get_combined_index
| Now raises NotImplementedError b/c not yet clear what it should return.
Fixes #5123.
| https://api.github.com/repos/pandas-dev/pandas/pulls/5125 | 2013-10-06T17:25:14Z | 2013-10-11T12:01:34Z | 2013-10-11T12:01:34Z | 2014-06-13T10:01:30Z |
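With this fix, calling `isnull` on a `MultiIndex` raises an explicit error instead of segfaulting. A short demonstration of the guarded behaviour (pandas has kept this `NotImplementedError` in later releases as well):

```python
import pandas as pd

mi = pd.MultiIndex.from_tuples([('x', 1), ('y', 2)],
                               names=['letter', 'number'])

# isnull is not defined for a MultiIndex; the fix turns the old
# segfault into a NotImplementedError the caller can handle.
try:
    pd.isnull(mi)
    raised = False
except NotImplementedError:
    raised = True
```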
Add indexer and str to API documentation | diff --git a/doc/make.py b/doc/make.py
index dbce5aaa7a1b4..532395b41ce95 100755
--- a/doc/make.py
+++ b/doc/make.py
@@ -72,7 +72,12 @@ def upload_prev(ver, doc_root='./'):
if os.system(pdf_cmd):
raise SystemExit('Upload PDF to %s from %s failed' % (ver, doc_root))
-
+def build_pandas():
+ os.chdir('..')
+ os.system('python setup.py clean')
+ os.system('python setup.py build_ext --inplace')
+ os.chdir('doc')
+
def build_prev(ver):
if os.system('git checkout v%s' % ver) != 1:
os.chdir('..')
@@ -238,6 +243,7 @@ def _get_config():
'clean': clean,
'auto_dev': auto_dev_build,
'auto_debug': lambda: auto_dev_build(True),
+ 'build_pandas': build_pandas,
'all': all,
}
diff --git a/doc/source/api.rst b/doc/source/api.rst
index 5706fa7864ed5..d73a8d3ad7489 100644
--- a/doc/source/api.rst
+++ b/doc/source/api.rst
@@ -153,6 +153,7 @@ Top-level dealing with datetimes
:toctree: generated/
to_datetime
+ to_timedelta
Top-level evaluation
~~~~~~~~~~~~~~~~~~~~
@@ -253,10 +254,17 @@ Indexing, iteration
:toctree: generated/
Series.get
+ Series.at
+ Series.iat
Series.ix
+ Series.loc
+ Series.iloc
Series.__iter__
Series.iteritems
+For more information on ``.at``, ``.iat``, ``.ix``, ``.loc``, and
+``.iloc``, see the :ref:`indexing documentation <indexing>`.
+
Binary operator functions
~~~~~~~~~~~~~~~~~~~~~~~~~
.. autosummary::
@@ -407,8 +415,49 @@ Time series-related
Series.tz_convert
Series.tz_localize
+String handling
+~~~~~~~~~~~~~~~~~~~
+``Series.str`` can be used to access the values of the series as
+strings and apply several methods to it. Due to implementation
+details the methods show up here as methods of the
+``StringMethods`` class.
+
+.. currentmodule:: pandas.core.strings
+
+.. autosummary::
+ :toctree: generated/
+
+ StringMethods.cat
+ StringMethods.center
+ StringMethods.contains
+ StringMethods.count
+ StringMethods.decode
+ StringMethods.encode
+ StringMethods.endswith
+ StringMethods.extract
+ StringMethods.findall
+ StringMethods.get
+ StringMethods.join
+ StringMethods.len
+ StringMethods.lower
+ StringMethods.lstrip
+ StringMethods.match
+ StringMethods.pad
+ StringMethods.repeat
+ StringMethods.replace
+ StringMethods.rstrip
+ StringMethods.slice
+ StringMethods.slice_replace
+ StringMethods.split
+ StringMethods.startswith
+ StringMethods.strip
+ StringMethods.title
+ StringMethods.upper
+
Plotting
~~~~~~~~
+.. currentmodule:: pandas
+
.. autosummary::
:toctree: generated/
@@ -476,7 +525,11 @@ Indexing, iteration
:toctree: generated/
DataFrame.head
+ DataFrame.at
+ DataFrame.iat
DataFrame.ix
+ DataFrame.loc
+ DataFrame.iloc
DataFrame.insert
DataFrame.__iter__
DataFrame.iteritems
@@ -489,6 +542,10 @@ Indexing, iteration
DataFrame.isin
DataFrame.query
+For more information on ``.at``, ``.iat``, ``.ix``, ``.loc``, and
+``.iloc``, see the :ref:`indexing documentation <indexing>`.
+
+
Binary operator functions
~~~~~~~~~~~~~~~~~~~~~~~~~
.. autosummary::
@@ -733,7 +790,11 @@ Indexing, iteration, slicing
.. autosummary::
:toctree: generated/
+ Panel.at
+ Panel.iat
Panel.ix
+ Panel.loc
+ Panel.iloc
Panel.__iter__
Panel.iteritems
Panel.pop
@@ -741,6 +802,9 @@ Indexing, iteration, slicing
Panel.major_xs
Panel.minor_xs
+For more information on ``.at``, ``.iat``, ``.ix``, ``.loc``, and
+``.iloc``, see the :ref:`indexing documentation <indexing>`.
+
Binary operator functions
~~~~~~~~~~~~~~~~~~~~~~~~~
.. autosummary::
@@ -878,8 +942,9 @@ Serialization / IO / Conversion
Index
-----
-**Many of these methods or variants thereof are available on the objects that contain an index (Series/Dataframe)
-and those should most likely be used before calling these methods directly.**
+**Many of these methods or variants thereof are available on the objects
+that contain an index (Series/Dataframe) and those should most likely be
+used before calling these methods directly.**
.. autosummary::
:toctree: generated/
@@ -1026,7 +1091,7 @@ Conversion
..
HACK - see github issue #4539. To ensure old links remain valid, include
- here the autosummaries with previous currentmodules as a comment and add
+ here the autosummaries with previous currentmodules as a comment and add
them to a hidden toctree (to avoid warnings):
.. toctree::
| closes https://github.com/pydata/pandas/issues/5068
| https://api.github.com/repos/pandas-dev/pandas/pulls/5124 | 2013-10-06T16:58:05Z | 2013-10-15T18:40:49Z | 2013-10-15T18:40:49Z | 2014-09-03T22:59:19Z |
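For the newly documented `StringMethods`, the `.str` accessor applies vectorised string operations elementwise and returns aligned pandas objects. A small sample covering a few of the methods listed above:

```python
import pandas as pd

s = pd.Series(['foo_bar', 'baz_qux'])

# Each StringMethods call returns a new Series aligned on the
# original index; split/get chain to pick apart delimited values.
upper = s.str.upper()
first = s.str.split('_').str.get(0)
has_bar = s.str.contains('bar')
```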
TST: Tests and fix for unhandled data types in Excel writers. | diff --git a/pandas/io/excel.py b/pandas/io/excel.py
index 6b83fada19001..44b323abf45c2 100644
--- a/pandas/io/excel.py
+++ b/pandas/io/excel.py
@@ -16,6 +16,7 @@
from pandas.core import config
from pandas.core.common import pprint_thing
import pandas.compat as compat
+import pandas.core.common as com
from warnings import warn
__all__ = ["read_excel", "ExcelWriter", "ExcelFile"]
@@ -290,7 +291,6 @@ def __enter__(self):
def __exit__(self, exc_type, exc_value, traceback):
self.close()
-
def _trim_excel_header(row):
# trim header row so auto-index inference works
# xlrd uses '' , openpyxl None
@@ -298,12 +298,13 @@ def _trim_excel_header(row):
row = row[1:]
return row
-
def _conv_value(val):
- # convert value for excel dump
- if isinstance(val, np.int64):
+ # Convert numpy types to Python types for the Excel writers.
+ if com.is_integer(val):
val = int(val)
- elif isinstance(val, np.bool8):
+ elif com.is_float(val):
+ val = float(val)
+ elif com.is_bool(val):
val = bool(val)
elif isinstance(val, Period):
val = "%s" % val
@@ -686,8 +687,6 @@ def write_cells(self, cells, sheet_name=None, startrow=0, startcol=0):
style_dict = {}
for cell in cells:
- val = _conv_value(cell.val)
-
num_format_str = None
if isinstance(cell.val, datetime.datetime):
num_format_str = "YYYY-MM-DD HH:MM:SS"
@@ -709,11 +708,11 @@ def write_cells(self, cells, sheet_name=None, startrow=0, startcol=0):
startrow + cell.mergestart,
startcol + cell.col,
startcol + cell.mergeend,
- val, style)
+ cell.val, style)
else:
wks.write(startrow + cell.row,
startcol + cell.col,
- val, style)
+ cell.val, style)
def _convert_to_style(self, style_dict, num_format_str=None):
"""
diff --git a/pandas/io/tests/test_excel.py b/pandas/io/tests/test_excel.py
index 38b3ee192ab7a..b279c7ffd2892 100644
--- a/pandas/io/tests/test_excel.py
+++ b/pandas/io/tests/test_excel.py
@@ -221,7 +221,6 @@ def test_excel_table_sheet_by_index(self):
(self.xlsx1, self.csv1)]:
self.check_excel_table_sheet_by_index(filename, csvfile)
-
def test_excel_table(self):
_skip_if_no_xlrd()
@@ -405,7 +404,6 @@ def test_mixed(self):
recons = reader.parse('test1', index_col=0)
tm.assert_frame_equal(self.mixed_frame, recons)
-
def test_tsframe(self):
_skip_if_no_xlrd()
ext = self.ext
@@ -419,45 +417,70 @@ def test_tsframe(self):
recons = reader.parse('test1')
tm.assert_frame_equal(df, recons)
- def test_int64(self):
+ def test_int_types(self):
_skip_if_no_xlrd()
ext = self.ext
- path = '__tmp_to_excel_from_excel_int64__.' + ext
-
- with ensure_clean(path) as path:
- self.frame['A'][:5] = nan
-
- self.frame.to_excel(path, 'test1')
- self.frame.to_excel(path, 'test1', cols=['A', 'B'])
- self.frame.to_excel(path, 'test1', header=False)
- self.frame.to_excel(path, 'test1', index=False)
+ path = '__tmp_to_excel_from_excel_int_types__.' + ext
- # Test np.int64, values read come back as float
- frame = DataFrame(np.random.randint(-10, 10, size=(10, 2)), dtype=np.int64)
- frame.to_excel(path, 'test1')
- reader = ExcelFile(path)
- recons = reader.parse('test1').astype(np.int64)
- tm.assert_frame_equal(frame, recons, check_dtype=False)
+ for np_type in (np.int8, np.int16, np.int32, np.int64):
- def test_bool(self):
+ with ensure_clean(path) as path:
+ self.frame['A'][:5] = nan
+
+ self.frame.to_excel(path, 'test1')
+ self.frame.to_excel(path, 'test1', cols=['A', 'B'])
+ self.frame.to_excel(path, 'test1', header=False)
+ self.frame.to_excel(path, 'test1', index=False)
+
+ # Test np.int values read come back as float.
+ frame = DataFrame(np.random.randint(-10, 10, size=(10, 2)),
+ dtype=np_type)
+ frame.to_excel(path, 'test1')
+ reader = ExcelFile(path)
+ recons = reader.parse('test1').astype(np_type)
+ tm.assert_frame_equal(frame, recons, check_dtype=False)
+
+ def test_float_types(self):
_skip_if_no_xlrd()
ext = self.ext
- path = '__tmp_to_excel_from_excel_bool__.' + ext
+ path = '__tmp_to_excel_from_excel_float_types__.' + ext
- with ensure_clean(path) as path:
- self.frame['A'][:5] = nan
+ for np_type in (np.float16, np.float32, np.float64):
+ with ensure_clean(path) as path:
+ self.frame['A'][:5] = nan
- self.frame.to_excel(path, 'test1')
- self.frame.to_excel(path, 'test1', cols=['A', 'B'])
- self.frame.to_excel(path, 'test1', header=False)
- self.frame.to_excel(path, 'test1', index=False)
+ self.frame.to_excel(path, 'test1')
+ self.frame.to_excel(path, 'test1', cols=['A', 'B'])
+ self.frame.to_excel(path, 'test1', header=False)
+ self.frame.to_excel(path, 'test1', index=False)
- # Test reading/writing np.bool8, roundtrip only works for xlsx
- frame = (DataFrame(np.random.randn(10, 2)) >= 0)
- frame.to_excel(path, 'test1')
- reader = ExcelFile(path)
- recons = reader.parse('test1').astype(np.bool8)
- tm.assert_frame_equal(frame, recons)
+ # Test np.float values read come back as float.
+ frame = DataFrame(np.random.random_sample(10), dtype=np_type)
+ frame.to_excel(path, 'test1')
+ reader = ExcelFile(path)
+ recons = reader.parse('test1').astype(np_type)
+ tm.assert_frame_equal(frame, recons, check_dtype=False)
+
+ def test_bool_types(self):
+ _skip_if_no_xlrd()
+ ext = self.ext
+ path = '__tmp_to_excel_from_excel_bool_types__.' + ext
+
+ for np_type in (np.bool8, np.bool_):
+ with ensure_clean(path) as path:
+ self.frame['A'][:5] = nan
+
+ self.frame.to_excel(path, 'test1')
+ self.frame.to_excel(path, 'test1', cols=['A', 'B'])
+ self.frame.to_excel(path, 'test1', header=False)
+ self.frame.to_excel(path, 'test1', index=False)
+
+ # Test np.bool values read come back as float.
+ frame = (DataFrame([1, 0, True, False], dtype=np_type))
+ frame.to_excel(path, 'test1')
+ reader = ExcelFile(path)
+ recons = reader.parse('test1').astype(np_type)
+ tm.assert_frame_equal(frame, recons)
def test_sheets(self):
_skip_if_no_xlrd()
| Added tests and a fix for unhandled numpy data types in the
Excel writers. Issue #3122.
| https://api.github.com/repos/pandas-dev/pandas/pulls/5122 | 2013-10-06T08:18:32Z | 2013-10-14T01:40:48Z | 2013-10-14T01:40:48Z | 2014-06-16T01:40:59Z |
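The fix above routes cell values through a type-normalisation helper so numpy scalars become plain Python types before a writer backend sees them. A stand-alone sketch of that conversion (the helper name is illustrative; pandas' internal `_conv_value` also handles `Period` and datetime values):

```python
import numpy as np

def conv_value(val):
    """Coerce numpy scalar types to the builtin equivalents
    that Excel writer backends understand."""
    if isinstance(val, np.bool_):
        return bool(val)
    if isinstance(val, np.integer):
        return int(val)
    if isinstance(val, np.floating):
        return float(val)
    return val

as_int = conv_value(np.int16(7))      # covers int8/16/32/64
as_float = conv_value(np.float32(1.5))  # covers float16/32/64
as_bool = conv_value(np.bool_(True))
```

Using the abstract `np.integer`/`np.floating` bases is what lets the tests above pass for every width (`int8` through `int64`, `float16` through `float64`) rather than only `np.int64` and `np.bool8` as before.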
CLN: clean up parser options | diff --git a/doc/source/release.rst b/doc/source/release.rst
index a3908ab01903d..8488d03f97cbd 100644
--- a/doc/source/release.rst
+++ b/doc/source/release.rst
@@ -564,6 +564,8 @@ Bug Fixes
- Fixed a bug where ``groupby.plot()`` and friends were duplicating figures
multiple times (:issue:`5102`).
- Provide automatic conversion of ``object`` dtypes on fillna, related (:issue:`5103`)
+ - Fixed a bug where default options were being overwritten in the option
+ parser cleaning (:issue:`5121`).
pandas 0.12.0
diff --git a/pandas/io/parsers.py b/pandas/io/parsers.py
index 8a2f249f6af06..76d6a3909f89f 100644
--- a/pandas/io/parsers.py
+++ b/pandas/io/parsers.py
@@ -2,11 +2,10 @@
Module contains tools for processing files into DataFrames or other objects
"""
from __future__ import print_function
-from pandas.compat import range, lrange, StringIO, lzip, zip, string_types
+from pandas.compat import range, lrange, StringIO, lzip, zip, string_types, map
from pandas import compat
import re
import csv
-from warnings import warn
import numpy as np
@@ -266,7 +265,6 @@ def _read(filepath_or_buffer, kwds):
'buffer_lines': None,
'error_bad_lines': True,
'warn_bad_lines': True,
- 'factorize': True,
'dtype': None,
'decimal': b'.'
}
@@ -340,8 +338,7 @@ def parser_f(filepath_or_buffer,
encoding=None,
squeeze=False,
mangle_dupe_cols=True,
- tupleize_cols=False,
- ):
+ tupleize_cols=False):
# Alias sep -> delimiter.
if delimiter is None:
@@ -400,8 +397,7 @@ def parser_f(filepath_or_buffer,
low_memory=low_memory,
buffer_lines=buffer_lines,
mangle_dupe_cols=mangle_dupe_cols,
- tupleize_cols=tupleize_cols,
- )
+ tupleize_cols=tupleize_cols)
return _read(filepath_or_buffer, kwds)
@@ -490,27 +486,24 @@ def _get_options_with_defaults(self, engine):
kwds = self.orig_options
options = {}
- for argname, default in compat.iteritems(_parser_defaults):
- if argname in kwds:
- value = kwds[argname]
- else:
- value = default
- options[argname] = value
+ for argname, default in compat.iteritems(_parser_defaults):
+ options[argname] = kwds.get(argname, default)
for argname, default in compat.iteritems(_c_parser_defaults):
if argname in kwds:
value = kwds[argname]
+
if engine != 'c' and value != default:
- raise ValueError('%s is not supported with %s parser' %
- (argname, engine))
+ raise ValueError('The %r option is not supported with the'
+ ' %r engine' % (argname, engine))
+ else:
+ value = default
options[argname] = value
if engine == 'python-fwf':
for argname, default in compat.iteritems(_fwf_defaults):
- if argname in kwds:
- value = kwds[argname]
- options[argname] = value
+ options[argname] = kwds.get(argname, default)
return options
@@ -518,7 +511,9 @@ def _clean_options(self, options, engine):
result = options.copy()
sep = options['delimiter']
- if (sep is None and not options['delim_whitespace']):
+ delim_whitespace = options['delim_whitespace']
+
+ if sep is None and not delim_whitespace:
if engine == 'c':
print('Using Python parser to sniff delimiter')
engine = 'python'
@@ -667,21 +662,24 @@ def __init__(self, kwds):
self.header = kwds.get('header')
if isinstance(self.header,(list,tuple,np.ndarray)):
if kwds.get('as_recarray'):
- raise Exception("cannot specify as_recarray when "
- "specifying a multi-index header")
+ raise ValueError("cannot specify as_recarray when "
+ "specifying a multi-index header")
if kwds.get('usecols'):
- raise Exception("cannot specify usecols when "
- "specifying a multi-index header")
+ raise ValueError("cannot specify usecols when "
+ "specifying a multi-index header")
if kwds.get('names'):
- raise Exception("cannot specify names when "
- "specifying a multi-index header")
+ raise ValueError("cannot specify names when "
+ "specifying a multi-index header")
# validate index_col that only contains integers
if self.index_col is not None:
- if not (isinstance(self.index_col,(list,tuple,np.ndarray)) and all(
- [ com.is_integer(i) for i in self.index_col ]) or com.is_integer(self.index_col)):
- raise Exception("index_col must only contain row numbers "
- "when specifying a multi-index header")
+ is_sequence = isinstance(self.index_col, (list, tuple,
+ np.ndarray))
+ if not (is_sequence and
+ all(map(com.is_integer, self.index_col)) or
+ com.is_integer(self.index_col)):
+ raise ValueError("index_col must only contain row numbers "
+ "when specifying a multi-index header")
self._name_processed = False
diff --git a/pandas/io/tests/test_parsers.py b/pandas/io/tests/test_parsers.py
index 44e40dc34ff25..ada6ffdc34257 100644
--- a/pandas/io/tests/test_parsers.py
+++ b/pandas/io/tests/test_parsers.py
@@ -1230,17 +1230,17 @@ def test_header_multi_index(self):
#### invalid options ####
# no as_recarray
- self.assertRaises(Exception, read_csv, StringIO(data), header=[0,1,2,3],
+ self.assertRaises(ValueError, read_csv, StringIO(data), header=[0,1,2,3],
index_col=[0,1], as_recarray=True, tupleize_cols=False)
# names
- self.assertRaises(Exception, read_csv, StringIO(data), header=[0,1,2,3],
+ self.assertRaises(ValueError, read_csv, StringIO(data), header=[0,1,2,3],
index_col=[0,1], names=['foo','bar'], tupleize_cols=False)
# usecols
- self.assertRaises(Exception, read_csv, StringIO(data), header=[0,1,2,3],
+ self.assertRaises(ValueError, read_csv, StringIO(data), header=[0,1,2,3],
index_col=[0,1], usecols=['foo','bar'], tupleize_cols=False)
# non-numeric index_col
- self.assertRaises(Exception, read_csv, StringIO(data), header=[0,1,2,3],
+ self.assertRaises(ValueError, read_csv, StringIO(data), header=[0,1,2,3],
index_col=['foo','bar'], tupleize_cols=False)
def test_pass_names_with_index(self):
@@ -2715,6 +2715,24 @@ def test_warn_if_chunks_have_mismatched_type(self):
df = self.read_csv(StringIO(data))
self.assertEqual(df.a.dtype, np.object)
+ def test_invalid_c_parser_opts_with_not_c_parser(self):
+ from pandas.io.parsers import _c_parser_defaults as c_defaults
+
+ data = """1,2,3,,
+1,2,3,4,
+1,2,3,4,5
+1,2,,,
+1,2,3,4,"""
+
+ engines = 'python', 'python-fwf'
+ for default in c_defaults:
+ for engine in engines:
+ kwargs = {default: object()}
+ with tm.assertRaisesRegexp(ValueError,
+ 'The %r option is not supported '
+ 'with the %r engine' % (default,
+ engine)):
+ read_csv(StringIO(data), engine=engine, **kwargs)
class TestParseSQL(unittest.TestCase):
@@ -2783,7 +2801,7 @@ def test_convert_sql_column_decimals(self):
def assert_same_values_and_dtype(res, exp):
- assert(res.dtype == exp.dtype)
+ tm.assert_equal(res.dtype, exp.dtype)
tm.assert_almost_equal(res, exp)
diff --git a/pandas/parser.pyx b/pandas/parser.pyx
index d08c020c9e9bc..8625038c57b23 100644
--- a/pandas/parser.pyx
+++ b/pandas/parser.pyx
@@ -237,7 +237,7 @@ cdef class TextReader:
cdef:
parser_t *parser
object file_handle, na_fvalues
- bint factorize, na_filter, verbose, has_usecols, has_mi_columns
+ bint na_filter, verbose, has_usecols, has_mi_columns
int parser_start
list clocks
char *c_encoding
@@ -276,7 +276,6 @@ cdef class TextReader:
converters=None,
- factorize=True,
as_recarray=False,
skipinitialspace=False,
@@ -338,8 +337,6 @@ cdef class TextReader:
raise ValueError('only length-1 separators excluded right now')
self.parser.delimiter = ord(delimiter)
- self.factorize = factorize
-
#----------------------------------------
# parser options
| Also add a test to make sure that the C parser options validation is actually
covered
example travis fail:
https://travis-ci.org/pydata/pandas/jobs/12171563
explanation (for the record):
if you print out `options` before this commit you'll see that all of its entries are overwritten with a single value. Which value depends on how the `_parser_defaults` dictionary happens to be ordered (effectively random because of hashing).
This caused a bug because `value` is retained from the loop over `_parser_defaults` and is never reassigned in the two loops that follow whenever an option is absent from `kwds`, so every option ends up equal to whichever default `_parser_defaults` happened to yield last.
magic explained :tada:
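The leak is easy to reproduce outside pandas. Here is a minimal sketch — the dictionaries below are illustrative stand-ins, not pandas' actual `_parser_defaults`/`_c_parser_defaults`:

```python
# Minimal reproduction of the options leak fixed in this PR.
parser_defaults = {'sep': ','}            # stand-in for _parser_defaults
c_parser_defaults = {'low_memory': True}  # stand-in for _c_parser_defaults
kwds = {}                                 # the user passed no overrides

options = {}
for argname, default in parser_defaults.items():
    if argname in kwds:
        value = kwds[argname]
    else:
        value = default
    options[argname] = value

# Buggy second loop: without an `else: value = default` branch, `value`
# still holds ',' from the previous loop whenever the option is absent
# from kwds, so every option inherits the last value seen above.
for argname, default in c_parser_defaults.items():
    if argname in kwds:
        value = kwds[argname]
    options[argname] = value

# options['low_memory'] is now ',' instead of True
```

The fix in the diff above restores the `else: value = default` branch so each option falls back to its own default.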
| https://api.github.com/repos/pandas-dev/pandas/pulls/5121 | 2013-10-05T22:34:34Z | 2013-10-05T23:28:25Z | 2013-10-05T23:28:25Z | 2014-06-24T19:40:22Z |
CLN: Remove leftover test.py file | diff --git a/test.py b/test.py
deleted file mode 100644
index b3295e2d830e7..0000000000000
--- a/test.py
+++ /dev/null
@@ -1,12 +0,0 @@
-
-
-import pandas as pd
-df = pd.DataFrame(
- {'pid' : [1,1,1,2,2,3,3,3],
- 'tag' : [23,45,62,24,45,34,25,62],
- })
-
-g = df.groupby('tag')
-
-import pdb; pdb.set_trace()
-g.filter(lambda x: len(x) > 1)
| Extra test file at top level with a set_trace on it was checked in with
the json normalize PR. I'm assuming it shouldn't be here. @jreback
@cpcloud? I think it's causing the test suite to hang for me, since I
run it from within the pandas directory.
| https://api.github.com/repos/pandas-dev/pandas/pulls/5119 | 2013-10-05T16:25:24Z | 2013-10-05T16:32:34Z | 2013-10-05T16:32:34Z | 2014-07-16T08:33:15Z |
CLN: Remove redundant initialization from Int64Index and Float64Index constructor | diff --git a/pandas/core/index.py b/pandas/core/index.py
index 3f491b4271ddc..c39364d58e205 100644
--- a/pandas/core/index.py
+++ b/pandas/core/index.py
@@ -201,6 +201,8 @@ def _string_data_error(cls, data):
@classmethod
def _coerce_to_ndarray(cls, data):
+ """coerces data to ndarray, raises on scalar data. Converts other
+ iterables to list first and then to array. Does not touch ndarrays."""
if not isinstance(data, np.ndarray):
if np.isscalar(data):
@@ -1624,7 +1626,7 @@ class Int64Index(Index):
Parameters
----------
data : array-like (1-dimensional)
- dtype : NumPy dtype (default: object)
+ dtype : NumPy dtype (default: int64)
copy : bool
Make a copy of input ndarray
name : object
@@ -1651,33 +1653,8 @@ def __new__(cls, data, dtype=None, copy=False, name=None, fastpath=False):
subarr.name = name
return subarr
- if not isinstance(data, np.ndarray):
- if np.isscalar(data):
- cls._scalar_data_error(data)
-
- data = cls._coerce_to_ndarray(data)
-
- if issubclass(data.dtype.type, compat.string_types):
- cls._string_data_error(data)
-
- elif issubclass(data.dtype.type, np.integer):
- # don't force the upcast as we may be dealing
- # with a platform int
- if dtype is None or not issubclass(np.dtype(dtype).type, np.integer):
- dtype = np.int64
-
- subarr = np.array(data, dtype=dtype, copy=copy)
- else:
- subarr = np.array(data, dtype=np.int64, copy=copy)
- if len(data) > 0:
- if (subarr != data).any():
- raise TypeError('Unsafe NumPy casting, you must '
- 'explicitly cast')
-
- # other iterable of some kind
- if not isinstance(data, (list, tuple)):
- data = list(data)
- data = np.asarray(data)
+ # isscalar, generators handled in coerce_to_ndarray
+ data = cls._coerce_to_ndarray(data)
if issubclass(data.dtype.type, compat.string_types):
cls._string_data_error(data)
@@ -1767,11 +1744,7 @@ def __new__(cls, data, dtype=None, copy=False, name=None, fastpath=False):
subarr.name = name
return subarr
- if not isinstance(data, np.ndarray):
- if np.isscalar(data):
- cls._scalar_data_error(data)
-
- data = cls._coerce_to_ndarray(data)
+ data = cls._coerce_to_ndarray(data)
if issubclass(data.dtype.type, compat.string_types):
cls._string_data_error(data)
| `Int64Index` repeated itself and actually reconstructed an ndarray twice when the passed-in data was an iterable. This simplifies the constructor down but does not actually change any behavior (you'll see that essentially the same code appeared twice).
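The consolidated path now funnels everything through `_coerce_to_ndarray`. Roughly, the behavior is (a simplified sketch, not the exact pandas code):

```python
import numpy as np

def coerce_to_ndarray(data):
    """Simplified sketch of the consolidated Index._coerce_to_ndarray:
    raise on scalars, materialize other iterables, pass ndarrays through."""
    if not isinstance(data, np.ndarray):
        if np.isscalar(data):
            raise TypeError('Index(...) must be called with a collection '
                            'of some kind, %r was passed' % data)
        if not isinstance(data, (list, tuple)):
            data = list(data)   # generators and other iterables
        data = np.asarray(data)
    return data

# an ndarray passes through untouched; a generator is materialized first
arr = np.array([1, 2, 3])
out = coerce_to_ndarray(i * i for i in range(3))
```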
| https://api.github.com/repos/pandas-dev/pandas/pulls/5117 | 2013-10-05T16:02:04Z | 2013-10-05T19:43:22Z | 2013-10-05T19:43:22Z | 2014-07-16T08:33:14Z |
API: convert objects on fillna when object result dtype, related (GH5103) | diff --git a/doc/source/release.rst b/doc/source/release.rst
index 9fa111d32e4bb..8f5308a90a0c7 100644
--- a/doc/source/release.rst
+++ b/doc/source/release.rst
@@ -562,6 +562,7 @@ Bug Fixes
(:issue:`5102`).
- Fixed a bug where ``groupby.plot()`` and friends were duplicating figures
multiple times (:issue:`5102`).
+ - Provide automatic conversion of ``object`` dtypes on fillna, related (:issue:`5103`)
pandas 0.12.0
diff --git a/pandas/core/internals.py b/pandas/core/internals.py
index 3b451e2a3b196..9abcdd8ea4780 100644
--- a/pandas/core/internals.py
+++ b/pandas/core/internals.py
@@ -1177,7 +1177,7 @@ def convert(self, convert_dates=True, convert_numeric=True, copy=True, by_item=T
# attempt to create new type blocks
is_unique = self.items.is_unique
blocks = []
- if by_item:
+ if by_item and not self._is_single_block:
for i, c in enumerate(self.items):
values = self.iget(i)
@@ -1200,6 +1200,17 @@ def convert(self, convert_dates=True, convert_numeric=True, copy=True, by_item=T
return blocks
+ def _maybe_downcast(self, blocks, downcast=None):
+
+ if downcast is not None:
+ return blocks
+
+ # split and convert the blocks
+ result_blocks = []
+ for blk in blocks:
+ result_blocks.extend(blk.convert(convert_dates=True,convert_numeric=False))
+ return result_blocks
+
def _can_hold_element(self, element):
return True
@@ -2050,6 +2061,8 @@ def apply(self, f, *args, **kwargs):
result_blocks.extend(applied)
else:
result_blocks.append(applied)
+ if len(result_blocks) == 0:
+ return self.make_empty(axes or self.axes)
bm = self.__class__(
result_blocks, axes or self.axes, do_integrity_check=do_integrity_check)
bm._consolidate_inplace()
diff --git a/pandas/core/ops.py b/pandas/core/ops.py
index c1c6e6e2f83d3..a10f3582bfe45 100644
--- a/pandas/core/ops.py
+++ b/pandas/core/ops.py
@@ -411,11 +411,13 @@ def na_op(x, y):
result = expressions.evaluate(op, str_rep, x, y,
raise_on_error=True, **eval_kwargs)
except TypeError:
- result = pa.empty(len(x), dtype=x.dtype)
if isinstance(y, (pa.Array, pd.Series)):
+ dtype = np.find_common_type([x.dtype,y.dtype],[])
+ result = np.empty(x.size, dtype=dtype)
mask = notnull(x) & notnull(y)
result[mask] = op(x[mask], y[mask])
else:
+ result = pa.empty(len(x), dtype=x.dtype)
mask = notnull(x)
result[mask] = op(x[mask], y)
@@ -690,12 +692,14 @@ def na_op(x, y):
op, str_rep, x, y, raise_on_error=True, **eval_kwargs)
except TypeError:
xrav = x.ravel()
- result = np.empty(x.size, dtype=x.dtype)
if isinstance(y, (np.ndarray, pd.Series)):
+ dtype = np.find_common_type([x.dtype,y.dtype],[])
+ result = np.empty(x.size, dtype=dtype)
yrav = y.ravel()
mask = notnull(xrav) & notnull(yrav)
result[mask] = op(xrav[mask], yrav[mask])
else:
+ result = np.empty(x.size, dtype=x.dtype)
mask = notnull(xrav)
result[mask] = op(xrav[mask], y)
@@ -855,6 +859,8 @@ def na_op(x, y):
result = expressions.evaluate(op, str_rep, x, y,
raise_on_error=True, **eval_kwargs)
except TypeError:
+
+ # TODO: might need to find_common_type here?
result = pa.empty(len(x), dtype=x.dtype)
mask = notnull(x)
result[mask] = op(x[mask], y)
diff --git a/pandas/io/pytables.py b/pandas/io/pytables.py
index ff0e1b08d7247..0411934b9ef87 100644
--- a/pandas/io/pytables.py
+++ b/pandas/io/pytables.py
@@ -1664,8 +1664,9 @@ def get_atom_string(self, block, itemsize):
def set_atom_string(
self, block, existing_col, min_itemsize, nan_rep, encoding):
- # fill nan items with myself
- block = block.fillna(nan_rep)[0]
+ # fill nan items with myself, don't disturb the blocks by
+ # trying to downcast
+ block = block.fillna(nan_rep, downcast=False)[0]
data = block.values
# see if we have a valid string type
diff --git a/pandas/tests/test_frame.py b/pandas/tests/test_frame.py
index 1e4e988431f43..9cb7483340817 100644
--- a/pandas/tests/test_frame.py
+++ b/pandas/tests/test_frame.py
@@ -4311,15 +4311,16 @@ def test_operators_none_as_na(self):
ops = [operator.add, operator.sub, operator.mul, operator.truediv]
+ # since filling converts dtypes from object, changed expected to be object
for op in ops:
filled = df.fillna(np.nan)
result = op(df, 3)
- expected = op(filled, 3)
+ expected = op(filled, 3).astype(object)
expected[com.isnull(expected)] = None
assert_frame_equal(result, expected)
result = op(df, df)
- expected = op(filled, filled)
+ expected = op(filled, filled).astype(object)
expected[com.isnull(expected)] = None
assert_frame_equal(result, expected)
@@ -4327,7 +4328,7 @@ def test_operators_none_as_na(self):
assert_frame_equal(result, expected)
result = op(df.fillna(7), df)
- assert_frame_equal(result, expected)
+ assert_frame_equal(result, expected, check_dtype=False)
def test_comparison_invalid(self):
@@ -6695,6 +6696,25 @@ def test_fillna(self):
df.fillna({ 2: 'foo' }, inplace=True)
assert_frame_equal(df, expected)
+ def test_fillna_dtype_conversion(self):
+ # make sure that fillna on an empty frame works
+ df = DataFrame(index=["A","B","C"], columns = [1,2,3,4,5])
+ result = df.get_dtype_counts().order()
+ expected = Series({ 'object' : 5 })
+ assert_series_equal(result, expected)
+
+ result = df.fillna(1)
+ expected = DataFrame(1, index=["A","B","C"], columns = [1,2,3,4,5])
+ result = result.get_dtype_counts().order()
+ expected = Series({ 'int64' : 5 })
+ assert_series_equal(result, expected)
+
+ # empty block
+ df = DataFrame(index=lrange(3),columns=['A','B'],dtype='float64')
+ result = df.fillna('nan')
+ expected = DataFrame('nan',index=lrange(3),columns=['A','B'])
+ assert_frame_equal(result, expected)
+
def test_ffill(self):
self.tsframe['A'][:5] = nan
self.tsframe['A'][-5:] = nan
@@ -10812,7 +10832,6 @@ def test_boolean_indexing_mixed(self):
expected.loc[35,4] = 1
assert_frame_equal(df2,expected)
- # add object, should this raise?
df['foo'] = 'test'
with tm.assertRaisesRegexp(TypeError, 'boolean setting on mixed-type'):
df[df > 0.3] = 1
diff --git a/pandas/tests/test_panel.py b/pandas/tests/test_panel.py
index 5c94f378b88ea..07b33266d88a1 100644
--- a/pandas/tests/test_panel.py
+++ b/pandas/tests/test_panel.py
@@ -732,8 +732,9 @@ def test_logical_with_nas(self):
expected = DataFrame({'a': [np.nan, True]})
assert_frame_equal(result, expected)
+ # this is autodowncasted here
result = d['ItemA'].fillna(False) | d['ItemB']
- expected = DataFrame({'a': [True, True]}, dtype=object)
+ expected = DataFrame({'a': [True, True]})
assert_frame_equal(result, expected)
def test_neg(self):
| related #5103
| https://api.github.com/repos/pandas-dev/pandas/pulls/5112 | 2013-10-04T18:18:26Z | 2013-10-04T19:53:42Z | 2013-10-04T19:53:42Z | 2014-07-04T13:18:07Z |
DOC: doc fixups | diff --git a/doc/source/api.rst b/doc/source/api.rst
index 5ad36b3c8b45c..d062512ac3a5b 100644
--- a/doc/source/api.rst
+++ b/doc/source/api.rst
@@ -870,7 +870,7 @@ Serialization / IO / Conversion
.. currentmodule:: pandas.core.index
-.. _api.index
+.. _api.index:
Index
-----
@@ -878,7 +878,6 @@ Index
**Many of these methods or variants thereof are available on the objects that contain an index (Series/Dataframe)
and those should most likely be used before calling these methods directly.**
- * **values**
Modifying and Computations
~~~~~~~~~~~~~~~~~~~~~~~~~~
.. autosummary::
diff --git a/doc/source/timeseries.rst b/doc/source/timeseries.rst
index 33b16c1448ed8..c27bdd829f3a3 100644
--- a/doc/source/timeseries.rst
+++ b/doc/source/timeseries.rst
@@ -1114,6 +1114,7 @@ The infer_dst argument in tz_localize will attempt
to determine the right offset.
.. ipython:: python
+ :okexcept:
rng_hourly = DatetimeIndex(['11/06/2011 00:00', '11/06/2011 01:00',
'11/06/2011 01:00', '11/06/2011 02:00',
@@ -1132,7 +1133,7 @@ They can be both positive and negative. :ref:`DateOffsets<timeseries.offsets>` t
.. ipython:: python
from datetime import datetime, timedelta
- s = Series(date_range('2012-1-1', periods=3, freq='D'))
+ s = Series(date_range('2012-1-1', periods=3, freq='D'))
td = Series([ timedelta(days=i) for i in range(3) ])
df = DataFrame(dict(A = s, B = td))
df
| closes #5110.
| https://api.github.com/repos/pandas-dev/pandas/pulls/5111 | 2013-10-04T16:54:20Z | 2013-10-04T16:54:53Z | 2013-10-04T16:54:53Z | 2014-07-16T08:33:10Z |
BUG: read_html should convert array skiprows into a list | diff --git a/pandas/io/html.py b/pandas/io/html.py
index 96bedbf390af6..878ef37ea4459 100644
--- a/pandas/io/html.py
+++ b/pandas/io/html.py
@@ -90,8 +90,10 @@ def _get_skiprows(skiprows):
"""
if isinstance(skiprows, slice):
return lrange(skiprows.start or 0, skiprows.stop, skiprows.step or 1)
- elif isinstance(skiprows, numbers.Integral) or com.is_list_like(skiprows):
+ elif isinstance(skiprows, numbers.Integral):
return skiprows
+ elif com.is_list_like(skiprows):
+ return list(skiprows)
elif skiprows is None:
return 0
raise TypeError('%r is not a valid type for skipping rows' %
| not sure why this is only showing up here, but it should be converted to a `list` anyway:
https://travis-ci.org/cpcloud/pandas/jobs/12117482
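After the change, `_get_skiprows` dispatches roughly like this — a simplified sketch, with `np.iterable` standing in for `com.is_list_like`:

```python
import numbers
import numpy as np

def get_skiprows(skiprows):
    """Simplified sketch of pandas.io.html._get_skiprows after the fix."""
    if isinstance(skiprows, slice):
        return list(range(skiprows.start or 0, skiprows.stop,
                          skiprows.step or 1))
    elif isinstance(skiprows, numbers.Integral):
        return skiprows
    elif np.iterable(skiprows):
        return list(skiprows)   # the fix: ndarray and friends -> plain list
    elif skiprows is None:
        return 0
    raise TypeError('%r is not a valid type for skipping rows'
                    % type(skiprows))
```

With this, passing `np.arange(3)` yields a plain `[0, 1, 2]` instead of leaking an ndarray into the parser.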
| https://api.github.com/repos/pandas-dev/pandas/pulls/5106 | 2013-10-04T01:23:01Z | 2013-10-04T02:56:27Z | null | 2014-07-11T22:34:05Z |
BUG: allow plot, boxplot, hist and completion on GroupBy objects | diff --git a/doc/source/10min.rst b/doc/source/10min.rst
index 705514ac0c364..85aafd6787f16 100644
--- a/doc/source/10min.rst
+++ b/doc/source/10min.rst
@@ -74,6 +74,43 @@ Having specific :ref:`dtypes <basics.dtypes>`
df2.dtypes
+If you're using IPython, tab completion for column names (as well as public
+attributes) is automatically enabled. Here's a subset of the attributes that
+will be completed:
+
+.. ipython::
+
+ @verbatim
+ In [1]: df2.<TAB>
+
+ df2.A df2.boxplot
+ df2.abs df2.C
+ df2.add df2.clip
+ df2.add_prefix df2.clip_lower
+ df2.add_suffix df2.clip_upper
+ df2.align df2.columns
+ df2.all df2.combine
+ df2.any df2.combineAdd
+ df2.append df2.combine_first
+ df2.apply df2.combineMult
+ df2.applymap df2.compound
+ df2.as_blocks df2.consolidate
+ df2.asfreq df2.convert_objects
+ df2.as_matrix df2.copy
+ df2.astype df2.corr
+ df2.at df2.corrwith
+ df2.at_time df2.count
+ df2.axes df2.cov
+ df2.B df2.cummax
+ df2.between_time df2.cummin
+ df2.bfill df2.cumprod
+ df2.blocks df2.cumsum
+ df2.bool df2.D
+
+As you can see, the columns ``A``, ``B``, ``C``, and ``D`` are automatically
+tab completed. ``E`` is there as well; the rest of the attributes have been
+truncated for brevity.
+
Viewing Data
------------
diff --git a/doc/source/groupby.rst b/doc/source/groupby.rst
index a8900bd83309f..723aee64fd0d9 100644
--- a/doc/source/groupby.rst
+++ b/doc/source/groupby.rst
@@ -188,6 +188,45 @@ however pass ``sort=False`` for potential speedups:
df2.groupby(['X'], sort=True).sum()
df2.groupby(['X'], sort=False).sum()
+.. _groupby.tabcompletion:
+
+``GroupBy`` will tab complete column names (and other attributes)
+
+.. ipython:: python
+ :suppress:
+
+ n = 10
+ weight = np.random.normal(166, 20, size=n)
+ height = np.random.normal(60, 10, size=n)
+ time = date_range('1/1/2000', periods=n)
+ gender = tm.choice(['male', 'female'], size=n)
+ df = DataFrame({'height': height, 'weight': weight,
+ 'gender': gender}, index=time)
+
+.. ipython:: python
+
+ df
+ gb = df.groupby('gender')
+
+
+.. ipython::
+
+ @verbatim
+ In [1]: gb.<TAB>
+ gb.agg gb.boxplot gb.cummin gb.describe gb.filter gb.get_group gb.height gb.last gb.median gb.ngroups gb.plot gb.rank gb.std gb.transform
+ gb.aggregate gb.count gb.cumprod gb.dtype gb.first gb.groups gb.hist gb.max gb.min gb.nth gb.prod gb.resample gb.sum gb.var
+ gb.apply gb.cummax gb.cumsum gb.fillna gb.gender gb.head gb.indices gb.mean gb.name gb.ohlc gb.quantile gb.size gb.tail gb.weight
+
+
+.. ipython:: python
+ :suppress:
+
+ df = DataFrame({'A' : ['foo', 'bar', 'foo', 'bar',
+ 'foo', 'bar', 'foo', 'foo'],
+ 'B' : ['one', 'one', 'two', 'three',
+ 'two', 'two', 'one', 'three'],
+ 'C' : randn(8), 'D' : randn(8)})
+
.. _groupby.multiindex:
GroupBy with MultiIndex
diff --git a/doc/source/release.rst b/doc/source/release.rst
index ebba7444e82d8..9fa111d32e4bb 100644
--- a/doc/source/release.rst
+++ b/doc/source/release.rst
@@ -380,6 +380,8 @@ See :ref:`Internal Refactoring<whatsnew_0130.refactoring>`
function signature.
- :func:`~pandas.read_html` now uses ``TextParser`` to parse HTML data from
bs4/lxml (:issue:`4770`).
+ - Removed the ``keep_internal`` keyword parameter in
+ ``pandas/core/groupby.py`` because it wasn't being used (:issue:`5102`).
.. _release.bug_fixes-0.13.0:
@@ -544,7 +546,7 @@ Bug Fixes
- Bug in setting with ``ix/loc`` and a mixed int/string index (:issue:`4544`)
- Make sure series-series boolean comparions are label based (:issue:`4947`)
- Bug in multi-level indexing with a Timestamp partial indexer (:issue:`4294`)
- - Tests/fix for multi-index construction of an all-nan frame (:isue:`4078`)
+ - Tests/fix for multi-index construction of an all-nan frame (:issue:`4078`)
- Fixed a bug where :func:`~pandas.read_html` wasn't correctly inferring
values of tables with commas (:issue:`5029`)
- Fixed a bug where :func:`~pandas.read_html` wasn't providing a stable
@@ -555,6 +557,11 @@ Bug Fixes
type of headers (:issue:`5048`).
- Fixed a bug where ``DatetimeIndex`` joins with ``PeriodIndex`` caused a
stack overflow (:issue:`3899`).
+ - Fixed a bug where ``groupby`` objects didn't allow plots (:issue:`5102`).
+ - Fixed a bug where ``groupby`` objects weren't tab-completing column names
+ (:issue:`5102`).
+ - Fixed a bug where ``groupby.plot()`` and friends were duplicating figures
+ multiple times (:issue:`5102`).
pandas 0.12.0
diff --git a/pandas/core/groupby.py b/pandas/core/groupby.py
index e70c01ffcb12f..8938e48eb493b 100644
--- a/pandas/core/groupby.py
+++ b/pandas/core/groupby.py
@@ -1,4 +1,5 @@
import types
+from functools import wraps
import numpy as np
from pandas.compat import(
@@ -45,6 +46,10 @@
"""
+# special case to prevent duplicate plots when catching exceptions when
+# forwarding methods from NDFrames
+_plotting_methods = frozenset(['plot', 'boxplot', 'hist'])
+
_apply_whitelist = frozenset(['last', 'first',
'mean', 'sum', 'min', 'max',
'head', 'tail',
@@ -52,7 +57,8 @@
'resample',
'describe',
'rank', 'quantile', 'count',
- 'fillna', 'dtype'])
+ 'fillna', 'dtype']) | _plotting_methods
+
class GroupByError(Exception):
@@ -180,7 +186,6 @@ class GroupBy(PandasObject):
len(grouped) : int
Number of groups
"""
-
def __init__(self, obj, keys=None, axis=0, level=None,
grouper=None, exclusions=None, selection=None, as_index=True,
sort=True, group_keys=True, squeeze=False):
@@ -244,6 +249,9 @@ def _selection_list(self):
return [self._selection]
return self._selection
+ def _local_dir(self):
+ return sorted(set(self.obj._local_dir() + list(_apply_whitelist)))
+
def __getattr__(self, attr):
if attr in self.obj:
return self[attr]
@@ -285,6 +293,15 @@ def curried_with_axis(x):
def curried(x):
return f(x, *args, **kwargs)
+ # preserve the name so we can detect it when calling plot methods,
+ # to avoid duplicates
+ curried.__name__ = curried_with_axis.__name__ = name
+
+ # special case otherwise extra plots are created when catching the
+ # exception below
+ if name in _plotting_methods:
+ return self.apply(curried)
+
try:
return self.apply(curried_with_axis)
except Exception:
@@ -348,7 +365,11 @@ def apply(self, func, *args, **kwargs):
applied : type depending on grouped object and function
"""
func = _intercept_function(func)
- f = lambda g: func(g, *args, **kwargs)
+
+ @wraps(func)
+ def f(g):
+ return func(g, *args, **kwargs)
+
return self._python_apply_general(f)
def _python_apply_general(self, f):
@@ -598,7 +619,7 @@ def __iter__(self):
def nkeys(self):
return len(self.groupings)
- def get_iterator(self, data, axis=0, keep_internal=True):
+ def get_iterator(self, data, axis=0):
"""
Groupby iterator
@@ -607,16 +628,14 @@ def get_iterator(self, data, axis=0, keep_internal=True):
Generator yielding sequence of (name, subsetted object)
for each group
"""
- splitter = self._get_splitter(data, axis=axis,
- keep_internal=keep_internal)
+ splitter = self._get_splitter(data, axis=axis)
keys = self._get_group_keys()
for key, (i, group) in zip(keys, splitter):
yield key, group
- def _get_splitter(self, data, axis=0, keep_internal=True):
+ def _get_splitter(self, data, axis=0):
comp_ids, _, ngroups = self.group_info
- return get_splitter(data, comp_ids, ngroups, axis=axis,
- keep_internal=keep_internal)
+ return get_splitter(data, comp_ids, ngroups, axis=axis)
def _get_group_keys(self):
if len(self.groupings) == 1:
@@ -627,19 +646,19 @@ def _get_group_keys(self):
mapper = _KeyMapper(comp_ids, ngroups, self.labels, self.levels)
return [mapper.get_key(i) for i in range(ngroups)]
- def apply(self, f, data, axis=0, keep_internal=False):
+ def apply(self, f, data, axis=0):
mutated = False
- splitter = self._get_splitter(data, axis=axis,
- keep_internal=keep_internal)
+ splitter = self._get_splitter(data, axis=axis)
group_keys = self._get_group_keys()
# oh boy
- if hasattr(splitter, 'fast_apply') and axis == 0:
+ if (f.__name__ not in _plotting_methods and
+ hasattr(splitter, 'fast_apply') and axis == 0):
try:
values, mutated = splitter.fast_apply(f, group_keys)
return group_keys, values, mutated
- except (Exception) as detail:
- # we detect a mutatation of some kind
+ except Exception:
+ # we detect a mutation of some kind
# so take slow path
pass
@@ -1043,7 +1062,7 @@ def get_iterator(self, data, axis=0):
inds = lrange(start, n)
yield self.binlabels[-1], data.take(inds, axis=axis)
- def apply(self, f, data, axis=0, keep_internal=False):
+ def apply(self, f, data, axis=0):
result_keys = []
result_values = []
mutated = False
@@ -1617,6 +1636,7 @@ def filter(self, func, dropna=True, *args, **kwargs):
else:
return filtered.reindex(self.obj.index) # Fill with NaNs.
+
class NDFrameGroupBy(GroupBy):
def _iterate_slices(self):
@@ -1939,14 +1959,14 @@ def _wrap_applied_output(self, keys, values, not_indexed_same=False):
index = key_index
else:
stacked_values = np.vstack([np.asarray(x)
- for x in values]).T
+ for x in values]).T
index = values[0].index
columns = key_index
- except ValueError:
- #GH1738,, values is list of arrays of unequal lengths
- # fall through to the outer else caluse
+ except (ValueError, AttributeError):
+ # GH1738: values is list of arrays of unequal lengths fall
+ # through to the outer else caluse
return Series(values, index=key_index)
return DataFrame(stacked_values, index=index,
@@ -2268,6 +2288,7 @@ def ohlc(self):
"""
return self._apply_to_column_groupbys(lambda x: x._cython_agg_general('ohlc'))
+
from pandas.tools.plotting import boxplot_frame_groupby
DataFrameGroupBy.boxplot = boxplot_frame_groupby
@@ -2364,7 +2385,7 @@ class NDArrayGroupBy(GroupBy):
class DataSplitter(object):
- def __init__(self, data, labels, ngroups, axis=0, keep_internal=False):
+ def __init__(self, data, labels, ngroups, axis=0):
self.data = data
self.labels = com._ensure_int64(labels)
self.ngroups = ngroups
@@ -2419,10 +2440,8 @@ def _chop(self, sdata, slice_obj):
class FrameSplitter(DataSplitter):
-
- def __init__(self, data, labels, ngroups, axis=0, keep_internal=False):
- DataSplitter.__init__(self, data, labels, ngroups, axis=axis,
- keep_internal=keep_internal)
+ def __init__(self, data, labels, ngroups, axis=0):
+ super(FrameSplitter, self).__init__(data, labels, ngroups, axis=axis)
def fast_apply(self, f, names):
# must return keys::list, values::list, mutated::bool
@@ -2445,10 +2464,8 @@ def _chop(self, sdata, slice_obj):
class NDFrameSplitter(DataSplitter):
-
- def __init__(self, data, labels, ngroups, axis=0, keep_internal=False):
- DataSplitter.__init__(self, data, labels, ngroups, axis=axis,
- keep_internal=keep_internal)
+ def __init__(self, data, labels, ngroups, axis=0):
+ super(NDFrameSplitter, self).__init__(data, labels, ngroups, axis=axis)
self.factory = data._constructor
diff --git a/pandas/tests/test_groupby.py b/pandas/tests/test_groupby.py
index fec6460ea31f3..01cea90fa1e5a 100644
--- a/pandas/tests/test_groupby.py
+++ b/pandas/tests/test_groupby.py
@@ -2,6 +2,8 @@
import nose
import unittest
+from numpy.testing.decorators import slow
+
from datetime import datetime
from numpy import nan
@@ -9,8 +11,7 @@
from pandas.core.index import Index, MultiIndex
from pandas.core.common import rands
from pandas.core.api import Categorical, DataFrame
-from pandas.core.groupby import (GroupByError, SpecificationError, DataError,
- _apply_whitelist)
+from pandas.core.groupby import SpecificationError, DataError
from pandas.core.series import Series
from pandas.util.testing import (assert_panel_equal, assert_frame_equal,
assert_series_equal, assert_almost_equal,
@@ -18,14 +19,12 @@
from pandas.compat import(
range, long, lrange, StringIO, lmap, lzip, map, zip, builtins, OrderedDict
)
-from pandas import compat, _np_version_under1p7
+from pandas import compat
from pandas.core.panel import Panel
from pandas.tools.merge import concat
from collections import defaultdict
import pandas.core.common as com
-import pandas.core.datetools as dt
import numpy as np
-from numpy.testing import assert_equal
import pandas.core.nanops as nanops
@@ -2728,6 +2727,85 @@ def test_groupby_whitelist(self):
with tm.assertRaisesRegexp(AttributeError, msg):
getattr(gb, bl)
+ def test_series_groupby_plotting_nominally_works(self):
+ try:
+ import matplotlib as mpl
+ mpl.use('Agg')
+ except ImportError:
+ raise nose.SkipTest("matplotlib not installed")
+ n = 10
+ weight = Series(np.random.normal(166, 20, size=n))
+ height = Series(np.random.normal(60, 10, size=n))
+ gender = tm.choice(['male', 'female'], size=n)
+
+ weight.groupby(gender).plot()
+ tm.close()
+ height.groupby(gender).hist()
+ tm.close()
+
+ @slow
+ def test_frame_groupby_plot_boxplot(self):
+ try:
+ import matplotlib.pyplot as plt
+ import matplotlib as mpl
+ mpl.use('Agg')
+ except ImportError:
+ raise nose.SkipTest("matplotlib not installed")
+ tm.close()
+
+ n = 10
+ weight = Series(np.random.normal(166, 20, size=n))
+ height = Series(np.random.normal(60, 10, size=n))
+ gender = tm.choice(['male', 'female'], size=n)
+ df = DataFrame({'height': height, 'weight': weight, 'gender': gender})
+ gb = df.groupby('gender')
+
+ res = gb.plot()
+ self.assertEqual(len(plt.get_fignums()), 2)
+ self.assertEqual(len(res), 2)
+ tm.close()
+
+ res = gb.boxplot()
+ self.assertEqual(len(plt.get_fignums()), 1)
+ self.assertEqual(len(res), 2)
+ tm.close()
+
+ with tm.assertRaisesRegexp(TypeError, '.*str.+float'):
+ gb.hist()
+
+ @slow
+ def test_frame_groupby_hist(self):
+ try:
+ import matplotlib.pyplot as plt
+ import matplotlib as mpl
+ mpl.use('Agg')
+ except ImportError:
+ raise nose.SkipTest("matplotlib not installed")
+ tm.close()
+
+ n = 10
+ weight = Series(np.random.normal(166, 20, size=n))
+ height = Series(np.random.normal(60, 10, size=n))
+ gender_int = tm.choice([0, 1], size=n)
+ df_int = DataFrame({'height': height, 'weight': weight,
+ 'gender': gender_int})
+ gb = df_int.groupby('gender')
+ axes = gb.hist()
+ self.assertEqual(len(axes), 2)
+ self.assertEqual(len(plt.get_fignums()), 2)
+ tm.close()
+
+ def test_tab_completion(self):
+ grp = self.mframe.groupby(level='second')
+ results = set([v for v in dir(grp) if not v.startswith('_')])
+ expected = set(['A','B','C',
+ 'agg','aggregate','apply','boxplot','filter','first','get_group',
+ 'groups','hist','indices','last','max','mean','median',
+ 'min','name','ngroups','nth','ohlc','plot', 'prod',
+ 'size','std','sum','transform','var', 'count', 'head', 'describe',
+ 'cummax', 'dtype', 'quantile', 'rank', 'cumprod', 'tail',
+ 'resample', 'cummin', 'fillna', 'cumsum'])
+ self.assertEqual(results, expected)
def assert_fp_equal(a, b):
assert (np.abs(a - b) < 1e-12).all()
@@ -2764,7 +2842,5 @@ def testit(label_list, shape):
if __name__ == '__main__':
- import nose
- nose.runmodule(
- argv=[__file__, '-vvs', '-x', '--pdb', '--pdb-failure', '-s'],
- exit=False)
+ nose.runmodule(argv=[__file__, '-vvs', '-x', '--pdb', '--pdb-failure',
+ '-s'], exit=False)
| closes #5102
also fixes extra plots created by groupby plotting (couldn't find an issue); there is now a test that ensures groupby only creates the number of axes/figures it's supposed to
removes an unused internal groupby parameter
| https://api.github.com/repos/pandas-dev/pandas/pulls/5105 | 2013-10-04T00:57:36Z | 2013-10-04T17:19:18Z | 2013-10-04T17:19:18Z | 2014-06-20T23:44:00Z |
BUG: fix PeriodIndex join with DatetimeIndex stack overflow | diff --git a/doc/source/release.rst b/doc/source/release.rst
index eaf10977af4f7..ebba7444e82d8 100644
--- a/doc/source/release.rst
+++ b/doc/source/release.rst
@@ -553,6 +553,8 @@ Bug Fixes
passed ``index_col=0`` (:issue:`5066`).
- Fixed a bug where :func:`~pandas.read_html` was incorrectly infering the
type of headers (:issue:`5048`).
+ - Fixed a bug where ``DatetimeIndex`` joins with ``PeriodIndex`` caused a
+ stack overflow (:issue:`3899`).
pandas 0.12.0
diff --git a/pandas/tseries/period.py b/pandas/tseries/period.py
index cd81867ff8f08..579b0b3019fdc 100644
--- a/pandas/tseries/period.py
+++ b/pandas/tseries/period.py
@@ -12,8 +12,8 @@
import pandas.tseries.frequencies as _freq_mod
import pandas.core.common as com
-from pandas.core.common import (isnull, _NS_DTYPE, _INT64_DTYPE,
- _maybe_box, _values_from_object)
+from pandas.core.common import (isnull, _INT64_DTYPE, _maybe_box,
+ _values_from_object)
from pandas import compat
from pandas.lib import Timestamp
import pandas.lib as lib
@@ -712,13 +712,10 @@ def _array_values(self):
def astype(self, dtype):
dtype = np.dtype(dtype)
if dtype == np.object_:
- result = np.empty(len(self), dtype=dtype)
- result[:] = [x for x in self]
- return result
+ return Index(np.array(list(self), dtype), dtype)
elif dtype == _INT64_DTYPE:
- return self.values.copy()
- else: # pragma: no cover
- raise ValueError('Cannot cast PeriodIndex to dtype %s' % dtype)
+ return Index(self.values, dtype)
+ raise ValueError('Cannot cast PeriodIndex to dtype %s' % dtype)
def __iter__(self):
for val in self.values:
diff --git a/pandas/tseries/tests/test_period.py b/pandas/tseries/tests/test_period.py
index 9abecc0aeeec6..55963b01d2779 100644
--- a/pandas/tseries/tests/test_period.py
+++ b/pandas/tseries/tests/test_period.py
@@ -22,8 +22,8 @@
import pandas.core.datetools as datetools
import pandas as pd
import numpy as np
-from pandas.compat import range, lrange, lmap, map, zip
-randn = np.random.randn
+from numpy.random import randn
+from pandas.compat import range, lrange, lmap, zip
from pandas import Series, TimeSeries, DataFrame
from pandas.util.testing import(assert_series_equal, assert_almost_equal,
@@ -1207,7 +1207,6 @@ def test_is_(self):
self.assertFalse(index.is_(index - 2))
self.assertFalse(index.is_(index - 0))
-
def test_comp_period(self):
idx = period_range('2007-01', periods=20, freq='M')
@@ -1913,6 +1912,17 @@ def test_join_self(self):
res = index.join(index, how=kind)
self.assert_(index is res)
+ def test_join_does_not_recur(self):
+ df = tm.makeCustomDataframe(3, 2, data_gen_f=lambda *args:
+ np.random.randint(2), c_idx_type='p',
+ r_idx_type='dt')
+ s = df.iloc[:2, 0]
+
+ res = s.index.join(df.columns, how='outer')
+ expected = Index([s.index[0], s.index[1],
+ df.columns[0], df.columns[1]], object)
+ tm.assert_index_equal(res, expected)
+
def test_align_series(self):
rng = period_range('1/1/2000', '1/1/2010', freq='A')
ts = Series(np.random.randn(len(rng)), index=rng)
@@ -2185,15 +2195,15 @@ def test_minutely(self):
def test_secondly(self):
self._check_freq('S', '1970-01-01')
-
+
def test_millisecondly(self):
self._check_freq('L', '1970-01-01')
def test_microsecondly(self):
self._check_freq('U', '1970-01-01')
-
+
def test_nanosecondly(self):
- self._check_freq('N', '1970-01-01')
+ self._check_freq('N', '1970-01-01')
def _check_freq(self, freq, base_date):
rng = PeriodIndex(start=base_date, periods=10, freq=freq)
diff --git a/pandas/tseries/tests/test_timeseries.py b/pandas/tseries/tests/test_timeseries.py
index f3598dd2d210b..5329f37095961 100644
--- a/pandas/tseries/tests/test_timeseries.py
+++ b/pandas/tseries/tests/test_timeseries.py
@@ -31,16 +31,12 @@
import pandas.index as _index
-from pandas.compat import(
- range, long, StringIO, lrange, lmap, map, zip, cPickle as pickle, product
-)
-from pandas import read_pickle
+from pandas.compat import range, long, StringIO, lrange, lmap, zip, product
import pandas.core.datetools as dt
from numpy.random import rand
from numpy.testing import assert_array_equal
from pandas.util.testing import assert_frame_equal
import pandas.compat as compat
-from pandas.core.datetools import BDay
import pandas.core.common as com
from pandas import concat
from pandas import _np_version_under1p7
@@ -2064,6 +2060,18 @@ def test_ns_index(self):
new_index = pd.DatetimeIndex(start=index[0], end=index[-1], freq=index.freq)
self.assert_index_parameters(new_index)
+ def test_join_with_period_index(self):
+ df = tm.makeCustomDataframe(10, 10, data_gen_f=lambda *args:
+ np.random.randint(2), c_idx_type='p',
+ r_idx_type='dt')
+ s = df.iloc[:5, 0]
+ joins = 'left', 'right', 'inner', 'outer'
+
+ for join in joins:
+ with tm.assertRaisesRegexp(ValueError, 'can only call with other '
+ 'PeriodIndex-ed objects'):
+ df.columns.join(s.index, how=join)
+
class TestDatetime64(unittest.TestCase):
"""
| closes #3899.
| https://api.github.com/repos/pandas-dev/pandas/pulls/5101 | 2013-10-03T22:04:19Z | 2013-10-04T02:39:18Z | 2013-10-04T02:39:18Z | 2014-07-16T08:33:03Z |
TST: win32 paths cannot be turned into URLs by prefixing them with "file://" v2 | diff --git a/.gitignore b/.gitignore
index 201a965a0f409..edc6a54cf4345 100644
--- a/.gitignore
+++ b/.gitignore
@@ -38,3 +38,4 @@ pandas/io/*.json
.pydevproject
.settings
.idea
+*.pdb
diff --git a/doc/source/release.rst b/doc/source/release.rst
index 49de8dddd7210..77d86b8a7a9f1 100644
--- a/doc/source/release.rst
+++ b/doc/source/release.rst
@@ -755,6 +755,8 @@ Bug Fixes
- Bug when renaming then set_index on a DataFrame (:issue:`5344`)
- Test suite no longer leaves around temporary files when testing graphics. (:issue:`5347`)
(thanks for catching this @yarikoptic!)
+ - Fixed html tests on win32. (:issue:`4580`)
+
pandas 0.12.0
-------------
diff --git a/pandas/io/common.py b/pandas/io/common.py
index aa5fdb29f3b5b..6b8186e253199 100644
--- a/pandas/io/common.py
+++ b/pandas/io/common.py
@@ -9,18 +9,18 @@
if compat.PY3:
- from urllib.request import urlopen
+ from urllib.request import urlopen, pathname2url
_urlopen = urlopen
from urllib.parse import urlparse as parse_url
import urllib.parse as compat_parse
- from urllib.parse import uses_relative, uses_netloc, uses_params, urlencode
+ from urllib.parse import uses_relative, uses_netloc, uses_params, urlencode, urljoin
from urllib.error import URLError
from http.client import HTTPException
else:
from urllib2 import urlopen as _urlopen
- from urllib import urlencode
+ from urllib import urlencode, pathname2url
from urlparse import urlparse as parse_url
- from urlparse import uses_relative, uses_netloc, uses_params
+ from urlparse import uses_relative, uses_netloc, uses_params, urljoin
from urllib2 import URLError
from httplib import HTTPException
from contextlib import contextmanager, closing
@@ -134,6 +134,21 @@ def get_filepath_or_buffer(filepath_or_buffer, encoding=None):
return filepath_or_buffer, None
+def file_path_to_url(path):
+ """
+ converts an absolute native path to a FILE URL.
+
+ Parameters
+ ----------
+ path : a path in native format
+
+ Returns
+ -------
+ a valid FILE URL
+ """
+ return urljoin('file:', pathname2url(path))
+
+
# ZipFile is not a context manager for <= 2.6
# must be tuple index here since 2.6 doesn't use namedtuple for version_info
if sys.version_info[1] <= 6:
diff --git a/pandas/io/tests/test_html.py b/pandas/io/tests/test_html.py
index 71567fe2e599a..c26048d4cf20b 100644
--- a/pandas/io/tests/test_html.py
+++ b/pandas/io/tests/test_html.py
@@ -21,7 +21,7 @@
from pandas import (DataFrame, MultiIndex, read_csv, Timestamp, Index,
date_range, Series)
from pandas.compat import map, zip, StringIO, string_types
-from pandas.io.common import URLError, urlopen
+from pandas.io.common import URLError, urlopen, file_path_to_url
from pandas.io.html import read_html
import pandas.util.testing as tm
@@ -311,7 +311,7 @@ def test_invalid_url(self):
@slow
def test_file_url(self):
url = self.banklist_data
- dfs = self.read_html('file://' + url, 'First', attrs={'id': 'table'})
+ dfs = self.read_html(file_path_to_url(url), 'First', attrs={'id': 'table'})
tm.assert_isinstance(dfs, list)
for df in dfs:
tm.assert_isinstance(df, DataFrame)
@@ -362,7 +362,7 @@ def test_multiindex_header_index_skiprows(self):
@slow
def test_regex_idempotency(self):
url = self.banklist_data
- dfs = self.read_html('file://' + url,
+ dfs = self.read_html(file_path_to_url(url),
match=re.compile(re.compile('Florida')),
attrs={'id': 'table'})
tm.assert_isinstance(dfs, list)
@@ -637,9 +637,9 @@ def test_invalid_flavor():
flavor='not a* valid**++ flaver')
-def get_elements_from_url(url, element='table', base_url="file://"):
+def get_elements_from_file(url, element='table'):
_skip_if_none_of(('bs4', 'html5lib'))
- url = "".join([base_url, url])
+ url = file_path_to_url(url)
from bs4 import BeautifulSoup
with urlopen(url) as f:
soup = BeautifulSoup(f, features='html5lib')
@@ -651,7 +651,7 @@ def test_bs4_finds_tables():
filepath = os.path.join(DATA_PATH, "spam.html")
with warnings.catch_warnings():
warnings.filterwarnings('ignore')
- assert get_elements_from_url(filepath, 'table')
+ assert get_elements_from_file(filepath, 'table')
def get_lxml_elements(url, element):
| Rebased on master (unable to use same pull request). Closes #4580.
see http://stackoverflow.com/questions/11687478/convert-a-filename-to-a-file-url
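The helper added to `pandas/io/common.py` is a thin wrapper over the stdlib; a minimal standalone sketch of the same `urljoin('file:', pathname2url(path))` composition (Python 3 module names):

```python
from urllib.request import pathname2url
from urllib.parse import urljoin

def file_path_to_url(path):
    # pathname2url handles win32 drive letters and backslashes,
    # which naive "file://" + path concatenation does not.
    return urljoin('file:', pathname2url(path))

print(file_path_to_url('/tmp/spam.html'))  # file:///tmp/spam.html on POSIX
```

On Windows the same call yields a valid `file:///C:/...` URL, which is exactly the case the naive prefixing broke.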
| https://api.github.com/repos/pandas-dev/pandas/pulls/5100 | 2013-10-03T19:40:15Z | 2013-10-28T23:20:03Z | 2013-10-28T23:20:02Z | 2014-07-16T08:33:01Z |
date_range should (at least optionally) deal with right-open intervals | diff --git a/doc/source/release.rst b/doc/source/release.rst
index 4a25a98f2cfbe..68bcc9c14a01b 100644
--- a/doc/source/release.rst
+++ b/doc/source/release.rst
@@ -174,6 +174,8 @@ Improvements to existing features
- :meth:`~pandas.io.json.json_normalize` is a new method to allow you to create a flat table
from semi-structured JSON data. :ref:`See the docs<io.json_normalize>` (:issue:`1067`)
- ``DataFrame.from_records()`` will now accept generators (:issue:`4910`)
+ - DatetimeIndex (and date_range) can now be constructed in a left- or
+ right-open fashion using the ``closed`` parameter (:issue:`4579`)
API Changes
~~~~~~~~~~~
diff --git a/pandas/tseries/index.py b/pandas/tseries/index.py
index 33c90d3714e8a..a2b46f74244e2 100644
--- a/pandas/tseries/index.py
+++ b/pandas/tseries/index.py
@@ -115,6 +115,9 @@ class DatetimeIndex(Int64Index):
end : end time, datetime-like, optional
If periods is none, generated index will extend to first conforming
time on or just past end argument
+ closed : string or None, default None
+ Make the interval closed with respect to the given frequency to
+ the 'left', 'right', or both sides (None)
"""
_join_precedence = 10
@@ -143,7 +146,8 @@ class DatetimeIndex(Int64Index):
def __new__(cls, data=None,
freq=None, start=None, end=None, periods=None,
copy=False, name=None, tz=None,
- verify_integrity=True, normalize=False, **kwds):
+ verify_integrity=True, normalize=False,
+ closed=None, **kwds):
dayfirst = kwds.pop('dayfirst', None)
yearfirst = kwds.pop('yearfirst', None)
@@ -184,7 +188,7 @@ def __new__(cls, data=None,
if data is None:
return cls._generate(start, end, periods, name, offset,
- tz=tz, normalize=normalize,
+ tz=tz, normalize=normalize, closed=closed,
infer_dst=infer_dst)
if not isinstance(data, np.ndarray):
@@ -289,7 +293,7 @@ def __new__(cls, data=None,
@classmethod
def _generate(cls, start, end, periods, name, offset,
- tz=None, normalize=False, infer_dst=False):
+ tz=None, normalize=False, infer_dst=False, closed=None):
if com._count_not_none(start, end, periods) != 2:
raise ValueError('Must specify two of start, end, or periods')
@@ -301,6 +305,24 @@ def _generate(cls, start, end, periods, name, offset,
if end is not None:
end = Timestamp(end)
+ left_closed = False
+ right_closed = False
+
+ if start is None and end is None:
+ if closed is not None:
+ raise ValueError("Closed has to be None if not both of start"
+ "and end are defined")
+
+ if closed is None:
+ left_closed = True
+ right_closed = True
+ elif closed == "left":
+ left_closed = True
+ elif closed == "right":
+ right_closed = True
+ else:
+ raise ValueError("Closed has to be either 'left', 'right' or None")
+
try:
inferred_tz = tools._infer_tzinfo(start, end)
except:
@@ -387,6 +409,11 @@ def _generate(cls, start, end, periods, name, offset,
index.offset = offset
index.tz = tz
+ if not left_closed:
+ index = index[1:]
+ if not right_closed:
+ index = index[:-1]
+
return index
def _box_values(self, values):
@@ -1715,7 +1742,7 @@ def _generate_regular_range(start, end, periods, offset):
def date_range(start=None, end=None, periods=None, freq='D', tz=None,
- normalize=False, name=None):
+ normalize=False, name=None, closed=None):
"""
Return a fixed frequency datetime index, with day (calendar) as the default
frequency
@@ -1737,6 +1764,9 @@ def date_range(start=None, end=None, periods=None, freq='D', tz=None,
Normalize start/end dates to midnight before generating date range
name : str, default None
Name of the resulting index
+ closed : string or None, default None
+ Make the interval closed with respect to the given frequency to
+ the 'left', 'right', or both sides (None)
Notes
-----
@@ -1747,11 +1777,12 @@ def date_range(start=None, end=None, periods=None, freq='D', tz=None,
rng : DatetimeIndex
"""
return DatetimeIndex(start=start, end=end, periods=periods,
- freq=freq, tz=tz, normalize=normalize, name=name)
+ freq=freq, tz=tz, normalize=normalize, name=name,
+ closed=closed)
def bdate_range(start=None, end=None, periods=None, freq='B', tz=None,
- normalize=True, name=None):
+ normalize=True, name=None, closed=None):
"""
Return a fixed frequency datetime index, with business day as the default
frequency
@@ -1773,6 +1804,9 @@ def bdate_range(start=None, end=None, periods=None, freq='B', tz=None,
Normalize start/end dates to midnight before generating date range
name : str, default None
Name for the resulting index
+ closed : string or None, default None
+ Make the interval closed with respect to the given frequency to
+ the 'left', 'right', or both sides (None)
Notes
-----
@@ -1784,11 +1818,12 @@ def bdate_range(start=None, end=None, periods=None, freq='B', tz=None,
"""
return DatetimeIndex(start=start, end=end, periods=periods,
- freq=freq, tz=tz, normalize=normalize, name=name)
+ freq=freq, tz=tz, normalize=normalize, name=name,
+ closed=closed)
def cdate_range(start=None, end=None, periods=None, freq='C', tz=None,
- normalize=True, name=None, **kwargs):
+ normalize=True, name=None, closed=None, **kwargs):
"""
**EXPERIMENTAL** Return a fixed frequency datetime index, with
CustomBusinessDay as the default frequency
@@ -1820,6 +1855,9 @@ def cdate_range(start=None, end=None, periods=None, freq='C', tz=None,
holidays : list
list/array of dates to exclude from the set of valid business days,
passed to ``numpy.busdaycalendar``
+ closed : string or None, default None
+ Make the interval closed with respect to the given frequency to
+ the 'left', 'right', or both sides (None)
Notes
-----
@@ -1835,7 +1873,8 @@ def cdate_range(start=None, end=None, periods=None, freq='C', tz=None,
weekmask = kwargs.pop('weekmask', 'Mon Tue Wed Thu Fri')
freq = CDay(holidays=holidays, weekmask=weekmask)
return DatetimeIndex(start=start, end=end, periods=periods, freq=freq,
- tz=tz, normalize=normalize, name=name, **kwargs)
+ tz=tz, normalize=normalize, name=name,
+ closed=closed, **kwargs)
def _to_m8(key, tz=None):
diff --git a/pandas/tseries/offsets.py b/pandas/tseries/offsets.py
index e496bf46cf57a..ad7d3ba03a129 100644
--- a/pandas/tseries/offsets.py
+++ b/pandas/tseries/offsets.py
@@ -8,6 +8,7 @@
# import after tools, dateutil check
from dateutil.relativedelta import relativedelta
import pandas.tslib as tslib
+from pandas.tslib import Timestamp
import numpy as np
from pandas import _np_version_under1p7
@@ -92,9 +93,9 @@ def apply(self, other):
else:
for i in range(-self.n):
other = other - self._offset
- return other
+ return Timestamp(other)
else:
- return other + timedelta(self.n)
+ return Timestamp(other + timedelta(self.n))
def isAnchored(self):
return (self.n == 1)
@@ -373,7 +374,7 @@ def apply(self, other):
if self.offset:
result = result + self.offset
- return result
+ return Timestamp(result)
elif isinstance(other, (timedelta, Tick)):
return BDay(self.n, offset=self.offset + other,
@@ -516,7 +517,7 @@ def apply(self, other):
if n <= 0:
n = n + 1
other = other + relativedelta(months=n, day=31)
- return other
+ return Timestamp(other)
@classmethod
def onOffset(cls, dt):
@@ -538,7 +539,7 @@ def apply(self, other):
n += 1
other = other + relativedelta(months=n, day=1)
- return other
+ return Timestamp(other)
@classmethod
def onOffset(cls, dt):
@@ -660,7 +661,7 @@ def apply(self, other):
other = other + timedelta((self.weekday - otherDay) % 7)
for i in range(-k):
other = other - self._inc
- return other
+ return Timestamp(other)
def onOffset(self, dt):
return dt.weekday() == self.weekday
@@ -901,7 +902,7 @@ def apply(self, other):
other = other + relativedelta(months=monthsToGo + 3 * n, day=31)
- return other
+ return Timestamp(other)
def onOffset(self, dt):
modMonth = (dt.month - self.startingMonth) % 3
@@ -941,7 +942,7 @@ def apply(self, other):
n = n + 1
other = other + relativedelta(months=3 * n - monthsSince, day=1)
- return other
+ return Timestamp(other)
@property
def rule_code(self):
@@ -1093,7 +1094,7 @@ def _rollf(date):
# n == 0, roll forward
result = _rollf(result)
- return result
+ return Timestamp(result)
def onOffset(self, dt):
wkday, days_in_month = tslib.monthrange(dt.year, self.month)
@@ -1151,7 +1152,7 @@ def _rollf(date):
# n == 0, roll forward
result = _rollf(result)
- return result
+ return Timestamp(result)
def onOffset(self, dt):
return dt.month == self.month and dt.day == 1
diff --git a/pandas/tseries/tests/test_daterange.py b/pandas/tseries/tests/test_daterange.py
index cb17375266edf..3b40e75194d11 100644
--- a/pandas/tseries/tests/test_daterange.py
+++ b/pandas/tseries/tests/test_daterange.py
@@ -394,6 +394,21 @@ def test_month_range_union_tz(self):
early_dr.union(late_dr)
+ def test_range_closed(self):
+ begin = datetime(2011, 1, 1)
+ end = datetime(2014, 1, 1)
+
+ for freq in ["3D", "2M", "7W", "3H", "A"]:
+ closed = date_range(begin, end, closed=None, freq=freq)
+ left = date_range(begin, end, closed="left", freq=freq)
+ right = date_range(begin, end, closed="right", freq=freq)
+
+ expected_left = closed[:-1]
+ expected_right = closed[1:]
+
+ self.assert_(expected_left.equals(left))
+ self.assert_(expected_right.equals(right))
+
class TestCustomDateRange(unittest.TestCase):
| It is confusing that, although the name suggests otherwise, `date_range` behaves differently from the standard Python `range` generator: it includes both endpoints rather than producing a half-open interval.
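The trimming the patch adds at the end of `DatetimeIndex._generate` is simple to state in plain Python; a stdlib-only sketch of the `closed` semantics (names are illustrative, not the pandas API):

```python
def trim_closed(index, closed=None):
    # closed=None keeps both endpoints; 'left' drops the last point,
    # 'right' drops the first -- mirroring the patch's left_closed /
    # right_closed flags applied to a both-ends-inclusive range.
    if closed is None:
        return index
    if closed == 'left':
        return index[:-1]
    if closed == 'right':
        return index[1:]
    raise ValueError("Closed has to be either 'left', 'right' or None")

days = ['2011-01-01', '2011-01-02', '2011-01-03']
assert trim_closed(days) == days
assert trim_closed(days, 'left') == ['2011-01-01', '2011-01-02']
assert trim_closed(days, 'right') == ['2011-01-02', '2011-01-03']
```

This is why the new test can build the `left`/`right` expectations directly from slices of the `closed=None` result.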
| https://api.github.com/repos/pandas-dev/pandas/pulls/4579 | 2013-10-03T19:15:12Z | 2013-10-15T12:38:35Z | 2013-10-15T12:38:35Z | 2014-06-27T14:49:22Z |
DOC: str.match should (obviously) be str.extract. | diff --git a/doc/source/v0.13.0.txt b/doc/source/v0.13.0.txt
index 44ef8f0d1a57e..fe6d796d95968 100644
--- a/doc/source/v0.13.0.txt
+++ b/doc/source/v0.13.0.txt
@@ -428,14 +428,14 @@ Enhancements
.. ipython:: python
- Series(['a1', 'b2', 'c3']).str.match(
+ Series(['a1', 'b2', 'c3']).str.extract(
'(?P<letter>[ab])(?P<digit>\d)')
and optional groups can also be used.
.. ipython:: python
- Series(['a1', 'b2', '3']).str.match(
+ Series(['a1', 'b2', '3']).str.extract(
'(?P<letter>[ab])?(?P<digit>\d)')
- ``read_stata`` now accepts Stata 13 format (:issue:`4291`)
| https://api.github.com/repos/pandas-dev/pandas/pulls/5099 | 2013-10-03T17:17:49Z | 2013-10-03T17:25:17Z | 2013-10-03T17:25:17Z | 2014-07-16T08:32:59Z | |
BUG: non-unique indexing in a Panel (GH4960) | diff --git a/doc/source/release.rst b/doc/source/release.rst
index 4f4681b112664..9b755c9ad2cda 100644
--- a/doc/source/release.rst
+++ b/doc/source/release.rst
@@ -290,6 +290,7 @@ API Changes
call with additional keyword args (:issue:`4435`)
- Provide __dir__ method (and local context) for tab completion / remove ipython completers code
(:issue:`4501`)
+ - Support non-unique axes in a Panel via indexing operations (:issue:`4960`)
Internal Refactoring
diff --git a/pandas/core/indexing.py b/pandas/core/indexing.py
index 0d19736ed8083..7502b3898d7fb 100644
--- a/pandas/core/indexing.py
+++ b/pandas/core/indexing.py
@@ -623,8 +623,9 @@ def _getitem_lowerdim(self, tup):
# might have been a MultiIndex
elif section.ndim == self.ndim:
+
new_key = tup[:i] + (_NS,) + tup[i + 1:]
- # new_key = tup[:i] + tup[i+1:]
+
else:
new_key = tup[:i] + tup[i + 1:]
diff --git a/pandas/core/internals.py b/pandas/core/internals.py
index 6fddc44d7552e..3b451e2a3b196 100644
--- a/pandas/core/internals.py
+++ b/pandas/core/internals.py
@@ -2413,12 +2413,17 @@ def _interleave(self, items):
return result
- def xs(self, key, axis=1, copy=True):
+ def xs(self, key, axis=1, copy=True, takeable=False):
if axis < 1:
raise AssertionError('Can only take xs across axis >= 1, got %d'
% axis)
- loc = self.axes[axis].get_loc(key)
+ # take by position
+ if takeable:
+ loc = key
+ else:
+ loc = self.axes[axis].get_loc(key)
+
slicer = [slice(None, None) for _ in range(self.ndim)]
slicer[axis] = loc
slicer = tuple(slicer)
diff --git a/pandas/core/panel.py b/pandas/core/panel.py
index b1752f94b8d97..1185e9514f7fc 100644
--- a/pandas/core/panel.py
+++ b/pandas/core/panel.py
@@ -504,6 +504,15 @@ def set_value(self, *args):
return result.set_value(*args)
def _box_item_values(self, key, values):
+ if self.ndim == values.ndim:
+ result = self._constructor(values)
+
+ # a dup selection will yield a full ndim
+ if result._get_axis(0).is_unique:
+ result = result[key]
+
+ return result
+
d = self._construct_axes_dict_for_slice(self._AXIS_ORDERS[1:])
return self._constructor_sliced(values, **d)
@@ -745,15 +754,27 @@ def xs(self, key, axis=1, copy=True):
_xs = xs
def _ixs(self, i, axis=0):
- # for compatibility with .ix indexing
- # Won't work with hierarchical indexing yet
+ """
+ i : int, slice, or sequence of integers
+ axis : int
+ """
+
key = self._get_axis(axis)[i]
# xs cannot handle a non-scalar key, so just reindex here
if _is_list_like(key):
- return self.reindex(**{self._get_axis_name(axis): key})
+ indexer = { self._get_axis_name(axis): key }
+ return self.reindex(**indexer)
+
+ # a reduction
+ if axis == 0:
+ values = self._data.iget(i)
+ return self._box_item_values(key,values)
- return self.xs(key, axis=axis)
+ # xs by position
+ self._consolidate_inplace()
+ new_data = self._data.xs(i, axis=axis, copy=True, takeable=True)
+ return self._construct_return_type(new_data)
def groupby(self, function, axis='major'):
"""
diff --git a/pandas/sparse/panel.py b/pandas/sparse/panel.py
index dd0204f11edfb..65a24dc1bf25f 100644
--- a/pandas/sparse/panel.py
+++ b/pandas/sparse/panel.py
@@ -172,6 +172,21 @@ def _set_items(self, new_items):
# DataFrame's columns / "items"
minor_axis = SparsePanelAxis('_minor_axis', 'columns')
+ def _ixs(self, i, axis=0):
+ """
+ for compat as we don't support Block Manager here
+ i : int, slice, or sequence of integers
+ axis : int
+ """
+
+ key = self._get_axis(axis)[i]
+
+ # xs cannot handle a non-scalar key, so just reindex here
+ if com.is_list_like(key):
+ return self.reindex(**{self._get_axis_name(axis): key})
+
+ return self.xs(key, axis=axis)
+
def _get_item_cache(self, key):
return self._frames[key]
diff --git a/pandas/tests/test_panel.py b/pandas/tests/test_panel.py
index 5d3f7b350250d..5c94f378b88ea 100644
--- a/pandas/tests/test_panel.py
+++ b/pandas/tests/test_panel.py
@@ -1335,6 +1335,65 @@ def test_to_panel_duplicates(self):
idf = df.set_index(['a', 'b'])
assertRaisesRegexp(ValueError, 'non-uniquely indexed', idf.to_panel)
+ def test_panel_dups(self):
+
+ # GH 4960
+ # duplicates in an index
+
+ # items
+ data = np.random.randn(5, 100, 5)
+ no_dup_panel = Panel(data, items=list("ABCDE"))
+ panel = Panel(data, items=list("AACDE"))
+
+ expected = no_dup_panel['A']
+ result = panel.iloc[0]
+ assert_frame_equal(result, expected)
+
+ expected = no_dup_panel['E']
+ result = panel.loc['E']
+ assert_frame_equal(result, expected)
+
+ expected = no_dup_panel.loc[['A','B']]
+ expected.items = ['A','A']
+ result = panel.loc['A']
+ assert_panel_equal(result, expected)
+
+ # major
+ data = np.random.randn(5, 5, 5)
+ no_dup_panel = Panel(data, major_axis=list("ABCDE"))
+ panel = Panel(data, major_axis=list("AACDE"))
+
+ expected = no_dup_panel.loc[:,'A']
+ result = panel.iloc[:,0]
+ assert_frame_equal(result, expected)
+
+ expected = no_dup_panel.loc[:,'E']
+ result = panel.loc[:,'E']
+ assert_frame_equal(result, expected)
+
+ expected = no_dup_panel.loc[:,['A','B']]
+ expected.major_axis = ['A','A']
+ result = panel.loc[:,'A']
+ assert_panel_equal(result, expected)
+
+ # minor
+ data = np.random.randn(5, 100, 5)
+ no_dup_panel = Panel(data, minor_axis=list("ABCDE"))
+ panel = Panel(data, minor_axis=list("AACDE"))
+
+ expected = no_dup_panel.loc[:,:,'A']
+ result = panel.iloc[:,:,0]
+ assert_frame_equal(result, expected)
+
+ expected = no_dup_panel.loc[:,:,'E']
+ result = panel.loc[:,:,'E']
+ assert_frame_equal(result, expected)
+
+ expected = no_dup_panel.loc[:,:,['A','B']]
+ expected.minor_axis = ['A','A']
+ result = panel.loc[:,:,'A']
+ assert_panel_equal(result, expected)
+
def test_filter(self):
pass
diff --git a/pandas/util/testing.py b/pandas/util/testing.py
index b25f85c961798..946a4d94b6045 100644
--- a/pandas/util/testing.py
+++ b/pandas/util/testing.py
@@ -357,12 +357,14 @@ def assert_panelnd_equal(left, right,
right_ind = getattr(right, axis)
assert_index_equal(left_ind, right_ind)
- for col, series in compat.iteritems(left):
- assert col in right, "non-matching column '%s'" % col
- assert_func(series, right[col], check_less_precise=check_less_precise)
-
- for col in right:
- assert col in left
+ for i, item in enumerate(left._get_axis(0)):
+ assert item in right, "non-matching item (right) '%s'" % item
+ litem = left.iloc[i]
+ ritem = right.iloc[i]
+ assert_func(litem, ritem, check_less_precise=check_less_precise)
+
+ for i, item in enumerate(right._get_axis(0)):
+ assert item in left, "non-matching item (left) '%s'" % item
# TODO: strangely check_names fails in py3 ?
_panel_frame_equal = partial(assert_frame_equal, check_names=False)
| TST: update Panel tests to iterate by position rather than location (for matching non-unique)
closes #4960
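The `takeable` dispatch added to `BlockManager.xs` reduces to "position vs. label" lookup; a toy stdlib sketch (hypothetical names, not the internal API) of why taking by position matters on a non-unique axis:

```python
def xs(labels, values, key, takeable=False):
    # takeable=True means `key` is already a position; otherwise
    # resolve the label to a position first (what Index.get_loc does).
    loc = key if takeable else labels.index(key)
    return values[loc]

labels = ['A', 'A', 'C']   # duplicated axis label
values = [10, 20, 30]
assert xs(labels, values, 'A') == 10               # label lookup: first match only
assert xs(labels, values, 1, takeable=True) == 20  # position reaches the duplicate
```

With duplicates, only the positional path can address each entry unambiguously, which is what `_ixs` now relies on.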
| https://api.github.com/repos/pandas-dev/pandas/pulls/5097 | 2013-10-03T15:57:50Z | 2013-10-03T21:14:18Z | 2013-10-03T21:14:18Z | 2014-06-30T18:24:07Z |
TST: Groupby filter tests involved len, closing #4447 | diff --git a/pandas/core/groupby.py b/pandas/core/groupby.py
index e70c01ffcb12f..538831692fd67 100644
--- a/pandas/core/groupby.py
+++ b/pandas/core/groupby.py
@@ -17,7 +17,7 @@
from pandas.util.decorators import cache_readonly, Appender
import pandas.core.algorithms as algos
import pandas.core.common as com
-from pandas.core.common import _possibly_downcast_to_dtype, notnull
+from pandas.core.common import _possibly_downcast_to_dtype, isnull, notnull
import pandas.lib as lib
import pandas.algos as _algos
@@ -1605,8 +1605,19 @@ def filter(self, func, dropna=True, *args, **kwargs):
else:
wrapper = lambda x: func(x, *args, **kwargs)
- indexers = [self.obj.index.get_indexer(group.index) \
- if wrapper(group) else [] for _ , group in self]
+ # Interpret np.nan as False.
+ def true_and_notnull(x, *args, **kwargs):
+ b = wrapper(x, *args, **kwargs)
+ return b and notnull(b)
+
+ try:
+ indexers = [self.obj.index.get_indexer(group.index) \
+ if true_and_notnull(group) else [] \
+ for _ , group in self]
+ except ValueError:
+ raise TypeError("the filter must return a boolean result")
+ except TypeError:
+ raise TypeError("the filter must return a boolean result")
if len(indexers) == 0:
filtered = self.obj.take([]) # because np.concatenate would fail
@@ -2124,7 +2135,8 @@ def add_indexer():
add_indexer()
else:
if getattr(res,'ndim',None) == 1:
- if res.ravel()[0]:
+ val = res.ravel()[0]
+ if val and notnull(val):
add_indexer()
else:
diff --git a/pandas/tests/test_groupby.py b/pandas/tests/test_groupby.py
index fec6460ea31f3..babe72e3ca106 100644
--- a/pandas/tests/test_groupby.py
+++ b/pandas/tests/test_groupby.py
@@ -2642,9 +2642,37 @@ def raise_if_sum_is_zero(x):
s = pd.Series([-1,0,1,2])
grouper = s.apply(lambda x: x % 2)
grouped = s.groupby(grouper)
- self.assertRaises(ValueError,
+ self.assertRaises(TypeError,
lambda: grouped.filter(raise_if_sum_is_zero))
+ def test_filter_bad_shapes(self):
+ df = DataFrame({'A': np.arange(8), 'B': list('aabbbbcc'), 'C': np.arange(8)})
+ s = df['B']
+ g_df = df.groupby('B')
+ g_s = s.groupby(s)
+
+ f = lambda x: x
+ self.assertRaises(TypeError, lambda: g_df.filter(f))
+ self.assertRaises(TypeError, lambda: g_s.filter(f))
+
+ f = lambda x: x == 1
+ self.assertRaises(TypeError, lambda: g_df.filter(f))
+ self.assertRaises(TypeError, lambda: g_s.filter(f))
+
+ f = lambda x: np.outer(x, x)
+ self.assertRaises(TypeError, lambda: g_df.filter(f))
+ self.assertRaises(TypeError, lambda: g_s.filter(f))
+
+ def test_filter_nan_is_false(self):
+ df = DataFrame({'A': np.arange(8), 'B': list('aabbbbcc'), 'C': np.arange(8)})
+ s = df['B']
+ g_df = df.groupby(df['B'])
+ g_s = s.groupby(s)
+
+ f = lambda x: np.nan
+ assert_frame_equal(g_df.filter(f), df.loc[[]])
+ assert_series_equal(g_s.filter(f), s[[]])
+
def test_filter_against_workaround(self):
np.random.seed(0)
# Series of ints
@@ -2697,6 +2725,29 @@ def test_filter_against_workaround(self):
new_way = grouped.filter(lambda x: x['ints'].mean() > N/20)
assert_frame_equal(new_way.sort_index(), old_way.sort_index())
+ def test_filter_using_len(self):
+ # BUG GH4447
+ df = DataFrame({'A': np.arange(8), 'B': list('aabbbbcc'), 'C': np.arange(8)})
+ grouped = df.groupby('B')
+ actual = grouped.filter(lambda x: len(x) > 2)
+ expected = DataFrame({'A': np.arange(2, 6), 'B': list('bbbb'), 'C': np.arange(2, 6)}, index=np.arange(2, 6))
+ assert_frame_equal(actual, expected)
+
+ actual = grouped.filter(lambda x: len(x) > 4)
+ expected = df.ix[[]]
+ assert_frame_equal(actual, expected)
+
+ # Series have always worked properly, but we'll test anyway.
+ s = df['B']
+ grouped = s.groupby(s)
+ actual = grouped.filter(lambda x: len(x) > 2)
+ expected = Series(4*['b'], index=np.arange(2, 6))
+ assert_series_equal(actual, expected)
+
+ actual = grouped.filter(lambda x: len(x) > 4)
+ expected = s[[]]
+ assert_series_equal(actual, expected)
+
def test_groupby_whitelist(self):
from string import ascii_lowercase
letters = np.array(list(ascii_lowercase))
| closed #4447 (tests only, so no release notes necessary)
...which was apparently fixed by #4657. More tests looking for shape-related bugs in filter would be nice; so far I have not discovered any.
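The "interpret np.nan as False" rule in the patch is the key behavioral change; a stdlib-only sketch of the `true_and_notnull` wrapper (using `math.isnan` in place of pandas' `notnull`):

```python
import math

def true_and_notnull(value):
    # A NaN result from the filter function is treated as False,
    # rather than as the truthy float it would otherwise be.
    if isinstance(value, float) and math.isnan(value):
        return False
    return bool(value)

assert true_and_notnull(True)
assert not true_and_notnull(False)
assert not true_and_notnull(float('nan'))  # nan is truthy, but filtered out
```

Without the NaN check, `bool(float('nan'))` is `True`, so a NaN-returning filter would incorrectly keep every group.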
| https://api.github.com/repos/pandas-dev/pandas/pulls/5096 | 2013-10-03T14:39:24Z | 2013-10-13T23:37:53Z | 2013-10-13T23:37:53Z | 2014-06-29T17:32:18Z |
TST: Add skip test to excelwriter contextmanager | diff --git a/pandas/io/tests/test_excel.py b/pandas/io/tests/test_excel.py
index 2cc94524b5d19..38b3ee192ab7a 100644
--- a/pandas/io/tests/test_excel.py
+++ b/pandas/io/tests/test_excel.py
@@ -21,12 +21,12 @@
import pandas as pd
-def _skip_if_no_xlrd(version=(0, 9)):
+def _skip_if_no_xlrd():
try:
import xlrd
ver = tuple(map(int, xlrd.__VERSION__.split(".")[:2]))
- if ver < version:
- raise nose.SkipTest('xlrd < %s, skipping' % str(version))
+ if ver < (0, 9):
+ raise nose.SkipTest('xlrd < 0.9, skipping')
except ImportError:
raise nose.SkipTest('xlrd not installed, skipping')
@@ -343,6 +343,7 @@ def test_excel_sheet_by_name_raise(self):
self.assertRaises(xlrd.XLRDError, xl.parse, '0')
def test_excelwriter_contextmanager(self):
+ _skip_if_no_xlrd()
ext = self.ext
pth = os.path.join(self.dirpath, 'testit.{0}'.format(ext))
@@ -350,10 +351,7 @@ def test_excelwriter_contextmanager(self):
with ExcelWriter(pth) as writer:
self.frame.to_excel(writer, 'Data1')
self.frame2.to_excel(writer, 'Data2')
- # If above test passes with outdated xlrd, next test
- # does require fresh xlrd
- # http://nipy.bic.berkeley.edu/builders/pandas-py2.x-wheezy-sparc/builds/148/steps/shell_4/logs/stdio
- _skip_if_no_xlrd((0, 9))
+
with ExcelFile(pth) as reader:
found_df = reader.parse('Data1')
found_df2 = reader.parse('Data2')
| Fixes #5094.
| https://api.github.com/repos/pandas-dev/pandas/pulls/5095 | 2013-10-03T03:22:10Z | 2013-10-03T22:07:48Z | 2013-10-03T22:07:48Z | 2014-07-16T08:32:54Z |
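The version gate that `_skip_if_no_xlrd` hardcodes back to `(0, 9)` boils down to a tuple comparison on the first two version components. A standalone sketch (the helper name here is illustrative, not pandas code):

```python
def version_too_old(version_string, minimum=(0, 9)):
    # Compare only the first two components, as the helper in the
    # diff does with xlrd.__VERSION__.
    parts = tuple(map(int, version_string.split(".")[:2]))
    return parts < minimum

version_too_old("0.8.0")   # True  -> nose.SkipTest would be raised
version_too_old("0.9.4")   # False -> the test proceeds
```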
BUG: MultiIndex.get_level_values() replaces NA by another value (#5074) | diff --git a/doc/source/release.rst b/doc/source/release.rst
index 216b7f2ca6e5a..40ad07aea1ecf 100644
--- a/doc/source/release.rst
+++ b/doc/source/release.rst
@@ -569,6 +569,7 @@ Bug Fixes
- Fixed a bug where default options were being overwritten in the option
parser cleaning (:issue:`5121`).
- Treat a list/ndarray identically for ``iloc`` indexing with list-like (:issue:`5006`)
+ - Fix ``MultiIndex.get_level_values()`` with missing values (:issue:`5074`)
pandas 0.12.0
-------------
diff --git a/pandas/core/index.py b/pandas/core/index.py
index 8e98cc6fb25bb..465a0439c6eb3 100644
--- a/pandas/core/index.py
+++ b/pandas/core/index.py
@@ -393,6 +393,9 @@ def values(self):
def get_values(self):
return self.values
+ _na_value = np.nan
+ """The expected NA value to use with this index."""
+
@property
def is_monotonic(self):
return self._engine.is_monotonic
@@ -2256,7 +2259,8 @@ def get_level_values(self, level):
num = self._get_level_number(level)
unique_vals = self.levels[num] # .values
labels = self.labels[num]
- values = unique_vals.take(labels)
+ values = Index(com.take_1d(unique_vals.values, labels,
+ fill_value=unique_vals._na_value))
values.name = self.names[num]
return values
diff --git a/pandas/tests/test_index.py b/pandas/tests/test_index.py
index 5404b30af8d1c..7e801c0a202db 100644
--- a/pandas/tests/test_index.py
+++ b/pandas/tests/test_index.py
@@ -1445,6 +1445,39 @@ def test_get_level_values(self):
expected = self.index.get_level_values(0)
self.assert_(np.array_equal(result, expected))
+ def test_get_level_values_na(self):
+ arrays = [['a', 'b', 'b'], [1, np.nan, 2]]
+ index = pd.MultiIndex.from_arrays(arrays)
+ values = index.get_level_values(1)
+ expected = [1, np.nan, 2]
+ assert_array_equal(values.values.astype(float), expected)
+
+ arrays = [['a', 'b', 'b'], [np.nan, np.nan, 2]]
+ index = pd.MultiIndex.from_arrays(arrays)
+ values = index.get_level_values(1)
+ expected = [np.nan, np.nan, 2]
+ assert_array_equal(values.values.astype(float), expected)
+
+ arrays = [[np.nan, np.nan, np.nan], ['a', np.nan, 1]]
+ index = pd.MultiIndex.from_arrays(arrays)
+ values = index.get_level_values(0)
+ expected = [np.nan, np.nan, np.nan]
+ assert_array_equal(values.values.astype(float), expected)
+ values = index.get_level_values(1)
+ expected = ['a', np.nan, 1]
+ assert_array_equal(values.values, expected)
+
+ arrays = [['a', 'b', 'b'], pd.DatetimeIndex([0, 1, pd.NaT])]
+ index = pd.MultiIndex.from_arrays(arrays)
+ values = index.get_level_values(1)
+ expected = pd.DatetimeIndex([0, 1, pd.NaT])
+ assert_array_equal(values.values, expected.values)
+
+ arrays = [[], []]
+ index = pd.MultiIndex.from_arrays(arrays)
+ values = index.get_level_values(0)
+ self.assertEqual(values.shape, (0,))
+
def test_reorder_levels(self):
# this blows up
assertRaisesRegexp(IndexError, '^Too many levels',
diff --git a/pandas/tseries/index.py b/pandas/tseries/index.py
index 281ac0cc8a35a..ce1bea96b9d4c 100644
--- a/pandas/tseries/index.py
+++ b/pandas/tseries/index.py
@@ -495,6 +495,9 @@ def _mpl_repr(self):
# how to represent ourselves to matplotlib
return tslib.ints_to_pydatetime(self.asi8, self.tz)
+ _na_value = tslib.NaT
+ """The expected NA value to use with this index."""
+
def __unicode__(self):
from pandas.core.format import _format_datetime64
values = self.values
| closes #5074
Do you really prefer using a mask? It seems to have a lower memory footprint but it requires some tricks for corner cases.
```python
num = self._get_level_number(level)
unique_vals = self.levels[num]  # .values
labels = self.labels[num]
mask = labels == -1
if len(unique_vals) > 0:
    values = unique_vals.take(labels)
else:
    values = np.empty(len(labels))
    values[:] = np.nan
values = pd.Index(values)
if mask.any():
    values = values.get_values()
    values[mask] = np.nan
    values = pd.Index(values)
values.name = self.names[num]
return values
```
| https://api.github.com/repos/pandas-dev/pandas/pulls/5090 | 2013-10-02T21:44:18Z | 2013-10-07T21:25:49Z | 2013-10-07T21:25:49Z | 2014-06-21T03:19:29Z |
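The core of the bug is that `ndarray.take` treats the `-1` label pandas uses for missing entries as "last element"; `com.take_1d` with a `fill_value` (or the mask trick discussed above) avoids that. A minimal NumPy illustration:

```python
import numpy as np

levels = np.array([1.0, 2.0])    # unique level values
labels = np.array([0, -1, 1])    # -1 marks a missing entry

# Plain take wraps -1 around to the last level value, silently
# turning the NA into 2.0 -- the behavior reported in GH5074.
naive = levels.take(labels)

# Masking afterwards restores the NA.
fixed = levels.take(labels)
fixed[labels == -1] = np.nan
```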
TST: Tests for multi-index construction of an all-nan frame (GH4078) | diff --git a/doc/source/release.rst b/doc/source/release.rst
index 4e5178d8a554a..057802dc51af5 100644
--- a/doc/source/release.rst
+++ b/doc/source/release.rst
@@ -524,6 +524,7 @@ Bug Fixes
- Bug in setting with ``ix/loc`` and a mixed int/string index (:issue:`4544`)
- Make sure series-series boolean comparions are label based (:issue:`4947`)
- Bug in multi-level indexing with a Timestamp partial indexer (:issue:`4294`)
+ - Tests/fix for multi-index construction of an all-nan frame (:issue:`4078`)
pandas 0.12.0
-------------
diff --git a/pandas/core/internals.py b/pandas/core/internals.py
index fd9aed58798fe..6fddc44d7552e 100644
--- a/pandas/core/internals.py
+++ b/pandas/core/internals.py
@@ -3530,9 +3530,16 @@ def _shape_compat(x):
if ref_items.is_unique:
items = ref_items[ref_items.isin(names)]
else:
- items = _ensure_index([n for n in names if n in ref_items])
- if len(items) != len(stacked):
- raise Exception("invalid names passed _stack_arrays")
+ # a mi
+ if isinstance(ref_items, MultiIndex):
+ names = MultiIndex.from_tuples(names)
+ items = ref_items[ref_items.isin(names)]
+
+ # plain old dups
+ else:
+ items = _ensure_index([n for n in names if n in ref_items])
+ if len(items) != len(stacked):
+ raise ValueError("invalid names passed _stack_arrays")
return items, stacked, placement
diff --git a/pandas/tests/test_frame.py b/pandas/tests/test_frame.py
index e8d9f3a7fc7cc..1e4e988431f43 100644
--- a/pandas/tests/test_frame.py
+++ b/pandas/tests/test_frame.py
@@ -2263,10 +2263,6 @@ def test_constructor_overflow_int64(self):
df_crawls = DataFrame(data)
self.assert_(df_crawls['uid'].dtype == object)
- def test_is_mixed_type(self):
- self.assert_(not self.frame._is_mixed_type)
- self.assert_(self.mixed_frame._is_mixed_type)
-
def test_constructor_ordereddict(self):
import random
nitems = 100
@@ -2319,6 +2315,19 @@ def test_constructor_dict(self):
frame = DataFrame({'A': [], 'B': []}, columns=['A', 'B'])
self.assert_(frame.index.equals(Index([])))
+ def test_constructor_multi_index(self):
+ # GH 4078
+ # construction error with mi and all-nan frame
+ tuples = [(2, 3), (3, 3), (3, 3)]
+ mi = MultiIndex.from_tuples(tuples)
+ df = DataFrame(index=mi,columns=mi)
+ self.assert_(pd.isnull(df).values.ravel().all())
+
+ tuples = [(3, 3), (2, 3), (3, 3)]
+ mi = MultiIndex.from_tuples(tuples)
+ df = DataFrame(index=mi,columns=mi)
+ self.assert_(pd.isnull(df).values.ravel().all())
+
def test_constructor_error_msgs(self):
msg = "Mixing dicts with non-Series may lead to ambiguous ordering."
# mix dict and array, wrong size
@@ -9489,6 +9498,10 @@ def test_get_X_columns(self):
self.assert_(np.array_equal(df._get_numeric_data().columns,
['a', 'b', 'e']))
+ def test_is_mixed_type(self):
+ self.assert_(not self.frame._is_mixed_type)
+ self.assert_(self.mixed_frame._is_mixed_type)
+
def test_get_numeric_data(self):
intname = np.dtype(np.int_).name
floatname = np.dtype(np.float_).name
| closes #4078
| https://api.github.com/repos/pandas-dev/pandas/pulls/5089 | 2013-10-02T16:51:14Z | 2013-10-02T20:54:59Z | 2013-10-02T20:54:59Z | 2014-06-23T22:01:54Z |
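On a pandas build containing this fix, the construction the new test exercises goes through and produces an all-NaN frame (reproduction sketch):

```python
import pandas as pd

tuples = [(2, 3), (3, 3), (3, 3)]   # note the duplicate tuple
mi = pd.MultiIndex.from_tuples(tuples)

# Before the fix this raised from _stack_arrays; now it builds an
# all-NaN 3x3 frame despite the duplicated labels.
df = pd.DataFrame(index=mi, columns=mi)
```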
BUG: fixed a bug in multi-level indexing with a Timestamp partial indexer (GH4294) | diff --git a/doc/source/release.rst b/doc/source/release.rst
index 058ea165120a6..4e5178d8a554a 100644
--- a/doc/source/release.rst
+++ b/doc/source/release.rst
@@ -523,6 +523,7 @@ Bug Fixes
and other reshaping issues.
- Bug in setting with ``ix/loc`` and a mixed int/string index (:issue:`4544`)
- Make sure series-series boolean comparions are label based (:issue:`4947`)
+ - Bug in multi-level indexing with a Timestamp partial indexer (:issue:`4294`)
pandas 0.12.0
-------------
diff --git a/pandas/core/index.py b/pandas/core/index.py
index f6a88f4164191..d6e74e16c8dae 100644
--- a/pandas/core/index.py
+++ b/pandas/core/index.py
@@ -1,4 +1,5 @@
# pylint: disable=E1101,E1103,W0232
+import datetime
from functools import partial
from pandas.compat import range, zip, lrange, lzip, u
from pandas import compat
@@ -2224,16 +2225,20 @@ def get_value(self, series, key):
# Label-based
s = _values_from_object(series)
k = _values_from_object(key)
+
+ def _try_mi(k):
+ # TODO: what if a level contains tuples??
+ loc = self.get_loc(k)
+ new_values = series.values[loc]
+ new_index = self[loc]
+ new_index = _maybe_droplevels(new_index, k)
+ return Series(new_values, index=new_index, name=series.name)
+
try:
return self._engine.get_value(s, k)
except KeyError as e1:
try:
- # TODO: what if a level contains tuples??
- loc = self.get_loc(key)
- new_values = series.values[loc]
- new_index = self[loc]
- new_index = _maybe_droplevels(new_index, key)
- return Series(new_values, index=new_index, name=series.name)
+ return _try_mi(key)
except KeyError:
pass
@@ -2250,6 +2255,16 @@ def get_value(self, series, key):
except Exception: # pragma: no cover
raise e1
except TypeError:
+
+ # a Timestamp will raise a TypeError in a multi-index
+ # rather than a KeyError, try it here
+ if isinstance(key, (datetime.datetime,np.datetime64)) or (
+ compat.PY3 and isinstance(key, compat.string_types)):
+ try:
+ return _try_mi(Timestamp(key))
+ except:
+ pass
+
raise InvalidIndexError(key)
def get_level_values(self, level):
@@ -2779,6 +2794,7 @@ def reindex(self, target, method=None, level=None, limit=None,
if level is not None:
if method is not None:
raise TypeError('Fill method not supported if level passed')
+ target = _ensure_index(target)
target, indexer, _ = self._join_level(target, level, how='right',
return_indexers=True)
else:
diff --git a/pandas/index.pyx b/pandas/index.pyx
index 53c96b1c55605..8aa4f69a1ec8e 100644
--- a/pandas/index.pyx
+++ b/pandas/index.pyx
@@ -408,7 +408,7 @@ cdef class Float64Engine(IndexEngine):
limit=limit)
-cdef Py_ssize_t _bin_search(ndarray values, object val):
+cdef Py_ssize_t _bin_search(ndarray values, object val) except -1:
cdef:
Py_ssize_t mid, lo = 0, hi = len(values) - 1
object pval
diff --git a/pandas/tseries/tests/test_timeseries.py b/pandas/tseries/tests/test_timeseries.py
index e4504420bacc2..f3598dd2d210b 100644
--- a/pandas/tseries/tests/test_timeseries.py
+++ b/pandas/tseries/tests/test_timeseries.py
@@ -2751,6 +2751,24 @@ def f():
df_multi.loc[('2013-06-19', 'ACCT1', 'ABC')]
self.assertRaises(KeyError, f)
+ # GH 4294
+ # partial slice on a series mi
+ s = pd.DataFrame(randn(1000, 1000), index=pd.date_range('2000-1-1', periods=1000)).stack()
+
+ s2 = s[:-1].copy()
+ expected = s2['2000-1-4']
+ result = s2[pd.Timestamp('2000-1-4')]
+ assert_series_equal(result, expected)
+
+ result = s[pd.Timestamp('2000-1-4')]
+ expected = s['2000-1-4']
+ assert_series_equal(result, expected)
+
+ df2 = pd.DataFrame(s)
+ expected = df2.ix['2000-1-4']
+ result = df2.ix[pd.Timestamp('2000-1-4')]
+ assert_frame_equal(result, expected)
+
def test_date_range_normalize(self):
snap = datetime.today()
n = 50
| closes #4294
| https://api.github.com/repos/pandas-dev/pandas/pulls/5088 | 2013-10-02T16:16:55Z | 2013-10-02T19:10:22Z | 2013-10-02T19:10:22Z | 2014-06-24T23:01:59Z |
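The retry logic added to `MultiIndex.get_value` (catch the error, coerce the key, try again) follows a generic fallback pattern. A pure-stdlib sketch, where the dict and `isoformat` coercion stand in for the index engine and `Timestamp(key)`:

```python
from datetime import datetime

def get_value(mapping, key):
    # First attempt: raw lookup, as the engine does.
    try:
        return mapping[key]
    except (KeyError, TypeError):
        # Fallback: retry with a normalized datetime key, standing
        # in for the Timestamp(key) coercion in the patch.
        if isinstance(key, datetime):
            try:
                return mapping[key.isoformat()]
            except KeyError:
                pass
        raise

table = {'2000-01-04T00:00:00': 42}
get_value(table, datetime(2000, 1, 4))   # -> 42
```

If both lookups miss, the bare `raise` re-raises the original exception, mirroring how the patch lets `InvalidIndexError` propagate.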
DOC: CONTRIBUTING.md: Gold plating: syntax, punctuation, Markdown format... | diff --git a/CONTRIBUTING.md b/CONTRIBUTING.md
index ac972b47e7b60..2966aed5f57ee 100644
--- a/CONTRIBUTING.md
+++ b/CONTRIBUTING.md
@@ -3,147 +3,161 @@
All contributions, bug reports, bug fixes, documentation improvements,
enhancements and ideas are welcome.
-The Github "issues" tab contains some issues labels "Good as first PR", these are
+The [GitHub "issues" tab](https://github.com/pydata/pandas/issues)
+contains some issues labeled "Good as first PR"; these are
tasks which do not require deep knowledge of the package. Look those up if you're
looking for a quick way to help out.
Please try and follow these guidelines, as this makes it easier for us to accept
your contribution or address the issue you're having.
-- When submitting a bug report:
- - Please include a short, self-contained python snippet reproducing the problem.
- You can have the code formatted nicely by using [GitHub Flavored Markdown](http://github.github.com/github-flavored-markdown/) :
+#### Bug Reports
- ```
- ```python
+ - Please include a short, self-contained Python snippet reproducing the problem.
+ You can have the code formatted nicely by using [GitHub Flavored Markdown](http://github.github.com/github-flavored-markdown/) :
- print("I โฅ pandas!")
+ ```python
+
+ print("I โฅ pandas!")
- ``'
- ```
+ ```
- - Specify the pandas (and numpy) version used. (you can check `pandas.__version__`).
+ - A [test case](https://github.com/pydata/pandas/tree/master/pandas/tests) may be more helpful.
+ - Specify the pandas (and NumPy) version used. (check `pandas.__version__`
+ and `numpy.__version__`)
- Explain what the expected behavior was, and what you saw instead.
- - If the issue seems to involve some of pandas' dependencies such as matplotlib
- or PyTables, you should include (the relavent parts of) the output of
- [ci/print_versions.py](https://github.com/pydata/pandas/blob/master/ci/print_versions.py)
-
-- When submitting a Pull Request
- - **Make sure the test suite passes**., and that means on python3 as well.
- You can use "test_fast.sh", or tox locally and/or enable Travis-CI on your fork.
- See the "Getting Travis-CI going" below.
- - If you are changing any code, you need to enable Travis-CI on your fork,
+ - If the issue seems to involve some of [pandas' dependencies](https://github.com/pydata/pandas#dependencies)
+ such as
+ [NumPy](http://numpy.org),
+ [matplotlib](http://matplotlib.org/), and
+ [PyTables](http://www.pytables.org/)
+ you should include (the relevant parts of) the output of
+ [`ci/print_versions.py`](https://github.com/pydata/pandas/blob/master/ci/print_versions.py).
+
+#### Pull Requests
+
+ - **Make sure the test suite passes** for both python2 and python3.
+ You can use `test_fast.sh`, **tox** locally, and/or enable **Travis-CI** on your fork.
+ See "Getting Travis-CI going" below.
+ - An informal commit message format is in effect for the project. Please try
+ and adhere to it. Check `git log` for examples. Here are some common prefixes
+ along with general guidelines for when to use them:
+ - **ENH**: Enhancement, new functionality
+ - **BUG**: Bug fix
+ - **DOC**: Additions/updates to documentation
+ - **TST**: Additions/updates to tests
+ - **BLD**: Updates to the build process/scripts
+ - **PERF**: Performance improvement
+ - **CLN**: Code cleanup
+ - Commit messages should have:
+ - a subject line with `< 80` chars
+ - one blank line
+ - a commit message body, if there's a need for one
+ - If you are changing any code, you should enable Travis-CI on your fork
to make it easier for the team to see that the PR does indeed pass all the tests.
- - Back-compatibility **really** matters. Pandas already has a large user-base and
- a lot of existing user code. Don't break old code if you can avoid it
- Explain the need if there is one in the PR.
- Changes to method signatures should be made in a way which doesn't break existing
- code, for example you should beware of changes to ordering and naming of keyword
- arguments. Add deprecation warnings when needed.
- - Performance matters. You can use the included "test_perf.sh"
- script to make sure your PR does not introduce any performance regressions
+ - **Backward-compatibility really matters**. Pandas already has a large user base and
+ a lot of existing user code.
+ - Don't break old code if you can avoid it.
+ - If there is a need, explain it in the PR.
+ - Changes to method signatures should be made in a way which doesn't break existing
+ code. For example, you should beware of changes to ordering and naming of keyword
+ arguments.
+ - Add deprecation warnings where needed.
+ - Performance matters. You can use the included `test_perf.sh`
+ script to make sure your PR does not introduce any new performance regressions
in the library.
- - docstrings follow the [numpydoc](https://github.com/numpy/numpy/blob/master/doc/HOWTO_DOCUMENT.rst.txt) format.
- - **Don't** merge upstream into a branch you're going to submit as a PR,
- This can create all sorts of problems. Use "git rebase" instead. This ensures
- no merge conflicts occur when you're code is merged by the core team.
- - An informal commit message format is in effect for the project, please try
- and adhere to it. View "git log" for examples. Here are some common prefixes
- along with general guidelines for when to use them:
- - ENH: Enhancement, new functionality
- - BUG: Bug fix
- - DOC: Additions/updates to documentation
- - TST: Additions/updates to tests
- - BLD: Updates to the build process/scripts
- - PERF: Performance improvement
- - CLN: Code cleanup
- - Commit messages should have subject line <80 chars, followed by one blank line,
- and finally a commit message body if there's a need for one.
- - Please reference the GH issue number in your commit message using GH1234
- or #1234, either style is fine.
- - Use "raise AssertionError" rather then plain `assert` in library code (using assert is fine
- for test code). python -o strips assertions. better safe then sorry.
- - When writing tests, don't use "new" assertion methods added to the unittest module
+ - Docstrings follow the [numpydoc](https://github.com/numpy/numpy/blob/master/doc/HOWTO_DOCUMENT.rst.txt) format.
+ - **Don't** merge upstream into a branch you're going to submit as a PR.
+ This can create all sorts of problems. Use `git rebase` instead. This ensures
+ no merge conflicts occur when your code is merged by the core team.
+ - Please reference the GH issue number in your commit message using `GH1234`
+ or `#1234`. Either style is fine.
+ - Use `raise AssertionError` rather then plain `assert` in library code (`assert` is fine
+ for test code). `python -o` strips assertions. Better safe than sorry.
+ - When writing tests, don't use "new" assertion methods added to the `unittest` module
in 2.7 since pandas currently supports 2.6. The most common pitfall is:
- with self.assertRaises(ValueError):
- foo
+ with self.assertRaises(ValueError):
+ foo
+
- which fails on python 2.6. You need to use `assertRaises` from
+ which fails with Python 2.6. You need to use `assertRaises` from
`pandas.util.testing` instead (or use `self.assertRaises(TheException,func,args)`).
- - doc/source/release.rst and doc/source/vx.y.z.txt contain an on-going
- changelog for each release as it is worked on. Add entries to these files
- as needed in a separate commit in your PR, documenting the fix, enhancement
+ - `doc/source/release.rst` and `doc/source/vx.y.z.txt` contain an ongoing
+ changelog for each release. Add entries to these files
+ as needed in a separate commit in your PR: document the fix, enhancement,
or (unavoidable) breaking change.
- - For extra brownie points, use "git rebase -i" to squash and reorder
+ - For extra brownie points, use `git rebase -i` to squash and reorder
commits in your PR so that the history makes the most sense. Use your own
judgment to decide what history needs to be preserved.
- - Pandas source code should not (with some exceptions, such as 3rd party licensed code),
- generally speaking, include an "Authors:" list or attribution to individuals in source code.
- The RELEASE.rst details changes and enhancements to the code over time,
- a "thanks goes to @JohnSmith." as part of the appropriate entry is a suitable way to acknowledge
- contributions, the rest is git blame/log.
+ - Pandas source code should not -- with some exceptions, such as 3rd party licensed code --
+ generally speaking, include an "Authors" list or attribution to individuals in source code.
+ `RELEASE.rst` details changes and enhancements to the code over time.
+ A "thanks goes to @JohnSmith." as part of the appropriate entry is a suitable way to acknowledge
+ contributions. The rest is `git blame`/`git log`.
Feel free to ask the commiter who merges your code to include such an entry
- or include it directly yourself as part of the PR if you'd like to. We're always glad to have
- new contributors join us from the ever-growing pandas community.
+ or include it directly yourself as part of the PR if you'd like to.
+ **We're always glad to have new contributors join us from the ever-growing pandas community.**
You may also be interested in the copyright policy as detailed in the pandas [LICENSE](https://github.com/pydata/pandas/blob/master/LICENSE).
- On the subject of [PEP8](http://www.python.org/dev/peps/pep-0008/): yes.
- On the subject of massive PEP8 fix PRs touching everything, please consider the following:
- - They create merge conflicts for people working in their own fork.
- - They makes git blame less effective.
+ - They create noisy merge conflicts for people working in their own fork.
+ - They make `git blame` less effective.
- Different tools / people achieve PEP8 in different styles. This can create
"style wars" and churn that produces little real benefit.
- If your code changes are intermixed with style fixes, they are harder to review
before merging. Keep style fixes in separate commits.
- - it's fine to clean-up a little around an area you just worked on.
- - Generally its a BAD idea to PEP8 on documentation.
+ - It's fine to clean-up a little around an area you just worked on.
+ - Generally it's a BAD idea to PEP8 on documentation.
Having said that, if you still feel a PEP8 storm is in order, go for it.
-### Notes on plotting functions convention
+### Notes on plotting function conventions
https://groups.google.com/forum/#!topic/pystatsmodels/biNlCvJPNNY/discussion
-###Getting Travis-CI going
+### Getting Travis-CI going
-Instructions for getting Travis-CI installed are available [here](http://about.travis-ci.org/docs/user/getting-started/). For those users who are new to travis-ci and continuous-integration in particular,
+Instructions for getting Travis-CI installed are available [here](http://about.travis-ci.org/docs/user/getting-started/).
+For those users who are new to Travis-CI and [continuous integration](https://en.wikipedia.org/wiki/Continuous_integration) in particular,
Here's a few high-level notes:
-- Travis-CI is a free service (with premium account available), that integrates
-well with Github.
-- Enabling Travis-CI on your github fork of a project will cause any *new* commit
-pushed to the repo to trigger a full build+test on Travis-CI's servers.
-- All the configuration for travis's builds is already specified by .travis.yml in the repo,
-That means all you have to do is enable Travis-CI once, and then just push commits
-and you'll get full testing across py2/3 with pandas's considerable test-suite.
-- Enabling travis-CI will attach the test results (red/green) to the Pull-Request
-page for any PR you submit. For example:
+- Travis-CI is a free service (with premium account upgrades available) that integrates
+ well with GitHub.
+- Enabling Travis-CI on your GitHub fork of a project will cause any *new* commit
+ pushed to the repo to trigger a full build+test on Travis-CI's servers.
+- All the configuration for Travis-CI builds is already specified by `.travis.yml` in the repo.
+ That means all you have to do is enable Travis-CI once, and then just push commits
+ and you'll get full testing across py2/3 with pandas' considerable
+ [test-suite](https://github.com/pydata/pandas/tree/master/pandas/tests).
+- Enabling Travis-CI will attach the test results (red/green) to the Pull-Request
+ page for any PR you submit. For example:
https://github.com/pydata/pandas/pull/2532,
See the Green "Good to merge!" banner? that's it.
This is especially important for new contributors, as members of the pandas dev team
-like to know the test suite passes before considering it for merging.
+like to know that the test suite passes before considering it for merging.
Even regular contributors who test religiously on their local box (using tox
for example) often rely on a PR+travis=green to make double sure everything
works ok on another system, as occasionally, it doesn't.
-####Steps to enable Travis-CI
+#### Steps to enable Travis-CI
-- go to https://travis-ci.org/
-- "Sign in with Github", on top panel.
-- \[your username\]/Account, on top-panel.
-- 'sync now' to refresh the list of repos on your GH account.
-- flip the switch on the repos you want Travis-CI enabled for,
-"pandas" obviously.
+- Open https://travis-ci.org/
+- Select "Sign in with GitHub" (Top Navbar)
+- Select \[your username\] -> "Accounts" (Top Navbar)
+- Select 'Sync now' to refresh the list of repos from your GH account.
+- Flip the switch for the repos you want Travis-CI enabled for.
+ "pandas", obviously.
- Then, pushing a *new* commit to a certain branch on that repo
-will trigger a build/test for that branch, for example the branch
-might be "master" or "PR1234_fix_all_the_things", if that's the
-name of your PR branch.
+ will trigger a build/test for that branch. For example, the branch
+ might be `master` or `PR1234_fix_everything__atomically`, if that's the
+ name of your PR branch.
You can see the build history and current builds for your fork
-on: https://travis-ci.org/(your_GH_username)/pandas.
+at: https://travis-ci.org/(your_GH_username)/pandas.
For example, the builds for the main pandas repo can be seen at:
https://travis-ci.org/pydata/pandas.
| ...ting
:sparkles:
| https://api.github.com/repos/pandas-dev/pandas/pulls/5086 | 2013-10-02T14:13:43Z | 2013-10-08T03:52:50Z | 2013-10-08T03:52:50Z | 2014-06-16T06:29:44Z |
CLN: Remove internal classes from MultiIndex pickle. | diff --git a/pandas/core/index.py b/pandas/core/index.py
index e966912e509e2..3f491b4271ddc 100644
--- a/pandas/core/index.py
+++ b/pandas/core/index.py
@@ -2458,8 +2458,9 @@ def __contains__(self, key):
def __reduce__(self):
"""Necessary for making this object picklable"""
object_state = list(np.ndarray.__reduce__(self))
- subclass_state = (list(self.levels), list(
- self.labels), self.sortorder, list(self.names))
+ subclass_state = ([lev.view(np.ndarray) for lev in self.levels],
+ [label.view(np.ndarray) for label in self.labels],
+ self.sortorder, list(self.names))
object_state[2] = (object_state[2], subclass_state)
return tuple(object_state)
| FrozenNDArray had made it into the MI pickle. There's no reason to do that, and it just
complicates pickle compat going forward. Now the levels and labels pickle as plain
ndarray instead (which also avoids the unnecessary nested pickle that previously
occurred).
Fixes #5076.
| https://api.github.com/repos/pandas-dev/pandas/pulls/5084 | 2013-10-02T12:01:23Z | 2013-10-02T23:36:29Z | 2013-10-02T23:36:29Z | 2014-06-25T18:39:41Z |
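The `view(np.ndarray)` trick the patch applies to levels and labels can be seen in isolation with any ndarray subclass (`FrozenLike` below is a made-up stand-in for `FrozenNDArray`):

```python
import pickle
import numpy as np

class FrozenLike(np.ndarray):
    """Made-up stand-in for pandas' internal FrozenNDArray."""

arr = np.arange(3).view(FrozenLike)

# Pickling the subclass bakes its class path into the payload,
# which is exactly the compat hazard the patch removes.
subclass_payload = pickle.dumps(arr)

# Viewing back to a plain ndarray first keeps the payload free of
# internal class references.
plain_payload = pickle.dumps(arr.view(np.ndarray))
plain = pickle.loads(plain_payload)
```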
CLN: Fix order of index methods. | diff --git a/pandas/core/index.py b/pandas/core/index.py
index 6d0a7d2f9f86a..f6a88f4164191 100644
--- a/pandas/core/index.py
+++ b/pandas/core/index.py
@@ -93,35 +93,6 @@ class Index(FrozenNDArray):
_engine_type = _index.ObjectEngine
- def is_(self, other):
- """
- More flexible, faster check like ``is`` but that works through views
-
- Note: this is *not* the same as ``Index.identical()``, which checks
- that metadata is also the same.
-
- Parameters
- ----------
- other : object
- other object to compare against.
-
- Returns
- -------
- True if both have same underlying data, False otherwise : bool
- """
- # use something other than None to be clearer
- return self._id is getattr(other, '_id', Ellipsis)
-
- def _reset_identity(self):
- "Initializes or resets ``_id`` attribute with new object"
- self._id = _Identity()
-
- def view(self, *args, **kwargs):
- result = super(Index, self).view(*args, **kwargs)
- if isinstance(result, Index):
- result._id = self._id
- return result
-
def __new__(cls, data, dtype=None, copy=False, name=None, fastpath=False,
**kwargs):
@@ -187,6 +158,35 @@ def __new__(cls, data, dtype=None, copy=False, name=None, fastpath=False,
subarr._set_names([name])
return subarr
+ def is_(self, other):
+ """
+ More flexible, faster check like ``is`` but that works through views
+
+ Note: this is *not* the same as ``Index.identical()``, which checks
+ that metadata is also the same.
+
+ Parameters
+ ----------
+ other : object
+ other object to compare against.
+
+ Returns
+ -------
+ True if both have same underlying data, False otherwise : bool
+ """
+ # use something other than None to be clearer
+ return self._id is getattr(other, '_id', Ellipsis)
+
+ def _reset_identity(self):
+ "Initializes or resets ``_id`` attribute with new object"
+ self._id = _Identity()
+
+ def view(self, *args, **kwargs):
+ result = super(Index, self).view(*args, **kwargs)
+ if isinstance(result, Index):
+ result._id = self._id
+ return result
+
# construction helpers
@classmethod
def _scalar_data_error(cls, data):
| I moved `__new__` below other methods on Index. Makes it look weird.
| https://api.github.com/repos/pandas-dev/pandas/pulls/5081 | 2013-10-02T02:19:06Z | 2013-10-02T03:41:17Z | 2013-10-02T03:41:17Z | 2014-07-16T08:32:42Z |
BUG: Give correct ndim, shape and size properties | diff --git a/pandas/core/index.py b/pandas/core/index.py
index 6d0a7d2f9f86a..d0ea7c63804a0 100644
--- a/pandas/core/index.py
+++ b/pandas/core/index.py
@@ -2430,6 +2430,20 @@ def nlevels(self):
def levshape(self):
return tuple(len(x) for x in self.levels)
+ @property
+ def ndim(self):
+ return 2
+
+ @property
+ def shape(self):
+ if len(self.levels):
+ return (len(self.levels[0]), len(self.levels))
+
+ @property
+ def size(self):
+ i, j = self.shape
+ return i * j
+
def __contains__(self, key):
hash(key)
# work around some kind of odd cython bug
diff --git a/pandas/tests/test_index.py b/pandas/tests/test_index.py
index f29cee6942672..57cf3915f291c 100644
--- a/pandas/tests/test_index.py
+++ b/pandas/tests/test_index.py
@@ -1,7 +1,7 @@
# pylint: disable=E1101,E1103,W0232
from datetime import datetime, timedelta
-from pandas.compat import range, lrange, lzip, u, zip
+from pandas.compat import range, lrange, lzip, u, zip, lmap
import operator
import pickle
import re
@@ -678,6 +678,19 @@ def test_join_self(self):
joined = res.join(res, how=kind)
self.assert_(res is joined)
+ def test_sizing_properties(self):
+ for ind in self.indices.values():
+ self.assertEqual(ind.ndim, 1)
+
+ for ind in self.indices.values():
+ self.assertEqual(ind.shape, (len(ind),))
+
+ for ind in self.indices.values():
+ self.assertEqual(ind.size, len(ind))
+
+ self.assertEqual(len(Index(range(10))), 10)
+ self.assertEqual(len(Index(list('abcd'))), 4)
+
class TestFloat64Index(unittest.TestCase):
_multiprocess_can_split_ = True
@@ -767,6 +780,14 @@ def test_astype(self):
self.assert_(i.equals(result))
self.check_is_index(result)
+ def test_sizing_properties(self):
+ self.assertEqual(self.index.ndim, 1)
+
+ self.assertEqual(self.index.shape, (len(self.index),))
+
+ self.assertEqual(self.index.size, len(self.index))
+
+ self.assertEqual(len(Float64Index(lmap(float, range(10)))), 10)
class TestInt64Index(unittest.TestCase):
_multiprocess_can_split_ = True
@@ -2270,6 +2291,24 @@ def test_slice_keep_name(self):
names=['x', 'y'])
self.assertEqual(x[1:].names, x.names)
+ def test_sizing_properties(self):
+ # size is like a 2d array
+
+ # non-lex-sorted
+ mi = MultiIndex.from_tuples(zip([0, 0, 1, 2, -1, -2, 3, 5, 9, 10], list('cccdifghij'), range(10)))
+ self.assertEqual(mi.ndim, 2)
+ self.assertEqual(mi.shape, (10, 3))
+ self.assertEqual(mi.size, 30)
+ self.assertEqual(len(mi), 10)
+
+ # lex sorted
+ mi = MultiIndex.from_tuples(zip(range(5), range(5)))
+ self.assertEqual(mi.ndim, 2)
+ self.assertEqual(mi.shape, (5, 2))
+ self.assertEqual(mi.size, 10)
+ self.assertEqual(len(mi), 5)
+
+
def test_get_combined_index():
from pandas.core.index import _get_combined_index
Treats it like a 2d array rather than an array of tuples. I like it because
it's more informative - but we could also just describe `.values`.
Fixes #4842.
Still working out some kinks.
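The sizing semantics the tests pin down (`shape == (n_labels, n_levels)`, `size == shape[0] * shape[1]`) can be sketched with a plain list of tuples standing in for a real MultiIndex. Everything here (`FakeMultiIndex` included) is illustrative only, not pandas code:

```python
# Rough sketch of the "2d array" sizing semantics proposed in this PR,
# using a plain list of tuples as a stand-in for a real MultiIndex.
class FakeMultiIndex(object):
    def __init__(self, tuples):
        self.tuples = list(tuples)

    def __len__(self):
        # number of labels (rows), matching len(mi) in the tests
        return len(self.tuples)

    @property
    def ndim(self):
        return 2  # rows x levels

    @property
    def shape(self):
        # (number of labels, number of levels)
        if not self.tuples:
            return (0, 0)
        return (len(self.tuples), len(self.tuples[0]))

    @property
    def size(self):
        i, j = self.shape
        return i * j


mi = FakeMultiIndex(zip(range(5), range(5)))
print(mi.ndim, mi.shape, mi.size, len(mi))  # 2 (5, 2) 10 5
```

This matches the lex-sorted case in the new tests: 5 two-element tuples give `shape == (5, 2)` and `size == 10` while `len` stays 5.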
| https://api.github.com/repos/pandas-dev/pandas/pulls/5079 | 2013-10-02T01:21:00Z | 2013-10-02T01:30:08Z | null | 2014-08-05T12:07:55Z |
BUG: Make Index, Int64Index and MI repr evalable | diff --git a/pandas/core/base.py b/pandas/core/base.py
index f390592a6f6c4..2acc045156720 100644
--- a/pandas/core/base.py
+++ b/pandas/core/base.py
@@ -175,4 +175,4 @@ def __unicode__(self):
Invoked by unicode(df) in py2 only. Yields a Unicode String in both py2/py3.
"""
prepr = com.pprint_thing(self, escape_chars=('\t', '\r', '\n'),quote_strings=True)
- return '%s(%s, dtype=%s)' % (type(self).__name__, prepr, self.dtype)
+ return "%s(%s, dtype='%s')" % (type(self).__name__, prepr, self.dtype)
diff --git a/pandas/core/index.py b/pandas/core/index.py
index 465a0439c6eb3..98f190360bc33 100644
--- a/pandas/core/index.py
+++ b/pandas/core/index.py
@@ -2044,7 +2044,7 @@ def __repr__(self):
attrs.append(('sortorder', default_pprint(self.sortorder)))
space = ' ' * (len(self.__class__.__name__) + 1)
- prepr = (u("\n%s") % space).join([u("%s=%s") % (k, v)
+ prepr = (u(",\n%s") % space).join([u("%s=%s") % (k, v)
for k, v in attrs])
res = u("%s(%s)") % (self.__class__.__name__, prepr)
diff --git a/pandas/tests/test_format.py b/pandas/tests/test_format.py
index 55f70e9e4fe28..d9bf8adb71298 100644
--- a/pandas/tests/test_format.py
+++ b/pandas/tests/test_format.py
@@ -1456,7 +1456,7 @@ def test_to_html_with_classes(self):
<table border="1" class="dataframe sortable draggable">
<tbody>
<tr>
- <td>Index([], dtype=object)</td>
+ <td>Index([], dtype='object')</td>
<td>Empty DataFrame</td>
</tr>
</tbody>
diff --git a/pandas/tests/test_index.py b/pandas/tests/test_index.py
index 11538ae8b3ab8..cd26016acba5c 100644
--- a/pandas/tests/test_index.py
+++ b/pandas/tests/test_index.py
@@ -36,21 +36,23 @@ class TestIndex(unittest.TestCase):
_multiprocess_can_split_ = True
def setUp(self):
- self.unicodeIndex = tm.makeUnicodeIndex(100)
- self.strIndex = tm.makeStringIndex(100)
- self.dateIndex = tm.makeDateIndex(100)
- self.intIndex = tm.makeIntIndex(100)
- self.floatIndex = tm.makeFloatIndex(100)
- self.empty = Index([])
- self.tuples = Index(lzip(['foo', 'bar', 'baz'], [1, 2, 3]))
+ self.indices = dict(
+ unicodeIndex = tm.makeUnicodeIndex(100),
+ strIndex = tm.makeStringIndex(100),
+ dateIndex = tm.makeDateIndex(100),
+ intIndex = tm.makeIntIndex(100),
+ floatIndex = tm.makeFloatIndex(100),
+ empty = Index([]),
+ tuples = Index(lzip(['foo', 'bar', 'baz'], [1, 2, 3])),
+ )
+ for name, ind in self.indices.items():
+ setattr(self, name, ind)
def test_wrong_number_names(self):
def testit(ind):
ind.names = ["apple", "banana", "carrot"]
- indices = (self.dateIndex, self.unicodeIndex, self.strIndex,
- self.intIndex, self.floatIndex, self.empty, self.tuples)
- for ind in indices:
+ for ind in self.indices.values():
assertRaisesRegexp(ValueError, "^Length", testit, ind)
def test_set_name_methods(self):
@@ -700,6 +702,10 @@ def test_hash_error(self):
type(self.float).__name__):
hash(self.float)
+ def test_repr_roundtrip(self):
+ for ind in (self.mixed, self.float):
+ tm.assert_index_equal(eval(repr(ind)), ind)
+
def check_is_index(self, i):
self.assert_(isinstance(i, Index) and not isinstance(i, Float64Index))
@@ -1167,6 +1173,9 @@ def test_repr_summary(self):
self.assertTrue(len(r) < 100)
self.assertTrue("..." in r)
+ def test_repr_roundtrip(self):
+ tm.assert_index_equal(eval(repr(self.index)), self.index)
+
def test_unicode_string_with_unicode(self):
idx = Index(lrange(1000))
@@ -2291,6 +2300,9 @@ def test_repr_with_unicode_data(self):
index = pd.DataFrame(d).set_index(["a", "b"]).index
self.assertFalse("\\u" in repr(index)) # we don't want unicode-escaped
+ def test_repr_roundtrip(self):
+ tm.assert_index_equal(eval(repr(self.index)), self.index)
+
def test_unicode_string_with_unicode(self):
d = {"a": [u("\u05d0"), 2, 3], "b": [4, 5, 6], "c": [7, 8, 9]}
idx = pd.DataFrame(d).set_index(["a", "b"]).index
MI repr was missing a comma between its arguments, and the Index reprs
needed to quote their dtypes.
This only covers Index, Int64Index and MultiIndex. Tseries indices (like
PeriodIndex and DatetimeIndex) are more complicated and could be covered
separately.
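The fix boils down to making `repr` emit valid Python, so `eval(repr(ind))` round-trips. A minimal sketch of the pattern, with `FakeIndex` as a purely illustrative stand-in (not pandas' actual `Index`):

```python
# Illustrative only -- FakeIndex is a stand-in, not pandas' Index.
class FakeIndex(object):
    def __init__(self, data, dtype='object'):
        self.data = list(data)
        self.dtype = dtype

    def __repr__(self):
        # The dtype must be rendered as a *quoted* string: emitting it bare
        # (e.g. dtype=float64) would make eval(repr(ind)) raise a NameError,
        # which is exactly what the quoting change in pandas/core/base.py fixes.
        return "%s(%r, dtype='%s')" % (type(self).__name__, self.data, self.dtype)

    def __eq__(self, other):
        return self.data == other.data and self.dtype == other.dtype


ind = FakeIndex(['foo', 'bar'])
assert eval(repr(ind)) == ind  # the repr round-trips
```

The new `test_repr_roundtrip` tests assert the same property on the real classes via `tm.assert_index_equal(eval(repr(ind)), ind)`.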
| https://api.github.com/repos/pandas-dev/pandas/pulls/5077 | 2013-10-02T00:43:52Z | 2013-10-12T17:21:01Z | 2013-10-12T17:21:01Z | 2014-09-02T13:21:29Z |
TST/CI: make sure that locales are tested | diff --git a/.travis.yml b/.travis.yml
index 387dec1ed2658..818278eebf5b5 100644
--- a/.travis.yml
+++ b/.travis.yml
@@ -6,13 +6,13 @@ python:
matrix:
include:
- python: 2.6
- env: NOSE_ARGS="not slow" CLIPBOARD=xclip
+ env: NOSE_ARGS="not slow" CLIPBOARD=xclip LOCALE_OVERRIDE="it_IT.UTF-8"
- python: 2.7
env: NOSE_ARGS="slow and not network" LOCALE_OVERRIDE="zh_CN.GB18030" FULL_DEPS=true JOB_TAG=_LOCALE
- python: 2.7
- env: NOSE_ARGS="not slow" FULL_DEPS=true GUI=gtk2
+ env: NOSE_ARGS="not slow" FULL_DEPS=true CLIPBOARD_GUI=gtk2
- python: 3.2
- env: NOSE_ARGS="not slow" FULL_DEPS=true GUI=qt4
+ env: NOSE_ARGS="not slow" FULL_DEPS=true CLIPBOARD_GUI=qt4
- python: 3.3
env: NOSE_ARGS="not slow" FULL_DEPS=true CLIPBOARD=xsel
exclude:
@@ -25,28 +25,25 @@ virtualenv:
system_site_packages: true
before_install:
- - echo "Waldo1"
+ - echo "before_install"
- echo $VIRTUAL_ENV
- df -h
- date
- # - export PIP_ARGS=-q # comment this this to debug travis install issues
- # - export APT_ARGS=-qq # comment this to debug travis install issues
- # - set -x # enable this to see bash commands
- - export ZIP_FLAGS=-q # comment this to debug travis install issues
- ci/before_install.sh
- python -V
+ # Xvfb stuff for clipboard functionality; see the travis-ci documentation
- export DISPLAY=:99.0
- sh -e /etc/init.d/xvfb start
install:
- - echo "Waldo2"
+ - echo "install"
- ci/install.sh
before_script:
- mysql -e 'create database pandas_nosetest;'
script:
- - echo "Waldo3"
+ - echo "script"
- ci/script.sh
after_script:
diff --git a/ci/install.sh b/ci/install.sh
index a30aba9338db2..528d669ae693c 100755
--- a/ci/install.sh
+++ b/ci/install.sh
@@ -13,20 +13,37 @@
# (no compiling needed), then directly goto script and collect 200$.
#
-echo "inside $0"
+function edit_init()
+{
+ if [ -n "$LOCALE_OVERRIDE" ]; then
+ echo "Adding locale to the first line of pandas/__init__.py"
+ rm -f pandas/__init__.pyc
+ sedc="3iimport locale\nlocale.setlocale(locale.LC_ALL, '$LOCALE_OVERRIDE')\n"
+ sed -i "$sedc" pandas/__init__.py
+ echo "head -4 pandas/__init__.py"
+ head -4 pandas/__init__.py
+ echo
+ fi
+}
+
+edit_init
# Install Dependencies
-# as of pip 1.4rc2, wheel files are still being broken regularly, this is a known good
-# commit. should revert to pypi when a final release is out
-pip install -I git+https://github.com/pypa/pip@42102e9deaea99db08b681d06906c2945f6f95e2#egg=pip
-pv="${TRAVIS_PYTHON_VERSION:0:1}"
-[ "$pv" == "2" ] && pv=""
+# as of pip 1.4rc2, wheel files are still being broken regularly, this is a
+# known good commit. should revert to pypi when a final release is out
+pip_commit=42102e9deaea99db08b681d06906c2945f6f95e2
+pip install -I git+https://github.com/pypa/pip@$pip_commit#egg=pip
+
+python_major_version="${TRAVIS_PYTHON_VERSION:0:1}"
+[ "$python_major_version" == "2" ] && python_major_version=""
pip install -I -U setuptools
pip install wheel
# comment this line to disable the fetching of wheel files
-PIP_ARGS+=" -I --use-wheel --find-links=http://cache27diy-cpycloud.rhcloud.com/${TRAVIS_PYTHON_VERSION}${JOB_TAG}/"
+base_url=http://cache27diy-cpycloud.rhcloud.com
+wheel_box=${TRAVIS_PYTHON_VERSION}${JOB_TAG}
+PIP_ARGS+=" -I --use-wheel --find-links=$base_url/$wheel_box/"
# Force virtualenv to accept system_site_packages
rm -f $VIRTUAL_ENV/lib/python$TRAVIS_PYTHON_VERSION/no-global-site-packages.txt
@@ -35,25 +52,37 @@ rm -f $VIRTUAL_ENV/lib/python$TRAVIS_PYTHON_VERSION/no-global-site-packages.txt
if [ -n "$LOCALE_OVERRIDE" ]; then
# make sure the locale is available
# probably useless, since you would need to relogin
- sudo locale-gen "$LOCALE_OVERRIDE"
+ time sudo locale-gen "$LOCALE_OVERRIDE"
fi
-
# show-skipped is working at this particular commit
-time pip install git+git://github.com/cpcloud/nose-show-skipped.git@fa4ff84e53c09247753a155b428c1bf2c69cb6c3
-time pip install $PIP_ARGS -r ci/requirements-${TRAVIS_PYTHON_VERSION}${JOB_TAG}.txt
-time sudo apt-get install libatlas-base-dev gfortran
+show_skipped_commit=fa4ff84e53c09247753a155b428c1bf2c69cb6c3
+time pip install git+git://github.com/cpcloud/nose-show-skipped.git@$show_skipped_commit
+time pip install $PIP_ARGS -r ci/requirements-${wheel_box}.txt
+
+# we need these for numpy
+time sudo apt-get $APT_ARGS install libatlas-base-dev gfortran
+
+
+# Need to enable for locale testing. The location of the locale file(s) is
+# distro specific. For example, on Arch Linux all of the locales are in a
+# commented-out file--/etc/locale.gen--whose entries must be uncommented to be used,
+# whereas Ubuntu looks in /var/lib/locales/supported.d/* and generates locales
+# based on what's in the files in that folder
+time echo 'it_CH.UTF-8 UTF-8' | sudo tee -a /var/lib/locales/supported.d/it
+time sudo locale-gen
# install gui for clipboard testing
-if [ -n "$GUI" ]; then
- echo "Using GUI clipboard: $GUI"
- [ -n "$pv" ] && py="py"
- time sudo apt-get $APT_ARGS install python${pv}-${py}${GUI}
+if [ -n "$CLIPBOARD_GUI" ]; then
+ echo "Using CLIPBOARD_GUI: $CLIPBOARD_GUI"
+ [ -n "$python_major_version" ] && py="py"
+ python_cb_gui_pkg=python${python_major_version}-${py}${CLIPBOARD_GUI}
+ time sudo apt-get $APT_ARGS install $python_cb_gui_pkg
fi
-# install a clipboard
+# install a clipboard if $CLIPBOARD is not empty
if [ -n "$CLIPBOARD" ]; then
echo "Using clipboard: $CLIPBOARD"
time sudo apt-get $APT_ARGS install $CLIPBOARD
@@ -61,13 +90,15 @@ fi
# Optional Deps
-if [ x"$FULL_DEPS" == x"true" ]; then
+if [ -n "$FULL_DEPS" ]; then
echo "Installing FULL_DEPS"
- # for pytables gets the lib as well
+
+ # need libhdf5 for PyTables
time sudo apt-get $APT_ARGS install libhdf5-serial-dev
fi
-# build pandas
+
+# build and install pandas
time python setup.py build_ext install
true
diff --git a/ci/script.sh b/ci/script.sh
index 2bafe13687505..67dadde2b20fb 100755
--- a/ci/script.sh
+++ b/ci/script.sh
@@ -5,8 +5,8 @@ echo "inside $0"
if [ -n "$LOCALE_OVERRIDE" ]; then
export LC_ALL="$LOCALE_OVERRIDE";
echo "Setting LC_ALL to $LOCALE_OVERRIDE"
- (cd /; python -c 'import pandas; print("pandas detected console encoding: %s" % pandas.get_option("display.encoding"))')
-
+ pycmd='import pandas; print("pandas detected console encoding: %s" % pandas.get_option("display.encoding"))'
+ python -c "$pycmd"
fi
echo nosetests --exe -w /tmp -A "$NOSE_ARGS" pandas --show-skipped
diff --git a/doc/source/release.rst b/doc/source/release.rst
index 7776ee1efba4f..4a25a98f2cfbe 100644
--- a/doc/source/release.rst
+++ b/doc/source/release.rst
@@ -578,6 +578,9 @@ Bug Fixes
- Fix a bug with ``NDFrame.replace()`` which made replacement appear as
though it was (incorrectly) using regular expressions (:issue:`5143`).
- Fix better error message for to_datetime (:issue:`4928`)
+ - Made sure different locales are tested on travis-ci (:issue:`4918`). Also
+ added a couple of utilities for listing available locales and for setting
+ the locale with a context manager.
pandas 0.12.0
-------------
diff --git a/pandas/io/tests/test_data.py b/pandas/io/tests/test_data.py
index f647b217fb260..4e2331f05001d 100644
--- a/pandas/io/tests/test_data.py
+++ b/pandas/io/tests/test_data.py
@@ -13,6 +13,7 @@
from pandas.io.data import DataReader, SymbolWarning
from pandas.util.testing import (assert_series_equal, assert_produces_warning,
network, assert_frame_equal)
+import pandas.util.testing as tm
from numpy.testing import assert_array_equal
@@ -35,6 +36,15 @@ def assert_n_failed_equals_n_null_columns(wngs, obj, cls=SymbolWarning):
class TestGoogle(unittest.TestCase):
+ @classmethod
+ def setUpClass(cls):
+ cls.locales = tm.get_locales(prefix='en_US')
+ if not cls.locales:
+ raise nose.SkipTest("US English locale not available for testing")
+
+ @classmethod
+ def tearDownClass(cls):
+ del cls.locales
@network
def test_google(self):
@@ -44,9 +54,10 @@ def test_google(self):
start = datetime(2010, 1, 1)
end = datetime(2013, 1, 27)
- self.assertEquals(
- web.DataReader("F", 'google', start, end)['Close'][-1],
- 13.68)
+ for locale in self.locales:
+ with tm.set_locale(locale):
+ panel = web.DataReader("F", 'google', start, end)
+ self.assertEquals(panel.Close[-1], 13.68)
self.assertRaises(Exception, web.DataReader, "NON EXISTENT TICKER",
'google', start, end)
@@ -58,38 +69,40 @@ def test_get_quote_fails(self):
@network
def test_get_goog_volume(self):
- df = web.get_data_google('GOOG')
- self.assertEqual(df.Volume.ix['OCT-08-2010'], 2863473)
+ for locale in self.locales:
+ with tm.set_locale(locale):
+ df = web.get_data_google('GOOG').sort_index()
+ self.assertEqual(df.Volume.ix['OCT-08-2010'], 2863473)
@network
def test_get_multi1(self):
- sl = ['AAPL', 'AMZN', 'GOOG']
- pan = web.get_data_google(sl, '2012')
-
- def testit():
+ for locale in self.locales:
+ sl = ['AAPL', 'AMZN', 'GOOG']
+ with tm.set_locale(locale):
+ pan = web.get_data_google(sl, '2012')
ts = pan.Close.GOOG.index[pan.Close.AAPL > pan.Close.GOOG]
- self.assertEquals(ts[0].dayofyear, 96)
-
- if (hasattr(pan, 'Close') and hasattr(pan.Close, 'GOOG') and
- hasattr(pan.Close, 'AAPL')):
- testit()
- else:
- self.assertRaises(AttributeError, testit)
+ if (hasattr(pan, 'Close') and hasattr(pan.Close, 'GOOG') and
+ hasattr(pan.Close, 'AAPL')):
+ self.assertEquals(ts[0].dayofyear, 96)
+ else:
+ self.assertRaises(AttributeError, lambda: pan.Close)
@network
def test_get_multi2(self):
with warnings.catch_warnings(record=True) as w:
- pan = web.get_data_google(['GE', 'MSFT', 'INTC'], 'JAN-01-12',
- 'JAN-31-12')
- result = pan.Close.ix['01-18-12']
- assert_n_failed_equals_n_null_columns(w, result)
-
- # sanity checking
-
- assert np.issubdtype(result.dtype, np.floating)
- result = pan.Open.ix['Jan-15-12':'Jan-20-12']
- self.assertEqual((4, 3), result.shape)
- assert_n_failed_equals_n_null_columns(w, result)
+ for locale in self.locales:
+ with tm.set_locale(locale):
+ pan = web.get_data_google(['GE', 'MSFT', 'INTC'],
+ 'JAN-01-12', 'JAN-31-12')
+ result = pan.Close.ix['01-18-12']
+ assert_n_failed_equals_n_null_columns(w, result)
+
+ # sanity checking
+
+ assert np.issubdtype(result.dtype, np.floating)
+ result = pan.Open.ix['Jan-15-12':'Jan-20-12']
+ self.assertEqual((4, 3), result.shape)
+ assert_n_failed_equals_n_null_columns(w, result)
class TestYahoo(unittest.TestCase):
diff --git a/pandas/io/tests/test_json/test_pandas.py b/pandas/io/tests/test_json/test_pandas.py
index 8c7d89641bdd4..6d392eb265752 100644
--- a/pandas/io/tests/test_json/test_pandas.py
+++ b/pandas/io/tests/test_json/test_pandas.py
@@ -1,11 +1,9 @@
# pylint: disable-msg=W0612,E1101
from pandas.compat import range, lrange, StringIO
from pandas import compat
-from pandas.io.common import URLError
import os
import unittest
-import nose
import numpy as np
from pandas import Series, DataFrame, DatetimeIndex, Timestamp
@@ -16,7 +14,6 @@
assert_series_equal, network,
ensure_clean, assert_index_equal)
import pandas.util.testing as tm
-from numpy.testing.decorators import slow
_seriesd = tm.getSeriesData()
_tsd = tm.getTimeSeriesData()
@@ -53,17 +50,35 @@ def setUp(self):
self.tsframe = _tsframe.copy()
self.mixed_frame = _mixed_frame.copy()
+ def tearDown(self):
+ del self.dirpath
+
+ del self.ts
+
+ del self.series
+
+ del self.objSeries
+
+ del self.empty_series
+ del self.empty_frame
+
+ del self.frame
+ del self.frame2
+ del self.intframe
+ del self.tsframe
+ del self.mixed_frame
+
def test_frame_double_encoded_labels(self):
df = DataFrame([['a', 'b'], ['c', 'd']],
index=['index " 1', 'index / 2'],
columns=['a \\ b', 'y / z'])
- assert_frame_equal(
- df, read_json(df.to_json(orient='split'), orient='split'))
- assert_frame_equal(
- df, read_json(df.to_json(orient='columns'), orient='columns'))
- assert_frame_equal(
- df, read_json(df.to_json(orient='index'), orient='index'))
+ assert_frame_equal(df, read_json(df.to_json(orient='split'),
+ orient='split'))
+ assert_frame_equal(df, read_json(df.to_json(orient='columns'),
+ orient='columns'))
+ assert_frame_equal(df, read_json(df.to_json(orient='index'),
+ orient='index'))
df_unser = read_json(df.to_json(orient='records'), orient='records')
assert_index_equal(df.columns, df_unser.columns)
np.testing.assert_equal(df.values, df_unser.values)
@@ -75,10 +90,10 @@ def test_frame_non_unique_index(self):
self.assertRaises(ValueError, df.to_json, orient='index')
self.assertRaises(ValueError, df.to_json, orient='columns')
- assert_frame_equal(
- df, read_json(df.to_json(orient='split'), orient='split'))
+ assert_frame_equal(df, read_json(df.to_json(orient='split'),
+ orient='split'))
unser = read_json(df.to_json(orient='records'), orient='records')
- self.assert_(df.columns.equals(unser.columns))
+ self.assertTrue(df.columns.equals(unser.columns))
np.testing.assert_equal(df.values, unser.values)
unser = read_json(df.to_json(orient='values'), orient='values')
np.testing.assert_equal(df.values, unser.values)
@@ -102,7 +117,8 @@ def test_frame_non_unique_columns(self):
assert_frame_equal(result, df)
def _check(df):
- result = read_json(df.to_json(orient='split'), orient='split', convert_dates=['x'])
+ result = read_json(df.to_json(orient='split'), orient='split',
+ convert_dates=['x'])
assert_frame_equal(result, df)
for o in [[['a','b'],['c','d']],
@@ -112,15 +128,15 @@ def _check(df):
_check(DataFrame(o, index=[1,2], columns=['x','x']))
def test_frame_from_json_to_json(self):
-
- def _check_orient(df, orient, dtype=None, numpy=False, convert_axes=True, check_dtype=True, raise_ok=None):
+ def _check_orient(df, orient, dtype=None, numpy=False,
+ convert_axes=True, check_dtype=True, raise_ok=None):
df = df.sort()
dfjson = df.to_json(orient=orient)
try:
unser = read_json(dfjson, orient=orient, dtype=dtype,
numpy=numpy, convert_axes=convert_axes)
- except (Exception) as detail:
+ except Exception as detail:
if raise_ok is not None:
if isinstance(detail, raise_ok):
return
@@ -151,7 +167,8 @@ def _check_orient(df, orient, dtype=None, numpy=False, convert_axes=True, check_
if convert_axes:
assert_frame_equal(df, unser, check_dtype=check_dtype)
else:
- assert_frame_equal(df, unser, check_less_precise=False, check_dtype=check_dtype)
+ assert_frame_equal(df, unser, check_less_precise=False,
+ check_dtype=check_dtype)
def _check_all_orients(df, dtype=None, convert_axes=True, raise_ok=None):
@@ -171,17 +188,27 @@ def _check_all_orients(df, dtype=None, convert_axes=True, raise_ok=None):
# numpy=True and raise_ok might be not None, so ignore the error
if convert_axes:
- _check_orient(df, "columns", dtype=dtype, numpy=True, raise_ok=raise_ok)
- _check_orient(df, "records", dtype=dtype, numpy=True, raise_ok=raise_ok)
- _check_orient(df, "split", dtype=dtype, numpy=True, raise_ok=raise_ok)
- _check_orient(df, "index", dtype=dtype, numpy=True, raise_ok=raise_ok)
- _check_orient(df, "values", dtype=dtype, numpy=True, raise_ok=raise_ok)
-
- _check_orient(df, "columns", dtype=dtype, numpy=True, convert_axes=False, raise_ok=raise_ok)
- _check_orient(df, "records", dtype=dtype, numpy=True, convert_axes=False, raise_ok=raise_ok)
- _check_orient(df, "split", dtype=dtype, numpy=True, convert_axes=False, raise_ok=raise_ok)
- _check_orient(df, "index", dtype=dtype, numpy=True, convert_axes=False, raise_ok=raise_ok)
- _check_orient(df, "values", dtype=dtype, numpy=True, convert_axes=False, raise_ok=raise_ok)
+ _check_orient(df, "columns", dtype=dtype, numpy=True,
+ raise_ok=raise_ok)
+ _check_orient(df, "records", dtype=dtype, numpy=True,
+ raise_ok=raise_ok)
+ _check_orient(df, "split", dtype=dtype, numpy=True,
+ raise_ok=raise_ok)
+ _check_orient(df, "index", dtype=dtype, numpy=True,
+ raise_ok=raise_ok)
+ _check_orient(df, "values", dtype=dtype, numpy=True,
+ raise_ok=raise_ok)
+
+ _check_orient(df, "columns", dtype=dtype, numpy=True,
+ convert_axes=False, raise_ok=raise_ok)
+ _check_orient(df, "records", dtype=dtype, numpy=True,
+ convert_axes=False, raise_ok=raise_ok)
+ _check_orient(df, "split", dtype=dtype, numpy=True,
+ convert_axes=False, raise_ok=raise_ok)
+ _check_orient(df, "index", dtype=dtype, numpy=True,
+ convert_axes=False, raise_ok=raise_ok)
+ _check_orient(df, "values", dtype=dtype, numpy=True,
+ convert_axes=False, raise_ok=raise_ok)
# basic
_check_all_orients(self.frame)
@@ -202,9 +229,10 @@ def _check_all_orients(df, dtype=None, convert_axes=True, raise_ok=None):
# dtypes
_check_all_orients(DataFrame(biggie, dtype=np.float64),
dtype=np.float64, convert_axes=False)
- _check_all_orients(DataFrame(biggie, dtype=np.int), dtype=np.int, convert_axes=False)
- _check_all_orients(DataFrame(biggie, dtype='U3'), dtype='U3', convert_axes=False,
- raise_ok=ValueError)
+ _check_all_orients(DataFrame(biggie, dtype=np.int), dtype=np.int,
+ convert_axes=False)
+ _check_all_orients(DataFrame(biggie, dtype='U3'), dtype='U3',
+ convert_axes=False, raise_ok=ValueError)
# empty
_check_all_orients(self.empty_frame)
@@ -258,37 +286,37 @@ def test_frame_from_json_bad_data(self):
def test_frame_from_json_nones(self):
df = DataFrame([[1, 2], [4, 5, 6]])
unser = read_json(df.to_json())
- self.assert_(np.isnan(unser[2][0]))
+ self.assertTrue(np.isnan(unser[2][0]))
df = DataFrame([['1', '2'], ['4', '5', '6']])
unser = read_json(df.to_json())
- self.assert_(np.isnan(unser[2][0]))
+ self.assertTrue(np.isnan(unser[2][0]))
unser = read_json(df.to_json(),dtype=False)
- self.assert_(unser[2][0] is None)
+ self.assertTrue(unser[2][0] is None)
unser = read_json(df.to_json(),convert_axes=False,dtype=False)
- self.assert_(unser['2']['0'] is None)
+ self.assertTrue(unser['2']['0'] is None)
unser = read_json(df.to_json(), numpy=False)
- self.assert_(np.isnan(unser[2][0]))
+ self.assertTrue(np.isnan(unser[2][0]))
unser = read_json(df.to_json(), numpy=False, dtype=False)
- self.assert_(unser[2][0] is None)
+ self.assertTrue(unser[2][0] is None)
unser = read_json(df.to_json(), numpy=False, convert_axes=False, dtype=False)
- self.assert_(unser['2']['0'] is None)
+ self.assertTrue(unser['2']['0'] is None)
# infinities get mapped to nulls which get mapped to NaNs during
# deserialisation
df = DataFrame([[1, 2], [4, 5, 6]])
df[2][0] = np.inf
unser = read_json(df.to_json())
- self.assert_(np.isnan(unser[2][0]))
+ self.assertTrue(np.isnan(unser[2][0]))
unser = read_json(df.to_json(), dtype=False)
- self.assert_(np.isnan(unser[2][0]))
+ self.assertTrue(np.isnan(unser[2][0]))
df[2][0] = np.NINF
unser = read_json(df.to_json())
- self.assert_(np.isnan(unser[2][0]))
+ self.assertTrue(np.isnan(unser[2][0]))
unser = read_json(df.to_json(),dtype=False)
- self.assert_(np.isnan(unser[2][0]))
+ self.assertTrue(np.isnan(unser[2][0]))
def test_frame_to_json_except(self):
df = DataFrame([1, 2, 3])
@@ -345,7 +373,7 @@ def _check_orient(series, orient, dtype=None, numpy=False):
except:
raise
if orient == "split":
- self.assert_(series.name == unser.name)
+ self.assertEqual(series.name, unser.name)
def _check_all_orients(series, dtype=None):
_check_orient(series, "columns", dtype=dtype)
@@ -403,12 +431,12 @@ def test_reconstruction_index(self):
result = read_json(df.to_json())
# the index is serialized as strings....correct?
- #assert_frame_equal(result,df)
+ assert_frame_equal(result, df)
def test_path(self):
with ensure_clean('test.json') as path:
-
- for df in [ self.frame, self.frame2, self.intframe, self.tsframe, self.mixed_frame ]:
+ for df in [self.frame, self.frame2, self.intframe, self.tsframe,
+ self.mixed_frame]:
df.to_json(path)
read_json(path)
@@ -512,7 +540,6 @@ def test_date_unit(self):
assert_frame_equal(result, df)
def test_weird_nested_json(self):
-
# this used to core dump the parser
s = r'''{
"status": "success",
@@ -528,9 +555,9 @@ def test_weird_nested_json(self):
"title": "Another blog post",
"body": "More content"
}
- ]
- }
-}'''
+ ]
+ }
+ }'''
read_json(s)
@@ -550,18 +577,19 @@ def test_misc_example(self):
# parsing unordered input fails
result = read_json('[{"a": 1, "b": 2}, {"b":2, "a" :1}]',numpy=True)
expected = DataFrame([[1,2],[1,2]],columns=['a','b'])
- #assert_frame_equal(result,expected)
+ with tm.assertRaisesRegexp(AssertionError,
+ '\[index\] left \[.+\], right \[.+\]'):
+ assert_frame_equal(result, expected)
result = read_json('[{"a": 1, "b": 2}, {"b":2, "a" :1}]')
expected = DataFrame([[1,2],[1,2]],columns=['a','b'])
assert_frame_equal(result,expected)
@network
- @slow
def test_round_trip_exception_(self):
# GH 3867
-
- df = pd.read_csv('https://raw.github.com/hayd/lahman2012/master/csvs/Teams.csv')
+ csv = 'https://raw.github.com/hayd/lahman2012/master/csvs/Teams.csv'
+ df = pd.read_csv(csv)
s = df.to_json()
result = pd.read_json(s)
assert_frame_equal(result.reindex(index=df.index,columns=df.columns),df)
@@ -569,12 +597,9 @@ def test_round_trip_exception_(self):
@network
def test_url(self):
url = 'https://api.github.com/repos/pydata/pandas/issues?per_page=5'
- result = read_json(url,convert_dates=True)
- for c in ['created_at','closed_at','updated_at']:
- self.assert_(result[c].dtype == 'datetime64[ns]')
-
- url = 'http://search.twitter.com/search.json?q=pandas%20python'
- result = read_json(url)
+ result = read_json(url, convert_dates=True)
+ for c in ['created_at', 'closed_at', 'updated_at']:
+ self.assertEqual(result[c].dtype, 'datetime64[ns]')
def test_default_handler(self):
from datetime import timedelta
@@ -585,6 +610,6 @@ def test_default_handler(self):
expected, pd.read_json(frame.to_json(default_handler=str)))
def my_handler_raises(obj):
- raise TypeError
- self.assertRaises(
- TypeError, frame.to_json, default_handler=my_handler_raises)
+ raise TypeError("raisin")
+ self.assertRaises(TypeError, frame.to_json,
+ default_handler=my_handler_raises)
diff --git a/pandas/io/tests/test_json/test_ujson.py b/pandas/io/tests/test_json/test_ujson.py
index 0b3bff7a151cc..06ff5abf7cd13 100644
--- a/pandas/io/tests/test_json/test_ujson.py
+++ b/pandas/io/tests/test_json/test_ujson.py
@@ -32,6 +32,7 @@ def _skip_if_python_ver(skip_major, skip_minor=None):
if major == skip_major and (skip_minor is None or minor == skip_minor):
raise nose.SkipTest("skipping Python version %d.%d" % (major, minor))
+
json_unicode = (json.dumps if sys.version_info[0] >= 3
else partial(json.dumps, encoding="utf-8"))
@@ -194,7 +195,6 @@ def test_invalidDoublePrecision(self):
# will throw typeError
self.assertRaises(TypeError, ujson.encode, input, double_precision = None)
-
def test_encodeStringConversion(self):
input = "A string \\ / \b \f \n \r \t"
output = ujson.encode(input)
@@ -220,7 +220,6 @@ def test_encodeControlEscaping(self):
self.assertEquals(input, dec)
self.assertEquals(enc, json_unicode(input))
-
def test_encodeUnicodeConversion2(self):
input = "\xe6\x97\xa5\xd1\x88"
enc = ujson.encode(input)
@@ -259,7 +258,6 @@ def test_encodeUnicode4BytesUTF8Highest(self):
self.assertEquals(enc, json_unicode(input))
self.assertEquals(dec, json.loads(enc))
-
def test_encodeArrayInArray(self):
input = [[[[]]]]
output = ujson.encode(input)
@@ -286,7 +284,6 @@ def test_encodeIntNegConversion(self):
self.assertEquals(input, ujson.decode(output))
pass
-
def test_encodeLongNegConversion(self):
input = -9223372036854775808
output = ujson.encode(input)
@@ -448,7 +445,6 @@ def test_encodeDoubleNegInf(self):
input = -np.inf
assert ujson.encode(input) == 'null', "Expected null"
-
def test_decodeJibberish(self):
input = "fdsa sda v9sa fdsa"
try:
@@ -566,7 +562,6 @@ def test_decodeNullBroken(self):
return
assert False, "Wrong exception"
-
def test_decodeBrokenDictKeyTypeLeakTest(self):
input = '{{1337:""}}'
for x in range(1000):
@@ -667,7 +662,6 @@ def test_decodeNullCharacter(self):
input = "\"31337 \\u0000 31337\""
self.assertEquals(ujson.decode(input), json.loads(input))
-
def test_encodeListLongConversion(self):
input = [9223372036854775807, 9223372036854775807, 9223372036854775807,
9223372036854775807, 9223372036854775807, 9223372036854775807 ]
@@ -1147,6 +1141,7 @@ def testArrayNumpyLabelled(self):
self.assertTrue((np.array(['1','2','3']) == output[1]).all())
self.assertTrue((np.array(['a', 'b']) == output[2]).all())
+
class PandasJSONTests(TestCase):
def testDataFrame(self):
@@ -1178,7 +1173,6 @@ def testDataFrame(self):
assert_array_equal(df.transpose().columns, outp.columns)
assert_array_equal(df.transpose().index, outp.index)
-
def testDataFrameNumpy(self):
df = DataFrame([[1,2,3], [4,5,6]], index=['a', 'b'], columns=['x', 'y', 'z'])
@@ -1486,7 +1480,6 @@ def test_decodeArrayFaultyUnicode(self):
else:
assert False, "expected ValueError"
-
def test_decodeFloatingPointAdditionalTests(self):
places = 15
@@ -1529,39 +1522,10 @@ def test_encodeSet(self):
self.assertTrue(v in s)
-"""
-def test_decodeNumericIntFrcOverflow(self):
-input = "X.Y"
-raise NotImplementedError("Implement this test!")
-
-
-def test_decodeStringUnicodeEscape(self):
-input = "\u3131"
-raise NotImplementedError("Implement this test!")
-
-def test_decodeStringUnicodeBrokenEscape(self):
-input = "\u3131"
-raise NotImplementedError("Implement this test!")
-
-def test_decodeStringUnicodeInvalidEscape(self):
-input = "\u3131"
-raise NotImplementedError("Implement this test!")
-
-def test_decodeStringUTF8(self):
-input = "someutfcharacters"
-raise NotImplementedError("Implement this test!")
-
-
-
-"""
-
def _clean_dict(d):
return dict((str(k), v) for k, v in compat.iteritems(d))
+
if __name__ == '__main__':
- # unittest.main()
- import nose
- # nose.runmodule(argv=[__file__,'-vvs','-x', '--ipdb-failure'],
- # exit=False)
nose.runmodule(argv=[__file__,'-vvs','-x','--pdb', '--pdb-failure'],
exit=False)
diff --git a/pandas/tools/tests/test_util.py b/pandas/tools/tests/test_util.py
index 1888f2ede35e0..614f5ecc39e9d 100644
--- a/pandas/tools/tests/test_util.py
+++ b/pandas/tools/tests/test_util.py
@@ -1,12 +1,21 @@
import os
-import nose
+import locale
+import codecs
import unittest
+import nose
+
import numpy as np
from numpy.testing import assert_equal
+import pandas.util.testing as tm
from pandas.tools.util import cartesian_product
+
+CURRENT_LOCALE = locale.getlocale()
+LOCALE_OVERRIDE = os.environ.get('LOCALE_OVERRIDE', None)
+
+
class TestCartesianProduct(unittest.TestCase):
def test_simple(self):
@@ -16,6 +25,61 @@ def test_simple(self):
np.array([ 1, 22, 1, 22, 1, 22])]
assert_equal(result, expected)
+
+class TestLocaleUtils(unittest.TestCase):
+ @classmethod
+ def setUpClass(cls):
+ cls.locales = tm.get_locales()
+
+ if not cls.locales:
+ raise nose.SkipTest("No locales found")
+
+ if os.name == 'nt': # we're on windows
+ raise nose.SkipTest("Running on Windows")
+
+ @classmethod
+ def tearDownClass(cls):
+ del cls.locales
+
+ def test_get_locales(self):
+ # all systems should have at least a single locale
+ assert len(tm.get_locales()) > 0
+
+ def test_get_locales_prefix(self):
+ if len(self.locales) == 1:
+ raise nose.SkipTest("Only a single locale found, no point in "
+ "trying to test filtering locale prefixes")
+ first_locale = self.locales[0]
+ assert len(tm.get_locales(prefix=first_locale[:2])) > 0
+
+ def test_set_locale(self):
+ if len(self.locales) == 1:
+ raise nose.SkipTest("Only a single locale found, no point in "
+ "trying to test setting another locale")
+
+ if LOCALE_OVERRIDE is not None:
+ lang, enc = LOCALE_OVERRIDE.split('.')
+ else:
+ lang, enc = 'it_CH', 'UTF-8'
+
+ enc = codecs.lookup(enc).name
+ new_locale = lang, enc
+
+ if not tm._can_set_locale('.'.join(new_locale)):
+ with tm.assertRaises(locale.Error):
+ with tm.set_locale(new_locale):
+ pass
+ else:
+ with tm.set_locale(new_locale) as normalized_locale:
+ new_lang, new_enc = normalized_locale.split('.')
+ new_enc = codecs.lookup(enc).name
+ normalized_locale = new_lang, new_enc
+ self.assertEqual(normalized_locale, new_locale)
+
+ current_locale = locale.getlocale()
+ self.assertEqual(current_locale, CURRENT_LOCALE)
+
+
if __name__ == '__main__':
nose.runmodule(argv=[__file__, '-vvs', '-x', '--pdb', '--pdb-failure'],
exit=False)
diff --git a/pandas/tseries/converter.py b/pandas/tseries/converter.py
index bfbd28f7bb4a4..d059d229ef22e 100644
--- a/pandas/tseries/converter.py
+++ b/pandas/tseries/converter.py
@@ -10,7 +10,7 @@
from matplotlib.ticker import Formatter, AutoLocator, Locator
from matplotlib.transforms import nonsingular
-from pandas.compat import range, lrange
+from pandas.compat import lrange
import pandas.compat as compat
import pandas.lib as lib
import pandas.core.common as com
diff --git a/pandas/tseries/tests/test_plotting.py b/pandas/tseries/tests/test_plotting.py
index cfbde75f6ae21..a5e249b77fa52 100644
--- a/pandas/tseries/tests/test_plotting.py
+++ b/pandas/tseries/tests/test_plotting.py
@@ -317,7 +317,8 @@ def _test(ax):
result = ax.get_xlim()
self.assertEqual(int(result[0]), expected[0].ordinal)
self.assertEqual(int(result[1]), expected[1].ordinal)
- plt.close(ax.get_figure())
+ fig = ax.get_figure()
+ plt.close(fig)
ser = tm.makeTimeSeries()
ax = ser.plot()
diff --git a/pandas/tslib.pyx b/pandas/tslib.pyx
index 3dcfa3621895e..c6c2b418f553d 100644
--- a/pandas/tslib.pyx
+++ b/pandas/tslib.pyx
@@ -35,6 +35,11 @@ from datetime import timedelta, datetime
from datetime import time as datetime_time
from pandas.compat import parse_date
+from sys import version_info
+
+# GH3363
+cdef bint PY2 = version_info[0] == 2
+
# initialize numpy
import_array()
#import_ufunc()
@@ -1757,20 +1762,20 @@ def tz_localize_to_utc(ndarray[int64_t] vals, object tz, bint infer_dst=False):
# timestamp falls to the right side of the DST transition
if v + deltas[pos] == vals[i]:
result_b[i] = v
-
-
+
+
if infer_dst:
dst_hours = np.empty(n, dtype=np.int64)
dst_hours.fill(NPY_NAT)
-
+
# Get the ambiguous hours (given the above, these are the hours
- # where result_a != result_b and neither of them are NAT)
+ # where result_a != result_b and neither of them are NAT)
both_nat = np.logical_and(result_a != NPY_NAT, result_b != NPY_NAT)
both_eq = result_a == result_b
trans_idx = np.squeeze(np.nonzero(np.logical_and(both_nat, ~both_eq)))
if trans_idx.size == 1:
stamp = Timestamp(vals[trans_idx])
- raise pytz.AmbiguousTimeError("Cannot infer dst time from %s as"
+ raise pytz.AmbiguousTimeError("Cannot infer dst time from %s as"
"there are no repeated times" % stamp)
# Split the array into contiguous chunks (where the difference between
# indices is 1). These are effectively dst transitions in different years
@@ -1779,21 +1784,21 @@ def tz_localize_to_utc(ndarray[int64_t] vals, object tz, bint infer_dst=False):
if trans_idx.size > 0:
one_diff = np.where(np.diff(trans_idx)!=1)[0]+1
trans_grp = np.array_split(trans_idx, one_diff)
-
+
# Iterate through each day, if there are no hours where the delta is negative
# (indicates a repeat of hour) the switch cannot be inferred
for grp in trans_grp:
-
+
delta = np.diff(result_a[grp])
if grp.size == 1 or np.all(delta>0):
stamp = Timestamp(vals[grp[0]])
raise pytz.AmbiguousTimeError(stamp)
-
+
# Find the index for the switch and pull from a for dst and b for standard
switch_idx = (delta<=0).nonzero()[0]
if switch_idx.size > 1:
raise pytz.AmbiguousTimeError("There are %i dst switches "
- "when there should only be 1."
+ "when there should only be 1."
% switch_idx.size)
switch_idx = switch_idx[0]+1 # Pull the only index and adjust
a_idx = grp[:switch_idx]
@@ -1812,7 +1817,7 @@ def tz_localize_to_utc(ndarray[int64_t] vals, object tz, bint infer_dst=False):
else:
stamp = Timestamp(vals[i])
raise pytz.AmbiguousTimeError("Cannot infer dst time from %r, "\
- "try using the 'infer_dst' argument"
+ "try using the 'infer_dst' argument"
% stamp)
elif left != NPY_NAT:
result[i] = left
@@ -2549,8 +2554,9 @@ cdef list extra_fmts = [(b"%q", b"^`AB`^"),
cdef list str_extra_fmts = ["^`AB`^", "^`CD`^", "^`EF`^", "^`GH`^", "^`IJ`^", "^`KL`^"]
-cdef _period_strftime(int64_t value, int freq, object fmt):
+cdef object _period_strftime(int64_t value, int freq, object fmt):
import sys
+
cdef:
Py_ssize_t i
date_info dinfo
@@ -2595,13 +2601,8 @@ cdef _period_strftime(int64_t value, int freq, object fmt):
result = result.replace(str_extra_fmts[i], repl)
- # Py3?
- if not PyString_Check(result):
- result = str(result)
-
- # GH3363
- if sys.version_info[0] == 2:
- result = result.decode('utf-8','strict')
+ if PY2:
+ result = result.decode('utf-8', 'ignore')
return result
diff --git a/pandas/util/testing.py b/pandas/util/testing.py
index 946a4d94b6045..4787c82282a1f 100644
--- a/pandas/util/testing.py
+++ b/pandas/util/testing.py
@@ -9,6 +9,8 @@
import warnings
import inspect
import os
+import subprocess
+import locale
from datetime import datetime
from functools import wraps, partial
@@ -20,6 +22,7 @@
import nose
+import pandas as pd
from pandas.core.common import isnull, _is_sequence
import pandas.core.index as index
import pandas.core.series as series
@@ -28,7 +31,7 @@
import pandas.core.panel4d as panel4d
import pandas.compat as compat
from pandas.compat import(
- map, zip, range, unichr, lrange, lmap, lzip, u, callable, Counter,
+ filter, map, zip, range, unichr, lrange, lmap, lzip, u, callable, Counter,
raise_with_traceback, httplib
)
@@ -97,6 +100,172 @@ def setUpClass(cls):
return cls
+#------------------------------------------------------------------------------
+# locale utilities
+
+def check_output(*popenargs, **kwargs): # shamelessly taken from Python 2.7 source
+ r"""Run command with arguments and return its output as a byte string.
+
+ If the exit code was non-zero it raises a CalledProcessError. The
+ CalledProcessError object will have the return code in the returncode
+ attribute and output in the output attribute.
+
+ The arguments are the same as for the Popen constructor. Example:
+
+ >>> check_output(["ls", "-l", "/dev/null"])
+ 'crw-rw-rw- 1 root root 1, 3 Oct 18 2007 /dev/null\n'
+
+ The stdout argument is not allowed as it is used internally.
+ To capture standard error in the result, use stderr=STDOUT.
+
+ >>> check_output(["/bin/sh", "-c",
+ ... "ls -l non_existent_file ; exit 0"],
+ ... stderr=STDOUT)
+ 'ls: non_existent_file: No such file or directory\n'
+ """
+ if 'stdout' in kwargs:
+ raise ValueError('stdout argument not allowed, it will be overridden.')
+ process = subprocess.Popen(stdout=subprocess.PIPE, *popenargs, **kwargs)
+ output, unused_err = process.communicate()
+ retcode = process.poll()
+ if retcode:
+ cmd = kwargs.get("args")
+ if cmd is None:
+ cmd = popenargs[0]
+ raise subprocess.CalledProcessError(retcode, cmd, output=output)
+ return output
+
+
+def _default_locale_getter():
+ try:
+ raw_locales = check_output(['locale -a'], shell=True)
+ except subprocess.CalledProcessError as e:
+        raise type(e)("%s, the 'locale -a' command cannot be found on your "
+ "system" % e)
+ return raw_locales
+
+
+def get_locales(prefix=None, normalize=True,
+ locale_getter=_default_locale_getter):
+ """Get all the locales that are available on the system.
+
+ Parameters
+ ----------
+ prefix : str
+ If not ``None`` then return only those locales with the prefix
+ provided. For example to get all English language locales (those that
+ start with ``"en"``), pass ``prefix="en"``.
+ normalize : bool
+ Call ``locale.normalize`` on the resulting list of available locales.
+ If ``True``, only locales that can be set without throwing an
+ ``Exception`` are returned.
+ locale_getter : callable
+ The function to use to retrieve the current locales. This should return
+ a string with each locale separated by a newline character.
+
+ Returns
+ -------
+ locales : list of strings
+ A list of locale strings that can be set with ``locale.setlocale()``.
+ For example::
+
+ locale.setlocale(locale.LC_ALL, locale_string)
+ """
+ raw_locales = locale_getter()
+
+ try:
+ raw_locales = str(raw_locales, encoding=pd.options.display.encoding)
+ except TypeError:
+ pass
+
+ if prefix is None:
+ return _valid_locales(raw_locales.splitlines(), normalize)
+
+ found = re.compile('%s.*' % prefix).findall(raw_locales)
+ return _valid_locales(found, normalize)
+
+
+@contextmanager
+def set_locale(new_locale, lc_var=locale.LC_ALL):
+ """Context manager for temporarily setting a locale.
+
+ Parameters
+ ----------
+ new_locale : str or tuple
+ A string of the form <language_country>.<encoding>. For example to set
+ the current locale to US English with a UTF8 encoding, you would pass
+ "en_US.UTF-8".
+
+ Notes
+ -----
+ This is useful when you want to run a particular block of code under a
+ particular locale, without globally setting the locale. This probably isn't
+ thread-safe.
+ """
+ current_locale = locale.getlocale()
+
+ try:
+ locale.setlocale(lc_var, new_locale)
+
+ try:
+ normalized_locale = locale.getlocale()
+ except ValueError:
+ yield new_locale
+ else:
+ if all(lc is not None for lc in normalized_locale):
+ yield '.'.join(normalized_locale)
+ else:
+ yield new_locale
+ finally:
+ locale.setlocale(lc_var, current_locale)
+
+
+def _can_set_locale(lc):
+ """Check to see if we can set a locale without throwing an exception.
+
+ Parameters
+ ----------
+ lc : str
+ The locale to attempt to set.
+
+ Returns
+ -------
+ isvalid : bool
+ Whether the passed locale can be set
+ """
+ try:
+ with set_locale(lc):
+ pass
+    except locale.Error: # horrible name for an Exception subclass
+ return False
+ else:
+ return True
+
+
+def _valid_locales(locales, normalize):
+ """Return a list of normalized locales that do not throw an ``Exception``
+ when set.
+
+ Parameters
+ ----------
+ locales : str
+ A string where each locale is separated by a newline.
+ normalize : bool
+ Whether to call ``locale.normalize`` on each locale.
+
+ Returns
+ -------
+ valid_locales : list
+ A list of valid locales.
+ """
+ if normalize:
+ normalizer = lambda x: locale.normalize(x.strip())
+ else:
+ normalizer = lambda x: x.strip()
+
+ return list(filter(_can_set_locale, map(normalizer, locales)))
+
+
#------------------------------------------------------------------------------
# Console debugging tools
@@ -169,6 +338,7 @@ def assert_isinstance(obj, class_type_or_tuple):
"Expected object to be of type %r, found %r instead" % (
type(obj), class_type_or_tuple))
+
def assert_equal(a, b, msg=""):
"""asserts that a equals b, like nose's assert_equal, but allows custom message to start.
Passes a and b to format string as well. So you can use '{0}' and '{1}' to display a and b.
@@ -198,11 +368,11 @@ def assert_attr_equal(attr, left, right):
right_attr = getattr(right, attr)
assert_equal(left_attr,right_attr,"attr is not equal [{0}]" .format(attr))
+
def isiterable(obj):
return hasattr(obj, '__iter__')
-
def assert_almost_equal(a, b, check_less_precise=False):
if isinstance(a, dict) or isinstance(b, dict):
return assert_dict_equal(a, b)
@@ -378,6 +548,7 @@ def assert_contains_all(iterable, dic):
for k in iterable:
assert k in dic, "Did not contain item: '%r'" % k
+
def assert_copy(iter1, iter2, **eql_kwargs):
"""
iter1, iter2: iterables that produce elements comparable with assert_almost_equal
@@ -412,6 +583,7 @@ def makeFloatIndex(k=10):
values = sorted(np.random.random_sample(k)) - np.random.random_sample(1)
return Index(values * (10 ** np.random.randint(0, 9)))
+
def makeDateIndex(k=10):
dt = datetime(2000, 1, 1)
dr = bdate_range(dt, periods=k)
@@ -446,6 +618,7 @@ def getSeriesData():
index = makeStringIndex(N)
return dict((c, Series(randn(N), index=index)) for c in getCols(K))
+
def makeTimeSeries(nper=None):
if nper is None:
nper = N
@@ -503,11 +676,13 @@ def makePanel(nper=None):
data = dict((c, makeTimeDataFrame(nper)) for c in cols)
return Panel.fromDict(data)
+
def makePeriodPanel(nper=None):
cols = ['Item' + c for c in string.ascii_uppercase[:K - 1]]
data = dict((c, makePeriodFrame(nper)) for c in cols)
return Panel.fromDict(data)
+
def makePanel4D(nper=None):
return Panel4D(dict(l1=makePanel(nper), l2=makePanel(nper),
l3=makePanel(nper)))
| related #4918
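
A minimal sketch of the new method with made-up data — the series name becomes the single column's name:

```python
import pandas as pd

# Series.to_frame converts a Series into a single-column DataFrame;
# the series name is used as the column name.
s = pd.Series([1, 2, 3], name='x')
df = s.to_frame()

print(df.columns.tolist())  # ['x']
print(df.shape)             # (3, 1)
```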
| https://api.github.com/repos/pandas-dev/pandas/pulls/5073 | 2013-10-01T20:55:32Z | 2013-10-09T04:07:26Z | 2013-10-09T04:07:25Z | 2014-06-25T23:07:36Z |
CI/ENH: use nose-show-skipped plugin to show skipped tests | diff --git a/ci/install.sh b/ci/install.sh
index 357d962d9610d..a30aba9338db2 100755
--- a/ci/install.sh
+++ b/ci/install.sh
@@ -38,6 +38,9 @@ if [ -n "$LOCALE_OVERRIDE" ]; then
sudo locale-gen "$LOCALE_OVERRIDE"
fi
+
+# show-skipped is working at this particular commit
+time pip install git+git://github.com/cpcloud/nose-show-skipped.git@fa4ff84e53c09247753a155b428c1bf2c69cb6c3
time pip install $PIP_ARGS -r ci/requirements-${TRAVIS_PYTHON_VERSION}${JOB_TAG}.txt
time sudo apt-get install libatlas-base-dev gfortran
diff --git a/ci/print_skipped.py b/ci/print_skipped.py
deleted file mode 100755
index 9fb05df64bcea..0000000000000
--- a/ci/print_skipped.py
+++ /dev/null
@@ -1,51 +0,0 @@
-#!/usr/bin/env python
-
-import sys
-import math
-import xml.etree.ElementTree as et
-
-
-def parse_results(filename):
- tree = et.parse(filename)
- root = tree.getroot()
- skipped = []
-
- current_class = old_class = ''
- i = 1
- assert i - 1 == len(skipped)
- for el in root.findall('testcase'):
- cn = el.attrib['classname']
- for sk in el.findall('skipped'):
- old_class = current_class
- current_class = cn
- name = '{classname}.{name}'.format(classname=current_class,
- name=el.attrib['name'])
- msg = sk.attrib['message']
- out = ''
- if old_class != current_class:
- ndigits = int(math.log(i, 10) + 1)
- out += ('-' * (len(name + msg) + 4 + ndigits) + '\n') # 4 for : + space + # + space
- out += '#{i} {name}: {msg}'.format(i=i, name=name, msg=msg)
- skipped.append(out)
- i += 1
- assert i - 1 == len(skipped)
- assert i - 1 == len(skipped)
- assert len(skipped) == int(root.attrib['skip'])
- return '\n'.join(skipped)
-
-
-def main(args):
- print('SKIPPED TESTS:')
- print(parse_results(args.filename))
- return 0
-
-
-def parse_args():
- import argparse
- parser = argparse.ArgumentParser()
- parser.add_argument('filename', help='XUnit file to parse')
- return parser.parse_args()
-
-
-if __name__ == '__main__':
- sys.exit(main(parse_args()))
diff --git a/ci/script.sh b/ci/script.sh
index 2e466e58bf377..2bafe13687505 100755
--- a/ci/script.sh
+++ b/ci/script.sh
@@ -9,5 +9,5 @@ if [ -n "$LOCALE_OVERRIDE" ]; then
fi
-echo nosetests --exe -w /tmp -A "$NOSE_ARGS" pandas --with-xunit --xunit-file=/tmp/nosetests.xml
-nosetests --exe -w /tmp -A "$NOSE_ARGS" pandas --with-xunit --xunit-file=/tmp/nosetests.xml
+echo nosetests --exe -w /tmp -A "$NOSE_ARGS" pandas --show-skipped
+nosetests --exe -w /tmp -A "$NOSE_ARGS" pandas --show-skipped
| https://api.github.com/repos/pandas-dev/pandas/pulls/5072 | 2013-10-01T20:51:21Z | 2013-10-01T23:53:13Z | 2013-10-01T23:53:13Z | 2014-07-16T08:32:30Z | |
API: default export for to_clipboard is now csv/tsv suitable for excel (GH3368) | diff --git a/doc/source/release.rst b/doc/source/release.rst
index 6ea4e5a3046b2..34cc4e499a0d5 100644
--- a/doc/source/release.rst
+++ b/doc/source/release.rst
@@ -207,6 +207,8 @@ API Changes
- ``Series.get`` with negative indexers now returns the same as ``[]`` (:issue:`4390`)
- allow ``ix/loc`` for Series/DataFrame/Panel to set on any axis even when the single-key is not currently contained in
the index for that axis (:issue:`2578`, :issue:`5226`)
+ - Default export for ``to_clipboard`` is now csv with a sep of `\t` for
+ compat (:issue:`3368`)
- ``at`` now will enlarge the object inplace (and return the same) (:issue:`2578`)
- ``HDFStore``
diff --git a/pandas/core/generic.py b/pandas/core/generic.py
index bc5d84b9ff0f5..8a2ed9926d630 100644
--- a/pandas/core/generic.py
+++ b/pandas/core/generic.py
@@ -868,9 +868,15 @@ def load(self, path): # TODO remove in 0.14
warnings.warn("load is deprecated, use pd.read_pickle", FutureWarning)
return read_pickle(path)
- def to_clipboard(self):
+ def to_clipboard(self, sep=None, **kwargs):
"""
Attempt to write text representation of object to the system clipboard
+ This can be pasted into Excel, for example.
+
+ Parameters
+ ----------
+        sep : optional, defaults to tab ('\t')
+ other keywords are passed to to_csv
Notes
-----
@@ -880,7 +886,7 @@ def to_clipboard(self):
- OS X: none
"""
from pandas.io import clipboard
- clipboard.to_clipboard(self)
+ clipboard.to_clipboard(self, sep, **kwargs)
#----------------------------------------------------------------------
# Fancy Indexing
diff --git a/pandas/io/clipboard.py b/pandas/io/clipboard.py
index c4bea55ce2714..401a0689fe000 100644
--- a/pandas/io/clipboard.py
+++ b/pandas/io/clipboard.py
@@ -26,9 +26,10 @@ def read_clipboard(**kwargs): # pragma: no cover
return read_table(StringIO(text), **kwargs)
-def to_clipboard(obj): # pragma: no cover
+def to_clipboard(obj, sep=None, **kwargs): # pragma: no cover
"""
Attempt to write text representation of object to the system clipboard
+    The clipboard can then be pasted into Excel, for example.
Notes
-----
@@ -38,4 +39,12 @@ def to_clipboard(obj): # pragma: no cover
- OS X:
"""
from pandas.util.clipboard import clipboard_set
- clipboard_set(str(obj))
+ try:
+ if sep is None:
+ sep = '\t'
+ buf = StringIO()
+ obj.to_csv(buf,sep=sep, **kwargs)
+ clipboard_set(buf.getvalue())
+ except:
+ clipboard_set(str(obj))
+
diff --git a/pandas/io/tests/test_clipboard.py b/pandas/io/tests/test_clipboard.py
index f5b5ba745d83c..90ec2d6fed0ce 100644
--- a/pandas/io/tests/test_clipboard.py
+++ b/pandas/io/tests/test_clipboard.py
@@ -39,12 +39,19 @@ def setUpClass(cls):
def tearDownClass(cls):
del cls.data_types, cls.data
- def check_round_trip_frame(self, data_type):
+ def check_round_trip_frame(self, data_type, sep=None):
data = self.data[data_type]
- data.to_clipboard()
- result = read_clipboard()
+ data.to_clipboard(sep=sep)
+ if sep is not None:
+ result = read_clipboard(sep=sep,index_col=0)
+ else:
+ result = read_clipboard()
tm.assert_frame_equal(data, result, check_dtype=False)
+ def test_round_trip_frame_sep(self):
+ for dt in self.data_types:
+ self.check_round_trip_frame(dt,sep=',')
+
def test_round_trip_frame(self):
for dt in self.data_types:
self.check_round_trip_frame(dt)
| closes #3368
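The patch builds the clipboard payload via `to_csv` with a tab separator before handing it to `clipboard_set`; the resulting text can be sketched without any clipboard access (made-up frame):

```python
from io import StringIO
import pandas as pd

df = pd.DataFrame({'a': [1, 2], 'b': [3, 4]})

# Mirror what the patched to_clipboard does internally: serialize with
# to_csv using a tab separator so the text pastes cleanly into Excel.
buf = StringIO()
df.to_csv(buf, sep='\t')
text = buf.getvalue()

print(text.splitlines()[0])  # '\ta\tb' (the index label is empty)
print(text.splitlines()[1])  # '0\t1\t3'
```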
| https://api.github.com/repos/pandas-dev/pandas/pulls/5070 | 2013-10-01T16:42:18Z | 2013-10-16T14:49:07Z | 2013-10-16T14:49:07Z | 2014-06-12T07:22:42Z |
CLN: remove unreachable code in tslib.pyx | diff --git a/pandas/tseries/tests/test_offsets.py b/pandas/tseries/tests/test_offsets.py
index a77b0afb20b52..0f7a356e84664 100644
--- a/pandas/tseries/tests/test_offsets.py
+++ b/pandas/tseries/tests/test_offsets.py
@@ -13,8 +13,7 @@
DateOffset, Week, YearBegin, YearEnd, Hour, Minute, Second, Day, Micro,
Milli, Nano,
WeekOfMonth, format, ole2datetime, QuarterEnd, to_datetime, normalize_date,
- get_offset, get_offset_name, inferTimeRule, hasOffsetName,
- get_standard_freq)
+ get_offset, get_offset_name, hasOffsetName, get_standard_freq)
from pandas.tseries.frequencies import _offset_map
from pandas.tseries.index import _to_m8, DatetimeIndex, _daterange_cache
@@ -532,7 +531,7 @@ def test_repr(self):
self.assertEqual(repr(Week(weekday=0)), "<Week: weekday=0>")
self.assertEqual(repr(Week(n=-1, weekday=0)), "<-1 * Week: weekday=0>")
self.assertEqual(repr(Week(n=-2, weekday=0)), "<-2 * Weeks: weekday=0>")
-
+
def test_corner(self):
self.assertRaises(ValueError, Week, weekday=7)
assertRaisesRegexp(ValueError, "Day must be", Week, weekday=-1)
@@ -905,7 +904,7 @@ def test_onOffset(self):
class TestBQuarterBegin(unittest.TestCase):
-
+
def test_repr(self):
self.assertEqual(repr(BQuarterBegin()),"<BusinessQuarterBegin: startingMonth=3>")
self.assertEqual(repr(BQuarterBegin(startingMonth=3)), "<BusinessQuarterBegin: startingMonth=3>")
@@ -1000,7 +999,7 @@ def test_repr(self):
self.assertEqual(repr(BQuarterEnd()),"<BusinessQuarterEnd: startingMonth=3>")
self.assertEqual(repr(BQuarterEnd(startingMonth=3)), "<BusinessQuarterEnd: startingMonth=3>")
self.assertEqual(repr(BQuarterEnd(startingMonth=1)), "<BusinessQuarterEnd: startingMonth=1>")
-
+
def test_isAnchored(self):
self.assert_(BQuarterEnd(startingMonth=1).isAnchored())
self.assert_(BQuarterEnd().isAnchored())
@@ -1107,7 +1106,7 @@ def test_repr(self):
self.assertEqual(repr(QuarterBegin()), "<QuarterBegin: startingMonth=3>")
self.assertEqual(repr(QuarterBegin(startingMonth=3)), "<QuarterBegin: startingMonth=3>")
self.assertEqual(repr(QuarterBegin(startingMonth=1)),"<QuarterBegin: startingMonth=1>")
-
+
def test_isAnchored(self):
self.assert_(QuarterBegin(startingMonth=1).isAnchored())
self.assert_(QuarterBegin().isAnchored())
@@ -1181,7 +1180,7 @@ def test_repr(self):
self.assertEqual(repr(QuarterEnd()), "<QuarterEnd: startingMonth=3>")
self.assertEqual(repr(QuarterEnd(startingMonth=3)), "<QuarterEnd: startingMonth=3>")
self.assertEqual(repr(QuarterEnd(startingMonth=1)), "<QuarterEnd: startingMonth=1>")
-
+
def test_isAnchored(self):
self.assert_(QuarterEnd(startingMonth=1).isAnchored())
self.assert_(QuarterEnd().isAnchored())
@@ -1631,6 +1630,7 @@ def assertEq(offset, base, expected):
"\nAt Date: %s" %
(expected, actual, offset, base))
+
def test_Hour():
assertEq(Hour(), datetime(2010, 1, 1), datetime(2010, 1, 1, 1))
assertEq(Hour(-1), datetime(2010, 1, 1, 1), datetime(2010, 1, 1))
@@ -1698,6 +1698,8 @@ def test_Microsecond():
def test_NanosecondGeneric():
+ if _np_version_under1p7:
+ raise nose.SkipTest('numpy >= 1.7 required')
timestamp = Timestamp(datetime(2010, 1, 1))
assert timestamp.nanosecond == 0
@@ -1710,7 +1712,6 @@ def test_NanosecondGeneric():
def test_Nanosecond():
if _np_version_under1p7:
- import nose
raise nose.SkipTest('numpy >= 1.7 required')
timestamp = Timestamp(datetime(2010, 1, 1))
@@ -1815,8 +1816,6 @@ def setUp(self):
pass
def test_alias_equality(self):
- from pandas.tseries.frequencies import _offset_map
-
for k, v in compat.iteritems(_offset_map):
if v is None:
continue
@@ -1872,7 +1871,8 @@ def test_freq_offsets():
off = BDay(1, offset=timedelta(0, -1800))
assert(off.freqstr == 'B-30Min')
-
+
+
def get_all_subclasses(cls):
ret = set()
this_subclasses = cls.__subclasses__()
@@ -1881,40 +1881,41 @@ def get_all_subclasses(cls):
ret | get_all_subclasses(this_subclass)
return ret
-class TestCaching(unittest.TestCase):
+
+class TestCaching(unittest.TestCase):
def test_should_cache_month_end(self):
self.assertTrue(MonthEnd()._should_cache())
-
+
def test_should_cache_bmonth_end(self):
self.assertTrue(BusinessMonthEnd()._should_cache())
-
+
def test_should_cache_week_month(self):
self.assertTrue(WeekOfMonth(weekday=1, week=2)._should_cache())
-
+
def test_all_cacheableoffsets(self):
for subclass in get_all_subclasses(CacheableOffset):
if subclass in [WeekOfMonth]:
continue
self.run_X_index_creation(subclass)
-
+
def setUp(self):
_daterange_cache.clear()
-
+
def run_X_index_creation(self, cls):
inst1 = cls()
if not inst1.isAnchored():
self.assertFalse(inst1._should_cache(), cls)
return
-
+
self.assertTrue(inst1._should_cache(), cls)
-
+
DatetimeIndex(start=datetime(2013,1,31), end=datetime(2013,3,31), freq=inst1, normalize=True)
self.assertTrue(cls() in _daterange_cache, cls)
-
+
def test_month_end_index_creation(self):
DatetimeIndex(start=datetime(2013,1,31), end=datetime(2013,3,31), freq=MonthEnd(), normalize=True)
self.assertTrue(MonthEnd() in _daterange_cache)
-
+
def test_bmonth_end_index_creation(self):
DatetimeIndex(start=datetime(2013,1,31), end=datetime(2013,3,29), freq=BusinessMonthEnd(), normalize=True)
self.assertTrue(BusinessMonthEnd() in _daterange_cache)
@@ -1924,9 +1925,8 @@ def test_week_of_month_index_creation(self):
DatetimeIndex(start=datetime(2013,1,31), end=datetime(2013,3,29), freq=inst1, normalize=True)
inst2 = WeekOfMonth(weekday=1, week=2)
self.assertTrue(inst2 in _daterange_cache)
-
-
+
+
if __name__ == '__main__':
- import nose
nose.runmodule(argv=[__file__, '-vvs', '-x', '--pdb', '--pdb-failure'],
exit=False)
diff --git a/pandas/tslib.pyx b/pandas/tslib.pyx
index 0df0fc377d000..a8c27806c2c1e 100644
--- a/pandas/tslib.pyx
+++ b/pandas/tslib.pyx
@@ -336,7 +336,7 @@ class NaTType(_NaT):
def __hash__(self):
return iNaT
-
+
def weekday(self):
return -1
@@ -573,22 +573,22 @@ cdef class _Timestamp(datetime):
dts.us, ts.tzinfo)
def __add__(self, other):
+ cdef Py_ssize_t other_int
+
if is_timedelta64_object(other):
- return Timestamp(self.value + other.astype('timedelta64[ns]').item(), tz=self.tzinfo)
-
+ other_int = other.astype('timedelta64[ns]').astype(int)
+ return Timestamp(self.value + other_int, tz=self.tzinfo)
+
if is_integer_object(other):
if self.offset is None:
- return Timestamp(self.value + other, tz=self.tzinfo)
- msg = ("Cannot add integral value to Timestamp "
- "without offset.")
- raise ValueError(msg)
- else:
- return Timestamp((self.offset.__mul__(other)).apply(self))
-
+ raise ValueError("Cannot add integral value to Timestamp "
+ "without offset.")
+ return Timestamp((self.offset * other).apply(self))
+
if isinstance(other, timedelta) or hasattr(other, 'delta'):
nanos = _delta_to_nanoseconds(other)
return Timestamp(self.value + nanos, tz=self.tzinfo)
-
+
result = datetime.__add__(self, other)
if isinstance(result, datetime):
result = Timestamp(result)
@@ -597,9 +597,9 @@ cdef class _Timestamp(datetime):
def __sub__(self, other):
if is_integer_object(other):
- return self.__add__(-other)
- else:
- return datetime.__sub__(self, other)
+ neg_other = -other
+ return self + neg_other
+ return super(_Timestamp, self).__sub__(other)
cpdef _get_field(self, field):
out = get_date_field(np.array([self.value], dtype=np.int64), field)
@@ -2329,7 +2329,7 @@ cpdef int64_t period_asfreq(int64_t period_ordinal, int freq1, int freq2,
"""
cdef:
int64_t retval
-
+
if end:
retval = asfreq(period_ordinal, freq1, freq2, END)
else:
| https://api.github.com/repos/pandas-dev/pandas/pulls/5067 | 2013-10-01T04:23:23Z | 2013-10-01T12:34:06Z | 2013-10-01T12:34:06Z | 2014-07-13T18:10:17Z | |
BUG: Bug in setting with ix/loc and a mixed int/string index (GH4544) | diff --git a/doc/source/release.rst b/doc/source/release.rst
index f5e7e66c98a64..65e6ca0e1d95c 100644
--- a/doc/source/release.rst
+++ b/doc/source/release.rst
@@ -520,6 +520,7 @@ Bug Fixes
chunks of the same file. Now coerces to numerical type or raises warning. (:issue:`3866`)
- Fix a bug where reshaping a ``Series`` to its own shape raised ``TypeError`` (:issue:`4554`)
and other reshaping issues.
+ - Bug in setting with ``ix/loc`` and a mixed int/string index (:issue:`4544`)
pandas 0.12.0
-------------
diff --git a/pandas/core/indexing.py b/pandas/core/indexing.py
index eb377c4b7955f..0d19736ed8083 100644
--- a/pandas/core/indexing.py
+++ b/pandas/core/indexing.py
@@ -783,9 +783,25 @@ def _convert_to_indexer(self, obj, axis=0, is_setter=False):
- No, prefer label-based indexing
"""
labels = self.obj._get_axis(axis)
+
+ # if we are a scalar indexer and not type correct raise
+ obj = self._convert_scalar_indexer(obj, axis)
+
+ # see if we are positional in nature
is_int_index = labels.is_integer()
+ is_int_positional = com.is_integer(obj) and not is_int_index
- if com.is_integer(obj) and not is_int_index:
+ # if we are a label return me
+ try:
+ return labels.get_loc(obj)
+ except (KeyError, TypeError):
+ pass
+ except (ValueError):
+ if not is_int_positional:
+ raise
+
+ # a positional
+ if is_int_positional:
# if we are setting and its not a valid location
# its an insert which fails by definition
@@ -795,11 +811,6 @@ def _convert_to_indexer(self, obj, axis=0, is_setter=False):
return obj
- try:
- return labels.get_loc(obj)
- except (KeyError, TypeError):
- pass
-
if isinstance(obj, slice):
return self._convert_slice_indexer(obj, axis)
diff --git a/pandas/tests/test_indexing.py b/pandas/tests/test_indexing.py
index 837acb90407ea..67c87277647c8 100644
--- a/pandas/tests/test_indexing.py
+++ b/pandas/tests/test_indexing.py
@@ -1070,6 +1070,26 @@ def test_ix_assign_column_mixed(self):
df['b'].ix[[1,3]] = [100,-100]
assert_frame_equal(df,expected)
+ def test_ix_get_set_consistency(self):
+
+ # GH 4544
+ # ix/loc get/set not consistent when
+ # a mixed int/string index
+ df = DataFrame(np.arange(16).reshape((4, 4)),
+ columns=['a', 'b', 8, 'c'],
+ index=['e', 7, 'f', 'g'])
+
+ self.assert_(df.ix['e', 8] == 2)
+ self.assert_(df.loc['e', 8] == 2)
+
+ df.ix['e', 8] = 42
+ self.assert_(df.ix['e', 8] == 42)
+ self.assert_(df.loc['e', 8] == 42)
+
+ df.loc['e', 8] = 45
+ self.assert_(df.ix['e', 8] == 45)
+ self.assert_(df.loc['e', 8] == 45)
+
def test_iloc_mask(self):
# GH 3631, iloc with a mask (of a series) should raise
diff --git a/pandas/tests/test_series.py b/pandas/tests/test_series.py
index 1f0f7a5564142..a70f2931e36fe 100644
--- a/pandas/tests/test_series.py
+++ b/pandas/tests/test_series.py
@@ -878,7 +878,7 @@ def test_setitem_float_labels(self):
tmp = s.copy()
s.ix[1] = 'zoo'
- tmp.values[1] = 'zoo'
+ tmp.iloc[2] = 'zoo'
assert_series_equal(s, tmp)
| closes #4544
| https://api.github.com/repos/pandas-dev/pandas/pulls/5064 | 2013-09-30T23:00:52Z | 2013-10-01T01:21:47Z | 2013-10-01T01:21:47Z | 2014-06-24T06:52:45Z |
Issue 1014 Possibly attempt object -> float64 coercion in to_sparse | diff --git a/pandas/core/frame.py b/pandas/core/frame.py
index dbd67aa3c5c25..b7b2301d6b50c 100644
--- a/pandas/core/frame.py
+++ b/pandas/core/frame.py
@@ -956,6 +956,8 @@ def to_sparse(self, fill_value=None, kind='block'):
-------
y : SparseDataFrame
"""
+ if any(dt == object for dt in self.dtypes):
+ self = self.convert_objects(convert_numeric=True)
from pandas.core.sparse import SparseDataFrame
return SparseDataFrame(self._series, index=self.index,
default_kind=kind,
diff --git a/pandas/tests/test_frame.py b/pandas/tests/test_frame.py
index ce8d84840ed69..90a26db99b1a3 100644
--- a/pandas/tests/test_frame.py
+++ b/pandas/tests/test_frame.py
@@ -1928,6 +1928,15 @@ def setUp(self):
self.simple = DataFrame(arr, columns=['one', 'two', 'three'],
index=['a', 'b', 'c'])
+ def test_to_sparse_coersion(self):
+ """ Verify conversion from object to float occurs when possible """
+ from pandas.sparse.api import SparseDataFrame
+ data = np.linspace(82.87, 88.98, 15)
+ frame = DataFrame(data).astype(object)
+ expected = SparseDataFrame(np.linspace(82.87, 88.98, 15))
+ actual = frame.to_sparse()
+ assert_frame_equal(expected, actual)
+
def test_get_axis(self):
f = self.frame
self.assertEquals(f._get_axis_number(0), 0)
| closes #1014
I've added a check for dtype=object in frame.to_sparse and attempt to coerce the frame to float when possible.
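
A minimal sketch of the coercion step with made-up data — note that `pd.to_numeric` stands in here for the `convert_objects(convert_numeric=True)` call the patch actually uses:

```python
import numpy as np
import pandas as pd

# An object-dtype frame that actually holds floats, as in the report.
frame = pd.DataFrame(np.linspace(82.87, 88.98, 15)).astype(object)

# The patch coerces object columns to numeric before building the
# SparseDataFrame; pd.to_numeric is the modern stand-in for that step.
coerced = frame.apply(pd.to_numeric)

print(frame.dtypes.iloc[0], coerced.dtypes.iloc[0])  # object float64
```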
| https://api.github.com/repos/pandas-dev/pandas/pulls/5063 | 2013-09-30T21:33:50Z | 2014-03-09T15:03:57Z | null | 2014-06-16T16:13:45Z |
VIS: let scatter plots obey mpl color scheme (#3338) | diff --git a/pandas/tools/plotting.py b/pandas/tools/plotting.py
index 6631a3cf8c6f1..d6c0482d86be4 100644
--- a/pandas/tools/plotting.py
+++ b/pandas/tools/plotting.py
@@ -229,6 +229,7 @@ def scatter_matrix(frame, alpha=0.5, figsize=None, ax=None, grid=False,
>>> df = DataFrame(np.random.randn(1000, 4), columns=['A','B','C','D'])
>>> scatter_matrix(df, alpha=0.2)
"""
+ import matplotlib.pyplot as plt
from matplotlib.artist import setp
df = frame._get_numeric_data()
@@ -246,6 +247,9 @@ def scatter_matrix(frame, alpha=0.5, figsize=None, ax=None, grid=False,
hist_kwds = hist_kwds or {}
density_kwds = density_kwds or {}
+ # workaround because `c='b'` is hardcoded in matplotlib's scatter method
+ kwds.setdefault('c', plt.rcParams['patch.facecolor'])
+
for i, a in zip(lrange(n), df.columns):
for j, b in zip(lrange(n), df.columns):
ax = axes[i, j]
@@ -653,6 +657,10 @@ def lag_plot(series, lag=1, ax=None, **kwds):
ax: Matplotlib axis object
"""
import matplotlib.pyplot as plt
+
+ # workaround because `c='b'` is hardcoded in matplotlib's scatter method
+ kwds.setdefault('c', plt.rcParams['patch.facecolor'])
+
data = series.values
y1 = data[:-lag]
y2 = data[lag:]
@@ -1889,6 +1897,9 @@ def scatter_plot(data, x, y, by=None, ax=None, figsize=None, grid=False, **kwarg
"""
import matplotlib.pyplot as plt
+ # workaround because `c='b'` is hardcoded in matplotlib's scatter method
+ kwargs.setdefault('c', plt.rcParams['patch.facecolor'])
+
def plot_group(group, ax):
xvals = group[x].values
yvals = group[y].values
| Closes #3338.
Fixes the remaining plot functions that don't follow the matplotlib style (functions based on scatter plots).
Rationale:
- set the color keyword `c` to the default defined in matplotlib's rcParams if `c` was not specified by the user (because `c='b'` is hardcoded in matplotlib's `scatter` method).
- the `color` keyword is not checked because, if both are given, matplotlib already gives it preference over `c`.
There is a newer PR on scatter-based plotting methods (#3473); I suppose it can be handled there so that the new plotting method also follows this approach?
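The fix itself is one line per function; a minimal sketch of the pattern, where `scatter_kwds` and the `'C0'` default are illustrative names only:

```python
def scatter_kwds(user_kwds, default_facecolor="C0"):
    # matplotlib hardcodes c='b' in scatter, so fill in a default only
    # when the user did not pass `c`; `color`, if given, is already
    # preferred over `c` by matplotlib and needs no handling here
    kwds = dict(user_kwds)
    kwds.setdefault("c", default_facecolor)
    return kwds

print(scatter_kwds({}))             # {'c': 'C0'}
print(scatter_kwds({"c": "red"}))   # {'c': 'red'}
```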
| https://api.github.com/repos/pandas-dev/pandas/pulls/5060 | 2013-09-30T18:36:20Z | 2013-10-04T23:17:53Z | 2013-10-04T23:17:52Z | 2014-07-16T08:32:22Z |
BF: import isnull | diff --git a/pandas/tools/merge.py b/pandas/tools/merge.py
index d5bd1072f6a3e..3a99793937096 100644
--- a/pandas/tools/merge.py
+++ b/pandas/tools/merge.py
@@ -19,7 +19,7 @@
from pandas.util.decorators import cache_readonly, Appender, Substitution
from pandas.core.common import (PandasError, ABCSeries,
is_timedelta64_dtype, is_datetime64_dtype,
- is_integer_dtype)
+ is_integer_dtype, isnull)
import pandas.core.common as com
`isnull` is used in the code but was not imported.
see http://nipy.bic.berkeley.edu/builders/pandas-py2.x-wheezy-sparc/builds/148/steps/shell_4/logs/stdio for the failure
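For reference, `isnull` is the null-check helper the merge code calls by its bare name (hence the NameError without the import); a quick sketch of what it returns:

```python
import numpy as np
import pandas as pd

# without the import, each bare isnull(...) call raised NameError at runtime
print(pd.isnull(np.nan))   # True
print(pd.isnull(None))     # True
print(pd.isnull(1.0))      # False
```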
| https://api.github.com/repos/pandas-dev/pandas/pulls/5059 | 2013-09-30T16:21:52Z | 2013-09-30T16:42:04Z | 2013-09-30T16:42:04Z | 2014-07-16T08:32:21Z |
TST: allow to check for specific xlrd version and skip reader test if xlrd < 0.9 | diff --git a/pandas/io/tests/test_excel.py b/pandas/io/tests/test_excel.py
index 0c6332205ffe5..2cc94524b5d19 100644
--- a/pandas/io/tests/test_excel.py
+++ b/pandas/io/tests/test_excel.py
@@ -21,12 +21,12 @@
import pandas as pd
-def _skip_if_no_xlrd():
+def _skip_if_no_xlrd(version=(0, 9)):
try:
import xlrd
ver = tuple(map(int, xlrd.__VERSION__.split(".")[:2]))
- if ver < (0, 9):
- raise nose.SkipTest('xlrd < 0.9, skipping')
+ if ver < version:
+ raise nose.SkipTest('xlrd < %s, skipping' % str(version))
except ImportError:
raise nose.SkipTest('xlrd not installed, skipping')
@@ -350,7 +350,10 @@ def test_excelwriter_contextmanager(self):
with ExcelWriter(pth) as writer:
self.frame.to_excel(writer, 'Data1')
self.frame2.to_excel(writer, 'Data2')
-
+ # Even if the test above passes with an outdated xlrd, the reader
+ # test below requires xlrd >= 0.9:
+ # http://nipy.bic.berkeley.edu/builders/pandas-py2.x-wheezy-sparc/builds/148/steps/shell_4/logs/stdio
+ _skip_if_no_xlrd((0, 9))
with ExcelFile(pth) as reader:
found_df = reader.parse('Data1')
found_df2 = reader.parse('Data2')
| See http://nipy.bic.berkeley.edu/builders/pandas-py2.x-wheezy-sparc/builds/148/steps/shell_4/logs/stdio for failure when outdated xlrd found (e.g. on debian stable wheezy)
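The skip relies on turning the version string into a tuple of ints, which orders correctly where plain string comparison does not:

```python
def version_tuple(version_string, parts=2):
    # '0.9.3' -> (0, 9); compare release numbers, not characters
    return tuple(map(int, version_string.split(".")[:parts]))

print(version_tuple("0.7.1") < (0, 9))   # True: too old, skip the test
print(version_tuple("0.9.3") < (0, 9))   # False: new enough
print("0.10.0" < "0.9.0")                # True -- string comparison is wrong here
```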
| https://api.github.com/repos/pandas-dev/pandas/pulls/5058 | 2013-09-30T16:17:57Z | 2013-10-02T23:08:16Z | 2013-10-02T23:08:16Z | 2014-06-28T20:27:27Z |
DOC: proposed fix for #4699: Period() docstring inconsistent with code when freq has a mult != 1 | diff --git a/pandas/tseries/period.py b/pandas/tseries/period.py
index b28da7c9d7e0b..819777c2350a5 100644
--- a/pandas/tseries/period.py
+++ b/pandas/tseries/period.py
@@ -51,7 +51,7 @@ class Period(PandasObject):
value : Period or compat.string_types, default None
The time period represented (e.g., '4Q2005')
freq : str, default None
- e.g., 'B' for businessday, ('T', 5) or '5T' for 5 minutes
+ e.g., 'B' for businessday. Must be a singular rule-code (e.g. 5T is not allowed).
year : int, default None
month : int, default 1
quarter : int, default None
| closes #4699
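A minimal usage sketch matching the corrected docstring (at the time of this PR only singular rule codes were accepted; run here against a current pandas):

```python
import pandas as pd

# a singular rule code such as 'M' or 'B' is fine
p = pd.Period("2013-09", freq="M")
print(p)       # 2013-09
print(p + 1)   # 2013-10
```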
| https://api.github.com/repos/pandas-dev/pandas/pulls/5057 | 2013-09-30T14:42:25Z | 2013-10-04T12:50:04Z | 2013-10-04T12:50:04Z | 2014-06-25T16:32:09Z |
BUG: fix Index's __iadd__ methods | diff --git a/doc/source/release.rst b/doc/source/release.rst
index daee460fc50a1..05626096fe6fe 100644
--- a/doc/source/release.rst
+++ b/doc/source/release.rst
@@ -214,6 +214,7 @@ API Changes
data - allowing metadata changes.
- ``MultiIndex.astype()`` now only allows ``np.object_``-like dtypes and
now returns a ``MultiIndex`` rather than an ``Index``. (:issue:`4039`)
+ - Aliased ``__iadd__`` to ``__add__``. (:issue:`4996`)
- Added ``is_`` method to ``Index`` that allows fast equality comparison of
views (similar to ``np.may_share_memory`` but no false positives, and
changes on ``levels`` and ``labels`` setting on ``MultiIndex``).
diff --git a/pandas/core/index.py b/pandas/core/index.py
index d488a29182a18..081968f47c090 100644
--- a/pandas/core/index.py
+++ b/pandas/core/index.py
@@ -832,6 +832,7 @@ def __add__(self, other):
else:
return Index(self.view(np.ndarray) + other)
+ __iadd__ = __add__
__eq__ = _indexOp('__eq__')
__ne__ = _indexOp('__ne__')
__lt__ = _indexOp('__lt__')
diff --git a/pandas/tests/test_index.py b/pandas/tests/test_index.py
index 857836fa698ce..53398b92c6d2e 100644
--- a/pandas/tests/test_index.py
+++ b/pandas/tests/test_index.py
@@ -382,6 +382,14 @@ def test_add_string(self):
self.assert_('a' not in index2)
self.assert_('afoo' in index2)
+ def test_iadd_string(self):
+ index = pd.Index(['a', 'b', 'c'])
+ # doesn't fail test unless there is a check before `+=`
+ self.assert_('a' in index)
+
+ index += '_x'
+ self.assert_('a_x' in index)
+
def test_diff(self):
first = self.strIndex[5:20]
second = self.strIndex[:10]
| closes #4996
Aliases `__iadd__` to `__add__`.
Not doing anything else for now until there's time to consider it further.
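With the alias in place, augmented assignment behaves exactly like the plain addition it forwards to, rebinding the name to a new Index rather than mutating in place; a sketch against a modern pandas, where `+` concatenates element-wise for string indexes:

```python
import pandas as pd

index = pd.Index(["a", "b", "c"])
index += "_x"        # __iadd__ is just __add__: builds and rebinds a new Index
print(list(index))   # ['a_x', 'b_x', 'c_x']
```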
| https://api.github.com/repos/pandas-dev/pandas/pulls/5053 | 2013-09-30T03:13:38Z | 2013-10-02T21:17:01Z | 2013-10-02T21:17:01Z | 2014-07-07T01:51:22Z |
ENH/DOC: Cleanup docstrings on NDFrame | diff --git a/pandas/core/frame.py b/pandas/core/frame.py
index f7d2b161759ed..b256d76fbcdd2 100644
--- a/pandas/core/frame.py
+++ b/pandas/core/frame.py
@@ -26,7 +26,7 @@
_default_index, _maybe_upcast, _is_sequence,
_infer_dtype_from_scalar, _values_from_object,
_coerce_to_dtypes, _DATELIKE_DTYPES, is_list_like)
-from pandas.core.generic import NDFrame
+from pandas.core.generic import NDFrame, _shared_docs
from pandas.core.index import Index, MultiIndex, _ensure_index
from pandas.core.indexing import (_maybe_droplevels,
_convert_to_index_sliceable,
@@ -62,6 +62,9 @@
#----------------------------------------------------------------------
# Docstring templates
+_shared_doc_kwargs = dict(axes='index, columns',
+ klass='DataFrame',
+ axes_single_arg="{0,1,'index','columns'}")
_numeric_only_doc = """numeric_only : boolean, default None
Include only float, int, boolean data. If None, will attempt to use
@@ -1380,6 +1383,7 @@ def ftypes(self):
return self.apply(lambda x: x.ftype, reduce=False)
def transpose(self):
+ """Transpose index and columns"""
return super(DataFrame, self).transpose(1, 0)
T = property(transpose)
@@ -2157,6 +2161,24 @@ def _reindex_multi(self, axes, copy, fill_value):
return self._reindex_with_indexers({0: [new_index, row_indexer],
1: [new_columns, col_indexer]}, copy=copy, fill_value=fill_value)
+ @Appender(_shared_docs['reindex'] % _shared_doc_kwargs)
+ def reindex(self, index=None, columns=None, **kwargs):
+ return super(DataFrame, self).reindex(index=index, columns=columns,
+ **kwargs)
+
+ @Appender(_shared_docs['reindex_axis'] % _shared_doc_kwargs)
+ def reindex_axis(self, labels, axis=0, method=None, level=None, copy=True,
+ limit=None, fill_value=np.nan):
+ return super(DataFrame, self).reindex_axis(labels=labels, axis=axis,
+ method=method, level=level,
+ copy=copy, limit=limit,
+ fill_value=fill_value)
+
+ @Appender(_shared_docs['rename'] % _shared_doc_kwargs)
+ def rename(self, index=None, columns=None, **kwargs):
+ return super(DataFrame, self).rename(index=index, columns=columns,
+ **kwargs)
+
def reindex_like(self, other, method=None, copy=True, limit=None,
fill_value=NA):
"""
diff --git a/pandas/core/generic.py b/pandas/core/generic.py
index 18a03eb313dd2..f92496173854f 100644
--- a/pandas/core/generic.py
+++ b/pandas/core/generic.py
@@ -24,6 +24,15 @@
import pandas.core.nanops as nanops
from pandas.util.decorators import Appender, Substitution
+# goal is to be able to define the docs close to function, while still being
+# able to share
+_shared_docs = dict()
+_shared_doc_kwargs = dict(axes='keywords for axes',
+ klass='NDFrame',
+ axes_single_arg='int or labels for object',
+ args_transpose='axes to permute (int or label for'
+ ' object)')
+
def is_dictlike(x):
return isinstance(x, (dict, com.ABCSeries))
@@ -348,13 +357,12 @@ def _set_axis(self, axis, labels):
self._data.set_axis(axis, labels)
self._clear_item_cache()
- def transpose(self, *args, **kwargs):
- """
- Permute the dimensions of the Object
+ _shared_docs['transpose'] = """
+ Permute the dimensions of the %(klass)s
Parameters
----------
- axes : int or name (or alias)
+ args : %(args_transpose)s
copy : boolean, default False
Make a copy of the underlying data. Mixed-dtype data will
always result in a copy
@@ -368,6 +376,8 @@ def transpose(self, *args, **kwargs):
-------
y : same as input
"""
+ @Appender(_shared_docs['transpose'] % _shared_doc_kwargs)
+ def transpose(self, *args, **kwargs):
# construct the args
axes, kwargs = self._construct_axes_from_arguments(
@@ -451,31 +461,31 @@ def swaplevel(self, i, j, axis=0):
#----------------------------------------------------------------------
# Rename
- def rename(self, *args, **kwargs):
- """
- Alter axes input function or
- functions. Function / dict values must be unique (1-to-1). Labels not
- contained in a dict / Series will be left as-is.
+ # TODO: define separate funcs for DataFrame, Series and Panel so you can
+ # get completion on keyword arguments.
+ _shared_docs['rename'] = """
+ Alter axes input function or functions. Function / dict values must be
+ unique (1-to-1). Labels not contained in a dict / Series will be left
+ as-is.
Parameters
----------
- axis keywords for this object
- (e.g. index for Series,
- index,columns for DataFrame,
- items,major_axis,minor_axis for Panel)
- : dict-like or function, optional
+ %(axes)s : dict-like or function, optional
Transformation to apply to that axis values
copy : boolean, default True
Also copy underlying data
inplace : boolean, default False
- Whether to return a new PandasObject. If True then value of copy is
+ Whether to return a new %(klass)s. If True then value of copy is
ignored.
Returns
-------
- renamed : PandasObject (new object)
+ renamed : %(klass)s (new object)
"""
+ @Appender(_shared_docs['rename'] % dict(axes='axes keywords for this'
+ ' object', klass='NDFrame'))
+ def rename(self, *args, **kwargs):
axes, kwargs = self._construct_axes_from_arguments(args, kwargs)
copy = kwargs.get('copy', True)
@@ -518,6 +528,8 @@ def f(x):
else:
return result._propogate_attributes(self)
+ rename.__doc__ = _shared_docs['rename']
+
def rename_axis(self, mapper, axis=0, copy=True, inplace=False):
"""
Alter index and / or columns using input function or functions.
@@ -527,7 +539,7 @@ def rename_axis(self, mapper, axis=0, copy=True, inplace=False):
Parameters
----------
mapper : dict-like or function, optional
- axis : int, default 0
+ axis : int or string, default 0
copy : boolean, default True
Also copy underlying data
inplace : boolean, default False
@@ -568,16 +580,19 @@ def __iter__(self):
"""
return iter(self._info_axis)
+ # can we get a better explanation of this?
def keys(self):
""" return the info axis names """
return self._info_axis
+ # what does info axis actually mean?
def iteritems(self):
for h in self._info_axis:
yield h, self[h]
# originally used to get around 2to3's changes to iteritems.
- # Now unnecessary.
+ # Now unnecessary. Sidenote: don't want to deprecate this for a while,
+ # otherwise libraries that use 2to3 will have issues.
def iterkv(self, *args, **kwargs):
warnings.warn("iterkv is deprecated and will be removed in a future "
"release, use ``iteritems`` instead.", DeprecationWarning)
@@ -782,13 +797,13 @@ def to_pickle(self, path):
from pandas.io.pickle import to_pickle
return to_pickle(self, path)
- def save(self, path): # TODO remove in 0.13
+ def save(self, path): # TODO remove in 0.14
import warnings
from pandas.io.pickle import to_pickle
warnings.warn("save is deprecated, use to_pickle", FutureWarning)
return to_pickle(self, path)
- def load(self, path): # TODO remove in 0.13
+ def load(self, path): # TODO remove in 0.14
import warnings
from pandas.io.pickle import read_pickle
warnings.warn("load is deprecated, use pd.read_pickle", FutureWarning)
@@ -802,8 +817,8 @@ def to_clipboard(self):
-----
Requirements for your platform
- Linux: xclip, or xsel (with gtk or PyQt4 modules)
- - Windows:
- - OS X:
+ - Windows: none
+ - OS X: none
"""
from pandas.io import clipboard
clipboard.to_clipboard(self)
@@ -945,6 +960,7 @@ def take(self, indices, axis=0, convert=True):
new_data = self._data.take(indices, axis=baxis)
return self._constructor(new_data)
+ # TODO: Check if this was clearer in 0.12
def select(self, crit, axis=0):
"""
Return data corresponding to axis labels matching criteria
@@ -1095,16 +1111,15 @@ def sort_index(self, axis=0, ascending=True):
new_axis = labels.take(sort_index)
return self.reindex(**{axis_name: new_axis})
-
- def reindex(self, *args, **kwargs):
- """Conform DataFrame to new index with optional filling logic, placing
+ _shared_docs['reindex'] = """
+ Conform %(klass)s to new index with optional filling logic, placing
NA/NaN in locations having no value in the previous index. A new object
is produced unless the new index is equivalent to the current one and
copy=False
Parameters
----------
- axes : array-like, optional (can be specified in order, or as keywords)
+ %(axes)s : array-like, optional (can be specified in order, or as keywords)
New labels / index to conform to. Preferably an Index object to
avoid duplicating data
method : {'backfill', 'bfill', 'pad', 'ffill', None}, default None
@@ -1130,8 +1145,12 @@ def reindex(self, *args, **kwargs):
Returns
-------
- reindexed : same type as calling instance
+ reindexed : %(klass)s
"""
+ # TODO: Decide if we care about having different examples for different
+ # kinds
+ @Appender(_shared_docs['reindex'] % dict(axes="axes", klass="NDFrame"))
+ def reindex(self, *args, **kwargs):
# construct the args
axes, kwargs = self._construct_axes_from_arguments(args, kwargs)
@@ -1189,8 +1208,7 @@ def _needs_reindex_multi(self, axes, method, level):
def _reindex_multi(self, axes, copy, fill_value):
return NotImplemented
- def reindex_axis(self, labels, axis=0, method=None, level=None, copy=True,
- limit=None, fill_value=np.nan):
+ _shared_docs['reindex_axis'] = (
"""Conform input object to new index with optional filling logic, placing
NA/NaN in locations having no value in the previous index. A new object
is produced unless the new index is equivalent to the current one and
@@ -1201,9 +1219,9 @@ def reindex_axis(self, labels, axis=0, method=None, level=None, copy=True,
index : array-like, optional
New labels / index to conform to. Preferably an Index object to
avoid duplicating data
- axis : allowed axis for the input
+ axis : %(axes_single_arg)s
method : {'backfill', 'bfill', 'pad', 'ffill', None}, default None
- Method to use for filling holes in reindexed DataFrame
+ Method to use for filling holes in reindexed object.
pad / ffill: propagate last valid observation forward to next valid
backfill / bfill: use NEXT valid observation to fill gap
copy : boolean, default True
@@ -1220,12 +1238,15 @@ def reindex_axis(self, labels, axis=0, method=None, level=None, copy=True,
See also
--------
- DataFrame.reindex, DataFrame.reindex_like
+ reindex, reindex_like
Returns
-------
- reindexed : same type as calling instance
- """
+ reindexed : %(klass)s
+ """)
+ @Appender(_shared_docs['reindex_axis'] % _shared_doc_kwargs)
+ def reindex_axis(self, labels, axis=0, method=None, level=None, copy=True,
+ limit=None, fill_value=np.nan):
self._consolidate_inplace()
axis_name = self._get_axis_name(axis)
@@ -1432,7 +1453,7 @@ def as_matrix(self, columns=None):
Returns
-------
values : ndarray
- If the DataFrame is heterogeneous and contains booleans or objects,
+ If the caller is heterogeneous and contains booleans or objects,
the result will be of dtype=object
"""
self._consolidate_inplace()
@@ -1568,10 +1589,9 @@ def fillna(self, value=None, method=None, axis=0, inplace=False,
0: fill column-by-column
1: fill row-by-row
inplace : boolean, default False
- If True, fill the DataFrame in place. Note: this will modify any
- other views on this DataFrame, like if you took a no-copy slice of
- an existing DataFrame, for example a column in a DataFrame. Returns
- a reference to the filled object, which is self if inplace=True
+ If True, fill in place. Note: this will modify any
+ other views on this object, (e.g. a no-copy slice for a column in a
+ DataFrame). Still returns the object.
limit : int, default None
Maximum size gap to forward or backward fill
downcast : dict, default is None, a dict of item->dtype of what to
@@ -1584,7 +1604,7 @@ def fillna(self, value=None, method=None, axis=0, inplace=False,
Returns
-------
- filled : DataFrame
+ filled : same type as caller
"""
if isinstance(value, (list, tuple)):
raise TypeError('"value" parameter must be a scalar or dict, but '
@@ -1714,10 +1734,9 @@ def replace(self, to_replace=None, value=None, inplace=False, limit=None,
dict will not be filled). Regular expressions, strings and lists or
dicts of such objects are also allowed.
inplace : boolean, default False
- If True, fill the DataFrame in place. Note: this will modify any
- other views on this DataFrame, like if you took a no-copy slice of
- an existing DataFrame, for example a column in a DataFrame. Returns
- a reference to the filled object, which is self if inplace=True
+ If True, in place. Note: this will modify any
+ other views on this object (e.g. a column from a DataFrame).
+ Returns the caller if this is True.
limit : int, default None
Maximum size gap to forward or backward fill
regex : bool or same types as `to_replace`, default False
@@ -1916,9 +1935,9 @@ def interpolate(self, to_replace, method='pad', axis=0, inplace=False,
reindex, replace, fillna
"""
from warnings import warn
- warn('DataFrame.interpolate will be removed in v0.13, please use '
- 'either DataFrame.fillna or DataFrame.replace instead',
- FutureWarning)
+ warn('{klass}.interpolate will be removed in v0.14, please use '
+ 'either {klass}.fillna or {klass}.replace '
+ 'instead'.format(klass=self.__class__), FutureWarning)
if self._is_mixed_type and axis == 1:
return self.T.replace(to_replace, method=method, limit=limit).T
@@ -2381,8 +2400,8 @@ def where(self, cond, other=np.nan, inplace=False, axis=None, level=None,
Parameters
----------
- cond : boolean DataFrame or array
- other : scalar or DataFrame
+ cond : boolean NDFrame or array
+ other : scalar or NDFrame
inplace : boolean, default False
Whether to perform the operation in place on the data
axis : alignment axis if needed, default None
@@ -2395,7 +2414,7 @@ def where(self, cond, other=np.nan, inplace=False, axis=None, level=None,
Returns
-------
- wh : DataFrame
+ wh : same type as caller
"""
if isinstance(cond, NDFrame):
cond = cond.reindex(**self._construct_axes_dict())
@@ -2430,7 +2449,7 @@ def where(self, cond, other=np.nan, inplace=False, axis=None, level=None,
# slice me out of the other
else:
- raise NotImplemented("cannot align with a bigger dimensional PandasObject")
+ raise NotImplemented("cannot align with a higher dimensional NDFrame")
elif is_list_like(other):
@@ -2512,12 +2531,12 @@ def where(self, cond, other=np.nan, inplace=False, axis=None, level=None,
def mask(self, cond):
"""
- Returns copy of self whose values are replaced with nan if the
+ Returns copy whose values are replaced with nan if the
inverted condition is True
Parameters
----------
- cond: boolean object or array
+ cond : boolean NDFrame or array
Returns
-------
@@ -2528,8 +2547,7 @@ def mask(self, cond):
def shift(self, periods=1, freq=None, axis=0, **kwds):
"""
- Shift the index of the DataFrame by desired number of periods with an
- optional time freq
+ Shift index by desired number of periods with an optional time freq
Parameters
----------
@@ -2545,7 +2563,7 @@ def shift(self, periods=1, freq=None, axis=0, **kwds):
Returns
-------
- shifted : DataFrame
+ shifted : same type as caller
"""
if periods == 0:
return self
@@ -2621,15 +2639,15 @@ def tshift(self, periods=1, freq=None, axis=0, **kwds):
return self._constructor(new_data)
def truncate(self, before=None, after=None, copy=True):
- """Function truncate a sorted DataFrame / Series before and/or after
- some particular dates.
+ """Truncates a sorted NDFrame before and/or after some particular
+ dates.
Parameters
----------
before : date
- Truncate before date
+ Truncate before date
after : date
- Truncate after date
+ Truncate after date
Returns
-------
@@ -2778,8 +2796,9 @@ def pct_change(self, periods=1, fill_method='pad', limit=None, freq=None,
Returns
-------
- chg : Series or DataFrame
+ chg : same type as caller
"""
+ # TODO: Not sure if above is correct - need someone to confirm.
if fill_method is None:
data = self
else:
diff --git a/pandas/core/panel.py b/pandas/core/panel.py
index f0bad6b796e7c..a23d8160bb91a 100644
--- a/pandas/core/panel.py
+++ b/pandas/core/panel.py
@@ -18,7 +18,7 @@
create_block_manager_from_arrays,
create_block_manager_from_blocks)
from pandas.core.frame import DataFrame
-from pandas.core.generic import NDFrame
+from pandas.core.generic import NDFrame, _shared_docs
from pandas import compat
from pandas.util.decorators import deprecate, Appender, Substitution
import pandas.core.common as com
@@ -27,6 +27,15 @@
import pandas.computation.expressions as expressions
+_shared_doc_kwargs = dict(
+ axes='items, major_axis, minor_axis',
+ klass="Panel",
+ axes_single_arg="{0,1,2,'items','major_axis','minor_axis'}")
+_shared_doc_kwargs['args_transpose'] = ("three positional arguments: each one"
+ "of\n %s" %
+ _shared_doc_kwargs['axes_single_arg'])
+
+
def _ensure_like_indices(time, panels):
"""
Makes sure that time and panels are conformable
@@ -871,6 +880,31 @@ def _wrap_result(self, result, axis):
return self._construct_return_type(result, axes)
+ @Appender(_shared_docs['reindex'] % _shared_doc_kwargs)
+ def reindex(self, items=None, major_axis=None, minor_axis=None, **kwargs):
+ major_axis = major_axis if major_axis is not None else kwargs.pop('major', None)
+ minor_axis = minor_axis if minor_axis is not None else kwargs.pop('minor', None)
+ return super(Panel, self).reindex(items=items, major_axis=major_axis,
+ minor_axis=minor_axis, **kwargs)
+
+ @Appender(_shared_docs['rename'] % _shared_doc_kwargs)
+ def rename(self, items=None, major_axis=None, minor_axis=None, **kwargs):
+ major_axis = major_axis if major_axis is not None else kwargs.pop('major', None)
+ minor_axis = minor_axis if minor_axis is not None else kwargs.pop('minor', None)
+ return super(Panel, self).rename(items=items, major_axis=major_axis,
+ minor_axis=minor_axis, **kwargs)
+
+ @Appender(_shared_docs['reindex_axis'] % _shared_doc_kwargs)
+ def reindex_axis(self, labels, axis=0, method=None, level=None, copy=True,
+ limit=None, fill_value=np.nan):
+ return super(Panel, self).reindex_axis(labels=labels, axis=axis,
+ method=method, level=level,
+ copy=copy, limit=limit,
+ fill_value=fill_value)
+
+ @Appender(_shared_docs['transpose'] % _shared_doc_kwargs)
+ def transpose(self, *args, **kwargs):
+ return super(Panel, self).transpose(*args, **kwargs)
+
def count(self, axis='major'):
"""
Return number of observations over requested axis.
diff --git a/pandas/core/series.py b/pandas/core/series.py
index 884e737f357a7..0bbdbc89879ff 100644
--- a/pandas/core/series.py
+++ b/pandas/core/series.py
@@ -53,6 +53,11 @@
__all__ = ['Series']
+_shared_doc_kwargs = dict(
+ axes='index',
+ klass='Series',
+ axes_single_arg="{0,'index'}"
+)
def _coerce_method(converter):
""" install the scalar coercion methods """
@@ -1977,6 +1982,14 @@ def _needs_reindex_multi(self, axes, method, level):
""" check if we do need a multi reindex; this is for compat with higher dims """
return False
+ @Appender(generic._shared_docs['rename'] % _shared_doc_kwargs)
+ def rename(self, index=None, **kwargs):
+ return super(Series, self).rename(index=index, **kwargs)
+
+ @Appender(generic._shared_docs['reindex'] % _shared_doc_kwargs)
+ def reindex(self, index=None, **kwargs):
+ return super(Series, self).reindex(index=index, **kwargs)
+
def reindex_axis(self, labels, axis=0, **kwargs):
""" for compatibility with higher dims """
if axis != 0:
| Resolves much of #4717. Basically cleans up docstrings in a few places
plus, for the few functions that need them, uses a shared dict of
docstring templates to share them. This has the side benefit of letting
the function docstrings reside right next to the functions themselves,
maintaining readability.
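The pattern can be sketched in isolation; `Appender` below is a simplified stand-in for `pandas.util.decorators.Appender`, and the class bodies are illustrative only:

```python
_shared_docs = {}
_shared_docs["reindex"] = """
Conform %(klass)s to a new index.

Parameters
----------
%(axes)s : array-like, optional
"""

def Appender(text):
    # simplified stand-in for pandas.util.decorators.Appender
    def decorate(func):
        func.__doc__ = (func.__doc__ or "") + text
        return func
    return decorate

class NDFrame:
    @Appender(_shared_docs["reindex"] % dict(klass="NDFrame", axes="axes"))
    def reindex(self, **kwargs):
        return kwargs

class DataFrame(NDFrame):
    # the subclass re-renders the same template with its own axis names
    @Appender(_shared_docs["reindex"] % dict(klass="DataFrame",
                                             axes="index, columns"))
    def reindex(self, index=None, columns=None, **kwargs):
        return super().reindex(index=index, columns=columns, **kwargs)

print("DataFrame" in DataFrame.reindex.__doc__)       # True
print("index, columns" in DataFrame.reindex.__doc__)  # True
```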
| https://api.github.com/repos/pandas-dev/pandas/pulls/5052 | 2013-09-30T02:58:12Z | 2013-10-01T04:00:14Z | 2013-10-01T04:00:14Z | 2014-07-16T08:32:14Z |
CLN: Fixups to exceptions in generic | diff --git a/doc/source/release.rst b/doc/source/release.rst
index 5db612059685b..5b6a0d15be943 100644
--- a/doc/source/release.rst
+++ b/doc/source/release.rst
@@ -317,10 +317,11 @@ API Changes
 - Arithmetic func factories are now passed real names (suitable for using with super) (:issue:`5240`)
- Provide numpy compatibility with 1.7 for a calling convention like ``np.prod(pandas_object)`` as numpy
call with additional keyword args (:issue:`4435`)
- - Provide __dir__ method (and local context) for tab completion / remove ipython completers code
- (:issue:`4501`)
+ - Provide ``__dir__`` method (and local context) for tab completion / remove
+ ipython completers code (:issue:`4501`)
- Support non-unique axes in a Panel via indexing operations (:issue:`4960`)
- ``.truncate`` will raise a ``ValueError`` if invalid before and afters dates are given (:issue:`5242`)
+ - Improve some exceptions in core/generic. (:issue:`5051`)
Internal Refactoring
~~~~~~~~~~~~~~~~~~~~
diff --git a/pandas/core/generic.py b/pandas/core/generic.py
index 9666fe42cc822..100c546753154 100644
--- a/pandas/core/generic.py
+++ b/pandas/core/generic.py
@@ -233,8 +233,8 @@ def _construct_axes_from_arguments(self, args, kwargs, require_all=False):
if alias is not None:
if a in kwargs:
if alias in kwargs:
- raise Exception(
- "arguments are multually exclusive for [%s,%s]" % (a, alias))
+ raise TypeError("arguments are mutually exclusive "
+ "for [%s,%s]" % (a, alias))
continue
if alias in kwargs:
kwargs[a] = kwargs.pop(alias)
@@ -244,10 +244,9 @@ def _construct_axes_from_arguments(self, args, kwargs, require_all=False):
if a not in kwargs:
try:
kwargs[a] = args.pop(0)
- except (IndexError):
+ except IndexError:
if require_all:
- raise AssertionError(
- "not enough arguments specified!")
+ raise TypeError("not enough arguments specified!")
axes = dict([(a, kwargs.get(a)) for a in self._AXIS_ORDERS])
return axes, kwargs
@@ -273,7 +272,8 @@ def _get_axis_number(self, axis):
return self._AXIS_NUMBERS[axis]
except:
pass
- raise ValueError('No axis named {0} for object type {1}'.format(axis,type(self)))
+ raise ValueError('No axis named {0} for object type {1}'.format(
+ axis, type(self).__name__))
def _get_axis_name(self, axis):
axis = self._AXIS_ALIASES.get(axis, axis)
@@ -285,7 +285,8 @@ def _get_axis_name(self, axis):
return self._AXIS_NAMES[axis]
except:
pass
- raise ValueError('No axis named {0} for object type {1}'.format(axis,type(self)))
+ raise ValueError('No axis named {0} for object type {1}'.format(
+ axis, type(self).__name__))
def _get_axis(self, axis):
name = self._get_axis_name(axis)
diff --git a/pandas/tests/test_panel.py b/pandas/tests/test_panel.py
index 07b33266d88a1..7b5e6ac2dd1cf 100644
--- a/pandas/tests/test_panel.py
+++ b/pandas/tests/test_panel.py
@@ -1250,12 +1250,19 @@ def test_transpose(self):
## test bad aliases
# test ambiguous aliases
- self.assertRaises(AssertionError, self.panel.transpose, 'minor',
- maj='major', majo='items')
+ with tm.assertRaisesRegexp(TypeError, 'not enough arguments'):
+ self.panel.transpose('minor', maj='major', majo='items')
# test invalid kwargs
- self.assertRaises(AssertionError, self.panel.transpose, 'minor',
- maj='major', minor='items')
+ # TODO: Decide whether to remove this test - it's no longer testing
+ # the correct thing on current master (and hasn't been for a
+ # while)
+ with tm.assertRaisesRegexp(TypeError, 'not enough arguments'):
+ self.panel.transpose('minor', maj='major', minor='items')
+
+ # does this test make sense?
+ # with tm.assertRaisesRegexp(ValueError, 'duplicate axes'):
+ # self.panel.transpose('minor', 'major', major='minor', minor='items')
result = self.panel.transpose(2, 1, 0)
assert_panel_equal(result, expected)
Improved messages and more appropriate exception types.
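A sketch of the behavioral change (function and message simplified from `_construct_axes_from_arguments`; the helper name is hypothetical):

```python
def pop_required_axis(args, require_all=True):
    # before the change this raised a bare AssertionError; a missing
    # positional argument is an argument error, so TypeError fits better
    args = list(args)
    try:
        return args.pop(0)
    except IndexError:
        if require_all:
            raise TypeError("not enough arguments specified!")
        return None

print(pop_required_axis(["major"]))   # major
try:
    pop_required_axis([])
except TypeError as exc:
    print(exc)                        # not enough arguments specified!
```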
| https://api.github.com/repos/pandas-dev/pandas/pulls/5051 | 2013-09-30T02:18:40Z | 2013-10-24T18:18:37Z | null | 2013-10-24T18:18:37Z |
API: provide __dir__ method (and local context) for tab completion / remove ipython completers code | diff --git a/doc/source/release.rst b/doc/source/release.rst
index 801158a00b9ab..23b8252cd57e5 100644
--- a/doc/source/release.rst
+++ b/doc/source/release.rst
@@ -274,6 +274,8 @@ API Changes
support ``pow`` or ``mod`` with non-scalars. (:issue:`3765`)
- Provide numpy compatibility with 1.7 for a calling convention like ``np.prod(pandas_object)`` as numpy
call with additional keyword args (:issue:`4435`)
+ - Provide __dir__ method (and local context) for tab completion / remove ipython completers code
+ (:issue:`4501`)
Internal Refactoring
diff --git a/pandas/core/base.py b/pandas/core/base.py
index 14070d8825393..f390592a6f6c4 100644
--- a/pandas/core/base.py
+++ b/pandas/core/base.py
@@ -48,6 +48,16 @@ def __repr__(self):
"""
return str(self)
+ def _local_dir(self):
+ """ provide addtional __dir__ for this object """
+ return []
+
+ def __dir__(self):
+ """
+ Provide method name lookup and completion
+ Only provide 'public' methods
+ """
+ return list(sorted(list(set(dir(type(self)) + self._local_dir()))))
class PandasObject(StringMixin):
"""baseclass for various pandas objects"""
diff --git a/pandas/core/frame.py b/pandas/core/frame.py
index dbd67aa3c5c25..05514372de6fc 100644
--- a/pandas/core/frame.py
+++ b/pandas/core/frame.py
@@ -4674,25 +4674,6 @@ def _put_str(s, space):
return ('%s' % s)[:space].ljust(space)
-def install_ipython_completers(): # pragma: no cover
- """Register the DataFrame type with IPython's tab completion machinery, so
- that it knows about accessing column names as attributes."""
- from IPython.utils.generics import complete_object
-
- @complete_object.when_type(DataFrame)
- def complete_dataframe(obj, prev_completions):
- return prev_completions + [c for c in obj.columns
- if isinstance(c, compat.string_types) and compat.isidentifier(c)]
-
-
-# Importing IPython brings in about 200 modules, so we want to avoid it unless
-# we're in IPython (when those modules are loaded anyway).
-if "IPython" in sys.modules: # pragma: no cover
- try:
- install_ipython_completers()
- except Exception:
- pass
-
#----------------------------------------------------------------------
# Add plotting methods to DataFrame
diff --git a/pandas/core/generic.py b/pandas/core/generic.py
index 18a03eb313dd2..7d304209168b1 100644
--- a/pandas/core/generic.py
+++ b/pandas/core/generic.py
@@ -16,7 +16,7 @@
import pandas.core.common as com
import pandas.core.datetools as datetools
from pandas import compat, _np_version_under1p7
-from pandas.compat import map, zip, lrange
+from pandas.compat import map, zip, lrange, string_types, isidentifier
from pandas.core.common import (isnull, notnull, is_list_like,
_values_from_object,
_infer_dtype_from_scalar, _maybe_promote,
@@ -109,6 +109,11 @@ def __unicode__(self):
prepr = '[%s]' % ','.join(map(com.pprint_thing, self))
return '%s(%s)' % (self.__class__.__name__, prepr)
+ def _local_dir(self):
+ """ add the string-like attributes from the info_axis """
+ return [c for c in self._info_axis
+ if isinstance(c, string_types) and isidentifier(c) ]
+
@property
def _constructor_sliced(self):
raise NotImplementedError
@@ -252,7 +257,7 @@ def _get_axis_number(self, axis):
def _get_axis_name(self, axis):
axis = self._AXIS_ALIASES.get(axis, axis)
- if isinstance(axis, compat.string_types):
+ if isinstance(axis, string_types):
if axis in self._AXIS_NUMBERS:
return axis
else:
@@ -1311,7 +1316,7 @@ def filter(self, items=None, like=None, regex=None, axis=None):
if items is not None:
return self.reindex(**{axis_name: [r for r in items if r in axis_values]})
elif like:
- matchf = lambda x: (like in x if isinstance(x, compat.string_types)
+ matchf = lambda x: (like in x if isinstance(x, string_types)
else like in str(x))
return self.select(matchf, axis=axis_name)
elif regex:
@@ -2601,7 +2606,7 @@ def tshift(self, periods=1, freq=None, axis=0, **kwds):
offset = _resolve_offset(freq, kwds)
- if isinstance(offset, compat.string_types):
+ if isinstance(offset, string_types):
offset = datetools.to_offset(offset)
block_axis = self._get_block_manager_axis(axis)
diff --git a/pandas/core/groupby.py b/pandas/core/groupby.py
index 2e07662bffbfe..e70c01ffcb12f 100644
--- a/pandas/core/groupby.py
+++ b/pandas/core/groupby.py
@@ -2704,30 +2704,3 @@ def numpy_groupby(data, labels, axis=0):
group_sums = np.add.reduceat(ordered_data, groups_at, axis=axis)
return group_sums
-
-#-----------------------------------------------------------------------
-# Helper functions
-
-
-from pandas import compat
-import sys
-
-
-def install_ipython_completers(): # pragma: no cover
- """Register the DataFrame type with IPython's tab completion machinery, so
- that it knows about accessing column names as attributes."""
- from IPython.utils.generics import complete_object
-
- @complete_object.when_type(DataFrameGroupBy)
- def complete_dataframe(obj, prev_completions):
- return prev_completions + [c for c in obj.obj.columns
- if isinstance(c, compat.string_types) and compat.isidentifier(c)]
-
-
-# Importing IPython brings in about 200 modules, so we want to avoid it unless
-# we're in IPython (when those modules are loaded anyway).
-if "IPython" in sys.modules: # pragma: no cover
- try:
- install_ipython_completers()
- except Exception:
- pass
diff --git a/pandas/core/panel.py b/pandas/core/panel.py
index f0bad6b796e7c..b6054c1a96781 100644
--- a/pandas/core/panel.py
+++ b/pandas/core/panel.py
@@ -1230,21 +1230,3 @@ def f(self, other, axis=0):
LongPanel = DataFrame
-def install_ipython_completers(): # pragma: no cover
- """Register the Panel type with IPython's tab completion machinery, so
- that it knows about accessing column names as attributes."""
- from IPython.utils.generics import complete_object
-
- @complete_object.when_type(Panel)
- def complete_dataframe(obj, prev_completions):
- return prev_completions + [c for c in obj.keys()
- if isinstance(c, compat.string_types)
- and compat.isidentifier(c)]
-
-# Importing IPython brings in about 200 modules, so we want to avoid it unless
-# we're in IPython (when those modules are loaded anyway).
-if "IPython" in sys.modules: # pragma: no cover
- try:
- install_ipython_completers()
- except Exception:
- pass
| closes #4501
This removes the ipython completers code and instead defines `__dir__`,
with a `_local_dir` method that a class can override.
NDFrame objects will have a pre-defined local dir of the info_axis (e.g. columns in a DataFrame).
for example:
```
In [1]: df = DataFrame(columns=list('ABC'))
In [2]: df.
Display all 199 possibilities? (y or n)
df.A df.at_time df.corrwith df.eq df.groupby df.itertuples df.min df.reindex df.shape df.to_dict df.tshift
df.B df.axes df.count df.eval df.gt df.ix df.mod df.reindex_axis df.shift df.to_excel df.tz_convert
df.C df.between_time df.cov df.ffill df.head df.join df.mul df.reindex_like df.skew df.to_hdf df.tz_localize
df.T df.bfill df.cummax df.fillna df.hist df.keys df.multiply df.rename df.sort df.to_html df.unstack
df.abs df.blocks df.cummin df.filter df.iat df.kurt df.ndim df.rename_axis df.sort_index df.to_json df.update
df.add df.boxplot df.cumprod df.first df.icol df.kurtosis df.ne df.reorder_levels df.sortlevel df.to_latex df.values
df.add_prefix df.clip df.cumsum df.first_valid_index df.idxmax df.last df.pct_change df.replace df.squeeze df.to_panel df.var
df.add_suffix df.clip_lower df.delevel df.floordiv df.idxmin df.last_valid_index df.pivot df.resample df.stack df.to_period df.where
df.align df.clip_upper df.describe df.from_csv df.iget_value df.le df.pivot_table df.reset_index df.std df.to_pickle df.xs
df.all df.columns df.diff df.from_dict df.iloc df.load df.plot df.rfloordiv df.sub df.to_records
df.any df.combine df.div df.from_items df.index df.loc df.pop df.rmod df.subtract df.to_sparse
df.append df.combineAdd df.divide df.from_records df.info df.lookup df.pow df.rmul df.sum df.to_sql
df.apply df.combineMult df.dot df.ftypes df.insert df.lt df.prod df.rpow df.swapaxes df.to_stata
df.applymap df.combine_first df.drop df.ge df.interpolate df.mad df.product df.rsub df.swaplevel df.to_string
df.as_blocks df.compound df.drop_duplicates df.get df.irow df.mask df.quantile df.rtruediv df.tail df.to_timestamp
df.as_matrix df.consolidate df.dropna df.get_dtype_counts df.isin df.max df.query df.save df.take df.to_wide
df.asfreq df.convert_objects df.dtypes df.get_ftype_counts df.iteritems df.mean df.radd df.select df.to_clipboard df.transpose
df.astype df.copy df.duplicated df.get_value df.iterkv df.median df.rank df.set_index df.to_csv df.truediv
df.at df.corr df.empty df.get_values df.iterrows df.merge df.rdiv df.set_value df.to_dense df.truncate
```
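The `__dir__`/`_local_dir` mechanism in the diff can be reduced to a standalone sketch. The class names below are illustrative (not pandas' actual classes), and Python 3's `str.isidentifier` stands in for `compat.isidentifier`:

```python
class StringMixin(object):
    def _local_dir(self):
        # subclasses override this to contribute extra completions
        return []

    def __dir__(self):
        # merge class attributes with the instance-specific names;
        # only 'public' string names survive the set/sort
        return sorted(set(dir(type(self)) + self._local_dir()))


class Frame(StringMixin):
    def __init__(self, columns):
        self.columns = columns

    def _local_dir(self):
        # only string column names that are valid identifiers complete
        return [c for c in self.columns
                if isinstance(c, str) and c.isidentifier()]


f = Frame(['A', 'B', 'not an identifier', 3])
names = dir(f)
print('A' in names and 'B' in names)  # True
print('not an identifier' in names)   # False
```

Because `dir()` consults `__dir__` on the instance, IPython's tab completion picks up the column names without any IPython-specific registration code.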
| https://api.github.com/repos/pandas-dev/pandas/pulls/5050 | 2013-09-30T02:01:15Z | 2013-10-01T00:04:14Z | 2013-10-01T00:04:14Z | 2014-06-25T01:17:18Z |
CLN: pytables cleanup added functions (previously deleted/moved to comp... | diff --git a/pandas/io/pytables.py b/pandas/io/pytables.py
index 42a434c005a4c..ff0e1b08d7247 100644
--- a/pandas/io/pytables.py
+++ b/pandas/io/pytables.py
@@ -3966,227 +3966,6 @@ def _need_convert(kind):
return True
return False
-
-class Coordinates(object):
-
- """ holds a returned coordinates list, useful to select the same rows from different tables
-
- coordinates : holds the array of coordinates
- group : the source group
- where : the source where
- """
-
- _ops = ['<=', '<', '>=', '>', '!=', '==', '=']
- _search = re.compile(
- "^\s*(?P<field>\w+)\s*(?P<op>%s)\s*(?P<value>.+)\s*$" % '|'.join(_ops))
- _max_selectors = 31
-
- def __init__(self, field, op=None, value=None, queryables=None, encoding=None):
- self.field = None
- self.op = None
- self.value = None
- self.q = queryables or dict()
- self.filter = None
- self.condition = None
- self.encoding = encoding
-
- # unpack lists/tuples in field
- while(isinstance(field, (tuple, list))):
- f = field
- field = f[0]
- if len(f) > 1:
- op = f[1]
- if len(f) > 2:
- value = f[2]
-
- # backwards compatible
- if isinstance(field, dict):
- self.field = field.get('field')
- self.op = field.get('op') or '=='
- self.value = field.get('value')
-
- # passed a term
- elif isinstance(field, Term):
- self.field = field.field
- self.op = field.op
- self.value = field.value
-
- # a string expression (or just the field)
- elif isinstance(field, compat.string_types):
-
- # is a term is passed
- s = self._search.match(field)
- if s is not None:
- self.field = s.group('field')
- self.op = s.group('op')
- self.value = s.group('value')
-
- else:
- self.field = field
-
- # is an op passed?
- if isinstance(op, compat.string_types) and op in self._ops:
- self.op = op
- self.value = value
- else:
- self.op = '=='
- self.value = op
-
- else:
- raise ValueError(
- "Term does not understand the supplied field [%s]" % field)
-
- # we have valid fields
- if self.field is None or self.op is None or self.value is None:
- raise ValueError("Could not create this term [%s]" % str(self))
-
- # = vs ==
- if self.op == '=':
- self.op = '=='
-
- # we have valid conditions
- if self.op in ['>', '>=', '<', '<=']:
- if hasattr(self.value, '__iter__') and len(self.value) > 1 and not isinstance(self.value, compat.string_types):
- raise ValueError(
- "an inequality condition cannot have multiple values [%s]" % str(self))
-
- if not is_list_like(self.value):
- self.value = [self.value]
-
- if len(self.q):
- self.eval()
-
- def __unicode__(self):
- attrs = lmap(pprint_thing, (self.field, self.op, self.value))
- return "field->%s,op->%s,value->%s" % tuple(attrs)
-
- @property
- def is_valid(self):
- """ return True if this is a valid field """
- return self.field in self.q
-
- @property
- def is_in_table(self):
- """ return True if this is a valid column name for generation (e.g. an actual column in the table) """
- return self.q.get(self.field) is not None
-
- @property
- def kind(self):
- """ the kind of my field """
- return self.q.get(self.field)
-
- def generate(self, v):
- """ create and return the op string for this TermValue """
- val = v.tostring(self.encoding)
- return "(%s %s %s)" % (self.field, self.op, val)
-
- def eval(self):
- """ set the numexpr expression for this term """
-
- if not self.is_valid:
- raise ValueError("query term is not valid [{0}]\n"
- " all queries terms must include a reference to\n"
- " either an axis (e.g. index or column), or a data_columns\n".format(str(self)))
-
- # convert values if we are in the table
- if self.is_in_table:
- values = [self.convert_value(v) for v in self.value]
- else:
- values = [TermValue(v, v, self.kind) for v in self.value]
-
- # equality conditions
- if self.op in ['==', '!=']:
-
- # our filter op expression
- if self.op == '!=':
- filter_op = lambda axis, vals: not axis.isin(vals)
- else:
- filter_op = lambda axis, vals: axis.isin(vals)
-
- if self.is_in_table:
-
- # too many values to create the expression?
- if len(values) <= self._max_selectors:
- vs = [self.generate(v) for v in values]
- self.condition = "(%s)" % ' | '.join(vs)
-
- # use a filter after reading
- else:
- self.filter = (
- self.field, filter_op, Index([v.value for v in values]))
-
- else:
-
- self.filter = (
- self.field, filter_op, Index([v.value for v in values]))
-
- else:
-
- if self.is_in_table:
-
- self.condition = self.generate(values[0])
-
- else:
-
- raise TypeError(
- "passing a filterable condition to a Fixed format indexer [%s]" % str(self))
-
- def convert_value(self, v):
- """ convert the expression that is in the term to something that is accepted by pytables """
-
- def stringify(value):
- value = str(value)
- if self.encoding is not None:
- value = value.encode(self.encoding)
- return value
-
- kind = _ensure_decoded(self.kind)
- if kind == u('datetime64') or kind == u('datetime'):
- v = lib.Timestamp(v)
- if v.tz is not None:
- v = v.tz_convert('UTC')
- return TermValue(v, v.value, kind)
- elif kind == u('timedelta64') or kind == u('timedelta'):
- v = _coerce_scalar_to_timedelta_type(v,unit='s').item()
- return TermValue(int(v), v, kind)
- elif (isinstance(v, datetime) or hasattr(v, 'timetuple')):
- v = time.mktime(v.timetuple())
- return TermValue(v, Timestamp(v), kind)
- elif kind == u('date'):
- v = v.toordinal()
- return TermValue(v, Timestamp.fromordinal(v), kind)
- elif kind == u('integer'):
- v = int(float(v))
- return TermValue(v, v, kind)
- elif kind == u('float'):
- v = float(v)
- return TermValue(v, v, kind)
- elif kind == u('bool'):
- if isinstance(v, compat.string_types):
- poss_vals = [u('false'), u('f'), u('no'),
- u('n'), u('none'), u('0'),
- u('[]'), u('{}'), u('')]
- v = not v.strip().lower() in poss_vals
- else:
- v = bool(v)
- return TermValue(v, v, kind)
- elif not isinstance(v, compat.string_types):
- v = stringify(v)
- return TermValue(v, stringify(v), u('string'))
-
- # string quoting
- return TermValue(v, stringify(v), u('string'))
-
-
-
- def __len__(self):
- return len(self.values)
-
- def __getitem__(self, key):
- """ return a new coordinates object, sliced by the key """
- return Coordinates(self.values[key], self.group, self.where)
-
-
class Selection(object):
"""
| ...utation/pytables.py)
| https://api.github.com/repos/pandas-dev/pandas/pulls/5047 | 2013-09-29T21:09:35Z | 2013-09-29T21:23:13Z | 2013-09-29T21:23:13Z | 2014-07-16T08:32:08Z |
BUG: Fix unbound local in exception handling in core/index | diff --git a/pandas/core/frame.py b/pandas/core/frame.py
index 935dff44ad49e..f7d2b161759ed 100644
--- a/pandas/core/frame.py
+++ b/pandas/core/frame.py
@@ -3342,6 +3342,7 @@ def _apply_standard(self, func, axis, ignore_failures=False, reduce=True):
else: # pragma : no cover
raise AssertionError('Axis must be 0 or 1, got %s' % str(axis))
+ i = None
keys = []
results = {}
if ignore_failures:
@@ -3362,14 +3363,12 @@ def _apply_standard(self, func, axis, ignore_failures=False, reduce=True):
results[i] = func(v)
keys.append(v.name)
except Exception as e:
- try:
- if hasattr(e, 'args'):
+ if hasattr(e, 'args'):
+ # make sure i is defined
+ if i is not None:
k = res_index[i]
e.args = e.args + ('occurred at index %s' %
- com.pprint_thing(k),)
- except (NameError, UnboundLocalError): # pragma: no cover
- # no k defined yet
- pass
+ com.pprint_thing(k),)
raise
if len(results) > 0 and _is_sequence(results[0]):
| This would only occur if both enumerations failed; initializing a sentinel is a
better way to handle it than try/except.
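The sentinel approach from this fix can be sketched in isolation (the loop and `bad` function below are illustrative, not the actual `_apply_standard` code):

```python
def apply_all(func, values):
    i = None  # sentinel: stays None if the loop body never runs
    try:
        for i, v in enumerate(values):
            func(v)
    except Exception as e:
        # i is always bound here, so no NameError/UnboundLocalError
        if i is not None:
            e.args = e.args + ('occurred at index %d' % i,)
        raise


def bad(v):
    if v == 'boom':
        raise ValueError('bad value')


try:
    apply_all(bad, ['a', 'b', 'boom'])
except ValueError as e:
    print(e.args)  # ('bad value', 'occurred at index 2')
```

Because `i` is initialized before the loop, the handler never needs the nested try/except that previously caught `NameError`/`UnboundLocalError` when the exception fired before the first iteration.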
| https://api.github.com/repos/pandas-dev/pandas/pulls/5046 | 2013-09-29T20:37:00Z | 2013-09-29T22:48:09Z | 2013-09-29T22:48:09Z | 2014-07-16T08:32:07Z |
CLN/BUG/TST: Fix and test multiple places using undefined names. | diff --git a/pandas/computation/align.py b/pandas/computation/align.py
index f420d0dacf34c..2f776f2db053f 100644
--- a/pandas/computation/align.py
+++ b/pandas/computation/align.py
@@ -10,6 +10,7 @@
import pandas as pd
from pandas import compat
import pandas.core.common as com
+import pandas.computation.ops as ops
def _align_core_single_unary_op(term):
@@ -170,10 +171,10 @@ def _align_core(terms):
return typ, _zip_axes_from_type(typ, axes)
-
+# TODO: Add tests that cover this function!
def _filter_terms(flat):
# numeric literals
- literals = frozenset(filter(lambda x: isinstance(x, Constant), flat))
+ literals = frozenset(filter(lambda x: isinstance(x, ops.Constant), flat))
# these are strings which are variable names
names = frozenset(flat) - literals
diff --git a/pandas/computation/eval.py b/pandas/computation/eval.py
index 36b1e2bc96090..62869b8773ba0 100644
--- a/pandas/computation/eval.py
+++ b/pandas/computation/eval.py
@@ -2,12 +2,7 @@
"""Top level ``eval`` module.
"""
-
-import numbers
-import numpy as np
-
from pandas.core import common as com
-from pandas.compat import string_types
from pandas.computation.expr import Expr, _parsers, _ensure_scope
from pandas.computation.engines import _engines
diff --git a/pandas/core/algorithms.py b/pandas/core/algorithms.py
index 5778a524a584a..cc5074a1fe381 100644
--- a/pandas/core/algorithms.py
+++ b/pandas/core/algorithms.py
@@ -57,15 +57,6 @@ def unique(values):
return _hashtable_algo(f, values.dtype)
-# def count(values, uniques=None):
-# f = lambda htype, caster: _count_generic(values, htype, caster)
-
-# if uniques is not None:
-# raise NotImplementedError
-# else:
-# return _hashtable_algo(f, values.dtype)
-
-
def _hashtable_algo(f, dtype):
"""
f(HashTable, type_caster) -> result
@@ -78,16 +69,6 @@ def _hashtable_algo(f, dtype):
return f(htable.PyObjectHashTable, com._ensure_object)
-def _count_generic(values, table_type, type_caster):
- from pandas.core.series import Series
-
- values = type_caster(values)
- table = table_type(min(len(values), 1000000))
- uniques, labels = table.factorize(values)
-
- return Series(counts, index=uniques)
-
-
def _match_generic(values, index, table_type, type_caster):
values = type_caster(values)
index = type_caster(index)
diff --git a/pandas/core/groupby.py b/pandas/core/groupby.py
index e5447e5f8f58f..58657f881c8fb 100644
--- a/pandas/core/groupby.py
+++ b/pandas/core/groupby.py
@@ -1,3 +1,4 @@
+import sys
import types
from functools import wraps
import numpy as np
@@ -2123,8 +2124,6 @@ def filter(self, func, dropna=True, *args, **kwargs):
>>> grouped = df.groupby(lambda x: mapping[x])
>>> grouped.filter(lambda x: x['A'].sum() + x['B'].sum() > 0)
"""
- from pandas.tools.merge import concat
-
indexers = []
obj = self._obj_with_exclusions
diff --git a/pandas/core/index.py b/pandas/core/index.py
index 1f2e823833810..ed8028fc33132 100644
--- a/pandas/core/index.py
+++ b/pandas/core/index.py
@@ -501,10 +501,11 @@ def is_int(v):
# if we are mixed and have integers
try:
if is_positional and self.is_mixed():
+ # check that start and stop are valid
if start is not None:
- i = self.get_loc(start)
+ self.get_loc(start)
if stop is not None:
- j = self.get_loc(stop)
+ self.get_loc(stop)
is_positional = False
except KeyError:
if self.inferred_type == 'mixed-integer-float':
diff --git a/pandas/core/indexing.py b/pandas/core/indexing.py
index 0bc0afaf255f2..88d2b6e8e4411 100644
--- a/pandas/core/indexing.py
+++ b/pandas/core/indexing.py
@@ -540,8 +540,6 @@ def _align_frame(self, indexer, df):
raise ValueError('Incompatible indexer with DataFrame')
def _align_panel(self, indexer, df):
- is_frame = self.obj.ndim == 2
- is_panel = self.obj.ndim >= 3
raise NotImplementedError("cannot set using an indexer with a Panel yet!")
def _getitem_tuple(self, tup):
@@ -581,11 +579,9 @@ def _multi_take_opportunity(self, tup):
return False
# just too complicated
- for indexer, ax in zip(tup,self.obj._data.axes):
+ for ax in self.obj._data.axes:
if isinstance(ax, MultiIndex):
return False
- elif com._is_bool_indexer(indexer):
- return False
return True
@@ -637,6 +633,7 @@ def _getitem_lowerdim(self, tup):
if not ax0.is_lexsorted_for_tuple(tup):
raise e1
try:
+ # Check for valid axis
loc = ax0.get_loc(tup[0])
except KeyError:
raise e1
@@ -933,6 +930,7 @@ class _IXIndexer(_NDFrameIndexer):
""" A primarily location based indexer, with integer fallback """
def _has_valid_type(self, key, axis):
+ # check for valid axis (raises if invalid)
ax = self.obj._get_axis(axis)
if isinstance(key, slice):
@@ -945,7 +943,7 @@ def _has_valid_type(self, key, axis):
return True
else:
-
+ # check for valid key/axis combo (raises if invalid)
self._convert_scalar_indexer(key, axis)
return True
diff --git a/pandas/core/internals.py b/pandas/core/internals.py
index b4d5c1814a6bc..1fd7ae8a7f6fb 100644
--- a/pandas/core/internals.py
+++ b/pandas/core/internals.py
@@ -1,7 +1,6 @@
import itertools
import re
from datetime import datetime, timedelta
-import copy
from collections import defaultdict
import numpy as np
@@ -589,9 +588,9 @@ def setitem(self, indexer, value):
values = self._try_coerce_result(values)
values = self._try_cast_result(values, dtype)
return [make_block(transf(values), self.items, self.ref_items, ndim=self.ndim, fastpath=True)]
- except (ValueError, TypeError) as detail:
+ except (ValueError, TypeError):
raise
- except (Exception) as detail:
+ except Exception:
pass
return [ self ]
@@ -3681,6 +3680,7 @@ def _lcd_dtype(l):
have_complex = len(counts[ComplexBlock]) > 0
have_dt64 = len(counts[DatetimeBlock]) > 0
have_td64 = len(counts[TimeDeltaBlock]) > 0
+ # TODO: Use this.
have_sparse = len(counts[SparseBlock]) > 0
have_numeric = have_float or have_complex or have_int
diff --git a/pandas/core/panel.py b/pandas/core/panel.py
index f35070c634aa1..cffd21b54ab55 100644
--- a/pandas/core/panel.py
+++ b/pandas/core/panel.py
@@ -3,13 +3,10 @@
"""
# pylint: disable=E1103,W0231,W0212,W0621
-from pandas.compat import map, zip, range, lrange, lmap, u, OrderedDict, OrderedDefaultdict
-from pandas import compat
import sys
import numpy as np
-from pandas.core.common import (PandasError,
- _try_sort, _default_index, _infer_dtype_from_scalar,
- notnull)
+from pandas.core.common import (PandasError, _try_sort, _default_index,
+ _infer_dtype_from_scalar)
from pandas.core.categorical import Categorical
from pandas.core.index import (Index, MultiIndex, _ensure_index,
_get_combined_index)
@@ -20,10 +17,10 @@
from pandas.core.frame import DataFrame
from pandas.core.generic import NDFrame, _shared_docs
from pandas import compat
+from pandas.compat import zip, range, lrange, u, OrderedDict, OrderedDefaultdict
from pandas.util.decorators import deprecate, Appender, Substitution
import pandas.core.common as com
import pandas.core.ops as ops
-import pandas.core.nanops as nanops
import pandas.computation.expressions as expressions
diff --git a/pandas/io/ga.py b/pandas/io/ga.py
index dcbecd74886ac..f13f1dd5b73a2 100644
--- a/pandas/io/ga.py
+++ b/pandas/io/ga.py
@@ -394,7 +394,6 @@ def _get_match(obj_store, name, id, **kwargs):
id_ok = lambda item: id is not None and item.get('id') == id
key_ok = lambda item: key is not None and item.get(key) == val
- match = None
if obj_store.get('items'):
# TODO look up gapi for faster lookup
for item in obj_store.get('items'):
diff --git a/pandas/io/parsers.py b/pandas/io/parsers.py
index e9e82824326a7..7235bf87e37ca 100644
--- a/pandas/io/parsers.py
+++ b/pandas/io/parsers.py
@@ -720,6 +720,9 @@ def _extract_multi_indexer_columns(self, header, index_names, col_names, passed_
ic = [ ic ]
sic = set(ic)
+ # TODO: Decide if this is necessary...
+ orig_header = list(header)
+
# clean the index_names
index_names = header.pop(-1)
index_names, names, index_col = _clean_index_names(index_names,
@@ -2033,8 +2036,6 @@ def _stringify_na_values(na_values):
def _get_na_values(col, na_values, na_fvalues):
if isinstance(na_values, dict):
if col in na_values:
- values = na_values[col]
- fvalues = na_fvalues[col]
return na_values[col], na_fvalues[col]
else:
return _NA_VALUES, set()
diff --git a/pandas/io/pytables.py b/pandas/io/pytables.py
index 999f0751abe99..511d75ba6451f 100644
--- a/pandas/io/pytables.py
+++ b/pandas/io/pytables.py
@@ -29,10 +29,10 @@
import pandas.core.common as com
from pandas.tools.merge import concat
from pandas import compat
-from pandas.compat import u_safe as u, PY3, range, lrange
+from pandas.compat import u_safe as u, PY3, range, lrange, lmap
from pandas.io.common import PerformanceWarning
from pandas.core.config import get_option
-from pandas.computation.pytables import Expr, maybe_expression
+from pandas.computation.pytables import Expr, maybe_expression, TermValue
import pandas.lib as lib
import pandas.algos as algos
diff --git a/pandas/stats/misc.py b/pandas/stats/misc.py
index c79bae34f20c4..05918d78a8332 100644
--- a/pandas/stats/misc.py
+++ b/pandas/stats/misc.py
@@ -5,6 +5,7 @@
from pandas.core.api import Series, DataFrame, isnull, notnull
from pandas.core.series import remove_na
from pandas.compat import zip
+import pandas.core.common as com
def zscore(series):
@@ -157,6 +158,7 @@ def bucketcat(series, cats):
cats = np.asarray(cats)
unique_labels = np.unique(cats)
+ # TODO: Add test case that reaches this code.
unique_labels = unique_labels[com.notnull(unique_labels)]
# group by
@@ -217,6 +219,7 @@ def _bucketpanel_by(series, xby, yby, xbins, ybins):
labels = _uniquify(xlabels, ylabels, xbins, ybins)
+ # TODO: Add a test that reaches this part of the code.
mask = com.isnull(labels)
labels[mask] = -1
@@ -232,6 +235,7 @@ def relabel(key):
xlab = xlabels[pos]
ylab = ylabels[pos]
+ # TODO: Add a test that reaches this part of the code.
return '%sx%s' % (int(xlab) if com.notnull(xlab) else 'NULL',
int(ylab) if com.notnull(ylab) else 'NULL')
@@ -251,6 +255,7 @@ def _bucketpanel_cat(series, xcat, ycat):
sorted_ylabels = ylabels.take(sorter)
unique_labels = np.unique(labels)
+ # TODO: Add a test that reaches this part of the code.
unique_labels = unique_labels[com.notnull(unique_labels)]
locs = sorted_labels.searchsorted(unique_labels)
diff --git a/pandas/tools/rplot.py b/pandas/tools/rplot.py
index 1c3d17ee908cb..768929b655b69 100644
--- a/pandas/tools/rplot.py
+++ b/pandas/tools/rplot.py
@@ -553,7 +553,6 @@ def work(self, fig=None, ax=None):
ax = fig.gca()
x = self.data[self.aes['x']]
y = self.data[self.aes['y']]
- rvs = np.array([x, y])
x_min = x.min()
x_max = x.max()
y_min = y.min()
diff --git a/pandas/tseries/converter.py b/pandas/tseries/converter.py
index d059d229ef22e..ddc6aa75ef95d 100644
--- a/pandas/tseries/converter.py
+++ b/pandas/tseries/converter.py
@@ -296,12 +296,14 @@ def __call__(self):
try:
start = dmin - delta
except ValueError:
+ # TODO: Never used.
start = _from_ordinal(1.0)
try:
stop = dmax + delta
except ValueError:
# The magic number!
+ # TODO: Never used.
stop = _from_ordinal(3652059.9999999)
nmax, nmin = dates.date2num((dmax, dmin))
@@ -357,12 +359,14 @@ def autoscale(self):
try:
start = dmin - delta
except ValueError:
+ # TODO: Never used.
start = _from_ordinal(1.0)
try:
stop = dmax + delta
except ValueError:
# The magic number!
+ # TODO: Never used.
stop = _from_ordinal(3652059.9999999)
dmin, dmax = self.datalim_to_dt()
diff --git a/pandas/tseries/plotting.py b/pandas/tseries/plotting.py
index ae32367a57cd3..f36194eeb44de 100644
--- a/pandas/tseries/plotting.py
+++ b/pandas/tseries/plotting.py
@@ -75,7 +75,6 @@ def tsplot(series, plotf, **kwargs):
args.append(style)
lines = plotf(ax, *args, **kwargs)
- label = kwargs.get('label', None)
# set date formatter, locators and rescale limits
format_dateaxis(ax, ax.freq)
| Pyflakes revealed that there are a number of places using undefined
names. I've tried to fix them where I can; however, those parts of the
code are also clearly untested (or tested with
`assertRaises(Exception,...)`) so they need some love.
The PyTables code had a bunch of places with undefined `TermValue`, as
did stats/misc.
Finally, I've also noted (or removed) various places where variables
were assigned but then not used.
cc @jreback
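The kind of bug pyflakes flags here can be reproduced in a few lines; this sketch mirrors the removed `_count_generic`, whose `counts` variable was never assigned (simplified, without pandas):

```python
def count_series(values):
    uniques, labels = zip(*[(v, i) for i, v in enumerate(values)])
    # `counts` was never assigned in this scope -- exactly the undefined
    # name pyflakes reports; the module imports cleanly, but calling the
    # function raises NameError at runtime
    return dict(zip(uniques, counts))


try:
    count_series(['a', 'b'])
except NameError as e:
    print(e)  # name 'counts' is not defined
```

This is why such code can survive for a long time: the error only surfaces on the untested call path, never at import time, so static analysis is the practical way to find it.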
| https://api.github.com/repos/pandas-dev/pandas/pulls/5045 | 2013-09-29T20:30:22Z | 2014-01-03T22:24:57Z | null | 2014-01-03T22:24:57Z |
TST: sparc test fixups | diff --git a/pandas/core/series.py b/pandas/core/series.py
index 90d535e51580c..4a77a5669948a 100644
--- a/pandas/core/series.py
+++ b/pandas/core/series.py
@@ -135,6 +135,12 @@ def __init__(self, data=None, index=None, dtype=None, name=None,
if isinstance(data, MultiIndex):
raise NotImplementedError
+ elif isinstance(data, Index):
+ # need to copy to avoid aliasing issues
+ if name is None:
+ name = data.name
+ data = data.values
+ copy = True
elif isinstance(data, pa.Array):
pass
elif isinstance(data, Series):
diff --git a/pandas/tests/test_index.py b/pandas/tests/test_index.py
index bb46d563b904e..852d02764affc 100644
--- a/pandas/tests/test_index.py
+++ b/pandas/tests/test_index.py
@@ -2221,9 +2221,10 @@ def test_tolist(self):
self.assertEqual(result, exp)
def test_repr_with_unicode_data(self):
- d = {"a": [u("\u05d0"), 2, 3], "b": [4, 5, 6], "c": [7, 8, 9]}
- index = pd.DataFrame(d).set_index(["a", "b"]).index
- self.assertFalse("\\u" in repr(index)) # we don't want unicode-escaped
+ with pd.core.config.option_context("display.encoding",'UTF-8'):
+ d = {"a": [u("\u05d0"), 2, 3], "b": [4, 5, 6], "c": [7, 8, 9]}
+ index = pd.DataFrame(d).set_index(["a", "b"]).index
+ self.assertFalse("\\u" in repr(index)) # we don't want unicode-escaped
def test_unicode_string_with_unicode(self):
d = {"a": [u("\u05d0"), 2, 3], "b": [4, 5, 6], "c": [7, 8, 9]}
| closes #4396
| https://api.github.com/repos/pandas-dev/pandas/pulls/5043 | 2013-09-29T19:53:09Z | 2013-09-29T20:16:10Z | 2013-09-29T20:16:10Z | 2014-06-22T11:58:23Z |
EHN: Allow load_data to load problematic R datasets | diff --git a/doc/source/r_interface.rst b/doc/source/r_interface.rst
index 79a87cb49f027..4f5c5a03a1be5 100644
--- a/doc/source/r_interface.rst
+++ b/doc/source/r_interface.rst
@@ -20,7 +20,7 @@ its release 2.3, while the current interface is
designed for the 2.2.x series. We recommend to use 2.2.x over other series
unless you are prepared to fix parts of the code, yet the rpy2-2.3.0
introduces improvements such as a better R-Python bridge memory management
-layer so I might be a good idea to bite the bullet and submit patches for
+layer so it might be a good idea to bite the bullet and submit patches for
the few minor differences that need to be fixed.
diff --git a/doc/source/release.rst b/doc/source/release.rst
index 058ea165120a6..0894c84809a13 100644
--- a/doc/source/release.rst
+++ b/doc/source/release.rst
@@ -160,6 +160,10 @@ Improvements to existing features
:issue:`4998`)
- ``to_dict`` now takes ``records`` as a possible outtype. Returns an array
of column-keyed dictionaries. (:issue:`4936`)
+ - Improve support for converting R datasets to pandas objects (more
+ informative index for timeseries and numeric, support for factors, dist, and
+ high-dimensional arrays).
+
API Changes
~~~~~~~~~~~
diff --git a/doc/source/v0.13.0.txt b/doc/source/v0.13.0.txt
index 90d2989de65c2..3f6919e3b3df0 100644
--- a/doc/source/v0.13.0.txt
+++ b/doc/source/v0.13.0.txt
@@ -480,6 +480,12 @@ Enhancements
dfi[mask.any(1)]
:ref:`See the docs<indexing.basics.indexing_isin>` for more.
+ - All R datasets listed here http://stat.ethz.ch/R-manual/R-devel/library/datasets/html/00Index.html can now be loaded into Pandas objects
+
+ .. code-block:: python
+
+ import pandas.rpy.common as com
+ com.load_data('Titanic')
.. _whatsnew_0130.experimental:
diff --git a/pandas/rpy/common.py b/pandas/rpy/common.py
index a640b43ab97e6..5747285deb988 100644
--- a/pandas/rpy/common.py
+++ b/pandas/rpy/common.py
@@ -15,6 +15,9 @@
from rpy2.robjects import r
import rpy2.robjects as robj
+import itertools as IT
+
+
__all__ = ['convert_robj', 'load_data', 'convert_to_r_dataframe',
'convert_to_r_matrix']
@@ -46,38 +49,44 @@ def _is_null(obj):
def _convert_list(obj):
"""
- Convert named Vector to dict
+ Convert named Vector to dict, factors to list
"""
- values = [convert_robj(x) for x in obj]
- return dict(zip(obj.names, values))
+ try:
+ values = [convert_robj(x) for x in obj]
+ keys = r['names'](obj)
+ return dict(zip(keys, values))
+ except TypeError:
+ # For state.division and state.region
+ factors = list(r['factor'](obj))
+ level = list(r['levels'](obj))
+ result = [level[index-1] for index in factors]
+ return result
def _convert_array(obj):
"""
- Convert Array to ndarray
+ Convert Array to DataFrame
"""
- # this royally sucks. "Matrices" (arrays) with dimension > 3 in R aren't
- # really matrices-- things come out Fortran order in the first two
- # dimensions. Maybe I'm wrong?
-
+ def _list(item):
+ try:
+ return list(item)
+ except TypeError:
+ return []
+
+ # For iris3, HairEyeColor, UCBAdmissions, Titanic
dim = list(obj.dim)
values = np.array(list(obj))
-
- if len(dim) == 3:
- arr = values.reshape(dim[-1:] + dim[:-1]).swapaxes(1, 2)
-
- if obj.names is not None:
- name_list = [list(x) for x in obj.names]
- if len(dim) == 2:
- return pd.DataFrame(arr, index=name_list[0], columns=name_list[1])
- elif len(dim) == 3:
- return pd.Panel(arr, items=name_list[2],
- major_axis=name_list[0],
- minor_axis=name_list[1])
- else:
- print('Cannot handle dim=%d' % len(dim))
- else:
- return arr
+ names = r['dimnames'](obj)
+ try:
+ columns = list(r['names'](names))[::-1]
+ except TypeError:
+ columns = ['X{:d}'.format(i) for i in range(len(names))][::-1]
+ columns.append('value')
+ name_list = [(_list(x) or range(d)) for x, d in zip(names, dim)][::-1]
+ arr = np.array(list(IT.product(*name_list)))
+ arr = np.column_stack([arr,values])
+ df = pd.DataFrame(arr, columns=columns)
+ return df
def _convert_vector(obj):
@@ -85,8 +94,24 @@ def _convert_vector(obj):
return _convert_int_vector(obj)
elif isinstance(obj, robj.StrVector):
return _convert_str_vector(obj)
-
- return list(obj)
+ # Check if the vector has extra information attached to it that can be used
+ # as an index
+ try:
+ attributes = set(r['attributes'](obj).names)
+ except AttributeError:
+ return list(obj)
+ if 'names' in attributes:
+ return pd.Series(list(obj), index=r['names'](obj))
+ elif 'tsp' in attributes:
+ return pd.Series(list(obj), index=r['time'](obj))
+ elif 'labels' in attributes:
+ return pd.Series(list(obj), index=r['labels'](obj))
+ if _rclass(obj) == 'dist':
+ # For 'eurodist'. WARNING: This results in a DataFrame, not a Series or list.
+ matrix = r['as.matrix'](obj)
+ return convert_robj(matrix)
+ else:
+ return list(obj)
NA_INTEGER = -2147483648
@@ -141,8 +166,7 @@ def _convert_Matrix(mat):
rows = mat.rownames
columns = None if _is_null(columns) else list(columns)
- index = None if _is_null(rows) else list(rows)
-
+ index = r['time'](mat) if _is_null(rows) else list(rows)
return pd.DataFrame(np.array(mat), index=_check_int(index),
columns=columns)
@@ -197,7 +221,7 @@ def convert_robj(obj, use_pandas=True):
if isinstance(obj, rpy_type):
return converter(obj)
- raise Exception('Do not know what to do with %s object' % type(obj))
+ raise TypeError('Do not know what to do with %s object' % type(obj))
def convert_to_r_posixct(obj):
@@ -329,117 +353,5 @@ def convert_to_r_matrix(df, strings_as_factors=False):
return r_matrix
-
-def test_convert_list():
- obj = r('list(a=1, b=2, c=3)')
-
- converted = convert_robj(obj)
- expected = {'a': [1], 'b': [2], 'c': [3]}
-
- _test.assert_dict_equal(converted, expected)
-
-
-def test_convert_nested_list():
- obj = r('list(a=list(foo=1, bar=2))')
-
- converted = convert_robj(obj)
- expected = {'a': {'foo': [1], 'bar': [2]}}
-
- _test.assert_dict_equal(converted, expected)
-
-
-def test_convert_frame():
- # built-in dataset
- df = r['faithful']
-
- converted = convert_robj(df)
-
- assert np.array_equal(converted.columns, ['eruptions', 'waiting'])
- assert np.array_equal(converted.index, np.arange(1, 273))
-
-
-def _test_matrix():
- r('mat <- matrix(rnorm(9), ncol=3)')
- r('colnames(mat) <- c("one", "two", "three")')
- r('rownames(mat) <- c("a", "b", "c")')
-
- return r['mat']
-
-
-def test_convert_matrix():
- mat = _test_matrix()
-
- converted = convert_robj(mat)
-
- assert np.array_equal(converted.index, ['a', 'b', 'c'])
- assert np.array_equal(converted.columns, ['one', 'two', 'three'])
-
-
-def test_convert_r_dataframe():
-
- is_na = robj.baseenv.get("is.na")
-
- seriesd = _test.getSeriesData()
- frame = pd.DataFrame(seriesd, columns=['D', 'C', 'B', 'A'])
-
- # Null data
- frame["E"] = [np.nan for item in frame["A"]]
- # Some mixed type data
- frame["F"] = ["text" if item % 2 == 0 else np.nan for item in range(30)]
-
- r_dataframe = convert_to_r_dataframe(frame)
-
- assert np.array_equal(convert_robj(r_dataframe.rownames), frame.index)
- assert np.array_equal(convert_robj(r_dataframe.colnames), frame.columns)
- assert all(is_na(item) for item in r_dataframe.rx2("E"))
-
- for column in frame[["A", "B", "C", "D"]]:
- coldata = r_dataframe.rx2(column)
- original_data = frame[column]
- assert np.array_equal(convert_robj(coldata), original_data)
-
- for column in frame[["D", "E"]]:
- for original, converted in zip(frame[column],
- r_dataframe.rx2(column)):
-
- if pd.isnull(original):
- assert is_na(converted)
- else:
- assert original == converted
-
-
-def test_convert_r_matrix():
-
- is_na = robj.baseenv.get("is.na")
-
- seriesd = _test.getSeriesData()
- frame = pd.DataFrame(seriesd, columns=['D', 'C', 'B', 'A'])
- # Null data
- frame["E"] = [np.nan for item in frame["A"]]
-
- r_dataframe = convert_to_r_matrix(frame)
-
- assert np.array_equal(convert_robj(r_dataframe.rownames), frame.index)
- assert np.array_equal(convert_robj(r_dataframe.colnames), frame.columns)
- assert all(is_na(item) for item in r_dataframe.rx(True, "E"))
-
- for column in frame[["A", "B", "C", "D"]]:
- coldata = r_dataframe.rx(True, column)
- original_data = frame[column]
- assert np.array_equal(convert_robj(coldata),
- original_data)
-
- # Pandas bug 1282
- frame["F"] = ["text" if item % 2 == 0 else np.nan for item in range(30)]
-
- # FIXME: Ugly, this whole module needs to be ported to nose/unittest
- try:
- wrong_matrix = convert_to_r_matrix(frame)
- except TypeError:
- pass
- except Exception:
- raise
-
-
if __name__ == '__main__':
pass
diff --git a/pandas/rpy/tests/__init__.py b/pandas/rpy/tests/__init__.py
new file mode 100644
index 0000000000000..e69de29bb2d1d
diff --git a/pandas/rpy/tests/test_common.py b/pandas/rpy/tests/test_common.py
new file mode 100644
index 0000000000000..a2e6d08d07b58
--- /dev/null
+++ b/pandas/rpy/tests/test_common.py
@@ -0,0 +1,213 @@
+"""
+Testing that functions from rpy work as expected
+"""
+
+import pandas as pd
+import numpy as np
+import unittest
+import nose
+import pandas.util.testing as tm
+
+try:
+ import pandas.rpy.common as com
+ from rpy2.robjects import r
+ import rpy2.robjects as robj
+except ImportError:
+ raise nose.SkipTest('R not installed')
+
+
+class TestCommon(unittest.TestCase):
+ def test_convert_list(self):
+ obj = r('list(a=1, b=2, c=3)')
+
+ converted = com.convert_robj(obj)
+ expected = {'a': [1], 'b': [2], 'c': [3]}
+
+ tm.assert_dict_equal(converted, expected)
+
+ def test_convert_nested_list(self):
+ obj = r('list(a=list(foo=1, bar=2))')
+
+ converted = com.convert_robj(obj)
+ expected = {'a': {'foo': [1], 'bar': [2]}}
+
+ tm.assert_dict_equal(converted, expected)
+
+ def test_convert_frame(self):
+ # built-in dataset
+ df = r['faithful']
+
+ converted = com.convert_robj(df)
+
+ assert np.array_equal(converted.columns, ['eruptions', 'waiting'])
+ assert np.array_equal(converted.index, np.arange(1, 273))
+
+ def _test_matrix(self):
+ r('mat <- matrix(rnorm(9), ncol=3)')
+ r('colnames(mat) <- c("one", "two", "three")')
+ r('rownames(mat) <- c("a", "b", "c")')
+
+ return r['mat']
+
+ def test_convert_matrix(self):
+ mat = self._test_matrix()
+
+ converted = com.convert_robj(mat)
+
+ assert np.array_equal(converted.index, ['a', 'b', 'c'])
+ assert np.array_equal(converted.columns, ['one', 'two', 'three'])
+
+ def test_convert_r_dataframe(self):
+
+ is_na = robj.baseenv.get("is.na")
+
+ seriesd = tm.getSeriesData()
+ frame = pd.DataFrame(seriesd, columns=['D', 'C', 'B', 'A'])
+
+ # Null data
+ frame["E"] = [np.nan for item in frame["A"]]
+ # Some mixed type data
+ frame["F"] = ["text" if item %
+ 2 == 0 else np.nan for item in range(30)]
+
+ r_dataframe = com.convert_to_r_dataframe(frame)
+
+ assert np.array_equal(
+ com.convert_robj(r_dataframe.rownames), frame.index)
+ assert np.array_equal(
+ com.convert_robj(r_dataframe.colnames), frame.columns)
+ assert all(is_na(item) for item in r_dataframe.rx2("E"))
+
+ for column in frame[["A", "B", "C", "D"]]:
+ coldata = r_dataframe.rx2(column)
+ original_data = frame[column]
+ assert np.array_equal(com.convert_robj(coldata), original_data)
+
+ for column in frame[["D", "E"]]:
+ for original, converted in zip(frame[column],
+ r_dataframe.rx2(column)):
+
+ if pd.isnull(original):
+ assert is_na(converted)
+ else:
+ assert original == converted
+
+ def test_convert_r_matrix(self):
+
+ is_na = robj.baseenv.get("is.na")
+
+ seriesd = tm.getSeriesData()
+ frame = pd.DataFrame(seriesd, columns=['D', 'C', 'B', 'A'])
+ # Null data
+ frame["E"] = [np.nan for item in frame["A"]]
+
+ r_dataframe = com.convert_to_r_matrix(frame)
+
+ assert np.array_equal(
+ com.convert_robj(r_dataframe.rownames), frame.index)
+ assert np.array_equal(
+ com.convert_robj(r_dataframe.colnames), frame.columns)
+ assert all(is_na(item) for item in r_dataframe.rx(True, "E"))
+
+ for column in frame[["A", "B", "C", "D"]]:
+ coldata = r_dataframe.rx(True, column)
+ original_data = frame[column]
+ assert np.array_equal(com.convert_robj(coldata),
+ original_data)
+
+ # Pandas bug 1282
+ frame["F"] = ["text" if item %
+ 2 == 0 else np.nan for item in range(30)]
+
+ try:
+ wrong_matrix = com.convert_to_r_matrix(frame)
+ except TypeError:
+ pass
+ except Exception:
+ raise
+
+ def test_dist(self):
+ for name in ('eurodist',):
+ df = com.load_data(name)
+ dist = r[name]
+ labels = r['labels'](dist)
+ assert np.array_equal(df.index, labels)
+ assert np.array_equal(df.columns, labels)
+
+ def test_timeseries(self):
+ """
+ Test that the series has an informative index.
+ Unfortunately the code currently does not build a DateTimeIndex
+ """
+ for name in (
+ 'austres', 'co2', 'fdeaths', 'freeny.y', 'JohnsonJohnson',
+ 'ldeaths', 'mdeaths', 'nottem', 'presidents', 'sunspot.month', 'sunspots',
+ 'UKDriverDeaths', 'UKgas', 'USAccDeaths',
+ 'airmiles', 'discoveries', 'EuStockMarkets',
+ 'LakeHuron', 'lh', 'lynx', 'nhtemp', 'Nile',
+ 'Seatbelts', 'sunspot.year', 'treering', 'uspop'):
+ series = com.load_data(name)
+ ts = r[name]
+ assert np.array_equal(series.index, r['time'](ts))
+
+ def test_numeric(self):
+ for name in ('euro', 'islands', 'precip'):
+ series = com.load_data(name)
+ numeric = r[name]
+ names = numeric.names
+ assert np.array_equal(series.index, names)
+
+ def test_table(self):
+ iris3 = pd.DataFrame({'X0': {0: '0', 1: '1', 2: '2', 3: '3', 4: '4'},
+ 'X1': {0: 'Sepal L.',
+ 1: 'Sepal L.',
+ 2: 'Sepal L.',
+ 3: 'Sepal L.',
+ 4: 'Sepal L.'},
+ 'X2': {0: 'Setosa',
+ 1: 'Setosa',
+ 2: 'Setosa',
+ 3: 'Setosa',
+ 4: 'Setosa'},
+ 'value': {0: '5.1', 1: '4.9', 2: '4.7', 3: '4.6', 4: '5.0'}})
+ hec = pd.DataFrame(
+ {
+ 'Eye': {0: 'Brown', 1: 'Brown', 2: 'Brown', 3: 'Brown', 4: 'Blue'},
+ 'Hair': {0: 'Black', 1: 'Brown', 2: 'Red', 3: 'Blond', 4: 'Black'},
+ 'Sex': {0: 'Male', 1: 'Male', 2: 'Male', 3: 'Male', 4: 'Male'},
+ 'value': {0: '32.0', 1: '53.0', 2: '10.0', 3: '3.0', 4: '11.0'}})
+ titanic = pd.DataFrame(
+ {
+ 'Age': {0: 'Child', 1: 'Child', 2: 'Child', 3: 'Child', 4: 'Child'},
+ 'Class': {0: '1st', 1: '2nd', 2: '3rd', 3: 'Crew', 4: '1st'},
+ 'Sex': {0: 'Male', 1: 'Male', 2: 'Male', 3: 'Male', 4: 'Female'},
+ 'Survived': {0: 'No', 1: 'No', 2: 'No', 3: 'No', 4: 'No'},
+ 'value': {0: '0.0', 1: '0.0', 2: '35.0', 3: '0.0', 4: '0.0'}})
+ for name, expected in zip(('HairEyeColor', 'Titanic', 'iris3'),
+ (hec, titanic, iris3)):
+ df = com.load_data(name)
+ table = r[name]
+ names = r['dimnames'](table)
+ try:
+ columns = list(r['names'](names))[::-1]
+ except TypeError:
+ columns = ['X{:d}'.format(i) for i in range(len(names))][::-1]
+ columns.append('value')
+ assert np.array_equal(df.columns, columns)
+ result = df.head()
+ cond = ((result.sort(axis=1) == expected.sort(axis=1))).values
+ assert np.all(cond)
+
+ def test_factor(self):
+ for name in ('state.division', 'state.region'):
+ vector = r[name]
+ factors = list(r['factor'](vector))
+ level = list(r['levels'](vector))
+ factors = [level[index - 1] for index in factors]
+ result = com.load_data(name)
+            assert np.array_equal(result, factors)
+
+if __name__ == '__main__':
+ nose.runmodule(argv=[__file__, '-vvs', '-x', '--pdb', '--pdb-failure'],
+ # '--with-coverage', '--cover-package=pandas.core'],
+ exit=False)
 | TST: Move tests from rpy/common.py to pandas/rpy/tests/test_common.py
TST: Add tests to demonstrate the enhancements made to rpy/common.py.
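A minimal sketch (made-up 2x2 table and values, not the exact R axis ordering) of the long-format flattening that `_convert_array` now performs with `itertools.product`: every cell of the table becomes one row of dimension labels plus its value.

```python
import itertools as IT
import numpy as np

# Made-up 2x2 contingency table: one label vector per dimension, plus values.
dimnames = [['No', 'Yes'], ['1st', '2nd']]
values = np.array([10, 20, 30, 40])

# The Cartesian product of the label vectors gives one label tuple per cell;
# stacking the values alongside yields the long-format rows.
rows = np.array(list(IT.product(*dimnames)))
table = np.column_stack([rows, values])  # 4 rows x 3 columns (stringified)
```

The real converter additionally reverses the dimension order and derives column names from R's `dimnames`; the sketch only shows the product-and-stack step.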
DOC: Add explanation to doc/source/release.rst
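The factor fallback in `_convert_list` relies on R factor codes being 1-based indices into the levels vector; a small illustration with made-up levels and codes:

```python
# Made-up levels and 1-based factor codes, as R would supply them.
levels = ['North Central', 'Northeast', 'South', 'West']
codes = [2, 3, 3, 4, 1]

# R factor codes are 1-based, so subtract one to index into the levels.
labels = [levels[i - 1] for i in codes]
```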
| https://api.github.com/repos/pandas-dev/pandas/pulls/5042 | 2013-09-29T17:08:06Z | 2013-10-02T21:26:47Z | 2013-10-02T21:26:47Z | 2014-06-23T00:59:28Z |
TST: added null conversions for TimeDeltaBlock in core/internals related (GH4396) | diff --git a/pandas/core/internals.py b/pandas/core/internals.py
index f10e1612f7fe9..fd9aed58798fe 100644
--- a/pandas/core/internals.py
+++ b/pandas/core/internals.py
@@ -23,7 +23,7 @@
from pandas.tslib import Timestamp
from pandas import compat
from pandas.compat import range, lrange, lmap, callable, map, zip
-
+from pandas.tseries.timedeltas import _coerce_scalar_to_timedelta_type
class Block(PandasObject):
@@ -1083,6 +1083,20 @@ def _try_fill(self, value):
return value
+ def _try_coerce_args(self, values, other):
+ """ provide coercion to our input arguments
+ we are going to compare vs i8, so coerce to integer
+ values is always ndarra like, other may not be """
+ values = values.view('i8')
+ if isnull(other) or (np.isscalar(other) and other == tslib.iNaT):
+ other = tslib.iNaT
+ elif isinstance(other, np.timedelta64):
+ other = _coerce_scalar_to_timedelta_type(other,unit='s').item()
+ else:
+ other = other.view('i8')
+
+ return values, other
+
def _try_operate(self, values):
""" return a version to operate on """
return values.view('i8')
| closes #4396
NaT wraparound bug again, fixed
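A short sketch (illustrative values only) of the i8 coercion idea behind `_try_coerce_args`: timedelta64 data is compared via its int64 nanosecond representation, and NaT maps to the int64 minimum, which is the `iNaT` sentinel the block checks against.

```python
import numpy as np

# timedelta64[ns] values viewed as int64 are plain nanosecond counts.
td = np.array([1, 2], dtype='m8[s]').astype('m8[ns]')
as_i8 = td.view('i8')  # [1000000000 2000000000]

# NaT is stored as the int64 minimum, i.e. the iNaT sentinel value.
nat_i8 = int(np.timedelta64('NaT', 'ns').view('i8'))
```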
| https://api.github.com/repos/pandas-dev/pandas/pulls/5040 | 2013-09-29T15:50:44Z | 2013-09-29T17:07:57Z | 2013-09-29T17:07:57Z | 2014-07-09T23:52:42Z |
CLN: PEP8 cleanup | diff --git a/pandas/compat/__init__.py b/pandas/compat/__init__.py
index a2531ebd43c82..982b5de49e6fa 100644
--- a/pandas/compat/__init__.py
+++ b/pandas/compat/__init__.py
@@ -120,7 +120,8 @@ def iteritems(obj, **kwargs):
"""replacement for six's iteritems for Python2/3 compat
uses 'iteritems' if available and otherwise uses 'items'.
- Passes kwargs to method."""
+ Passes kwargs to method.
+ """
func = getattr(obj, "iteritems", None)
if not func:
func = obj.items
@@ -180,6 +181,7 @@ class to receive bound method
def u(s):
return s
+
def u_safe(s):
return s
else:
@@ -243,8 +245,7 @@ def wrapper(cls):
class _OrderedDict(dict):
-
- 'Dictionary that remembers insertion order'
+ """Dictionary that remembers insertion order"""
# An inherited dict maps keys to values.
# The inherited dict provides __getitem__, __len__, __contains__, and get.
# The remaining methods are order-aware.
@@ -258,11 +259,10 @@ class _OrderedDict(dict):
# KEY].
def __init__(self, *args, **kwds):
- '''Initialize an ordered dictionary. Signature is the same as for
+ """Initialize an ordered dictionary. Signature is the same as for
regular dictionaries, but keyword arguments are not recommended
because their insertion order is arbitrary.
-
- '''
+ """
if len(args) > 1:
raise TypeError('expected at most 1 arguments, got %d' % len(args))
try:
@@ -274,7 +274,7 @@ def __init__(self, *args, **kwds):
self.__update(*args, **kwds)
def __setitem__(self, key, value, dict_setitem=dict.__setitem__):
- 'od.__setitem__(i, y) <==> od[i]=y'
+ """od.__setitem__(i, y) <==> od[i]=y"""
# Setting a new item creates a new link which goes at the end of the
# linked list, and the inherited dictionary is updated with the new
# key/value pair.
@@ -285,7 +285,7 @@ def __setitem__(self, key, value, dict_setitem=dict.__setitem__):
dict_setitem(self, key, value)
def __delitem__(self, key, dict_delitem=dict.__delitem__):
- 'od.__delitem__(y) <==> del od[y]'
+ """od.__delitem__(y) <==> del od[y]"""
# Deleting an existing item uses self.__map to find the link which is
# then removed by updating the links in the predecessor and successor
# nodes.
@@ -295,7 +295,7 @@ def __delitem__(self, key, dict_delitem=dict.__delitem__):
link_next[0] = link_prev
def __iter__(self):
- 'od.__iter__() <==> iter(od)'
+ """od.__iter__() <==> iter(od)"""
root = self.__root
curr = root[1]
while curr is not root:
@@ -303,7 +303,7 @@ def __iter__(self):
curr = curr[1]
def __reversed__(self):
- 'od.__reversed__() <==> reversed(od)'
+ """od.__reversed__() <==> reversed(od)"""
root = self.__root
curr = root[0]
while curr is not root:
@@ -311,7 +311,7 @@ def __reversed__(self):
curr = curr[0]
def clear(self):
- 'od.clear() -> None. Remove all items from od.'
+ """od.clear() -> None. Remove all items from od."""
try:
for node in itervalues(self.__map):
del node[:]
@@ -323,10 +323,11 @@ def clear(self):
dict.clear(self)
def popitem(self, last=True):
- '''od.popitem() -> (k, v), return and remove a (key, value) pair.
+ """od.popitem() -> (k, v), return and remove a (key, value) pair.
+
Pairs are returned in LIFO order if last is true or FIFO order if
false.
- '''
+ """
if not self:
raise KeyError('dictionary is empty')
root = self.__root
@@ -348,39 +349,39 @@ def popitem(self, last=True):
# -- the following methods do not depend on the internal structure --
def keys(self):
- 'od.keys() -> list of keys in od'
+ """od.keys() -> list of keys in od"""
return list(self)
def values(self):
- 'od.values() -> list of values in od'
+ """od.values() -> list of values in od"""
return [self[key] for key in self]
def items(self):
- 'od.items() -> list of (key, value) pairs in od'
+ """od.items() -> list of (key, value) pairs in od"""
return [(key, self[key]) for key in self]
def iterkeys(self):
- 'od.iterkeys() -> an iterator over the keys in od'
+ """od.iterkeys() -> an iterator over the keys in od"""
return iter(self)
def itervalues(self):
- 'od.itervalues -> an iterator over the values in od'
+ """od.itervalues -> an iterator over the values in od"""
for k in self:
yield self[k]
def iteritems(self):
- 'od.iteritems -> an iterator over the (key, value) items in od'
+ """od.iteritems -> an iterator over the (key, value) items in od"""
for k in self:
yield (k, self[k])
def update(*args, **kwds):
- '''od.update(E, **F) -> None. Update od from dict/iterable E and F.
+ """od.update(E, **F) -> None. Update od from dict/iterable E and F.
If E is a dict instance, does: for k in E: od[k] = E[k]
If E has a .keys() method, does: for k in E.keys(): od[k] = E[k]
Or if E is an iterable of items, does:for k, v in E: od[k] = v
In either case, this is followed by: for k, v in F.items(): od[k] = v
- '''
+ """
if len(args) > 2:
raise TypeError('update() takes at most 2 positional '
'arguments (%d given)' % (len(args),))
@@ -408,10 +409,10 @@ def update(*args, **kwds):
__marker = object()
def pop(self, key, default=__marker):
- '''od.pop(k[,d]) -> v, remove specified key and return the\
+ """od.pop(k[,d]) -> v, remove specified key and return the
corresponding value. If key is not found, d is returned if given,
otherwise KeyError is raised.
- '''
+ """
if key in self:
result = self[key]
del self[key]
@@ -421,14 +422,15 @@ def pop(self, key, default=__marker):
return default
def setdefault(self, key, default=None):
- 'od.setdefault(k[,d]) -> od.get(k,d), also set od[k]=d if k not in od'
+ """od.setdefault(k[,d]) -> od.get(k,d), also set od[k]=d if k not in od
+ """
if key in self:
return self[key]
self[key] = default
return default
def __repr__(self, _repr_running={}):
- 'od.__repr__() <==> repr(od)'
+ """od.__repr__() <==> repr(od)"""
call_key = id(self), _get_ident()
if call_key in _repr_running:
return '...'
@@ -441,7 +443,7 @@ def __repr__(self, _repr_running={}):
del _repr_running[call_key]
def __reduce__(self):
- 'Return state information for pickling'
+ """Return state information for pickling"""
items = [[k, self[k]] for k in self]
inst_dict = vars(self).copy()
for k in vars(OrderedDict()):
@@ -451,24 +453,24 @@ def __reduce__(self):
return self.__class__, (items,)
def copy(self):
- 'od.copy() -> a shallow copy of od'
+ """od.copy() -> a shallow copy of od"""
return self.__class__(self)
@classmethod
def fromkeys(cls, iterable, value=None):
- '''OD.fromkeys(S[, v]) -> New ordered dictionary with keys from S and
+ """OD.fromkeys(S[, v]) -> New ordered dictionary with keys from S and
values equal to v (which defaults to None).
- '''
+ """
d = cls()
for key in iterable:
d[key] = value
return d
def __eq__(self, other):
- '''od.__eq__(y) <==> od==y. Comparison to another OD is
+ """od.__eq__(y) <==> od==y. Comparison to another OD is
order-sensitive while comparison to a regular mapping is
order-insensitive.
- '''
+ """
if isinstance(other, OrderedDict):
return (len(self) == len(other) and
list(self.items()) == list(other.items()))
@@ -480,15 +482,16 @@ def __ne__(self, other):
# -- the following methods are only used in Python 2.7 --
def viewkeys(self):
- "od.viewkeys() -> a set-like object providing a view on od's keys"
+ """od.viewkeys() -> a set-like object providing a view on od's keys"""
return KeysView(self)
def viewvalues(self):
- "od.viewvalues() -> an object providing a view on od's values"
+ """od.viewvalues() -> an object providing a view on od's values"""
return ValuesView(self)
def viewitems(self):
- "od.viewitems() -> a set-like object providing a view on od's items"
+ """od.viewitems() -> a set-like object providing a view on od's items
+ """
return ItemsView(self)
@@ -502,18 +505,17 @@ def viewitems(self):
class _Counter(dict):
-
- '''Dict subclass for counting hashable objects. Sometimes called a bag
+ """Dict subclass for counting hashable objects. Sometimes called a bag
or multiset. Elements are stored as dictionary keys and their counts
are stored as dictionary values.
>>> Counter('zyzygy')
Counter({'y': 3, 'z': 2, 'g': 1})
- '''
+ """
def __init__(self, iterable=None, **kwds):
- '''Create a new, empty Counter object. And if given, count elements
+ """Create a new, empty Counter object. And if given, count elements
from an input iterable. Or, initialize the count from another mapping
of elements to their counts.
@@ -522,26 +524,26 @@ def __init__(self, iterable=None, **kwds):
>>> c = Counter({'a': 4, 'b': 2}) # a new counter from a mapping
>>> c = Counter(a=4, b=2) # a new counter from keyword args
- '''
+ """
self.update(iterable, **kwds)
def __missing__(self, key):
return 0
def most_common(self, n=None):
- '''List the n most common elements and their counts from the most
+ """List the n most common elements and their counts from the most
common to the least. If n is None, then list all element counts.
>>> Counter('abracadabra').most_common(3)
[('a', 5), ('r', 2), ('b', 2)]
- '''
+ """
if n is None:
return sorted(iteritems(self), key=itemgetter(1), reverse=True)
return nlargest(n, iteritems(self), key=itemgetter(1))
def elements(self):
- '''Iterator over elements repeating each as many times as its count.
+ """Iterator over elements repeating each as many times as its count.
>>> c = Counter('ABCABC')
>>> sorted(c.elements())
@@ -550,7 +552,7 @@ def elements(self):
If an element's count has been set to zero or is a negative number,
elements() will ignore it.
- '''
+ """
for elem, count in iteritems(self):
for _ in range(count):
yield elem
@@ -563,7 +565,7 @@ def fromkeys(cls, iterable, v=None):
'Counter.fromkeys() is undefined. Use Counter(iterable) instead.')
def update(self, iterable=None, **kwds):
- '''Like dict.update() but add counts instead of replacing them.
+ """Like dict.update() but add counts instead of replacing them.
Source can be an iterable, a dictionary, or another Counter instance.
@@ -574,7 +576,7 @@ def update(self, iterable=None, **kwds):
>>> c['h'] # four 'h' in which, witch, and watch
4
- '''
+ """
if iterable is not None:
if hasattr(iterable, 'iteritems'):
if self:
@@ -592,12 +594,14 @@ def update(self, iterable=None, **kwds):
self.update(kwds)
def copy(self):
- 'Like dict.copy() but returns a Counter instance instead of a dict.'
+ """Like dict.copy() but returns a Counter instance instead of a dict.
+ """
return Counter(self)
def __delitem__(self, elem):
- '''Like dict.__delitem__() but does not raise KeyError for missing
- values.'''
+ """Like dict.__delitem__() but does not raise KeyError for missing
+ values.
+ """
if elem in self:
dict.__delitem__(self, elem)
@@ -617,13 +621,12 @@ def __repr__(self):
# c += Counter()
def __add__(self, other):
- '''Add counts from two counters.
+ """Add counts from two counters.
>>> Counter('abbb') + Counter('bcc')
Counter({'b': 4, 'c': 2, 'a': 1})
-
- '''
+ """
if not isinstance(other, Counter):
return NotImplemented
result = Counter()
@@ -634,12 +637,12 @@ def __add__(self, other):
return result
def __sub__(self, other):
- ''' Subtract count, but keep only results with positive counts.
+ """Subtract count, but keep only results with positive counts.
>>> Counter('abbbc') - Counter('bccd')
Counter({'b': 2, 'a': 1})
- '''
+ """
if not isinstance(other, Counter):
return NotImplemented
result = Counter()
@@ -650,12 +653,12 @@ def __sub__(self, other):
return result
def __or__(self, other):
- '''Union is the maximum of value in either of the input counters.
+ """Union is the maximum of value in either of the input counters.
>>> Counter('abbb') | Counter('bcc')
Counter({'b': 3, 'c': 2, 'a': 1})
- '''
+ """
if not isinstance(other, Counter):
return NotImplemented
_max = max
@@ -667,12 +670,12 @@ def __or__(self, other):
return result
def __and__(self, other):
- ''' Intersection is the minimum of corresponding counts.
+ """Intersection is the minimum of corresponding counts.
>>> Counter('abbb') & Counter('bcc')
Counter({'b': 1})
- '''
+ """
if not isinstance(other, Counter):
return NotImplemented
_min = min
@@ -705,10 +708,9 @@ def raise_with_traceback(exc, traceback=Ellipsis):
raise exc, None, traceback
""")
-raise_with_traceback.__doc__ = (
-"""Raise exception with existing traceback.
+raise_with_traceback.__doc__ = """Raise exception with existing traceback.
If traceback is not passed, uses sys.exc_info() to get traceback."""
-)
+
# http://stackoverflow.com/questions/4126348
# Thanks to @martineau at SO
@@ -723,6 +725,7 @@ def parse_date(timestr, *args, **kwargs):
else:
parse_date = _date_parser.parse
+
class OrderedDefaultdict(OrderedDict):
def __init__(self, *args, **kwargs):
diff --git a/pandas/compat/pickle_compat.py b/pandas/compat/pickle_compat.py
index bf52fc30a9ea3..3365f1bb630b9 100644
--- a/pandas/compat/pickle_compat.py
+++ b/pandas/compat/pickle_compat.py
@@ -9,6 +9,7 @@
from pandas.core.series import Series, TimeSeries
from pandas.sparse.series import SparseSeries, SparseTimeSeries
+
def load_reduce(self):
stack = self.stack
args = stack.pop()
@@ -18,7 +19,8 @@ def load_reduce(self):
if n == u('DeprecatedSeries') or n == u('DeprecatedTimeSeries'):
stack[-1] = object.__new__(Series)
return
- elif n == u('DeprecatedSparseSeries') or n == u('DeprecatedSparseTimeSeries'):
+ elif (n == u('DeprecatedSparseSeries') or
+ n == u('DeprecatedSparseTimeSeries')):
stack[-1] = object.__new__(SparseSeries)
return
@@ -28,7 +30,9 @@ def load_reduce(self):
# try to reencode the arguments
if self.encoding is not None:
- args = tuple([ arg.encode(self.encoding) if isinstance(arg, string_types) else arg for arg in args ])
+ args = tuple([arg.encode(self.encoding)
+ if isinstance(arg, string_types)
+ else arg for arg in args])
try:
stack[-1] = func(*args)
return
@@ -51,9 +55,9 @@ class Unpickler(pkl.Unpickler):
Unpickler.dispatch[pkl.REDUCE[0]] = load_reduce
+
def load(fh, encoding=None, compat=False, is_verbose=False):
- """
- load a pickle, with a provided encoding
+ """load a pickle, with a provided encoding
if compat is True:
fake the old class hierarchy
@@ -90,14 +94,18 @@ def load(fh, encoding=None, compat=False, is_verbose=False):
pandas.sparse.series.SparseSeries = SparseSeries
pandas.sparse.series.SparseTimeSeries = SparseTimeSeries
+
class DeprecatedSeries(np.ndarray, Series):
pass
+
class DeprecatedTimeSeries(DeprecatedSeries):
pass
+
class DeprecatedSparseSeries(DeprecatedSeries):
pass
+
class DeprecatedSparseTimeSeries(DeprecatedSparseSeries):
pass
diff --git a/pandas/compat/scipy.py b/pandas/compat/scipy.py
index 3dab5b1f0451e..81601ffe25609 100644
--- a/pandas/compat/scipy.py
+++ b/pandas/compat/scipy.py
@@ -7,8 +7,7 @@
def scoreatpercentile(a, per, limit=(), interpolation_method='fraction'):
- """
- Calculate the score at the given `per` percentile of the sequence `a`.
+ """Calculate the score at the given `per` percentile of the sequence `a`.
For example, the score at `per=50` is the median. If the desired quantile
lies between two data points, we interpolate between them, according to
@@ -65,7 +64,7 @@ def scoreatpercentile(a, per, limit=(), interpolation_method='fraction'):
values = values[(limit[0] <= values) & (values <= limit[1])]
idx = per / 100. * (values.shape[0] - 1)
- if (idx % 1 == 0):
+ if idx % 1 == 0:
score = values[idx]
else:
if interpolation_method == 'fraction':
@@ -153,8 +152,7 @@ def fastsort(a):
def percentileofscore(a, score, kind='rank'):
- '''
- The percentile rank of a score relative to a list of scores.
+ """The percentile rank of a score relative to a list of scores.
A `percentileofscore` of, for example, 80% means that 80% of the
scores in `a` are below the given score. In the case of gaps or
@@ -217,7 +215,7 @@ def percentileofscore(a, score, kind='rank'):
>>> percentileofscore([1, 2, 3, 3, 4], 3, kind='mean')
60.0
- '''
+ """
a = np.array(a)
n = len(a)
diff --git a/pandas/computation/align.py b/pandas/computation/align.py
index f420d0dacf34c..233f2b61dc463 100644
--- a/pandas/computation/align.py
+++ b/pandas/computation/align.py
@@ -101,8 +101,8 @@ def wrapper(terms):
@_filter_special_cases
def _align_core(terms):
- term_index = [i for i, term in enumerate(terms) if hasattr(term.value,
- 'axes')]
+ term_index = [i for i, term in enumerate(terms)
+ if hasattr(term.value, 'axes')]
term_dims = [terms[i].value.ndim for i in term_index]
ndims = pd.Series(dict(zip(term_index, term_dims)))
@@ -139,10 +139,10 @@ def _align_core(terms):
ordm = np.log10(abs(reindexer_size - term_axis_size))
if ordm >= 1 and reindexer_size >= 10000:
- warnings.warn("Alignment difference on axis {0} is larger"
- " than an order of magnitude on term {1!r}, "
- "by more than {2:.4g}; performance may suffer"
- "".format(axis, terms[i].name, ordm),
+ warnings.warn('Alignment difference on axis {0} is larger '
+ 'than an order of magnitude on term {1!r}, '
+ 'by more than {2:.4g}; performance may '
+ 'suffer'.format(axis, terms[i].name, ordm),
category=pd.io.common.PerformanceWarning)
if transpose:
@@ -237,7 +237,7 @@ def _reconstruct_object(typ, obj, axes, dtype):
res_t = dtype
if (not isinstance(typ, partial) and
- issubclass(typ, pd.core.generic.PandasObject)):
+ issubclass(typ, pd.core.generic.PandasObject)):
return typ(obj, dtype=res_t, **axes)
# special case for pathological things like ~True/~False
diff --git a/pandas/computation/expr.py b/pandas/computation/expr.py
index 64bceee118fd1..1af41acd34ede 100644
--- a/pandas/computation/expr.py
+++ b/pandas/computation/expr.py
@@ -91,7 +91,8 @@ class Scope(StringMixin):
__slots__ = ('globals', 'locals', 'resolvers', '_global_resolvers',
'resolver_keys', '_resolver', 'level', 'ntemps', 'target')
- def __init__(self, gbls=None, lcls=None, level=1, resolvers=None, target=None):
+ def __init__(self, gbls=None, lcls=None, level=1, resolvers=None,
+ target=None):
self.level = level
self.resolvers = tuple(resolvers or [])
self.globals = dict()
@@ -133,11 +134,12 @@ def __init__(self, gbls=None, lcls=None, level=1, resolvers=None, target=None):
self.resolver_dict.update(dict(o))
def __unicode__(self):
- return com.pprint_thing("locals: {0}\nglobals: {0}\nresolvers: "
- "{0}\ntarget: {0}".format(list(self.locals.keys()),
- list(self.globals.keys()),
- list(self.resolver_keys),
- self.target))
+ return com.pprint_thing(
+ 'locals: {0}\nglobals: {0}\nresolvers: '
+ '{0}\ntarget: {0}'.format(list(self.locals.keys()),
+ list(self.globals.keys()),
+ list(self.resolver_keys),
+ self.target))
def __getitem__(self, key):
return self.resolve(key, globally=False)
@@ -499,9 +501,8 @@ def _possibly_evaluate_binop(self, op, op_class, lhs, rhs,
maybe_eval_in_python=('==', '!=')):
res = op(lhs, rhs)
- if (res.op in _cmp_ops_syms and
- lhs.is_datetime or rhs.is_datetime and
- self.engine != 'pytables'):
+ if (res.op in _cmp_ops_syms and lhs.is_datetime or rhs.is_datetime and
+ self.engine != 'pytables'):
# all date ops must be done in python bc numexpr doesn't work well
# with NaT
return self._possibly_eval(res, self.binary_ops)
@@ -594,18 +595,20 @@ def visit_Assign(self, node, **kwargs):
if len(node.targets) != 1:
raise SyntaxError('can only assign a single expression')
if not isinstance(node.targets[0], ast.Name):
- raise SyntaxError('left hand side of an assignment must be a single name')
+ raise SyntaxError('left hand side of an assignment must be a '
+ 'single name')
if self.env.target is None:
raise ValueError('cannot assign without a target object')
try:
assigner = self.visit(node.targets[0], **kwargs)
- except (UndefinedVariableError):
+ except UndefinedVariableError:
assigner = node.targets[0].id
- self.assigner = getattr(assigner,'name',assigner)
+ self.assigner = getattr(assigner, 'name', assigner)
if self.assigner is None:
- raise SyntaxError('left hand side of an assignment must be a single resolvable name')
+ raise SyntaxError('left hand side of an assignment must be a '
+ 'single resolvable name')
return self.visit(node.value, **kwargs)
@@ -622,7 +625,7 @@ def visit_Attribute(self, node, **kwargs):
name = self.env.add_tmp(v)
return self.term_type(name, self.env)
except AttributeError:
- # something like datetime.datetime where scope is overriden
+ # something like datetime.datetime where scope is overridden
if isinstance(value, ast.Name) and value.id == attr:
return resolved
@@ -699,8 +702,7 @@ def visitor(x, y):
return reduce(visitor, operands)
-_python_not_supported = frozenset(['Dict', 'Call', 'BoolOp',
- 'In', 'NotIn'])
+_python_not_supported = frozenset(['Dict', 'Call', 'BoolOp', 'In', 'NotIn'])
_numexpr_supported_calls = frozenset(_reductions + _mathops)
@@ -744,7 +746,7 @@ def __init__(self, expr, engine='numexpr', parser='pandas', env=None,
@property
def assigner(self):
- return getattr(self._visitor,'assigner',None)
+ return getattr(self._visitor, 'assigner', None)
def __call__(self):
self.env.locals['truediv'] = self.truediv
diff --git a/pandas/computation/expressions.py b/pandas/computation/expressions.py
index f1007cbc81eb7..035878e20c645 100644
--- a/pandas/computation/expressions.py
+++ b/pandas/computation/expressions.py
@@ -2,7 +2,7 @@
Expressions
-----------
-Offer fast expression evaluation thru numexpr
+Offer fast expression evaluation through numexpr
"""
@@ -22,9 +22,10 @@
_where = None
# the set of dtypes that we will allow pass to numexpr
-_ALLOWED_DTYPES = dict(
- evaluate=set(['int64', 'int32', 'float64', 'float32', 'bool']),
- where=set(['int64', 'float64', 'bool']))
+_ALLOWED_DTYPES = {
+ 'evaluate': set(['int64', 'int32', 'float64', 'float32', 'bool']),
+ 'where': set(['int64', 'float64', 'bool'])
+}
# the minimum prod shape that we will use numexpr
_MIN_ELEMENTS = 10000
@@ -100,10 +101,10 @@ def _evaluate_numexpr(op, op_str, a, b, raise_on_error=False, truediv=True,
'b_value': b_value},
casting='safe', truediv=truediv,
**eval_kwargs)
- except (ValueError) as detail:
+ except ValueError as detail:
if 'unknown type object' in str(detail):
pass
- except (Exception) as detail:
+ except Exception as detail:
if raise_on_error:
raise
@@ -135,10 +136,10 @@ def _where_numexpr(cond, a, b, raise_on_error=False):
'a_value': a_value,
'b_value': b_value},
casting='safe')
- except (ValueError) as detail:
+ except ValueError as detail:
if 'unknown type object' in str(detail):
pass
- except (Exception) as detail:
+ except Exception as detail:
if raise_on_error:
raise TypeError(str(detail))
diff --git a/pandas/computation/ops.py b/pandas/computation/ops.py
index fd5ee159fe2b4..0510ee86760a3 100644
--- a/pandas/computation/ops.py
+++ b/pandas/computation/ops.py
@@ -207,7 +207,6 @@ def name(self):
return self.value
-
_bool_op_map = {'not': '~', 'and': '&', 'or': '|'}
diff --git a/pandas/computation/pytables.py b/pandas/computation/pytables.py
index eb675d6230c8c..8afe8e909a434 100644
--- a/pandas/computation/pytables.py
+++ b/pandas/computation/pytables.py
@@ -16,6 +16,7 @@
from pandas.computation.common import _ensure_decoded
from pandas.tseries.timedeltas import _coerce_scalar_to_timedelta_type
+
class Scope(expr.Scope):
__slots__ = 'globals', 'locals', 'queryables'
@@ -85,7 +86,7 @@ def _disallow_scalar_only_bool_ops(self):
def prune(self, klass):
def pr(left, right):
- """ create and return a new specilized BinOp from myself """
+ """ create and return a new specialized BinOp from myself """
if left is None:
return right
@@ -95,7 +96,7 @@ def pr(left, right):
k = klass
if isinstance(left, ConditionBinOp):
if (isinstance(left, ConditionBinOp) and
- isinstance(right, ConditionBinOp)):
+ isinstance(right, ConditionBinOp)):
k = JointConditionBinOp
elif isinstance(left, k):
return left
@@ -104,7 +105,7 @@ def pr(left, right):
elif isinstance(left, FilterBinOp):
if (isinstance(left, FilterBinOp) and
- isinstance(right, FilterBinOp)):
+ isinstance(right, FilterBinOp)):
k = JointFilterBinOp
elif isinstance(left, k):
return left
@@ -177,11 +178,12 @@ def stringify(value):
if v.tz is not None:
v = v.tz_convert('UTC')
return TermValue(v, v.value, kind)
- elif isinstance(v, datetime) or hasattr(v, 'timetuple') or kind == u('date'):
+ elif (isinstance(v, datetime) or hasattr(v, 'timetuple') or
+ kind == u('date')):
v = time.mktime(v.timetuple())
return TermValue(v, pd.Timestamp(v), kind)
elif kind == u('timedelta64') or kind == u('timedelta'):
- v = _coerce_scalar_to_timedelta_type(v,unit='s').item()
+ v = _coerce_scalar_to_timedelta_type(v, unit='s').item()
return TermValue(int(v), v, kind)
elif kind == u('integer'):
v = int(float(v))
@@ -293,7 +295,8 @@ def invert(self):
#if self.condition is not None:
# self.condition = "~(%s)" % self.condition
#return self
- raise NotImplementedError("cannot use an invert condition when passing to numexpr")
+ raise NotImplementedError("cannot use an invert condition when "
+ "passing to numexpr")
def format(self):
""" return the actual ne format """
@@ -352,10 +355,10 @@ def prune(self, klass):
operand = operand.prune(klass)
if operand is not None:
- if issubclass(klass,ConditionBinOp):
+ if issubclass(klass, ConditionBinOp):
if operand.condition is not None:
return operand.invert()
- elif issubclass(klass,FilterBinOp):
+ elif issubclass(klass, FilterBinOp):
if operand.filter is not None:
return operand.invert()
@@ -364,6 +367,7 @@ def prune(self, klass):
_op_classes = {'unary': UnaryOp}
+
class ExprVisitor(BaseExprVisitor):
const_type = Constant
term_type = Term
@@ -401,7 +405,7 @@ def visit_Subscript(self, node, **kwargs):
return self.const_type(value[slobj], self.env)
except TypeError:
raise ValueError("cannot subscript {0!r} with "
- "{1!r}".format(value, slobj))
+ "{1!r}".format(value, slobj))
def visit_Attribute(self, node, **kwargs):
attr = node.attr
@@ -435,7 +439,8 @@ class Expr(expr.Expr):
Parameters
----------
where : string term expression, Expr, or list-like of Exprs
- queryables : a "kinds" map (dict of column name -> kind), or None if column is non-indexable
+ queryables : a "kinds" map (dict of column name -> kind), or None if column
+ is non-indexable
encoding : an encoding that will encode the query terms
Returns
@@ -538,13 +543,13 @@ def evaluate(self):
try:
self.condition = self.terms.prune(ConditionBinOp)
except AttributeError:
- raise ValueError(
- "cannot process expression [{0}], [{1}] is not a valid condition".format(self.expr,self))
+ raise ValueError("cannot process expression [{0}], [{1}] is not a "
+ "valid condition".format(self.expr, self))
try:
self.filter = self.terms.prune(FilterBinOp)
except AttributeError:
- raise ValueError(
- "cannot process expression [{0}], [{1}] is not a valid filter".format(self.expr,self))
+ raise ValueError("cannot process expression [{0}], [{1}] is not a "
+ "valid filter".format(self.expr, self))
return self.condition, self.filter
diff --git a/pandas/core/algorithms.py b/pandas/core/algorithms.py
index 2699dd0a25a2b..24c14a5d7f215 100644
--- a/pandas/core/algorithms.py
+++ b/pandas/core/algorithms.py
@@ -153,7 +153,8 @@ def factorize(values, sort=False, order=None, na_sentinel=-1):
return labels, uniques
-def value_counts(values, sort=True, ascending=False, normalize=False, bins=None):
+def value_counts(values, sort=True, ascending=False, normalize=False,
+ bins=None):
"""
Compute a histogram of the counts of non-null values
@@ -191,7 +192,7 @@ def value_counts(values, sort=True, ascending=False, normalize=False, bins=None)
values = com._ensure_int64(values)
keys, counts = htable.value_count_int64(values)
- elif issubclass(values.dtype.type, (np.datetime64,np.timedelta64)):
+ elif issubclass(values.dtype.type, (np.datetime64, np.timedelta64)):
dtype = values.dtype
values = values.view(np.int64)
keys, counts = htable.value_count_int64(values)
@@ -223,7 +224,7 @@ def value_counts(values, sort=True, ascending=False, normalize=False, bins=None)
def mode(values):
- "Returns the mode or mode(s) of the passed Series or ndarray (sorted)"
+ """Returns the mode or mode(s) of the passed Series or ndarray (sorted)"""
# must sort because hash order isn't necessarily defined.
from pandas.core.series import Series
@@ -239,7 +240,7 @@ def mode(values):
values = com._ensure_int64(values)
result = constructor(sorted(htable.mode_int64(values)), dtype=dtype)
- elif issubclass(values.dtype.type, (np.datetime64,np.timedelta64)):
+ elif issubclass(values.dtype.type, (np.datetime64, np.timedelta64)):
dtype = values.dtype
values = values.view(np.int64)
result = constructor(sorted(htable.mode_int64(values)), dtype=dtype)
@@ -324,7 +325,7 @@ def _get_score(at):
return np.nan
idx = at * (len(values) - 1)
- if (idx % 1 == 0):
+ if idx % 1 == 0:
score = values[idx]
else:
if interpolation_method == 'fraction':
diff --git a/pandas/core/api.py b/pandas/core/api.py
index 36081cc34cc3a..28118c60776ce 100644
--- a/pandas/core/api.py
+++ b/pandas/core/api.py
@@ -27,8 +27,8 @@
# legacy
from pandas.core.daterange import DateRange # deprecated
-from pandas.core.common import save, load # deprecated, remove in 0.13
+from pandas.core.common import save, load # deprecated, remove in 0.13
import pandas.core.datetools as datetools
-from pandas.core.config import get_option, set_option, reset_option,\
- describe_option, options
+from pandas.core.config import (get_option, set_option, reset_option,
+ describe_option, options)
diff --git a/pandas/core/array.py b/pandas/core/array.py
index 6847ba073b92a..209b00cf8bb3c 100644
--- a/pandas/core/array.py
+++ b/pandas/core/array.py
@@ -37,6 +37,7 @@
#### a series-like ndarray ####
+
class SNDArray(Array):
def __new__(cls, data, index=None, name=None):
@@ -49,4 +50,3 @@ def __new__(cls, data, index=None, name=None):
@property
def values(self):
return self.view(Array)
-
diff --git a/pandas/core/base.py b/pandas/core/base.py
index 6b9fa78d45406..a702e7c87c0a9 100644
--- a/pandas/core/base.py
+++ b/pandas/core/base.py
@@ -5,10 +5,15 @@
import numpy as np
from pandas.core import common as com
+
class StringMixin(object):
- """implements string methods so long as object defines a `__unicode__` method.
- Handles Python2/3 compatibility transparently."""
- # side note - this could be made into a metaclass if more than one object nees
+ """implements string methods so long as object defines a `__unicode__`
+ method.
+
+ Handles Python2/3 compatibility transparently.
+ """
+ # side note - this could be made into a metaclass if more than one
+ # object needs
#----------------------------------------------------------------------
# Formatting
@@ -96,7 +101,8 @@ class FrozenList(PandasObject, list):
because it's technically non-hashable, will be used
for lookups, appropriately, etc.
"""
- # Sidenote: This has to be of type list, otherwise it messes up PyTables typechecks
+ # Sidenote: This has to be of type list, otherwise it messes up PyTables
+ # typechecks
def __add__(self, other):
if isinstance(other, tuple):
@@ -146,7 +152,7 @@ def _disabled(self, *args, **kwargs):
def __unicode__(self):
from pandas.core.common import pprint_thing
return pprint_thing(self, quote_strings=True,
- escape_chars=('\t', '\r', '\n'))
+ escape_chars=('\t', '\r', '\n'))
def __repr__(self):
return "%s(%s)" % (self.__class__.__name__,
@@ -185,7 +191,9 @@ def __unicode__(self):
"""
Return a string representation for this object.
- Invoked by unicode(df) in py2 only. Yields a Unicode String in both py2/py3.
+ Invoked by unicode(df) in py2 only. Yields a Unicode String in both
+ py2/py3.
"""
- prepr = com.pprint_thing(self, escape_chars=('\t', '\r', '\n'),quote_strings=True)
+ prepr = com.pprint_thing(self, escape_chars=('\t', '\r', '\n'),
+ quote_strings=True)
return "%s(%s, dtype='%s')" % (type(self).__name__, prepr, self.dtype)
diff --git a/pandas/core/categorical.py b/pandas/core/categorical.py
index f412947f92255..fec9cd4ff4274 100644
--- a/pandas/core/categorical.py
+++ b/pandas/core/categorical.py
@@ -31,6 +31,7 @@ def f(self, other):
return f
+
class Categorical(PandasObject):
"""
Represents a categorical variable in classic R / S-plus fashion
@@ -167,8 +168,8 @@ def _repr_footer(self):
def _get_repr(self, name=False, length=True, na_rep='NaN', footer=True):
formatter = fmt.CategoricalFormatter(self, name=name,
- length=length, na_rep=na_rep,
- footer=footer)
+ length=length, na_rep=na_rep,
+ footer=footer)
result = formatter.to_string()
return compat.text_type(result)
@@ -226,7 +227,8 @@ def describe(self):
grouped = DataFrame(self.labels).groupby(0)
counts = grouped.count().values.squeeze()
freqs = counts/float(counts.sum())
- return DataFrame.from_dict(dict(
- counts=counts,
- freqs=freqs,
- levels=self.levels)).set_index('levels')
+ return DataFrame.from_dict({
+ 'counts': counts,
+ 'freqs': freqs,
+ 'levels': self.levels
+ }).set_index('levels')
diff --git a/pandas/core/common.py b/pandas/core/common.py
index 42964c9d48537..6fc015d2cb575 100644
--- a/pandas/core/common.py
+++ b/pandas/core/common.py
@@ -26,20 +26,23 @@
class PandasError(Exception):
pass
+
class SettingWithCopyError(ValueError):
pass
+
class SettingWithCopyWarning(Warning):
pass
+
class AmbiguousIndexError(PandasError, KeyError):
pass
_POSSIBLY_CAST_DTYPES = set([np.dtype(t)
- for t in ['M8[ns]', 'm8[ns]', 'O', 'int8',
- 'uint8', 'int16', 'uint16', 'int32',
- 'uint32', 'int64', 'uint64']])
+ for t in ['M8[ns]', 'm8[ns]', 'O', 'int8',
+ 'uint8', 'int16', 'uint16', 'int32',
+ 'uint32', 'int64', 'uint64']])
_NS_DTYPE = np.dtype('M8[ns]')
_TD_DTYPE = np.dtype('m8[ns]')
@@ -136,8 +139,7 @@ def _isnull_new(obj):
def _isnull_old(obj):
- '''
- Detect missing values. Treat None, NaN, INF, -INF as null.
+ """Detect missing values. Treat None, NaN, INF, -INF as null.
Parameters
----------
@@ -146,7 +148,7 @@ def _isnull_old(obj):
Returns
-------
boolean ndarray or boolean
- '''
+ """
if lib.isscalar(obj):
return lib.checknull_old(obj)
# hack (for now) because MI registers as ndarray
@@ -155,7 +157,8 @@ def _isnull_old(obj):
elif isinstance(obj, (ABCSeries, np.ndarray)):
return _isnull_ndarraylike_old(obj)
elif isinstance(obj, ABCGeneric):
- return obj._constructor(obj._data.apply(lambda x: _isnull_old(x.values)))
+ return obj._constructor(obj._data.apply(
+ lambda x: _isnull_old(x.values)))
elif isinstance(obj, list) or hasattr(obj, '__array__'):
return _isnull_ndarraylike_old(np.asarray(obj))
else:
@@ -165,7 +168,7 @@ def _isnull_old(obj):
def _use_inf_as_null(key):
- '''Option change callback for null/inf behaviour
+ """Option change callback for null/inf behaviour
Choose which replacement for numpy.isnan / -numpy.isfinite is used.
Parameters
@@ -182,7 +185,7 @@ def _use_inf_as_null(key):
* http://stackoverflow.com/questions/4859217/
programmatically-creating-variables-in-python/4859312#4859312
- '''
+ """
flag = get_option(key)
if flag:
globals()['_isnull'] = _isnull_old
@@ -192,7 +195,7 @@ def _use_inf_as_null(key):
def _isnull_ndarraylike(obj):
- values = getattr(obj,'values',obj)
+ values = getattr(obj, 'values', obj)
dtype = values.dtype
if dtype.kind in ('O', 'S', 'U'):
@@ -221,7 +224,7 @@ def _isnull_ndarraylike(obj):
def _isnull_ndarraylike_old(obj):
- values = getattr(obj,'values',obj)
+ values = getattr(obj, 'values', obj)
dtype = values.dtype
if dtype.kind in ('O', 'S', 'U'):
@@ -775,13 +778,15 @@ def diff(arr, n, axis=0):
def _coerce_to_dtypes(result, dtypes):
- """ given a dtypes and a result set, coerce the result elements to the dtypes """
+    """ given dtypes and a result set, coerce the result elements to the
+ dtypes
+ """
if len(result) != len(dtypes):
raise AssertionError("_coerce_to_dtypes requires equal len arrays")
from pandas.tseries.timedeltas import _coerce_scalar_to_timedelta_type
- def conv(r,dtype):
+ def conv(r, dtype):
try:
if isnull(r):
pass
@@ -800,7 +805,7 @@ def conv(r,dtype):
return r
- return np.array([ conv(r,dtype) for r, dtype in zip(result,dtypes) ])
+ return np.array([conv(r, dtype) for r, dtype in zip(result, dtypes)])
def _infer_dtype_from_scalar(val):
@@ -850,7 +855,9 @@ def _infer_dtype_from_scalar(val):
def _maybe_cast_scalar(dtype, value):
- """ if we a scalar value and are casting to a dtype that needs nan -> NaT conversion """
+    """ if we have a scalar value and are casting to a dtype that needs nan -> NaT
+ conversion
+ """
if np.isscalar(value) and dtype in _DATELIKE_DTYPES and isnull(value):
return tslib.iNaT
return value
@@ -882,8 +889,8 @@ def _maybe_promote(dtype, fill_value=np.nan):
try:
fill_value = lib.Timestamp(fill_value).value
except:
- # the proper thing to do here would probably be to upcast to
- # object (but numpy 1.6.1 doesn't do this properly)
+ # the proper thing to do here would probably be to upcast
+ # to object (but numpy 1.6.1 doesn't do this properly)
fill_value = tslib.iNaT
else:
fill_value = tslib.iNaT
@@ -920,10 +927,10 @@ def _maybe_promote(dtype, fill_value=np.nan):
def _maybe_upcast_putmask(result, mask, other, dtype=None, change=None):
""" a safe version of put mask that (potentially upcasts the result
- return the result
- if change is not None, then MUTATE the change (and change the dtype)
- return a changed flag
- """
+ return the result
+ if change is not None, then MUTATE the change (and change the dtype)
+ return a changed flag
+ """
if mask.any():
@@ -964,15 +971,17 @@ def changeit():
return r, True
# we want to decide whether putmask will work
- # if we have nans in the False portion of our mask then we need to upcast (possibily)
- # otherwise we DON't want to upcast (e.g. if we are have values, say integers in
- # the success portion then its ok to not upcast)
+        # if we have nans in the False portion of our mask then we need to
+        # upcast (possibly); otherwise we DON'T want to upcast (e.g. if we
+        # have values, say integers, in the success portion then it's ok to not
+ # upcast)
new_dtype, fill_value = _maybe_promote(result.dtype, other)
if new_dtype != result.dtype:
# we have a scalar or len 0 ndarray
# and its nan and we are changing some values
- if np.isscalar(other) or (isinstance(other, np.ndarray) and other.ndim < 1):
+ if (np.isscalar(other) or
+ (isinstance(other, np.ndarray) and other.ndim < 1)):
if isnull(other):
return changeit()
@@ -991,14 +1000,15 @@ def changeit():
def _maybe_upcast(values, fill_value=np.nan, dtype=None, copy=False):
- """ provide explicty type promotion and coercion
+    """ provide explicit type promotion and coercion
- Parameters
- ----------
- values : the ndarray that we want to maybe upcast
- fill_value : what we want to fill with
- dtype : if None, then use the dtype of the values, else coerce to this type
- copy : if True always make a copy even if no upcast is required """
+ Parameters
+ ----------
+ values : the ndarray that we want to maybe upcast
+ fill_value : what we want to fill with
+ dtype : if None, then use the dtype of the values, else coerce to this type
+ copy : if True always make a copy even if no upcast is required
+ """
if dtype is None:
dtype = values.dtype
@@ -1022,7 +1032,8 @@ def _possibly_cast_item(obj, item, dtype):
def _possibly_downcast_to_dtype(result, dtype):
""" try to cast to the specified dtype (e.g. convert back to bool/int
- or could be an astype of float64->float32 """
+ or could be an astype of float64->float32
+ """
if np.isscalar(result) or not len(result):
return result
@@ -1065,22 +1076,25 @@ def _possibly_downcast_to_dtype(result, dtype):
# do a test on the first element, if it fails then we are done
r = result.ravel()
- arr = np.array([ r[0] ])
- if not np.allclose(arr,trans(arr).astype(dtype)):
+ arr = np.array([r[0]])
+ if not np.allclose(arr, trans(arr).astype(dtype)):
return result
# a comparable, e.g. a Decimal may slip in here
- elif not isinstance(r[0], (np.integer,np.floating,np.bool,int,float,bool)):
+ elif not isinstance(r[0], (np.integer, np.floating, np.bool, int,
+ float, bool)):
return result
- if issubclass(result.dtype.type, (np.object_,np.number)) and notnull(result).all():
+ if (issubclass(result.dtype.type, (np.object_, np.number)) and
+ notnull(result).all()):
new_result = trans(result).astype(dtype)
try:
- if np.allclose(new_result,result):
+ if np.allclose(new_result, result):
return new_result
except:
- # comparison of an object dtype with a number type could hit here
+ # comparison of an object dtype with a number type could
+ # hit here
if (new_result == result).all():
return new_result
except:
@@ -1119,8 +1133,9 @@ def _lcd_dtypes(a_dtype, b_dtype):
def _fill_zeros(result, y, fill):
""" if we have an integer value (or array in y)
- and we have 0's, fill them with the fill,
- return the result """
+ and we have 0's, fill them with the fill,
+ return the result
+ """
if fill is not None:
if not isinstance(y, np.ndarray):
@@ -1155,7 +1170,6 @@ def wrapper(arr, mask, limit=None):
np.int64)
-
def pad_1d(values, limit=None, mask=None):
dtype = values.dtype.name
@@ -1357,8 +1371,8 @@ def _interp_limit(invalid, limit):
new_x = new_x[firstIndex:]
xvalues = xvalues[firstIndex:]
- result[firstIndex:][invalid] = _interpolate_scipy_wrapper(valid_x,
- valid_y, new_x, method=method, fill_value=fill_value,
+ result[firstIndex:][invalid] = _interpolate_scipy_wrapper(
+ valid_x, valid_y, new_x, method=method, fill_value=fill_value,
bounds_error=bounds_error, **kwargs)
if limit:
result[violate_limit] = np.nan
@@ -1384,7 +1398,7 @@ def _interpolate_scipy_wrapper(x, y, new_x, method, fill_value=None,
'barycentric': interpolate.barycentric_interpolate,
'krogh': interpolate.krogh_interpolate,
'piecewise_polynomial': interpolate.piecewise_polynomial_interpolate,
- }
+ }
try:
alt_methods['pchip'] = interpolate.pchip_interpolate
@@ -1411,16 +1425,18 @@ def _interpolate_scipy_wrapper(x, y, new_x, method, fill_value=None,
def interpolate_2d(values, method='pad', axis=0, limit=None, fill_value=None):
- """ perform an actual interpolation of values, values will be make 2-d if needed
- fills inplace, returns the result """
+    """ perform an actual interpolation of values; values will be made 2-d if
+    needed, fills inplace, returns the result
+ """
transf = (lambda x: x) if axis == 0 else (lambda x: x.T)
# reshape a 1 dim if needed
ndim = values.ndim
if values.ndim == 1:
- if axis != 0: # pragma: no cover
- raise AssertionError("cannot interpolate on a ndim == 1 with axis != 0")
+ if axis != 0: # pragma: no cover
+ raise AssertionError("cannot interpolate on a ndim == 1 with "
+ "axis != 0")
values = values.reshape(tuple((1,) + values.shape))
if fill_value is None:
@@ -1451,6 +1467,7 @@ def _consensus_name_attr(objs):
_fill_methods = {'pad': pad_1d, 'backfill': backfill_1d}
+
def _get_fill_func(method):
method = _clean_fill_method(method)
return _fill_methods[method]
@@ -1478,8 +1495,9 @@ def _values_from_object(o):
return o
-def _possibly_convert_objects(values, convert_dates=True, convert_numeric=True):
- """ if we have an object dtype, try to coerce dates and/or numers """
+def _possibly_convert_objects(values, convert_dates=True,
+ convert_numeric=True):
+ """ if we have an object dtype, try to coerce dates and/or numbers """
# if we have passed in a list or scalar
if isinstance(values, (list, tuple)):
@@ -1537,7 +1555,9 @@ def _possibly_convert_platform(values):
def _possibly_cast_to_datetime(value, dtype, coerce=False):
- """ try to cast the array/value to a datetimelike dtype, converting float nan to iNaT """
+ """ try to cast the array/value to a datetimelike dtype, converting float
+ nan to iNaT
+ """
if dtype is not None:
if isinstance(dtype, compat.string_types):
@@ -1573,21 +1593,26 @@ def _possibly_cast_to_datetime(value, dtype, coerce=False):
from pandas.tseries.tools import to_datetime
value = to_datetime(value, coerce=coerce).values
elif is_timedelta64:
- from pandas.tseries.timedeltas import _possibly_cast_to_timedelta
+ from pandas.tseries.timedeltas import \
+ _possibly_cast_to_timedelta
value = _possibly_cast_to_timedelta(value)
except:
pass
else:
- # only do this if we have an array and the dtype of the array is not setup already
- # we are not an integer/object, so don't bother with this conversion
- if isinstance(value, np.ndarray) and not (issubclass(value.dtype.type, np.integer) or value.dtype == np.object_):
+    # only do this if we have an array and the dtype of the array is not
+    # set up already; we are not an integer/object, so don't bother with this
+    # conversion
+ if (isinstance(value, np.ndarray) and not
+ (issubclass(value.dtype.type, np.integer) or
+ value.dtype == np.object_)):
pass
else:
- # we might have a array (or single object) that is datetime like, and no dtype is passed
- # don't change the value unless we find a datetime set
+            # we might have an array (or single object) that is datetime-like,
+            # and no dtype is passed; don't change the value unless we find a
+            # datetime set
v = value
if not is_list_like(v):
v = [v]
@@ -1599,7 +1624,8 @@ def _possibly_cast_to_datetime(value, dtype, coerce=False):
except:
pass
elif inferred_type in ['timedelta', 'timedelta64']:
- from pandas.tseries.timedeltas import _possibly_cast_to_timedelta
+ from pandas.tseries.timedeltas import \
+ _possibly_cast_to_timedelta
value = _possibly_cast_to_timedelta(value)
return value
@@ -1874,9 +1900,9 @@ def _asarray_tuplesafe(values, dtype=None):
try:
result = np.empty(len(values), dtype=object)
result[:] = values
- except (ValueError):
+ except ValueError:
# we have a list-of-list
- result[:] = [ tuple(x) for x in values ]
+ result[:] = [tuple(x) for x in values]
return result
@@ -1977,7 +2003,8 @@ def is_timedelta64_dtype(arr_or_dtype):
def needs_i8_conversion(arr_or_dtype):
- return is_datetime64_dtype(arr_or_dtype) or is_timedelta64_dtype(arr_or_dtype)
+ return (is_datetime64_dtype(arr_or_dtype) or
+ is_timedelta64_dtype(arr_or_dtype))
def is_float_dtype(arr_or_dtype):
@@ -2010,7 +2037,8 @@ def is_re_compilable(obj):
def is_list_like(arg):
- return hasattr(arg, '__iter__') and not isinstance(arg, compat.string_and_binary_types)
+ return (hasattr(arg, '__iter__') and
+ not isinstance(arg, compat.string_and_binary_types))
def _is_sequence(x):
@@ -2044,8 +2072,8 @@ def _astype_nansafe(arr, dtype, copy=True):
elif dtype == np.int64:
return arr.view(dtype)
elif dtype != _NS_DTYPE:
- raise TypeError(
- "cannot astype a datetimelike from [%s] to [%s]" % (arr.dtype, dtype))
+ raise TypeError("cannot astype a datetimelike from [%s] to [%s]" %
+ (arr.dtype, dtype))
return arr.astype(_NS_DTYPE)
elif is_timedelta64_dtype(arr):
if dtype == np.int64:
@@ -2054,7 +2082,8 @@ def _astype_nansafe(arr, dtype, copy=True):
return arr.astype(object)
# in py3, timedelta64[ns] are int64
- elif (compat.PY3 and dtype not in [_INT64_DTYPE,_TD_DTYPE]) or (not compat.PY3 and dtype != _TD_DTYPE):
+ elif ((compat.PY3 and dtype not in [_INT64_DTYPE, _TD_DTYPE]) or
+ (not compat.PY3 and dtype != _TD_DTYPE)):
# allow frequency conversions
if dtype.kind == 'm':
@@ -2063,7 +2092,8 @@ def _astype_nansafe(arr, dtype, copy=True):
result[mask] = np.nan
return result
- raise TypeError("cannot astype a timedelta from [%s] to [%s]" % (arr.dtype,dtype))
+ raise TypeError("cannot astype a timedelta from [%s] to [%s]" %
+ (arr.dtype, dtype))
return arr.astype(_TD_DTYPE)
elif (np.issubdtype(arr.dtype, np.floating) and
@@ -2083,7 +2113,8 @@ def _astype_nansafe(arr, dtype, copy=True):
def _clean_fill_method(method):
- if method is None: return None
+ if method is None:
+ return None
method = method.lower()
if method == 'ffill':
method = 'pad'
@@ -2130,8 +2161,9 @@ def next(self):
def _get_handle(path, mode, encoding=None, compression=None):
"""Gets file handle for given path and mode.
- NOTE: Under Python 3.2, getting a compressed file handle means reading in the entire file,
- decompressing it and decoding it to ``str`` all at once and then wrapping it in a StringIO.
+ NOTE: Under Python 3.2, getting a compressed file handle means reading in
+ the entire file, decompressing it and decoding it to ``str`` all at once
+ and then wrapping it in a StringIO.
"""
if compression is not None:
if encoding is not None and not compat.PY3:
@@ -2327,8 +2359,10 @@ def in_qtconsole():
"""
try:
ip = get_ipython()
- front_end = (ip.config.get('KernelApp', {}).get('parent_appname', "") or
- ip.config.get('IPKernelApp', {}).get('parent_appname', ""))
+ front_end = (
+ ip.config.get('KernelApp', {}).get('parent_appname', "") or
+ ip.config.get('IPKernelApp', {}).get('parent_appname', "")
+ )
if 'qtconsole' in front_end.lower():
return True
except:
@@ -2342,8 +2376,10 @@ def in_ipnb():
"""
try:
ip = get_ipython()
- front_end = (ip.config.get('KernelApp', {}).get('parent_appname', "") or
- ip.config.get('IPKernelApp', {}).get('parent_appname', ""))
+ front_end = (
+ ip.config.get('KernelApp', {}).get('parent_appname', "") or
+ ip.config.get('IPKernelApp', {}).get('parent_appname', "")
+ )
if 'notebook' in front_end.lower():
return True
except:
@@ -2399,7 +2435,7 @@ def _pprint_seq(seq, _nest_lvl=0, **kwds):
bounds length of printed sequence, depending on options
"""
- if isinstance(seq,set):
+ if isinstance(seq, set):
fmt = u("set([%s])")
else:
fmt = u("[%s]") if hasattr(seq, '__setitem__') else u("(%s)")
@@ -2433,8 +2469,8 @@ def _pprint_dict(seq, _nest_lvl=0, **kwds):
nitems = get_option("max_seq_items") or len(seq)
for k, v in list(seq.items())[:nitems]:
- pairs.append(pfmt % (pprint_thing(k,_nest_lvl+1,**kwds),
- pprint_thing(v,_nest_lvl+1,**kwds)))
+ pairs.append(pfmt % (pprint_thing(k, _nest_lvl+1, **kwds),
+ pprint_thing(v, _nest_lvl+1, **kwds)))
if nitems < len(seq):
return fmt % (", ".join(pairs) + ", ...")
@@ -2505,7 +2541,7 @@ def as_escaped_unicode(thing, escape_chars=escape_chars):
get_option("display.pprint_nest_depth"):
result = _pprint_seq(thing, _nest_lvl, escape_chars=escape_chars,
quote_strings=quote_strings)
- elif isinstance(thing,compat.string_types) and quote_strings:
+ elif isinstance(thing, compat.string_types) and quote_strings:
if compat.PY3:
fmt = "'%s'"
else:
@@ -2539,8 +2575,8 @@ def load(path): # TODO remove in 0.13
Load pickled pandas object (or any other pickled object) from the specified
file path
- Warning: Loading pickled data received from untrusted sources can be unsafe.
- See: http://docs.python.org/2.7/library/pickle.html
+ Warning: Loading pickled data received from untrusted sources can be
+ unsafe. See: http://docs.python.org/2.7/library/pickle.html
Parameters
----------
@@ -2558,7 +2594,7 @@ def load(path): # TODO remove in 0.13
def save(obj, path): # TODO remove in 0.13
- '''
+ """
Pickle (serialize) object to input file path
Parameters
@@ -2566,7 +2602,7 @@ def save(obj, path): # TODO remove in 0.13
obj : any object
path : string
File path
- '''
+ """
import warnings
warnings.warn("save is deprecated, use obj.to_pickle", FutureWarning)
from pandas.io.pickle import to_pickle
@@ -2574,8 +2610,8 @@ def save(obj, path): # TODO remove in 0.13
def _maybe_match_name(a, b):
- a_name = getattr(a,'name',None)
- b_name = getattr(b,'name',None)
+ a_name = getattr(a, 'name', None)
+ b_name = getattr(b, 'name', None)
if a_name == b_name:
return a_name
return None
diff --git a/pandas/core/config.py b/pandas/core/config.py
index 20ec30398fd64..6eb947119578f 100644
--- a/pandas/core/config.py
+++ b/pandas/core/config.py
@@ -173,16 +173,19 @@ def _reset_option(pat):
if len(keys) > 1 and len(pat) < 4 and pat != 'all':
raise ValueError('You must specify at least 4 characters when '
- 'resetting multiple keys, use the special keyword "all" '
- 'to reset all the options to their default value')
+ 'resetting multiple keys, use the special keyword '
+ '"all" to reset all the options to their default '
+ 'value')
for k in keys:
_set_option(k, _registered_options[k].defval)
+
def get_default_val(pat):
- key = _get_single_key(pat, silent=True)
+ key = _get_single_key(pat, silent=True)
return _get_registered_option(key).defval
+
class DictWrapper(object):
""" provide attribute-style access to a nested dict
"""
@@ -242,7 +245,8 @@ def __doc__(self):
return self.__doc_tmpl__.format(opts_desc=opts_desc,
opts_list=opts_list)
-_get_option_tmpl = """"get_option(pat) - Retrieves the value of the specified option
+_get_option_tmpl = """
+get_option(pat) - Retrieves the value of the specified option
Available options:
{opts_list}
@@ -266,7 +270,8 @@ def __doc__(self):
{opts_desc}
"""
-_set_option_tmpl = """set_option(pat,value) - Sets the value of the specified option
+_set_option_tmpl = """
+set_option(pat,value) - Sets the value of the specified option
Available options:
{opts_list}
@@ -292,7 +297,8 @@ def __doc__(self):
{opts_desc}
"""
-_describe_option_tmpl = """describe_option(pat,_print_desc=False) Prints the description
+_describe_option_tmpl = """
+describe_option(pat,_print_desc=False) Prints the description
for one or more registered options.
Call with no arguments to get a listing for all registered options.
@@ -317,7 +323,8 @@ def __doc__(self):
{opts_desc}
"""
-_reset_option_tmpl = """reset_option(pat) - Reset one or more options to their default value.
+_reset_option_tmpl = """
+reset_option(pat) - Reset one or more options to their default value.
Pass "all" as argument to reset all options.
@@ -353,9 +360,11 @@ def __doc__(self):
class option_context(object):
def __init__(self, *args):
- if not ( len(args) % 2 == 0 and len(args) >= 2):
- errmsg = "Need to invoke as option_context(pat,val,[(pat,val),..))."
- raise AssertionError(errmsg)
+ if not (len(args) % 2 == 0 and len(args) >= 2):
+ raise AssertionError(
+                'Need to invoke as '
+                'option_context(pat, val, [(pat, val), ...]).'
+ )
ops = list(zip(args[::2], args[1::2]))
undo = []
@@ -425,20 +434,21 @@ def register_option(key, defval, doc='', validator=None, cb=None):
for i, p in enumerate(path[:-1]):
if not isinstance(cursor, dict):
raise OptionError("Path prefix to option '%s' is already an option"
- % '.'.join(path[:i]))
+ % '.'.join(path[:i]))
if p not in cursor:
cursor[p] = {}
cursor = cursor[p]
if not isinstance(cursor, dict):
raise OptionError("Path prefix to option '%s' is already an option"
- % '.'.join(path[:-1]))
+ % '.'.join(path[:-1]))
cursor[path[-1]] = defval # initialize
# save the option metadata
_registered_options[key] = RegisteredOption(key=key, defval=defval,
- doc=doc, validator=validator, cb=cb)
+ doc=doc, validator=validator,
+ cb=cb)
def deprecate_option(key, msg=None, rkey=None, removal_ver=None):
@@ -484,7 +494,7 @@ def deprecate_option(key, msg=None, rkey=None, removal_ver=None):
if key in _deprecated_options:
raise OptionError("Option '%s' has already been defined as deprecated."
- % key)
+ % key)
_deprecated_options[key] = DeprecatedOption(key, msg, rkey, removal_ver)
@@ -512,6 +522,7 @@ def _get_root(key):
cursor = cursor[p]
return cursor, path[-1]
+
def _get_option_fast(key):
""" internal quick access routine, no error checking """
path = key.split('.')
@@ -520,6 +531,7 @@ def _get_option_fast(key):
cursor = cursor[p]
return cursor
+
def _is_deprecated(key):
""" Returns True if the given option has been deprecated """
@@ -603,7 +615,8 @@ def _build_option_description(k):
s = u('%s: ') % k
if o:
- s += u('[default: %s] [currently: %s]') % (o.defval, _get_option(k, True))
+ s += u('[default: %s] [currently: %s]') % (o.defval,
+ _get_option(k, True))
if o.doc:
s += '\n' + '\n '.join(o.doc.strip().split('\n'))
@@ -755,12 +768,14 @@ def inner(x):
return inner
+
def is_one_of_factory(legal_values):
def inner(x):
from pandas.core.common import pprint_thing as pp
if not x in legal_values:
pp_values = lmap(pp, legal_values)
- raise ValueError("Value must be one of %s" % pp("|".join(pp_values)))
+ raise ValueError("Value must be one of %s"
+ % pp("|".join(pp_values)))
return inner
diff --git a/pandas/core/config_init.py b/pandas/core/config_init.py
index 9e95759ac088b..b9b934769793f 100644
--- a/pandas/core/config_init.py
+++ b/pandas/core/config_init.py
@@ -1,8 +1,3 @@
-import pandas.core.config as cf
-from pandas.core.config import (is_int, is_bool, is_text, is_float,
- is_instance_factory,is_one_of_factory,get_default_val)
-from pandas.core.format import detect_console_encoding
-
"""
This module is imported from the pandas package __init__.py file
in order to ensure that the core.config options registered here will
@@ -15,6 +10,12 @@
"""
+import pandas.core.config as cf
+from pandas.core.config import (is_int, is_bool, is_text, is_float,
+ is_instance_factory, is_one_of_factory,
+ get_default_val)
+from pandas.core.format import detect_console_encoding
+
###########################################
# options from the "display" namespace
@@ -113,8 +114,8 @@
pc_expand_repr_doc = """
: boolean
- Whether to print out the full DataFrame repr for wide DataFrames
- across multiple lines, `max_columns` is still respected, but the output will
+ Whether to print out the full DataFrame repr for wide DataFrames across
+ multiple lines, `max_columns` is still respected, but the output will
wrap-around across multiple "pages" if it's width exceeds `display.width`.
"""
@@ -124,7 +125,8 @@
"""
pc_line_width_deprecation_warning = """\
-line_width has been deprecated, use display.width instead (currently both are identical)
+line_width has been deprecated, use display.width instead (currently both are
+identical)
"""
pc_height_deprecation_warning = """\
@@ -134,8 +136,8 @@
pc_width_doc = """
: int
Width of the display in characters. In case python/IPython is running in
- a terminal this can be set to None and pandas will correctly auto-detect the
- width.
+ a terminal this can be set to None and pandas will correctly auto-detect
+ the width.
Note that the IPython notebook, IPython qtconsole, or IDLE do not run in a
terminal and hence it is not possible to correctly detect the width.
"""
@@ -155,8 +157,8 @@
: int or None
when pretty-printing a long sequence, no more then `max_seq_items`
- will be printed. If items are ommitted, they will be denoted by the addition
- of "..." to the resulting string.
+ will be printed. If items are omitted, they will be denoted by the
+ addition of "..." to the resulting string.
If set to None, the number of items to be printed is unlimited.
"""
@@ -182,6 +184,8 @@
"""
style_backup = dict()
+
+
def mpl_style_cb(key):
import sys
from pandas.tools.plotting import mpl_stylesheet
@@ -190,15 +194,14 @@ def mpl_style_cb(key):
val = cf.get_option(key)
if 'matplotlib' not in sys.modules.keys():
- if not(val): # starting up, we get reset to None
+ if not(val): # starting up, we get reset to None
return val
raise Exception("matplotlib has not been imported. aborting")
import matplotlib.pyplot as plt
-
if val == 'default':
- style_backup = dict([(k,plt.rcParams[k]) for k in mpl_stylesheet])
+ style_backup = dict([(k, plt.rcParams[k]) for k in mpl_stylesheet])
plt.rcParams.update(mpl_stylesheet)
elif not val:
if style_backup:
@@ -241,10 +244,11 @@ def mpl_style_cb(key):
cb=mpl_style_cb)
cf.register_option('height', 60, pc_height_doc,
validator=is_instance_factory([type(None), int]))
- cf.register_option('width',80, pc_width_doc,
+ cf.register_option('width', 80, pc_width_doc,
validator=is_instance_factory([type(None), int]))
# redirected to width, make defval identical
- cf.register_option('line_width', get_default_val('display.width'), pc_line_width_doc)
+ cf.register_option('line_width', get_default_val('display.width'),
+ pc_line_width_doc)
cf.deprecate_option('display.line_width',
msg=pc_line_width_deprecation_warning,
@@ -271,6 +275,7 @@ def mpl_style_cb(key):
# We don't want to start importing everything at the global context level
# or we'll hit circular deps.
+
def use_inf_as_null_cb(key):
from pandas.core.common import _use_inf_as_null
_use_inf_as_null(key)
@@ -283,7 +288,8 @@ def use_inf_as_null_cb(key):
# user warnings
chained_assignment = """
: string
- Raise an exception, warn, or no action if trying to use chained assignment, The default is warn
+    Raise an exception, warn, or no action if trying to use chained
+    assignment. The default is warn.
"""
with cf.config_prefix('mode'):
@@ -294,7 +300,8 @@ def use_inf_as_null_cb(key):
# Set up the io.excel specific configuration.
writer_engine_doc = """
: string
- The default Excel writer engine for '{ext}' files. Available options: '{default}' (the default){others}.
+ The default Excel writer engine for '{ext}' files. Available options:
+ '{default}' (the default){others}.
"""
with cf.config_prefix('io.excel'):
@@ -309,12 +316,13 @@ def use_inf_as_null_cb(key):
doc = writer_engine_doc.format(ext=ext, default=default,
others=options)
cf.register_option(ext + '.writer', default, doc, validator=str)
+
def _register_xlsx(engine, other):
cf.register_option('xlsx.writer', engine,
writer_engine_doc.format(ext='xlsx',
default=engine,
others=", '%s'" % other),
- validator=str)
+ validator=str)
try:
# better memory footprint
diff --git a/pandas/core/datetools.py b/pandas/core/datetools.py
index 91a29259d8f2f..1fb6ae4225f25 100644
--- a/pandas/core/datetools.py
+++ b/pandas/core/datetools.py
@@ -36,6 +36,7 @@
isMonthEnd = MonthEnd().onOffset
isBMonthEnd = BMonthEnd().onOffset
+
def _resolve_offset(freq, kwds):
if 'timeRule' in kwds or 'offset' in kwds:
offset = kwds.get('offset', None)
@@ -54,4 +55,3 @@ def _resolve_offset(freq, kwds):
FutureWarning)
return offset
-
diff --git a/pandas/core/format.py b/pandas/core/format.py
index ae0d95b1c3074..9abfe3c43b8e5 100644
--- a/pandas/core/format.py
+++ b/pandas/core/format.py
@@ -62,6 +62,7 @@
-------
formatted : string (or unicode, depending on data and options)"""
+
class CategoricalFormatter(object):
def __init__(self, categorical, buf=None, length=True,
na_rep='NaN', name=False, footer=True):
@@ -78,8 +79,8 @@ def _get_footer(self):
if self.name:
name = com.pprint_thing(self.categorical.name,
escape_chars=('\t', '\r', '\n'))
- footer += ('Name: %s' %
- name) if self.categorical.name is not None else ""
+ footer += ('Name: %s' % name if self.categorical.name is not None
+ else '')
if self.length:
if footer:
@@ -88,7 +89,7 @@ def _get_footer(self):
levheader = 'Levels (%d): ' % len(self.categorical.levels)
- #TODO: should max_line_width respect a setting?
+ # TODO: should max_line_width respect a setting?
levstring = np.array_repr(self.categorical.levels, max_line_width=60)
indent = ' ' * (levstring.find('[') + len(levheader) + 1)
lines = levstring.split('\n')
@@ -140,7 +141,7 @@ def __init__(self, series, buf=None, header=True, length=True,
if float_format is None:
float_format = get_option("display.float_format")
self.float_format = float_format
- self.dtype = dtype
+ self.dtype = dtype
def _get_footer(self):
footer = u('')
@@ -163,10 +164,11 @@ def _get_footer(self):
footer += 'Length: %d' % len(self.series)
if self.dtype:
- if getattr(self.series.dtype,'name',None):
+ name = getattr(self.series.dtype, 'name', None)
+ if name:
if footer:
footer += ', '
- footer += 'dtype: %s' % com.pprint_thing(self.series.dtype.name)
+ footer += 'dtype: %s' % com.pprint_thing(name)
return compat.text_type(footer)
@@ -213,6 +215,7 @@ def to_string(self):
return compat.text_type(u('\n').join(result))
+
def _strlen_func():
if compat.PY3: # pragma: no cover
_strlen = len
@@ -420,9 +423,10 @@ def get_col_type(dtype):
column_format = 'l%s' % ''.join(map(get_col_type, dtypes))
else:
column_format = '%s' % ''.join(map(get_col_type, dtypes))
- elif not isinstance(column_format, compat.string_types): # pragma: no cover
- raise AssertionError(('column_format must be str or unicode, not %s'
- % type(column_format)))
+ elif not isinstance(column_format,
+ compat.string_types): # pragma: no cover
+ raise AssertionError('column_format must be str or unicode, not %s'
+ % type(column_format))
def write(buf, frame, column_format, strcols):
buf.write('\\begin{tabular}{%s}\n' % column_format)
@@ -482,10 +486,9 @@ def is_numeric_dtype(dtype):
fmt_columns = lzip(*fmt_columns)
dtypes = self.frame.dtypes.values
need_leadsp = dict(zip(fmt_columns, map(is_numeric_dtype, dtypes)))
- str_columns = list(zip(*[[' ' + y
- if y not in self.formatters and need_leadsp[x]
- else y for y in x]
- for x in fmt_columns]))
+ str_columns = list(zip(*[
+ [' ' + y if y not in self.formatters and need_leadsp[x]
+ else y for y in x] for x in fmt_columns]))
if self.sparsify:
str_columns = _sparsify(str_columns)
@@ -690,11 +693,12 @@ def _column_header():
sentinal = com.sentinal_factory()
levels = self.columns.format(sparsify=sentinal, adjoin=False,
names=False)
- level_lengths = _get_level_lengths(levels,sentinal)
+ level_lengths = _get_level_lengths(levels, sentinal)
row_levels = self.frame.index.nlevels
- for lnum, (records, values) in enumerate(zip(level_lengths, levels)):
+ for lnum, (records, values) in enumerate(zip(level_lengths,
+ levels)):
name = self.columns.names[lnum]
row = [''] * (row_levels - 1) + ['' if name is None
else str(name)]
@@ -784,8 +788,9 @@ def _write_hierarchical_rows(self, fmt_values, indent):
# GH3547
sentinal = com.sentinal_factory()
- levels = frame.index.format(sparsify=sentinal, adjoin=False, names=False)
- level_lengths = _get_level_lengths(levels,sentinal)
+ levels = frame.index.format(sparsify=sentinal, adjoin=False,
+ names=False)
+ level_lengths = _get_level_lengths(levels, sentinal)
for i in range(len(frame)):
row = []
@@ -810,15 +815,16 @@ def _write_hierarchical_rows(self, fmt_values, indent):
else:
for i in range(len(frame)):
idx_values = list(zip(*frame.index.format(sparsify=False,
- adjoin=False,
- names=False)))
+ adjoin=False,
+ names=False)))
row = []
row.extend(idx_values[i])
row.extend(fmt_values[j][i] for j in range(ncols))
self.write_tr(row, indent, self.indent_delta, tags=None,
nindex_levels=frame.index.nlevels)
-def _get_level_lengths(levels,sentinal=''):
+
+def _get_level_lengths(levels, sentinal=''):
from itertools import groupby
def _make_grouper():
@@ -882,8 +888,8 @@ def __init__(self, obj, path_or_buf, sep=",", na_rep='', float_format=None,
#GH3457
if not self.obj.columns.is_unique and engine == 'python':
- msg= "columns.is_unique == False not supported with engine='python'"
- raise NotImplementedError(msg)
+ raise NotImplementedError("columns.is_unique == False not "
+ "supported with engine='python'")
self.tupleize_cols = tupleize_cols
self.has_mi_columns = isinstance(obj.columns, MultiIndex
@@ -892,24 +898,27 @@ def __init__(self, obj, path_or_buf, sep=",", na_rep='', float_format=None,
# validate mi options
if self.has_mi_columns:
if cols is not None:
- raise TypeError("cannot specify cols with a MultiIndex on the columns")
+ raise TypeError("cannot specify cols with a MultiIndex on the "
+ "columns")
if cols is not None:
- if isinstance(cols,Index):
- cols = cols.to_native_types(na_rep=na_rep,float_format=float_format,
+ if isinstance(cols, Index):
+ cols = cols.to_native_types(na_rep=na_rep,
+ float_format=float_format,
date_format=date_format)
else:
- cols=list(cols)
- self.obj = self.obj.loc[:,cols]
+ cols = list(cols)
+ self.obj = self.obj.loc[:, cols]
# update columns to include possible multiplicity of dupes
# and make sure sure cols is just a list of labels
cols = self.obj.columns
- if isinstance(cols,Index):
- cols = cols.to_native_types(na_rep=na_rep,float_format=float_format,
+ if isinstance(cols, Index):
+ cols = cols.to_native_types(na_rep=na_rep,
+ float_format=float_format,
date_format=date_format)
else:
- cols=list(cols)
+ cols = list(cols)
# save it
self.cols = cols
@@ -917,19 +926,22 @@ def __init__(self, obj, path_or_buf, sep=",", na_rep='', float_format=None,
# preallocate data 2d list
self.blocks = self.obj._data.blocks
ncols = sum(len(b.items) for b in self.blocks)
- self.data =[None] * ncols
+ self.data = [None] * ncols
self.column_map = self.obj._data.get_items_map(use_cached=False)
if chunksize is None:
- chunksize = (100000/ (len(self.cols) or 1)) or 1
+ chunksize = (100000 / (len(self.cols) or 1)) or 1
self.chunksize = chunksize
self.data_index = obj.index
if isinstance(obj.index, PeriodIndex):
self.data_index = obj.index.to_timestamp()
- if isinstance(self.data_index, DatetimeIndex) and date_format is not None:
- self.data_index = Index([x.strftime(date_format) if notnull(x) else '' for x in self.data_index])
+ if (isinstance(self.data_index, DatetimeIndex) and
+ date_format is not None):
+ self.data_index = Index([x.strftime(date_format)
+ if notnull(x) else ''
+ for x in self.data_index])
self.nlevels = getattr(self.data_index, 'nlevels', 1)
if not index:
@@ -961,7 +973,8 @@ def _helper_csv(self, writer, na_rep=None, cols=None,
index_label = ['']
else:
index_label = [index_label]
- elif not isinstance(index_label, (list, tuple, np.ndarray)):
+ elif not isinstance(index_label,
+ (list, tuple, np.ndarray)):
# given a string for a DF with Index
index_label = [index_label]
@@ -1004,8 +1017,9 @@ def strftime_with_nulls(x):
values = self.obj.copy()
values.index = data_index
- values.columns = values.columns.to_native_types(na_rep=na_rep,float_format=float_format,
- date_format=date_format)
+ values.columns = values.columns.to_native_types(
+ na_rep=na_rep, float_format=float_format,
+ date_format=date_format)
values = values[cols]
series = {}
@@ -1018,7 +1032,7 @@ def strftime_with_nulls(x):
if index:
if nlevels == 1:
row_fields = [idx]
- else: # handle MultiIndex
+ else: # handle MultiIndex
row_fields = list(idx)
for i, col in enumerate(cols):
val = series[col][j]
@@ -1040,7 +1054,8 @@ def save(self):
f = self.path_or_buf
close = False
else:
- f = com._get_handle(self.path_or_buf, self.mode, encoding=self.encoding)
+ f = com._get_handle(self.path_or_buf, self.mode,
+ encoding=self.encoding)
close = True
try:
@@ -1056,14 +1071,15 @@ def save(self):
if self.engine == 'python':
# to be removed in 0.13
self._helper_csv(self.writer, na_rep=self.na_rep,
- float_format=self.float_format, cols=self.cols,
- header=self.header, index=self.index,
- index_label=self.index_label, date_format=self.date_format)
+ float_format=self.float_format,
+ cols=self.cols, header=self.header,
+ index=self.index,
+ index_label=self.index_label,
+ date_format=self.date_format)
else:
self._save()
-
finally:
if close:
f.close()
@@ -1127,7 +1143,8 @@ def _save_header(self):
if has_mi_columns:
columns = obj.columns
- # write out the names for each level, then ALL of the values for each level
+ # write out the names for each level, then ALL of the values for
+ # each level
for i in range(columns.nlevels):
# we need at least 1 index column to write our col names
@@ -1135,10 +1152,10 @@ def _save_header(self):
if self.index:
# name is the first column
- col_line.append( columns.names[i] )
+ col_line.append(columns.names[i])
- if isinstance(index_label,list) and len(index_label)>1:
- col_line.extend([ '' ] * (len(index_label)-1))
+ if isinstance(index_label, list) and len(index_label) > 1:
+ col_line.extend([''] * (len(index_label)-1))
col_line.extend(columns.get_level_values(i))
@@ -1146,7 +1163,7 @@ def _save_header(self):
# add blanks for the columns, so that we
# have consistent seps
- encoded_labels.extend([ '' ] * len(columns))
+ encoded_labels.extend([''] * len(columns))
# write out the index label line
writer.writerow(encoded_labels)
@@ -1171,14 +1188,15 @@ def _save(self):
def _save_chunk(self, start_i, end_i):
- data_index = self.data_index
+ data_index = self.data_index
# create the data for a chunk
- slicer = slice(start_i,end_i)
+ slicer = slice(start_i, end_i)
for i in range(len(self.blocks)):
b = self.blocks[i]
d = b.to_native_types(slicer=slicer, na_rep=self.na_rep,
- float_format=self.float_format, date_format=self.date_format)
+ float_format=self.float_format,
+ date_format=self.date_format)
for i, item in enumerate(b.items):
@@ -1186,7 +1204,8 @@ def _save_chunk(self, start_i, end_i):
self.data[self.column_map[b][i]] = d[i]
ix = data_index.to_native_types(slicer=slicer, na_rep=self.na_rep,
- float_format=self.float_format, date_format=self.date_format)
+ float_format=self.float_format,
+ date_format=self.date_format)
lib.write_csv_rows(self.data, ix, self.nlevels, self.cols, self.writer)
@@ -1194,6 +1213,7 @@ def _save_chunk(self, start_i, end_i):
# ExcelCell = namedtuple("ExcelCell",
# 'row, col, val, style, mergestart, mergeend')
+
class ExcelCell(object):
__fields__ = ('row', 'col', 'val', 'style', 'mergestart', 'mergeend')
__slots__ = __fields__
@@ -1539,8 +1559,8 @@ def _format_strings(self):
else:
float_format = self.float_format
- formatter = (lambda x: com.pprint_thing(x, escape_chars=('\t', '\r', '\n'))) \
- if self.formatter is None else self.formatter
+ formatter = self.formatter if self.formatter is not None else \
+ (lambda x: com.pprint_thing(x, escape_chars=('\t', '\r', '\n')))
def _format(x):
if self.na_rep is not None and lib.checknull(x):
@@ -1584,19 +1604,20 @@ def __init__(self, *args, **kwargs):
def _format_with(self, fmt_str):
def _val(x, threshold):
if notnull(x):
- if threshold is None or abs(x) > get_option("display.chop_threshold"):
- return fmt_str % x
+ if (threshold is None or
+ abs(x) > get_option("display.chop_threshold")):
+ return fmt_str % x
else:
- if fmt_str.endswith("e"): # engineering format
- return "0"
+ if fmt_str.endswith("e"): # engineering format
+ return "0"
else:
- return fmt_str % 0
+ return fmt_str % 0
else:
return self.na_rep
threshold = get_option("display.chop_threshold")
- fmt_values = [ _val(x, threshold) for x in self.values]
+ fmt_values = [_val(x, threshold) for x in self.values]
return _trim_zeros(fmt_values, self.na_rep)
def get_result(self):
@@ -1654,6 +1675,7 @@ def get_result(self):
fmt_values = [formatter(x) for x in self.values]
return _make_fixed_width(fmt_values, self.justify)
+
def _format_datetime64(x, tz=None):
if isnull(x):
return 'NaT'
@@ -1674,12 +1696,14 @@ def get_result(self):
fmt_values = [formatter(x) for x in self.values]
return _make_fixed_width(fmt_values, self.justify)
+
def _format_timedelta64(x):
if isnull(x):
return 'NaT'
return lib.repr_timedelta64(x)
+
def _make_fixed_width(strings, justify='right', minimum=None):
if len(strings) == 0:
return strings
@@ -1762,6 +1786,8 @@ def _has_names(index):
# Global formatting options
_initial_defencoding = None
+
+
def detect_console_encoding():
"""
Try to find the most capable encoding supported by the console.
@@ -1776,13 +1802,15 @@ def detect_console_encoding():
except AttributeError:
pass
- if not encoding or 'ascii' in encoding.lower(): # try again for something better
+ # try again for something better
+ if not encoding or 'ascii' in encoding.lower():
try:
encoding = locale.getpreferredencoding()
except Exception:
pass
- if not encoding or 'ascii' in encoding.lower(): # when all else fails. this will usually be "ascii"
+ # when all else fails. this will usually be "ascii"
+ if not encoding or 'ascii' in encoding.lower():
encoding = sys.getdefaultencoding()
# GH3360, save the reported defencoding at import time
@@ -1804,8 +1832,8 @@ def get_console_size():
# Consider
# interactive shell terminal, can detect term size
- # interactive non-shell terminal (ipnb/ipqtconsole), cannot detect term size
- # non-interactive script, should disregard term size
+    # interactive non-shell term (ipnb/ipqtconsole), cannot detect term size
+    # non-interactive script, should disregard term size
# in addition
# width,height have default values, but setting to 'None' signals
@@ -1823,7 +1851,7 @@ def get_console_size():
# pure terminal
terminal_width, terminal_height = get_terminal_size()
else:
- terminal_width, terminal_height = None,None
+ terminal_width, terminal_height = None, None
# Note if the User sets width/Height to None (auto-detection)
# and we're in a script (non-inter), this will return (None,None)
diff --git a/pandas/core/frame.py b/pandas/core/frame.py
index 1222b5b93799d..b194c938b13cc 100644
--- a/pandas/core/frame.py
+++ b/pandas/core/frame.py
@@ -60,9 +60,8 @@
#----------------------------------------------------------------------
# Docstring templates
-_shared_doc_kwargs = dict(axes='index, columns',
- klass='DataFrame',
- axes_single_arg="{0,1,'index','columns'}")
+_shared_doc_kwargs = dict(axes='index, columns', klass='DataFrame',
+ axes_single_arg="{0,1,'index','columns'}")
_numeric_only_doc = """numeric_only : boolean, default None
Include only float, int, boolean data. If None, will attempt to use
@@ -196,15 +195,16 @@ def __init__(self, data=None, index=None, columns=None, dtype=None,
data = data._data
if isinstance(data, BlockManager):
- mgr = self._init_mgr(
- data, axes=dict(index=index, columns=columns), dtype=dtype, copy=copy)
+ mgr = self._init_mgr(data, axes=dict(index=index, columns=columns),
+ dtype=dtype, copy=copy)
elif isinstance(data, dict):
mgr = self._init_dict(data, index, columns, dtype=dtype)
elif isinstance(data, ma.MaskedArray):
# masked recarray
if isinstance(data, ma.mrecords.MaskedRecords):
- mgr = _masked_rec_array_to_mgr(data, index, columns, dtype, copy)
+ mgr = _masked_rec_array_to_mgr(data, index, columns, dtype,
+ copy)
# a masked array
else:
@@ -224,8 +224,9 @@ def __init__(self, data=None, index=None, columns=None, dtype=None,
if columns is None:
columns = data_columns
mgr = self._init_dict(data, index, columns, dtype=dtype)
- elif getattr(data,'name',None):
- mgr = self._init_dict({ data.name : data }, index, columns, dtype=dtype)
+ elif getattr(data, 'name', None):
+ mgr = self._init_dict({data.name: data}, index, columns,
+ dtype=dtype)
else:
mgr = self._init_ndarray(data, index, columns, dtype=dtype,
copy=copy)
@@ -236,7 +237,7 @@ def __init__(self, data=None, index=None, columns=None, dtype=None,
if index is None and isinstance(data[0], Series):
index = _get_names_from_index(data)
- if is_list_like(data[0]) and getattr(data[0],'ndim',1) == 1:
+ if is_list_like(data[0]) and getattr(data[0], 'ndim', 1) == 1:
arrays, columns = _to_arrays(data, columns, dtype=dtype)
columns = _ensure_index(columns)
@@ -283,7 +284,8 @@ def _init_dict(self, data, index, columns, dtype=None):
# prefilter if columns passed
- data = dict((k, v) for k, v in compat.iteritems(data) if k in columns)
+ data = dict((k, v) for k, v in compat.iteritems(data)
+ if k in columns)
if index is None:
index = extract_index(list(data.values()))
@@ -395,7 +397,8 @@ def _repr_fits_horizontal_(self, ignore_width=False):
return False
if (ignore_width # used by repr_html under IPython notebook
- or not com.in_interactive_session()): # scripts ignore terminal dims
+ # scripts ignore terminal dims
+ or not com.in_interactive_session()):
return True
if (get_option('display.width') is not None or
@@ -671,22 +674,25 @@ def to_dict(self, outtype='dict'):
else: # pragma: no cover
raise ValueError("outtype %s not understood" % outtype)
- def to_gbq(self, destination_table, schema=None, col_order=None, if_exists='fail', **kwargs):
+ def to_gbq(self, destination_table, schema=None, col_order=None,
+ if_exists='fail', **kwargs):
"""Write a DataFrame to a Google BigQuery table.
- If the table exists, the DataFrame will be appended. If not, a new table
- will be created, in which case the schema will have to be specified. By default,
- rows will be written in the order they appear in the DataFrame, though
- the user may specify an alternative order.
+ If the table exists, the DataFrame will be appended. If not, a new
+ table will be created, in which case the schema will have to be
+ specified. By default, rows will be written in the order they appear
+ in the DataFrame, though the user may specify an alternative order.
Parameters
---------------
destination_table : string
name of table to be written, in the form 'dataset.tablename'
schema : sequence (optional)
- list of column types in order for data to be inserted, e.g. ['INTEGER', 'TIMESTAMP', 'BOOLEAN']
+ list of column types in order for data to be inserted, e.g.
+ ['INTEGER', 'TIMESTAMP', 'BOOLEAN']
col_order : sequence (optional)
- order which columns are to be inserted, e.g. ['primary_key', 'birthday', 'username']
+ order which columns are to be inserted, e.g. ['primary_key',
+ 'birthday', 'username']
if_exists : {'fail', 'replace', 'append'} (optional)
- fail: If table exists, do nothing.
- replace: If table exists, drop it, recreate it, and insert data.
@@ -696,15 +702,19 @@ def to_gbq(self, destination_table, schema=None, col_order=None, if_exists='fail
Raises
------
SchemaMissing :
- Raised if the 'if_exists' parameter is set to 'replace', but no schema is specified
+ Raised if the 'if_exists' parameter is set to 'replace', but no
+ schema is specified
TableExists :
- Raised if the specified 'destination_table' exists but the 'if_exists' parameter is set to 'fail' (the default)
+ Raised if the specified 'destination_table' exists but the
+ 'if_exists' parameter is set to 'fail' (the default)
InvalidSchema :
- Raised if the 'schema' parameter does not match the provided DataFrame
+ Raised if the 'schema' parameter does not match the provided
+ DataFrame
"""
from pandas.io import gbq
- return gbq.to_gbq(self, destination_table, schema=None, col_order=None, if_exists='fail', **kwargs)
+ return gbq.to_gbq(self, destination_table, schema=None, col_order=None,
+ if_exists='fail', **kwargs)
@classmethod
def from_records(cls, data, index=None, exclude=None, columns=None,
@@ -757,7 +767,7 @@ def from_records(cls, data, index=None, exclude=None, columns=None,
values = [first_row]
#if unknown length iterable (generator)
- if nrows == None:
+ if nrows is None:
#consume whole generator
values += list(data)
else:
@@ -785,7 +795,8 @@ def from_records(cls, data, index=None, exclude=None, columns=None,
arr_columns.append(k)
arrays.append(v)
- arrays, arr_columns = _reorder_arrays(arrays, arr_columns, columns)
+ arrays, arr_columns = _reorder_arrays(arrays, arr_columns,
+ columns)
elif isinstance(data, (np.ndarray, DataFrame)):
arrays, columns = _to_arrays(data, columns)
@@ -864,7 +875,7 @@ def to_records(self, index=True, convert_datetime64=True):
else:
if isinstance(self.index, MultiIndex):
# array of tuples to numpy cols. copy copy copy
- ix_vals = lmap(np.array,zip(*self.index.values))
+ ix_vals = lmap(np.array, zip(*self.index.values))
else:
ix_vals = [self.index.values]
@@ -1017,13 +1028,13 @@ def to_panel(self):
from pandas.core.reshape import block2d_to_blocknd
# only support this kind for now
- if (not isinstance(self.index, MultiIndex) or # pragma: no cover
+ if (not isinstance(self.index, MultiIndex) or # pragma: no cover
len(self.index.levels) != 2):
raise NotImplementedError('Only 2-level MultiIndex are supported.')
if not self.index.is_unique:
raise ValueError("Can't convert non-uniquely indexed "
- "DataFrame to Panel")
+ "DataFrame to Panel")
self._consolidate_inplace()
@@ -1228,8 +1239,8 @@ def to_stata(
>>> writer.write_file()
"""
from pandas.io.stata import StataWriter
- writer = StataWriter(
- fname, self, convert_dates=convert_dates, encoding=encoding, byteorder=byteorder)
+ writer = StataWriter(fname, self, convert_dates=convert_dates,
+ encoding=encoding, byteorder=byteorder)
writer.write_file()
def to_sql(self, name, con, flavor='sqlite', if_exists='fail', **kwargs):
@@ -1407,7 +1418,7 @@ def info(self, verbose=True, buf=None, max_cols=None):
len(self.columns))
space = max([len(com.pprint_thing(k)) for k in self.columns]) + 4
counts = self.count()
- if len(cols) != len(counts): # pragma: no cover
+ if len(cols) != len(counts): # pragma: no cover
raise AssertionError('Columns must equal counts (%d != %d)' %
(len(cols), len(counts)))
for col, count in compat.iteritems(counts):
@@ -1516,8 +1527,8 @@ def set_value(self, index, col, value):
except KeyError:
# set using a non-recursive method & reset the cache
- self.loc[index,col] = value
- self._item_cache.pop(col,None)
+ self.loc[index, col] = value
+ self._item_cache.pop(col, None)
return self
@@ -1581,7 +1592,7 @@ def _ixs(self, i, axis=0, copy=False):
# a numpy error (as numpy should really raise)
values = self._data.iget(i)
if not len(values):
- values = np.array([np.nan]*len(self.index),dtype=object)
+ values = np.array([np.nan]*len(self.index), dtype=object)
return self._constructor_sliced.from_array(
values, index=self.index,
name=label, fastpath=True)
@@ -1824,7 +1835,8 @@ def _box_item_values(self, key, values):
def _box_col_values(self, values, items):
""" provide boxed values for a column """
- return self._constructor_sliced.from_array(values, index=self.index, name=items, fastpath=True)
+ return self._constructor_sliced.from_array(values, index=self.index,
+ name=items, fastpath=True)
def __setitem__(self, key, value):
# see if we can slice the rows
@@ -1877,11 +1889,13 @@ def _setitem_frame(self, key, value):
def _ensure_valid_index(self, value):
"""
- ensure that if we don't have an index, that we can create one from the passed value
+ ensure that if we don't have an index, we can create one from the
+ passed value
"""
if not len(self.index):
if not isinstance(value, Series):
- raise ValueError("cannot set a frame with no defined index and a non-series")
+ raise ValueError('Cannot set a frame with no defined index '
+ 'and a non-series')
self._data.set_axis(1, value.index.copy(), check_axis=False)
def _set_item(self, key, value):
@@ -1909,7 +1923,8 @@ def _set_item(self, key, value):
def insert(self, loc, column, value, allow_duplicates=False):
"""
Insert column into DataFrame at specified location.
- if allow_duplicates is False, Raises Exception if column is already contained in the DataFrame
+ if allow_duplicates is False, raises an exception if the column is
+ already contained in the DataFrame
Parameters
----------
@@ -1945,8 +1960,8 @@ def _sanitize_column(self, key, value):
value = value.T
elif isinstance(value, Index) or _is_sequence(value):
if len(value) != len(self.index):
- raise ValueError('Length of values does not match '
- 'length of index')
+ raise ValueError('Length of values does not match length of '
+ 'index')
if not isinstance(value, (np.ndarray, Index)):
if isinstance(value, list) and len(value) > 0:
@@ -1967,7 +1982,8 @@ def _sanitize_column(self, key, value):
# broadcast across multiple columns if necessary
if key in self.columns and value.ndim == 1:
- if not self.columns.is_unique or isinstance(self.columns, MultiIndex):
+ if not self.columns.is_unique or isinstance(self.columns,
+ MultiIndex):
existing_piece = self[key]
if isinstance(existing_piece, DataFrame):
value = np.tile(value, (len(existing_piece.columns), 1))
@@ -2053,7 +2069,7 @@ def xs(self, key, axis=0, level=None, copy=True, drop_level=True):
labels = self._get_axis(axis)
if level is not None:
loc, new_ax = labels.get_loc_level(key, level=level,
- drop_level=drop_level)
+ drop_level=drop_level)
if not copy and not isinstance(loc, slice):
raise ValueError('Cannot retrieve view (copy=False)')
@@ -2088,7 +2104,7 @@ def xs(self, key, axis=0, level=None, copy=True, drop_level=True):
index = self.index
if isinstance(index, MultiIndex):
loc, new_index = self.index.get_loc_level(key,
- drop_level=drop_level)
+ drop_level=drop_level)
else:
loc = self.index.get_loc(key)
@@ -2146,8 +2162,7 @@ def lookup(self, row_labels, col_labels):
"""
n = len(row_labels)
if n != len(col_labels):
- raise ValueError('Row labels must have same size as '
- 'column labels')
+ raise ValueError('Row labels must have same size as column labels')
thresh = 1000
if not self._is_mixed_type or n > thresh:
@@ -2173,13 +2188,14 @@ def lookup(self, row_labels, col_labels):
#----------------------------------------------------------------------
# Reindexing and alignment
- def _reindex_axes(self, axes, level, limit, method, fill_value, copy, takeable=False):
+ def _reindex_axes(self, axes, level, limit, method, fill_value, copy,
+ takeable=False):
frame = self
columns = axes['columns']
if columns is not None:
- frame = frame._reindex_columns(columns, copy, level,
- fill_value, limit, takeable=takeable)
+ frame = frame._reindex_columns(columns, copy, level, fill_value,
+ limit, takeable=takeable)
index = axes['index']
if index is not None:
@@ -2191,18 +2207,22 @@ def _reindex_axes(self, axes, level, limit, method, fill_value, copy, takeable=F
def _reindex_index(self, new_index, method, copy, level, fill_value=NA,
limit=None, takeable=False):
new_index, indexer = self.index.reindex(new_index, method, level,
- limit=limit, copy_if_needed=True,
+ limit=limit,
+ copy_if_needed=True,
takeable=takeable)
return self._reindex_with_indexers({0: [new_index, indexer]},
- copy=copy, fill_value=fill_value, allow_dups=takeable)
+ copy=copy, fill_value=fill_value,
+ allow_dups=takeable)
def _reindex_columns(self, new_columns, copy, level, fill_value=NA,
limit=None, takeable=False):
new_columns, indexer = self.columns.reindex(new_columns, level=level,
- limit=limit, copy_if_needed=True,
+ limit=limit,
+ copy_if_needed=True,
takeable=takeable)
return self._reindex_with_indexers({1: [new_columns, indexer]},
- copy=copy, fill_value=fill_value, allow_dups=takeable)
+ copy=copy, fill_value=fill_value,
+ allow_dups=takeable)
def _reindex_multi(self, axes, copy, fill_value):
""" we are guaranteed non-Nones in the axes! """
@@ -2218,7 +2238,9 @@ def _reindex_multi(self, axes, copy, fill_value):
columns=new_columns)
else:
return self._reindex_with_indexers({0: [new_index, row_indexer],
- 1: [new_columns, col_indexer]}, copy=copy, fill_value=fill_value)
+ 1: [new_columns, col_indexer]},
+ copy=copy,
+ fill_value=fill_value)
@Appender(_shared_docs['reindex'] % _shared_doc_kwargs)
def reindex(self, index=None, columns=None, **kwargs):
@@ -2434,7 +2456,8 @@ def _maybe_cast(values, labels=None):
#----------------------------------------------------------------------
# Reindex-based selection methods
- def dropna(self, axis=0, how='any', thresh=None, subset=None, inplace=False):
+ def dropna(self, axis=0, how='any', thresh=None, subset=None,
+ inplace=False):
"""
Return object with labels on given axis omitted where alternately any
or all of the data are missing
@@ -2493,7 +2516,6 @@ def dropna(self, axis=0, how='any', thresh=None, subset=None, inplace=False):
else:
return result
-
def drop_duplicates(self, cols=None, take_last=False, inplace=False):
"""
Return DataFrame with duplicate rows removed, optionally only
@@ -2630,14 +2652,15 @@ def sort_index(self, axis=0, by=None, ascending=True, inplace=False,
from pandas.core.groupby import _lexsort_indexer
axis = self._get_axis_number(axis)
- if axis not in [0, 1]: # pragma: no cover
+ if axis not in [0, 1]: # pragma: no cover
raise AssertionError('Axis must be 0 or 1, got %s' % str(axis))
labels = self._get_axis(axis)
if by is not None:
if axis != 0:
- raise ValueError('When sorting by column, axis must be 0 (rows)')
+ raise ValueError('When sorting by column, axis must be 0 '
+ '(rows)')
if not isinstance(by, (tuple, list)):
by = [by]
if com._is_sequence(ascending) and len(by) != len(ascending):
@@ -2721,9 +2744,9 @@ def sortlevel(self, level=0, axis=0, ascending=True, inplace=False):
ax = 'index' if axis == 0 else 'columns'
if new_axis.is_unique:
- d = { ax : new_axis }
+ d = {ax: new_axis}
else:
- d = { ax : indexer, 'takeable' : True }
+ d = {ax: indexer, 'takeable': True}
return self.reindex(**d)
if inplace:
@@ -2816,18 +2839,23 @@ def _arith_op(left, right):
def f(col):
r = _arith_op(this[col].values, other[col].values)
- return self._constructor_sliced(r,index=new_index,dtype=r.dtype)
+ return self._constructor_sliced(r, index=new_index,
+ dtype=r.dtype)
- result = dict([ (col, f(col)) for col in this ])
+ result = dict([(col, f(col)) for col in this])
# non-unique
else:
def f(i):
- r = _arith_op(this.iloc[:,i].values, other.iloc[:,i].values)
- return self._constructor_sliced(r,index=new_index,dtype=r.dtype)
-
- result = dict([ (i,f(i)) for i, col in enumerate(this.columns) ])
+ r = _arith_op(this.iloc[:, i].values,
+ other.iloc[:, i].values)
+ return self._constructor_sliced(r, index=new_index,
+ dtype=r.dtype)
+
+ result = dict([
+ (i, f(i)) for i, col in enumerate(this.columns)
+ ])
result = self._constructor(result, index=new_index, copy=False)
result.columns = new_columns
return result
@@ -2894,7 +2922,6 @@ def _combine_const(self, other, func, raise_on_error=True):
new_data = self._data.eval(func, other, raise_on_error=raise_on_error)
return self._constructor(new_data)
-
def _compare_frame_evaluate(self, other, func, str_rep):
# unique
@@ -2907,7 +2934,8 @@ def _compare(a, b):
# non-unique
else:
def _compare(a, b):
- return dict([(i,func(a.iloc[:,i], b.iloc[:,i])) for i, col in enumerate(a.columns)])
+ return dict([(i, func(a.iloc[:, i], b.iloc[:, i]))
+ for i, col in enumerate(a.columns)])
new_data = expressions.evaluate(_compare, str_rep, self, other)
result = self._constructor(data=new_data, index=self.index,
copy=False)
@@ -2917,7 +2945,7 @@ def _compare(a, b):
def _compare_frame(self, other, func, str_rep):
if not self._indexed_same(other):
raise ValueError('Can only compare identically-labeled '
- 'DataFrame objects')
+ 'DataFrame objects')
return self._compare_frame_evaluate(other, func, str_rep)
def _flex_compare_frame(self, other, func, str_rep, level):
@@ -3046,7 +3074,8 @@ def combiner(x, y, needs_i8_conversion=False):
else:
mask = isnull(x_values)
- return expressions.where(mask, y_values, x_values, raise_on_error=True)
+ return expressions.where(mask, y_values, x_values,
+ raise_on_error=True)
return self.combine(other, combiner, overwrite=False)
@@ -3070,7 +3099,7 @@ def update(self, other, join='left', overwrite=True, filter_func=None,
contain data in the same place.
"""
# TODO: Support other joins
- if join != 'left': # pragma: no cover
+ if join != 'left': # pragma: no cover
raise NotImplementedError("Only left join is supported")
if not isinstance(other, DataFrame):
@@ -3413,7 +3442,7 @@ def _apply_standard(self, func, axis, ignore_failures=False, reduce=True):
series_gen = (Series.from_array(arr, index=res_columns, name=name)
for i, (arr, name) in
enumerate(zip(values, res_index)))
- else: # pragma : no cover
+ else: # pragma : no cover
raise AssertionError('Axis must be 0 or 1, got %s' % str(axis))
i = None
@@ -3442,7 +3471,7 @@ def _apply_standard(self, func, axis, ignore_failures=False, reduce=True):
if i is not None:
k = res_index[i]
e.args = e.args + ('occurred at index %s' %
- com.pprint_thing(k),)
+ com.pprint_thing(k),)
raise
if len(results) > 0 and _is_sequence(results[0]):
@@ -3837,13 +3866,13 @@ def pretty_name(x):
destat = []
for i in range(len(numdata.columns)):
- series = numdata.iloc[:,i]
+ series = numdata.iloc[:, i]
destat.append([series.count(), series.mean(), series.std(),
series.min(), series.quantile(lb), series.median(),
series.quantile(ub), series.max()])
- return self._constructor(lmap(list, zip(*destat)), index=destat_columns,
- columns=numdata.columns)
+ return self._constructor(lmap(list, zip(*destat)),
+ index=destat_columns, columns=numdata.columns)
#----------------------------------------------------------------------
# ndarray-like stats methods
@@ -3920,7 +3949,8 @@ def _count_level(self, level, axis=0, numeric_only=False):
else:
return result
- def any(self, axis=None, bool_only=None, skipna=True, level=None, **kwargs):
+ def any(self, axis=None, bool_only=None, skipna=True, level=None,
+ **kwargs):
"""
Return whether any element is True over requested axis.
%(na_action)s
@@ -3950,7 +3980,8 @@ def any(self, axis=None, bool_only=None, skipna=True, level=None, **kwargs):
return self._reduce(nanops.nanany, axis=axis, skipna=skipna,
numeric_only=bool_only, filter_type='bool')
- def all(self, axis=None, bool_only=None, skipna=True, level=None, **kwargs):
+ def all(self, axis=None, bool_only=None, skipna=True, level=None,
+ **kwargs):
"""
Return whether all elements are True over requested axis.
%(na_action)s
@@ -3987,7 +4018,8 @@ def _reduce(self, op, axis=0, skipna=True, numeric_only=None,
labels = self._get_agg_axis(axis)
# exclude timedelta/datetime unless we are uniform types
- if axis == 1 and self._is_mixed_type and len(set(self.dtypes) & _DATELIKE_DTYPES):
+ if axis == 1 and self._is_mixed_type and len(set(self.dtypes) &
+ _DATELIKE_DTYPES):
numeric_only = True
if numeric_only is None:
@@ -4020,7 +4052,7 @@ def _reduce(self, op, axis=0, skipna=True, numeric_only=None,
data = self._get_numeric_data()
elif filter_type == 'bool':
data = self._get_bool_data()
- else: # pragma: no cover
+ else: # pragma: no cover
msg = ("Generating numeric_only data with filter_type %s "
"not supported." % filter_type)
raise NotImplementedError(msg)
@@ -4167,6 +4199,7 @@ def f(arr):
data = self._get_numeric_data() if numeric_only else self
return data.apply(f, axis=axis)
+
def rank(self, axis=0, numeric_only=None, method='average',
na_option='keep', ascending=True):
"""
@@ -4242,7 +4275,7 @@ def to_timestamp(self, freq=None, how='start', axis=0, copy=True):
new_data.set_axis(1, self.index.to_timestamp(freq=freq, how=how))
elif axis == 1:
new_data.set_axis(0, self.columns.to_timestamp(freq=freq, how=how))
- else: # pragma: no cover
+ else: # pragma: no cover
raise AssertionError('Axis must be 0 or 1. Got %s' % str(axis))
return self._constructor(new_data)
@@ -4277,7 +4310,7 @@ def to_period(self, freq=None, axis=0, copy=True):
if freq is None:
freq = self.columns.freqstr or self.columns.inferred_freq
new_data.set_axis(0, self.columns.to_period(freq=freq))
- else: # pragma: no cover
+ else: # pragma: no cover
raise AssertionError('Axis must be 0 or 1. Got %s' % str(axis))
return self._constructor(new_data)
@@ -4510,7 +4543,7 @@ def extract_index(data):
elif isinstance(v, dict):
have_dicts = True
indexes.append(list(v.keys()))
- elif is_list_like(v) and getattr(v,'ndim',1) == 1:
+ elif is_list_like(v) and getattr(v, 'ndim', 1) == 1:
have_raw_arrays = True
raw_lengths.append(len(v))
@@ -4658,7 +4691,8 @@ def _masked_rec_array_to_mgr(data, index, columns, dtype, copy):
def _reorder_arrays(arrays, arr_columns, columns):
# reorder according to the columns
- if columns is not None and len(columns) and arr_columns is not None and len(arr_columns):
+ if (columns is not None and len(columns) and arr_columns is not None and
+ len(arr_columns)):
indexer = _ensure_index(
arr_columns).get_indexer(columns)
arr_columns = _ensure_index(
@@ -4681,13 +4715,15 @@ def _list_of_series_to_arrays(data, columns, coerce_float=False, dtype=None):
from pandas.core.index import _get_combined_index
if columns is None:
- columns = _get_combined_index([s.index for s in data if getattr(s,'index',None) is not None ])
+ columns = _get_combined_index([
+ s.index for s in data if getattr(s, 'index', None) is not None
+ ])
indexer_cache = {}
aligned_values = []
for s in data:
- index = getattr(s,'index',None)
+ index = getattr(s, 'index', None)
if index is None:
index = _default_index(len(s))
@@ -4741,13 +4777,13 @@ def _convert_object_array(content, columns, coerce_float=False, dtype=None):
def _get_names_from_index(data):
index = lrange(len(data))
- has_some_name = any([getattr(s,'name',None) is not None for s in data])
+ has_some_name = any([getattr(s, 'name', None) is not None for s in data])
if not has_some_name:
return index
count = 0
for i, s in enumerate(data):
- n = getattr(s,'name',None)
+ n = getattr(s, 'name', None)
if n is not None:
index[i] = n
else:
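The frame.py hunks above are mechanical PEP8 rewraps of existing APIs; behavior is unchanged. As a sanity check on one of the touched methods, `DataFrame.dropna` (whose signature is merely rewrapped above), a small illustrative snippet — not part of this patch:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({'a': [1.0, np.nan, 3.0],
                   'b': [np.nan, np.nan, 6.0]})

# how='all' drops only rows where every value is missing
all_dropped = df.dropna(how='all')

# thresh=2 keeps rows with at least two non-NA values
thresh_kept = df.dropna(thresh=2)

print(len(all_dropped), len(thresh_kept))  # 2 1
```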
diff --git a/pandas/core/generic.py b/pandas/core/generic.py
index efa083e239f63..f960f64e7be16 100644
--- a/pandas/core/generic.py
+++ b/pandas/core/generic.py
@@ -7,7 +7,8 @@
import pandas as pd
from pandas.core.base import PandasObject
-from pandas.core.index import Index, MultiIndex, _ensure_index, InvalidIndexError
+from pandas.core.index import (Index, MultiIndex, _ensure_index,
+ InvalidIndexError)
import pandas.core.indexing as indexing
from pandas.core.indexing import _maybe_convert_indices
from pandas.tseries.index import DatetimeIndex
@@ -34,6 +35,7 @@
args_transpose='axes to permute (int or label for'
' object)')
+
def is_dictlike(x):
return isinstance(x, (dict, com.ABCSeries))
@@ -49,7 +51,8 @@ def _single_replace(self, to_replace, method, inplace, limit):
if values.dtype == orig_dtype and inplace:
return
- result = pd.Series(values, index=self.index, dtype=self.dtype).__finalize__(self)
+ result = pd.Series(values, index=self.index,
+ dtype=self.dtype).__finalize__(self)
if inplace:
self._data = result._data
@@ -70,13 +73,14 @@ class NDFrame(PandasObject):
axes : list
copy : boolean, default False
"""
- _internal_names = [
- '_data', 'name', '_cacher', '_is_copy', '_subtyp', '_index', '_default_kind', '_default_fill_value']
+ _internal_names = ['_data', 'name', '_cacher', '_is_copy', '_subtyp',
+ '_index', '_default_kind', '_default_fill_value']
_internal_names_set = set(_internal_names)
_metadata = []
_is_copy = None
- def __init__(self, data, axes=None, copy=False, dtype=None, fastpath=False):
+ def __init__(self, data, axes=None, copy=False, dtype=None,
+ fastpath=False):
if not fastpath:
if dtype is not None:
@@ -101,7 +105,8 @@ def _validate_dtype(self, dtype):
# a compound dtype
if dtype.kind == 'V':
raise NotImplementedError("compound dtypes are not implemented"
- "in the {0} constructor".format(self.__class__.__name__))
+ "in the {0} constructor"
+ .format(self.__class__.__name__))
return dtype
def _init_mgr(self, mgr, axes=None, dtype=None, copy=False):
@@ -136,7 +141,7 @@ def __unicode__(self):
def _local_dir(self):
""" add the string-like attributes from the info_axis """
return [c for c in self._info_axis
- if isinstance(c, string_types) and isidentifier(c) ]
+ if isinstance(c, string_types) and isidentifier(c)]
@property
def _constructor_sliced(self):
@@ -156,7 +161,8 @@ def _setup_axes(
stat_axis_num : the number of axis for the default stats (int)
aliases : other names for a single axis (dict)
slicers : how axes slice to others (dict)
- axes_are_reversed : boolean whether to treat passed axes as reversed (DataFrame)
+ axes_are_reversed : boolean whether to treat passed axes as
+ reversed (DataFrame)
build_axes : setup the axis properties (default True)
"""
@@ -238,7 +244,9 @@ def _construct_axes_from_arguments(self, args, kwargs, require_all=False):
if a in kwargs:
if alias in kwargs:
raise TypeError(
- "arguments are multually exclusive for [%s,%s]" % (a, alias))
+ "arguments are mutually exclusive for [%s,%s]" %
+ (a, alias)
+ )
continue
if alias in kwargs:
kwargs[a] = kwargs.pop(alias)
@@ -277,7 +285,8 @@ def _get_axis_number(self, axis):
return self._AXIS_NUMBERS[axis]
except:
pass
- raise ValueError('No axis named {0} for object type {1}'.format(axis,type(self)))
+ raise ValueError('No axis named {0} for object type {1}'
+ .format(axis, type(self)))
def _get_axis_name(self, axis):
axis = self._AXIS_ALIASES.get(axis, axis)
@@ -289,7 +298,8 @@ def _get_axis_name(self, axis):
return self._AXIS_NAMES[axis]
except:
pass
- raise ValueError('No axis named {0} for object type {1}'.format(axis,type(self)))
+ raise ValueError('No axis named {0} for object type {1}'
+ .format(axis, type(self)))
def _get_axis(self, axis):
name = self._get_axis_name(axis)
@@ -399,6 +409,7 @@ def _set_axis(self, axis, labels):
-------
y : same as input
"""
+
@Appender(_shared_docs['transpose'] % _shared_doc_kwargs)
def transpose(self, *args, **kwargs):
@@ -458,7 +469,8 @@ def pop(self, item):
def squeeze(self):
""" squeeze length 1 dimensions """
try:
- return self.ix[tuple([slice(None) if len(a) > 1 else a[0] for a in self.axes])]
+ return self.ix[tuple([slice(None) if len(a) > 1 else a[0]
+ for a in self.axes])]
except:
return self
@@ -506,6 +518,7 @@ def swaplevel(self, i, j, axis=0):
-------
renamed : %(klass)s (new object)
"""
+
@Appender(_shared_docs['rename'] % dict(axes='axes keywords for this'
' object', klass='NDFrame'))
def rename(self, *args, **kwargs):
@@ -530,14 +543,14 @@ def f(x):
return f
-
self._consolidate_inplace()
result = self if inplace else self.copy(deep=copy)
# start in the axis order to eliminate too many copies
for axis in lrange(self._AXIS_LEN):
v = axes.get(self._AXIS_NAMES[axis])
- if v is None: continue
+ if v is None:
+ continue
f = _get_rename_function(v)
baxis = self._get_block_manager_axis(axis)
@@ -572,7 +585,7 @@ def rename_axis(self, mapper, axis=0, copy=True, inplace=False):
renamed : type of caller
"""
axis = self._get_axis_name(axis)
- d = { 'copy' : copy, 'inplace' : inplace }
+ d = {'copy': copy, 'inplace': inplace}
d[axis] = mapper
return self.rename(**d)
@@ -580,7 +593,8 @@ def rename_axis(self, mapper, axis=0, copy=True, inplace=False):
# Comparisons
def _indexed_same(self, other):
- return all([self._get_axis(a).equals(other._get_axis(a)) for a in self._AXIS_ORDERS])
+ return all([self._get_axis(a).equals(other._get_axis(a))
+ for a in self._AXIS_ORDERS])
def __neg__(self):
arr = operator.neg(_values_from_object(self))
@@ -626,7 +640,8 @@ def iteritems(self):
def iterkv(self, *args, **kwargs):
"iteritems alias used to get around 2to3. Deprecated"
warnings.warn("iterkv is deprecated and will be removed in a future "
- "release, use ``iteritems`` instead.", DeprecationWarning)
+ "release, use ``iteritems`` instead.",
+ DeprecationWarning)
return self.iteritems(*args, **kwargs)
def __len__(self):
@@ -644,7 +659,8 @@ def empty(self):
def __nonzero__(self):
raise ValueError("The truth value of a {0} is ambiguous. "
- "Use a.empty, a.bool(), a.item(), a.any() or a.all().".format(self.__class__.__name__))
+ "Use a.empty, a.bool(), a.item(), a.any() or a.all()."
+ .format(self.__class__.__name__))
__bool__ = __nonzero__
@@ -655,10 +671,11 @@ def bool(self):
Raise a ValueError if the PandasObject does not have exactly
1 element, or that element is not boolean """
v = self.squeeze()
- if isinstance(v, (bool,np.bool_)):
+ if isinstance(v, (bool, np.bool_)):
return bool(v)
elif np.isscalar(v):
- raise ValueError("bool cannot act on a non-boolean single element {0}".format(self.__class__.__name__))
+ raise ValueError("bool cannot act on a non-boolean single element "
+ "{0}".format(self.__class__.__name__))
self.__nonzero__()
@@ -823,9 +840,9 @@ def to_hdf(self, path_or_buf, key, **kwargs):
fixed(f) : Fixed format
Fast writing/reading. Not-appendable, nor searchable
table(t) : Table format
- Write as a PyTables Table structure which may perform worse but
- allow more flexible operations like searching / selecting subsets
- of the data
+ Write as a PyTables Table structure which may perform
+ worse but allows more flexible operations like searching /
+ selecting subsets of the data
append : boolean, default False
For Table formats, append the input data to the existing
complevel : int, 1-9, default 0
@@ -852,10 +869,11 @@ def to_msgpack(self, path_or_buf=None, **kwargs):
Parameters
----------
path : string File path, buffer-like, or None
- if None, return generated string
+ if None, return generated string
append : boolean whether to append to an existing msgpack
- (default is False)
- compress : type of compressor (zlib or blosc), default to None (no compression)
+ (default is False)
+ compress : type of compressor (zlib or blosc), default to None (no
+ compression)
"""
from pandas.io import packers
@@ -956,7 +974,7 @@ def _get_item_cache(self, item):
values = self._data.get(item)
res = self._box_item_values(item, values)
cache[item] = res
- res._cacher = (item,weakref.ref(self))
+ res._cacher = (item, weakref.ref(self))
return res
def _box_item_values(self, key, values):
@@ -970,10 +988,10 @@ def _maybe_cache_changed(self, item, value):
def _maybe_update_cacher(self, clear=False):
""" see if we need to update our parent cacher
if clear, then clear our cache """
- cacher = getattr(self,'_cacher',None)
+ cacher = getattr(self, '_cacher', None)
if cacher is not None:
try:
- cacher[1]()._maybe_cache_changed(cacher[0],self)
+ cacher[1]()._maybe_cache_changed(cacher[0], self)
except:
# our referant is dead
@@ -984,7 +1002,7 @@ def _maybe_update_cacher(self, clear=False):
def _clear_item_cache(self, i=None):
if i is not None:
- self._item_cache.pop(i,None)
+ self._item_cache.pop(i, None)
else:
self._item_cache.clear()
@@ -1002,11 +1020,13 @@ def _check_setitem_copy(self):
if self._is_copy:
value = config._get_option_fast('mode.chained_assignment')
- t = "A value is trying to be set on a copy of a slice from a DataFrame.\nTry using .loc[row_index,col_indexer] = value instead"
+ t = ("A value is trying to be set on a copy of a slice from a "
+ "DataFrame.\nTry using .loc[row_index,col_indexer] = value "
+ "instead")
if value == 'raise':
raise SettingWithCopyError(t)
elif value == 'warn':
- warnings.warn(t,SettingWithCopyWarning)
+ warnings.warn(t, SettingWithCopyWarning)
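The hunk above only rewraps the SettingWithCopy message text; for context, the pattern the message recommends is a single `.loc` assignment rather than chained indexing. An illustrative sketch, not part of the patch:

```python
import pandas as pd

df = pd.DataFrame({'a': [1, 2, 3], 'b': [4, 5, 6]})

# Chained forms like df[df['a'] > 1]['b'] = 0 may assign into a
# temporary copy and trigger the warning rewrapped above; indexing
# once with .loc writes into df itself.
df.loc[df['a'] > 1, 'b'] = 0

print(df['b'].tolist())  # [4, 0, 0]
```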
def __delitem__(self, key):
"""
@@ -1066,10 +1086,13 @@ def take(self, indices, axis=0, convert=True):
if baxis == 0:
labels = self._get_axis(axis)
new_items = labels.take(indices)
- new_data = self._data.reindex_axis(new_items, indexer=indices, axis=0)
+ new_data = self._data.reindex_axis(new_items, indexer=indices,
+ axis=0)
else:
new_data = self._data.take(indices, axis=baxis)
- return self._constructor(new_data)._setitem_copy(True).__finalize__(self)
+ return self._constructor(new_data)\
+ ._setitem_copy(True)\
+ .__finalize__(self)
# TODO: Check if this was clearer in 0.12
def select(self, crit, axis=0):
@@ -1149,7 +1172,7 @@ def drop(self, labels, axis=0, level=None, inplace=False, **kwargs):
new_axis = axis.drop(labels, level=level)
else:
new_axis = axis.drop(labels)
- dropped = self.reindex(**{ axis_name: new_axis })
+ dropped = self.reindex(**{axis_name: new_axis})
try:
dropped.axes[axis_].set_names(axis.names, inplace=True)
except AttributeError:
@@ -1247,7 +1270,8 @@ def sort_index(self, axis=0, ascending=True):
Parameters
----------
- %(axes)s : array-like, optional (can be specified in order, or as keywords)
+ %(axes)s : array-like, optional (can be specified in order, or as
+ keywords)
New labels / index to conform to. Preferably an Index object to
avoid duplicating data
method : {'backfill', 'bfill', 'pad', 'ffill', None}, default None
@@ -1277,6 +1301,7 @@ def sort_index(self, axis=0, ascending=True):
"""
# TODO: Decide if we care about having different examples for different
# kinds
+
@Appender(_shared_docs['reindex'] % dict(axes="axes", klass="NDFrame"))
def reindex(self, *args, **kwargs):
@@ -1298,18 +1323,21 @@ def reindex(self, *args, **kwargs):
except:
pass
- # if all axes that are requested to reindex are equal, then only copy if indicated
- # must have index names equal here as well as values
- if all([ self._get_axis(axis).identical(ax) for axis, ax in axes.items() if ax is not None ]):
+ # if all axes that are requested to reindex are equal, then only copy
+ # if indicated must have index names equal here as well as values
+ if all([self._get_axis(axis).identical(ax)
+ for axis, ax in axes.items() if ax is not None]):
if copy:
return self.copy()
return self
# perform the reindex on the axes
return self._reindex_axes(axes, level, limit,
- method, fill_value, copy, takeable=takeable).__finalize__(self)
+ method, fill_value, copy,
+ takeable=takeable).__finalize__(self)
- def _reindex_axes(self, axes, level, limit, method, fill_value, copy, takeable=False):
+ def _reindex_axes(self, axes, level, limit, method, fill_value, copy,
+ takeable=False):
""" perform the reindex for all the axes """
obj = self
for a in self._AXIS_ORDERS:
@@ -1324,35 +1352,42 @@ def _reindex_axes(self, axes, level, limit, method, fill_value, copy, takeable=F
axis = self._get_axis_number(a)
ax = self._get_axis(a)
try:
- new_index, indexer = ax.reindex(labels, level=level,
- limit=limit, method=method, takeable=takeable)
+ new_index, indexer = ax.reindex(
+ labels, level=level, limit=limit, method=method,
+ takeable=takeable)
except (ValueError):
- # catch trying to reindex a non-monotonic index with a specialized indexer
- # e.g. pad, so fallback to the regular indexer
- # this will show up on reindexing a not-naturally ordering series, e.g.
- # Series([1,2,3,4],index=['a','b','c','d']).reindex(['c','b','g'],method='pad')
- new_index, indexer = ax.reindex(labels, level=level,
- limit=limit, method=None, takeable=takeable)
+ # catch trying to reindex a non-monotonic index with a
+ # specialized indexer, e.g. pad, so fall back to the regular
+ # indexer; this will show up on reindexing a not-naturally
+ # ordered series,
+ # e.g.
+ # Series(
+ # [1,2,3,4], index=['a','b','c','d']
+ # ).reindex(['c','b','g'], method='pad')
+ new_index, indexer = ax.reindex(
+ labels, level=level, limit=limit, method=None,
+ takeable=takeable)
obj = obj._reindex_with_indexers(
- {axis: [new_index, indexer]}, method=method, fill_value=fill_value,
- limit=limit, copy=copy)
+ {axis: [new_index, indexer]}, method=method,
+ fill_value=fill_value, limit=limit, copy=copy)
return obj
def _needs_reindex_multi(self, axes, method, level):
""" check if we do need a multi reindex """
- return (com._count_not_none(*axes.values()) == self._AXIS_LEN) and method is None and level is None and not self._is_mixed_type
+ return ((com._count_not_none(*axes.values()) == self._AXIS_LEN) and
+ method is None and level is None and not self._is_mixed_type)
def _reindex_multi(self, axes, copy, fill_value):
return NotImplemented
_shared_docs['reindex_axis'] = (
- """Conform input object to new index with optional filling logic, placing
- NA/NaN in locations having no value in the previous index. A new object
- is produced unless the new index is equivalent to the current one and
- copy=False
+ """Conform input object to new index with optional filling logic,
+ placing NA/NaN in locations having no value in the previous index. A
+ new object is produced unless the new index is equivalent to the
+ current one and copy=False
Parameters
----------
@@ -1384,6 +1419,7 @@ def _reindex_multi(self, axes, copy, fill_value):
-------
reindexed : %(klass)s
""")
+
@Appender(_shared_docs['reindex_axis'] % _shared_doc_kwargs)
def reindex_axis(self, labels, axis=0, method=None, level=None, copy=True,
limit=None, fill_value=np.nan):
@@ -1392,12 +1428,15 @@ def reindex_axis(self, labels, axis=0, method=None, level=None, copy=True,
axis_name = self._get_axis_name(axis)
axis_values = self._get_axis(axis_name)
method = com._clean_fill_method(method)
- new_index, indexer = axis_values.reindex(labels, method, level,
- limit=limit, copy_if_needed=True)
- return self._reindex_with_indexers({axis: [new_index, indexer]}, method=method, fill_value=fill_value,
- limit=limit, copy=copy).__finalize__(self)
-
- def _reindex_with_indexers(self, reindexers, method=None, fill_value=np.nan, limit=None, copy=False, allow_dups=False):
+ new_index, indexer = axis_values.reindex(
+ labels, method, level, limit=limit, copy_if_needed=True)
+ return self._reindex_with_indexers(
+ {axis: [new_index, indexer]}, method=method, fill_value=fill_value,
+ limit=limit, copy=copy).__finalize__(self)
+
+ def _reindex_with_indexers(self, reindexers, method=None,
+ fill_value=np.nan, limit=None, copy=False,
+ allow_dups=False):
""" allow_dups indicates an internal call here """
# reindex doing multiple operations on different axes if indicated
@@ -1420,13 +1459,16 @@ def _reindex_with_indexers(self, reindexers, method=None, fill_value=np.nan, lim
# TODO: speed up on homogeneous DataFrame objects
indexer = com._ensure_int64(indexer)
new_data = new_data.reindex_indexer(index, indexer, axis=baxis,
- fill_value=fill_value, allow_dups=allow_dups)
+ fill_value=fill_value,
+ allow_dups=allow_dups)
- elif baxis == 0 and index is not None and index is not new_data.axes[baxis]:
+ elif (baxis == 0 and index is not None and
+ index is not new_data.axes[baxis]):
new_data = new_data.reindex_items(index, copy=copy,
fill_value=fill_value)
- elif baxis > 0 and index is not None and index is not new_data.axes[baxis]:
+ elif (baxis > 0 and index is not None and
+ index is not new_data.axes[baxis]):
new_data = new_data.copy(deep=copy)
new_data.set_axis(baxis, index)
@@ -1470,14 +1512,16 @@ def filter(self, items=None, like=None, regex=None, axis=None):
axis_values = self._get_axis(axis_name)
if items is not None:
- return self.reindex(**{axis_name: [r for r in items if r in axis_values]})
+ return self.reindex(**{axis_name: [r for r in items
+ if r in axis_values]})
elif like:
matchf = lambda x: (like in x if isinstance(x, string_types)
else like in str(x))
return self.select(matchf, axis=axis_name)
elif regex:
matcher = re.compile(regex)
- return self.select(lambda x: matcher.search(x) is not None, axis=axis_name)
+ return self.select(lambda x: matcher.search(x) is not None,
+ axis=axis_name)
else:
raise TypeError('Must pass either `items`, `like`, or `regex`')
@@ -1508,9 +1552,10 @@ def __finalize__(self, other, method=None, **kwargs):
Parameters
----------
- other : the object from which to get the attributes that we are going to propagate
- method : optional, a passed method name ; possibily to take different types
- of propagation actions based on this
+ other : the object from which to get the attributes that we are going
+ to propagate
+ method : optional, a passed method name ; possibly to take different
+ types of propagation actions based on this
"""
for name in self._metadata:
@@ -1518,8 +1563,11 @@ def __finalize__(self, other, method=None, **kwargs):
return self
def __getattr__(self, name):
- """After regular attribute access, try looking up the name of a the info
- This allows simpler access to columns for interactive use."""
+ """After regular attribute access, try looking up the name of a the
+ info.
+
+ This allows simpler access to columns for interactive use.
+ """
if name in self._info_axis:
return self[name]
raise AttributeError("'%s' object has no attribute '%s'" %
@@ -1594,7 +1642,8 @@ def _protect_consolidate(self, f):
return result
def _get_numeric_data(self):
- return self._constructor(self._data.get_numeric_data()).__finalize__(self)
+ return self._constructor(
+ self._data.get_numeric_data()).__finalize__(self)
def _get_bool_data(self):
return self._constructor(self._data.get_bool_data()).__finalize__(self)
@@ -1608,9 +1657,10 @@ def as_matrix(self, columns=None):
are presented in sorted order unless a specific list of columns is
provided.
- NOTE: the dtype will be a lower-common-denominator dtype (implicit upcasting)
- that is to say if the dtypes (even of numeric types) are mixed, the one that accomodates all will be chosen
- use this with care if you are not dealing with the blocks
+        NOTE: the dtype will be a lower-common-denominator dtype (implicit
+            upcasting); that is to say, if the dtypes (even of numeric types)
+            are mixed, the one that accommodates all will be chosen. Use this
+            with care if you are not dealing with the blocks.
e.g. if the dtypes are float16,float32 -> float32
float16,float32,float64 -> float64
@@ -1654,11 +1704,14 @@ def get_ftype_counts(self):
def as_blocks(self, columns=None):
"""
- Convert the frame to a dict of dtype -> Constructor Types that each has a homogeneous dtype.
+ Convert the frame to a dict of dtype -> Constructor Types that each has
+ a homogeneous dtype.
+
are presented in sorted order unless a specific list of columns is
provided.
- NOTE: the dtypes of the blocks WILL BE PRESERVED HERE (unlike in as_matrix)
+ NOTE: the dtypes of the blocks WILL BE PRESERVED HERE (unlike in
+ as_matrix)
Parameters
----------
@@ -1720,23 +1773,27 @@ def copy(self, deep=True):
data = data.copy()
return self._constructor(data).__finalize__(self)
- def convert_objects(self, convert_dates=True, convert_numeric=False, copy=True):
+ def convert_objects(self, convert_dates=True, convert_numeric=False,
+ copy=True):
"""
Attempt to infer better dtype for object columns
Parameters
----------
- convert_dates : if True, attempt to soft convert_dates, if 'coerce', force conversion (and non-convertibles get NaT)
- convert_numeric : if True attempt to coerce to numerbers (including strings), non-convertibles get NaN
+ convert_dates : if True, attempt to soft convert_dates, if 'coerce',
+ force conversion (and non-convertibles get NaT)
+ convert_numeric : if True attempt to coerce to numbers (including
+ strings), non-convertibles get NaN
copy : Boolean, if True, return copy, default is True
Returns
-------
converted : same as input object
"""
- return self._constructor(self._data.convert(convert_dates=convert_dates,
- convert_numeric=convert_numeric,
- copy=copy)).__finalize__(self)
+ return self._constructor(
+ self._data.convert(convert_dates=convert_dates,
+ convert_numeric=convert_numeric,
+ copy=copy)).__finalize__(self)
#----------------------------------------------------------------------
# Filling NA's
@@ -1767,7 +1824,8 @@ def fillna(self, value=None, method=None, axis=0, inplace=False,
Maximum size gap to forward or backward fill
downcast : dict, default is None, a dict of item->dtype of what to
downcast if possible, or the string 'infer' which will try to
- downcast to an appropriate equal type (e.g. float64 to int64 if possible)
+ downcast to an appropriate equal type (e.g. float64 to int64 if
+ possible)
See also
--------
@@ -1800,13 +1858,16 @@ def fillna(self, value=None, method=None, axis=0, inplace=False,
# > 3d
if self.ndim > 3:
- raise NotImplementedError('cannot fillna with a method for > 3dims')
+ raise NotImplementedError(
+ 'Cannot fillna with a method for > 3dims'
+ )
# 3d
elif self.ndim == 3:
# fill in 2d chunks
- result = dict([ (col,s.fillna(method=method, value=value)) for col, s in compat.iteritems(self) ])
+ result = dict([(col, s.fillna(method=method, value=value))
+ for col, s in compat.iteritems(self)])
return self._constructor.from_dict(result).__finalize__(self)
# 2d or less
@@ -2036,7 +2097,7 @@ def replace(self, to_replace=None, value=None, inplace=False, limit=None,
raise TypeError('Fill value must be scalar, dict, or '
'Series')
- elif com.is_list_like(to_replace): # [NA, ''] -> [0, 'missing']
+ elif com.is_list_like(to_replace): # [NA, ''] -> [0, 'missing']
if com.is_list_like(value):
if len(to_replace) != len(value):
raise ValueError('Replacement lists must match '
@@ -2212,8 +2273,8 @@ def isnull(self):
return isnull(self).__finalize__(self)
def notnull(self):
- """
- Return a boolean same-sized object indicating if the values are not null
+ """Return a boolean same-sized object indicating if the values are
+ not null
"""
return notnull(self).__finalize__(self)
@@ -2305,8 +2366,8 @@ def groupby(self, by=None, axis=0, level=None, as_index=True, sort=True,
group_keys : boolean, default True
When calling apply, add group keys to index to identify pieces
squeeze : boolean, default False
- reduce the dimensionaility of the return type if possible, otherwise
- return a consistent type
+            reduce the dimensionality of the return type if possible,
+            otherwise return a consistent type
Examples
--------
@@ -2590,7 +2651,8 @@ def _align_series(self, other, join='outer', axis=None, level=None,
# series/series compat
if isinstance(self, ABCSeries) and isinstance(other, ABCSeries):
if axis:
- raise ValueError('cannot align series to a series other than axis 0')
+ raise ValueError('cannot align series to a series other than '
+ 'axis 0')
join_index, lidx, ridx = self.index.join(other.index, how=join,
level=level,
@@ -2607,8 +2669,8 @@ def _align_series(self, other, join='outer', axis=None, level=None,
join_index = self.index
lidx, ridx = None, None
if not self.index.equals(other.index):
- join_index, lidx, ridx = self.index.join(other.index, how=join,
- return_indexers=True)
+ join_index, lidx, ridx = self.index.join(
+ other.index, how=join, return_indexers=True)
if lidx is not None:
fdata = fdata.reindex_indexer(join_index, lidx, axis=1)
@@ -2617,8 +2679,8 @@ def _align_series(self, other, join='outer', axis=None, level=None,
lidx, ridx = None, None
if not self.columns.equals(other.index):
join_index, lidx, ridx = \
- self.columns.join(other.index, how=join,
- return_indexers=True)
+ self.columns.join(other.index, how=join,
+ return_indexers=True)
if lidx is not None:
fdata = fdata.reindex_indexer(join_index, lidx, axis=0)
@@ -2639,7 +2701,8 @@ def _align_series(self, other, join='outer', axis=None, level=None,
right_result.fillna(fill_value, method=method,
limit=limit))
else:
- return left_result.__finalize__(self), right_result.__finalize__(other)
+ return (left_result.__finalize__(self),
+ right_result.__finalize__(other))
def where(self, cond, other=np.nan, inplace=False, axis=None, level=None,
try_cast=False, raise_on_error=True):
@@ -2669,8 +2732,8 @@ def where(self, cond, other=np.nan, inplace=False, axis=None, level=None,
cond = cond.reindex(**self._construct_axes_dict())
else:
if not hasattr(cond, 'shape'):
- raise ValueError('where requires an ndarray like object for its '
- 'condition')
+ raise ValueError('where requires an ndarray like object for '
+ 'its condition')
if cond.shape != self.shape:
raise ValueError(
'Array conditional must be same shape as self')
@@ -2693,12 +2756,16 @@ def where(self, cond, other=np.nan, inplace=False, axis=None, level=None,
fill_value=np.nan)
# if we are NOT aligned, raise as we cannot where index
- if axis is None and not all([ other._get_axis(i).equals(ax) for i, ax in enumerate(self.axes) ]):
+ if (axis is None and
+ not all([other._get_axis(i).equals(ax)
+ for i, ax in enumerate(self.axes)])):
raise InvalidIndexError
# slice me out of the other
else:
- raise NotImplemented("cannot align with a higher dimensional NDFrame")
+            raise NotImplementedError(
+                "cannot align with a higher dimensional NDFrame"
+            )
elif is_list_like(other):
@@ -2770,11 +2837,13 @@ def where(self, cond, other=np.nan, inplace=False, axis=None, level=None,
if inplace:
# we may have different type blocks come out of putmask, so
# reconstruct the block manager
- self._data = self._data.putmask(cond, other, align=axis is None, inplace=True)
+ self._data = self._data.putmask(cond, other, align=axis is None,
+ inplace=True)
else:
- new_data = self._data.where(
- other, cond, align=axis is None, raise_on_error=raise_on_error, try_cast=try_cast)
+ new_data = self._data.where(other, cond, align=axis is None,
+ raise_on_error=raise_on_error,
+ try_cast=try_cast)
return self._constructor(new_data).__finalize__(self)
@@ -2793,7 +2862,6 @@ def mask(self, cond):
"""
return self.where(~cond, np.nan)
-
def shift(self, periods=1, freq=None, axis=0, **kwds):
"""
Shift index by desired number of periods with an optional time freq
@@ -2862,7 +2930,6 @@ def tshift(self, periods=1, freq=None, axis=0, **kwds):
msg = 'Freq was not given and was not set in the index'
raise ValueError(msg)
-
if periods == 0:
return self
@@ -2923,12 +2990,13 @@ def truncate(self, before=None, after=None, axis=None, copy=True):
raise ValueError('Truncate: %s must be after %s' %
(after, before))
- slicer = [ slice(None, None) ] * self._AXIS_LEN
- slicer[axis] = slice(before,after)
+ slicer = [slice(None, None)] * self._AXIS_LEN
+ slicer[axis] = slice(before, after)
result = self.ix[tuple(slicer)]
if isinstance(ax, MultiIndex):
- setattr(result,self._get_axis_name(axis),ax.truncate(before, after))
+ setattr(result, self._get_axis_name(axis),
+ ax.truncate(before, after))
if copy:
result = result.copy()
@@ -3083,8 +3151,11 @@ def _agg_by_level(self, name, axis=0, level=0, skipna=True, **kwds):
def _add_numeric_operations(cls):
""" add the operations to the cls; evaluate the doc strings again """
- axis_descr = "{" + ', '.join([ "{0} ({1})".format(a,i) for i, a in enumerate(cls._AXIS_ORDERS)]) + "}"
- name = cls._constructor_sliced.__name__ if cls._AXIS_LEN > 1 else 'scalar'
+ axis_descr = "{%s}" % ', '.join([
+ "{0} ({1})".format(a, i) for i, a in enumerate(cls._AXIS_ORDERS)
+ ])
+ name = (cls._constructor_sliced.__name__
+ if cls._AXIS_LEN > 1 else 'scalar')
_num_doc = """
%(desc)s
@@ -3123,8 +3194,8 @@ def _make_stat_function(name, desc, f):
@Substitution(outname=name, desc=desc)
@Appender(_num_doc)
- def stat_func(self, axis=None, skipna=None, level=None, numeric_only=None,
- **kwargs):
+ def stat_func(self, axis=None, skipna=None, level=None,
+ numeric_only=None, **kwargs):
if skipna is None:
skipna = True
if axis is None:
@@ -3137,24 +3208,40 @@ def stat_func(self, axis=None, skipna=None, level=None, numeric_only=None,
stat_func.__name__ = name
return stat_func
- cls.sum = _make_stat_function('sum',"Return the sum of the values for the requested axis", nanops.nansum)
- cls.mean = _make_stat_function('mean',"Return the mean of the values for the requested axis", nanops.nanmean)
- cls.skew = _make_stat_function('skew',"Return unbiased skew over requested axis\nNormalized by N-1", nanops.nanskew)
- cls.kurt = _make_stat_function('kurt',"Return unbiased kurtosis over requested axis\nNormalized by N-1", nanops.nankurt)
+ cls.sum = _make_stat_function(
+ 'sum', 'Return the sum of the values for the requested axis',
+ nanops.nansum)
+ cls.mean = _make_stat_function(
+ 'mean', 'Return the mean of the values for the requested axis',
+ nanops.nanmean)
+ cls.skew = _make_stat_function(
+ 'skew',
+ 'Return unbiased skew over requested axis\nNormalized by N-1',
+ nanops.nanskew)
+ cls.kurt = _make_stat_function(
+ 'kurt',
+ 'Return unbiased kurtosis over requested axis\nNormalized by N-1',
+ nanops.nankurt)
cls.kurtosis = cls.kurt
- cls.prod = _make_stat_function('prod',"Return the product of the values for the requested axis", nanops.nanprod)
+ cls.prod = _make_stat_function(
+ 'prod', 'Return the product of the values for the requested axis',
+ nanops.nanprod)
cls.product = cls.prod
- cls.median = _make_stat_function('median',"Return the median of the values for the requested axis", nanops.nanmedian)
- cls.max = _make_stat_function('max',"""
+ cls.median = _make_stat_function(
+ 'median', 'Return the median of the values for the requested axis',
+ nanops.nanmedian)
+ cls.max = _make_stat_function('max', """
This method returns the maximum of the values in the object. If you
want the *index* of the maximum, use ``idxmax``. This is the
equivalent of the ``numpy.ndarray`` method ``argmax``.""", nanops.nanmax)
- cls.min = _make_stat_function('min',"""
+ cls.min = _make_stat_function('min', """
This method returns the minimum of the values in the object. If you
want the *index* of the minimum, use ``idxmin``. This is the
equivalent of the ``numpy.ndarray`` method ``argmin``.""", nanops.nanmin)
- @Substitution(outname='mad', desc="Return the mean absolute deviation of the values for the requested axis")
+ @Substitution(outname='mad',
+ desc="Return the mean absolute deviation of the values "
+ "for the requested axis")
@Appender(_num_doc)
def mad(self, axis=None, skipna=None, level=None, **kwargs):
if skipna is None:
@@ -3173,7 +3260,9 @@ def mad(self, axis=None, skipna=None, level=None, **kwargs):
return np.abs(demeaned).mean(axis=axis, skipna=skipna)
cls.mad = mad
- @Substitution(outname='variance',desc="Return unbiased variance over requested axis\nNormalized by N-1")
+ @Substitution(outname='variance',
+ desc="Return unbiased variance over requested "
+ "axis\nNormalized by N-1")
@Appender(_num_doc)
def var(self, axis=None, skipna=None, level=None, ddof=1, **kwargs):
if skipna is None:
@@ -3184,10 +3273,13 @@ def var(self, axis=None, skipna=None, level=None, ddof=1, **kwargs):
return self._agg_by_level('var', axis=axis, level=level,
skipna=skipna, ddof=ddof)
- return self._reduce(nanops.nanvar, axis=axis, skipna=skipna, ddof=ddof)
+ return self._reduce(nanops.nanvar, axis=axis, skipna=skipna,
+ ddof=ddof)
cls.var = var
- @Substitution(outname='stdev',desc="Return unbiased standard deviation over requested axis\nNormalized by N-1")
+ @Substitution(outname='stdev',
+ desc="Return unbiased standard deviation over requested "
+ "axis\nNormalized by N-1")
@Appender(_num_doc)
def std(self, axis=None, skipna=None, level=None, ddof=1, **kwargs):
if skipna is None:
@@ -3198,12 +3290,14 @@ def std(self, axis=None, skipna=None, level=None, ddof=1, **kwargs):
return self._agg_by_level('std', axis=axis, level=level,
skipna=skipna, ddof=ddof)
result = self.var(axis=axis, skipna=skipna, ddof=ddof)
- if getattr(result,'ndim',0) > 0:
+ if getattr(result, 'ndim', 0) > 0:
return result.apply(np.sqrt)
return np.sqrt(result)
cls.std = std
- @Substitution(outname='compounded',desc="Return the compound percentage of the values for the requested axis")
+ @Substitution(outname='compounded',
+ desc="Return the compound percentage of the values for "
+ "the requested axis")
@Appender(_num_doc)
def compound(self, axis=None, skipna=None, level=None, **kwargs):
if skipna is None:
@@ -3214,15 +3308,17 @@ def compound(self, axis=None, skipna=None, level=None, **kwargs):
def _make_cum_function(name, accum_func, mask_a, mask_b):
@Substitution(outname=name)
- @Appender("Return cumulative {0} over requested axis.".format(name) + _cnum_doc)
- def func(self, axis=None, dtype=None, out=None, skipna=True, **kwargs):
+ @Appender("Return cumulative {0} over requested axis.".format(name)
+ + _cnum_doc)
+ def func(self, axis=None, dtype=None, out=None, skipna=True,
+ **kwargs):
if axis is None:
axis = self._stat_axis_number
else:
axis = self._get_axis_number(axis)
y = _values_from_object(self).copy()
- if not issubclass(y.dtype.type, (np.integer,np.bool_)):
+ if not issubclass(y.dtype.type, (np.integer, np.bool_)):
mask = isnull(self)
if skipna:
np.putmask(y, mask, mask_a)
@@ -3239,11 +3335,16 @@ def func(self, axis=None, dtype=None, out=None, skipna=True, **kwargs):
func.__name__ = name
return func
-
- cls.cummin = _make_cum_function('min', lambda y, axis: np.minimum.accumulate(y, axis), np.inf, np.nan)
- cls.cumsum = _make_cum_function('sum', lambda y, axis: y.cumsum(axis), 0., np.nan)
- cls.cumprod = _make_cum_function('prod', lambda y, axis: y.cumprod(axis), 1., np.nan)
- cls.cummax = _make_cum_function('max', lambda y, axis: np.maximum.accumulate(y, axis), -np.inf, np.nan)
+ cls.cummin = _make_cum_function(
+ 'min', lambda y, axis: np.minimum.accumulate(y, axis),
+ np.inf, np.nan)
+ cls.cumsum = _make_cum_function(
+ 'sum', lambda y, axis: y.cumsum(axis), 0., np.nan)
+ cls.cumprod = _make_cum_function(
+ 'prod', lambda y, axis: y.cumprod(axis), 1., np.nan)
+ cls.cummax = _make_cum_function(
+ 'max', lambda y, axis: np.maximum.accumulate(y, axis),
+ -np.inf, np.nan)
# install the indexers
for _name, _indexer in indexing.get_indexers_list():
diff --git a/pandas/core/groupby.py b/pandas/core/groupby.py
index f37b94cd7f689..18f41917067f2 100644
--- a/pandas/core/groupby.py
+++ b/pandas/core/groupby.py
@@ -60,7 +60,6 @@
'fillna', 'dtype']) | _plotting_methods
-
class GroupByError(Exception):
pass
@@ -482,17 +481,17 @@ def picker(arr):
return self.agg(picker)
def cumcount(self):
- '''
- Number each item in each group from 0 to the length of that group.
+ """Number each item in each group from 0 to the length of that group.
Essentially this is equivalent to
-
- >>> self.apply(lambda x: Series(np.arange(len(x)), x.index)).
+
+ >>> self.apply(lambda x: Series(np.arange(len(x)), x.index))
Example
-------
- >>> df = pd.DataFrame([['a'], ['a'], ['a'], ['b'], ['b'], ['a']], columns=['A'])
+ >>> df = pd.DataFrame([['a'], ['a'], ['a'], ['b'], ['b'], ['a']],
+ ... columns=['A'])
>>> df
A
0 a
@@ -510,14 +509,13 @@ def cumcount(self):
5 3
dtype: int64
- '''
+ """
index = self.obj.index
cumcounts = np.zeros(len(index), dtype='int64')
for v in self.indices.values():
cumcounts[v] = np.arange(len(v), dtype='int64')
return Series(cumcounts, index)
-
def _try_cast(self, result, obj):
"""
try to cast the result to our obj original type,
@@ -578,7 +576,7 @@ def _python_agg_general(self, func, *args, **kwargs):
if _is_numeric_dtype(values.dtype):
values = com.ensure_float(values)
- output[name] = self._try_cast(values[mask],result)
+ output[name] = self._try_cast(values[mask], result)
return self._wrap_aggregated_output(output)
@@ -620,7 +618,7 @@ def _apply_filter(self, indices, dropna):
mask[indices.astype(int)] = True
# mask fails to broadcast when passed to where; broadcast manually.
mask = np.tile(mask, list(self.obj.shape[1:]) + [1]).T
- filtered = self.obj.where(mask) # Fill with NaNs.
+ filtered = self.obj.where(mask) # Fill with NaNs.
return filtered
@@ -710,7 +708,7 @@ def apply(self, f, data, axis=0):
# oh boy
if (f.__name__ not in _plotting_methods and
- hasattr(splitter, 'fast_apply') and axis == 0):
+ hasattr(splitter, 'fast_apply') and axis == 0):
try:
values, mutated = splitter.fast_apply(f, group_keys)
return group_keys, values, mutated
@@ -840,16 +838,21 @@ def get_group_levels(self):
# Aggregation functions
_cython_functions = {
- 'add' : 'group_add',
- 'prod' : 'group_prod',
- 'min' : 'group_min',
- 'max' : 'group_max',
- 'mean' : 'group_mean',
- 'median': dict(name = 'group_median'),
- 'var' : 'group_var',
- 'std' : 'group_var',
- 'first': dict(name = 'group_nth', f = lambda func, a, b, c, d: func(a, b, c, d, 1)),
- 'last' : 'group_last',
+ 'add': 'group_add',
+ 'prod': 'group_prod',
+ 'min': 'group_min',
+ 'max': 'group_max',
+ 'mean': 'group_mean',
+ 'median': {
+ 'name': 'group_median'
+ },
+ 'var': 'group_var',
+ 'std': 'group_var',
+ 'first': {
+ 'name': 'group_nth',
+ 'f': lambda func, a, b, c, d: func(a, b, c, d, 1)
+ },
+ 'last': 'group_last',
}
_cython_transforms = {
@@ -867,18 +870,19 @@ def get_group_levels(self):
def _get_aggregate_function(self, how, values):
dtype_str = values.dtype.name
- def get_func(fname):
- # find the function, or use the object function, or return a generic
- for dt in [dtype_str,'object']:
- f = getattr(_algos,"%s_%s" % (fname,dtype_str),None)
+ def get_func(fname):
+ # find the function, or use the object function, or return a
+ # generic
+ for dt in [dtype_str, 'object']:
+ f = getattr(_algos, "%s_%s" % (fname, dtype_str), None)
if f is not None:
return f
- return getattr(_algos,fname,None)
+ return getattr(_algos, fname, None)
ftype = self._cython_functions[how]
- if isinstance(ftype,dict):
+ if isinstance(ftype, dict):
func = afunc = get_func(ftype['name'])
# a sub-function
@@ -895,7 +899,9 @@ def wrapper(*args, **kwargs):
func = get_func(ftype)
if func is None:
- raise NotImplementedError("function is not implemented for this dtype: [how->%s,dtype->%s]" % (how,dtype_str))
+            raise NotImplementedError("function is not implemented for this "
+                                      "dtype: [how->%s,dtype->%s]" %
+ (how, dtype_str))
return func, dtype_str
def aggregate(self, values, how, axis=0):
@@ -934,11 +940,11 @@ def aggregate(self, values, how, axis=0):
if self._filter_empty_groups:
if result.ndim == 2:
if is_numeric:
- result = lib.row_bool_subset(result,
- (counts > 0).view(np.uint8))
+ result = lib.row_bool_subset(
+ result, (counts > 0).view(np.uint8))
else:
- result = lib.row_bool_subset_object(result,
- (counts > 0).view(np.uint8))
+ result = lib.row_bool_subset_object(
+ result, (counts > 0).view(np.uint8))
else:
result = result[counts > 0]
@@ -957,8 +963,8 @@ def aggregate(self, values, how, axis=0):
return result, names
def _aggregate(self, result, counts, values, how, is_numeric):
- agg_func,dtype = self._get_aggregate_function(how, values)
- trans_func = self._cython_transforms.get(how, lambda x: x)
+ agg_func, dtype = self._get_aggregate_function(how, values)
+ trans_func = self._cython_transforms.get(how, lambda x: x)
comp_ids, _, ngroups = self.group_info
if values.ndim > 3:
@@ -989,7 +995,7 @@ def _aggregate_series_fast(self, obj, func):
group_index, _, ngroups = self.group_info
# avoids object / Series creation overhead
- dummy = obj._get_values(slice(None,0)).to_dense()
+ dummy = obj._get_values(slice(None, 0)).to_dense()
indexer = _algos.groupsort_indexer(group_index, ngroups)[0]
obj = obj.take(indexer, convert=False)
group_index = com.take_nd(group_index, indexer, allow_fill=False)
@@ -1010,7 +1016,8 @@ def _aggregate_series_pure_python(self, obj, func):
for label, group in splitter:
res = func(group)
if result is None:
- if isinstance(res, (Series, np.ndarray)) or isinstance(res, list):
+ if (isinstance(res, (Series, np.ndarray)) or
+ isinstance(res, list)):
raise ValueError('Function does not reduce')
result = np.empty(ngroups, dtype='O')
@@ -1158,16 +1165,19 @@ def names(self):
# cython aggregation
_cython_functions = {
- 'add' : 'group_add_bin',
- 'prod' : 'group_prod_bin',
- 'mean' : 'group_mean_bin',
- 'min' : 'group_min_bin',
- 'max' : 'group_max_bin',
- 'var' : 'group_var_bin',
- 'std' : 'group_var_bin',
- 'ohlc' : 'group_ohlc',
- 'first': dict(name = 'group_nth_bin', f = lambda func, a, b, c, d: func(a, b, c, d, 1)),
- 'last' : 'group_last_bin',
+ 'add': 'group_add_bin',
+ 'prod': 'group_prod_bin',
+ 'mean': 'group_mean_bin',
+ 'min': 'group_min_bin',
+ 'max': 'group_max_bin',
+ 'var': 'group_var_bin',
+ 'std': 'group_var_bin',
+ 'ohlc': 'group_ohlc',
+ 'first': {
+ 'name': 'group_nth_bin',
+ 'f': lambda func, a, b, c, d: func(a, b, c, d, 1)
+ },
+ 'last': 'group_last_bin',
}
_name_functions = {
@@ -1178,8 +1188,8 @@ def names(self):
def _aggregate(self, result, counts, values, how, is_numeric=True):
- agg_func,dtype = self._get_aggregate_function(how, values)
- trans_func = self._cython_transforms.get(how, lambda x: x)
+ agg_func, dtype = self._get_aggregate_function(how, values)
+ trans_func = self._cython_transforms.get(how, lambda x: x)
if values.ndim > 3:
# punting for now
@@ -1295,14 +1305,14 @@ def __init__(self, index, grouper=None, name=None, level=None,
# no level passed
if not isinstance(self.grouper, (Series, np.ndarray)):
self.grouper = self.index.map(self.grouper)
- if not (hasattr(self.grouper,"__len__") and \
- len(self.grouper) == len(self.index)):
- errmsg = "Grouper result violates len(labels) == len(data)\n"
- errmsg += "result: %s" % com.pprint_thing(self.grouper)
- self.grouper = None # Try for sanity
+ if not (hasattr(self.grouper, "__len__") and
+ len(self.grouper) == len(self.index)):
+ errmsg = ('Grouper result violates len(labels) == '
+ 'len(data)\nresult: %s' %
+ com.pprint_thing(self.grouper))
+ self.grouper = None # Try for sanity
raise AssertionError(errmsg)
-
def __repr__(self):
return 'Grouping(%s)' % self.name
@@ -1357,7 +1367,8 @@ def _get_grouper(obj, key=None, axis=0, level=None, sort=True):
if not isinstance(group_axis, MultiIndex):
if isinstance(level, compat.string_types):
if obj.index.name != level:
- raise ValueError('level name %s is not the name of the index' % level)
+ raise ValueError('level name %s is not the name of the '
+ 'index' % level)
elif level > 0:
raise ValueError('level > 0 only valid with MultiIndex')
@@ -1416,7 +1427,7 @@ def _get_grouper(obj, key=None, axis=0, level=None, sort=True):
name = gpr
gpr = obj[gpr]
- if (isinstance(gpr,Categorical) and len(gpr) != len(obj)):
+ if isinstance(gpr, Categorical) and len(gpr) != len(obj):
errmsg = "Categorical grouper must have len(grouper) == len(data)"
raise AssertionError(errmsg)
@@ -1628,7 +1639,7 @@ def transform(self, func, *args, **kwargs):
transformed : Series
"""
result = self.obj.copy()
- if hasattr(result,'values'):
+ if hasattr(result, 'values'):
result = result.values
dtype = result.dtype
@@ -1642,7 +1653,7 @@ def transform(self, func, *args, **kwargs):
group = com.ensure_float(group)
object.__setattr__(group, 'name', name)
res = wrapper(group)
- if hasattr(res,'values'):
+ if hasattr(res, 'values'):
res = res.values
# need to do a safe put here, as the dtype may be different
@@ -1653,7 +1664,8 @@ def transform(self, func, *args, **kwargs):
# downcast if we can (and need)
result = _possibly_downcast_to_dtype(result, dtype)
- return self.obj.__class__(result,index=self.obj.index,name=self.obj.name)
+ return self.obj.__class__(result, index=self.obj.index,
+ name=self.obj.name)
def filter(self, func, dropna=True, *args, **kwargs):
"""
@@ -1686,8 +1698,8 @@ def true_and_notnull(x, *args, **kwargs):
return b and notnull(b)
try:
- indices = [self.indices[name] if true_and_notnull(group) else []
- for name, group in self]
+ indices = [self.indices[name] if true_and_notnull(group) else []
+ for name, group in self]
except ValueError:
raise TypeError("the filter must return a boolean result")
except TypeError:
@@ -1880,7 +1892,7 @@ def _aggregate_multiple_funcs(self, arg):
grouper=self.grouper)
results.append(colg.aggregate(arg))
keys.append(col)
- except (TypeError, DataError) :
+ except (TypeError, DataError):
pass
except SpecificationError:
raise
@@ -1901,14 +1913,16 @@ def _aggregate_generic(self, func, *args, **kwargs):
for name, data in self:
# for name in self.indices:
# data = self.get_group(name, obj=obj)
- result[name] = self._try_cast(func(data, *args, **kwargs),data)
+ result[name] = self._try_cast(func(data, *args, **kwargs),
+ data)
except Exception:
return self._aggregate_item_by_item(func, *args, **kwargs)
else:
for name in self.indices:
try:
data = self.get_group(name, obj=obj)
- result[name] = self._try_cast(func(data, *args, **kwargs), data)
+ result[name] = self._try_cast(func(data, *args, **kwargs),
+ data)
except Exception:
wrapper = lambda x: func(x, *args, **kwargs)
result[name] = data.apply(wrapper, axis=axis)
@@ -1929,7 +1943,8 @@ def _aggregate_item_by_item(self, func, *args, **kwargs):
data = obj[item]
colg = SeriesGroupBy(data, selection=item,
grouper=self.grouper)
- result[item] = self._try_cast(colg.aggregate(func, *args, **kwargs), data)
+ result[item] = self._try_cast(
+ colg.aggregate(func, *args, **kwargs), data)
except ValueError:
cannot_agg.append(item)
continue
@@ -1987,12 +2002,15 @@ def _wrap_applied_output(self, keys, values, not_indexed_same=False):
if isinstance(values[0], (np.ndarray, Series)):
if isinstance(values[0], Series):
- applied_index = self.obj._get_axis(self.axis)
- all_indexed_same = _all_indexes_same([x.index for x in values])
- singular_series = len(values) == 1 and applied_index.nlevels == 1
+ applied_index = self.obj._get_axis(self.axis)
+ all_indexed_same = _all_indexes_same([x.index
+ for x in values])
+ singular_series = (len(values) == 1 and
+ applied_index.nlevels == 1)
# GH3596
- # provide a reduction (Frame -> Series) if groups are unique
+ # provide a reduction (Frame -> Series) if groups are
+ # unique
if self.squeeze:
# assign the name to this series
@@ -2000,15 +2018,19 @@ def _wrap_applied_output(self, keys, values, not_indexed_same=False):
values[0].name = keys[0]
# GH2893
- # we have series in the values array, we want to produce a series:
+ # we have series in the values array, we want to
+ # produce a series:
# if any of the sub-series are not indexed the same
- # OR we don't have a multi-index and we have only a single values
- return self._concat_objects(keys, values,
- not_indexed_same=not_indexed_same)
+ # OR we don't have a multi-index and we have only a
+ # single values
+ return self._concat_objects(
+ keys, values, not_indexed_same=not_indexed_same
+ )
if not all_indexed_same:
- return self._concat_objects(keys, values,
- not_indexed_same=not_indexed_same)
+ return self._concat_objects(
+ keys, values, not_indexed_same=not_indexed_same
+ )
try:
if self.axis == 0:
@@ -2079,7 +2101,7 @@ def transform(self, func, *args, **kwargs):
except TypeError:
return self._transform_item_by_item(obj, fast_path)
except Exception: # pragma: no cover
- res = fast_path(group)
+ res = fast_path(group)
path = fast_path
else:
res = path(group)
@@ -2104,15 +2126,17 @@ def transform(self, func, *args, **kwargs):
def _define_paths(self, func, *args, **kwargs):
if isinstance(func, compat.string_types):
fast_path = lambda group: getattr(group, func)(*args, **kwargs)
- slow_path = lambda group: group.apply(lambda x: getattr(x, func)(*args, **kwargs), axis=self.axis)
+ slow_path = lambda group: group.apply(
+ lambda x: getattr(x, func)(*args, **kwargs), axis=self.axis)
else:
fast_path = lambda group: func(group, *args, **kwargs)
- slow_path = lambda group: group.apply(lambda x: func(x, *args, **kwargs), axis=self.axis)
+ slow_path = lambda group: group.apply(
+ lambda x: func(x, *args, **kwargs), axis=self.axis)
return fast_path, slow_path
def _choose_path(self, fast_path, slow_path, group):
path = slow_path
- res = slow_path(group)
+ res = slow_path(group)
# if we make it here, test if we can use the fast path
try:
@@ -2190,7 +2214,7 @@ def filter(self, func, dropna=True, *args, **kwargs):
try:
path, res = self._choose_path(fast_path, slow_path, group)
except Exception: # pragma: no cover
- res = fast_path(group)
+ res = fast_path(group)
path = fast_path
else:
res = path(group)
@@ -2199,11 +2223,11 @@ def add_indices():
indices.append(self.indices[name])
# interpret the result of the filter
- if isinstance(res,(bool,np.bool_)):
+ if isinstance(res, (bool, np.bool_)):
if res:
add_indices()
else:
- if getattr(res,'ndim',None) == 1:
+ if getattr(res, 'ndim', None) == 1:
val = res.ravel()[0]
if val and notnull(val):
add_indices()
@@ -2224,7 +2248,8 @@ def __getitem__(self, key):
if self._selection is not None:
raise Exception('Column(s) %s already selected' % self._selection)
- if isinstance(key, (list, tuple, Series, np.ndarray)) or not self.as_index:
+ if (isinstance(key, (list, tuple, Series, np.ndarray)) or
+ not self.as_index):
return DataFrameGroupBy(self.obj, self.grouper, selection=key,
grouper=self.grouper,
exclusions=self.exclusions,
@@ -2324,16 +2349,17 @@ def _wrap_agged_blocks(self, blocks):
def _iterate_column_groupbys(self):
for i, colname in enumerate(self.obj.columns):
- yield colname, SeriesGroupBy(self.obj.iloc[:, i], selection=colname,
+ yield colname, SeriesGroupBy(self.obj.iloc[:, i],
+ selection=colname,
grouper=self.grouper,
exclusions=self.exclusions)
def _apply_to_column_groupbys(self, func):
from pandas.tools.merge import concat
- return concat((func(col_groupby)
- for _, col_groupby in self._iterate_column_groupbys()),
- keys=self.obj.columns,
- axis=1)
+ return concat(
+ (func(col_groupby) for _, col_groupby
+ in self._iterate_column_groupbys()),
+ keys=self.obj.columns, axis=1)
def ohlc(self):
"""
@@ -2341,7 +2367,8 @@ def ohlc(self):
For multiple groupings, the result index will be a MultiIndex
"""
- return self._apply_to_column_groupbys(lambda x: x._cython_agg_general('ohlc'))
+ return self._apply_to_column_groupbys(
+ lambda x: x._cython_agg_general('ohlc'))
from pandas.tools.plotting import boxplot_frame_groupby
diff --git a/pandas/core/index.py b/pandas/core/index.py
index 096aff548dc9c..65eb8486c36d2 100644
--- a/pandas/core/index.py
+++ b/pandas/core/index.py
@@ -56,6 +56,7 @@ def _shouldbe_timestamp(obj):
_Identity = object
+
class Index(FrozenNDArray):
"""
@@ -144,7 +145,7 @@ def __new__(cls, data, dtype=None, copy=False, name=None, fastpath=False,
inferred = lib.infer_dtype(subarr)
if inferred == 'integer':
return Int64Index(subarr.astype('i8'), copy=copy, name=name)
- elif inferred in ['floating','mixed-integer-float']:
+ elif inferred in ['floating', 'mixed-integer-float']:
return Float64Index(subarr, copy=copy, name=name)
elif inferred != 'string':
if (inferred.startswith('datetime') or
@@ -179,7 +180,7 @@ def is_(self, other):
return self._id is getattr(other, '_id', Ellipsis)
def _reset_identity(self):
- "Initializes or resets ``_id`` attribute with new object"
+ """Initializes or resets ``_id`` attribute with new object"""
self._id = _Identity()
def view(self, *args, **kwargs):
@@ -191,8 +192,10 @@ def view(self, *args, **kwargs):
# construction helpers
@classmethod
def _scalar_data_error(cls, data):
- raise TypeError('{0}(...) must be called with a collection '
- 'of some kind, {1} was passed'.format(cls.__name__,repr(data)))
+ raise TypeError(
+ '{0}(...) must be called with a collection of some kind, {1} was '
+ 'passed'.format(cls.__name__, repr(data))
+ )
@classmethod
def _string_data_error(cls, data):
@@ -411,7 +414,7 @@ def is_integer(self):
return self.inferred_type in ['integer']
def is_floating(self):
- return self.inferred_type in ['floating','mixed-integer-float']
+ return self.inferred_type in ['floating', 'mixed-integer-float']
def is_numeric(self):
return self.inferred_type in ['integer', 'floating']
@@ -423,8 +426,9 @@ def holds_integer(self):
return self.inferred_type in ['integer', 'mixed-integer']
def _convert_scalar_indexer(self, key, typ=None):
- """ convert a scalar indexer, right now we are converting floats -> ints
- if the index supports it """
+ """ convert a scalar indexer, right now we are converting
+ floats -> ints if the index supports it
+ """
def to_int():
ikey = int(key)
@@ -463,7 +467,7 @@ def _convert_slice_indexer_getitem(self, key, is_index_slice=False):
whether positional or not """
if self.is_integer() or is_index_slice:
return key
- return self._convert_slice_indexer(key)
+ return self._convert_slice_indexer(key)
def _convert_slice_indexer(self, key, typ=None):
""" convert a slice indexer. disallow floats in the start/stop/step """
@@ -494,7 +498,8 @@ def is_int(v):
if typ == 'iloc':
return self._convert_slice_indexer_iloc(key)
elif typ == 'getitem':
- return self._convert_slice_indexer_getitem(key, is_index_slice=is_index_slice)
+ return self._convert_slice_indexer_getitem(
+ key, is_index_slice=is_index_slice)
# convert the slice to an indexer here
@@ -535,9 +540,9 @@ def _convert_list_indexer(self, key, typ=None):
def _convert_indexer_error(self, key, msg=None):
if msg is None:
msg = 'label'
- raise TypeError("the {0} [{1}] is not a proper indexer for this index type ({2})".format(msg,
- key,
- self.__class__.__name__))
+ raise TypeError("the {0} [{1}] is not a proper indexer for this index "
+ "type ({2})".format(msg, key, self.__class__.__name__))
+
def get_duplicates(self):
from collections import defaultdict
counter = defaultdict(lambda: 0)
@@ -750,11 +755,12 @@ def equals(self, other):
return np.array_equal(self, other)
def identical(self, other):
+ """Similar to equals, but check that other comparable attributes are
+ also equal
"""
- Similar to equals, but check that other comparable attributes are also equal
- """
- return self.equals(other) and all(
- (getattr(self, c, None) == getattr(other, c, None) for c in self._comparables))
+ return (self.equals(other) and
+ all((getattr(self, c, None) == getattr(other, c, None)
+ for c in self._comparables)))
def asof(self, label):
"""
@@ -1213,7 +1219,8 @@ def reindex(self, target, method=None, level=None, limit=None,
indexer = None
# to avoid aliasing an existing index
- if copy_if_needed and target.name != self.name and self.name is not None:
+ if (copy_if_needed and target.name != self.name and
+ self.name is not None):
if target.name is None:
target = self.copy()
@@ -1621,9 +1628,10 @@ class Int64Index(Index):
"""
Immutable ndarray implementing an ordered, sliceable set. The basic object
- storing axis labels for all pandas objects. Int64Index is a special case of `Index`
- with purely integer labels. This is the default index type used by the DataFrame
- and Series ctors when no explicit index is provided by the user.
+ storing axis labels for all pandas objects. Int64Index is a special case
+ of `Index` with purely integer labels. This is the default index type used
+ by the DataFrame and Series ctors when no explicit index is provided by the
+ user.
Parameters
----------
@@ -1664,7 +1672,8 @@ def __new__(cls, data, dtype=None, copy=False, name=None, fastpath=False):
elif issubclass(data.dtype.type, np.integer):
# don't force the upcast as we may be dealing
# with a platform int
- if dtype is None or not issubclass(np.dtype(dtype).type, np.integer):
+ if dtype is None or not issubclass(np.dtype(dtype).type,
+ np.integer):
dtype = np.int64
subarr = np.array(data, dtype=dtype, copy=copy)
@@ -1719,8 +1728,8 @@ def _wrap_joined_index(self, joined, other):
class Float64Index(Index):
"""
Immutable ndarray implementing an ordered, sliceable set. The basic object
- storing axis labels for all pandas objects. Float64Index is a special case of `Index`
- with purely floating point labels.
+ storing axis labels for all pandas objects. Float64Index is a special case
+ of `Index` with purely floating point labels.
Parameters
----------
@@ -1774,14 +1783,15 @@ def inferred_type(self):
def astype(self, dtype):
if np.dtype(dtype) != np.object_:
- raise TypeError(
- "Setting %s dtype to anything other than object is not supported" % self.__class__)
- return Index(self.values,name=self.name,dtype=object)
+ raise TypeError('Setting %s dtype to anything other than object '
+ 'is not supported' % self.__class__)
+ return Index(self.values, name=self.name, dtype=object)
def _convert_scalar_indexer(self, key, typ=None):
if typ == 'iloc':
- return super(Float64Index, self)._convert_scalar_indexer(key, typ=typ)
+ return super(Float64Index, self)._convert_scalar_indexer(key,
+ typ=typ)
return key
def _convert_slice_indexer(self, key, typ=None):
@@ -1793,10 +1803,11 @@ def _convert_slice_indexer(self, key, typ=None):
pass
# allow floats here
- self._validate_slicer(key, lambda v: v is None or is_integer(v) or is_float(v))
+ self._validate_slicer(
+ key, lambda v: v is None or is_integer(v) or is_float(v))
# translate to locations
- return self.slice_indexer(key.start,key.stop,key.step)
+ return self.slice_indexer(key.start, key.stop, key.step)
def get_value(self, series, key):
""" we always want to get an index value, never a value """
@@ -1980,8 +1991,8 @@ def _set_labels(self, labels, copy=False, validate=True,
verify_integrity=False):
if validate and len(labels) != self.nlevels:
raise ValueError("Length of labels must match length of levels")
- self._labels = FrozenList(_ensure_frozen(labs, copy=copy)._shallow_copy()
- for labs in labels)
+ self._labels = FrozenList(
+ _ensure_frozen(labs, copy=copy)._shallow_copy() for labs in labels)
self._tuples = None
self._reset_cache()
@@ -2108,12 +2119,12 @@ def __repr__(self):
res = res.encode(encoding)
return res
-
def __unicode__(self):
"""
Return a string representation for a particular Index
- Invoked by unicode(df) in py2 only. Yields a Unicode String in both py2/py3.
+ Invoked by unicode(df) in py2 only. Yields a Unicode String in both
+ py2/py3.
"""
rows = self.format(names=True)
max_rows = get_option('display.max_rows')
@@ -2133,7 +2144,7 @@ def _convert_slice_indexer(self, key, typ=None):
if typ == 'iloc':
return self._convert_slice_indexer_iloc(key)
- return super(MultiIndex,self)._convert_slice_indexer(key, typ=typ)
+ return super(MultiIndex, self)._convert_slice_indexer(key, typ=typ)
def _get_names(self):
return FrozenList(level.name for level in self.levels)
@@ -2142,8 +2153,8 @@ def _set_names(self, values, validate=True):
"""
sets names on levels. WARNING: mutates!
- Note that you generally want to set this *after* changing levels, so that it only
- acts on copies"""
+ Note that you generally want to set this *after* changing levels, so
+ that it only acts on copies"""
values = list(values)
if validate and len(values) != self.nlevels:
raise ValueError('Length of names must match length of levels')
@@ -2189,8 +2200,8 @@ def _get_level_number(self, level):
level += self.nlevels
# Note: levels are zero-based
elif level >= self.nlevels:
- raise IndexError('Too many levels: Index has only %d levels, not %d'
- % (self.nlevels, level + 1))
+ raise IndexError('Too many levels: Index has only %d levels, '
+ 'not %d' % (self.nlevels, level + 1))
return level
_tuples = None
@@ -2288,8 +2299,8 @@ def _try_mi(k):
# a Timestamp will raise a TypeError in a multi-index
# rather than a KeyError, try it here
- if isinstance(key, (datetime.datetime,np.datetime64)) or (
- compat.PY3 and isinstance(key, compat.string_types)):
+ if isinstance(key, (datetime.datetime, np.datetime64)) or (
+ compat.PY3 and isinstance(key, compat.string_types)):
try:
return _try_mi(Timestamp(key))
except:
@@ -2338,7 +2349,8 @@ def format(self, space=2, sparsify=None, adjoin=True, names=False,
else:
# weird all NA case
- formatted = [com.pprint_thing(na_rep if isnull(x) else x, escape_chars=('\t', '\r', '\n'))
+ formatted = [com.pprint_thing(na_rep if isnull(x) else x,
+ escape_chars=('\t', '\r', '\n'))
for x in com.take_1d(lev.values, lab)]
stringified_levels.append(formatted)
@@ -2347,7 +2359,8 @@ def format(self, space=2, sparsify=None, adjoin=True, names=False,
level = []
if names:
- level.append(com.pprint_thing(name, escape_chars=('\t', '\r', '\n'))
+ level.append(com.pprint_thing(name,
+ escape_chars=('\t', '\r', '\n'))
if name is not None else '')
level.extend(np.array(lev, dtype=object))
@@ -2847,7 +2860,7 @@ def reindex(self, target, method=None, level=None, limit=None,
else:
if takeable:
if method is not None or limit is not None:
- raise ValueError("cannot do a takeable reindex with "
+ raise ValueError("cannot do a takeable reindex "
"with a method or limit")
return self[target], target
@@ -3039,17 +3052,24 @@ def partial_selection(key):
raise KeyError(key)
ilevels = [i for i in range(len(key))
if key[i] != slice(None, None)]
- return indexer, _maybe_drop_levels(indexer, ilevels, drop_level)
+ return indexer, _maybe_drop_levels(indexer, ilevels,
+ drop_level)
if len(key) == self.nlevels:
if self.is_unique:
- # here we have a completely specified key, but are using some partial string matching here
+ # here we have a completely specified key, but are
+ # using some partial string matching here
# GH4758
- can_index_exactly = any(
- [l.is_all_dates and not isinstance(k, compat.string_types) for k, l in zip(key, self.levels)])
- if any([l.is_all_dates for k, l in zip(key, self.levels)]) and not can_index_exactly:
+ can_index_exactly = any([
+ (l.is_all_dates and
+ not isinstance(k, compat.string_types))
+ for k, l in zip(key, self.levels)
+ ])
+ if any([
+ l.is_all_dates for k, l in zip(key, self.levels)
+ ]) and not can_index_exactly:
indexer = slice(*self.slice_locs(key, key))
# we have a multiple selection here
@@ -3058,7 +3078,8 @@ def partial_selection(key):
key = tuple(self[indexer].tolist()[0])
- return self._engine.get_loc(_values_from_object(key)), None
+ return (self._engine.get_loc(_values_from_object(key)),
+ None)
else:
return partial_selection(key)
else:
@@ -3089,7 +3110,8 @@ def partial_selection(key):
indexer = slice(None, None)
ilevels = [i for i in range(len(key))
if key[i] != slice(None, None)]
- return indexer, _maybe_drop_levels(indexer, ilevels, drop_level)
+ return indexer, _maybe_drop_levels(indexer, ilevels,
+ drop_level)
else:
indexer = self._get_level_indexer(key, level=level)
new_index = _maybe_drop_levels(indexer, [level], drop_level)
@@ -3277,8 +3299,8 @@ def _assert_can_do_setop(self, other):
def astype(self, dtype):
if np.dtype(dtype) != np.object_:
- raise TypeError(
- "Setting %s dtype to anything other than object is not supported" % self.__class__)
+ raise TypeError('Setting %s dtype to anything other than object '
+ 'is not supported' % self.__class__)
return self._shallow_copy()
def insert(self, loc, item):
@@ -3530,8 +3552,9 @@ def _get_consensus_names(indexes):
# find the non-none names, need to tupleify to make
# the set hashable, then reverse on return
- consensus_names = set([tuple(i.names)
- for i in indexes if all(n is not None for n in i.names)])
+ consensus_names = set([
+ tuple(i.names) for i in indexes if all(n is not None for n in i.names)
+ ])
if len(consensus_names) == 1:
return list(list(consensus_names)[0])
return [None] * indexes[0].nlevels
diff --git a/pandas/core/indexing.py b/pandas/core/indexing.py
index b462624dde1f5..ab9000fd21a0a 100644
--- a/pandas/core/indexing.py
+++ b/pandas/core/indexing.py
@@ -12,15 +12,16 @@
import numpy as np
+
# the supported indexers
def get_indexers_list():
return [
- ('ix' ,_IXIndexer ),
- ('iloc',_iLocIndexer ),
- ('loc' ,_LocIndexer ),
- ('at' ,_AtIndexer ),
- ('iat' ,_iAtIndexer ),
+ ('ix', _IXIndexer),
+ ('iloc', _iLocIndexer),
+ ('loc', _LocIndexer),
+ ('at', _AtIndexer),
+ ('iat', _iAtIndexer),
]
# "null slice"
@@ -33,7 +34,7 @@ class IndexingError(Exception):
class _NDFrameIndexer(object):
_valid_types = None
- _exception = KeyError
+ _exception = KeyError
def __init__(self, obj, name):
self.obj = obj
@@ -70,7 +71,8 @@ def _get_loc(self, key, axis=0):
return self.obj._ixs(key, axis=axis)
def _slice(self, obj, axis=0, raise_on_error=False, typ=None):
- return self.obj._slice(obj, axis=axis, raise_on_error=raise_on_error, typ=typ)
+ return self.obj._slice(obj, axis=axis, raise_on_error=raise_on_error,
+ typ=typ)
def __setitem__(self, key, value):
# kludgetastic
@@ -101,8 +103,9 @@ def _has_valid_tuple(self, key):
for i, k in enumerate(key):
if i >= self.obj.ndim:
raise IndexingError('Too many indexers')
- if not self._has_valid_type(k,i):
- raise ValueError("Location based indexing can only have [%s] types" % self._valid_types)
+ if not self._has_valid_type(k, i):
+ raise ValueError("Location based indexing can only have [%s] "
+ "types" % self._valid_types)
def _convert_tuple(self, key, is_setter=False):
keyidx = []
@@ -113,13 +116,13 @@ def _convert_tuple(self, key, is_setter=False):
def _convert_scalar_indexer(self, key, axis):
# if we are accessing via lowered dim, use the last dim
- ax = self.obj._get_axis(min(axis,self.ndim-1))
+ ax = self.obj._get_axis(min(axis, self.ndim-1))
# a scalar
return ax._convert_scalar_indexer(key, typ=self.name)
def _convert_slice_indexer(self, key, axis):
# if we are accessing via lowered dim, use the last dim
- ax = self.obj._get_axis(min(axis,self.ndim-1))
+ ax = self.obj._get_axis(min(axis, self.ndim-1))
return ax._convert_slice_indexer(key, typ=self.name)
def _has_valid_setitem_indexer(self, indexer):
@@ -129,11 +132,12 @@ def _has_valid_positional_setitem_indexer(self, indexer):
""" validate that an positional indexer cannot enlarge its target
will raise if needed, does not modify the indexer externally """
if isinstance(indexer, dict):
- raise IndexError("{0} cannot enlarge its target object".format(self.name))
+ raise IndexError("{0} cannot enlarge its target object"
+ .format(self.name))
else:
if not isinstance(indexer, tuple):
indexer = self._tuplify(indexer)
- for ax, i in zip(self.obj.axes,indexer):
+ for ax, i in zip(self.obj.axes, indexer):
if isinstance(i, slice):
# should check the stop slice?
pass
@@ -142,9 +146,11 @@ def _has_valid_positional_setitem_indexer(self, indexer):
pass
elif com.is_integer(i):
if i >= len(ax):
- raise IndexError("{0} cannot enlarge its target object".format(self.name))
+ raise IndexError("{0} cannot enlarge its target object"
+ .format(self.name))
elif isinstance(i, dict):
- raise IndexError("{0} cannot enlarge its target object".format(self.name))
+ raise IndexError("{0} cannot enlarge its target object"
+ .format(self.name))
return True
@@ -157,34 +163,41 @@ def _setitem_with_indexer(self, indexer, value):
# maybe partial set
take_split_path = self.obj._is_mixed_type
- if isinstance(indexer,tuple):
+ if isinstance(indexer, tuple):
nindexer = []
for i, idx in enumerate(indexer):
if isinstance(idx, dict):
# reindex the axis to the new value
# and set inplace
- key,_ = _convert_missing_indexer(idx)
+ key, _ = _convert_missing_indexer(idx)
- # if this is the items axes, then take the main missing path
- # first; this correctly sets the dtype and avoids cache issues
- # essentially this separates out the block that is needed to possibly
- # be modified
+ # if this is the items axes, then take the main missing
+ # path first
+ # this correctly sets the dtype and avoids cache issues
+ # essentially this separates out the block that is needed
+ # to possibly be modified
if self.ndim > 1 and i == self.obj._info_axis_number:
# add the new item, and set the value
# must have all defined axes if we have a scalar
- # or a list-like on the non-info axes if we have a list-like
- len_non_info_axes = [ len(_ax) for _i, _ax in enumerate(self.obj.axes) if _i != i ]
- if any([ not l for l in len_non_info_axes ]):
+ # or a list-like on the non-info axes if we have a
+ # list-like
+ len_non_info_axes = [
+ len(_ax) for _i, _ax in enumerate(self.obj.axes)
+ if _i != i
+ ]
+ if any([not l for l in len_non_info_axes]):
if not is_list_like(value):
- raise ValueError("cannot set a frame with no defined index and a scalar")
+ raise ValueError("cannot set a frame with no "
+ "defined index and a scalar")
self.obj[key] = value
return self.obj
self.obj[key] = np.nan
- new_indexer = _convert_from_missing_indexer_tuple(indexer, self.obj.axes)
+ new_indexer = _convert_from_missing_indexer_tuple(
+ indexer, self.obj.axes)
self._setitem_with_indexer(new_indexer, value)
return self.obj
@@ -194,10 +207,10 @@ def _setitem_with_indexer(self, indexer, value):
# so the object is the same
index = self.obj._get_axis(i)
labels = _safe_append_to_index(index, key)
- self.obj._data = self.obj.reindex_axis(labels,i)._data
+ self.obj._data = self.obj.reindex_axis(labels, i)._data
self.obj._maybe_update_cacher(clear=True)
- if isinstance(labels,MultiIndex):
+ if isinstance(labels, MultiIndex):
self.obj.sortlevel(inplace=True)
labels = self.obj._get_axis(i)
@@ -225,10 +238,11 @@ def _setitem_with_indexer(self, indexer, value):
# this preserves dtype of the value
new_values = Series([value]).values
if len(self.obj.values):
- new_values = np.concatenate([self.obj.values, new_values])
+ new_values = np.concatenate([self.obj.values,
+ new_values])
- self.obj._data = self.obj._constructor(new_values,
- index=new_index, name=self.obj.name)._data
+ self.obj._data = self.obj._constructor(
+ new_values, index=new_index, name=self.obj.name)._data
self.obj._maybe_update_cacher(clear=True)
return self.obj
@@ -236,24 +250,28 @@ def _setitem_with_indexer(self, indexer, value):
# no columns and scalar
if not len(self.obj.columns):
- raise ValueError("cannot set a frame with no defined columns")
+ raise ValueError(
+ "cannot set a frame with no defined columns"
+ )
index = self.obj._get_axis(0)
labels = _safe_append_to_index(index, indexer)
- self.obj._data = self.obj.reindex_axis(labels,0)._data
+ self.obj._data = self.obj.reindex_axis(labels, 0)._data
self.obj._maybe_update_cacher(clear=True)
- return getattr(self.obj,self.name).__setitem__(indexer,value)
+ return getattr(self.obj, self.name).__setitem__(indexer,
+ value)
# set using setitem (Panel and > dims)
elif self.ndim >= 3:
- return self.obj.__setitem__(indexer,value)
+ return self.obj.__setitem__(indexer, value)
# set
info_axis = self.obj._info_axis_number
item_labels = self.obj._get_axis(info_axis)
# if we have a complicated setup, take the split path
- if isinstance(indexer, tuple) and any([ isinstance(ax,MultiIndex) for ax in self.obj.axes ]):
+ if (isinstance(indexer, tuple) and
+ any([isinstance(ax, MultiIndex) for ax in self.obj.axes])):
take_split_path = True
# align and set the values
@@ -270,8 +288,10 @@ def _setitem_with_indexer(self, indexer, value):
info_idx = [info_idx]
labels = item_labels[info_idx]
- # if we have a partial multiindex, then need to adjust the plane indexer here
- if len(labels) == 1 and isinstance(self.obj[labels[0]].index,MultiIndex):
+ # if we have a partial multiindex, then need to adjust the plane
+ # indexer here
+ if (len(labels) == 1 and
+ isinstance(self.obj[labels[0]].index, MultiIndex)):
item = labels[0]
obj = self.obj[item]
index = obj.index
@@ -282,19 +302,23 @@ def _setitem_with_indexer(self, indexer, value):
except:
pass
plane_indexer = tuple([idx]) + indexer[info_axis + 1:]
- lplane_indexer = _length_of_indexer(plane_indexer[0],index)
+ lplane_indexer = _length_of_indexer(plane_indexer[0], index)
- # require that we are setting the right number of values that we are indexing
+ # require that we are setting the right number of values that
+ # we are indexing
if is_list_like(value) and lplane_indexer != len(value):
if len(obj[idx]) != len(value):
- raise ValueError("cannot set using a multi-index selection indexer with a different length than the value")
+ raise ValueError(
+ "cannot set using a multi-index selection indexer "
+ "with a different length than the value"
+ )
# we can directly set the series here
# as we select a slice indexer on the mi
idx = index._convert_slice_indexer(idx)
obj = obj.copy()
- obj._data = obj._data.setitem(tuple([idx]),value)
+ obj._data = obj._data.setitem(tuple([idx]), value)
self.obj[item] = obj
return
@@ -303,7 +327,8 @@ def _setitem_with_indexer(self, indexer, value):
plane_indexer = indexer[:info_axis] + indexer[info_axis + 1:]
if info_axis > 0:
plane_axis = self.obj.axes[:info_axis][0]
- lplane_indexer = _length_of_indexer(plane_indexer[0],plane_axis)
+ lplane_indexer = _length_of_indexer(plane_indexer[0],
+ plane_axis)
else:
lplane_indexer = 0
@@ -313,7 +338,7 @@ def setter(item, v):
# set the item, possibly having a dtype change
s = s.copy()
- s._data = s._data.setitem(pi,v)
+ s._data = s._data.setitem(pi, v)
s._maybe_update_cacher(clear=True)
self.obj[item] = s
@@ -352,11 +377,11 @@ def can_do_equal_len():
# we have an equal len ndarray to our labels
elif isinstance(value, np.ndarray) and value.ndim == 2:
if len(labels) != value.shape[1]:
- raise ValueError('Must have equal len keys and value when'
- ' setting with an ndarray')
+ raise ValueError('Must have equal len keys and value '
+ 'when setting with an ndarray')
for i, item in enumerate(labels):
- setter(item, value[:,i])
+ setter(item, value[:, i])
# we have an equal len list/ndarray
elif can_do_equal_len():
@@ -366,8 +391,8 @@ def can_do_equal_len():
else:
if len(labels) != len(value):
- raise ValueError('Must have equal len keys and value when'
- ' setting with an iterable')
+                        raise ValueError('Must have equal len keys and value '
+                                         'when setting with an iterable')
for item, v in zip(labels, value):
setter(item, v)
@@ -390,14 +415,14 @@ def can_do_equal_len():
if isinstance(value, ABCPanel):
value = self._align_panel(indexer, value)
- self.obj._data = self.obj._data.setitem(indexer,value)
+ self.obj._data = self.obj._data.setitem(indexer, value)
self.obj._maybe_update_cacher(clear=True)
def _align_series(self, indexer, ser):
# indexer to assign Series can be tuple or scalar
if isinstance(indexer, tuple):
- aligners = [ not _is_null_slice(idx) for idx in indexer ]
+ aligners = [not _is_null_slice(idx) for idx in indexer]
sum_aligners = sum(aligners)
single_aligner = sum_aligners == 1
is_frame = self.obj.ndim == 2
@@ -415,15 +440,17 @@ def _align_series(self, indexer, ser):
# panel
elif is_panel:
- single_aligner = single_aligner and (aligners[1] or aligners[2])
-
- # we have a frame, with multiple indexers on both axes; and a series,
- # so need to broadcast (see GH5206)
- if sum_aligners == self.ndim and all([ com._is_sequence(_) for _ in indexer ]):
-
- ser = ser.reindex(obj.axes[0][indexer[0].ravel()],copy=True).values
+ single_aligner = (single_aligner and
+ (aligners[1] or aligners[2]))
+
+ # we have a frame, with multiple indexers on both axes; and a
+ # series, so need to broadcast (see GH5206)
+ if (sum_aligners == self.ndim and
+ all([com._is_sequence(_) for _ in indexer])):
+ ser = ser.reindex(obj.axes[0][indexer[0].ravel()],
+ copy=True).values
l = len(indexer[1].ravel())
- ser = np.tile(ser,l).reshape(l,-1).T
+ ser = np.tile(ser, l).reshape(l, -1).T
return ser
for i, idx in enumerate(indexer):
@@ -462,14 +489,14 @@ def _align_series(self, indexer, ser):
if len(labels & ser.index):
ser = ser.reindex(labels)
else:
- broadcast.append((n,len(labels)))
+ broadcast.append((n, len(labels)))
# broadcast along other dims
ser = ser.values.copy()
- for (axis,l) in broadcast:
- shape = [ -1 ] * (len(broadcast)+1)
+ for (axis, l) in broadcast:
+ shape = [-1] * (len(broadcast)+1)
shape[axis] = l
- ser = np.tile(ser,l).reshape(shape)
+ ser = np.tile(ser, l).reshape(shape)
if self.obj.ndim == 3:
ser = ser.T
@@ -509,7 +536,7 @@ def _align_frame(self, indexer, df):
if len(sindexers) == 1 and idx is None and cols is None:
if sindexers[0] == 0:
df = df.T
- return self.obj.conform(df,axis=sindexers[0])
+ return self.obj.conform(df, axis=sindexers[0])
df = df.T
if idx is not None and cols is not None:
@@ -551,7 +578,8 @@ def _align_frame(self, indexer, df):
def _align_panel(self, indexer, df):
is_frame = self.obj.ndim == 2
is_panel = self.obj.ndim >= 3
- raise NotImplementedError("cannot set using an indexer with a Panel yet!")
+ raise NotImplementedError("cannot set using an indexer with a Panel "
+ "yet!")
def _getitem_tuple(self, tup):
try:
@@ -575,7 +603,7 @@ def _getitem_tuple(self, tup):
if _is_null_slice(key):
continue
- retval = getattr(retval,self.name)._getitem_axis(key, axis=i)
+ retval = getattr(retval, self.name)._getitem_axis(key, axis=i)
return retval
@@ -590,7 +618,7 @@ def _multi_take_opportunity(self, tup):
return False
# just too complicated
- for indexer, ax in zip(tup,self.obj._data.axes):
+ for indexer, ax in zip(tup, self.obj._data.axes):
if isinstance(ax, MultiIndex):
return False
elif com._is_bool_indexer(indexer):
@@ -599,11 +627,15 @@ def _multi_take_opportunity(self, tup):
return True
def _multi_take(self, tup):
- """ create the reindex map for our objects, raise the _exception if we can't create the indexer """
-
+ """ create the reindex map for our objects, raise the _exception if we
+ can't create the indexer
+ """
try:
o = self.obj
- d = dict([ (a,self._convert_for_reindex(t, axis=o._get_axis_number(a))) for t, a in zip(tup, o._AXIS_ORDERS) ])
+ d = dict([
+ (a, self._convert_for_reindex(t, axis=o._get_axis_number(a)))
+ for t, a in zip(tup, o._AXIS_ORDERS)
+ ])
return o.reindex(**d)
except:
raise self._exception
@@ -682,7 +714,7 @@ def _getitem_lowerdim(self, tup):
if len(new_key) == 1:
new_key, = new_key
- return getattr(section,self.name)[new_key]
+ return getattr(section, self.name)[new_key]
raise IndexingError('not applicable')
@@ -769,7 +801,8 @@ def _reindex(keys, level=None):
else:
indexer, missing = labels.get_indexer_non_unique(keyarr)
check = indexer != -1
- result = self.obj.take(indexer[check], axis=axis, convert=False)
+ result = self.obj.take(indexer[check], axis=axis,
+ convert=False)
# need to merge the result labels and the missing labels
if len(missing):
@@ -781,33 +814,39 @@ def _reindex(keys, level=None):
cur_labels = result._get_axis(axis).values
cur_indexer = com._ensure_int64(l[check])
- new_labels = np.empty(tuple([len(indexer)]),dtype=object)
- new_labels[cur_indexer] = cur_labels
+ new_labels = np.empty(tuple([len(indexer)]), dtype=object)
+ new_labels[cur_indexer] = cur_labels
new_labels[missing_indexer] = missing_labels
# reindex with the specified axis
ndim = self.obj.ndim
if axis+1 > ndim:
- raise AssertionError("invalid indexing error with non-unique index")
+ raise AssertionError("invalid indexing error with "
+ "non-unique index")
# a unique indexer
if keyarr_is_unique:
- new_indexer = (Index(cur_indexer) + Index(missing_indexer)).values
+ new_indexer = (Index(cur_indexer) +
+ Index(missing_indexer)).values
new_indexer[missing_indexer] = -1
- # we have a non_unique selector, need to use the original indexer here
+ # we have a non_unique selector, need to use the original
+ # indexer here
else:
# need to retake to have the same size as the indexer
rindexer = indexer.values
rindexer[~check] = 0
- result = self.obj.take(rindexer, axis=axis, convert=False)
+ result = self.obj.take(rindexer, axis=axis,
+ convert=False)
# reset the new indexer to account for the new size
new_indexer = np.arange(len(result))
new_indexer[~check] = -1
- result = result._reindex_with_indexers({ axis : [ new_labels, new_indexer ] }, copy=True, allow_dups=True)
+ result = result._reindex_with_indexers({
+ axis: [new_labels, new_indexer]
+ }, copy=True, allow_dups=True)
return result
@@ -853,11 +892,12 @@ def _convert_to_indexer(self, obj, axis=0, is_setter=False):
# always valid
if self.name == 'loc':
- return { 'key' : obj }
+ return {'key': obj}
# a positional
if obj >= len(self.obj) and not isinstance(labels, MultiIndex):
- raise ValueError("cannot set by positional indexing with enlargement")
+ raise ValueError("cannot set by positional indexing with "
+ "enlargement")
return obj
@@ -898,7 +938,8 @@ def _convert_to_indexer(self, obj, axis=0, is_setter=False):
# non-unique (dups)
else:
- indexer, missing = labels.get_indexer_non_unique(objarr)
+ (indexer,
+ missing) = labels.get_indexer_non_unique(objarr)
check = indexer
mask = check == -1
@@ -906,7 +947,7 @@ def _convert_to_indexer(self, obj, axis=0, is_setter=False):
# mi here
if isinstance(obj, tuple) and is_setter:
- return { 'key' : obj }
+ return {'key': obj}
raise KeyError('%s not in index' % objarr[mask])
return indexer
@@ -914,11 +955,10 @@ def _convert_to_indexer(self, obj, axis=0, is_setter=False):
else:
try:
return labels.get_loc(obj)
- except (KeyError):
-
+ except KeyError:
# allow a not found key only if we are a setter
if not is_list_like(obj) and is_setter:
- return { 'key' : obj }
+ return {'key': obj}
raise
def _tuplify(self, loc):
@@ -938,6 +978,7 @@ def _get_slice_axis(self, slice_obj, axis=0):
else:
return self.obj.take(indexer, axis=axis)
+
class _IXIndexer(_NDFrameIndexer):
""" A primarily location based indexer, with integer fallback """
@@ -959,8 +1000,9 @@ def _has_valid_type(self, key, axis):
return True
+
class _LocationIndexer(_NDFrameIndexer):
- _exception = Exception
+ _exception = Exception
def __getitem__(self, key):
if type(key) is tuple:
@@ -977,8 +1019,9 @@ def _getbool_axis(self, key, axis=0):
inds, = key.nonzero()
try:
return self.obj.take(inds, axis=axis, convert=False)
- except (Exception) as detail:
+ except Exception as detail:
raise self._exception(detail)
+
def _get_slice_axis(self, slice_obj, axis=0):
""" this is pretty simple as we just have to deal with labels """
obj = self.obj
@@ -986,17 +1029,21 @@ def _get_slice_axis(self, slice_obj, axis=0):
return obj
labels = obj._get_axis(axis)
- indexer = labels.slice_indexer(slice_obj.start, slice_obj.stop, slice_obj.step)
+ indexer = labels.slice_indexer(slice_obj.start, slice_obj.stop,
+ slice_obj.step)
if isinstance(indexer, slice):
return self._slice(indexer, axis=axis, typ='iloc')
else:
return self.obj.take(indexer, axis=axis)
+
class _LocIndexer(_LocationIndexer):
""" purely label based location based indexing """
- _valid_types = "labels (MUST BE IN THE INDEX), slices of labels (BOTH endpoints included! Can be slices of integers if the index is integers), listlike of labels, boolean"
- _exception = KeyError
+ _valid_types = ("labels (MUST BE IN THE INDEX), slices of labels (BOTH "
+ "endpoints included! Can be slices of integers if the "
+ "index is integers), listlike of labels, boolean")
+ _exception = KeyError
def _has_valid_type(self, key, axis):
ax = self.obj._get_axis(axis)
@@ -1016,10 +1063,16 @@ def _has_valid_type(self, key, axis):
else:
if key.start is not None:
if key.start not in ax:
- raise KeyError("start bound [%s] is not the [%s]" % (key.start,self.obj._get_axis_name(axis)))
+ raise KeyError(
+ "start bound [%s] is not the [%s]" %
+ (key.start, self.obj._get_axis_name(axis))
+ )
if key.stop is not None:
if key.stop not in ax:
- raise KeyError("stop bound [%s] is not in the [%s]" % (key.stop,self.obj._get_axis_name(axis)))
+ raise KeyError(
+ "stop bound [%s] is not in the [%s]" %
+ (key.stop, self.obj._get_axis_name(axis))
+ )
elif com._is_bool_indexer(key):
return True
@@ -1033,7 +1086,8 @@ def _has_valid_type(self, key, axis):
# require all elements in the index
idx = _ensure_index(key)
if not idx.isin(ax).all():
- raise KeyError("[%s] are not in ALL in the [%s]" % (key,self.obj._get_axis_name(axis)))
+                raise KeyError("[%s] are not all in the [%s]" %
+ (key, self.obj._get_axis_name(axis)))
return True
@@ -1041,8 +1095,10 @@ def _has_valid_type(self, key, axis):
def error():
if isnull(key):
- raise ValueError("cannot use label indexing with a null key")
- raise KeyError("the label [%s] is not in the [%s]" % (key,self.obj._get_axis_name(axis)))
+ raise ValueError(
+ "cannot use label indexing with a null key")
+ raise KeyError("the label [%s] is not in the [%s]" %
+ (key, self.obj._get_axis_name(axis)))
try:
key = self._convert_scalar_indexer(key, axis)
@@ -1063,7 +1119,7 @@ def _getitem_axis(self, key, axis=0):
labels = self.obj._get_axis(axis)
if isinstance(key, slice):
- self._has_valid_type(key,axis)
+ self._has_valid_type(key, axis)
return self._get_slice_axis(key, axis=axis)
elif com._is_bool_indexer(key):
return self._getbool_axis(key, axis=axis)
@@ -1075,23 +1131,31 @@ def _getitem_axis(self, key, axis=0):
return self._getitem_iterable(key, axis=axis)
else:
- self._has_valid_type(key,axis)
+ self._has_valid_type(key, axis)
return self._get_label(key, axis=axis)
+
class _iLocIndexer(_LocationIndexer):
""" purely integer based location based indexing """
- _valid_types = "integer, integer slice (START point is INCLUDED, END point is EXCLUDED), listlike of integers, boolean array"
- _exception = IndexError
+ _valid_types = ("integer, integer slice (START point is INCLUDED, END "
+ "point is EXCLUDED), listlike of integers, boolean array")
+ _exception = IndexError
def _has_valid_type(self, key, axis):
if com._is_bool_indexer(key):
- if hasattr(key,'index') and isinstance(key.index,Index):
+ if hasattr(key, 'index') and isinstance(key.index, Index):
if key.index.inferred_type == 'integer':
- raise NotImplementedError("iLocation based boolean indexing on an integer type is not available")
- raise ValueError("iLocation based boolean indexing cannot use an indexable as a mask")
+ raise NotImplementedError(
+ "iLocation based boolean indexing on an integer type "
+ "is not available"
+ )
+ raise ValueError("iLocation based boolean indexing cannot use "
+ "an indexable as a mask")
return True
- return isinstance(key, slice) or com.is_integer(key) or _is_list_like(key)
+ return (isinstance(key, slice) or
+ com.is_integer(key) or
+ _is_list_like(key))
def _has_valid_setitem_indexer(self, indexer):
self._has_valid_positional_setitem_indexer(indexer)
@@ -1112,7 +1176,7 @@ def _getitem_tuple(self, tup):
if _is_null_slice(key):
continue
- retval = getattr(retval,self.name)._getitem_axis(key, axis=i)
+ retval = getattr(retval, self.name)._getitem_axis(key, axis=i)
return retval
@@ -1123,18 +1187,19 @@ def _get_slice_axis(self, slice_obj, axis=0):
return obj
if isinstance(slice_obj, slice):
- return self._slice(slice_obj, axis=axis, raise_on_error=True, typ='iloc')
+ return self._slice(slice_obj, axis=axis, raise_on_error=True,
+ typ='iloc')
else:
return self.obj.take(slice_obj, axis=axis)
def _getitem_axis(self, key, axis=0):
if isinstance(key, slice):
- self._has_valid_type(key,axis)
+ self._has_valid_type(key, axis)
return self._get_slice_axis(key, axis=axis)
elif com._is_bool_indexer(key):
- self._has_valid_type(key,axis)
+ self._has_valid_type(key, axis)
return self._getbool_axis(key, axis=axis)
# a single integer or a list of integers
@@ -1148,16 +1213,18 @@ def _getitem_axis(self, key, axis=0):
key = self._convert_scalar_indexer(key, axis)
if not com.is_integer(key):
- raise TypeError("Cannot index by location index with a non-integer key")
+ raise TypeError("Cannot index by location index with a "
+ "non-integer key")
- return self._get_loc(key,axis=axis)
+ return self._get_loc(key, axis=axis)
def _convert_to_indexer(self, obj, axis=0, is_setter=False):
""" much simpler as we only have to deal with our valid types """
- if self._has_valid_type(obj,axis):
+ if self._has_valid_type(obj, axis):
return obj
- raise ValueError("Can only index by location with a [%s]" % self._valid_types)
+ raise ValueError("Can only index by location with a [%s]" %
+ self._valid_types)
class _ScalarAccessIndexer(_NDFrameIndexer):
@@ -1171,7 +1238,7 @@ def __getitem__(self, key):
# we could have a convertible item here (e.g. Timestamp)
if not _is_list_like(key):
- key = tuple([ key ])
+ key = tuple([key])
else:
raise ValueError('Invalid call for scalar access (getting)!')
@@ -1182,15 +1249,18 @@ def __setitem__(self, key, value):
if not isinstance(key, tuple):
key = self._tuplify(key)
if len(key) != self.obj.ndim:
- raise ValueError('Not enough indexers for scalar access (setting)!')
+ raise ValueError('Not enough indexers for scalar access '
+ '(setting)!')
key = self._convert_key(key)
key.append(value)
self.obj.set_value(*key)
+
class _AtIndexer(_ScalarAccessIndexer):
""" label based scalar accessor """
pass
+
class _iAtIndexer(_ScalarAccessIndexer):
""" integer based scalar accessor """
@@ -1200,17 +1270,20 @@ def _has_valid_setitem_indexer(self, indexer):
def _convert_key(self, key):
""" require integer args (and convert to label arguments) """
ckey = []
- for a, i in zip(self.obj.axes,key):
+ for a, i in zip(self.obj.axes, key):
if not com.is_integer(i):
- raise ValueError("iAt based indexing can only have integer indexers")
+ raise ValueError("iAt based indexing can only have integer "
+ "indexers")
ckey.append(a[i])
return ckey
# 32-bit floating point machine epsilon
_eps = np.finfo('f4').eps
-def _length_of_indexer(indexer,target=None):
- """ return the length of a single non-tuple indexer which could be a slice """
+
+def _length_of_indexer(indexer, target=None):
+ """return the length of a single non-tuple indexer which could be a slice
+ """
if target is not None and isinstance(indexer, slice):
l = len(target)
start = indexer.start
@@ -1235,8 +1308,10 @@ def _length_of_indexer(indexer,target=None):
return 1
raise AssertionError("cannot find the length of the indexer")
+
def _convert_to_index_sliceable(obj, key):
- """ if we are index sliceable, then return my slicer, otherwise return None """
+ """if we are index sliceable, then return my slicer, otherwise return None
+ """
idx = obj.index
if isinstance(key, slice):
return idx._convert_slice_indexer(key, typ='getitem')
@@ -1256,6 +1331,7 @@ def _convert_to_index_sliceable(obj, key):
return None
+
def _is_index_slice(obj):
def _is_valid_index(x):
return (com.is_integer(x) or com.is_float(x)
@@ -1301,11 +1377,13 @@ def _setitem_with_indexer(self, indexer, value):
# need to delegate to the super setter
if isinstance(indexer, dict):
- return super(_SeriesIndexer, self)._setitem_with_indexer(indexer, value)
+ return super(_SeriesIndexer, self)._setitem_with_indexer(indexer,
+ value)
# fast access
self.obj._set_values(indexer, value)
+
def _check_bool_indexer(ax, key):
# boolean indexing, need to check that the data are aligned, otherwise
# disallowed
@@ -1344,14 +1422,18 @@ def _convert_missing_indexer(indexer):
return indexer, False
+
def _convert_from_missing_indexer_tuple(indexer, axes):
""" create a filtered indexer that doesn't have any missing indexers """
def get_indexer(_i, _idx):
- return axes[_i].get_loc(_idx['key']) if isinstance(_idx,dict) else _idx
- return tuple([ get_indexer(_i, _idx) for _i, _idx in enumerate(indexer) ])
+ return (axes[_i].get_loc(_idx['key'])
+ if isinstance(_idx, dict) else _idx)
+ return tuple([get_indexer(_i, _idx) for _i, _idx in enumerate(indexer)])
+
def _safe_append_to_index(index, key):
- """ a safe append to an index, if incorrect type, then catch and recreate """
+ """ a safe append to an index, if incorrect type, then catch and recreate
+ """
try:
return index.insert(len(index), key)
except:
@@ -1359,23 +1441,26 @@ def _safe_append_to_index(index, key):
# raise here as this is basically an unsafe operation and we want
# it to be obvious that you are doing something wrong
- raise ValueError("unsafe appending to index of "
- "type {0} with a key {1}".format(index.__class__.__name__,key))
+ raise ValueError("unsafe appending to index of type {0} with a key "
+ "{1}".format(index.__class__.__name__, key))
+
def _maybe_convert_indices(indices, n):
""" if we have negative indicies, translate to postive here
- if have indicies that are out-of-bounds, raise an IndexError """
+        if we have indices that are out-of-bounds, raise an IndexError
+ """
if isinstance(indices, list):
indices = np.array(indices)
- mask = indices<0
+ mask = indices < 0
if mask.any():
indices[mask] += n
- mask = (indices>=n) | (indices<0)
+ mask = (indices >= n) | (indices < 0)
if mask.any():
raise IndexError("indices are out-of-bounds")
return indices
+
def _maybe_convert_ix(*args):
"""
We likely want to take the cross-product
@@ -1426,6 +1511,7 @@ def _check_slice_bounds(slobj, values):
if stop < -l-1 or stop > l:
raise IndexError("out-of-bounds on slice (end)")
+
def _maybe_droplevels(index, key):
# drop levels
original_index = index
diff --git a/pandas/core/internals.py b/pandas/core/internals.py
index c5e245d2e320c..bb719722fd090 100644
--- a/pandas/core/internals.py
+++ b/pandas/core/internals.py
@@ -8,8 +8,9 @@
from pandas.core.base import PandasObject
from pandas.core.common import (_possibly_downcast_to_dtype, isnull, notnull,
- _NS_DTYPE, _TD_DTYPE, ABCSeries, ABCSparseSeries,
- is_list_like, _infer_dtype_from_scalar, _values_from_object)
+ _NS_DTYPE, _TD_DTYPE, ABCSeries, is_list_like,
+ ABCSparseSeries, _infer_dtype_from_scalar,
+ _values_from_object)
from pandas.core.index import (Index, MultiIndex, _ensure_index,
_handle_legacy_indexes)
from pandas.core.indexing import (_check_slice_bounds, _maybe_convert_indices,
@@ -25,6 +26,7 @@
from pandas.compat import range, lrange, lmap, callable, map, zip, u
from pandas.tseries.timedeltas import _coerce_scalar_to_timedelta_type
+
class Block(PandasObject):
"""
@@ -49,7 +51,8 @@ class Block(PandasObject):
_verify_integrity = True
_ftype = 'dense'
- def __init__(self, values, items, ref_items, ndim=None, fastpath=False, placement=None):
+ def __init__(self, values, items, ref_items, ndim=None, fastpath=False,
+ placement=None):
if ndim is None:
ndim = values.ndim
@@ -58,8 +61,8 @@ def __init__(self, values, items, ref_items, ndim=None, fastpath=False, placemen
raise ValueError('Wrong number of dimensions')
if len(items) != len(values):
- raise ValueError('Wrong number of items passed %d, indices imply %d'
- % (len(items), len(values)))
+ raise ValueError('Wrong number of items passed %d, indices imply '
+ '%d' % (len(items), len(values)))
self.set_ref_locs(placement)
self.values = values
@@ -100,10 +103,11 @@ def ref_locs(self):
# this means that we have nan's in our block
try:
- indexer[indexer == -1] = np.arange(len(self.items))[isnull(self.items)]
+ indexer[indexer == -1] = np.arange(
+ len(self.items))[isnull(self.items)]
except:
- raise AssertionError('Some block items were not in block '
- 'ref_items')
+ raise AssertionError('Some block items were not in '
+ 'block ref_items')
self._ref_locs = indexer
return self._ref_locs
@@ -113,7 +117,9 @@ def reset_ref_locs(self):
self._ref_locs = np.empty(len(self.items), dtype='int64')
def set_ref_locs(self, placement):
- """ explicity set the ref_locs indexer, only necessary for duplicate indicies """
+        """ explicitly set the ref_locs indexer, only necessary for duplicate
+        indices
+ """
if placement is None:
self._ref_locs = None
else:
@@ -195,7 +201,8 @@ def merge(self, other):
# union_ref = self.ref_items + other.ref_items
return _merge_blocks([self, other], self.ref_items)
- def reindex_axis(self, indexer, method=None, axis=1, fill_value=None, limit=None, mask_info=None):
+ def reindex_axis(self, indexer, method=None, axis=1, fill_value=None,
+ limit=None, mask_info=None):
"""
Reindex using pre-computed indexer information
"""
@@ -206,11 +213,12 @@ def reindex_axis(self, indexer, method=None, axis=1, fill_value=None, limit=None
new_values = com.take_nd(self.values, indexer, axis,
fill_value=fill_value, mask_info=mask_info)
- return make_block(
- new_values, self.items, self.ref_items, ndim=self.ndim, fastpath=True,
- placement=self._ref_locs)
+ return make_block(new_values, self.items, self.ref_items,
+ ndim=self.ndim, fastpath=True,
+ placement=self._ref_locs)
- def reindex_items_from(self, new_ref_items, indexer=None, method=None, fill_value=None, limit=None, copy=True):
+ def reindex_items_from(self, new_ref_items, indexer=None, method=None,
+ fill_value=None, limit=None, copy=True):
"""
Reindex to only those items contained in the input set of items
@@ -222,7 +230,8 @@ def reindex_items_from(self, new_ref_items, indexer=None, method=None, fill_valu
reindexed : Block
"""
if indexer is None:
- new_ref_items, indexer = self.items.reindex(new_ref_items, limit=limit)
+ new_ref_items, indexer = self.items.reindex(new_ref_items,
+ limit=limit)
needs_fill = method is not None and limit is None
if fill_value is None:
@@ -247,9 +256,11 @@ def reindex_items_from(self, new_ref_items, indexer=None, method=None, fill_valu
# fill if needed
if needs_fill:
- new_values = com.interpolate_2d(new_values, method=method, limit=limit, fill_value=fill_value)
+ new_values = com.interpolate_2d(new_values, method=method,
+ limit=limit, fill_value=fill_value)
- block = make_block(new_values, new_items, new_ref_items, ndim=self.ndim, fastpath=True)
+ block = make_block(new_values, new_items, new_ref_items,
+ ndim=self.ndim, fastpath=True)
# down cast if needed
if not self.is_float and (needs_fill or notnull(fill_value)):
@@ -284,7 +295,8 @@ def delete(self, item):
loc = self.items.get_loc(item)
new_items = self.items.delete(loc)
new_values = np.delete(self.values, loc, 0)
- return make_block(new_values, new_items, self.ref_items, ndim=self.ndim, klass=self.__class__, fastpath=True)
+ return make_block(new_values, new_items, self.ref_items,
+ ndim=self.ndim, klass=self.__class__, fastpath=True)
def split_block_at(self, item):
"""
@@ -344,7 +356,7 @@ def downcast(self, dtypes=None):
# turn it off completely
if dtypes is False:
- return [ self ]
+ return [self]
values = self.values
@@ -356,14 +368,16 @@ def downcast(self, dtypes=None):
dtypes = 'infer'
nv = _possibly_downcast_to_dtype(values, dtypes)
- return [ make_block(nv, self.items, self.ref_items, ndim=self.ndim, fastpath=True) ]
+ return [make_block(nv, self.items, self.ref_items, ndim=self.ndim,
+ fastpath=True)]
# ndim > 1
if dtypes is None:
- return [ self ]
+ return [self]
if not (dtypes == 'infer' or isinstance(dtypes, dict)):
- raise ValueError("downcast must have a dictionary or 'infer' as its argument")
+ raise ValueError("downcast must have a dictionary or 'infer' as "
+ "its argument")
# item-by-item
# this is expensive as it splits the blocks items-by-item
@@ -376,12 +390,13 @@ def downcast(self, dtypes=None):
dtype = dtypes.get(item, self._downcast_dtype)
if dtype is None:
- nv = _block_shape(values[i],ndim=self.ndim)
+ nv = _block_shape(values[i], ndim=self.ndim)
else:
nv = _possibly_downcast_to_dtype(values[i], dtype)
- nv = _block_shape(nv,ndim=self.ndim)
+ nv = _block_shape(nv, ndim=self.ndim)
- blocks.append(make_block(nv, Index([item]), self.ref_items, ndim=self.ndim, fastpath=True))
+ blocks.append(make_block(nv, Index([item]), self.ref_items,
+ ndim=self.ndim, fastpath=True))
return blocks
@@ -405,9 +420,9 @@ def _astype(self, dtype, copy=False, raise_on_error=True, values=None,
# force the copy here
if values is None:
values = com._astype_nansafe(self.values, dtype, copy=True)
- newb = make_block(
- values, self.items, self.ref_items, ndim=self.ndim, placement=self._ref_locs,
- fastpath=True, dtype=dtype, klass=klass)
+ newb = make_block(values, self.items, self.ref_items,
+ ndim=self.ndim, placement=self._ref_locs,
+ fastpath=True, dtype=dtype, klass=klass)
except:
if raise_on_error is True:
raise
@@ -418,15 +433,16 @@ def _astype(self, dtype, copy=False, raise_on_error=True, values=None,
raise TypeError("cannot set astype for copy = [%s] for dtype "
"(%s [%s]) with smaller itemsize that current "
"(%s [%s])" % (copy, self.dtype.name,
- self.itemsize, newb.dtype.name, newb.itemsize))
- return [ newb ]
+ self.itemsize, newb.dtype.name,
+ newb.itemsize))
+ return [newb]
def convert(self, copy=True, **kwargs):
""" attempt to coerce any object types to better types
return a copy of the block (if copy = True)
by definition we are not an ObjectBlock here! """
- return [ self.copy() ] if copy else [ self ]
+ return [self.copy()] if copy else [self]
def prepare_for_merge(self, **kwargs):
""" a regular block is ok to merge as is """
@@ -445,8 +461,8 @@ def post_merge(self, items, **kwargs):
# this is a safe bet with multiple dtypes
dtype = list(dtypes)[0] if len(dtypes) == 1 else np.float64
- b = make_block(
- SparseArray(self.get(item), dtype=dtype), [item], self.ref_items)
+ b = make_block(SparseArray(self.get(item), dtype=dtype),
+ [item], self.ref_items)
new_blocks.append(b)
return new_blocks
@@ -470,18 +486,18 @@ def _try_cast_result(self, result, dtype=None):
elif self.is_float and result.dtype == self.dtype:
# protect against a bool/object showing up here
- if isinstance(dtype,compat.string_types) and dtype == 'infer':
+ if isinstance(dtype, compat.string_types) and dtype == 'infer':
return result
- if not isinstance(dtype,type):
+ if not isinstance(dtype, type):
dtype = dtype.type
- if issubclass(dtype,(np.bool_,np.object_)):
- if issubclass(dtype,np.bool_):
+ if issubclass(dtype, (np.bool_, np.object_)):
+ if issubclass(dtype, np.bool_):
if isnull(result).all():
return result.astype(np.bool_)
else:
result = result.astype(np.object_)
- result[result==1] = True
- result[result==0] = False
+ result[result == 1] = True
+ result[result == 0] = False
return result
else:
return result.astype(np.object_)
@@ -524,9 +540,9 @@ def copy(self, deep=True, ref_items=None):
values = values.copy()
if ref_items is None:
ref_items = self.ref_items
- return make_block(
- values, self.items, ref_items, ndim=self.ndim, klass=self.__class__,
- fastpath=True, placement=self._ref_locs)
+ return make_block(values, self.items, ref_items, ndim=self.ndim,
+ klass=self.__class__, fastpath=True,
+ placement=self._ref_locs)
def replace(self, to_replace, value, inplace=False, filter=None,
regex=False):
@@ -547,8 +563,12 @@ def replace(self, to_replace, value, inplace=False, filter=None,
return self.putmask(mask, value, inplace=inplace)
def setitem(self, indexer, value):
- """ set the value inplace; return a new block (of a possibly different dtype)
- indexer is a direct slice/positional indexer; value must be a compaitable shape """
+ """ set the value inplace; return a new block (of a possibly different
+ dtype)
+
+ indexer is a direct slice/positional indexer; value must be a
+ compatible shape
+ """
# coerce args
values, value = self._try_coerce_args(self.values, value)
@@ -567,15 +587,19 @@ def setitem(self, indexer, value):
# boolean with truth values == len of the value is ok too
if isinstance(indexer, (np.ndarray, list)):
if is_list_like(value) and len(indexer) != len(value):
- if not (isinstance(indexer, np.ndarray) and indexer.dtype == np.bool_ and len(indexer[indexer]) == len(value)):
- raise ValueError("cannot set using a list-like indexer with a different length than the value")
+ if not (isinstance(indexer, np.ndarray) and
+ indexer.dtype == np.bool_ and
+ len(indexer[indexer]) == len(value)):
+ raise ValueError("cannot set using a list-like indexer "
+ "with a different length than the value")
# slice
elif isinstance(indexer, slice):
if is_list_like(value) and l:
if len(value) != _length_of_indexer(indexer, values):
- raise ValueError("cannot set using a slice indexer with a different length than the value")
+ raise ValueError("cannot set using a slice indexer with a "
+ "different length than the value")
try:
# set and return a block
@@ -583,22 +607,25 @@ def setitem(self, indexer, value):
# coerce and try to infer the dtypes of the result
if np.isscalar(value):
- dtype,_ = _infer_dtype_from_scalar(value)
+ dtype, _ = _infer_dtype_from_scalar(value)
else:
dtype = 'infer'
values = self._try_coerce_result(values)
values = self._try_cast_result(values, dtype)
- return [make_block(transf(values), self.items, self.ref_items, ndim=self.ndim, fastpath=True)]
+ return [make_block(transf(values), self.items, self.ref_items,
+ ndim=self.ndim, fastpath=True)]
except (ValueError, TypeError) as detail:
raise
- except (Exception) as detail:
+ except Exception as detail:
pass
- return [ self ]
+ return [self]
def putmask(self, mask, new, align=True, inplace=False):
- """ putmask the data to the block; it is possible that we may create a new dtype of block
- return the resulting block(s)
+ """ putmask the data to the block; it is possible that we may create a
+ new dtype of block
+
+ return the resulting block(s)
Parameters
----------
@@ -618,7 +645,8 @@ def putmask(self, mask, new, align=True, inplace=False):
if hasattr(new, 'reindex_axis'):
if align:
axis = getattr(new, '_info_axis_number', 0)
- new = new.reindex_axis(self.items, axis=axis, copy=False).values.T
+ new = new.reindex_axis(self.items, axis=axis,
+ copy=False).values.T
else:
new = new.values.T
@@ -639,8 +667,8 @@ def putmask(self, mask, new, align=True, inplace=False):
new = self._try_cast(new)
# pseudo-broadcast
- if isinstance(new,np.ndarray) and new.ndim == self.ndim-1:
- new = np.repeat(new,self.shape[-1]).reshape(self.shape)
+ if isinstance(new, np.ndarray) and new.ndim == self.ndim-1:
+ new = np.repeat(new, self.shape[-1]).reshape(self.shape)
np.putmask(new_values, mask, new)
@@ -712,16 +740,16 @@ def create_block(v, m, n, item, reshape=True):
new_blocks.append(block)
else:
-
- new_blocks.append(
- create_block(new_values, mask, new, self.items, reshape=False))
+ new_blocks.append(create_block(new_values, mask, new,
+ self.items, reshape=False))
return new_blocks
if inplace:
return [self]
- return [make_block(new_values, self.items, self.ref_items, placement=self._ref_locs, fastpath=True)]
+ return [make_block(new_values, self.items, self.ref_items,
+ placement=self._ref_locs, fastpath=True)]
def interpolate(self, method='pad', axis=0, index=None,
values=None, inplace=False, limit=None,
@@ -761,7 +789,8 @@ def interpolate(self, method='pad', axis=0, index=None,
raise ValueError("invalid method '{0}' to interpolate.".format(method))
def _interpolate_with_fill(self, method='pad', axis=0, inplace=False,
- limit=None, fill_value=None, coerce=False, downcast=None):
+ limit=None, fill_value=None, coerce=False,
+ downcast=None):
""" fillna but using the interpolate machinery """
# if we are coercing, then don't force the conversion
@@ -779,7 +808,9 @@ def _interpolate_with_fill(self, method='pad', axis=0, inplace=False,
values = com.interpolate_2d(values, method, axis, limit, fill_value)
values = self._try_coerce_result(values)
- blocks = [ make_block(values, self.items, self.ref_items, ndim=self.ndim, klass=self.__class__, fastpath=True) ]
+ blocks = [make_block(values, self.items, self.ref_items,
+ ndim=self.ndim, klass=self.__class__,
+ fastpath=True)]
return self._maybe_downcast(blocks, downcast)
def _interpolate(self, method=None, index=None, values=None,
@@ -810,8 +841,8 @@ def func(x):
# should the axis argument be handled below in apply_along_axis?
# i.e. not an arg to com.interpolate_1d
return com.interpolate_1d(index, x, method=method, limit=limit,
- fill_value=fill_value, bounds_error=False,
- **kwargs)
+ fill_value=fill_value,
+ bounds_error=False, **kwargs)
# interp each column independently
interp_values = np.apply_along_axis(func, axis, data)
@@ -825,7 +856,8 @@ def take(self, indexer, ref_items, axis=1):
raise AssertionError('axis must be at least 1, got %d' % axis)
new_values = com.take_nd(self.values, indexer, axis=axis,
allow_fill=False)
- return [make_block(new_values, self.items, ref_items, ndim=self.ndim, klass=self.__class__, fastpath=True)]
+ return [make_block(new_values, self.items, ref_items, ndim=self.ndim,
+ klass=self.__class__, fastpath=True)]
def get_values(self, dtype=None):
return self.values
@@ -836,7 +868,8 @@ def get_merge_length(self):
def diff(self, n):
""" return block for the diff of the values """
new_values = com.diff(self.values, n, axis=1)
- return [make_block(new_values, self.items, self.ref_items, ndim=self.ndim, fastpath=True)]
+ return [make_block(new_values, self.items, self.ref_items,
+ ndim=self.ndim, fastpath=True)]
def shift(self, indexer, periods, axis=0):
""" shift the block by periods, possibly upcast """
@@ -859,7 +892,8 @@ def shift(self, indexer, periods, axis=0):
new_values[:, :periods] = fill_value
else:
new_values[:, periods:] = fill_value
- return [make_block(new_values, self.items, self.ref_items, ndim=self.ndim, fastpath=True)]
+ return [make_block(new_values, self.items, self.ref_items,
+ ndim=self.ndim, fastpath=True)]
def eval(self, func, other, raise_on_error=True, try_cast=False):
"""
@@ -869,8 +903,8 @@ def eval(self, func, other, raise_on_error=True, try_cast=False):
----------
func : how to combine self, other
other : a ndarray/object
- raise_on_error : if True, raise when I can't perform the function, False by default (and just return
- the data that we had coming in)
+ raise_on_error : if True, raise when I can't perform the function,
+ False by default (and just return the data that we had coming in)
Returns
-------
@@ -896,8 +930,9 @@ def eval(self, func, other, raise_on_error=True, try_cast=False):
is_transposed = True
else:
# this is a broadcast error heree
- raise ValueError("cannot broadcast shape [%s] with block values [%s]"
- % (values.T.shape,other.shape))
+ raise ValueError("cannot broadcast shape [%s] with block "
+ "values [%s]" % (values.T.shape,
+ other.shape))
transf = (lambda x: x.T) if is_transposed else (lambda x: x)
@@ -925,21 +960,22 @@ def handle_error():
result = get_result(other)
# if we have an invalid shape/broadcast error
- # GH4576, so raise instead of allowing to pass thru
- except (ValueError) as detail:
+ # GH4576, so raise instead of allowing to pass through
+ except ValueError as detail:
raise
- except (Exception) as detail:
+ except Exception as detail:
result = handle_error()
- # technically a broadcast error in numpy can 'work' by returning a boolean False
+ # technically a broadcast error in numpy can 'work' by returning a
+ # boolean False
if not isinstance(result, np.ndarray):
if not isinstance(result, np.ndarray):
- # differentiate between an invalid ndarray-ndarray comparsion and
- # an invalid type comparison
+ # differentiate between an invalid ndarray-ndarray comparison
+ # and an invalid type comparison
if isinstance(values, np.ndarray) and is_list_like(other):
- raise ValueError('Invalid broadcasting comparison [%s] with block values'
- % repr(other))
+ raise ValueError('Invalid broadcasting comparison [%s] '
+ 'with block values' % repr(other))
raise TypeError('Could not compare [%s] with block values'
% repr(other))
@@ -951,9 +987,11 @@ def handle_error():
if try_cast:
result = self._try_cast_result(result)
- return [make_block(result, self.items, self.ref_items, ndim=self.ndim, fastpath=True)]
+ return [make_block(result, self.items, self.ref_items, ndim=self.ndim,
+ fastpath=True)]
- def where(self, other, cond, align=True, raise_on_error=True, try_cast=False):
+ def where(self, other, cond, align=True, raise_on_error=True,
+ try_cast=False):
"""
evaluate the block; return result block(s) from the result
@@ -962,8 +1000,8 @@ def where(self, other, cond, align=True, raise_on_error=True, try_cast=False):
other : a ndarray/object
cond : the condition to respect
align : boolean, perform alignment on other/cond
- raise_on_error : if True, raise when I can't perform the function, False by default (and just return
- the data that we had coming in)
+ raise_on_error : if True, raise when I can't perform the function,
+ False by default (and just return the data that we had coming in)
Returns
-------
@@ -976,7 +1014,8 @@ def where(self, other, cond, align=True, raise_on_error=True, try_cast=False):
if hasattr(other, 'reindex_axis'):
if align:
axis = getattr(other, '_info_axis_number', 0)
- other = other.reindex_axis(self.items, axis=axis, copy=True).values
+ other = other.reindex_axis(self.items, axis=axis,
+ copy=True).values
else:
other = other.values
@@ -985,8 +1024,10 @@ def where(self, other, cond, align=True, raise_on_error=True, try_cast=False):
if hasattr(other, 'ndim') and hasattr(values, 'ndim'):
if values.ndim != other.ndim or values.shape == other.shape[::-1]:
- # pseodo broadcast (its a 2d vs 1d say and where needs it in a specific direction)
- if other.ndim >= 1 and values.ndim-1 == other.ndim and values.shape[0] != other.shape[0]:
+            # pseudo broadcast (it's a 2d vs 1d, say, and where needs it in
+            # a specific direction)
+ if (other.ndim >= 1 and values.ndim-1 == other.ndim and
+ values.shape[0] != other.shape[0]):
other = _block_shape(other).T
else:
values = values.T
@@ -1016,11 +1057,13 @@ def func(c, v, o):
v, o = self._try_coerce_args(v, o)
try:
- return self._try_coerce_result(expressions.where(c, v, o, raise_on_error=True))
- except (Exception) as detail:
+ return self._try_coerce_result(
+ expressions.where(c, v, o, raise_on_error=True)
+ )
+ except Exception as detail:
if raise_on_error:
- raise TypeError('Could not operate [%s] with block values [%s]'
- % (repr(o), str(detail)))
+ raise TypeError('Could not operate [%s] with block values '
+ '[%s]' % (repr(o), str(detail)))
else:
# return the values
result = np.empty(v.shape, dtype='float64')
@@ -1043,7 +1086,8 @@ def func(c, v, o):
if try_cast:
result = self._try_cast_result(result)
- return make_block(result, self.items, self.ref_items, ndim=self.ndim)
+ return make_block(result, self.items, self.ref_items,
+ ndim=self.ndim)
# might need to separate out blocks
axis = cond.ndim - 1
@@ -1076,7 +1120,8 @@ def _can_hold_element(self, element):
if is_list_like(element):
element = np.array(element)
return issubclass(element.dtype.type, (np.floating, np.integer))
- return isinstance(element, (float, int, np.float_, np.int_)) and not isinstance(bool,np.bool_)
+ return (isinstance(element, (float, int, np.float_, np.int_)) and
+ not isinstance(bool, np.bool_))
def _try_cast(self, element):
try:
@@ -1084,7 +1129,8 @@ def _try_cast(self, element):
except: # pragma: no cover
return element
- def to_native_types(self, slicer=None, na_rep='', float_format=None, **kwargs):
+ def to_native_types(self, slicer=None, na_rep='', float_format=None,
+ **kwargs):
""" convert to our native types format, slicing if desired """
values = self.values
@@ -1102,7 +1148,8 @@ def to_native_types(self, slicer=None, na_rep='', float_format=None, **kwargs):
def should_store(self, value):
# when inserting a column should not coerce integers to floats
# unnecessarily
- return issubclass(value.dtype.type, np.floating) and value.dtype == self.dtype
+ return (issubclass(value.dtype.type, np.floating) and
+ value.dtype == self.dtype)
class ComplexBlock(NumericBlock):
@@ -1176,7 +1223,7 @@ def masker(v):
if isnull(other) or (np.isscalar(other) and other == tslib.iNaT):
other = np.nan
elif isinstance(other, np.timedelta64):
- other = _coerce_scalar_to_timedelta_type(other,unit='s').item()
+ other = _coerce_scalar_to_timedelta_type(other, unit='s').item()
if other == tslib.iNaT:
other = np.nan
else:
@@ -1191,7 +1238,7 @@ def _try_operate(self, values):
def _try_coerce_result(self, result):
""" reverse of try_coerce_args / try_operate """
if isinstance(result, np.ndarray):
- if result.dtype.kind in ['i','f','O']:
+ if result.dtype.kind in ['i', 'f', 'O']:
result = result.astype('m8[ns]')
elif isinstance(result, np.integer):
result = np.timedelta64(result)
@@ -1214,7 +1261,8 @@ def to_native_types(self, slicer=None, na_rep=None, **kwargs):
rvalues[mask] = na_rep
imask = (-mask).ravel()
rvalues.flat[imask] = np.array([lib.repr_timedelta64(val)
- for val in values.ravel()[imask]], dtype=object)
+ for val in values.ravel()[imask]],
+ dtype=object)
return rvalues.tolist()
@@ -1242,19 +1290,24 @@ class ObjectBlock(Block):
is_object = True
_can_hold_na = True
- def __init__(self, values, items, ref_items, ndim=2, fastpath=False, placement=None):
+ def __init__(self, values, items, ref_items, ndim=2, fastpath=False,
+ placement=None):
if issubclass(values.dtype.type, compat.string_types):
values = np.array(values, dtype=object)
- super(ObjectBlock, self).__init__(values, items, ref_items,
- ndim=ndim, fastpath=fastpath, placement=placement)
+ super(ObjectBlock, self).__init__(values, items, ref_items, ndim=ndim,
+ fastpath=fastpath,
+ placement=placement)
@property
def is_bool(self):
- """ we can be a bool if we have only bool values but are of type object """
+ """ we can be a bool if we have only bool values but are of type
+ object
+ """
return lib.is_bool_array(self.values.ravel())
- def convert(self, convert_dates=True, convert_numeric=True, copy=True, by_item=True):
+ def convert(self, convert_dates=True, convert_numeric=True, copy=True,
+ by_item=True):
""" attempt to coerce any object types to better types
return a copy of the block (if copy = True)
by definition we ARE an ObjectBlock!!!!!
@@ -1271,20 +1324,24 @@ def convert(self, convert_dates=True, convert_numeric=True, copy=True, by_item=T
values = self.iget(i)
values = com._possibly_convert_objects(
- values.ravel(), convert_dates=convert_dates, convert_numeric=convert_numeric).reshape(values.shape)
+ values.ravel(), convert_dates=convert_dates,
+ convert_numeric=convert_numeric
+ ).reshape(values.shape)
values = _block_shape(values, ndim=self.ndim)
items = self.items.take([i])
placement = None if is_unique else [i]
- newb = make_block(
- values, items, self.ref_items, ndim=self.ndim, placement=placement)
+ newb = make_block(values, items, self.ref_items,
+ ndim=self.ndim, placement=placement)
blocks.append(newb)
else:
values = com._possibly_convert_objects(
- self.values.ravel(), convert_dates=convert_dates, convert_numeric=convert_numeric).reshape(self.values.shape)
- blocks.append(
- make_block(values, self.items, self.ref_items, ndim=self.ndim))
+ self.values.ravel(), convert_dates=convert_dates,
+ convert_numeric=convert_numeric
+ ).reshape(self.values.shape)
+ blocks.append(make_block(values, self.items, self.ref_items,
+ ndim=self.ndim))
return blocks
@@ -1296,7 +1353,8 @@ def _maybe_downcast(self, blocks, downcast=None):
# split and convert the blocks
result_blocks = []
for blk in blocks:
- result_blocks.extend(blk.convert(convert_dates=True,convert_numeric=False))
+ result_blocks.extend(blk.convert(convert_dates=True,
+ convert_numeric=False))
return result_blocks
def _can_hold_element(self, element):
@@ -1376,7 +1434,8 @@ def _replace_single(self, to_replace, value, inplace=False, filter=None,
# the superclass method -> to_replace is some kind of object
result = super(ObjectBlock, self).replace(to_replace, value,
inplace=inplace,
- filter=filter, regex=regex)
+ filter=filter,
+ regex=regex)
if not isinstance(result, list):
result = [result]
return result
@@ -1417,18 +1476,22 @@ class DatetimeBlock(Block):
is_datetime = True
_can_hold_na = True
- def __init__(self, values, items, ref_items, fastpath=False, placement=None, **kwargs):
+ def __init__(self, values, items, ref_items, fastpath=False,
+ placement=None, **kwargs):
if values.dtype != _NS_DTYPE:
values = tslib.cast_to_nanoseconds(values)
super(DatetimeBlock, self).__init__(values, items, ref_items,
- fastpath=True, placement=placement, **kwargs)
+ fastpath=True, placement=placement,
+ **kwargs)
def _can_hold_element(self, element):
if is_list_like(element):
element = np.array(element)
return element.dtype == _NS_DTYPE or element.dtype == np.int64
- return com.is_integer(element) or isinstance(element, datetime) or isnull(element)
+ return (com.is_integer(element) or
+ isinstance(element, datetime) or
+ isnull(element))
def _try_cast(self, element):
try:
@@ -1460,7 +1523,7 @@ def _try_coerce_result(self, result):
if result.dtype == 'i8':
result = tslib.array_to_datetime(
result.astype(object).ravel()).reshape(result.shape)
- elif result.dtype.kind in ['i','f','O']:
+ elif result.dtype.kind in ['i', 'f', 'O']:
result = result.astype('M8[ns]')
elif isinstance(result, (np.integer, np.datetime64)):
result = lib.Timestamp(result)
@@ -1477,11 +1540,12 @@ def fillna(self, value, inplace=False, downcast=None):
values = self.values if inplace else self.values.copy()
mask = com.isnull(self.values)
value = self._try_fill(value)
- np.putmask(values,mask,value)
- return [self if inplace else make_block(values, self.items,
- self.ref_items, fastpath=True)]
+ np.putmask(values, mask, value)
+ return [self if inplace else
+ make_block(values, self.items, self.ref_items, fastpath=True)]
- def to_native_types(self, slicer=None, na_rep=None, date_format=None, **kwargs):
+ def to_native_types(self, slicer=None, na_rep=None, date_format=None,
+ **kwargs):
""" convert to our native types format, slicing if desired """
values = self.values
@@ -1515,7 +1579,8 @@ def astype(self, dtype, copy=False, raise_on_error=True):
klass = None
if np.dtype(dtype).type == np.object_:
klass = ObjectBlock
- return self._astype(dtype, copy=copy, raise_on_error=raise_on_error, klass=klass)
+ return self._astype(dtype, copy=copy, raise_on_error=raise_on_error,
+ klass=klass)
def set(self, item, value):
"""
@@ -1535,7 +1600,8 @@ def set(self, item, value):
def get_values(self, dtype=None):
# return object dtype as Timestamps
if dtype == object:
- return lib.map_infer(self.values.ravel(), lib.Timestamp).reshape(self.values.shape)
+ return lib.map_infer(self.values.ravel(), lib.Timestamp)\
+ .reshape(self.values.shape)
return self.values
@@ -1550,7 +1616,8 @@ class SparseBlock(Block):
_verify_integrity = False
_ftype = 'sparse'
- def __init__(self, values, items, ref_items, ndim=None, fastpath=False, placement=None):
+ def __init__(self, values, items, ref_items, ndim=None, fastpath=False,
+ placement=None):
# kludgetastic
if ndim is not None:
@@ -1600,8 +1667,9 @@ def sp_values(self):
@sp_values.setter
def sp_values(self, v):
# reset the sparse values
- self.values = SparseArray(
- v, sparse_index=self.sp_index, kind=self.kind, dtype=v.dtype, fill_value=self.fill_value, copy=False)
+ self.values = SparseArray(v, sparse_index=self.sp_index,
+ kind=self.kind, dtype=v.dtype,
+ fill_value=self.fill_value, copy=False)
@property
def sp_index(self):
@@ -1651,9 +1719,9 @@ def get_values(self, dtype=None):
def get_merge_length(self):
return 1
- def make_block(
- self, values, items=None, ref_items=None, sparse_index=None, kind=None, dtype=None, fill_value=None,
- copy=False, fastpath=True):
+ def make_block(self, values, items=None, ref_items=None, sparse_index=None,
+ kind=None, dtype=None, fill_value=None, copy=False,
+ fastpath=True):
""" return a new block """
if dtype is None:
dtype = self.dtype
@@ -1664,8 +1732,10 @@ def make_block(
if ref_items is None:
ref_items = self.ref_items
new_values = SparseArray(values, sparse_index=sparse_index,
- kind=kind or self.kind, dtype=dtype, fill_value=fill_value, copy=copy)
- return make_block(new_values, items, ref_items, ndim=self.ndim, fastpath=fastpath)
+ kind=kind or self.kind, dtype=dtype,
+ fill_value=fill_value, copy=copy)
+ return make_block(new_values, items, ref_items, ndim=self.ndim,
+ fastpath=fastpath)
def interpolate(self, method='pad', axis=0, inplace=False,
limit=None, fill_value=None, **kwargs):
@@ -1679,7 +1749,7 @@ def fillna(self, value, inplace=False, downcast=None):
if issubclass(self.dtype.type, np.floating):
value = float(value)
values = self.values if inplace else self.values.copy()
- return [ self.make_block(values.get_values(value), fill_value=value) ]
+ return [self.make_block(values.get_values(value), fill_value=value)]
def shift(self, indexer, periods, axis=0):
""" shift the block by periods """
@@ -1692,7 +1762,7 @@ def shift(self, indexer, periods, axis=0):
new_values[:periods] = fill_value
else:
new_values[periods:] = fill_value
- return [ self.make_block(new_values) ]
+ return [self.make_block(new_values)]
def take(self, indexer, ref_items, axis=1):
""" going to take our items
@@ -1700,9 +1770,10 @@ def take(self, indexer, ref_items, axis=1):
if axis < 1:
raise AssertionError('axis must be at least 1, got %d' % axis)
- return [ self.make_block(self.values.take(indexer)) ]
+ return [self.make_block(self.values.take(indexer))]
- def reindex_axis(self, indexer, method=None, axis=1, fill_value=None, limit=None, mask_info=None):
+ def reindex_axis(self, indexer, method=None, axis=1, fill_value=None,
+ limit=None, mask_info=None):
"""
Reindex using pre-computed indexer information
"""
@@ -1712,9 +1783,11 @@ def reindex_axis(self, indexer, method=None, axis=1, fill_value=None, limit=None
# taking on the 0th axis always here
if fill_value is None:
fill_value = self.fill_value
- return self.make_block(self.values.take(indexer), items=self.items, fill_value=fill_value)
+ return self.make_block(self.values.take(indexer), items=self.items,
+ fill_value=fill_value)
- def reindex_items_from(self, new_ref_items, indexer=None, method=None, fill_value=None, limit=None, copy=True):
+ def reindex_items_from(self, new_ref_items, indexer=None, method=None,
+ fill_value=None, limit=None, copy=True):
"""
Reindex to only those items contained in the input set of items
@@ -1728,7 +1801,8 @@ def reindex_items_from(self, new_ref_items, indexer=None, method=None, fill_valu
# 1-d always
if indexer is None:
- new_ref_items, indexer = self.items.reindex(new_ref_items, limit=limit)
+ new_ref_items, indexer = self.items.reindex(new_ref_items,
+ limit=limit)
if indexer is None:
indexer = np.arange(len(self.items))
@@ -1751,9 +1825,11 @@ def reindex_items_from(self, new_ref_items, indexer=None, method=None, fill_valu
if method is not None or limit is not None:
if fill_value is None:
fill_value = self.fill_value
- new_values = com.interpolate_2d(new_values, method=method, limit=limit, fill_value=fill_value)
+ new_values = com.interpolate_2d(new_values, method=method,
+ limit=limit, fill_value=fill_value)
- return self.make_block(new_values, items=new_items, ref_items=new_ref_items, copy=copy)
+ return self.make_block(new_values, items=new_items,
+ ref_items=new_ref_items, copy=copy)
def sparse_reindex(self, new_index):
""" sparse reindex and return a new block
@@ -1772,8 +1848,8 @@ def _try_cast_result(self, result, dtype=None):
return result
-def make_block(values, items, ref_items, klass=None, ndim=None, dtype=None, fastpath=False, placement=None):
-
+def make_block(values, items, ref_items, klass=None, ndim=None, dtype=None,
+ fastpath=False, placement=None):
if klass is None:
dtype = dtype or values.dtype
vtype = dtype.type
@@ -1782,9 +1858,11 @@ def make_block(values, items, ref_items, klass=None, ndim=None, dtype=None, fast
klass = SparseBlock
elif issubclass(vtype, np.floating):
klass = FloatBlock
- elif issubclass(vtype, np.integer) and issubclass(vtype, np.timedelta64):
+ elif (issubclass(vtype, np.integer) and
+ issubclass(vtype, np.timedelta64)):
klass = TimeDeltaBlock
- elif issubclass(vtype, np.integer) and not issubclass(vtype, np.datetime64):
+ elif (issubclass(vtype, np.integer) and
+ not issubclass(vtype, np.datetime64)):
klass = IntBlock
elif dtype == np.bool_:
klass = BoolBlock
@@ -1799,10 +1877,10 @@ def make_block(values, items, ref_items, klass=None, ndim=None, dtype=None, fast
if np.prod(values.shape):
flat = values.ravel()
inferred_type = lib.infer_dtype(flat)
- if inferred_type in ['datetime','datetime64']:
+ if inferred_type in ['datetime', 'datetime64']:
- # we have an object array that has been inferred as datetime, so
- # convert it
+ # we have an object array that has been inferred as
+ # datetime, so convert it
try:
values = tslib.array_to_datetime(
flat).reshape(values.shape)
@@ -1814,7 +1892,9 @@ def make_block(values, items, ref_items, klass=None, ndim=None, dtype=None, fast
if klass is None:
klass = ObjectBlock
- return klass(values, items, ref_items, ndim=ndim, fastpath=fastpath, placement=placement)
+ return klass(values, items, ref_items, ndim=ndim, fastpath=fastpath,
+ placement=placement)
+
# TODO: flexible with index=None and/or items=None
@@ -1863,11 +1943,13 @@ def __init__(self, blocks, axes, do_integrity_check=True, fastpath=True):
def make_empty(self, axes=None):
""" return an empty BlockManager with the items axis of len 0 """
if axes is None:
- axes = [_ensure_index([]) ] + [ _ensure_index(a) for a in self.axes[1:] ]
+ axes = [_ensure_index([])] + [
+ _ensure_index(a) for a in self.axes[1:]
+ ]
# preserve dtype if possible
dtype = self.dtype if self.ndim == 1 else object
- return self.__class__(np.array([],dtype=dtype), axes)
+ return self.__class__(np.array([], dtype=dtype), axes)
def __nonzero__(self):
return True
@@ -1892,8 +1974,9 @@ def set_axis(self, axis, value, maybe_rename=True, check_axis=True):
value = _ensure_index(value)
if check_axis and len(value) != len(cur_axis):
- raise ValueError('Length mismatch: Expected axis has %d elements, new values have %d elements'
- % (len(cur_axis), len(value)))
+ raise ValueError('Length mismatch: Expected axis has %d elements, '
+ 'new values have %d elements' % (len(cur_axis),
+ len(value)))
self.axes[axis] = value
self._shape = None
@@ -1929,9 +2012,10 @@ def _reset_ref_locs(self):
self._items_map = None
def _rebuild_ref_locs(self):
- """ take _ref_locs and set the individual block ref_locs, skipping Nones
- no effect on a unique index """
- if getattr(self,'_ref_locs',None) is not None:
+        """Take _ref_locs and set the individual block ref_locs, skipping
+        Nones; no effect on a unique index
+ """
+ if getattr(self, '_ref_locs', None) is not None:
item_count = 0
for v in self._ref_locs:
if v is not None:
@@ -1984,9 +2068,10 @@ def _set_ref_locs(self, labels=None, do_refs=False):
try:
rl = block.ref_locs
except:
- raise AssertionError("cannot create BlockManager._ref_locs because "
- "block [%s] with duplicate items [%s] "
- "does not have _ref_locs set" % (block, labels))
+ raise AssertionError(
+ 'Cannot create BlockManager._ref_locs because '
+ 'block [%s] with duplicate items [%s] does not '
+ 'have _ref_locs set' % (block, labels))
m = maybe_create_block_in_items_map(im, block)
for i, item in enumerate(block.items):
@@ -2138,7 +2223,8 @@ def apply(self, f, *args, **kwargs):
----------
f : the callable or function name to operate on at the block level
axes : optional (if not supplied, use self.axes)
- filter : list, if supplied, only call the block if the filter is in the block
+ filter : list, if supplied, only call the block if the filter is in
+ the block
"""
axes = kwargs.pop('axes', None)
@@ -2169,8 +2255,8 @@ def apply(self, f, *args, **kwargs):
result_blocks.append(applied)
if len(result_blocks) == 0:
return self.make_empty(axes or self.axes)
- bm = self.__class__(
- result_blocks, axes or self.axes, do_integrity_check=do_integrity_check)
+ bm = self.__class__(result_blocks, axes or self.axes,
+ do_integrity_check=do_integrity_check)
bm._consolidate_inplace()
return bm
@@ -2254,7 +2340,9 @@ def comp(s):
return bm
def prepare_for_merge(self, *args, **kwargs):
- """ prepare for merging, return a new block manager with Sparse -> Dense """
+ """ prepare for merging, return a new block manager with
+ Sparse -> Dense
+ """
self._consolidate_inplace()
if self._has_sparse:
return self.apply('prepare_for_merge', *args, **kwargs)
@@ -2305,7 +2393,8 @@ def is_numeric_mixed_type(self):
self._consolidate_inplace()
return all([block.is_numeric for block in self.blocks])
- def get_block_map(self, copy=False, typ=None, columns=None, is_numeric=False, is_bool=False):
+ def get_block_map(self, copy=False, typ=None, columns=None,
+ is_numeric=False, is_bool=False):
""" return a dictionary mapping the ftype -> block list
Parameters
@@ -2316,7 +2405,8 @@ def get_block_map(self, copy=False, typ=None, columns=None, is_numeric=False, is
filter if the type is indicated """
# short circuit - mainly for merging
- if typ == 'dict' and columns is None and not is_numeric and not is_bool and not copy:
+ if (typ == 'dict' and columns is None and not is_numeric and
+ not is_bool and not copy):
bm = defaultdict(list)
for b in self.blocks:
bm[str(b.ftype)].append(b)
@@ -2414,15 +2504,13 @@ def get_slice(self, slobj, axis=0, raise_on_error=False):
new_items = new_axes[0]
if len(self.blocks) == 1:
blk = self.blocks[0]
- newb = make_block(blk._slice(slobj),
- new_items,
- new_items,
- klass=blk.__class__,
- fastpath=True,
+ newb = make_block(blk._slice(slobj), new_items, new_items,
+ klass=blk.__class__, fastpath=True,
placement=blk._ref_locs)
new_blocks = [newb]
else:
- return self.reindex_items(new_items, indexer=np.arange(len(self.items))[slobj])
+ return self.reindex_items(
+ new_items, indexer=np.arange(len(self.items))[slobj])
else:
new_blocks = self._slice_blocks(slobj, axis)
@@ -2477,7 +2565,7 @@ def copy(self, deep=True):
else:
new_axes = list(self.axes)
return self.apply('copy', axes=new_axes, deep=deep,
- ref_items=new_axes[0], do_integrity_check=False)
+ ref_items=new_axes[0], do_integrity_check=False)
def as_matrix(self, items=None):
if len(self.blocks) == 0:
@@ -2947,7 +3035,7 @@ def _add_new_block(self, item, value, loc=None):
# need to shift elements to the right
if self._ref_locs[loc] is not None:
- for i in reversed(lrange(loc+1,len(self._ref_locs))):
+ for i in reversed(lrange(loc+1, len(self._ref_locs))):
self._ref_locs[i] = self._ref_locs[i-1]
self._ref_locs[loc] = (new_block, 0)
@@ -2966,7 +3054,8 @@ def _check_have(self, item):
if item not in self.items:
raise KeyError('no item named %s' % com.pprint_thing(item))
- def reindex_axis(self, new_axis, indexer=None, method=None, axis=0, fill_value=None, limit=None, copy=True):
+ def reindex_axis(self, new_axis, indexer=None, method=None, axis=0,
+ fill_value=None, limit=None, copy=True):
new_axis = _ensure_index(new_axis)
cur_axis = self.axes[axis]
@@ -2987,19 +3076,25 @@ def reindex_axis(self, new_axis, indexer=None, method=None, axis=0, fill_value=N
if axis == 0:
if method is not None or limit is not None:
- return self.reindex_axis0_with_method(new_axis, indexer=indexer,
- method=method, fill_value=fill_value, limit=limit, copy=copy)
- return self.reindex_items(new_axis, indexer=indexer, copy=copy, fill_value=fill_value)
+ return self.reindex_axis0_with_method(
+ new_axis, indexer=indexer, method=method,
+ fill_value=fill_value, limit=limit, copy=copy
+ )
+ return self.reindex_items(new_axis, indexer=indexer, copy=copy,
+ fill_value=fill_value)
new_axis, indexer = cur_axis.reindex(
new_axis, method, copy_if_needed=True)
- return self.reindex_indexer(new_axis, indexer, axis=axis, fill_value=fill_value)
+ return self.reindex_indexer(new_axis, indexer, axis=axis,
+ fill_value=fill_value)
- def reindex_axis0_with_method(self, new_axis, indexer=None, method=None, fill_value=None, limit=None, copy=True):
+ def reindex_axis0_with_method(self, new_axis, indexer=None, method=None,
+ fill_value=None, limit=None, copy=True):
raise AssertionError('method argument not supported for '
'axis == 0')
- def reindex_indexer(self, new_axis, indexer, axis=1, fill_value=None, allow_dups=False):
+ def reindex_indexer(self, new_axis, indexer, axis=1, fill_value=None,
+ allow_dups=False):
"""
pandas-indexer with -1's only.
"""
@@ -3063,7 +3158,8 @@ def _reindex_indexer_items(self, new_items, indexer, fill_value):
return self.__class__(new_blocks, new_axes)
- def reindex_items(self, new_items, indexer=None, copy=True, fill_value=None):
+ def reindex_items(self, new_items, indexer=None, copy=True,
+ fill_value=None):
"""
"""
@@ -3071,10 +3167,12 @@ def reindex_items(self, new_items, indexer=None, copy=True, fill_value=None):
data = self
if not data.is_consolidated():
data = data.consolidate()
- return data.reindex_items(new_items, copy=copy, fill_value=fill_value)
+ return data.reindex_items(new_items, copy=copy,
+ fill_value=fill_value)
if indexer is None:
- new_items, indexer = self.items.reindex(new_items, copy_if_needed=True)
+ new_items, indexer = self.items.reindex(new_items,
+ copy_if_needed=True)
new_axes = [new_items] + self.axes[1:]
# could have so me pathological (MultiIndex) issues here
@@ -3103,12 +3201,9 @@ def reindex_items(self, new_items, indexer=None, copy=True, fill_value=None):
for i, idx in enumerate(indexer):
blk, lidx = rl[idx]
item = new_items.take([i])
- blk = make_block(_block_shape(blk.iget(lidx)),
- item,
- new_items,
- ndim=self.ndim,
- fastpath=True,
- placement = [i])
+ blk = make_block(_block_shape(blk.iget(lidx)), item,
+ new_items, ndim=self.ndim, fastpath=True,
+ placement=[i])
new_blocks.append(blk)
# add a na block if we are missing items
@@ -3122,7 +3217,8 @@ def reindex_items(self, new_items, indexer=None, copy=True, fill_value=None):
return self.__class__(new_blocks, new_axes)
- def _make_na_block(self, items, ref_items, placement=None, fill_value=None):
+ def _make_na_block(self, items, ref_items, placement=None,
+ fill_value=None):
# TODO: infer dtypes other than float64 from fill_value
if fill_value is None:
@@ -3157,7 +3253,8 @@ def take(self, indexer, new_index=None, axis=1, verify=True):
new_index = self.axes[axis].take(indexer)
new_axes[axis] = new_index
- return self.apply('take', axes=new_axes, indexer=indexer, ref_items=new_axes[0], axis=axis)
+ return self.apply('take', axes=new_axes, indexer=indexer,
+ ref_items=new_axes[0], axis=axis)
def merge(self, other, lsuffix=None, rsuffix=None):
if not self._is_indexed_like(other):
@@ -3220,7 +3317,8 @@ def rename_axis(self, mapper, axis=1):
index = self.axes[axis]
if isinstance(index, MultiIndex):
new_axis = MultiIndex.from_tuples(
- [tuple(mapper(y) for y in x) for x in index], names=index.names)
+ [tuple(mapper(y) for y in x) for x in index],
+ names=index.names)
else:
new_axis = Index([mapper(x) for x in index], name=index.name)
@@ -3307,8 +3405,8 @@ def __init__(self, block, axis, do_integrity_check=False, fastpath=True):
self.axes = [axis]
if isinstance(block, list):
if len(block) != 1:
- raise ValueError(
- "cannot create SingleBlockManager with more than 1 block")
+ raise ValueError('Cannot create SingleBlockManager with '
+ 'more than 1 block')
block = block[0]
if not isinstance(block, Block):
block = make_block(block, axis, axis, ndim=1, fastpath=True)
@@ -3327,8 +3425,8 @@ def __init__(self, block, axis, do_integrity_check=False, fastpath=True):
block = _consolidate(block, axis)
if len(block) != 1:
- raise ValueError(
- "cannot create SingleBlockManager with more than 1 block")
+ raise ValueError('Cannot create SingleBlockManager with '
+ 'more than 1 block')
block = block[0]
if not isinstance(block, Block):
@@ -3349,39 +3447,46 @@ def shape(self):
self._shape = tuple([len(self.axes[0])])
return self._shape
- def reindex(self, new_axis, indexer=None, method=None, fill_value=None, limit=None, copy=True):
-
+ def reindex(self, new_axis, indexer=None, method=None, fill_value=None,
+ limit=None, copy=True):
# if we are the same and don't copy, just return
if not copy and self.index.equals(new_axis):
return self
- block = self._block.reindex_items_from(new_axis, indexer=indexer, method=method,
- fill_value=fill_value, limit=limit, copy=copy)
+ block = self._block.reindex_items_from(new_axis, indexer=indexer,
+ method=method,
+ fill_value=fill_value,
+ limit=limit, copy=copy)
mgr = SingleBlockManager(block, new_axis)
mgr._consolidate_inplace()
return mgr
def _reindex_indexer_items(self, new_items, indexer, fill_value):
# equiv to a reindex
- return self.reindex(new_items, indexer=indexer, fill_value=fill_value, copy=False)
+ return self.reindex(new_items, indexer=indexer, fill_value=fill_value,
+ copy=False)
- def reindex_axis0_with_method(self, new_axis, indexer=None, method=None, fill_value=None, limit=None, copy=True):
+ def reindex_axis0_with_method(self, new_axis, indexer=None, method=None,
+ fill_value=None, limit=None, copy=True):
if method is None:
indexer = None
- return self.reindex(new_axis, indexer=indexer, method=method, fill_value=fill_value, limit=limit, copy=copy)
+ return self.reindex(new_axis, indexer=indexer, method=method,
+ fill_value=fill_value, limit=limit, copy=copy)
def get_slice(self, slobj, raise_on_error=False):
if raise_on_error:
_check_slice_bounds(slobj, self.index)
- return self.__class__(self._block._slice(slobj), self.index._getitem_slice(slobj), fastpath=True)
+ return self.__class__(self._block._slice(slobj),
+ self.index._getitem_slice(slobj), fastpath=True)
def set_axis(self, axis, value):
cur_axis = self.axes[axis]
value = _ensure_index(value)
if len(value) != len(cur_axis):
- raise ValueError('Length mismatch: Expected axis has %d elements, new values have %d elements'
- % (len(cur_axis), len(value)))
+ raise ValueError('Length mismatch: Expected axis has %d elements, '
+ 'new values have %d elements' % (len(cur_axis),
+ len(value)))
self.axes[axis] = value
self._shape = None
@@ -3575,7 +3680,9 @@ def form_blocks(arrays, names, axes):
def _simple_blockify(tuples, ref_items, dtype, is_unique=True):
- """ return a single array of a block that has a single dtype; if dtype is not None, coerce to this dtype """
+ """ return a single array of a block that has a single dtype; if dtype is
+ not None, coerce to this dtype
+ """
block_items, values, placement = _stack_arrays(tuples, ref_items, dtype)
# CHECK DTYPE?
@@ -3608,7 +3715,9 @@ def _multi_blockify(tuples, ref_items, dtype=None, is_unique=True):
def _sparse_blockify(tuples, ref_items, dtype=None):
- """ return an array of blocks that potentially have different dtypes (and are sparse) """
+ """ return an array of blocks that potentially have different dtypes (and
+ are sparse)
+ """
new_blocks = []
for i, names, array in tuples:
@@ -3748,8 +3857,8 @@ def _consolidate(blocks, items):
new_blocks = []
for (_can_consolidate, dtype), group_blocks in grouper:
- merged_blocks = _merge_blocks(
- list(group_blocks), items, dtype=dtype, _can_consolidate=_can_consolidate)
+ merged_blocks = _merge_blocks(list(group_blocks), items, dtype=dtype,
+ _can_consolidate=_can_consolidate)
if isinstance(merged_blocks, list):
new_blocks.extend(merged_blocks)
else:
@@ -3810,6 +3919,6 @@ def _vstack(to_stack, dtype):
def _possibly_convert_to_indexer(loc):
if com._is_bool_indexer(loc):
loc = [i for i, v in enumerate(loc) if v]
- elif isinstance(loc,slice):
- loc = lrange(loc.start,loc.stop)
+ elif isinstance(loc, slice):
+ loc = lrange(loc.start, loc.stop)
return loc
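Note on the `where` refactor above: the block-level `where(other, cond, ...)` being rewrapped here backs the user-facing `Series.where`/`DataFrame.where`. A minimal sketch of the behavior it implements, using the standard public pandas API (illustrative only, not part of the patch):

```python
import pandas as pd

s = pd.Series([1.0, -2.0, 3.0, -4.0])
# where(cond, other): keep entries where cond is True, replace the rest
# with `other` (or NaN when no `other` is given)
kept = s.where(s > 0, 0.0)
```

`kept` here holds the positive values unchanged with zeros elsewhere; omitting the `other` argument would leave NaN in those positions instead.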
diff --git a/pandas/core/nanops.py b/pandas/core/nanops.py
index 45e6a54721bd2..b6ebeb7f96489 100644
--- a/pandas/core/nanops.py
+++ b/pandas/core/nanops.py
@@ -105,6 +105,7 @@ def _has_infs(result):
return False
return np.isinf(result) or np.isneginf(result)
+
def _get_fill_value(dtype, fill_value=None, fill_value_typ=None):
""" return the correct fill value for the dtype of the values """
if fill_value is not None:
@@ -127,7 +128,9 @@ def _get_fill_value(dtype, fill_value=None, fill_value_typ=None):
else:
return tslib.iNaT
-def _get_values(values, skipna, fill_value=None, fill_value_typ=None, isfinite=False, copy=True):
+
+def _get_values(values, skipna, fill_value=None, fill_value_typ=None,
+ isfinite=False, copy=True):
""" utility to get the values view, mask, dtype
if necessary copy and mask using the specified fill_value
copy = True will force the copy """
@@ -137,11 +140,13 @@ def _get_values(values, skipna, fill_value=None, fill_value_typ=None, isfinite=F
else:
mask = isnull(values)
- dtype = values.dtype
+ dtype = values.dtype
dtype_ok = _na_ok_dtype(dtype)
- # get our fill value (in case we need to provide an alternative dtype for it)
- fill_value = _get_fill_value(dtype, fill_value=fill_value, fill_value_typ=fill_value_typ)
+ # get our fill value (in case we need to provide an alternative
+ # dtype for it)
+ fill_value = _get_fill_value(dtype, fill_value=fill_value,
+ fill_value_typ=fill_value_typ)
if skipna:
if copy:
@@ -151,7 +156,8 @@ def _get_values(values, skipna, fill_value=None, fill_value_typ=None, isfinite=F
# promote if needed
else:
- values, changed = com._maybe_upcast_putmask(values, mask, fill_value)
+ values, changed = com._maybe_upcast_putmask(values, mask,
+ fill_value)
elif copy:
values = values.copy()
@@ -159,20 +165,25 @@ def _get_values(values, skipna, fill_value=None, fill_value_typ=None, isfinite=F
values = _view_if_needed(values)
return values, mask, dtype
+
def _isfinite(values):
- if issubclass(values.dtype.type, (np.timedelta64,np.datetime64)):
+ if issubclass(values.dtype.type, (np.timedelta64, np.datetime64)):
return isnull(values)
return -np.isfinite(values)
+
def _na_ok_dtype(dtype):
- return not issubclass(dtype.type, (np.integer, np.datetime64, np.timedelta64))
+ return not issubclass(dtype.type, (np.integer, np.datetime64,
+ np.timedelta64))
+
def _view_if_needed(values):
- if issubclass(values.dtype.type, (np.datetime64,np.timedelta64)):
+ if issubclass(values.dtype.type, (np.datetime64, np.timedelta64)):
return values.view(np.int64)
return values
-def _wrap_results(result,dtype):
+
+def _wrap_results(result, dtype):
""" wrap our results if needed """
if issubclass(dtype.type, np.datetime64):
@@ -185,27 +196,30 @@ def _wrap_results(result,dtype):
# this is a scalar timedelta result!
# we have series convert then take the element (scalar)
- # as series will do the right thing in py3 (and deal with numpy 1.6.2
- # bug in that it results dtype of timedelta64[us]
+ # as series will do the right thing in py3 (and deal with numpy
+        # 1.6.2 bug in that it results in a dtype of timedelta64[us]
from pandas import Series
# coerce float to results
if is_float(result):
result = int(result)
- result = Series([result],dtype='timedelta64[ns]')
+ result = Series([result], dtype='timedelta64[ns]')
else:
result = result.view(dtype)
return result
+
def nanany(values, axis=None, skipna=True):
values, mask, dtype = _get_values(values, skipna, False, copy=skipna)
return values.any(axis)
+
def nanall(values, axis=None, skipna=True):
values, mask, dtype = _get_values(values, skipna, True, copy=skipna)
return values.all(axis)
+
@disallow('M8')
@bottleneck_switch(zero_value=0)
def nansum(values, axis=None, skipna=True):
@@ -214,6 +228,7 @@ def nansum(values, axis=None, skipna=True):
the_sum = _maybe_null_out(the_sum, axis, mask)
return the_sum
+
@disallow('M8')
@bottleneck_switch()
def nanmean(values, axis=None, skipna=True):
@@ -229,7 +244,8 @@ def nanmean(values, axis=None, skipna=True):
else:
the_mean = the_sum / count if count > 0 else np.nan
- return _wrap_results(the_mean,dtype)
+ return _wrap_results(the_mean, dtype)
+
@disallow('M8')
@bottleneck_switch()
@@ -265,7 +281,7 @@ def get_median(x):
return ret
# otherwise return a scalar value
- return _wrap_results(get_median(values),dtype) if notempty else np.nan
+ return _wrap_results(get_median(values), dtype) if notempty else np.nan
@disallow('M8')
@@ -292,7 +308,7 @@ def nanvar(values, axis=None, skipna=True, ddof=1):
@bottleneck_switch()
def nanmin(values, axis=None, skipna=True):
- values, mask, dtype = _get_values(values, skipna, fill_value_typ = '+inf')
+ values, mask, dtype = _get_values(values, skipna, fill_value_typ='+inf')
# numpy 1.6.1 workaround in Python 3.x
if (values.dtype == np.object_ and compat.PY3):
@@ -315,13 +331,13 @@ def nanmin(values, axis=None, skipna=True):
else:
result = values.min(axis)
- result = _wrap_results(result,dtype)
+ result = _wrap_results(result, dtype)
return _maybe_null_out(result, axis, mask)
@bottleneck_switch()
def nanmax(values, axis=None, skipna=True):
- values, mask, dtype = _get_values(values, skipna, fill_value_typ ='-inf')
+ values, mask, dtype = _get_values(values, skipna, fill_value_typ='-inf')
# numpy 1.6.1 workaround in Python 3.x
if (values.dtype == np.object_ and compat.PY3):
@@ -345,7 +361,7 @@ def nanmax(values, axis=None, skipna=True):
else:
result = values.max(axis)
- result = _wrap_results(result,dtype)
+ result = _wrap_results(result, dtype)
return _maybe_null_out(result, axis, mask)
@@ -353,7 +369,8 @@ def nanargmax(values, axis=None, skipna=True):
"""
Returns -1 in the NA case
"""
- values, mask, dtype = _get_values(values, skipna, fill_value_typ = '-inf', isfinite=True)
+ values, mask, dtype = _get_values(values, skipna, fill_value_typ='-inf',
+ isfinite=True)
result = values.argmax(axis)
result = _maybe_arg_null_out(result, axis, mask, skipna)
return result
@@ -363,7 +380,8 @@ def nanargmin(values, axis=None, skipna=True):
"""
Returns -1 in the NA case
"""
- values, mask, dtype = _get_values(values, skipna, fill_value_typ = '+inf', isfinite=True)
+ values, mask, dtype = _get_values(values, skipna, fill_value_typ='+inf',
+ isfinite=True)
result = values.argmin(axis)
result = _maybe_arg_null_out(result, axis, mask, skipna)
return result
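The `nanops` hunks above are a pure reformatting of the mask-and-fill pattern those reductions use: build a null mask, fill the nulls with a neutral value for the reduction, and null out the result when everything was missing. A minimal numpy-only sketch of that pattern (`nan_reduce` is a hypothetical helper, not the pandas internal API):

```python
import numpy as np


def nan_reduce(values, reduce_fn, fill_value):
    """Mask NaNs, fill them with a neutral value, then reduce --
    mirroring the skipna path in pandas.core.nanops._get_values."""
    values = np.asarray(values, dtype=np.float64)
    mask = np.isnan(values)
    filled = values.copy()
    filled[mask] = fill_value          # e.g. 0 for sum, +inf for min
    result = reduce_fn(filled)
    # an all-NaN input yields NaN, matching _maybe_null_out
    return np.nan if mask.all() else result


print(nan_reduce([1.0, np.nan, 2.0], np.sum, 0.0))   # 3.0
print(nan_reduce([np.nan, np.nan], np.min, np.inf))  # nan
```

The `fill_value_typ='+inf'` / `'-inf'` arguments threaded through `nanmin`/`nanmax` above play the same role as `fill_value` here.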
diff --git a/pandas/core/ops.py b/pandas/core/ops.py
index 249468c332e0c..0836ac7bc22a6 100644
--- a/pandas/core/ops.py
+++ b/pandas/core/ops.py
@@ -317,13 +317,14 @@ def _convert_to_array(self, values, name=None, other=None):
if inferred_type in ('datetime64', 'datetime', 'date', 'time'):
# if we have a other of timedelta, but use pd.NaT here we
# we are in the wrong path
- if other is not None and other.dtype == 'timedelta64[ns]' and all(isnull(v) for v in values):
- values = np.empty(values.shape,dtype=other.dtype)
+ if (other is not None and other.dtype == 'timedelta64[ns]' and
+ all(isnull(v) for v in values)):
+ values = np.empty(values.shape, dtype=other.dtype)
values[:] = tslib.iNaT
# a datetlike
elif not (isinstance(values, (pa.Array, pd.Series)) and
- com.is_datetime64_dtype(values)):
+ com.is_datetime64_dtype(values)):
values = tslib.array_to_datetime(values)
elif isinstance(values, pd.DatetimeIndex):
values = values.to_series()
@@ -353,11 +354,12 @@ def _convert_to_array(self, values, name=None, other=None):
# all nan, so ok, use the other dtype (e.g. timedelta or datetime)
if isnull(values).all():
- values = np.empty(values.shape,dtype=other.dtype)
+ values = np.empty(values.shape, dtype=other.dtype)
values[:] = tslib.iNaT
else:
- raise TypeError("incompatible type [{0}] for a datetime/timedelta"
- " operation".format(pa.array(values).dtype))
+ raise TypeError(
+ 'incompatible type [{0}] for a datetime/timedelta '
+ 'operation'.format(pa.array(values).dtype))
else:
raise TypeError("incompatible type [{0}] for a datetime/timedelta"
" operation".format(pa.array(values).dtype))
diff --git a/pandas/core/panel.py b/pandas/core/panel.py
index 885ec2714c47a..c695dc44dbdb5 100644
--- a/pandas/core/panel.py
+++ b/pandas/core/panel.py
@@ -3,13 +3,13 @@
"""
# pylint: disable=E1103,W0231,W0212,W0621
from __future__ import division
-from pandas.compat import map, zip, range, lrange, lmap, u, OrderedDict, OrderedDefaultdict
+from pandas.compat import (map, zip, range, lrange, lmap, u, OrderedDict,
+ OrderedDefaultdict)
from pandas import compat
import sys
import numpy as np
-from pandas.core.common import (PandasError,
- _try_sort, _default_index, _infer_dtype_from_scalar,
- notnull)
+from pandas.core.common import (PandasError, _try_sort, _default_index,
+ _infer_dtype_from_scalar, notnull)
from pandas.core.categorical import Categorical
from pandas.core.index import (Index, MultiIndex, _ensure_index,
_get_combined_index)
@@ -100,8 +100,6 @@ def panel_index(time, panels, names=['time', 'panel']):
verify_integrity=False)
-
-
class Panel(NDFrame):
"""
@@ -130,9 +128,8 @@ def _constructor(self):
def __init__(self, data=None, items=None, major_axis=None, minor_axis=None,
copy=False, dtype=None):
- self._init_data(
- data=data, items=items, major_axis=major_axis, minor_axis=minor_axis,
- copy=copy, dtype=dtype)
+ self._init_data(data=data, items=items, major_axis=major_axis,
+ minor_axis=minor_axis, copy=copy, dtype=dtype)
def _init_data(self, data, copy, dtype, **kwargs):
"""
@@ -327,8 +324,8 @@ def axis_pretty(a):
v = getattr(self, a)
if len(v) > 0:
return u('%s axis: %s to %s') % (a.capitalize(),
- com.pprint_thing(v[0]),
- com.pprint_thing(v[-1]))
+ com.pprint_thing(v[0]),
+ com.pprint_thing(v[-1]))
else:
return u('%s axis: None') % a.capitalize()
@@ -535,9 +532,9 @@ def __setitem__(self, key, value):
mat = value.values
elif isinstance(value, np.ndarray):
if value.shape != shape[1:]:
- raise ValueError('shape of value must be {0}, shape of given '
- 'object was {1}'.format(shape[1:],
- tuple(map(int, value.shape))))
+ raise ValueError(
+ 'shape of value must be {0}, shape of given object was '
+ '{1}'.format(shape[1:], tuple(map(int, value.shape))))
mat = np.asarray(value)
elif np.isscalar(value):
dtype, value = _infer_dtype_from_scalar(value)
@@ -589,7 +586,10 @@ def tail(self, n=5):
def _needs_reindex_multi(self, axes, method, level):
# only allowing multi-index on Panel (and not > dims)
- return method is None and not self._is_mixed_type and self._AXIS_LEN <= 3 and com._count_not_none(*axes.values()) == 3
+ return (method is None and
+ not self._is_mixed_type and
+ self._AXIS_LEN <= 3 and
+ com._count_not_none(*axes.values()) == 3)
def _reindex_multi(self, axes, copy, fill_value):
""" we are guaranteed non-Nones in the axes! """
@@ -780,13 +780,13 @@ def _ixs(self, i, axis=0):
# xs cannot handle a non-scalar key, so just reindex here
if _is_list_like(key):
- indexer = { self._get_axis_name(axis): key }
+ indexer = {self._get_axis_name(axis): key}
return self.reindex(**indexer)
# a reduction
if axis == 0:
values = self._data.iget(i)
- return self._box_item_values(key,values)
+ return self._box_item_values(key, values)
# xs by position
self._consolidate_inplace()
@@ -904,11 +904,11 @@ def _construct_return_type(self, result, axes=None, **kwargs):
elif self.ndim == ndim + 1:
if axes is None:
return self._constructor_sliced(result)
- return self._constructor_sliced(result,
- **self._extract_axes_for_slice(self, axes))
+ return self._constructor_sliced(
+ result, **self._extract_axes_for_slice(self, axes))
- raise PandasError("invalid _construct_return_type [self->%s] [result->%s]" %
- (self.ndim, result.ndim))
+ raise PandasError('invalid _construct_return_type [self->%s] '
+ '[result->%s]' % (self.ndim, result.ndim))
def _wrap_result(self, result, axis):
axis = self._get_axis_name(axis)
@@ -920,15 +920,19 @@ def _wrap_result(self, result, axis):
@Appender(_shared_docs['reindex'] % _shared_doc_kwargs)
def reindex(self, items=None, major_axis=None, minor_axis=None, **kwargs):
- major_axis = major_axis if major_axis is not None else kwargs.pop('major', None)
- minor_axis = minor_axis if minor_axis is not None else kwargs.pop('minor', None)
+ major_axis = (major_axis if major_axis is not None
+ else kwargs.pop('major', None))
+ minor_axis = (minor_axis if minor_axis is not None
+ else kwargs.pop('minor', None))
return super(Panel, self).reindex(items=items, major_axis=major_axis,
minor_axis=minor_axis, **kwargs)
@Appender(_shared_docs['rename'] % _shared_doc_kwargs)
def rename(self, items=None, major_axis=None, minor_axis=None, **kwargs):
- major_axis = major_axis if major_axis is not None else kwargs.pop('major', None)
- minor_axis = minor_axis if minor_axis is not None else kwargs.pop('minor', None)
+ major_axis = (major_axis if major_axis is not None
+ else kwargs.pop('major', None))
+ minor_axis = (minor_axis if minor_axis is not None
+ else kwargs.pop('minor', None))
return super(Panel, self).rename(items=items, major_axis=major_axis,
minor_axis=minor_axis, **kwargs)
@@ -939,6 +943,7 @@ def reindex_axis(self, labels, axis=0, method=None, level=None, copy=True,
method=method, level=level,
copy=copy, limit=limit,
fill_value=fill_value)
+
@Appender(_shared_docs['transpose'] % _shared_doc_kwargs)
def transpose(self, *args, **kwargs):
return super(Panel, self).transpose(*args, **kwargs)
@@ -1225,11 +1230,11 @@ def _add_aggregate_operations(cls, use_numexpr=True):
# doc strings substitors
_agg_doc = """
-Wrapper method for %s
+Wrapper method for %%s
Parameters
----------
-other : """ + "%s or %s" % (cls._constructor_sliced.__name__, cls.__name__) + """
+other : %s or %s""" % (cls._constructor_sliced.__name__, cls.__name__) + """
axis : {""" + ', '.join(cls._AXIS_ORDERS) + "}" + """
Axis to broadcast over
@@ -1237,19 +1242,22 @@ def _add_aggregate_operations(cls, use_numexpr=True):
-------
""" + cls.__name__ + "\n"
- def _panel_arith_method(op, name, str_rep = None, default_axis=None,
+ def _panel_arith_method(op, name, str_rep=None, default_axis=None,
fill_zeros=None, **eval_kwargs):
def na_op(x, y):
try:
- result = expressions.evaluate(op, str_rep, x, y, raise_on_error=True, **eval_kwargs)
+ result = expressions.evaluate(op, str_rep, x, y,
+ raise_on_error=True,
+ **eval_kwargs)
except TypeError:
result = op(x, y)
- # handles discrepancy between numpy and numexpr on division/mod by 0
- # though, given that these are generally (always?) non-scalars, I'm
- # not sure whether it's worth it at the moment
- result = com._fill_zeros(result,y,fill_zeros)
+ # handles discrepancy between numpy and numexpr on division/mod
+ # by 0 though, given that these are generally (always?)
+ # non-scalars, I'm not sure whether it's worth it at the moment
+ result = com._fill_zeros(result, y, fill_zeros)
return result
+
@Substitution(name)
@Appender(_agg_doc)
def f(self, other, axis=0):
@@ -1258,9 +1266,9 @@ def f(self, other, axis=0):
return f
# add `div`, `mul`, `pow`, etc..
- ops.add_flex_arithmetic_methods(cls, _panel_arith_method,
- use_numexpr=use_numexpr,
- flex_comp_method=ops._comp_method_PANEL)
+ ops.add_flex_arithmetic_methods(
+ cls, _panel_arith_method, use_numexpr=use_numexpr,
+ flex_comp_method=ops._comp_method_PANEL)
Panel._setup_axes(axes=['items', 'major_axis', 'minor_axis'],
info_axis=0,
@@ -1276,5 +1284,3 @@ def f(self, other, axis=0):
WidePanel = Panel
LongPanel = DataFrame
-
-
diff --git a/pandas/core/panel4d.py b/pandas/core/panel4d.py
index 5679506cc6bb8..3d480464388c8 100644
--- a/pandas/core/panel4d.py
+++ b/pandas/core/panel4d.py
@@ -5,15 +5,14 @@
Panel4D = create_nd_panel_factory(
klass_name='Panel4D',
- orders =['labels', 'items', 'major_axis', 'minor_axis'],
- slices ={'labels': 'labels', 'items': 'items',
- 'major_axis': 'major_axis',
- 'minor_axis': 'minor_axis'},
+ orders=['labels', 'items', 'major_axis', 'minor_axis'],
+ slices={'labels': 'labels', 'items': 'items', 'major_axis': 'major_axis',
+ 'minor_axis': 'minor_axis'},
slicer=Panel,
- aliases ={'major': 'major_axis', 'minor': 'minor_axis'},
+ aliases={'major': 'major_axis', 'minor': 'minor_axis'},
stat_axis=2,
- ns=dict(__doc__= """
- Represents a 4 dimensonal structured
+ ns=dict(__doc__="""
+ Represents a 4 dimensional structured
Parameters
----------
@@ -28,10 +27,9 @@
Data type to force, otherwise infer
copy : boolean, default False
Copy data from inputs. Only affects DataFrame / 2d ndarray input
- """
+ """)
+)
- )
- )
def panel4d_init(self, data=None, labels=None, items=None, major_axis=None,
minor_axis=None, copy=False, dtype=None):
diff --git a/pandas/core/panelnd.py b/pandas/core/panelnd.py
index 9ccce1edc9067..8ac84c0d91adc 100644
--- a/pandas/core/panelnd.py
+++ b/pandas/core/panelnd.py
@@ -5,27 +5,24 @@
import pandas.compat as compat
-
-def create_nd_panel_factory(klass_name, orders, slices, slicer, aliases=None, stat_axis=2, info_axis=0, ns=None):
+def create_nd_panel_factory(klass_name, orders, slices, slicer, aliases=None,
+ stat_axis=2, info_axis=0, ns=None):
""" manufacture a n-d class:
- parameters
+ Parameters
----------
klass_name : the klass name
- orders : the names of the axes in order (highest to lowest)
- slices : a dictionary that defines how the axes map to the sliced axis
- slicer : the class representing a slice of this panel
- aliases : a dictionary defining aliases for various axes
- default = { major : major_axis, minor : minor_axis }
- stat_axis : the default statistic axis
- default = 2
- info_axis : the info axis
-
-
- returns
+ orders : the names of the axes in order (highest to lowest)
+    slices : a dictionary that defines how the axes map to the sliced axis
+ slicer : the class representing a slice of this panel
+ aliases : a dictionary defining aliases for various axes
+ default = { major : major_axis, minor : minor_axis }
+    stat_axis : the default statistic axis (default = 2)
+ info_axis : the info axis
+
+ Returns
-------
- a class object reprsenting this panel
-
+ a class object representing this panel
"""
@@ -42,11 +39,8 @@ def create_nd_panel_factory(klass_name, orders, slices, slicer, aliases=None, st
klass = type(klass_name, (slicer,), ns)
# setup the axes
- klass._setup_axes(axes = orders,
- info_axis = info_axis,
- stat_axis = stat_axis,
- aliases = aliases,
- slicers = slices)
+ klass._setup_axes(axes=orders, info_axis=info_axis, stat_axis=stat_axis,
+ aliases=aliases, slicers=slices)
klass._constructor_sliced = slicer
@@ -101,7 +95,8 @@ def _combine_with_constructor(self, other, func):
klass._combine_with_constructor = _combine_with_constructor
# set as NonImplemented operations which we don't support
- for f in ['to_frame', 'to_excel', 'to_sparse', 'groupby', 'join', 'filter', 'dropna', 'shift']:
+ for f in ['to_frame', 'to_excel', 'to_sparse', 'groupby', 'join', 'filter',
+ 'dropna', 'shift']:
def func(self, *args, **kwargs):
raise NotImplementedError
setattr(klass, f, func)
diff --git a/pandas/core/reshape.py b/pandas/core/reshape.py
index c2c1a2931d4aa..24a4797759dab 100644
--- a/pandas/core/reshape.py
+++ b/pandas/core/reshape.py
@@ -71,12 +71,14 @@ def __init__(self, values, index, level=-1, value_columns=None):
levels = index.levels
labels = index.labels
- def _make_index(lev,lab):
- i = lev.__class__(_make_index_array_level(lev.values,lab))
+
+ def _make_index(lev, lab):
+ i = lev.__class__(_make_index_array_level(lev.values, lab))
i.name = lev.name
return i
- self.new_index_levels = list([ _make_index(lev,lab) for lev,lab in zip(levels,labels) ])
+ self.new_index_levels = [_make_index(lev, lab)
+ for lev, lab in zip(levels, labels)]
self.new_index_names = list(index.names)
self.removed_name = self.new_index_names.pop(self.level)
@@ -154,7 +156,8 @@ def get_result(self):
mask = isnull(index)
if mask.any():
l = np.arange(len(index))
- values, orig_values = np.empty((len(index),values.shape[1])), values
+ values, orig_values = (np.empty((len(index), values.shape[1])),
+ values)
values.fill(np.nan)
values_indexer = com._ensure_int64(l[~mask])
for i, j in enumerate(values_indexer):
@@ -224,7 +227,7 @@ def get_new_index(self):
result_labels = []
for cur in self.sorted_labels[:-1]:
labels = cur.take(self.compressor)
- labels = _make_index_array_level(labels,cur)
+ labels = _make_index_array_level(labels, cur)
result_labels.append(labels)
# construct the new index
@@ -240,26 +243,27 @@ def get_new_index(self):
return new_index
-def _make_index_array_level(lev,lab):
+def _make_index_array_level(lev, lab):
""" create the combined index array, preserving nans, return an array """
mask = lab == -1
if not mask.any():
return lev
l = np.arange(len(lab))
- mask_labels = np.empty(len(mask[mask]),dtype=object)
+ mask_labels = np.empty(len(mask[mask]), dtype=object)
mask_labels.fill(np.nan)
mask_indexer = com._ensure_int64(l[mask])
labels = lev
labels_indexer = com._ensure_int64(l[~mask])
- new_labels = np.empty(tuple([len(lab)]),dtype=object)
+ new_labels = np.empty(tuple([len(lab)]), dtype=object)
new_labels[labels_indexer] = labels
- new_labels[mask_indexer] = mask_labels
+ new_labels[mask_indexer] = mask_labels
return new_labels
+
def _unstack_multiple(data, clocs):
if len(clocs) == 0:
return data
@@ -341,7 +345,8 @@ def pivot(self, index=None, columns=None, values=None):
return indexed.unstack(columns)
else:
indexed = Series(self[values].values,
- index=MultiIndex.from_arrays([self[index], self[columns]]))
+ index=MultiIndex.from_arrays([self[index],
+ self[columns]]))
return indexed.unstack(columns)
@@ -540,9 +545,10 @@ def _stack_multi_columns(frame, level=-1, dropna=True):
# tuple list excluding level for grouping columns
if len(frame.columns.levels) > 2:
- tuples = list(zip(*[lev.values.take(lab)
- for lev, lab in zip(this.columns.levels[:-1],
- this.columns.labels[:-1])]))
+ tuples = list(zip(*[
+ lev.values.take(lab) for lev, lab in
+ zip(this.columns.levels[:-1], this.columns.labels[:-1])
+ ]))
unique_groups = [key for key, _ in itertools.groupby(tuples)]
new_names = this.columns.names[:-1]
new_columns = MultiIndex.from_tuples(unique_groups, names=new_names)
@@ -678,7 +684,8 @@ def melt(frame, id_vars=None, value_vars=None,
frame = frame.copy()
if col_level is not None: # allow list or other?
- frame.columns = frame.columns.get_level_values(col_level) # frame is a copy
+ # frame is a copy
+ frame.columns = frame.columns.get_level_values(col_level)
if var_name is None:
if isinstance(frame.columns, MultiIndex):
@@ -848,7 +855,8 @@ def get_dummies(data, prefix=None, prefix_sep='_', dummy_na=False):
2 0 0 1
"""
- cat = Categorical.from_array(Series(data)) # Series avoids inconsistent NaN handling
+ # Series avoids inconsistent NaN handling
+ cat = Categorical.from_array(Series(data))
levels = cat.levels
# if all NaN
@@ -957,6 +965,9 @@ def block2d_to_blocknd(values, items, shape, labels, ref_items=None):
def factor_indexer(shape, labels):
- """ given a tuple of shape and a list of Categorical labels, return the expanded label indexer """
+ """ given a tuple of shape and a list of Categorical labels, return the
+ expanded label indexer
+ """
mult = np.array(shape)[::-1].cumprod()[::-1]
- return com._ensure_platform_int(np.sum(np.array(labels).T * np.append(mult, [1]), axis=1).T)
+ return com._ensure_platform_int(
+ np.sum(np.array(labels).T * np.append(mult, [1]), axis=1).T)
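The `get_dummies` hunk above only rewraps the comment about using `Series` for consistent NaN handling; for context, this is the behavior being preserved, including the `dummy_na` flag visible in the function signature (a quick illustration, not part of this diff):

```python
import numpy as np
import pandas as pd

s = pd.Series(['a', 'b', np.nan, 'a'])

# default: rows that are NaN get all-zero indicator columns
print(pd.get_dummies(s))

# dummy_na=True adds an explicit column for missing values
print(pd.get_dummies(s, dummy_na=True))
```

The `Categorical.from_array(Series(data))` round-trip in the patched code is what makes the NaN row come out all-zero instead of raising or mis-coding.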
diff --git a/pandas/core/series.py b/pandas/core/series.py
index 3e8202c7ec0b6..cf704e9aef174 100644
--- a/pandas/core/series.py
+++ b/pandas/core/series.py
@@ -22,8 +22,8 @@
_values_from_object,
_possibly_cast_to_datetime, _possibly_castable,
_possibly_convert_platform,
- ABCSparseArray, _maybe_match_name, _ensure_object,
- SettingWithCopyError)
+ ABCSparseArray, _maybe_match_name,
+ _ensure_object, SettingWithCopyError)
from pandas.core.index import (Index, MultiIndex, InvalidIndexError,
_ensure_index, _handle_legacy_indexes)
@@ -63,6 +63,7 @@
axes_single_arg="{0,'index'}"
)
+
def _coerce_method(converter):
""" install the scalar coercion methods """
@@ -224,8 +225,8 @@ def __init__(self, data=None, index=None, dtype=None, name=None,
self._set_axis(0, index, fastpath=True)
@classmethod
- def from_array(cls, arr, index=None, name=None, copy=False, fastpath=False):
-
+ def from_array(cls, arr, index=None, name=None, copy=False,
+ fastpath=False):
# return a sparse series here
if isinstance(arr, ABCSparseArray):
from pandas.sparse.series import SparseSeries
@@ -336,7 +337,8 @@ def __len__(self):
return len(self._data)
def view(self, dtype=None):
- return self._constructor(self.values.view(dtype), index=self.index).__finalize__(self)
+ return self._constructor(self.values.view(dtype),
+ index=self.index).__finalize__(self)
def __array__(self, result=None):
""" the array interface, return my values """
@@ -346,7 +348,8 @@ def __array_wrap__(self, result):
"""
Gets called prior to a ufunc (and after)
"""
- return self._constructor(result, index=self.index, copy=False).__finalize__(self)
+ return self._constructor(result, index=self.index,
+ copy=False).__finalize__(self)
def __contains__(self, key):
return key in self.index
@@ -455,7 +458,7 @@ def _ixs(self, i, axis=0):
raise
except:
if isinstance(i, slice):
- indexer = self.index._convert_slice_indexer(i,typ='iloc')
+ indexer = self.index._convert_slice_indexer(i, typ='iloc')
return self._get_values(indexer)
else:
label = self.index[i]
@@ -472,8 +475,9 @@ def _is_mixed_type(self):
def _slice(self, slobj, axis=0, raise_on_error=False, typ=None):
if raise_on_error:
_check_slice_bounds(slobj, self.values)
- slobj = self.index._convert_slice_indexer(slobj,typ=typ or 'getitem')
- return self._constructor(self.values[slobj], index=self.index[slobj]).__finalize__(self)
+ slobj = self.index._convert_slice_indexer(slobj, typ=typ or 'getitem')
+ return self._constructor(self.values[slobj],
+ index=self.index[slobj]).__finalize__(self)
def __getitem__(self, key):
try:
@@ -510,7 +514,7 @@ def __getitem__(self, key):
def _get_with(self, key):
# other: fancy integer or otherwise
if isinstance(key, slice):
- indexer = self.index._convert_slice_indexer(key,typ='getitem')
+ indexer = self.index._convert_slice_indexer(key, typ='getitem')
return self._get_values(indexer)
else:
if isinstance(key, tuple):
@@ -564,11 +568,13 @@ def _get_values_tuple(self, key):
# If key is contained, would have returned by now
indexer, new_index = self.index.get_loc_level(key)
- return self._constructor(self.values[indexer], index=new_index).__finalize__(self)
+ return self._constructor(self.values[indexer],
+ index=new_index).__finalize__(self)
def _get_values(self, indexer):
try:
- return self._constructor(self._data.get_slice(indexer), fastpath=True).__finalize__(self)
+ return self._constructor(self._data.get_slice(indexer),
+ fastpath=True).__finalize__(self)
except Exception:
return self.values[indexer]
@@ -605,7 +611,8 @@ def __setitem__(self, key, value):
return
except TypeError as e:
- if isinstance(key, tuple) and not isinstance(self.index, MultiIndex):
+ if isinstance(key, tuple) and not isinstance(self.index,
+ MultiIndex):
raise ValueError("Can only tuple-index with a MultiIndex")
# python 3 type errors should be raised
@@ -635,7 +642,7 @@ def _set_with_engine(self, key, value):
def _set_with(self, key, value):
# other: fancy integer or otherwise
if isinstance(key, slice):
- indexer = self.index._convert_slice_indexer(key,typ='getitem')
+ indexer = self.index._convert_slice_indexer(key, typ='getitem')
return self._set_values(indexer, value)
else:
if isinstance(key, tuple):
@@ -677,7 +684,7 @@ def _set_labels(self, key, value):
def _set_values(self, key, value):
if isinstance(key, Series):
key = key.values
- self._data = self._data.setitem(key,value)
+ self._data = self._data.setitem(key, value)
# help out SparseSeries
_get_val_at = ndarray.__getitem__
@@ -705,7 +712,8 @@ def repeat(self, reps):
"""
new_index = self.index.repeat(reps)
new_values = self.values.repeat(reps)
- return self._constructor(new_values, index=new_index).__finalize__(self)
+ return self._constructor(new_values,
+ index=new_index).__finalize__(self)
def reshape(self, *args, **kwargs):
"""
@@ -722,7 +730,6 @@ def reshape(self, *args, **kwargs):
return self.values.reshape(shape, **kwargs)
-
def get(self, label, default=None):
"""
Returns value occupying requested label, default to specified
@@ -824,7 +831,8 @@ def reset_index(self, level=None, drop=False, name=None, inplace=False):
# set name if it was passed, otherwise, keep the previous name
self.name = name or self.name
else:
- return self._constructor(self.values.copy(), index=new_index).__finalize__(self)
+ return self._constructor(self.values.copy(),
+ index=new_index).__finalize__(self)
elif inplace:
raise TypeError('Cannot reset_index inplace on a Series '
'to create a DataFrame')
@@ -1035,7 +1043,8 @@ def to_frame(self, name=None):
Parameters
----------
name : object, default None
- The passed name should substitute for the series name (if it has one).
+ The passed name should substitute for the series name (if it has
+ one).
Returns
-------
@@ -1094,18 +1103,21 @@ def count(self, level=None):
level_index = self.index.levels[level]
if len(self) == 0:
- return self._constructor(0, index=level_index).__finalize__(self)
+ return self._constructor(0, index=level_index)\
+ .__finalize__(self)
# call cython function
max_bin = len(level_index)
labels = com._ensure_int64(self.index.labels[level])
counts = lib.count_level_1d(mask.view(pa.uint8),
labels, max_bin)
- return self._constructor(counts, index=level_index).__finalize__(self)
+ return self._constructor(counts,
+ index=level_index).__finalize__(self)
return notnull(_values_from_object(self)).sum()
- def value_counts(self, normalize=False, sort=True, ascending=False, bins=None):
+ def value_counts(self, normalize=False, sort=True, ascending=False,
+ bins=None):
"""
Returns Series containing counts of unique values. The resulting Series
will be in descending order so that the first element is the most
@@ -1195,7 +1207,6 @@ def drop_duplicates(self, take_last=False, inplace=False):
else:
return result
-
def duplicated(self, take_last=False):
"""
Return boolean Series denoting duplicate values
@@ -1211,7 +1222,8 @@ def duplicated(self, take_last=False):
"""
keys = _ensure_object(self.values)
duplicated = lib.duplicated(keys, take_last=take_last)
- return self._constructor(duplicated, index=self.index).__finalize__(self)
+ return self._constructor(duplicated,
+ index=self.index).__finalize__(self)
def idxmin(self, axis=None, out=None, skipna=True):
"""
@@ -1276,7 +1288,8 @@ def round(self, decimals=0, out=None):
"""
result = _values_from_object(self).round(decimals, out=out)
if out is None:
- result = self._constructor(result, index=self.index).__finalize__(self)
+ result = self._constructor(result,
+ index=self.index).__finalize__(self)
return result
@@ -1448,7 +1461,8 @@ def autocorr(self):
def dot(self, other):
"""
- Matrix multiplication with DataFrame or inner-product with Series objects
+ Matrix multiplication with DataFrame or inner-product with Series
+ objects
Parameters
----------
@@ -1692,7 +1706,8 @@ def sort_index(self, ascending=True):
ascending=ascending)
new_values = self.values.take(indexer)
- return self._constructor(new_values, index=new_labels).__finalize__(self)
+ return self._constructor(new_values,
+ index=new_labels).__finalize__(self)
def argsort(self, axis=0, kind='quicksort', order=None):
"""
@@ -1720,7 +1735,8 @@ def argsort(self, axis=0, kind='quicksort', order=None):
-1, index=self.index, name=self.name, dtype='int64')
notmask = -mask
result[notmask] = np.argsort(values[notmask], kind=kind)
- return self._constructor(result, index=self.index).__finalize__(self)
+ return self._constructor(result,
+ index=self.index).__finalize__(self)
else:
return self._constructor(
np.argsort(values, kind=kind), index=self.index,
@@ -1802,8 +1818,8 @@ def _try_kind_sort(arr):
sortedIdx[n:] = idx[good][argsorted]
sortedIdx[:n] = idx[bad]
- return self._constructor(arr[sortedIdx],
- index=self.index[sortedIdx]).__finalize__(self)
+ return self._constructor(arr[sortedIdx], index=self.index[sortedIdx])\
+ .__finalize__(self)
def sortlevel(self, level=0, ascending=True):
"""
@@ -1825,7 +1841,8 @@ def sortlevel(self, level=0, ascending=True):
new_index, indexer = self.index.sortlevel(level, ascending=ascending)
new_values = self.values.take(indexer)
- return self._constructor(new_values, index=new_index).__finalize__(self)
+ return self._constructor(new_values,
+ index=new_index).__finalize__(self)
def swaplevel(self, i, j, copy=True):
"""
@@ -1954,10 +1971,12 @@ def map_f(values, f):
indexer = arg.index.get_indexer(values)
new_values = com.take_1d(arg.values, indexer)
- return self._constructor(new_values, index=self.index).__finalize__(self)
+ return self._constructor(new_values,
+ index=self.index).__finalize__(self)
else:
mapped = map_f(values, arg)
- return self._constructor(mapped, index=self.index).__finalize__(self)
+ return self._constructor(mapped,
+ index=self.index).__finalize__(self)
def apply(self, func, convert_dtype=True, args=(), **kwds):
"""
@@ -2000,7 +2019,8 @@ def apply(self, func, convert_dtype=True, args=(), **kwds):
from pandas.core.frame import DataFrame
return DataFrame(mapped.tolist(), index=self.index)
else:
- return self._constructor(mapped, index=self.index).__finalize__(self)
+ return self._constructor(mapped,
+ index=self.index).__finalize__(self)
def _reduce(self, op, axis=0, skipna=True, numeric_only=None,
filter_type=None, **kwds):
@@ -2018,7 +2038,9 @@ def _reindex_indexer(self, new_index, indexer, copy):
return self._constructor(new_values, index=new_index)
def _needs_reindex_multi(self, axes, method, level):
- """ check if we do need a multi reindex; this is for compat with higher dims """
+ """ check if we do need a multi reindex; this is for compat with
+ higher dims
+ """
return False
@Appender(generic._shared_docs['reindex'] % _shared_doc_kwargs)
@@ -2057,7 +2079,8 @@ def take(self, indices, axis=0, convert=True):
indices = com._ensure_platform_int(indices)
new_index = self.index.take(indices)
new_values = self.values.take(indices)
- return self._constructor(new_values, index=new_index).__finalize__(self)
+ return self._constructor(new_values,
+ index=new_index).__finalize__(self)
def isin(self, values):
"""
@@ -2314,7 +2337,8 @@ def asof(self, where):
@property
def weekday(self):
- return self._constructor([d.weekday() for d in self.index], index=self.index).__finalize__(self)
+ return self._constructor([d.weekday() for d in self.index],
+ index=self.index).__finalize__(self)
def tz_convert(self, tz, copy=True):
"""
@@ -2336,7 +2360,8 @@ def tz_convert(self, tz, copy=True):
if copy:
new_values = new_values.copy()
- return self._constructor(new_values, index=new_index).__finalize__(self)
+ return self._constructor(new_values,
+ index=new_index).__finalize__(self)
def tz_localize(self, tz, copy=True, infer_dst=False):
"""
@@ -2373,7 +2398,8 @@ def tz_localize(self, tz, copy=True, infer_dst=False):
if copy:
new_values = new_values.copy()
- return self._constructor(new_values, index=new_index).__finalize__(self)
+ return self._constructor(new_values,
+ index=new_index).__finalize__(self)
@cache_readonly
def str(self):
@@ -2401,7 +2427,8 @@ def to_timestamp(self, freq=None, how='start', copy=True):
new_values = new_values.copy()
new_index = self.index.to_timestamp(freq=freq, how=how)
- return self._constructor(new_values, index=new_index).__finalize__(self)
+ return self._constructor(new_values,
+ index=new_index).__finalize__(self)
def to_period(self, freq=None, copy=True):
"""
@@ -2423,7 +2450,8 @@ def to_period(self, freq=None, copy=True):
if freq is None:
freq = self.index.freqstr or self.index.inferred_freq
new_index = self.index.to_period(freq=freq)
- return self._constructor(new_values, index=new_index).__finalize__(self)
+ return self._constructor(new_values,
+ index=new_index).__finalize__(self)
Series._setup_axes(['index'], info_axis=0, stat_axis=0,
aliases={'rows': 0})
diff --git a/pandas/core/sparse.py b/pandas/core/sparse.py
index 7b9caaa3a0139..84149e5598f82 100644
--- a/pandas/core/sparse.py
+++ b/pandas/core/sparse.py
@@ -1,6 +1,6 @@
"""
-Data structures for sparse float data. Life is made simpler by dealing only with
-float64 data
+Data structures for sparse float data. Life is made simpler by dealing only
+with float64 data
"""
# pylint: disable=W0611
diff --git a/pandas/core/strings.py b/pandas/core/strings.py
index c1bd369686969..0df9db2ebd06c 100644
--- a/pandas/core/strings.py
+++ b/pandas/core/strings.py
@@ -9,6 +9,7 @@
import pandas.lib as lib
import warnings
+
def _get_array_list(arr, others):
if isinstance(others[0], (list, np.ndarray)):
arrays = [arr] + list(others)
@@ -115,6 +116,7 @@ def g(x):
else:
return lib.map_infer(arr, f)
+
def str_title(arr):
"""
Convert strings to titlecased version
@@ -399,29 +401,31 @@ def f(x):
return None
m = regex.search(x)
if m:
- return m.groups()[0] # may be None
+ return m.groups()[0] # may be None
else:
return None
else:
empty_row = Series(regex.groups*[None])
+
def f(x):
if not isinstance(x, compat.string_types):
return empty_row
m = regex.search(x)
if m:
- return Series(list(m.groups())) # may contain None
+ return Series(list(m.groups())) # may contain None
else:
return empty_row
result = arr.apply(f)
result.replace({None: np.nan}, inplace=True)
if regex.groups > 1:
- result = DataFrame(result) # Don't rely on the wrapper; name columns.
+ result = DataFrame(result) # Don't rely on the wrapper; name columns.
names = dict(zip(regex.groupindex.values(), regex.groupindex.keys()))
result.columns = [names.get(1 + i, i) for i in range(regex.groups)]
else:
result.name = regex.groupindex.get(0)
return result
+
def str_join(arr, sep):
"""
Join lists contained as elements in array, a la str.join
| I started a PEP8 cleanup and did it for a few modules. I changed only the things where this doesn't produce ugly code.
If this is acceptable, I can continue and go through the whole pandas codebase. There are just a few things that I would like to check first:
1. PEP8: "For flowing long blocks of text with fewer structural restrictions (docstrings or comments), the line length should be limited to 72 characters." - should I do this?
2. Should I clean up the unused imports? There are a lot of unused imports, not just from pandas but also from the standard library, as well as duplicate imports (for example, `timedelta` is imported twice in the cleanup that I did)...
3. Should I discard the unused variables? (There are a lot of them, especially in the tests.)
4. `a[i+2:3]` is against PEP8; it should look like `a[i + 2:3]`. But I don't think these things should be changed: the second one is uglier, even though the first is against PEP8.
Also, one problem with this cleanup is that it changes a lot of things, so it should be merged as soon as possible :( because it will soon become unmergeable...
| https://api.github.com/repos/pandas-dev/pandas/pulls/5038 | 2013-09-29T13:33:56Z | 2013-11-17T01:09:10Z | 2013-11-17T01:09:10Z | 2014-07-16T08:32:03Z |
DOC: make experimental functionality more visible in release notes | diff --git a/doc/source/release.rst b/doc/source/release.rst
index 66c3dcd203a6a..1f0e447429d6a 100644
--- a/doc/source/release.rst
+++ b/doc/source/release.rst
@@ -60,6 +60,21 @@ New features
- Clipboard functionality now works with PySide (:issue:`4282`)
- New ``extract`` string method returns regex matches more conveniently (:issue:`4685`)
+Experimental Features
+~~~~~~~~~~~~~~~~~~~~~
+
+- The new :func:`~pandas.eval` function implements expression evaluation using
+ ``numexpr`` behind the scenes. This results in large speedups for complicated
+ expressions involving large DataFrames/Series.
+- :class:`~pandas.DataFrame` has a new :meth:`~pandas.DataFrame.eval` that
+ evaluates an expression in the context of the ``DataFrame``.
+- A :meth:`~pandas.DataFrame.query` method has been added that allows
+ you to select elements of a ``DataFrame`` using a natural query syntax nearly
+ identical to Python syntax.
+- ``pd.eval`` and friends now evaluate operations involving ``datetime64``
+ objects in Python space because ``numexpr`` cannot handle ``NaT`` values
+ (:issue:`4897`).
+
Improvements to existing features
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
@@ -327,21 +342,6 @@ See :ref:`Internal Refactoring<whatsnew_0130.refactoring>`
- Complex compat for ``Series`` with ``ndarray``. (:issue:`4819`)
- Removed unnecessary ``rwproperty`` from codebase in favor of builtin property. (:issue:`4843`)
-Experimental Features
-~~~~~~~~~~~~~~~~~~~~~
-
-- The new :func:`~pandas.eval` function implements expression evaluation using
- ``numexpr`` behind the scenes. This results in large speedups for complicated
- expressions involving large DataFrames/Series.
-- :class:`~pandas.DataFrame` has a new :meth:`~pandas.DataFrame.eval` that
- evaluates an expression in the context of the ``DataFrame``.
-- A :meth:`~pandas.DataFrame.query` method has been added that allows
- you to select elements of a ``DataFrame`` using a natural query syntax nearly
- identical to Python syntax.
-- ``pd.eval`` and friends now evaluate operations involving ``datetime64``
- objects in Python space because ``numexpr`` cannot handle ``NaT`` values
- (:issue:`4897`).
-
.. _release.bug_fixes-0.13.0:
| closes #5031
| https://api.github.com/repos/pandas-dev/pandas/pulls/5037 | 2013-09-29T04:47:48Z | 2013-09-29T04:48:11Z | 2013-09-29T04:48:11Z | 2014-07-16T08:32:01Z |
R dataset | diff --git a/doc/source/r_interface.rst b/doc/source/r_interface.rst
index 79a87cb49f027..4f5c5a03a1be5 100644
--- a/doc/source/r_interface.rst
+++ b/doc/source/r_interface.rst
@@ -20,7 +20,7 @@ its release 2.3, while the current interface is
designed for the 2.2.x series. We recommend to use 2.2.x over other series
unless you are prepared to fix parts of the code, yet the rpy2-2.3.0
introduces improvements such as a better R-Python bridge memory management
-layer so I might be a good idea to bite the bullet and submit patches for
+layer so it might be a good idea to bite the bullet and submit patches for
the few minor differences that need to be fixed.
diff --git a/doc/source/release.rst b/doc/source/release.rst
index daee460fc50a1..d5d918c798132 100644
--- a/doc/source/release.rst
+++ b/doc/source/release.rst
@@ -138,6 +138,9 @@ Improvements to existing features
(:issue:`4961`).
- ``concat`` now gives a more informative error message when passed objects
that cannot be concatenated (:issue:`4608`).
+ - Improve support for converting R datasets to pandas objects (more
+ informative index for timeseries and numeric, support for factors, dist, and
+ high-dimensional arrays).
API Changes
~~~~~~~~~~~
diff --git a/pandas/rpy/common.py b/pandas/rpy/common.py
index a640b43ab97e6..d35bab81b19d5 100644
--- a/pandas/rpy/common.py
+++ b/pandas/rpy/common.py
@@ -15,6 +15,9 @@
from rpy2.robjects import r
import rpy2.robjects as robj
+import itertools as IT
+
+
__all__ = ['convert_robj', 'load_data', 'convert_to_r_dataframe',
'convert_to_r_matrix']
@@ -46,38 +49,44 @@ def _is_null(obj):
def _convert_list(obj):
"""
- Convert named Vector to dict
+ Convert named Vector to dict, factors to list
"""
- values = [convert_robj(x) for x in obj]
- return dict(zip(obj.names, values))
+ try:
+ values = [convert_robj(x) for x in obj]
+ keys = r['names'](obj)
+ return dict(zip(keys, values))
+ except TypeError:
+ # For state.division and state.region
+ factors = list(r['factor'](obj))
+ level = list(r['levels'](obj))
+ result = [level[index-1] for index in factors]
+ return result
def _convert_array(obj):
"""
- Convert Array to ndarray
+ Convert Array to DataFrame
"""
- # this royally sucks. "Matrices" (arrays) with dimension > 3 in R aren't
- # really matrices-- things come out Fortran order in the first two
- # dimensions. Maybe I'm wrong?
-
+ def _list(item):
+ try:
+ return list(item)
+ except TypeError:
+ return []
+
+ # For iris3, HairEyeColor, UCBAdmissions, Titanic
dim = list(obj.dim)
values = np.array(list(obj))
-
- if len(dim) == 3:
- arr = values.reshape(dim[-1:] + dim[:-1]).swapaxes(1, 2)
-
- if obj.names is not None:
- name_list = [list(x) for x in obj.names]
- if len(dim) == 2:
- return pd.DataFrame(arr, index=name_list[0], columns=name_list[1])
- elif len(dim) == 3:
- return pd.Panel(arr, items=name_list[2],
- major_axis=name_list[0],
- minor_axis=name_list[1])
- else:
- print('Cannot handle dim=%d' % len(dim))
- else:
- return arr
+ names = r['dimnames'](obj)
+ try:
+ columns = list(r['names'](names))[::-1]
+ except TypeError:
+ columns = ['X{:d}'.format(i) for i in range(len(names))][::-1]
+ columns.append('value')
+ name_list = [(_list(x) or range(d)) for x, d in zip(names, dim)][::-1]
+ arr = np.array(list(IT.product(*name_list)))
+ arr = np.column_stack([arr,values])
+ df = pd.DataFrame(arr, columns=columns)
+ return df
def _convert_vector(obj):
@@ -85,8 +94,24 @@ def _convert_vector(obj):
return _convert_int_vector(obj)
elif isinstance(obj, robj.StrVector):
return _convert_str_vector(obj)
-
- return list(obj)
+ # Check if the vector has extra information attached to it that can be used
+ # as an index
+ try:
+ attributes = set(r['attributes'](obj).names)
+ except AttributeError:
+ return list(obj)
+ if 'names' in attributes:
+ return pd.Series(list(obj), index=r['names'](obj))
+ elif 'tsp' in attributes:
+ return pd.Series(list(obj), index=r['time'](obj))
+ elif 'labels' in attributes:
+ return pd.Series(list(obj), index=r['labels'](obj))
+ if _rclass(obj) == 'dist':
+ # For 'eurodist'. WARNING: This results in a DataFrame, not a Series or list.
+ matrix = r['as.matrix'](obj)
+ return convert_robj(matrix)
+ else:
+ return list(obj)
NA_INTEGER = -2147483648
@@ -141,8 +166,7 @@ def _convert_Matrix(mat):
rows = mat.rownames
columns = None if _is_null(columns) else list(columns)
- index = None if _is_null(rows) else list(rows)
-
+ index = r['time'](mat) if _is_null(rows) else list(rows)
return pd.DataFrame(np.array(mat), index=_check_int(index),
columns=columns)
@@ -197,7 +221,7 @@ def convert_robj(obj, use_pandas=True):
if isinstance(obj, rpy_type):
return converter(obj)
- raise Exception('Do not know what to do with %s object' % type(obj))
+ raise TypeError('Do not know what to do with %s object' % type(obj))
def convert_to_r_posixct(obj):
@@ -329,117 +353,5 @@ def convert_to_r_matrix(df, strings_as_factors=False):
return r_matrix
-
-def test_convert_list():
- obj = r('list(a=1, b=2, c=3)')
-
- converted = convert_robj(obj)
- expected = {'a': [1], 'b': [2], 'c': [3]}
-
- _test.assert_dict_equal(converted, expected)
-
-
-def test_convert_nested_list():
- obj = r('list(a=list(foo=1, bar=2))')
-
- converted = convert_robj(obj)
- expected = {'a': {'foo': [1], 'bar': [2]}}
-
- _test.assert_dict_equal(converted, expected)
-
-
-def test_convert_frame():
- # built-in dataset
- df = r['faithful']
-
- converted = convert_robj(df)
-
- assert np.array_equal(converted.columns, ['eruptions', 'waiting'])
- assert np.array_equal(converted.index, np.arange(1, 273))
-
-
-def _test_matrix():
- r('mat <- matrix(rnorm(9), ncol=3)')
- r('colnames(mat) <- c("one", "two", "three")')
- r('rownames(mat) <- c("a", "b", "c")')
-
- return r['mat']
-
-
-def test_convert_matrix():
- mat = _test_matrix()
-
- converted = convert_robj(mat)
-
- assert np.array_equal(converted.index, ['a', 'b', 'c'])
- assert np.array_equal(converted.columns, ['one', 'two', 'three'])
-
-
-def test_convert_r_dataframe():
-
- is_na = robj.baseenv.get("is.na")
-
- seriesd = _test.getSeriesData()
- frame = pd.DataFrame(seriesd, columns=['D', 'C', 'B', 'A'])
-
- # Null data
- frame["E"] = [np.nan for item in frame["A"]]
- # Some mixed type data
- frame["F"] = ["text" if item % 2 == 0 else np.nan for item in range(30)]
-
- r_dataframe = convert_to_r_dataframe(frame)
-
- assert np.array_equal(convert_robj(r_dataframe.rownames), frame.index)
- assert np.array_equal(convert_robj(r_dataframe.colnames), frame.columns)
- assert all(is_na(item) for item in r_dataframe.rx2("E"))
-
- for column in frame[["A", "B", "C", "D"]]:
- coldata = r_dataframe.rx2(column)
- original_data = frame[column]
- assert np.array_equal(convert_robj(coldata), original_data)
-
- for column in frame[["D", "E"]]:
- for original, converted in zip(frame[column],
- r_dataframe.rx2(column)):
-
- if pd.isnull(original):
- assert is_na(converted)
- else:
- assert original == converted
-
-
-def test_convert_r_matrix():
-
- is_na = robj.baseenv.get("is.na")
-
- seriesd = _test.getSeriesData()
- frame = pd.DataFrame(seriesd, columns=['D', 'C', 'B', 'A'])
- # Null data
- frame["E"] = [np.nan for item in frame["A"]]
-
- r_dataframe = convert_to_r_matrix(frame)
-
- assert np.array_equal(convert_robj(r_dataframe.rownames), frame.index)
- assert np.array_equal(convert_robj(r_dataframe.colnames), frame.columns)
- assert all(is_na(item) for item in r_dataframe.rx(True, "E"))
-
- for column in frame[["A", "B", "C", "D"]]:
- coldata = r_dataframe.rx(True, column)
- original_data = frame[column]
- assert np.array_equal(convert_robj(coldata),
- original_data)
-
- # Pandas bug 1282
- frame["F"] = ["text" if item % 2 == 0 else np.nan for item in range(30)]
-
- # FIXME: Ugly, this whole module needs to be ported to nose/unittest
- try:
- wrong_matrix = convert_to_r_matrix(frame)
- except TypeError:
- pass
- except Exception:
- raise
-
-
if __name__ == '__main__':
pass
diff --git a/pandas/tests/test_rpy.py b/pandas/tests/test_rpy.py
new file mode 100644
index 0000000000000..3a89ec0d5ae20
--- /dev/null
+++ b/pandas/tests/test_rpy.py
@@ -0,0 +1,212 @@
+"""
+Testing that functions from rpy work as expected
+"""
+
+import pandas as pd
+import numpy as np
+import unittest
+import nose
+import pandas.util.testing as tm
+
+try:
+ import pandas.rpy.common as com
+ from rpy2.robjects import r
+ import rpy2.robjects as robj
+except ImportError:
+ raise nose.SkipTest
+
+
+class TestCommon(unittest.TestCase):
+ def test_convert_list(self):
+ obj = r('list(a=1, b=2, c=3)')
+
+ converted = com.convert_robj(obj)
+ expected = {'a': [1], 'b': [2], 'c': [3]}
+
+ tm.assert_dict_equal(converted, expected)
+
+ def test_convert_nested_list(self):
+ obj = r('list(a=list(foo=1, bar=2))')
+
+ converted = com.convert_robj(obj)
+ expected = {'a': {'foo': [1], 'bar': [2]}}
+
+ tm.assert_dict_equal(converted, expected)
+
+ def test_convert_frame(self):
+ # built-in dataset
+ df = r['faithful']
+
+ converted = com.convert_robj(df)
+
+ assert np.array_equal(converted.columns, ['eruptions', 'waiting'])
+ assert np.array_equal(converted.index, np.arange(1, 273))
+
+ def _test_matrix(self):
+ r('mat <- matrix(rnorm(9), ncol=3)')
+ r('colnames(mat) <- c("one", "two", "three")')
+ r('rownames(mat) <- c("a", "b", "c")')
+
+ return r['mat']
+
+ def test_convert_matrix(self):
+ mat = self._test_matrix()
+
+ converted = com.convert_robj(mat)
+
+ assert np.array_equal(converted.index, ['a', 'b', 'c'])
+ assert np.array_equal(converted.columns, ['one', 'two', 'three'])
+
+ def test_convert_r_dataframe(self):
+
+ is_na = robj.baseenv.get("is.na")
+
+ seriesd = tm.getSeriesData()
+ frame = pd.DataFrame(seriesd, columns=['D', 'C', 'B', 'A'])
+
+ # Null data
+ frame["E"] = [np.nan for item in frame["A"]]
+ # Some mixed type data
+ frame["F"] = ["text" if item %
+ 2 == 0 else np.nan for item in range(30)]
+
+ r_dataframe = com.convert_to_r_dataframe(frame)
+
+ assert np.array_equal(
+ com.convert_robj(r_dataframe.rownames), frame.index)
+ assert np.array_equal(
+ com.convert_robj(r_dataframe.colnames), frame.columns)
+ assert all(is_na(item) for item in r_dataframe.rx2("E"))
+
+ for column in frame[["A", "B", "C", "D"]]:
+ coldata = r_dataframe.rx2(column)
+ original_data = frame[column]
+ assert np.array_equal(com.convert_robj(coldata), original_data)
+
+ for column in frame[["D", "E"]]:
+ for original, converted in zip(frame[column],
+ r_dataframe.rx2(column)):
+
+ if pd.isnull(original):
+ assert is_na(converted)
+ else:
+ assert original == converted
+
+ def test_convert_r_matrix(self):
+
+ is_na = robj.baseenv.get("is.na")
+
+ seriesd = tm.getSeriesData()
+ frame = pd.DataFrame(seriesd, columns=['D', 'C', 'B', 'A'])
+ # Null data
+ frame["E"] = [np.nan for item in frame["A"]]
+
+ r_dataframe = com.convert_to_r_matrix(frame)
+
+ assert np.array_equal(
+ com.convert_robj(r_dataframe.rownames), frame.index)
+ assert np.array_equal(
+ com.convert_robj(r_dataframe.colnames), frame.columns)
+ assert all(is_na(item) for item in r_dataframe.rx(True, "E"))
+
+ for column in frame[["A", "B", "C", "D"]]:
+ coldata = r_dataframe.rx(True, column)
+ original_data = frame[column]
+ assert np.array_equal(com.convert_robj(coldata),
+ original_data)
+
+ # Pandas bug 1282
+ frame["F"] = ["text" if item %
+ 2 == 0 else np.nan for item in range(30)]
+
+ try:
+ wrong_matrix = com.convert_to_r_matrix(frame)
+ except TypeError:
+ pass
+ except Exception:
+ raise
+
+ def test_dist(self):
+ for name in ('eurodist',):
+ df = com.load_data(name)
+ dist = r[name]
+ labels = r['labels'](dist)
+ assert np.array_equal(df.index, labels)
+ assert np.array_equal(df.columns, labels)
+
+
+ def test_timeseries(self):
+ """
+ Test that the series has an informative index.
+ Unfortunately the code currently does not build a DateTimeIndex
+ """
+ for name in (
+ 'austres', 'co2', 'fdeaths', 'freeny.y', 'JohnsonJohnson',
+ 'ldeaths', 'mdeaths', 'nottem', 'presidents', 'sunspot.month', 'sunspots',
+ 'UKDriverDeaths', 'UKgas', 'USAccDeaths',
+ 'airmiles', 'discoveries', 'EuStockMarkets',
+ 'LakeHuron', 'lh', 'lynx', 'nhtemp', 'Nile',
+ 'Seatbelts', 'sunspot.year', 'treering', 'uspop'):
+ series = com.load_data(name)
+ ts = r[name]
+ assert np.array_equal(series.index, r['time'](ts))
+
+ def test_numeric(self):
+ for name in ('euro', 'islands', 'precip'):
+ series = com.load_data(name)
+ numeric = r[name]
+ names = numeric.names
+ assert np.array_equal(series.index, names)
+
+ def test_table(self):
+ # This test requires the r package 'reshape2' or 'reshape' to be installed
+ iris3 = pd.DataFrame({'X0': {0: '0', 1: '1', 2: '2', 3: '3', 4: '4'},
+ 'X1': {0: 'Sepal L.',
+ 1: 'Sepal L.',
+ 2: 'Sepal L.',
+ 3: 'Sepal L.',
+ 4: 'Sepal L.'},
+ 'X2': {0: 'Setosa',
+ 1: 'Setosa',
+ 2: 'Setosa',
+ 3: 'Setosa',
+ 4: 'Setosa'},
+ 'value': {0: '5.1', 1: '4.9', 2: '4.7', 3: '4.6', 4: '5.0'}})
+ hec = pd.DataFrame(
+ {'Eye': {0: 'Brown', 1: 'Brown', 2: 'Brown', 3: 'Brown', 4: 'Blue'},
+ 'Hair': {0: 'Black', 1: 'Brown', 2: 'Red', 3: 'Blond', 4: 'Black'},
+ 'Sex': {0: 'Male', 1: 'Male', 2: 'Male', 3: 'Male', 4: 'Male'},
+ 'value': {0: '32.0', 1: '53.0', 2: '10.0', 3: '3.0', 4: '11.0'}})
+ titanic = pd.DataFrame(
+ {'Age': {0: 'Child', 1: 'Child', 2: 'Child', 3: 'Child', 4: 'Child'},
+ 'Class': {0: '1st', 1: '2nd', 2: '3rd', 3: 'Crew', 4: '1st'},
+ 'Sex': {0: 'Male', 1: 'Male', 2: 'Male', 3: 'Male', 4: 'Female'},
+ 'Survived': {0: 'No', 1: 'No', 2: 'No', 3: 'No', 4: 'No'},
+ 'value': {0: '0.0', 1: '0.0', 2: '35.0', 3: '0.0', 4: '0.0'}})
+ for name, expected in zip(('HairEyeColor', 'Titanic', 'iris3'),
+ (hec, titanic, iris3)):
+ df = com.load_data(name)
+ table = r[name]
+ names = r['dimnames'](table)
+ try:
+ columns = list(r['names'](names))[::-1]
+ except TypeError:
+ columns = ['X{:d}'.format(i) for i in range(len(names))][::-1]
+ columns.append('value')
+ assert np.array_equal(df.columns, columns)
+ result = df.head()
+ cond = ((result.sort(axis=1) == expected.sort(axis=1))).values
+ assert np.all(cond)
+ def test_factor(self):
+ for name in ('state.division', 'state.region'):
+ vector = r[name]
+ factors = list(r['factor'](vector))
+ level = list(r['levels'](vector))
+ factors = [level[index-1] for index in factors]
+ result = com.load_data(name)
+ assert np.equal(result, factors)
+
+if __name__ == '__main__':
+ nose.runmodule(argv=[__file__, '-vvs', '-x', '--pdb', '--pdb-failure'],
+ # '--with-coverage', '--cover-package=pandas.core'],
+ exit=False)
| Motivation for this pull request came from this problem: http://stackoverflow.com/q/19039356/190597.
The problems:
- `pandas.rpy.common.load_data` currently fails to load high-dimensional arrays such as the "Titanic" dataset. `load_data` prints an error to stdout and returns None.
- Applying `load_data` to "iris3", "state.division" or "state.region" R datasets currently raise a TypeError.
- Timeseries R data is currently converted to lists that lack a time series index.
- Factor data is converted to a list of index numbers without any levels information.
This pull request:
- Provides code which allows `load_data` to convert the "Titanic" dataset (and similar high-dimensional arrays) to "melted" DataFrames. The code _does not_ use R's melt function, since that would introduce a new dependency.
- load_data now returns a DataFrame for arrays like "iris3".
- load_data converts R factors like "state.division" and "state.region" to lists which use the levels as values.
- load_data converts ts (timeseries) R data, like "airmiles", to Series with an index showing the times. Note, however, that I wasn't able to convert the index to a DatetimeIndex, since the same code is used to convert numeric R data (like "euro") which does not have a time-related index.
- moves already existing tests from rpy/common.py to tests/test_rpy.py
- adds new unit tests which highlight the problems mentioned above, and tests for what I think is more desirable behavior, as described above.
| https://api.github.com/repos/pandas-dev/pandas/pulls/5036 | 2013-09-29T03:47:22Z | 2013-09-29T17:01:40Z | null | 2013-10-02T22:26:05Z |
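The "melting" described in this PR (flattening a high-dimensional R array such as Titanic into one row per cell, without depending on R's melt function) comes down to taking the cartesian product of the dimension labels and pairing each label tuple with the corresponding flattened value, which is what the `_convert_array` helper above does with `itertools.product`. A minimal pure-Python sketch of the same idea, using made-up dimension names and values rather than a real R dataset (so no rpy2 is needed):

```python
import itertools as IT

# Hypothetical flattened 2x2 table, stored the way R flattens arrays
# (column-major order: the first dimension varies fastest).
values = [10, 20, 30, 40]
dimnames = [['No', 'Yes'],        # labels for dimension 1 (fastest-varying)
            ['Male', 'Female']]   # labels for dimension 2

# Reverse the dimensions so that itertools.product, which varies its
# *last* argument fastest, walks the cells in the same order as the
# flattened values.
name_list = dimnames[::-1]
melted = [list(combo) + [v]
          for combo, v in zip(IT.product(*name_list), values)]
# melted == [['Male', 'No', 10], ['Male', 'Yes', 20],
#            ['Female', 'No', 30], ['Female', 'Yes', 40]]

# The factor conversion mentioned in the PR body is simpler still:
# R factors are 1-based level indices, so mapping them back to their
# labels is one list comprehension (hypothetical codes/levels here).
factor_codes = [2, 1, 3, 2]
levels = ['North', 'South', 'West']
labels = [levels[i - 1] for i in factor_codes]
# labels == ['South', 'North', 'West', 'South']
```

In the PR itself the melted rows are stacked with `np.column_stack` and wrapped in a `DataFrame`; the sketch keeps plain lists so the ordering logic is easy to follow.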
BUG/CLN: numpy compat with pandas numeric functions and cln of same (GH4435) | diff --git a/doc/source/release.rst b/doc/source/release.rst
index 73e7e3affd944..2c975e58d9575 100644
--- a/doc/source/release.rst
+++ b/doc/source/release.rst
@@ -267,6 +267,9 @@ API Changes
``SparsePanel``, etc.), now support the entire set of arithmetic operators
and arithmetic flex methods (add, sub, mul, etc.). ``SparsePanel`` does not
support ``pow`` or ``mod`` with non-scalars. (:issue:`3765`)
+ - Provide numpy compatibility with 1.7 for a calling convention like ``np.prod(pandas_object)`` as numpy
+ call with additional keyword args (:issue:`4435`)
+
Internal Refactoring
~~~~~~~~~~~~~~~~~~~~
@@ -345,6 +348,10 @@ See :ref:`Internal Refactoring<whatsnew_0130.refactoring>`
etc.) into a separate, cleaned up wrapper class. (:issue:`4613`)
- Complex compat for ``Series`` with ``ndarray``. (:issue:`4819`)
- Removed unnecessary ``rwproperty`` from codebase in favor of builtin property. (:issue:`4843`)
+- Refactor object level numeric methods (mean/sum/min/max...) from object level modules to
+ ``core/generic.py`` (:issue:`4435`).
+- Refactor cum objects to core/generic.py (:issue:`4435`), note that these have a more numpy-like
+ function signature.
.. _release.bug_fixes-0.13.0:
diff --git a/pandas/core/frame.py b/pandas/core/frame.py
index c6727f91644fc..935dff44ad49e 100644
--- a/pandas/core/frame.py
+++ b/pandas/core/frame.py
@@ -63,28 +63,6 @@
# Docstring templates
-_stat_doc = """
-Return %(name)s over requested axis.
-%(na_action)s
-
-Parameters
-----------
-axis : {0, 1}
- 0 for row-wise, 1 for column-wise
-skipna : boolean, default True
- Exclude NA/null values. If an entire row/column is NA, the result
- will be NA
-level : int, default None
- If the axis is a MultiIndex (hierarchical), count along a
- particular level, collapsing into a DataFrame
-%(extras)s
-Returns
--------
-%(shortname)s : Series (or DataFrame if level specified)
-"""
-
-_doc_exclude_na = "NA/null values are excluded"
-
_numeric_only_doc = """numeric_only : boolean, default None
Include only float, int, boolean data. If None, will attempt to use
everything, then use only numeric data
@@ -3869,7 +3847,7 @@ def _count_level(self, level, axis=0, numeric_only=False):
else:
return result
- def any(self, axis=0, bool_only=None, skipna=True, level=None):
+ def any(self, axis=None, bool_only=None, skipna=True, level=None, **kwargs):
"""
Return whether any element is True over requested axis.
%(na_action)s
@@ -3891,13 +3869,15 @@ def any(self, axis=0, bool_only=None, skipna=True, level=None):
-------
any : Series (or DataFrame if level specified)
"""
+ if axis is None:
+ axis = self._stat_axis_number
if level is not None:
return self._agg_by_level('any', axis=axis, level=level,
skipna=skipna)
return self._reduce(nanops.nanany, axis=axis, skipna=skipna,
numeric_only=bool_only, filter_type='bool')
- def all(self, axis=0, bool_only=None, skipna=True, level=None):
+ def all(self, axis=None, bool_only=None, skipna=True, level=None, **kwargs):
"""
Return whether all elements are True over requested axis.
%(na_action)s
@@ -3919,169 +3899,14 @@ def all(self, axis=0, bool_only=None, skipna=True, level=None):
-------
any : Series (or DataFrame if level specified)
"""
+ if axis is None:
+ axis = self._stat_axis_number
if level is not None:
return self._agg_by_level('all', axis=axis, level=level,
skipna=skipna)
return self._reduce(nanops.nanall, axis=axis, skipna=skipna,
numeric_only=bool_only, filter_type='bool')
- @Substitution(name='sum', shortname='sum', na_action=_doc_exclude_na,
- extras=_numeric_only_doc)
- @Appender(_stat_doc)
- def sum(self, axis=0, numeric_only=None, skipna=True, level=None):
- if level is not None:
- return self._agg_by_level('sum', axis=axis, level=level,
- skipna=skipna)
- return self._reduce(nanops.nansum, axis=axis, skipna=skipna,
- numeric_only=numeric_only)
-
- @Substitution(name='mean', shortname='mean', na_action=_doc_exclude_na,
- extras='')
- @Appender(_stat_doc)
- def mean(self, axis=0, skipna=True, level=None):
- if level is not None:
- return self._agg_by_level('mean', axis=axis, level=level,
- skipna=skipna)
- return self._reduce(nanops.nanmean, axis=axis, skipna=skipna,
- numeric_only=None)
-
- @Substitution(name='minimum', shortname='min', na_action=_doc_exclude_na,
- extras='')
- @Appender(_stat_doc)
- def min(self, axis=0, skipna=True, level=None):
- """
- Notes
- -----
- This method returns the minimum of the values in the DataFrame. If you
- want the *index* of the minimum, use ``DataFrame.idxmin``. This is the
- equivalent of the ``numpy.ndarray`` method ``argmin``.
-
- See Also
- --------
- DataFrame.idxmin
- Series.idxmin
- """
- if level is not None:
- return self._agg_by_level('min', axis=axis, level=level,
- skipna=skipna)
- return self._reduce(nanops.nanmin, axis=axis, skipna=skipna,
- numeric_only=None)
-
- @Substitution(name='maximum', shortname='max', na_action=_doc_exclude_na,
- extras='')
- @Appender(_stat_doc)
- def max(self, axis=0, skipna=True, level=None):
- """
- Notes
- -----
- This method returns the maximum of the values in the DataFrame. If you
- want the *index* of the maximum, use ``DataFrame.idxmax``. This is the
- equivalent of the ``numpy.ndarray`` method ``argmax``.
-
- See Also
- --------
- DataFrame.idxmax
- Series.idxmax
- """
- if level is not None:
- return self._agg_by_level('max', axis=axis, level=level,
- skipna=skipna)
- return self._reduce(nanops.nanmax, axis=axis, skipna=skipna,
- numeric_only=None)
-
- @Substitution(name='product', shortname='product',
- na_action='NA/null values are treated as 1', extras='')
- @Appender(_stat_doc)
- def prod(self, axis=0, skipna=True, level=None):
- if level is not None:
- return self._agg_by_level('prod', axis=axis, level=level,
- skipna=skipna)
- return self._reduce(nanops.nanprod, axis=axis, skipna=skipna,
- numeric_only=None)
-
- product = prod
-
- @Substitution(name='median', shortname='median', na_action=_doc_exclude_na,
- extras='')
- @Appender(_stat_doc)
- def median(self, axis=0, skipna=True, level=None):
- if level is not None:
- return self._agg_by_level('median', axis=axis, level=level,
- skipna=skipna)
- return self._reduce(nanops.nanmedian, axis=axis, skipna=skipna,
- numeric_only=None)
-
- @Substitution(name='mean absolute deviation', shortname='mad',
- na_action=_doc_exclude_na, extras='')
- @Appender(_stat_doc)
- def mad(self, axis=0, skipna=True, level=None):
- if level is not None:
- return self._agg_by_level('mad', axis=axis, level=level,
- skipna=skipna)
-
- frame = self._get_numeric_data()
-
- axis = self._get_axis_number(axis)
- if axis == 0:
- demeaned = frame - frame.mean(axis=0)
- else:
- demeaned = frame.sub(frame.mean(axis=1), axis=0)
- return np.abs(demeaned).mean(axis=axis, skipna=skipna)
-
- @Substitution(name='variance', shortname='var',
- na_action=_doc_exclude_na, extras='')
- @Appender(_stat_doc +
- """
- Normalized by N-1 (unbiased estimator).
- """)
- def var(self, axis=0, skipna=True, level=None, ddof=1):
- if level is not None:
- return self._agg_by_level('var', axis=axis, level=level,
- skipna=skipna, ddof=ddof)
- return self._reduce(nanops.nanvar, axis=axis, skipna=skipna,
- numeric_only=None, ddof=ddof)
-
- @Substitution(name='standard deviation', shortname='std',
- na_action=_doc_exclude_na, extras='')
- @Appender(_stat_doc +
- """
- Normalized by N-1 (unbiased estimator).
- """)
- def std(self, axis=0, skipna=True, level=None, ddof=1):
- if level is not None:
- return self._agg_by_level('std', axis=axis, level=level,
- skipna=skipna, ddof=ddof)
- return np.sqrt(self.var(axis=axis, skipna=skipna, ddof=ddof))
-
- @Substitution(name='unbiased skewness', shortname='skew',
- na_action=_doc_exclude_na, extras='')
- @Appender(_stat_doc)
- def skew(self, axis=0, skipna=True, level=None):
- if level is not None:
- return self._agg_by_level('skew', axis=axis, level=level,
- skipna=skipna)
- return self._reduce(nanops.nanskew, axis=axis, skipna=skipna,
- numeric_only=None)
-
- @Substitution(name='unbiased kurtosis', shortname='kurt',
- na_action=_doc_exclude_na, extras='')
- @Appender(_stat_doc)
- def kurt(self, axis=0, skipna=True, level=None):
- if level is not None:
- return self._agg_by_level('kurt', axis=axis, level=level,
- skipna=skipna)
- return self._reduce(nanops.nankurt, axis=axis, skipna=skipna,
- numeric_only=None)
-
- def _agg_by_level(self, name, axis=0, level=0, skipna=True, **kwds):
- grouped = self.groupby(level=level, axis=axis)
- if hasattr(grouped, name) and skipna:
- return getattr(grouped, name)(**kwds)
- axis = self._get_axis_number(axis)
- method = getattr(type(self), name)
- applyf = lambda x: method(x, axis=axis, skipna=skipna, **kwds)
- return grouped.aggregate(applyf)
-
def _reduce(self, op, axis=0, skipna=True, numeric_only=None,
filter_type=None, **kwds):
axis = self._get_axis_number(axis)
@@ -4440,7 +4265,7 @@ def combineMult(self, other):
DataFrame._setup_axes(
['index', 'columns'], info_axis=1, stat_axis=0, axes_are_reversed=True)
-
+DataFrame._add_numeric_operations()
_EMPTY_SERIES = Series([])
diff --git a/pandas/core/generic.py b/pandas/core/generic.py
index 705679136c3d2..18a03eb313dd2 100644
--- a/pandas/core/generic.py
+++ b/pandas/core/generic.py
@@ -21,6 +21,8 @@
_values_from_object,
_infer_dtype_from_scalar, _maybe_promote,
ABCSeries)
+import pandas.core.nanops as nanops
+from pandas.util.decorators import Appender, Substitution
def is_dictlike(x):
return isinstance(x, (dict, com.ABCSeries))
@@ -1949,33 +1951,6 @@ def interpolate(self, to_replace, method='pad', axis=0, inplace=False,
#----------------------------------------------------------------------
# Action Methods
- def abs(self):
- """
- Return an object with absolute value taken. Only applicable to objects
- that are all numeric
-
- Returns
- -------
- abs: type of caller
- """
- obj = np.abs(self)
-
-        # suppress numpy 1.6 hacking
- if _np_version_under1p7:
- if self.ndim == 1:
- if obj.dtype == 'm8[us]':
- obj = obj.astype('m8[ns]')
- elif self.ndim == 2:
- def f(x):
- if x.dtype == 'm8[us]':
- x = x.astype('m8[ns]')
- return x
-
- if 'm8[us]' in obj.dtypes.values:
- obj = obj.apply(f)
-
- return obj
-
def clip(self, lower=None, upper=None, out=None):
"""
Trim values at input threshold(s)
@@ -2550,178 +2525,6 @@ def mask(self, cond):
"""
return self.where(~cond, np.nan)
- def pct_change(self, periods=1, fill_method='pad', limit=None, freq=None,
- **kwds):
- """
- Percent change over given number of periods
-
- Parameters
- ----------
- periods : int, default 1
- Periods to shift for forming percent change
- fill_method : str, default 'pad'
- How to handle NAs before computing percent changes
- limit : int, default None
- The number of consecutive NAs to fill before stopping
- freq : DateOffset, timedelta, or offset alias string, optional
- Increment to use from time series API (e.g. 'M' or BDay())
-
- Returns
- -------
- chg : Series or DataFrame
- """
- if fill_method is None:
- data = self
- else:
- data = self.fillna(method=fill_method, limit=limit)
- rs = data / data.shift(periods=periods, freq=freq, **kwds) - 1
- if freq is None:
- mask = com.isnull(_values_from_object(self))
- np.putmask(rs.values, mask, np.nan)
- return rs
-
- def cumsum(self, axis=None, skipna=True):
- """
- Return DataFrame of cumulative sums over requested axis.
-
- Parameters
- ----------
- axis : {0, 1}
- 0 for row-wise, 1 for column-wise
- skipna : boolean, default True
- Exclude NA/null values. If an entire row/column is NA, the result
- will be NA
-
- Returns
- -------
- y : DataFrame
- """
- if axis is None:
- axis = self._stat_axis_number
- else:
- axis = self._get_axis_number(axis)
-
- y = _values_from_object(self).copy()
- if not issubclass(y.dtype.type, np.integer):
- mask = np.isnan(_values_from_object(self))
-
- if skipna:
- np.putmask(y, mask, 0.)
-
- result = y.cumsum(axis)
-
- if skipna:
- np.putmask(result, mask, np.nan)
- else:
- result = y.cumsum(axis)
- return self._wrap_array(result, self.axes, copy=False)
-
- def cumprod(self, axis=None, skipna=True):
- """
- Return cumulative product over requested axis as DataFrame
-
- Parameters
- ----------
- axis : {0, 1}
- 0 for row-wise, 1 for column-wise
- skipna : boolean, default True
- Exclude NA/null values. If an entire row/column is NA, the result
- will be NA
-
- Returns
- -------
- y : DataFrame
- """
- if axis is None:
- axis = self._stat_axis_number
- else:
- axis = self._get_axis_number(axis)
-
- y = _values_from_object(self).copy()
- if not issubclass(y.dtype.type, np.integer):
- mask = np.isnan(_values_from_object(self))
-
- if skipna:
- np.putmask(y, mask, 1.)
- result = y.cumprod(axis)
-
- if skipna:
- np.putmask(result, mask, np.nan)
- else:
- result = y.cumprod(axis)
- return self._wrap_array(result, self.axes, copy=False)
-
- def cummax(self, axis=None, skipna=True):
- """
- Return DataFrame of cumulative max over requested axis.
-
- Parameters
- ----------
- axis : {0, 1}
- 0 for row-wise, 1 for column-wise
- skipna : boolean, default True
- Exclude NA/null values. If an entire row/column is NA, the result
- will be NA
-
- Returns
- -------
- y : DataFrame
- """
- if axis is None:
- axis = self._stat_axis_number
- else:
- axis = self._get_axis_number(axis)
-
- y = _values_from_object(self).copy()
- if not issubclass(y.dtype.type, np.integer):
- mask = np.isnan(_values_from_object(self))
-
- if skipna:
- np.putmask(y, mask, -np.inf)
-
- result = np.maximum.accumulate(y, axis)
-
- if skipna:
- np.putmask(result, mask, np.nan)
- else:
- result = np.maximum.accumulate(y, axis)
- return self._wrap_array(result, self.axes, copy=False)
-
- def cummin(self, axis=None, skipna=True):
- """
- Return DataFrame of cumulative min over requested axis.
-
- Parameters
- ----------
- axis : {0, 1}
- 0 for row-wise, 1 for column-wise
- skipna : boolean, default True
- Exclude NA/null values. If an entire row/column is NA, the result
- will be NA
-
- Returns
- -------
- y : DataFrame
- """
- if axis is None:
- axis = self._stat_axis_number
- else:
- axis = self._get_axis_number(axis)
-
- y = _values_from_object(self).copy()
- if not issubclass(y.dtype.type, np.integer):
- mask = np.isnan(_values_from_object(self))
-
- if skipna:
- np.putmask(y, mask, np.inf)
-
- result = np.minimum.accumulate(y, axis)
-
- if skipna:
- np.putmask(result, mask, np.nan)
- else:
- result = np.minimum.accumulate(y, axis)
- return self._wrap_array(result, self.axes, copy=False)
def shift(self, periods=1, freq=None, axis=0, **kwds):
"""
@@ -2928,6 +2731,240 @@ def tz_localize(self, tz, axis=0, copy=True):
return new_obj
+ #----------------------------------------------------------------------
+ # Numeric Methods
+ def abs(self):
+ """
+ Return an object with absolute value taken. Only applicable to objects
+ that are all numeric
+
+ Returns
+ -------
+ abs: type of caller
+ """
+ obj = np.abs(self)
+
+        # suppress numpy 1.6 hacking
+ if _np_version_under1p7:
+ if self.ndim == 1:
+ if obj.dtype == 'm8[us]':
+ obj = obj.astype('m8[ns]')
+ elif self.ndim == 2:
+ def f(x):
+ if x.dtype == 'm8[us]':
+ x = x.astype('m8[ns]')
+ return x
+
+ if 'm8[us]' in obj.dtypes.values:
+ obj = obj.apply(f)
+
+ return obj
+
+ def pct_change(self, periods=1, fill_method='pad', limit=None, freq=None,
+ **kwds):
+ """
+ Percent change over given number of periods
+
+ Parameters
+ ----------
+ periods : int, default 1
+ Periods to shift for forming percent change
+ fill_method : str, default 'pad'
+ How to handle NAs before computing percent changes
+ limit : int, default None
+ The number of consecutive NAs to fill before stopping
+ freq : DateOffset, timedelta, or offset alias string, optional
+ Increment to use from time series API (e.g. 'M' or BDay())
+
+ Returns
+ -------
+ chg : Series or DataFrame
+ """
+ if fill_method is None:
+ data = self
+ else:
+ data = self.fillna(method=fill_method, limit=limit)
+ rs = data / data.shift(periods=periods, freq=freq, **kwds) - 1
+ if freq is None:
+ mask = com.isnull(_values_from_object(self))
+ np.putmask(rs.values, mask, np.nan)
+ return rs
+
+ def _agg_by_level(self, name, axis=0, level=0, skipna=True, **kwds):
+ grouped = self.groupby(level=level, axis=axis)
+ if hasattr(grouped, name) and skipna:
+ return getattr(grouped, name)(**kwds)
+ axis = self._get_axis_number(axis)
+ method = getattr(type(self), name)
+ applyf = lambda x: method(x, axis=axis, skipna=skipna, **kwds)
+ return grouped.aggregate(applyf)
+
+ @classmethod
+ def _add_numeric_operations(cls):
+ """ add the operations to the cls; evaluate the doc strings again """
+
+ axis_descr = "{" + ', '.join([ "{0} ({1})".format(a,i) for i, a in enumerate(cls._AXIS_ORDERS)]) + "}"
+ name = cls._constructor_sliced.__name__ if cls._AXIS_LEN > 1 else 'scalar'
+ _num_doc = """
+
+%(desc)s
+
+Parameters
+----------
+axis : """ + axis_descr + """
+skipna : boolean, default True
+ Exclude NA/null values. If an entire row/column is NA, the result
+ will be NA
+level : int, default None
+ If the axis is a MultiIndex (hierarchical), count along a
+ particular level, collapsing into a """ + name + """
+numeric_only : boolean, default None
+ Include only float, int, boolean data. If None, will attempt to use
+ everything, then use only numeric data
+
+Returns
+-------
+%(outname)s : """ + name + " or " + cls.__name__ + " (if level specified)\n"
+
+ _cnum_doc = """
+
+Parameters
+----------
+axis : """ + axis_descr + """
+skipna : boolean, default True
+ Exclude NA/null values. If an entire row/column is NA, the result
+ will be NA
+
+Returns
+-------
+%(outname)s : """ + name + "\n"
+
+ def _make_stat_function(name, desc, f):
+
+ @Substitution(outname=name, desc=desc)
+ @Appender(_num_doc)
+ def stat_func(self, axis=None, skipna=None, level=None, numeric_only=None,
+ **kwargs):
+ if skipna is None:
+ skipna = True
+ if axis is None:
+ axis = self._stat_axis_number
+ if level is not None:
+ return self._agg_by_level(name, axis=axis, level=level,
+ skipna=skipna)
+ return self._reduce(f, axis=axis,
+ skipna=skipna, numeric_only=numeric_only)
+ stat_func.__name__ = name
+ return stat_func
+
+ cls.sum = _make_stat_function('sum',"Return the sum of the values for the requested axis", nanops.nansum)
+ cls.mean = _make_stat_function('mean',"Return the mean of the values for the requested axis", nanops.nanmean)
+ cls.skew = _make_stat_function('skew',"Return unbiased skew over requested axis\nNormalized by N-1", nanops.nanskew)
+ cls.kurt = _make_stat_function('kurt',"Return unbiased kurtosis over requested axis\nNormalized by N-1", nanops.nankurt)
+ cls.kurtosis = cls.kurt
+ cls.prod = _make_stat_function('prod',"Return the product of the values for the requested axis", nanops.nanprod)
+ cls.product = cls.prod
+ cls.median = _make_stat_function('median',"Return the median of the values for the requested axis", nanops.nanmedian)
+ cls.max = _make_stat_function('max',"""
+This method returns the maximum of the values in the object. If you
+want the *index* of the maximum, use ``idxmax``. This is the
+equivalent of the ``numpy.ndarray`` method ``argmax``.""", nanops.nanmax)
+ cls.min = _make_stat_function('min',"""
+This method returns the minimum of the values in the object. If you
+want the *index* of the minimum, use ``idxmin``. This is the
+equivalent of the ``numpy.ndarray`` method ``argmin``.""", nanops.nanmin)
+
+ @Substitution(outname='mad', desc="Return the mean absolute deviation of the values for the requested axis")
+ @Appender(_num_doc)
+ def mad(self, axis=None, skipna=None, level=None, **kwargs):
+ if skipna is None:
+ skipna = True
+ if axis is None:
+ axis = self._stat_axis_number
+ if level is not None:
+ return self._agg_by_level('mad', axis=axis, level=level,
+ skipna=skipna)
+
+ data = self._get_numeric_data()
+ if axis == 0:
+ demeaned = data - data.mean(axis=0)
+ else:
+ demeaned = data.sub(data.mean(axis=1), axis=0)
+ return np.abs(demeaned).mean(axis=axis, skipna=skipna)
+ cls.mad = mad
+
+ @Substitution(outname='variance',desc="Return unbiased variance over requested axis\nNormalized by N-1")
+ @Appender(_num_doc)
+ def var(self, axis=None, skipna=None, level=None, ddof=1, **kwargs):
+ if skipna is None:
+ skipna = True
+ if axis is None:
+ axis = self._stat_axis_number
+ if level is not None:
+ return self._agg_by_level('var', axis=axis, level=level,
+ skipna=skipna, ddof=ddof)
+
+ return self._reduce(nanops.nanvar, axis=axis, skipna=skipna, ddof=ddof)
+ cls.var = var
+
+ @Substitution(outname='stdev',desc="Return unbiased standard deviation over requested axis\nNormalized by N-1")
+ @Appender(_num_doc)
+ def std(self, axis=None, skipna=None, level=None, ddof=1, **kwargs):
+ if skipna is None:
+ skipna = True
+ if axis is None:
+ axis = self._stat_axis_number
+ if level is not None:
+ return self._agg_by_level('std', axis=axis, level=level,
+ skipna=skipna, ddof=ddof)
+ result = self.var(axis=axis, skipna=skipna, ddof=ddof)
+ if getattr(result,'ndim',0) > 0:
+ return result.apply(np.sqrt)
+ return np.sqrt(result)
+ cls.std = std
+
+ @Substitution(outname='compounded',desc="Return the compound percentage of the values for the requested axis")
+ @Appender(_num_doc)
+ def compound(self, axis=None, skipna=None, level=None, **kwargs):
+ if skipna is None:
+ skipna = True
+ return (1 + self).prod(axis=axis, skipna=skipna, level=level) - 1
+ cls.compound = compound
+
+ def _make_cum_function(name, accum_func, mask_a, mask_b):
+
+ @Substitution(outname=name)
+ @Appender("Return cumulative {0} over requested axis.".format(name) + _cnum_doc)
+ def func(self, axis=None, dtype=None, out=None, skipna=True, **kwargs):
+ if axis is None:
+ axis = self._stat_axis_number
+ else:
+ axis = self._get_axis_number(axis)
+
+ y = _values_from_object(self).copy()
+ if not issubclass(y.dtype.type, (np.integer,np.bool_)):
+ mask = isnull(self)
+ if skipna:
+ np.putmask(y, mask, mask_a)
+ result = accum_func(y, axis)
+ if skipna:
+ np.putmask(result, mask, mask_b)
+ else:
+ result = accum_func(y, axis)
+
+ d = self._construct_axes_dict()
+ d['copy'] = False
+ return self._constructor(result, **d)._propogate_attributes(self)
+
+ func.__name__ = name
+ return func
+
+
+ cls.cummin = _make_cum_function('min', lambda y, axis: np.minimum.accumulate(y, axis), np.inf, np.nan)
+ cls.cumsum = _make_cum_function('sum', lambda y, axis: y.cumsum(axis), 0., np.nan)
+ cls.cumprod = _make_cum_function('prod', lambda y, axis: y.cumprod(axis), 1., np.nan)
+ cls.cummax = _make_cum_function('max', lambda y, axis: np.maximum.accumulate(y, axis), -np.inf, np.nan)
+
# install the indexers
for _name, _indexer in indexing.get_indexers_list():
NDFrame._create_indexer(_name, _indexer)
diff --git a/pandas/core/panel.py b/pandas/core/panel.py
index 7208ceff7d1a7..f0bad6b796e7c 100644
--- a/pandas/core/panel.py
+++ b/pandas/core/panel.py
@@ -831,10 +831,11 @@ def apply(self, func, axis='major'):
result = np.apply_along_axis(func, i, self.values)
return self._wrap_result(result, axis=axis)
- def _reduce(self, op, axis=0, skipna=True):
+ def _reduce(self, op, axis=0, skipna=True, numeric_only=None,
+ filter_type=None, **kwds):
axis_name = self._get_axis_name(axis)
axis_number = self._get_axis_number(axis_name)
- f = lambda x: op(x, axis=axis_number, skipna=skipna)
+ f = lambda x: op(x, axis=axis_number, skipna=skipna, **kwds)
result = f(self.values)
@@ -1207,89 +1208,11 @@ def f(self, other, axis=0):
return self._combine(other, na_op, axis=axis)
f.__name__ = name
return f
+
# add `div`, `mul`, `pow`, etc..
ops.add_flex_arithmetic_methods(cls, _panel_arith_method,
use_numexpr=use_numexpr,
flex_comp_method=ops._comp_method_PANEL)
- _agg_doc = """
-Return %(desc)s over requested axis
-
-Parameters
-----------
-axis : {""" + ', '.join(cls._AXIS_ORDERS) + "} or {" \
- + ', '.join([str(i) for i in range(cls._AXIS_LEN)]) + """}
-skipna : boolean, default True
- Exclude NA/null values. If an entire row/column is NA, the result
- will be NA
-
-Returns
--------
-%(outname)s : """ + cls._constructor_sliced.__name__ + "\n"
-
- _na_info = """
-
-NA/null values are %s.
-If all values are NA, result will be NA"""
-
- @Substitution(desc='sum', outname='sum')
- @Appender(_agg_doc)
- def sum(self, axis='major', skipna=True):
- return self._reduce(nanops.nansum, axis=axis, skipna=skipna)
- cls.sum = sum
-
- @Substitution(desc='mean', outname='mean')
- @Appender(_agg_doc)
- def mean(self, axis='major', skipna=True):
- return self._reduce(nanops.nanmean, axis=axis, skipna=skipna)
- cls.mean = mean
-
- @Substitution(desc='unbiased variance', outname='variance')
- @Appender(_agg_doc)
- def var(self, axis='major', skipna=True):
- return self._reduce(nanops.nanvar, axis=axis, skipna=skipna)
- cls.var = var
-
- @Substitution(desc='unbiased standard deviation', outname='stdev')
- @Appender(_agg_doc)
- def std(self, axis='major', skipna=True):
- return self.var(axis=axis, skipna=skipna).apply(np.sqrt)
- cls.std = std
-
- @Substitution(desc='unbiased skewness', outname='skew')
- @Appender(_agg_doc)
- def skew(self, axis='major', skipna=True):
- return self._reduce(nanops.nanskew, axis=axis, skipna=skipna)
- cls.skew = skew
-
- @Substitution(desc='product', outname='prod')
- @Appender(_agg_doc)
- def prod(self, axis='major', skipna=True):
- return self._reduce(nanops.nanprod, axis=axis, skipna=skipna)
- cls.prod = prod
-
- @Substitution(desc='compounded percentage', outname='compounded')
- @Appender(_agg_doc)
- def compound(self, axis='major', skipna=True):
- return (1 + self).prod(axis=axis, skipna=skipna) - 1
- cls.compound = compound
-
- @Substitution(desc='median', outname='median')
- @Appender(_agg_doc)
- def median(self, axis='major', skipna=True):
- return self._reduce(nanops.nanmedian, axis=axis, skipna=skipna)
- cls.median = median
-
- @Substitution(desc='maximum', outname='maximum')
- @Appender(_agg_doc)
- def max(self, axis='major', skipna=True):
- return self._reduce(nanops.nanmax, axis=axis, skipna=skipna)
- cls.max = max
-
- @Substitution(desc='minimum', outname='minimum')
- @Appender(_agg_doc)
- def min(self, axis='major', skipna=True):
- return self._reduce(nanops.nanmin, axis=axis, skipna=skipna)
- cls.min = min
Panel._setup_axes(axes=['items', 'major_axis', 'minor_axis'],
info_axis=0,
@@ -1301,6 +1224,7 @@ def min(self, axis='major', skipna=True):
ops.add_special_arithmetic_methods(Panel, **ops.panel_special_funcs)
Panel._add_aggregate_operations()
+Panel._add_numeric_operations()
WidePanel = Panel
LongPanel = DataFrame
diff --git a/pandas/core/panelnd.py b/pandas/core/panelnd.py
index 8f427568a4102..9ccce1edc9067 100644
--- a/pandas/core/panelnd.py
+++ b/pandas/core/panelnd.py
@@ -108,5 +108,6 @@ def func(self, *args, **kwargs):
# add the aggregate operations
klass._add_aggregate_operations()
+ klass._add_numeric_operations()
return klass
diff --git a/pandas/core/series.py b/pandas/core/series.py
index 38e22e7a9ed3a..90d535e51580c 100644
--- a/pandas/core/series.py
+++ b/pandas/core/series.py
@@ -77,40 +77,6 @@ def f(self, *args, **kwargs):
f.__name__ = func.__name__
return f
-_stat_doc = """
-Return %(name)s of values
-%(na_action)s
-
-Parameters
-----------
-skipna : boolean, default True
- Exclude NA/null values
-level : int, default None
- If the axis is a MultiIndex (hierarchical), count along a
- particular level, collapsing into a smaller Series
-%(extras)s
-Returns
--------
-%(shortname)s : float (or Series if level specified)
-"""
-_doc_exclude_na = "NA/null values are excluded"
-_doc_ndarray_interface = ("Extra parameters are to preserve ndarray"
- "interface.\n")
-
-
-def _make_stat_func(nanop, name, shortname, na_action=_doc_exclude_na,
- extras=_doc_ndarray_interface):
-
- @Substitution(name=name, shortname=shortname,
- na_action=na_action, extras=extras)
- @Appender(_stat_doc)
- def f(self, axis=0, dtype=None, out=None, skipna=True, level=None):
- if level is not None:
- return self._agg_by_level(shortname, level=level, skipna=skipna)
- return nanop(_values_from_object(self), skipna=skipna)
- f.__name__ = shortname
- return f
-
#----------------------------------------------------------------------
# Series class
@@ -1194,113 +1160,6 @@ def duplicated(self, take_last=False):
duplicated = lib.duplicated(keys, take_last=take_last)
return self._constructor(duplicated, index=self.index, name=self.name)
- sum = _make_stat_func(nanops.nansum, 'sum', 'sum')
- mean = _make_stat_func(nanops.nanmean, 'mean', 'mean')
- median = _make_stat_func(nanops.nanmedian, 'median', 'median', extras='')
- prod = _make_stat_func(nanops.nanprod, 'product', 'prod', extras='')
-
- @Substitution(name='mean absolute deviation', shortname='mad',
- na_action=_doc_exclude_na, extras='')
- @Appender(_stat_doc)
- def mad(self, skipna=True, level=None):
- if level is not None:
- return self._agg_by_level('mad', level=level, skipna=skipna)
-
- demeaned = self - self.mean(skipna=skipna)
- return np.abs(demeaned).mean(skipna=skipna)
-
- @Substitution(name='minimum', shortname='min',
- na_action=_doc_exclude_na, extras='')
- @Appender(_stat_doc)
- def min(self, axis=None, out=None, skipna=True, level=None):
- """
- Notes
- -----
- This method returns the minimum of the values in the Series. If you
- want the *index* of the minimum, use ``Series.idxmin``. This is the
- equivalent of the ``numpy.ndarray`` method ``argmin``.
-
- See Also
- --------
- Series.idxmin
- DataFrame.idxmin
- """
- if level is not None:
- return self._agg_by_level('min', level=level, skipna=skipna)
- return nanops.nanmin(_values_from_object(self), skipna=skipna)
-
- @Substitution(name='maximum', shortname='max',
- na_action=_doc_exclude_na, extras='')
- @Appender(_stat_doc)
- def max(self, axis=None, out=None, skipna=True, level=None):
- """
- Notes
- -----
- This method returns the maximum of the values in the Series. If you
- want the *index* of the maximum, use ``Series.idxmax``. This is the
- equivalent of the ``numpy.ndarray`` method ``argmax``.
-
- See Also
- --------
- Series.idxmax
- DataFrame.idxmax
- """
- if level is not None:
- return self._agg_by_level('max', level=level, skipna=skipna)
- return nanops.nanmax(_values_from_object(self), skipna=skipna)
-
- @Substitution(name='standard deviation', shortname='stdev',
- na_action=_doc_exclude_na, extras='')
- @Appender(_stat_doc +
- """
- Normalized by N-1 (unbiased estimator).
- """)
- def std(self, axis=None, dtype=None, out=None, ddof=1, skipna=True,
- level=None):
- if level is not None:
- return self._agg_by_level('std', level=level, skipna=skipna,
- ddof=ddof)
- return np.sqrt(nanops.nanvar(_values_from_object(self), skipna=skipna, ddof=ddof))
-
- @Substitution(name='variance', shortname='var',
- na_action=_doc_exclude_na, extras='')
- @Appender(_stat_doc +
- """
- Normalized by N-1 (unbiased estimator).
- """)
- def var(self, axis=None, dtype=None, out=None, ddof=1, skipna=True,
- level=None):
- if level is not None:
- return self._agg_by_level('var', level=level, skipna=skipna,
- ddof=ddof)
- return nanops.nanvar(_values_from_object(self), skipna=skipna, ddof=ddof)
-
- @Substitution(name='unbiased skewness', shortname='skew',
- na_action=_doc_exclude_na, extras='')
- @Appender(_stat_doc)
- def skew(self, skipna=True, level=None):
- if level is not None:
- return self._agg_by_level('skew', level=level, skipna=skipna)
-
- return nanops.nanskew(_values_from_object(self), skipna=skipna)
-
- @Substitution(name='unbiased kurtosis', shortname='kurt',
- na_action=_doc_exclude_na, extras='')
- @Appender(_stat_doc)
- def kurt(self, skipna=True, level=None):
- if level is not None:
- return self._agg_by_level('kurt', level=level, skipna=skipna)
-
- return nanops.nankurt(_values_from_object(self), skipna=skipna)
-
- def _agg_by_level(self, name, level=0, skipna=True, **kwds):
- grouped = self.groupby(level=level)
- if hasattr(grouped, name) and skipna:
- return getattr(grouped, name)(**kwds)
- method = getattr(type(self), name)
- applyf = lambda x: method(x, skipna=skipna, **kwds)
- return grouped.aggregate(applyf)
-
def idxmin(self, axis=None, out=None, skipna=True):
"""
Index of first occurrence of minimum of values.
@@ -1357,124 +1216,6 @@ def idxmax(self, axis=None, out=None, skipna=True):
argmin = idxmin
argmax = idxmax
- def cumsum(self, axis=0, dtype=None, out=None, skipna=True):
- """
- Cumulative sum of values. Preserves locations of NaN values
-
- Extra parameters are to preserve ndarray interface.
-
- Parameters
- ----------
- skipna : boolean, default True
- Exclude NA/null values
-
- Returns
- -------
- cumsum : Series
- """
- arr = _values_from_object(self).copy()
-
- do_mask = skipna and not issubclass(self.dtype.type,
- (np.integer, np.bool_))
- if do_mask:
- mask = isnull(arr)
- np.putmask(arr, mask, 0.)
-
- result = arr.cumsum()
-
- if do_mask:
- np.putmask(result, mask, pa.NA)
-
- return self._constructor(result, index=self.index, name=self.name)
-
- def cumprod(self, axis=0, dtype=None, out=None, skipna=True):
- """
- Cumulative product of values. Preserves locations of NaN values
-
- Extra parameters are to preserve ndarray interface.
-
- Parameters
- ----------
- skipna : boolean, default True
- Exclude NA/null values
-
- Returns
- -------
- cumprod : Series
- """
- arr = _values_from_object(self).copy()
-
- do_mask = skipna and not issubclass(self.dtype.type,
- (np.integer, np.bool_))
- if do_mask:
- mask = isnull(arr)
- np.putmask(arr, mask, 1.)
-
- result = arr.cumprod()
-
- if do_mask:
- np.putmask(result, mask, pa.NA)
-
- return self._constructor(result, index=self.index, name=self.name)
-
- def cummax(self, axis=0, dtype=None, out=None, skipna=True):
- """
- Cumulative max of values. Preserves locations of NaN values
-
- Extra parameters are to preserve ndarray interface.
-
- Parameters
- ----------
- skipna : boolean, default True
- Exclude NA/null values
-
- Returns
- -------
- cummax : Series
- """
- arr = _values_from_object(self).copy()
-
- do_mask = skipna and not issubclass(self.dtype.type, np.integer)
- if do_mask:
- mask = isnull(arr)
- np.putmask(arr, mask, -np.inf)
-
- result = np.maximum.accumulate(arr)
-
- if do_mask:
- np.putmask(result, mask, pa.NA)
-
- return self._constructor(result, index=self.index, name=self.name)
-
- def cummin(self, axis=0, dtype=None, out=None, skipna=True):
- """
- Cumulative min of values. Preserves locations of NaN values
-
- Extra parameters are to preserve ndarray interface.
-
- Parameters
- ----------
- skipna : boolean, default True
- Exclude NA/null values
-
- Returns
- -------
- cummin : Series
- """
- arr = _values_from_object(self).copy()
-
- do_mask = skipna and not issubclass(self.dtype.type, np.integer)
- if do_mask:
- mask = isnull(arr)
- np.putmask(arr, mask, np.inf)
-
- result = np.minimum.accumulate(arr)
-
- if do_mask:
- np.putmask(result, mask, pa.NA)
-
- return self._constructor(result, index=self.index, name=self.name)
-
@Appender(pa.Array.round.__doc__)
def round(self, decimals=0, out=None):
"""
@@ -2208,6 +1949,11 @@ def apply(self, func, convert_dtype=True, args=(), **kwds):
else:
return self._constructor(mapped, index=self.index, name=self.name)
+ def _reduce(self, op, axis=0, skipna=True, numeric_only=None,
+ filter_type=None, **kwds):
+ """ perform a reduction operation """
+ return op(_values_from_object(self), skipna=skipna, **kwds)
+
def _reindex_indexer(self, new_index, indexer, copy):
if indexer is None:
if copy:
@@ -2647,7 +2393,8 @@ def to_period(self, freq=None, copy=True):
new_index = self.index.to_period(freq=freq)
return self._constructor(new_values, index=new_index, name=self.name)
-Series._setup_axes(['index'], info_axis=0)
+Series._setup_axes(['index'], info_axis=0, stat_axis=0)
+Series._add_numeric_operations()
_INDEX_TYPES = ndarray, Index, list, tuple
# reinstall the SeriesIndexer
diff --git a/pandas/tests/test_frame.py b/pandas/tests/test_frame.py
index a41072d97ddc3..fd37717e73ba0 100644
--- a/pandas/tests/test_frame.py
+++ b/pandas/tests/test_frame.py
@@ -17,7 +17,7 @@
map, zip, range, long, lrange, lmap, lzip,
OrderedDict, cPickle as pickle, u, StringIO
)
-from pandas import compat
+from pandas import compat, _np_version_under1p7
from numpy import random, nan
from numpy.random import randn, rand
diff --git a/pandas/tests/test_generic.py b/pandas/tests/test_generic.py
index 6ea58ec997e23..7f50cb2453a21 100644
--- a/pandas/tests/test_generic.py
+++ b/pandas/tests/test_generic.py
@@ -9,7 +9,7 @@
import pandas as pd
from pandas import (Index, Series, DataFrame, Panel,
- isnull, notnull,date_range)
+ isnull, notnull,date_range, _np_version_under1p7)
from pandas.core.index import Index, MultiIndex
from pandas.tseries.index import Timestamp, DatetimeIndex
@@ -118,6 +118,7 @@ def test_get_numeric_data(self):
self._compare(result, o)
    # _get_numeric_data includes _get_bool_data, so can't test for non-inclusion
+
def test_nonzero(self):
# GH 4633
@@ -154,6 +155,20 @@ def f():
self.assertRaises(ValueError, lambda : obj1 or obj2)
self.assertRaises(ValueError, lambda : not obj1)
+ def test_numpy_1_7_compat_numeric_methods(self):
+ if _np_version_under1p7:
+ raise nose.SkipTest("numpy < 1.7")
+
+ # GH 4435
+    # numpy in 1.7 tries to pass additional arguments to pandas functions
+
+ o = self._construct(shape=4)
+ for op in ['min','max','max','var','std','prod','sum','cumsum','cumprod',
+ 'median','skew','kurt','compound','cummax','cummin','all','any']:
+ f = getattr(np,op,None)
+ if f is not None:
+ f(o)
+
class TestSeries(unittest.TestCase, Generic):
_typ = Series
_comparator = lambda self, x, y: assert_series_equal(x,y)
diff --git a/pandas/tests/test_groupby.py b/pandas/tests/test_groupby.py
index 46ab0fe022e78..fec6460ea31f3 100644
--- a/pandas/tests/test_groupby.py
+++ b/pandas/tests/test_groupby.py
@@ -18,7 +18,7 @@
from pandas.compat import(
range, long, lrange, StringIO, lmap, lzip, map, zip, builtins, OrderedDict
)
-from pandas import compat
+from pandas import compat, _np_version_under1p7
from pandas.core.panel import Panel
from pandas.tools.merge import concat
from collections import defaultdict
| related #4787 (keywords now pass thru to _reduce), default for `numeric` is still `None`
closes #4435
BUG: provide numpy compatibility with 1.7 when implicitly using `__array__`, IOW:
`np.prod(s)` or `np.mean(df, axis=1)` will now work (because numpy _decided_ to try to see if the
object has this function and then pass all kinds of keywords to it, so pandas gets the right function
called, with `dtype` and `out` passed in and currently ignored).
CLN: refactored all numeric and stat-like functions (sum/mean/mad/min/max) and cumulative functions
(cummin/cumsum/..) to `core/generic.py` from Series/DataFrame/Panel
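A minimal illustration of the compatibility behaviour described above (a standalone sketch; the delegation shown is numpy's module-level behaviour, the Series/DataFrame values are made up for the example):

```python
import numpy as np
import pandas as pd

# numpy's module-level reductions delegate to the argument's own method,
# forwarding extra keywords such as dtype= and out=; the pandas reductions
# now accept (and ignore) those keywords instead of raising.
s = pd.Series([1.0, 2.0, 3.0])
print(np.prod(s))            # delegates to s.prod(...) -> 6.0

df = pd.DataFrame({'a': [1.0, 2.0], 'b': [3.0, 4.0]})
print(np.mean(df, axis=1))   # delegates to df.mean(axis=1), a row-wise mean
```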
| https://api.github.com/repos/pandas-dev/pandas/pulls/5034 | 2013-09-29T02:25:49Z | 2013-09-29T19:20:49Z | 2013-09-29T19:20:49Z | 2014-06-24T03:57:23Z |
ENH: Excel output in non-ascii encodings | diff --git a/pandas/core/frame.py b/pandas/core/frame.py
index 01e0d74ef8ce6..a32d2d793b331 100644
--- a/pandas/core/frame.py
+++ b/pandas/core/frame.py
@@ -1351,7 +1351,7 @@ def to_csv(self, path_or_buf, sep=",", na_rep='', float_format=None,
def to_excel(self, excel_writer, sheet_name='Sheet1', na_rep='',
float_format=None, cols=None, header=True, index=True,
- index_label=None, startrow=0, startcol=0, engine=None):
+ index_label=None, startrow=0, startcol=0, engine=None, encoding=None):
"""
Write DataFrame to a excel sheet
@@ -1359,7 +1359,7 @@ def to_excel(self, excel_writer, sheet_name='Sheet1', na_rep='',
----------
excel_writer : string or ExcelWriter object
File path or existing ExcelWriter
- sheet_name : string, default 'Sheet1'
+ sheet_name : string, default 'Sheet1'
Name of sheet which will contain DataFrame
na_rep : string, default ''
Missing data representation
@@ -1382,6 +1382,9 @@ def to_excel(self, excel_writer, sheet_name='Sheet1', na_rep='',
write engine to use - you can also set this via the options
``io.excel.xlsx.writer``, ``io.excel.xls.writer``, and
``io.excel.xlsm.writer``.
+    encoding : string, default None
+ encoding of the resulting excel file. Only necessary for xlwt,
+ other writers support unicode natively.
Notes
@@ -1396,6 +1399,9 @@ def to_excel(self, excel_writer, sheet_name='Sheet1', na_rep='',
"""
from pandas.io.excel import ExcelWriter
need_save = False
+ if encoding == None:
+ encoding = 'ascii'
+
if isinstance(excel_writer, compat.string_types):
excel_writer = ExcelWriter(excel_writer, engine=engine)
need_save = True
diff --git a/pandas/io/excel.py b/pandas/io/excel.py
index 6b83fada19001..bfdc4b4415b70 100644
--- a/pandas/io/excel.py
+++ b/pandas/io/excel.py
@@ -538,13 +538,16 @@ class _XlwtWriter(ExcelWriter):
engine = 'xlwt'
supported_extensions = ('.xls',)
- def __init__(self, path, **engine_kwargs):
+ def __init__(self, path, encoding=None,**engine_kwargs):
# Use the xlwt module as the Excel writer.
import xlwt
super(_XlwtWriter, self).__init__(path, **engine_kwargs)
- self.book = xlwt.Workbook()
+ if encoding is None:
+ encoding = 'ascii'
+ self.book = xlwt.Workbook(encoding=encoding)
+
self.fm_datetime = xlwt.easyxf(num_format_str='YYYY-MM-DD HH:MM:SS')
self.fm_date = xlwt.easyxf(num_format_str='YYYY-MM-DD')
diff --git a/pandas/io/tests/test_excel.py b/pandas/io/tests/test_excel.py
index 0c6332205ffe5..c44745abfa5ae 100644
--- a/pandas/io/tests/test_excel.py
+++ b/pandas/io/tests/test_excel.py
@@ -673,6 +673,20 @@ def test_to_excel_float_format(self):
index=['A', 'B'], columns=['X', 'Y', 'Z'])
tm.assert_frame_equal(rs, xp)
+ def test_to_excel_output_encoding(self):
+ _skip_if_no_xlrd()
+ ext = self.ext
+ filename = '__tmp_to_excel_float_format__.' + ext
+ df = DataFrame([[u('\u0192'), u('\u0193'), u('\u0194')],
+ [u('\u0195'), u('\u0196'), u('\u0197')]],
+ index=[u('A\u0192'), 'B'], columns=[u('X\u0193'), 'Y', 'Z'])
+
+ with ensure_clean(filename) as filename:
+ df.to_excel(filename, sheet_name='TestSheet', encoding='utf8')
+ result = read_excel(filename, 'TestSheet', encoding='utf8')
+ tm.assert_frame_equal(result, df)
+
+
def test_to_excel_unicode_filename(self):
_skip_if_no_xlrd()
ext = self.ext
| ENH/TST: Support for non-ascii encodings in DataFrame.to_excel
Closes #3710.
Note: although (to my modest knowledge of Python) it would be simpler to just put encoding='ascii' directly in the signature of to_excel at line 1352 of frame.py, I've declared it as encoding=None as jreback suggested, and then default it to 'ascii' later with the if clause.
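A minimal sketch of that defaulting behaviour (the `resolve_encoding` helper is hypothetical, purely for illustration; the actual PR inlines the check in `to_excel` and in the `_XlwtWriter` constructor):

```python
def resolve_encoding(encoding=None):
    # Mirrors the PR's if clause: fall back to 'ascii' when no encoding
    # is given, since xlwt needs an explicit codec while other writers
    # handle unicode natively.
    return 'ascii' if encoding is None else encoding

# With this PR applied (and xlwt installed), a frame holding non-ascii
# cells could then be written as:
#     df.to_excel('unicode.xls', encoding='utf8')
print(resolve_encoding())        # falls back to 'ascii'
print(resolve_encoding('utf8'))  # explicit encoding is kept
```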
| https://api.github.com/repos/pandas-dev/pandas/pulls/5025 | 2013-09-28T21:13:11Z | 2014-03-12T13:00:58Z | null | 2014-06-27T23:28:48Z |
CLN/ENH: Provide full suite of arithmetic (and flex) methods to all NDFrame objects. | diff --git a/doc/source/api.rst b/doc/source/api.rst
index 8dcf9c0f52de4..f74f5f0d28a58 100644
--- a/doc/source/api.rst
+++ b/doc/source/api.rst
@@ -275,12 +275,30 @@ Binary operator functions
:toctree: generated/
Series.add
- Series.div
- Series.mul
Series.sub
+ Series.mul
+ Series.div
+ Series.truediv
+ Series.floordiv
+ Series.mod
+ Series.pow
+ Series.radd
+ Series.rsub
+ Series.rmul
+ Series.rdiv
+ Series.rtruediv
+ Series.rfloordiv
+ Series.rmod
+ Series.rpow
Series.combine
Series.combine_first
Series.round
+ Series.lt
+ Series.gt
+ Series.le
+ Series.ge
+ Series.ne
+ Series.eq
Function application, GroupBy
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
@@ -480,13 +498,27 @@ Binary operator functions
:toctree: generated/
DataFrame.add
- DataFrame.div
- DataFrame.mul
DataFrame.sub
+ DataFrame.mul
+ DataFrame.div
+ DataFrame.truediv
+ DataFrame.floordiv
+ DataFrame.mod
+ DataFrame.pow
DataFrame.radd
- DataFrame.rdiv
- DataFrame.rmul
DataFrame.rsub
+ DataFrame.rmul
+ DataFrame.rdiv
+ DataFrame.rtruediv
+ DataFrame.rfloordiv
+ DataFrame.rmod
+ DataFrame.rpow
+ DataFrame.lt
+ DataFrame.gt
+ DataFrame.le
+ DataFrame.ge
+ DataFrame.ne
+ DataFrame.eq
DataFrame.combine
DataFrame.combineAdd
DataFrame.combine_first
@@ -710,9 +742,27 @@ Binary operator functions
:toctree: generated/
Panel.add
- Panel.div
- Panel.mul
Panel.sub
+ Panel.mul
+ Panel.div
+ Panel.truediv
+ Panel.floordiv
+ Panel.mod
+ Panel.pow
+ Panel.radd
+ Panel.rsub
+ Panel.rmul
+ Panel.rdiv
+ Panel.rtruediv
+ Panel.rfloordiv
+ Panel.rmod
+ Panel.rpow
+ Panel.lt
+ Panel.gt
+ Panel.le
+ Panel.ge
+ Panel.ne
+ Panel.eq
Function application, GroupBy
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
diff --git a/doc/source/release.rst b/doc/source/release.rst
index 1f0e447429d6a..73e7e3affd944 100644
--- a/doc/source/release.rst
+++ b/doc/source/release.rst
@@ -263,6 +263,10 @@ API Changes
- Begin removing methods that don't make sense on ``GroupBy`` objects
(:issue:`4887`).
- Remove deprecated ``read_clipboard/to_clipboard/ExcelFile/ExcelWriter`` from ``pandas.io.parsers`` (:issue:`3717`)
+ - All non-Index NDFrames (``Series``, ``DataFrame``, ``Panel``, ``Panel4D``,
+ ``SparsePanel``, etc.), now support the entire set of arithmetic operators
+ and arithmetic flex methods (add, sub, mul, etc.). ``SparsePanel`` does not
+ support ``pow`` or ``mod`` with non-scalars. (:issue:`3765`)
Internal Refactoring
~~~~~~~~~~~~~~~~~~~~
diff --git a/doc/source/v0.13.0.txt b/doc/source/v0.13.0.txt
index c7f80a49b9b6c..0796f34ead839 100644
--- a/doc/source/v0.13.0.txt
+++ b/doc/source/v0.13.0.txt
@@ -68,6 +68,11 @@ API changes
df1 and df2
s1 and s2
+ - All non-Index NDFrames (``Series``, ``DataFrame``, ``Panel``, ``Panel4D``,
+ ``SparsePanel``, etc.), now support the entire set of arithmetic operators
+ and arithmetic flex methods (add, sub, mul, etc.). ``SparsePanel`` does not
+ support ``pow`` or ``mod`` with non-scalars. (:issue:`3765`)
+
Prior Version Deprecations/Changes
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
diff --git a/pandas/computation/expressions.py b/pandas/computation/expressions.py
index 45c9a2d5259cb..3c1fb091ab823 100644
--- a/pandas/computation/expressions.py
+++ b/pandas/computation/expressions.py
@@ -15,6 +15,8 @@
except ImportError: # pragma: no cover
_NUMEXPR_INSTALLED = False
+_TEST_MODE = None
+_TEST_RESULT = None
_USE_NUMEXPR = _NUMEXPR_INSTALLED
_evaluate = None
_where = None
@@ -55,9 +57,10 @@ def set_numexpr_threads(n=None):
def _evaluate_standard(op, op_str, a, b, raise_on_error=True, **eval_kwargs):
""" standard evaluation """
+ if _TEST_MODE:
+ _store_test_result(False)
return op(a, b)
-
def _can_use_numexpr(op, op_str, a, b, dtype_check):
""" return a boolean if we WILL be using numexpr """
if op_str is not None:
@@ -88,11 +91,8 @@ def _evaluate_numexpr(op, op_str, a, b, raise_on_error=False, **eval_kwargs):
if _can_use_numexpr(op, op_str, a, b, 'evaluate'):
try:
- a_value, b_value = a, b
- if hasattr(a_value, 'values'):
- a_value = a_value.values
- if hasattr(b_value, 'values'):
- b_value = b_value.values
+ a_value = getattr(a, "values", a)
+ b_value = getattr(b, "values", b)
result = ne.evaluate('a_value %s b_value' % op_str,
local_dict={'a_value': a_value,
'b_value': b_value},
@@ -104,6 +104,9 @@ def _evaluate_numexpr(op, op_str, a, b, raise_on_error=False, **eval_kwargs):
if raise_on_error:
raise
+ if _TEST_MODE:
+ _store_test_result(result is not None)
+
if result is None:
result = _evaluate_standard(op, op_str, a, b, raise_on_error)
@@ -119,13 +122,9 @@ def _where_numexpr(cond, a, b, raise_on_error=False):
if _can_use_numexpr(None, 'where', a, b, 'where'):
try:
- cond_value, a_value, b_value = cond, a, b
- if hasattr(cond_value, 'values'):
- cond_value = cond_value.values
- if hasattr(a_value, 'values'):
- a_value = a_value.values
- if hasattr(b_value, 'values'):
- b_value = b_value.values
+ cond_value = getattr(cond, 'values', cond)
+ a_value = getattr(a, 'values', a)
+ b_value = getattr(b, 'values', b)
result = ne.evaluate('where(cond_value, a_value, b_value)',
local_dict={'cond_value': cond_value,
'a_value': a_value,
@@ -189,3 +188,28 @@ def where(cond, a, b, raise_on_error=False, use_numexpr=True):
if use_numexpr:
return _where(cond, a, b, raise_on_error=raise_on_error)
return _where_standard(cond, a, b, raise_on_error=raise_on_error)
+
+
+def set_test_mode(v=True):
+ """
+ Keeps track of whether numexpr was used. Stores an additional ``True`` for
+ every successful use of evaluate with numexpr since the last
+ ``get_test_result``
+ """
+ global _TEST_MODE, _TEST_RESULT
+ _TEST_MODE = v
+ _TEST_RESULT = []
+
+
+def _store_test_result(used_numexpr):
+ global _TEST_RESULT
+ if used_numexpr:
+ _TEST_RESULT.append(used_numexpr)
+
+
+def get_test_result():
+ """get test result and reset test_results"""
+ global _TEST_RESULT
+ res = _TEST_RESULT
+ _TEST_RESULT = []
+ return res
diff --git a/pandas/computation/tests/test_eval.py b/pandas/computation/tests/test_eval.py
index e9201c233753f..aa5c0cc5d50f6 100644
--- a/pandas/computation/tests/test_eval.py
+++ b/pandas/computation/tests/test_eval.py
@@ -2,9 +2,7 @@
import unittest
import functools
-import numbers
from itertools import product
-import ast
import nose
from nose.tools import assert_raises, assert_true, assert_false, assert_equal
@@ -250,12 +248,6 @@ def check_complex_cmp_op(self, lhs, cmp1, rhs, binop, cmp2):
not np.isscalar(rhs_new) and binop in skip_these):
with tm.assertRaises(TypeError):
_eval_single_bin(lhs_new, binop, rhs_new, self.engine)
- elif _bool_and_frame(lhs_new, rhs_new):
- with tm.assertRaises(TypeError):
- _eval_single_bin(lhs_new, binop, rhs_new, self.engine)
- with tm.assertRaises(TypeError):
- pd.eval('lhs_new & rhs_new'.format(binop),
- engine=self.engine, parser=self.parser)
else:
expected = _eval_single_bin(lhs_new, binop, rhs_new, self.engine)
result = pd.eval(ex, engine=self.engine, parser=self.parser)
@@ -301,28 +293,15 @@ def check_operands(left, right, cmp_op):
rhs_new = check_operands(mid, rhs, cmp2)
if lhs_new is not None and rhs_new is not None:
- # these are not compatible operands
- if isinstance(lhs_new, Series) and isinstance(rhs_new, DataFrame):
- self.assertRaises(TypeError, _eval_single_bin, lhs_new, '&',
- rhs_new, self.engine)
- elif (_bool_and_frame(lhs_new, rhs_new)):
- self.assertRaises(TypeError, _eval_single_bin, lhs_new, '&',
- rhs_new, self.engine)
- elif _series_and_2d_ndarray(lhs_new, rhs_new):
- # TODO: once #4319 is fixed add this test back in
- #self.assertRaises(Exception, _eval_single_bin, lhs_new, '&',
- #rhs_new, self.engine)
- pass
- else:
- ex1 = 'lhs {0} mid {1} rhs'.format(cmp1, cmp2)
- ex2 = 'lhs {0} mid and mid {1} rhs'.format(cmp1, cmp2)
- ex3 = '(lhs {0} mid) & (mid {1} rhs)'.format(cmp1, cmp2)
- expected = _eval_single_bin(lhs_new, '&', rhs_new, self.engine)
-
- for ex in (ex1, ex2, ex3):
- result = pd.eval(ex, engine=self.engine,
- parser=self.parser)
- assert_array_equal(result, expected)
+ ex1 = 'lhs {0} mid {1} rhs'.format(cmp1, cmp2)
+ ex2 = 'lhs {0} mid and mid {1} rhs'.format(cmp1, cmp2)
+ ex3 = '(lhs {0} mid) & (mid {1} rhs)'.format(cmp1, cmp2)
+ expected = _eval_single_bin(lhs_new, '&', rhs_new, self.engine)
+
+ for ex in (ex1, ex2, ex3):
+ result = pd.eval(ex, engine=self.engine,
+ parser=self.parser)
+ assert_array_equal(result, expected)
@skip_incompatible_operand
def check_simple_cmp_op(self, lhs, cmp1, rhs):
diff --git a/pandas/core/common.py b/pandas/core/common.py
index d3fa10abc7681..2c5ca42c7be86 100644
--- a/pandas/core/common.py
+++ b/pandas/core/common.py
@@ -8,6 +8,7 @@
import codecs
import csv
import sys
+import types
from datetime import timedelta
@@ -27,6 +28,7 @@
from pandas.core.config import get_option
from pandas.core import array as pa
+
class PandasError(Exception):
pass
@@ -74,6 +76,31 @@ def __instancecheck__(cls, inst):
ABCGeneric = _ABCGeneric("ABCGeneric", tuple(), {})
+
+def bind_method(cls, name, func):
+ """Bind a method to class, python 2 and python 3 compatible.
+
+ Parameters
+ ----------
+
+ cls : type
+ class to receive bound method
+ name : basestring
+ name of method on class instance
+ func : function
+ function to be bound as method
+
+
+ Returns
+ -------
+ None
+ """
+ # only python 2 has bound/unbound method issue
+ if not compat.PY3:
+ setattr(cls, name, types.MethodType(func, None, cls))
+ else:
+ setattr(cls, name, func)
+
def isnull(obj):
"""Detect missing values (NaN in numeric arrays, None/NaN in object arrays)
@@ -360,10 +387,10 @@ def _take_2d_multi_generic(arr, indexer, out, fill_value, mask_info):
if col_needs:
out[:, col_mask] = fill_value
for i in range(len(row_idx)):
- u = row_idx[i]
+ u_ = row_idx[i]
for j in range(len(col_idx)):
v = col_idx[j]
- out[i, j] = arr[u, v]
+ out[i, j] = arr[u_, v]
def _take_nd_generic(arr, indexer, out, axis, fill_value, mask_info):
@@ -2348,3 +2375,10 @@ def save(obj, path): # TODO remove in 0.13
warnings.warn("save is deprecated, use obj.to_pickle", FutureWarning)
from pandas.io.pickle import to_pickle
return to_pickle(obj, path)
+
+
+def _maybe_match_name(a, b):
+ name = None
+ if a.name == b.name:
+ name = a.name
+ return name
diff --git a/pandas/core/frame.py b/pandas/core/frame.py
index 01e0d74ef8ce6..c6727f91644fc 100644
--- a/pandas/core/frame.py
+++ b/pandas/core/frame.py
@@ -34,7 +34,7 @@
from pandas.core.internals import (BlockManager,
create_block_manager_from_arrays,
create_block_manager_from_blocks)
-from pandas.core.series import Series, _radd_compat
+from pandas.core.series import Series
import pandas.computation.expressions as expressions
from pandas.computation.eval import eval as _eval
from pandas.computation.expr import _ensure_scope
@@ -42,7 +42,6 @@
from pandas.compat import(range, zip, lrange, lmap, lzip, StringIO, u,
OrderedDict, raise_with_traceback)
from pandas import compat
-from pandas.util.terminal import get_terminal_size
from pandas.util.decorators import deprecate, Appender, Substitution
from pandas.tseries.period import PeriodIndex
@@ -53,6 +52,7 @@
import pandas.core.common as com
import pandas.core.format as fmt
import pandas.core.nanops as nanops
+import pandas.core.ops as ops
import pandas.lib as lib
import pandas.algos as _algos
@@ -62,31 +62,6 @@
#----------------------------------------------------------------------
# Docstring templates
-_arith_doc = """
-Binary operator %s with support to substitute a fill_value for missing data in
-one of the inputs
-
-Parameters
-----------
-other : Series, DataFrame, or constant
-axis : {0, 1, 'index', 'columns'}
- For Series input, axis to match Series index on
-fill_value : None or float value, default None
- Fill missing (NaN) values with this value. If both DataFrame locations are
- missing, the result will be missing
-level : int or name
- Broadcast across a level, matching Index values on the
- passed MultiIndex level
-
-Notes
------
-Mismatched indices will be unioned together
-
-Returns
--------
-result : DataFrame
-"""
-
_stat_doc = """
Return %(name)s over requested axis.
@@ -181,153 +156,6 @@
merged : DataFrame
"""
-#----------------------------------------------------------------------
-# Factory helper methods
-
-
-def _arith_method(op, name, str_rep=None, default_axis='columns', fill_zeros=None, **eval_kwargs):
- def na_op(x, y):
- try:
- result = expressions.evaluate(
- op, str_rep, x, y, raise_on_error=True, **eval_kwargs)
- result = com._fill_zeros(result, y, fill_zeros)
-
- except TypeError:
- xrav = x.ravel()
- result = np.empty(x.size, dtype=x.dtype)
- if isinstance(y, (np.ndarray, Series)):
- yrav = y.ravel()
- mask = notnull(xrav) & notnull(yrav)
- result[mask] = op(xrav[mask], yrav[mask])
- else:
- mask = notnull(xrav)
- result[mask] = op(xrav[mask], y)
-
- result, changed = com._maybe_upcast_putmask(result, -mask, np.nan)
- result = result.reshape(x.shape)
-
- return result
-
- @Appender(_arith_doc % name)
- def f(self, other, axis=default_axis, level=None, fill_value=None):
- if isinstance(other, DataFrame): # Another DataFrame
- return self._combine_frame(other, na_op, fill_value, level)
- elif isinstance(other, Series):
- return self._combine_series(other, na_op, fill_value, axis, level)
- elif isinstance(other, (list, tuple)):
- if axis is not None and self._get_axis_name(axis) == 'index':
- casted = Series(other, index=self.index)
- else:
- casted = Series(other, index=self.columns)
- return self._combine_series(casted, na_op, fill_value, axis, level)
- elif isinstance(other, np.ndarray):
- if other.ndim == 1:
- if axis is not None and self._get_axis_name(axis) == 'index':
- casted = Series(other, index=self.index)
- else:
- casted = Series(other, index=self.columns)
- return self._combine_series(casted, na_op, fill_value,
- axis, level)
- elif other.ndim == 2:
- casted = DataFrame(other, index=self.index,
- columns=self.columns)
- return self._combine_frame(casted, na_op, fill_value, level)
- else:
- raise ValueError("Incompatible argument shape %s" % (other.shape,))
- else:
- return self._combine_const(other, na_op)
-
- f.__name__ = name
-
- return f
-
-
-def _flex_comp_method(op, name, str_rep=None, default_axis='columns'):
-
- def na_op(x, y):
- try:
- result = op(x, y)
- except TypeError:
- xrav = x.ravel()
- result = np.empty(x.size, dtype=x.dtype)
- if isinstance(y, (np.ndarray, Series)):
- yrav = y.ravel()
- mask = notnull(xrav) & notnull(yrav)
- result[mask] = op(np.array(list(xrav[mask])),
- np.array(list(yrav[mask])))
- else:
- mask = notnull(xrav)
- result[mask] = op(np.array(list(xrav[mask])), y)
-
- if op == operator.ne: # pragma: no cover
- np.putmask(result, -mask, True)
- else:
- np.putmask(result, -mask, False)
- result = result.reshape(x.shape)
-
- return result
-
- @Appender('Wrapper for flexible comparison methods %s' % name)
- def f(self, other, axis=default_axis, level=None):
- if isinstance(other, DataFrame): # Another DataFrame
- return self._flex_compare_frame(other, na_op, str_rep, level)
-
- elif isinstance(other, Series):
- return self._combine_series(other, na_op, None, axis, level)
-
- elif isinstance(other, (list, tuple)):
- if axis is not None and self._get_axis_name(axis) == 'index':
- casted = Series(other, index=self.index)
- else:
- casted = Series(other, index=self.columns)
-
- return self._combine_series(casted, na_op, None, axis, level)
-
- elif isinstance(other, np.ndarray):
- if other.ndim == 1:
- if axis is not None and self._get_axis_name(axis) == 'index':
- casted = Series(other, index=self.index)
- else:
- casted = Series(other, index=self.columns)
-
- return self._combine_series(casted, na_op, None, axis, level)
-
- elif other.ndim == 2:
- casted = DataFrame(other, index=self.index,
- columns=self.columns)
-
- return self._flex_compare_frame(casted, na_op, str_rep, level)
-
- else:
- raise ValueError("Incompatible argument shape: %s" %
- (other.shape,))
-
- else:
- return self._combine_const(other, na_op)
-
- f.__name__ = name
-
- return f
-
-
-def _comp_method(func, name, str_rep):
- @Appender('Wrapper for comparison method %s' % name)
- def f(self, other):
- if isinstance(other, DataFrame): # Another DataFrame
- return self._compare_frame(other, func, str_rep)
- elif isinstance(other, Series):
- return self._combine_series_infer(other, func)
- else:
-
- # straight boolean comparisions we want to allow all columns
- # (regardless of dtype to pass thru)
- return self._combine_const(other, func, raise_on_error=False).fillna(True).astype(bool)
-
- f.__name__ = name
-
- return f
-
-
#----------------------------------------------------------------------
# DataFrame class
@@ -752,79 +580,6 @@ def __len__(self):
"""Returns length of info axis, but here we use the index """
return len(self.index)
- #----------------------------------------------------------------------
- # Arithmetic methods
-
- add = _arith_method(operator.add, 'add', '+')
- mul = _arith_method(operator.mul, 'multiply', '*')
- sub = _arith_method(operator.sub, 'subtract', '-')
- div = divide = _arith_method(lambda x, y: x / y, 'divide', '/')
- pow = _arith_method(operator.pow, 'pow', '**')
- mod = _arith_method(lambda x, y: x % y, 'mod')
-
- radd = _arith_method(_radd_compat, 'radd')
- rmul = _arith_method(operator.mul, 'rmultiply')
- rsub = _arith_method(lambda x, y: y - x, 'rsubtract')
- rdiv = _arith_method(lambda x, y: y / x, 'rdivide')
- rpow = _arith_method(lambda x, y: y ** x, 'rpow')
- rmod = _arith_method(lambda x, y: y % x, 'rmod')
-
- __add__ = _arith_method(operator.add, '__add__', '+', default_axis=None)
- __sub__ = _arith_method(operator.sub, '__sub__', '-', default_axis=None)
- __mul__ = _arith_method(operator.mul, '__mul__', '*', default_axis=None)
- __truediv__ = _arith_method(operator.truediv, '__truediv__', '/',
- default_axis=None, fill_zeros=np.inf, truediv=True)
- # numexpr produces a different value (python/numpy: 0.000, numexpr: inf)
- # when dividing by zero, so can't use floordiv speed up (yet)
- # __floordiv__ = _arith_method(operator.floordiv, '__floordiv__', '//',
- __floordiv__ = _arith_method(operator.floordiv, '__floordiv__',
- default_axis=None, fill_zeros=np.inf)
- __pow__ = _arith_method(operator.pow, '__pow__', '**', default_axis=None)
-
- # currently causes a floating point exception to occur - so sticking with unaccelerated for now
- # __mod__ = _arith_method(operator.mod, '__mod__', '%', default_axis=None, fill_zeros=np.nan)
- __mod__ = _arith_method(
- operator.mod, '__mod__', default_axis=None, fill_zeros=np.nan)
-
- __radd__ = _arith_method(_radd_compat, '__radd__', default_axis=None)
- __rmul__ = _arith_method(operator.mul, '__rmul__', default_axis=None)
- __rsub__ = _arith_method(lambda x, y: y - x, '__rsub__', default_axis=None)
- __rtruediv__ = _arith_method(lambda x, y: y / x, '__rtruediv__',
- default_axis=None, fill_zeros=np.inf)
- __rfloordiv__ = _arith_method(lambda x, y: y // x, '__rfloordiv__',
- default_axis=None, fill_zeros=np.inf)
- __rpow__ = _arith_method(lambda x, y: y ** x, '__rpow__',
- default_axis=None)
- __rmod__ = _arith_method(lambda x, y: y % x, '__rmod__', default_axis=None,
- fill_zeros=np.nan)
-
- # boolean operators
- __and__ = _arith_method(operator.and_, '__and__', '&')
- __or__ = _arith_method(operator.or_, '__or__', '|')
- __xor__ = _arith_method(operator.xor, '__xor__')
-
- # Python 2 division methods
- if not compat.PY3:
- __div__ = _arith_method(operator.div, '__div__', '/',
- default_axis=None, fill_zeros=np.inf, truediv=False)
- __rdiv__ = _arith_method(lambda x, y: y / x, '__rdiv__',
- default_axis=None, fill_zeros=np.inf)
-
- # Comparison methods
- __eq__ = _comp_method(operator.eq, '__eq__', '==')
- __ne__ = _comp_method(operator.ne, '__ne__', '!=')
- __lt__ = _comp_method(operator.lt, '__lt__', '<')
- __gt__ = _comp_method(operator.gt, '__gt__', '>')
- __le__ = _comp_method(operator.le, '__le__', '<=')
- __ge__ = _comp_method(operator.ge, '__ge__', '>=')
-
- eq = _flex_comp_method(operator.eq, 'eq', '==')
- ne = _flex_comp_method(operator.ne, 'ne', '!=')
- lt = _flex_comp_method(operator.lt, 'lt', '<')
- gt = _flex_comp_method(operator.gt, 'gt', '>')
- le = _flex_comp_method(operator.le, 'le', '<=')
- ge = _flex_comp_method(operator.ge, 'ge', '>=')
-
def dot(self, other):
"""
Matrix multiplication with DataFrame or Series objects
@@ -5152,6 +4907,8 @@ def boxplot(self, column=None, by=None, ax=None, fontsize=None,
return ax
DataFrame.boxplot = boxplot
+ops.add_flex_arithmetic_methods(DataFrame, **ops.frame_flex_funcs)
+ops.add_special_arithmetic_methods(DataFrame, **ops.frame_special_funcs)
if __name__ == '__main__':
import nose
diff --git a/pandas/core/ops.py b/pandas/core/ops.py
new file mode 100644
index 0000000000000..4ce2143fdd92c
--- /dev/null
+++ b/pandas/core/ops.py
@@ -0,0 +1,911 @@
+"""
+Arithmetic operations for PandasObjects
+
+This is not a public API.
+"""
+import operator
+import numpy as np
+import pandas as pd
+from pandas import compat, lib, tslib
+import pandas.index as _index
+from pandas.util.decorators import Appender
+import pandas.core.common as com
+import pandas.core.array as pa
+import pandas.computation.expressions as expressions
+from pandas.core.common import(bind_method, is_list_like, notnull, isnull,
+ _values_from_object, _maybe_match_name)
+
+# -----------------------------------------------------------------------------
+# Functions that add arithmetic methods to objects, given arithmetic factory
+# methods
+
+def _create_methods(arith_method, radd_func, comp_method, bool_method,
+ use_numexpr, special=False, default_axis='columns'):
+ # NOTE: Only frame cares about default_axis, specifically: special methods
+ # have default axis None, whereas flex methods have default axis 'columns'
+ # if we're not using numexpr, then don't pass a str_rep
+ if use_numexpr:
+ op = lambda x: x
+ else:
+ op = lambda x: None
+ if special:
+ def names(x):
+ if x[-1] == "_":
+ return "__%s_" % x
+ else:
+ return "__%s__" % x
+ else:
+ names = lambda x: x
+ radd_func = radd_func or operator.add
+ # In frame, all special methods have default_axis=None; flex methods have default_axis set to the default ('columns')
+ new_methods = dict(
+ add=arith_method(operator.add, names('add'), op('+'), default_axis=default_axis),
+ radd=arith_method(radd_func, names('radd'), op('+'), default_axis=default_axis),
+ sub=arith_method(operator.sub, names('sub'), op('-'), default_axis=default_axis),
+ mul=arith_method(operator.mul, names('mul'), op('*'), default_axis=default_axis),
+ truediv=arith_method(operator.truediv, names('truediv'), op('/'),
+ truediv=True, fill_zeros=np.inf, default_axis=default_axis),
+ floordiv=arith_method(operator.floordiv, names('floordiv'), op('//'),
+ default_axis=default_axis, fill_zeros=np.inf),
+ # Causes a floating point exception in the tests when numexpr
+ # enabled, so for now no speedup
+ mod=arith_method(operator.mod, names('mod'), default_axis=default_axis,
+ fill_zeros=np.nan),
+ pow=arith_method(operator.pow, names('pow'), op('**'), default_axis=default_axis),
+ # not entirely sure why this is necessary, but previously was included
+ # so it's here to maintain compatibility
+ rmul=arith_method(operator.mul, names('rmul'), default_axis=default_axis),
+ rsub=arith_method(lambda x, y: y - x, names('rsub'), default_axis=default_axis),
+ rtruediv=arith_method(lambda x, y: operator.truediv(y, x), names('rtruediv'),
+ truediv=True, fill_zeros=np.inf, default_axis=default_axis),
+ rfloordiv=arith_method(lambda x, y: operator.floordiv(y, x), names('rfloordiv'),
+ default_axis=default_axis, fill_zeros=np.inf),
+ rpow=arith_method(lambda x, y: y ** x, names('rpow'), default_axis=default_axis),
+ rmod=arith_method(lambda x, y: y % x, names('rmod'), default_axis=default_axis),
+ )
+ if not compat.PY3:
+ new_methods["div"] = arith_method(operator.div, names('div'), op('/'),
+ truediv=False, fill_zeros=np.inf, default_axis=default_axis)
+ new_methods["rdiv"] = arith_method(lambda x, y: operator.div(y, x), names('rdiv'),
+ truediv=False, fill_zeros=np.inf, default_axis=default_axis)
+ else:
+ new_methods["div"] = arith_method(operator.truediv, names('div'), op('/'),
+ truediv=True, fill_zeros=np.inf, default_axis=default_axis)
+ new_methods["rdiv"] = arith_method(lambda x, y: operator.truediv(y, x), names('rdiv'),
+ truediv=False, fill_zeros=np.inf, default_axis=default_axis)
+ # Comp methods never had a default axis set
+ if comp_method:
+ new_methods.update(dict(
+ eq=comp_method(operator.eq, names('eq'), op('==')),
+ ne=comp_method(operator.ne, names('ne'), op('!='), masker=True),
+ lt=comp_method(operator.lt, names('lt'), op('<')),
+ gt=comp_method(operator.gt, names('gt'), op('>')),
+ le=comp_method(operator.le, names('le'), op('<=')),
+ ge=comp_method(operator.ge, names('ge'), op('>=')),
+ ))
+ if bool_method:
+ new_methods.update(dict(
+ and_=bool_method(operator.and_, names('and_ [&]'), op('&')),
+ or_=bool_method(operator.or_, names('or_ [|]'), op('|')),
+ # For some reason ``^`` wasn't used in original.
+ xor=bool_method(operator.xor, names('xor [^]')),
+ rand_=bool_method(lambda x, y: operator.and_(y, x), names('rand_ [&]')),
+ ror_=bool_method(lambda x, y: operator.or_(y, x), names('ror_ [|]')),
+ rxor=bool_method(lambda x, y: operator.xor(y, x), names('rxor [^]'))
+ ))
+
+ new_methods = dict((names(k), v) for k, v in new_methods.items())
+ return new_methods
+
+
+def add_methods(cls, new_methods, force, select, exclude):
+ if select and exclude:
+ raise TypeError("May only pass either select or exclude")
+ methods = new_methods
+ if select:
+ select = set(select)
+ methods = {}
+ for key, method in new_methods.items():
+ if key in select:
+ methods[key] = method
+ if exclude:
+ for k in exclude:
+ new_methods.pop(k, None)
+
+ for name, method in new_methods.items():
+ if force or name not in cls.__dict__:
+ bind_method(cls, name, method)
+
+#----------------------------------------------------------------------
+# Arithmetic
+def add_special_arithmetic_methods(cls, arith_method=None, radd_func=None,
+ comp_method=None, bool_method=None,
+ use_numexpr=True, force=False, select=None,
+ exclude=None):
+ """
+ Adds the full suite of special arithmetic methods (``__add__``, ``__sub__``, etc.) to the class.
+
+ Parameters
+ ----------
+ arith_method : function (optional)
+ factory for special arithmetic methods, with op string:
+ f(op, name, str_rep, default_axis=None, fill_zeros=None, **eval_kwargs)
+ radd_func : function (optional)
+ Possible replacement for ``operator.add`` for compatibility
+ comp_method : function, optional,
+ factory for rich comparison - signature: f(op, name, str_rep)
+ use_numexpr : bool, default True
+ whether to accelerate with numexpr, defaults to True
+ force : bool, default False
+ if False, checks whether function is defined **on ``cls.__dict__``** before defining
+ if True, always defines functions on class base
+ select : iterable of strings (optional)
+ if passed, only sets functions with names in select
+ exclude : iterable of strings (optional)
+ if passed, will not set functions with names in exclude
+ """
+ radd_func = radd_func or operator.add
+ # in frame, special methods have default_axis = None, comp methods use 'columns'
+ new_methods = _create_methods(arith_method, radd_func, comp_method, bool_method, use_numexpr, default_axis=None,
+ special=True)
+
+ # inplace operators (I feel like these should get passed an `inplace=True`
+ # or just be removed)
+ new_methods.update(dict(
+ __iadd__=new_methods["__add__"],
+ __isub__=new_methods["__sub__"],
+ __imul__=new_methods["__mul__"],
+ __itruediv__=new_methods["__truediv__"],
+ __ipow__=new_methods["__pow__"]
+ ))
+ if not compat.PY3:
+ new_methods["__idiv__"] = new_methods["__div__"]
+
+ add_methods(cls, new_methods=new_methods, force=force, select=select, exclude=exclude)
+
+
+def add_flex_arithmetic_methods(cls, flex_arith_method, radd_func=None,
+ flex_comp_method=None, flex_bool_method=None,
+ use_numexpr=True, force=False, select=None,
+ exclude=None):
+ """
+ Adds the full suite of flex arithmetic methods (``pow``, ``mul``, ``add``) to the class.
+
+ Parameters
+ ----------
+ flex_arith_method : function (optional)
+ factory for special arithmetic methods, with op string:
+ f(op, name, str_rep, default_axis=None, fill_zeros=None, **eval_kwargs)
+ radd_func : function (optional)
+ Possible replacement for ``lambda x, y: operator.add(y, x)`` for compatibility
+ flex_comp_method : function, optional,
+ factory for rich comparison - signature: f(op, name, str_rep)
+ use_numexpr : bool, default True
+ whether to accelerate with numexpr, defaults to True
+ force : bool, default False
+ if False, checks whether function is defined **on ``cls.__dict__``** before defining
+ if True, always defines functions on class base
+ select : iterable of strings (optional)
+ if passed, only sets functions with names in select
+ exclude : iterable of strings (optional)
+ if passed, will not set functions with names in exclude
+ """
+ radd_func = radd_func or (lambda x, y: operator.add(y, x))
+ # in frame, default axis is 'columns', doesn't matter for series and panel
+ new_methods = _create_methods(
+ flex_arith_method, radd_func, flex_comp_method, flex_bool_method,
+ use_numexpr, default_axis='columns', special=False)
+ new_methods.update(dict(
+ multiply=new_methods['mul'],
+ subtract=new_methods['sub'],
+ divide=new_methods['div']
+ ))
+ # opt out of bool flex methods for now
+ for k in ('ror_', 'rxor', 'rand_'):
+ if k in new_methods:
+ new_methods.pop(k)
+
+ add_methods(cls, new_methods=new_methods, force=force, select=select, exclude=exclude)
+
+def cleanup_name(name):
+ """cleanup special names
+ >>> cleanup_name("__rsub__")
+ 'sub'
+ >>> cleanup_name("rand_")
+ 'and_'
+ """
+ if name[:2] == "__":
+ name = name[2:-2]
+ if name[0] == "r":
+ name = name[1:]
+ # re-add trailing underscore for operator names.
+ if name == "or":
+ name = "or_"
+ elif name == "and":
+ name = "and_"
+ return name
+
+
+# direct copy of original Series _TimeOp
+class _TimeOp(object):
+ """
+ Wrapper around Series datetime/time/timedelta arithmetic operations.
+ Generally, you should use classmethod ``maybe_convert_for_time_op`` as an
+ entry point.
+ """
+ fill_value = tslib.iNaT
+ wrap_results = staticmethod(lambda x: x)
+ dtype = None
+
+ def __init__(self, left, right, name):
+ self.name = name
+
+ lvalues = self._convert_to_array(left, name=name)
+ rvalues = self._convert_to_array(right, name=name)
+
+ self.is_timedelta_lhs = com.is_timedelta64_dtype(left)
+ self.is_datetime_lhs = com.is_datetime64_dtype(left)
+ self.is_integer_lhs = left.dtype.kind in ['i','u']
+ self.is_datetime_rhs = com.is_datetime64_dtype(rvalues)
+ self.is_timedelta_rhs = (com.is_timedelta64_dtype(rvalues)
+ or (not self.is_datetime_rhs
+ and pd._np_version_under1p7))
+ self.is_integer_rhs = rvalues.dtype.kind in ('i','u')
+
+ self._validate()
+
+ self._convert_for_datetime(lvalues, rvalues)
+
+ def _validate(self):
+ # timedelta and integer mul/div
+
+ if (self.is_timedelta_lhs and self.is_integer_rhs) or\
+ (self.is_integer_lhs and self.is_timedelta_rhs):
+
+ if self.name not in ('__truediv__','__div__','__mul__'):
+ raise TypeError("can only operate on a timedelta and an integer for "
+ "division, but the operator [%s] was passed" % self.name)
+
+ # 2 datetimes
+ elif self.is_datetime_lhs and self.is_datetime_rhs:
+ if self.name != '__sub__':
+ raise TypeError("can only operate on datetimes for subtraction, "
+ "but the operator [%s] was passed" % self.name)
+
+
+ # 2 timedeltas
+ elif self.is_timedelta_lhs and self.is_timedelta_rhs:
+
+ if self.name not in ('__div__', '__truediv__', '__add__', '__sub__'):
+ raise TypeError("can only operate on timedeltas for "
+ "addition, subtraction, and division, but the operator [%s] was passed" % self.name)
+
+ # datetime and timedelta
+ elif self.is_datetime_lhs and self.is_timedelta_rhs:
+
+ if self.name not in ('__add__','__sub__'):
+ raise TypeError("can only operate on a datetime with a rhs of a timedelta for "
+ "addition and subtraction, but the operator [%s] was passed" % self.name)
+
+ elif self.is_timedelta_lhs and self.is_datetime_rhs:
+
+ if self.name != '__add__':
+ raise TypeError("can only operate on a timedelta and a datetime for "
+ "addition, but the operator [%s] was passed" % self.name)
+ else:
+ raise TypeError('cannot operate on a series without a rhs '
+ 'of a series/ndarray of type datetime64[ns] '
+ 'or a timedelta')
+
+ def _convert_to_array(self, values, name=None):
+ """converts values to ndarray"""
+ from pandas.tseries.timedeltas import _possibly_cast_to_timedelta
+
+ coerce = 'compat' if pd._np_version_under1p7 else True
+ if not is_list_like(values):
+ values = np.array([values])
+ inferred_type = lib.infer_dtype(values)
+ if inferred_type in ('datetime64','datetime','date','time'):
+ # a datelike
+ if not (isinstance(values, (pa.Array, pd.Series)) and com.is_datetime64_dtype(values)):
+ values = tslib.array_to_datetime(values)
+ elif isinstance(values, pd.DatetimeIndex):
+ values = values.to_series()
+ elif inferred_type in ('timedelta', 'timedelta64'):
+ # have a timedelta, convert to ns here
+ values = _possibly_cast_to_timedelta(values, coerce=coerce)
+ elif inferred_type == 'integer':
+ # py3 compat where dtype is 'm' but is an integer
+ if values.dtype.kind == 'm':
+ values = values.astype('timedelta64[ns]')
+ elif isinstance(values, pd.PeriodIndex):
+ values = values.to_timestamp().to_series()
+ elif name not in ('__truediv__','__div__','__mul__'):
+ raise TypeError("incompatible type for a datetime/timedelta "
+ "operation [{0}]".format(name))
+ elif isinstance(values[0], pd.DateOffset):
+ # handle DateOffsets
+ os = pa.array([ getattr(v,'delta',None) for v in values ])
+ mask = isnull(os)
+ if mask.any():
+ raise TypeError("cannot use a non-absolute DateOffset in "
+ "datetime/timedelta operations [{0}]".format(
+ ','.join([ com.pprint_thing(v) for v in values[mask] ])))
+ values = _possibly_cast_to_timedelta(os, coerce=coerce)
+ else:
+ raise TypeError("incompatible type [{0}] for a datetime/timedelta operation".format(pa.array(values).dtype))
+
+ return values
+
+ def _convert_for_datetime(self, lvalues, rvalues):
+ mask = None
+ # datetimes require views
+ if self.is_datetime_lhs or self.is_datetime_rhs:
+ # datetime subtraction means timedelta
+ if self.is_datetime_lhs and self.is_datetime_rhs:
+ self.dtype = 'timedelta64[ns]'
+ else:
+ self.dtype = 'datetime64[ns]'
+ mask = isnull(lvalues) | isnull(rvalues)
+ lvalues = lvalues.view(np.int64)
+ rvalues = rvalues.view(np.int64)
+
+ # otherwise it's a timedelta
+ else:
+ self.dtype = 'timedelta64[ns]'
+ mask = isnull(lvalues) | isnull(rvalues)
+ lvalues = lvalues.astype(np.int64)
+ rvalues = rvalues.astype(np.int64)
+
+ # time delta division -> unit less
+ # integer gets converted to timedelta in np < 1.6
+ if (self.is_timedelta_lhs and self.is_timedelta_rhs) and\
+ not self.is_integer_rhs and\
+ not self.is_integer_lhs and\
+ self.name in ('__div__', '__truediv__'):
+ self.dtype = 'float64'
+ self.fill_value = np.nan
+ lvalues = lvalues.astype(np.float64)
+ rvalues = rvalues.astype(np.float64)
+
+ # if we need to mask the results
+ if mask is not None:
+ if mask.any():
+ def f(x):
+ x = pa.array(x,dtype=self.dtype)
+ np.putmask(x,mask,self.fill_value)
+ return x
+ self.wrap_results = f
+ self.lvalues = lvalues
+ self.rvalues = rvalues
+
+ @classmethod
+ def maybe_convert_for_time_op(cls, left, right, name):
+ """
+ if ``left`` and ``right`` are appropriate for datetime arithmetic with
+ operation ``name``, processes them and returns a ``_TimeOp`` object
+ that stores all the required values. Otherwise, it will return
+ ``None``, indicating that the data is not the right type for time ops.
+ """
+ # decide if we can do it
+ is_timedelta_lhs = com.is_timedelta64_dtype(left)
+ is_datetime_lhs = com.is_datetime64_dtype(left)
+ if not (is_datetime_lhs or is_timedelta_lhs):
+ return None
+ # rops are allowed. No need for special checks, just strip off
+ # r part.
+ if name.startswith('__r'):
+ name = "__" + name[3:]
+ return cls(left, right, name)
+
+
+def _arith_method_SERIES(op, name, str_rep=None, fill_zeros=None, default_axis=None, **eval_kwargs):
+ """
+ Wrapper function for Series arithmetic operations, to avoid
+ code duplication.
+ """
+ def na_op(x, y):
+ try:
+ result = expressions.evaluate(op, str_rep, x, y,
+ raise_on_error=True, **eval_kwargs)
+ except TypeError:
+ result = pa.empty(len(x), dtype=x.dtype)
+ if isinstance(y, (pa.Array, pd.Series)):
+ mask = notnull(x) & notnull(y)
+ result[mask] = op(x[mask], y[mask])
+ else:
+ mask = notnull(x)
+ result[mask] = op(x[mask], y)
+
+ result, changed = com._maybe_upcast_putmask(result, -mask, pa.NA)
+
+ result = com._fill_zeros(result, y, fill_zeros)
+ return result
+
+ def wrapper(left, right, name=name):
+
+ time_converted = _TimeOp.maybe_convert_for_time_op(left, right, name)
+
+ if time_converted is None:
+ lvalues, rvalues = left, right
+ dtype = None
+ wrap_results = lambda x: x
+ elif time_converted is NotImplemented:
+ return NotImplemented
+ else:
+ lvalues = time_converted.lvalues
+ rvalues = time_converted.rvalues
+ dtype = time_converted.dtype
+ wrap_results = time_converted.wrap_results
+
+ if isinstance(rvalues, pd.Series):
+ join_idx, lidx, ridx = left.index.join(rvalues.index, how='outer',
+ return_indexers=True)
+ rindex = rvalues.index
+ name = _maybe_match_name(left, rvalues)
+ lvalues = getattr(lvalues, 'values', lvalues)
+ rvalues = getattr(rvalues, 'values', rvalues)
+ if left.index.equals(rindex):
+ index = left.index
+ else:
+ index = join_idx
+
+ if lidx is not None:
+ lvalues = com.take_1d(lvalues, lidx)
+
+ if ridx is not None:
+ rvalues = com.take_1d(rvalues, ridx)
+
+ arr = na_op(lvalues, rvalues)
+
+ return left._constructor(wrap_results(arr), index=index,
+ name=name, dtype=dtype)
+ elif isinstance(right, pd.DataFrame):
+ return NotImplemented
+ else:
+ # scalars
+ if hasattr(lvalues, 'values'):
+ lvalues = lvalues.values
+ return left._constructor(wrap_results(na_op(lvalues, rvalues)),
+ index=left.index, name=left.name, dtype=dtype)
+ return wrapper
+
+def _comp_method_SERIES(op, name, str_rep=None, masker=False):
+ """
+ Wrapper function for Series comparison operations, to avoid
+ code duplication.
+ """
+ def na_op(x, y):
+ if x.dtype == np.object_:
+ if isinstance(y, list):
+ y = lib.list_to_object_array(y)
+
+ if isinstance(y, (pa.Array, pd.Series)):
+ if y.dtype != np.object_:
+ result = lib.vec_compare(x, y.astype(np.object_), op)
+ else:
+ result = lib.vec_compare(x, y, op)
+ else:
+ result = lib.scalar_compare(x, y, op)
+ else:
+
+ try:
+ result = getattr(x,name)(y)
+ if result is NotImplemented:
+ raise TypeError("invalid type comparison")
+ except (AttributeError):
+ result = op(x, y)
+
+ return result
+
+ def wrapper(self, other):
+ if isinstance(other, pd.Series):
+ name = _maybe_match_name(self, other)
+ if len(self) != len(other):
+ raise ValueError('Series lengths must match to compare')
+ return self._constructor(na_op(self.values, other.values),
+ index=self.index, name=name)
+ elif isinstance(other, pd.DataFrame): # pragma: no cover
+ return NotImplemented
+ elif isinstance(other, (pa.Array, pd.Series)):
+ if len(self) != len(other):
+ raise ValueError('Lengths must match to compare')
+ return self._constructor(na_op(self.values, np.asarray(other)),
+ index=self.index, name=self.name)
+ else:
+
+ mask = isnull(self)
+
+ values = self.values
+ other = _index.convert_scalar(values, other)
+
+ if issubclass(values.dtype.type, np.datetime64):
+ values = values.view('i8')
+
+ # scalars
+ res = na_op(values, other)
+ if np.isscalar(res):
+ raise TypeError('Could not compare %s type with Series'
+ % type(other))
+
+ # always return a full value series here
+ res = _values_from_object(res)
+
+ res = pd.Series(res, index=self.index, name=self.name, dtype='bool')
+
+ # mask out the invalids
+ if mask.any():
+ res[mask.values] = masker
+
+ return res
+ return wrapper
+
+
+def _bool_method_SERIES(op, name, str_rep=None):
+ """
+ Wrapper function for Series boolean operations, to avoid
+ code duplication.
+ """
+ def na_op(x, y):
+ try:
+ result = op(x, y)
+ except TypeError:
+ if isinstance(y, list):
+ y = lib.list_to_object_array(y)
+
+ if isinstance(y, (pa.Array, pd.Series)):
+ if (x.dtype == np.bool_ and
+ y.dtype == np.bool_): # pragma: no cover
+ result = op(x, y) # when would this be hit?
+ else:
+ x = com._ensure_object(x)
+ y = com._ensure_object(y)
+ result = lib.vec_binop(x, y, op)
+ else:
+ result = lib.scalar_binop(x, y, op)
+
+ return result
+
+ def wrapper(self, other):
+ if isinstance(other, pd.Series):
+ name = _maybe_match_name(self, other)
+ return self._constructor(na_op(self.values, other.values),
+ index=self.index, name=name)
+ elif isinstance(other, pd.DataFrame):
+ return NotImplemented
+ else:
+ # scalars
+ return self._constructor(na_op(self.values, other),
+ index=self.index, name=self.name)
+ return wrapper
+
+
+# original Series _radd_compat method
+def _radd_compat(left, right):
+ radd = lambda x, y: y + x
+ # GH #353, NumPy 1.5.1 workaround
+ try:
+ output = radd(left, right)
+ except TypeError:
+ cond = (pd._np_version_under1p6 and
+ left.dtype == np.object_)
+ if cond: # pragma: no cover
+ output = np.empty_like(left)
+ output.flat[:] = [radd(x, right) for x in left.flat]
+ else:
+ raise
+
+ return output
+
+
+def _flex_method_SERIES(op, name, str_rep=None, default_axis=None,
+ fill_zeros=None, **eval_kwargs):
+ doc = """
+ Binary operator %s with support to substitute a fill_value for missing data
+ in one of the inputs
+
+ Parameters
+ ----------
+ other : Series or scalar value
+ fill_value : None or float value, default None (NaN)
+ Fill missing (NaN) values with this value. If both Series are
+ missing, the result will be missing
+ level : int or name
+ Broadcast across a level, matching Index values on the
+ passed MultiIndex level
+
+ Returns
+ -------
+ result : Series
+ """ % name
+
+ @Appender(doc)
+ def f(self, other, level=None, fill_value=None):
+ if isinstance(other, pd.Series):
+ return self._binop(other, op, level=level, fill_value=fill_value)
+ elif isinstance(other, (pa.Array, pd.Series, list, tuple)):
+ if len(other) != len(self):
+ raise ValueError('Lengths must be equal')
+ return self._binop(self._constructor(other, self.index), op,
+ level=level, fill_value=fill_value)
+ else:
+ return self._constructor(op(self.values, other), self.index,
+ name=self.name)
+
+ f.__name__ = name
+ return f
+
+series_flex_funcs = dict(flex_arith_method=_flex_method_SERIES,
+ radd_func=_radd_compat,
+ flex_comp_method=_comp_method_SERIES)
+
+series_special_funcs = dict(arith_method=_arith_method_SERIES,
+ radd_func=_radd_compat,
+ comp_method=_comp_method_SERIES,
+ bool_method=_bool_method_SERIES)
+
+
+_arith_doc_FRAME = """
+Binary operator %s with support to substitute a fill_value for missing data in
+one of the inputs
+
+Parameters
+----------
+other : Series, DataFrame, or constant
+axis : {0, 1, 'index', 'columns'}
+ For Series input, axis to match Series index on
+fill_value : None or float value, default None
+ Fill missing (NaN) values with this value. If both DataFrame locations are
+ missing, the result will be missing
+level : int or name
+ Broadcast across a level, matching Index values on the
+ passed MultiIndex level
+
+Notes
+-----
+Mismatched indices will be unioned together
+
+Returns
+-------
+result : DataFrame
+"""
+
+
+def _arith_method_FRAME(op, name, str_rep=None, default_axis='columns', fill_zeros=None, **eval_kwargs):
+ def na_op(x, y):
+ try:
+ result = expressions.evaluate(
+ op, str_rep, x, y, raise_on_error=True, **eval_kwargs)
+ except TypeError:
+ xrav = x.ravel()
+ result = np.empty(x.size, dtype=x.dtype)
+ if isinstance(y, (np.ndarray, pd.Series)):
+ yrav = y.ravel()
+ mask = notnull(xrav) & notnull(yrav)
+ result[mask] = op(xrav[mask], yrav[mask])
+ else:
+ mask = notnull(xrav)
+ result[mask] = op(xrav[mask], y)
+
+ result, changed = com._maybe_upcast_putmask(result, -mask, np.nan)
+ result = result.reshape(x.shape)
+
+ result = com._fill_zeros(result, y, fill_zeros)
+
+ return result
+
+ @Appender(_arith_doc_FRAME % name)
+ def f(self, other, axis=default_axis, level=None, fill_value=None):
+ if isinstance(other, pd.DataFrame): # Another DataFrame
+ return self._combine_frame(other, na_op, fill_value, level)
+ elif isinstance(other, pd.Series):
+ return self._combine_series(other, na_op, fill_value, axis, level)
+ elif isinstance(other, (list, tuple)):
+ if axis is not None and self._get_axis_name(axis) == 'index':
+ # casted = self._constructor_sliced(other, index=self.index)
+ casted = pd.Series(other, index=self.index)
+ else:
+ # casted = self._constructor_sliced(other, index=self.columns)
+ casted = pd.Series(other, index=self.columns)
+ return self._combine_series(casted, na_op, fill_value, axis, level)
+ elif isinstance(other, np.ndarray):
+ if other.ndim == 1:
+ if axis is not None and self._get_axis_name(axis) == 'index':
+ # casted = self._constructor_sliced(other, index=self.index)
+ casted = pd.Series(other, index=self.index)
+ else:
+ # casted = self._constructor_sliced(other, index=self.columns)
+ casted = pd.Series(other, index=self.columns)
+ return self._combine_series(casted, na_op, fill_value,
+ axis, level)
+ elif other.ndim == 2:
+ # casted = self._constructor(other, index=self.index,
+ # columns=self.columns)
+ casted = pd.DataFrame(other, index=self.index,
+ columns=self.columns)
+ return self._combine_frame(casted, na_op, fill_value, level)
+ else:
+ raise ValueError("Incompatible argument shape: %s" %
+ (other.shape,))
+ else:
+ return self._combine_const(other, na_op)
+
+ f.__name__ = name
+
+ return f
+
+
+# Masker unused for now
+def _flex_comp_method_FRAME(op, name, str_rep=None, default_axis='columns',
+ masker=False):
+
+ def na_op(x, y):
+ try:
+ result = op(x, y)
+ except TypeError:
+ xrav = x.ravel()
+ result = np.empty(x.size, dtype=x.dtype)
+ if isinstance(y, (np.ndarray, pd.Series)):
+ yrav = y.ravel()
+ mask = notnull(xrav) & notnull(yrav)
+ result[mask] = op(np.array(list(xrav[mask])),
+ np.array(list(yrav[mask])))
+ else:
+ mask = notnull(xrav)
+ result[mask] = op(np.array(list(xrav[mask])), y)
+
+ if op == operator.ne: # pragma: no cover
+ np.putmask(result, -mask, True)
+ else:
+ np.putmask(result, -mask, False)
+ result = result.reshape(x.shape)
+
+ return result
+
+ @Appender('Wrapper for flexible comparison methods %s' % name)
+ def f(self, other, axis=default_axis, level=None):
+ if isinstance(other, pd.DataFrame): # Another DataFrame
+ return self._flex_compare_frame(other, na_op, str_rep, level)
+
+ elif isinstance(other, pd.Series):
+ return self._combine_series(other, na_op, None, axis, level)
+
+ elif isinstance(other, (list, tuple)):
+ if axis is not None and self._get_axis_name(axis) == 'index':
+ casted = pd.Series(other, index=self.index)
+ else:
+ casted = pd.Series(other, index=self.columns)
+
+ return self._combine_series(casted, na_op, None, axis, level)
+
+ elif isinstance(other, np.ndarray):
+ if other.ndim == 1:
+ if axis is not None and self._get_axis_name(axis) == 'index':
+ casted = pd.Series(other, index=self.index)
+ else:
+ casted = pd.Series(other, index=self.columns)
+
+ return self._combine_series(casted, na_op, None, axis, level)
+
+ elif other.ndim == 2:
+ casted = pd.DataFrame(other, index=self.index,
+ columns=self.columns)
+
+ return self._flex_compare_frame(casted, na_op, str_rep, level)
+
+ else:
+ raise ValueError("Incompatible argument shape: %s" %
+ (other.shape,))
+
+ else:
+ return self._combine_const(other, na_op)
+
+ f.__name__ = name
+
+ return f
+
+
+def _comp_method_FRAME(func, name, str_rep, masker=False):
+ @Appender('Wrapper for comparison method %s' % name)
+ def f(self, other):
+ if isinstance(other, pd.DataFrame): # Another DataFrame
+ return self._compare_frame(other, func, str_rep)
+ elif isinstance(other, pd.Series):
+ return self._combine_series_infer(other, func)
+ else:
+
+ # for straight boolean comparisons we want to allow all columns
+ # (regardless of dtype) to pass through; see GH #4537 for discussion.
+ return self._combine_const(other, func, raise_on_error=False).fillna(True).astype(bool)
+
+ f.__name__ = name
+
+ return f
+
+
+frame_flex_funcs = dict(flex_arith_method=_arith_method_FRAME,
+ radd_func=_radd_compat,
+ flex_comp_method=_flex_comp_method_FRAME)
+
+
+frame_special_funcs = dict(arith_method=_arith_method_FRAME,
+ radd_func=_radd_compat,
+ comp_method=_comp_method_FRAME,
+ bool_method=_arith_method_FRAME)
+
+
+def _arith_method_PANEL(op, name, str_rep=None, fill_zeros=None,
+ default_axis=None, **eval_kwargs):
+ # copied from Series na_op above, but without unnecessary branch for
+ # non-scalar
+ def na_op(x, y):
+ try:
+ result = expressions.evaluate(op, str_rep, x, y,
+ raise_on_error=True, **eval_kwargs)
+ except TypeError:
+ result = pa.empty(len(x), dtype=x.dtype)
+ mask = notnull(x)
+ result[mask] = op(x[mask], y)
+ result, changed = com._maybe_upcast_putmask(result, -mask, pa.NA)
+
+ result = com._fill_zeros(result, y, fill_zeros)
+ return result
+ # work only for scalars
+
+ def f(self, other):
+ if not np.isscalar(other):
+ raise ValueError('Simple arithmetic with %s can only be '
+ 'done with scalar values' % self._constructor.__name__)
+
+ return self._combine(other, op)
+ f.__name__ = name
+ return f
+
+
+def _comp_method_PANEL(op, name, str_rep=None, masker=False):
+
+ def na_op(x, y):
+ try:
+ result = expressions.evaluate(op, str_rep, x, y,
+ raise_on_error=True)
+ except TypeError:
+ xrav = x.ravel()
+ result = np.empty(x.size, dtype=bool)
+ if isinstance(y, np.ndarray):
+ yrav = y.ravel()
+ mask = notnull(xrav) & notnull(yrav)
+ result[mask] = op(np.array(list(xrav[mask])),
+ np.array(list(yrav[mask])))
+ else:
+ mask = notnull(xrav)
+ result[mask] = op(np.array(list(xrav[mask])), y)
+
+ if op == operator.ne: # pragma: no cover
+ np.putmask(result, -mask, True)
+ else:
+ np.putmask(result, -mask, False)
+ result = result.reshape(x.shape)
+
+ return result
+
+ @Appender('Wrapper for comparison method %s' % name)
+ def f(self, other):
+ if isinstance(other, self._constructor):
+ return self._compare_constructor(other, na_op)
+ elif isinstance(other, (self._constructor_sliced, pd.DataFrame,
+ pd.Series)):
+ raise Exception("input needs alignment for this object [%s]" %
+ self._constructor)
+ else:
+ return self._combine_const(other, na_op)
+
+ f.__name__ = name
+
+ return f
+
+
+panel_special_funcs = dict(arith_method=_arith_method_PANEL,
+ comp_method=_comp_method_PANEL,
+ bool_method=_arith_method_PANEL)
diff --git a/pandas/core/panel.py b/pandas/core/panel.py
index 697344639c41b..7208ceff7d1a7 100644
--- a/pandas/core/panel.py
+++ b/pandas/core/panel.py
@@ -5,7 +5,6 @@
from pandas.compat import map, zip, range, lrange, lmap, u, OrderedDict, OrderedDefaultdict
from pandas import compat
-import operator
import sys
import numpy as np
from pandas.core.common import (PandasError,
@@ -18,14 +17,14 @@
from pandas.core.internals import (BlockManager,
create_block_manager_from_arrays,
create_block_manager_from_blocks)
-from pandas.core.series import Series
from pandas.core.frame import DataFrame
from pandas.core.generic import NDFrame
from pandas import compat
from pandas.util.decorators import deprecate, Appender, Substitution
import pandas.core.common as com
+import pandas.core.ops as ops
import pandas.core.nanops as nanops
-import pandas.lib as lib
+import pandas.computation.expressions as expressions
def _ensure_like_indices(time, panels):
@@ -91,57 +90,6 @@ def panel_index(time, panels, names=['time', 'panel']):
return MultiIndex(levels, labels, sortorder=None, names=names)
-def _arith_method(func, name):
- # work only for scalars
-
- def f(self, other):
- if not np.isscalar(other):
- raise ValueError('Simple arithmetic with %s can only be '
- 'done with scalar values' % self._constructor.__name__)
-
- return self._combine(other, func)
- f.__name__ = name
- return f
-
-
-def _comp_method(func, name):
-
- def na_op(x, y):
- try:
- result = func(x, y)
- except TypeError:
- xrav = x.ravel()
- result = np.empty(x.size, dtype=x.dtype)
- if isinstance(y, np.ndarray):
- yrav = y.ravel()
- mask = notnull(xrav) & notnull(yrav)
- result[mask] = func(np.array(list(xrav[mask])),
- np.array(list(yrav[mask])))
- else:
- mask = notnull(xrav)
- result[mask] = func(np.array(list(xrav[mask])), y)
-
- if func == operator.ne: # pragma: no cover
- np.putmask(result, -mask, True)
- else:
- np.putmask(result, -mask, False)
- result = result.reshape(x.shape)
-
- return result
-
- @Appender('Wrapper for comparison method %s' % name)
- def f(self, other):
- if isinstance(other, self._constructor):
- return self._compare_constructor(other, func)
- elif isinstance(other, (self._constructor_sliced, DataFrame, Series)):
- raise Exception("input needs alignment for this object [%s]" %
- self._constructor)
- else:
- return self._combine_const(other, na_op)
-
- f.__name__ = name
-
- return f
class Panel(NDFrame):
@@ -289,25 +237,6 @@ def from_dict(cls, data, intersect=False, orient='items', dtype=None):
d[cls._info_axis_name] = Index(ks)
return cls(**d)
- # Comparison methods
- __add__ = _arith_method(operator.add, '__add__')
- __sub__ = _arith_method(operator.sub, '__sub__')
- __truediv__ = _arith_method(operator.truediv, '__truediv__')
- __floordiv__ = _arith_method(operator.floordiv, '__floordiv__')
- __mul__ = _arith_method(operator.mul, '__mul__')
- __pow__ = _arith_method(operator.pow, '__pow__')
-
- __radd__ = _arith_method(operator.add, '__radd__')
- __rmul__ = _arith_method(operator.mul, '__rmul__')
- __rsub__ = _arith_method(lambda x, y: y - x, '__rsub__')
- __rtruediv__ = _arith_method(lambda x, y: y / x, '__rtruediv__')
- __rfloordiv__ = _arith_method(lambda x, y: y // x, '__rfloordiv__')
- __rpow__ = _arith_method(lambda x, y: y ** x, '__rpow__')
-
- if not compat.PY3:
- __div__ = _arith_method(operator.div, '__div__')
- __rdiv__ = _arith_method(lambda x, y: y / x, '__rdiv__')
-
def __getitem__(self, key):
if isinstance(self._info_axis, MultiIndex):
return self._getitem_multilevel(key)
@@ -365,26 +294,6 @@ def _compare_constructor(self, other, func):
d = self._construct_axes_dict(copy=False)
return self._constructor(data=new_data, **d)
- # boolean operators
- __and__ = _arith_method(operator.and_, '__and__')
- __or__ = _arith_method(operator.or_, '__or__')
- __xor__ = _arith_method(operator.xor, '__xor__')
-
- # Comparison methods
- __eq__ = _comp_method(operator.eq, '__eq__')
- __ne__ = _comp_method(operator.ne, '__ne__')
- __lt__ = _comp_method(operator.lt, '__lt__')
- __gt__ = _comp_method(operator.gt, '__gt__')
- __le__ = _comp_method(operator.le, '__le__')
- __ge__ = _comp_method(operator.ge, '__ge__')
-
- eq = _comp_method(operator.eq, 'eq')
- ne = _comp_method(operator.ne, 'ne')
- gt = _comp_method(operator.gt, 'gt')
- lt = _comp_method(operator.lt, 'lt')
- ge = _comp_method(operator.ge, 'ge')
- le = _comp_method(operator.le, 'le')
-
#----------------------------------------------------------------------
# Magic methods
@@ -1262,7 +1171,7 @@ def _extract_axis(self, data, axis=0, intersect=False):
return _ensure_index(index)
@classmethod
- def _add_aggregate_operations(cls):
+ def _add_aggregate_operations(cls, use_numexpr=True):
""" add the operations to the cls; evaluate the doc strings again """
# doc strings substitors
@@ -1279,25 +1188,29 @@ def _add_aggregate_operations(cls):
-------
""" + cls.__name__ + "\n"
- def _panel_arith_method(op, name):
+ def _panel_arith_method(op, name, str_rep=None, default_axis=None,
+ fill_zeros=None, **eval_kwargs):
+ def na_op(x, y):
+ try:
+ result = expressions.evaluate(op, str_rep, x, y, raise_on_error=True, **eval_kwargs)
+ except TypeError:
+ result = op(x, y)
+
+ # handles discrepancy between numpy and numexpr on division/mod by 0
+ # though, given that these are generally (always?) non-scalars, I'm
+ # not sure whether it's worth it at the moment
+ result = com._fill_zeros(result, y, fill_zeros)
+ return result
@Substitution(op)
@Appender(_agg_doc)
def f(self, other, axis=0):
- return self._combine(other, op, axis=axis)
+ return self._combine(other, na_op, axis=axis)
f.__name__ = name
return f
-
- cls.add = _panel_arith_method(operator.add, 'add')
- cls.subtract = cls.sub = _panel_arith_method(operator.sub, 'subtract')
- cls.multiply = cls.mul = _panel_arith_method(operator.mul, 'multiply')
-
- try:
- cls.divide = cls.div = _panel_arith_method(operator.div, 'divide')
- except AttributeError: # pragma: no cover
- # Python 3
- cls.divide = cls.div = _panel_arith_method(
- operator.truediv, 'divide')
-
+ # add `div`, `mul`, `pow`, etc.
+ ops.add_flex_arithmetic_methods(cls, _panel_arith_method,
+ use_numexpr=use_numexpr,
+ flex_comp_method=ops._comp_method_PANEL)
_agg_doc = """
Return %(desc)s over requested axis
@@ -1385,6 +1298,8 @@ def min(self, axis='major', skipna=True):
'minor': 'minor_axis'},
slicers={'major_axis': 'index',
'minor_axis': 'columns'})
+
+ops.add_special_arithmetic_methods(Panel, **ops.panel_special_funcs)
Panel._add_aggregate_operations()
WidePanel = Panel
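Both `Panel` (above) and `Series` (below) now get their operators attached by the factory functions in `ops.py` rather than from hand-written `__add__`/`__sub__` definitions. The underlying pattern — generate each dunder method from a plain binary operator and `setattr` it onto the class — can be sketched as follows (the `Vec` class and helper names here are illustrative only, not the actual `add_special_arithmetic_methods` API):

```python
import operator

class Vec(object):
    """Toy container used to illustrate operator-method injection."""
    def __init__(self, data):
        self.data = list(data)

def _make_arith_method(op, name):
    # Factory in the spirit of _arith_method_SERIES: close over `op`
    # and return a method that applies it elementwise to the container.
    def method(self, other):
        return Vec([op(v, other) for v in self.data])
    method.__name__ = name
    return method

# Attach the generated special methods in one loop, much as
# ops.add_special_arithmetic_methods does for Series/DataFrame/Panel.
for op, name in [(operator.add, '__add__'), (operator.mul, '__mul__')]:
    setattr(Vec, name, _make_arith_method(op, name))

v = Vec([1, 2, 3]) + 10  # dispatches to the generated __add__
```

Centralizing the factories this way is what lets the diff delete the near-identical `_arith_method`/`_comp_method` definitions from `panel.py` and `series.py`.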
diff --git a/pandas/core/series.py b/pandas/core/series.py
index aeb63ecbe268f..38e22e7a9ed3a 100644
--- a/pandas/core/series.py
+++ b/pandas/core/series.py
@@ -6,7 +6,6 @@
# pylint: disable=W0703,W0622,W0613,W0201
import operator
-from distutils.version import LooseVersion
import types
from numpy import nan, ndarray
@@ -21,7 +20,7 @@
_values_from_object,
_possibly_cast_to_datetime, _possibly_castable,
_possibly_convert_platform,
- ABCSparseArray)
+ ABCSparseArray, _maybe_match_name)
from pandas.core.index import (Index, MultiIndex, InvalidIndexError,
_ensure_index, _handle_legacy_indexes)
from pandas.core.indexing import (
@@ -32,13 +31,12 @@
from pandas.core.categorical import Categorical
from pandas.tseries.index import DatetimeIndex
from pandas.tseries.period import PeriodIndex, Period
-from pandas.tseries.offsets import DateOffset
-from pandas.tseries.timedeltas import _possibly_cast_to_timedelta
from pandas import compat
from pandas.util.terminal import get_terminal_size
from pandas.compat import zip, lzip, u, OrderedDict
import pandas.core.array as pa
+import pandas.core.ops as ops
import pandas.core.common as com
import pandas.core.datetools as datetools
@@ -55,387 +53,6 @@
__all__ = ['Series']
-_np_version = np.version.short_version
-_np_version_under1p6 = LooseVersion(_np_version) < '1.6'
-_np_version_under1p7 = LooseVersion(_np_version) < '1.7'
-
-class _TimeOp(object):
- """
- Wrapper around Series datetime/time/timedelta arithmetic operations.
- Generally, you should use classmethod ``maybe_convert_for_time_op`` as an
- entry point.
- """
- fill_value = tslib.iNaT
- wrap_results = staticmethod(lambda x: x)
- dtype = None
-
- def __init__(self, left, right, name):
- self.name = name
-
- lvalues = self._convert_to_array(left, name=name)
- rvalues = self._convert_to_array(right, name=name)
-
- self.is_timedelta_lhs = com.is_timedelta64_dtype(left)
- self.is_datetime_lhs = com.is_datetime64_dtype(left)
- self.is_integer_lhs = left.dtype.kind in ['i','u']
- self.is_datetime_rhs = com.is_datetime64_dtype(rvalues)
- self.is_timedelta_rhs = com.is_timedelta64_dtype(rvalues) or (not self.is_datetime_rhs and _np_version_under1p7)
- self.is_integer_rhs = rvalues.dtype.kind in ('i','u')
-
- self._validate()
-
- self._convert_for_datetime(lvalues, rvalues)
-
- def _validate(self):
- # timedelta and integer mul/div
-
- if (self.is_timedelta_lhs and self.is_integer_rhs) or\
- (self.is_integer_lhs and self.is_timedelta_rhs):
-
- if self.name not in ('__truediv__','__div__','__mul__'):
- raise TypeError("can only operate on a timedelta and an integer for "
- "division, but the operator [%s] was passed" % self.name)
-
- # 2 datetimes
- elif self.is_datetime_lhs and self.is_datetime_rhs:
- if self.name != '__sub__':
- raise TypeError("can only operate on a datetimes for subtraction, "
- "but the operator [%s] was passed" % self.name)
-
-
- # 2 timedeltas
- elif self.is_timedelta_lhs and self.is_timedelta_rhs:
-
- if self.name not in ('__div__', '__truediv__', '__add__', '__sub__'):
- raise TypeError("can only operate on a timedeltas for "
- "addition, subtraction, and division, but the operator [%s] was passed" % self.name)
-
- # datetime and timedelta
- elif self.is_datetime_lhs and self.is_timedelta_rhs:
-
- if self.name not in ('__add__','__sub__'):
- raise TypeError("can only operate on a datetime with a rhs of a timedelta for "
- "addition and subtraction, but the operator [%s] was passed" % self.name)
-
- elif self.is_timedelta_lhs and self.is_datetime_rhs:
-
- if self.name != '__add__':
- raise TypeError("can only operate on a timedelta and a datetime for "
- "addition, but the operator [%s] was passed" % self.name)
- else:
- raise TypeError('cannot operate on a series with out a rhs '
- 'of a series/ndarray of type datetime64[ns] '
- 'or a timedelta')
-
- def _convert_to_array(self, values, name=None):
- """converts values to ndarray"""
- coerce = 'compat' if _np_version_under1p7 else True
- if not is_list_like(values):
- values = np.array([values])
- inferred_type = lib.infer_dtype(values)
- if inferred_type in ('datetime64','datetime','date','time'):
- # a datetlike
- if not (isinstance(values, (pa.Array, Series)) and com.is_datetime64_dtype(values)):
- values = tslib.array_to_datetime(values)
- elif isinstance(values, DatetimeIndex):
- values = values.to_series()
- elif inferred_type in ('timedelta', 'timedelta64'):
- # have a timedelta, convert to to ns here
- values = _possibly_cast_to_timedelta(values, coerce=coerce)
- elif inferred_type == 'integer':
- # py3 compat where dtype is 'm' but is an integer
- if values.dtype.kind == 'm':
- values = values.astype('timedelta64[ns]')
- elif isinstance(values, PeriodIndex):
- values = values.to_timestamp().to_series()
- elif name not in ('__truediv__','__div__','__mul__'):
- raise TypeError("incompatible type for a datetime/timedelta "
- "operation [{0}]".format(name))
- elif isinstance(values[0],DateOffset):
- # handle DateOffsets
- os = pa.array([ getattr(v,'delta',None) for v in values ])
- mask = isnull(os)
- if mask.any():
- raise TypeError("cannot use a non-absolute DateOffset in "
- "datetime/timedelta operations [{0}]".format(
- ','.join([ com.pprint_thing(v) for v in values[mask] ])))
- values = _possibly_cast_to_timedelta(os, coerce=coerce)
- else:
- raise TypeError("incompatible type [{0}] for a datetime/timedelta operation".format(pa.array(values).dtype))
-
- return values
-
- def _convert_for_datetime(self, lvalues, rvalues):
- mask = None
- # datetimes require views
- if self.is_datetime_lhs or self.is_datetime_rhs:
- # datetime subtraction means timedelta
- if self.is_datetime_lhs and self.is_datetime_rhs:
- self.dtype = 'timedelta64[ns]'
- else:
- self.dtype = 'datetime64[ns]'
- mask = isnull(lvalues) | isnull(rvalues)
- lvalues = lvalues.view(np.int64)
- rvalues = rvalues.view(np.int64)
-
- # otherwise it's a timedelta
- else:
- self.dtype = 'timedelta64[ns]'
- mask = isnull(lvalues) | isnull(rvalues)
- lvalues = lvalues.astype(np.int64)
- rvalues = rvalues.astype(np.int64)
-
- # time delta division -> unit less
- # integer gets converted to timedelta in np < 1.6
- if (self.is_timedelta_lhs and self.is_timedelta_rhs) and\
- not self.is_integer_rhs and\
- not self.is_integer_lhs and\
- self.name in ('__div__', '__truediv__'):
- self.dtype = 'float64'
- self.fill_value = np.nan
- lvalues = lvalues.astype(np.float64)
- rvalues = rvalues.astype(np.float64)
-
- # if we need to mask the results
- if mask is not None:
- if mask.any():
- def f(x):
- x = pa.array(x,dtype=self.dtype)
- np.putmask(x,mask,self.fill_value)
- return x
- self.wrap_results = f
- self.lvalues = lvalues
- self.rvalues = rvalues
-
- @classmethod
- def maybe_convert_for_time_op(cls, left, right, name):
- """
- if ``left`` and ``right`` are appropriate for datetime arithmetic with
- operation ``name``, processes them and returns a ``_TimeOp`` object
- that stores all the required values. Otherwise, it will generate
- either a ``NotImplementedError`` or ``None``, indicating that the
- operation is unsupported for datetimes (e.g., an unsupported r_op) or
- that the data is not the right type for time ops.
- """
- # decide if we can do it
- is_timedelta_lhs = com.is_timedelta64_dtype(left)
- is_datetime_lhs = com.is_datetime64_dtype(left)
- if not (is_datetime_lhs or is_timedelta_lhs):
- return None
- # rops currently disabled
- if name.startswith('__r'):
- return NotImplemented
-
- return cls(left, right, name)
-
-#----------------------------------------------------------------------
-# Wrapper function for Series arithmetic methods
-
-def _arith_method(op, name, fill_zeros=None):
- """
- Wrapper function for Series arithmetic operations, to avoid
- code duplication.
- """
- def na_op(x, y):
- try:
-
- result = op(x, y)
- result = com._fill_zeros(result, y, fill_zeros)
-
- except TypeError:
- result = pa.empty(len(x), dtype=x.dtype)
- if isinstance(y, (pa.Array, Series)):
- mask = notnull(x) & notnull(y)
- result[mask] = op(x[mask], y[mask])
- else:
- mask = notnull(x)
- result[mask] = op(x[mask], y)
-
- result, changed = com._maybe_upcast_putmask(result, -mask, pa.NA)
-
- return result
-
- def wrapper(left, right, name=name):
- from pandas.core.frame import DataFrame
-
- time_converted = _TimeOp.maybe_convert_for_time_op(left, right, name)
-
- if time_converted is None:
- lvalues, rvalues = left, right
- dtype = None
- wrap_results = lambda x: x
- elif time_converted == NotImplemented:
- return NotImplemented
- else:
- lvalues = time_converted.lvalues
- rvalues = time_converted.rvalues
- dtype = time_converted.dtype
- wrap_results = time_converted.wrap_results
-
- if isinstance(rvalues, Series):
-
- join_idx, lidx, ridx = left.index.join(rvalues.index, how='outer',
- return_indexers=True)
- rindex = rvalues.index
- name = _maybe_match_name(left, rvalues)
- lvalues = getattr(lvalues, 'values', lvalues)
- rvalues = getattr(rvalues, 'values', rvalues)
- if left.index.equals(rindex):
- index = left.index
- else:
- index = join_idx
-
- if lidx is not None:
- lvalues = com.take_1d(lvalues, lidx)
-
- if ridx is not None:
- rvalues = com.take_1d(rvalues, ridx)
-
- arr = na_op(lvalues, rvalues)
-
- return left._constructor(wrap_results(arr), index=index,
- name=name, dtype=dtype)
- elif isinstance(right, DataFrame):
- return NotImplemented
- else:
- # scalars
- if hasattr(lvalues, 'values'):
- lvalues = lvalues.values
- return left._constructor(wrap_results(na_op(lvalues, rvalues)),
- index=left.index, name=left.name, dtype=dtype)
- return wrapper
-
-
-def _comp_method(op, name, masker=False):
- """
- Wrapper function for Series arithmetic operations, to avoid
- code duplication.
- """
- def na_op(x, y):
- if x.dtype == np.object_:
- if isinstance(y, list):
- y = lib.list_to_object_array(y)
-
- if isinstance(y, (pa.Array, Series)):
- if y.dtype != np.object_:
- result = lib.vec_compare(x, y.astype(np.object_), op)
- else:
- result = lib.vec_compare(x, y, op)
- else:
- result = lib.scalar_compare(x, y, op)
- else:
-
- try:
- result = getattr(x,name)(y)
- if result is NotImplemented:
- raise TypeError("invalid type comparison")
- except (AttributeError):
- result = op(x, y)
-
- return result
-
- def wrapper(self, other):
- from pandas.core.frame import DataFrame
-
- if isinstance(other, Series):
- name = _maybe_match_name(self, other)
- if len(self) != len(other):
- raise ValueError('Series lengths must match to compare')
- return self._constructor(na_op(self.values, other.values),
- index=self.index, name=name)
- elif isinstance(other, DataFrame): # pragma: no cover
- return NotImplemented
- elif isinstance(other, (pa.Array, Series)):
- if len(self) != len(other):
- raise ValueError('Lengths must match to compare')
- return self._constructor(na_op(self.values, np.asarray(other)),
- index=self.index, name=self.name)
- else:
-
- mask = isnull(self)
-
- values = self.values
- other = _index.convert_scalar(values, other)
-
- if issubclass(values.dtype.type, np.datetime64):
- values = values.view('i8')
-
- # scalars
- res = na_op(values, other)
- if np.isscalar(res):
- raise TypeError('Could not compare %s type with Series'
- % type(other))
-
- # always return a full value series here
- res = _values_from_object(res)
-
- res = Series(res, index=self.index, name=self.name, dtype='bool')
-
- # mask out the invalids
- if mask.any():
- res[mask.values] = masker
-
- return res
- return wrapper
-
-
-def _bool_method(op, name):
- """
- Wrapper function for Series arithmetic operations, to avoid
- code duplication.
- """
- def na_op(x, y):
- try:
- result = op(x, y)
- except TypeError:
- if isinstance(y, list):
- y = lib.list_to_object_array(y)
-
- if isinstance(y, (pa.Array, Series)):
- if (x.dtype == np.bool_ and
- y.dtype == np.bool_): # pragma: no cover
- result = op(x, y) # when would this be hit?
- else:
- x = com._ensure_object(x)
- y = com._ensure_object(y)
- result = lib.vec_binop(x, y, op)
- else:
- result = lib.scalar_binop(x, y, op)
-
- return result
-
- def wrapper(self, other):
- from pandas.core.frame import DataFrame
-
- if isinstance(other, Series):
- name = _maybe_match_name(self, other)
- return self._constructor(na_op(self.values, other.values),
- index=self.index, name=name)
- elif isinstance(other, DataFrame):
- return NotImplemented
- else:
- # scalars
- return self._constructor(na_op(self.values, other),
- index=self.index, name=self.name)
- return wrapper
-
-
-def _radd_compat(left, right):
- radd = lambda x, y: y + x
- # GH #353, NumPy 1.5.1 workaround
- try:
- output = radd(left, right)
- except TypeError:
- cond = (_np_version_under1p6 and
- left.dtype == np.object_)
- if cond: # pragma: no cover
- output = np.empty_like(left)
- output.flat[:] = [radd(x, right) for x in left.flat]
- else:
- raise
-
- return output
-
def _coerce_method(converter):
""" install the scalar coercion methods """
@@ -448,50 +65,6 @@ def wrapper(self):
return wrapper
-def _maybe_match_name(a, b):
- name = None
- if a.name == b.name:
- name = a.name
- return name
-
-
-def _flex_method(op, name):
- doc = """
- Binary operator %s with support to substitute a fill_value for missing data
- in one of the inputs
-
- Parameters
- ----------
- other: Series or scalar value
- fill_value : None or float value, default None (NaN)
- Fill missing (NaN) values with this value. If both Series are
- missing, the result will be missing
- level : int or name
- Broadcast across a level, matching Index values on the
- passed MultiIndex level
-
- Returns
- -------
- result : Series
- """ % name
-
- @Appender(doc)
- def f(self, other, level=None, fill_value=None):
- if isinstance(other, Series):
- return self._binop(other, op, level=level, fill_value=fill_value)
- elif isinstance(other, (pa.Array, Series, list, tuple)):
- if len(other) != len(self):
- raise ValueError('Lengths must be equal')
- return self._binop(self._constructor(other, self.index), op,
- level=level, fill_value=fill_value)
- else:
- return self._constructor(op(self.values, other), self.index,
- name=self.name)
-
- f.__name__ = name
- return f
-
-
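The `_flex_method` docstring removed above spells out the `fill_value` contract that the relocated `ops` versions must keep: a value missing on one side is replaced by `fill_value` before the operation, while a value missing on both sides stays missing. A dict-based sketch of that contract (a hypothetical `flex_add`, with plain dicts standing in for index-aligned Series — not the pandas implementation):

```python
def flex_add(a, b, fill_value=None):
    # align on the union of keys; a one-sided gap uses fill_value,
    # a two-sided gap stays NaN (the documented flex-op contract)
    out = {}
    for k in set(a) | set(b):
        x, y = a.get(k), b.get(k)
        if x is None and y is None:
            out[k] = float('nan')
        elif x is None:
            out[k] = fill_value + y if fill_value is not None else float('nan')
        elif y is None:
            out[k] = x + fill_value if fill_value is not None else float('nan')
        else:
            out[k] = x + y
    return out

left = {'a': 1.0, 'b': 2.0}
right = {'b': 10.0, 'c': 5.0}
print(flex_add(left, right, fill_value=0.0))  # 'a' and 'c' survive via the fill
```

Without `fill_value`, the one-sided keys come back as NaN instead, matching the "if both Series are missing, the result will be missing" wording only in the two-sided case.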
def _unbox(func):
@Appender(func.__doc__)
def f(self, *args, **kwargs):
@@ -1423,37 +996,6 @@ def iteritems(self):
if compat.PY3: # pragma: no cover
items = iteritems
- #----------------------------------------------------------------------
- # Arithmetic operators
-
- __add__ = _arith_method(operator.add, '__add__')
- __sub__ = _arith_method(operator.sub, '__sub__')
- __mul__ = _arith_method(operator.mul, '__mul__')
- __truediv__ = _arith_method(
- operator.truediv, '__truediv__', fill_zeros=np.inf)
- __floordiv__ = _arith_method(
- operator.floordiv, '__floordiv__', fill_zeros=np.inf)
- __pow__ = _arith_method(operator.pow, '__pow__')
- __mod__ = _arith_method(operator.mod, '__mod__', fill_zeros=np.nan)
-
- __radd__ = _arith_method(_radd_compat, '__add__')
- __rmul__ = _arith_method(operator.mul, '__mul__')
- __rsub__ = _arith_method(lambda x, y: y - x, '__sub__')
- __rtruediv__ = _arith_method(
- lambda x, y: y / x, '__truediv__', fill_zeros=np.inf)
- __rfloordiv__ = _arith_method(
- lambda x, y: y // x, '__floordiv__', fill_zeros=np.inf)
- __rpow__ = _arith_method(lambda x, y: y ** x, '__pow__')
- __rmod__ = _arith_method(lambda x, y: y % x, '__mod__', fill_zeros=np.nan)
-
- # comparisons
- __gt__ = _comp_method(operator.gt, '__gt__')
- __ge__ = _comp_method(operator.ge, '__ge__')
- __lt__ = _comp_method(operator.lt, '__lt__')
- __le__ = _comp_method(operator.le, '__le__')
- __eq__ = _comp_method(operator.eq, '__eq__')
- __ne__ = _comp_method(operator.ne, '__ne__', True)
-
# inversion
def __neg__(self):
arr = operator.neg(self.values)
@@ -1463,26 +1005,6 @@ def __invert__(self):
arr = operator.inv(self.values)
return self._constructor(arr, self.index, name=self.name)
- # binary logic
- __or__ = _bool_method(operator.or_, '__or__')
- __and__ = _bool_method(operator.and_, '__and__')
- __xor__ = _bool_method(operator.xor, '__xor__')
-
- # Inplace operators
- __iadd__ = __add__
- __isub__ = __sub__
- __imul__ = __mul__
- __itruediv__ = __truediv__
- __ifloordiv__ = __floordiv__
- __ipow__ = __pow__
-
- # Python 2 division operators
- if not compat.PY3:
- __div__ = _arith_method(operator.div, '__div__', fill_zeros=np.inf)
- __rdiv__ = _arith_method(
- lambda x, y: y / x, '__div__', fill_zeros=np.inf)
- __idiv__ = __div__
-
#----------------------------------------------------------------------
# unbox reductions
@@ -2245,16 +1767,6 @@ def _binop(self, other, func, level=None, fill_value=None):
name = _maybe_match_name(self, other)
return self._constructor(result, index=new_index, name=name)
- add = _flex_method(operator.add, 'add')
- sub = _flex_method(operator.sub, 'subtract')
- mul = _flex_method(operator.mul, 'multiply')
- try:
- div = _flex_method(operator.div, 'divide')
- except AttributeError: # pragma: no cover
- # Python 3
- div = _flex_method(operator.truediv, 'divide')
- mod = _flex_method(operator.mod, 'mod')
-
def combine(self, other, func, fill_value=nan):
"""
Perform elementwise binary operation on two Series using given function
@@ -3281,3 +2793,7 @@ def _try_cast(arr, take_fast_path):
Series.plot = _gfx.plot_series
Series.hist = _gfx.hist_series
+
+# Add arithmetic!
+ops.add_flex_arithmetic_methods(Series, **ops.series_flex_funcs)
+ops.add_special_arithmetic_methods(Series, **ops.series_special_funcs)
diff --git a/pandas/sparse/array.py b/pandas/sparse/array.py
index 8a50a000a9526..bed4ede6ce5f3 100644
--- a/pandas/sparse/array.py
+++ b/pandas/sparse/array.py
@@ -7,7 +7,6 @@
from numpy import nan, ndarray
import numpy as np
-import operator
from pandas.core.base import PandasObject
import pandas.core.common as com
@@ -17,21 +16,26 @@
from pandas._sparse import BlockIndex, IntIndex
import pandas._sparse as splib
import pandas.index as _index
+import pandas.core.ops as ops
-def _sparse_op_wrap(op, name):
+def _arith_method(op, name, str_rep=None, default_axis=None,
+ fill_zeros=None, **eval_kwargs):
"""
Wrapper function for Series arithmetic operations, to avoid
code duplication.
"""
-
def wrapper(self, other):
if isinstance(other, np.ndarray):
if len(self) != len(other):
- raise AssertionError("Operands must be of the same size")
- if not isinstance(other, SparseArray):
+ raise AssertionError("length mismatch: %d vs. %d" %
+ (len(self), len(other)))
+ if not isinstance(other, com.ABCSparseArray):
other = SparseArray(other, fill_value=self.fill_value)
- return _sparse_array_op(self, other, op, name)
+ if name[0] == 'r':
+ return _sparse_array_op(other, self, op, name[1:])
+ else:
+ return _sparse_array_op(self, other, op, name)
elif np.isscalar(other):
new_fill_value = op(np.float64(self.fill_value),
np.float64(other))
@@ -41,7 +45,8 @@ def wrapper(self, other):
fill_value=new_fill_value)
else: # pragma: no cover
raise TypeError('operation with %s not supported' % type(other))
-
+ if name.startswith("__"):
+ name = name[2:-2]
wrapper.__name__ = name
return wrapper
@@ -218,23 +223,6 @@ def __unicode__(self):
com.pprint_thing(self.fill_value),
com.pprint_thing(self.sp_index))
- # Arithmetic operators
-
- __add__ = _sparse_op_wrap(operator.add, 'add')
- __sub__ = _sparse_op_wrap(operator.sub, 'sub')
- __mul__ = _sparse_op_wrap(operator.mul, 'mul')
- __truediv__ = _sparse_op_wrap(operator.truediv, 'truediv')
- __floordiv__ = _sparse_op_wrap(operator.floordiv, 'floordiv')
- __pow__ = _sparse_op_wrap(operator.pow, 'pow')
-
- # reverse operators
- __radd__ = _sparse_op_wrap(operator.add, 'add')
- __rsub__ = _sparse_op_wrap(lambda x, y: y - x, 'rsub')
- __rmul__ = _sparse_op_wrap(operator.mul, 'mul')
- __rtruediv__ = _sparse_op_wrap(lambda x, y: y / x, 'rtruediv')
- __rfloordiv__ = _sparse_op_wrap(lambda x, y: y // x, 'rfloordiv')
- __rpow__ = _sparse_op_wrap(lambda x, y: y ** x, 'rpow')
-
def disable(self, other):
raise NotImplementedError('inplace binary ops not supported')
# Inplace operators
@@ -247,8 +235,6 @@ def disable(self, other):
# Python 2 division operators
if not compat.PY3:
- __div__ = _sparse_op_wrap(operator.div, 'div')
- __rdiv__ = _sparse_op_wrap(lambda x, y: y / x, '__rdiv__')
__idiv__ = disable
@property
@@ -539,3 +525,7 @@ def make_sparse(arr, kind='block', fill_value=nan):
sparsified_values = arr[mask]
return sparsified_values, index
+
+ops.add_special_arithmetic_methods(SparseArray,
+ arith_method=_arith_method,
+ use_numexpr=False)
diff --git a/pandas/sparse/frame.py b/pandas/sparse/frame.py
index 93b29cbf91b91..6f83ee90dd9da 100644
--- a/pandas/sparse/frame.py
+++ b/pandas/sparse/frame.py
@@ -25,6 +25,7 @@
from pandas.core.generic import NDFrame
from pandas.sparse.series import SparseSeries, SparseArray
from pandas.util.decorators import Appender
+import pandas.core.ops as ops
class SparseDataFrame(DataFrame):
@@ -815,3 +816,9 @@ def homogenize(series_dict):
output = series_dict
return output
+
+# use unaccelerated ops for sparse objects
+ops.add_flex_arithmetic_methods(SparseDataFrame, use_numexpr=False,
+ **ops.frame_flex_funcs)
+ops.add_special_arithmetic_methods(SparseDataFrame, use_numexpr=False,
+ **ops.frame_special_funcs)
diff --git a/pandas/sparse/panel.py b/pandas/sparse/panel.py
index 286b683b1ea88..dd0204f11edfb 100644
--- a/pandas/sparse/panel.py
+++ b/pandas/sparse/panel.py
@@ -16,6 +16,7 @@
from pandas.util.decorators import deprecate
import pandas.core.common as com
+import pandas.core.ops as ops
class SparsePanelAxis(object):
@@ -462,6 +463,19 @@ def minor_xs(self, key):
default_fill_value=self.default_fill_value,
default_kind=self.default_kind)
+ # TODO: allow SparsePanel to work with flex arithmetic.
+ # pow and mod only work for scalars for now
+ def pow(self, val, *args, **kwargs):
+ """wrapper around `__pow__` (only works for scalar values)"""
+ return self.__pow__(val)
+
+ def mod(self, val, *args, **kwargs):
+ """wrapper around `__mod__` (only works for scalar values"""
+ return self.__mod__(val)
+
+# Sparse objects opt out of numexpr
+SparsePanel._add_aggregate_operations(use_numexpr=False)
+ops.add_special_arithmetic_methods(SparsePanel, use_numexpr=False, **ops.panel_special_funcs)
SparseWidePanel = SparsePanel
diff --git a/pandas/sparse/series.py b/pandas/sparse/series.py
index 50e80e0c202d5..eb97eec75be36 100644
--- a/pandas/sparse/series.py
+++ b/pandas/sparse/series.py
@@ -10,13 +10,14 @@
import operator
-from pandas.core.common import isnull, _values_from_object
+from pandas.core.common import isnull, _values_from_object, _maybe_match_name
from pandas.core.index import Index, _ensure_index
-from pandas.core.series import Series, _maybe_match_name
+from pandas.core.series import Series
from pandas.core.frame import DataFrame
from pandas.core.internals import SingleBlockManager
from pandas.core import generic
import pandas.core.common as com
+import pandas.core.ops as ops
import pandas.core.datetools as datetools
import pandas.index as _index
@@ -32,10 +33,14 @@
# Wrapper function for Series arithmetic methods
-def _sparse_op_wrap(op, name):
+def _arith_method(op, name, str_rep=None, default_axis=None, fill_zeros=None,
+ **eval_kwargs):
"""
Wrapper function for Series arithmetic operations, to avoid
code duplication.
+
+ str_rep, default_axis, fill_zeros and eval_kwargs are not used, but are present
+ for compatibility.
"""
def wrapper(self, other):
@@ -61,6 +66,10 @@ def wrapper(self, other):
raise TypeError('operation with %s not supported' % type(other))
wrapper.__name__ = name
+ if name.startswith("__"):
+ # strip special method names, e.g. `__add__` needs to be `add` when passed
+ # to _sparse_series_op
+ name = name[2:-2]
return wrapper
@@ -272,36 +281,6 @@ def __unicode__(self):
rep = '%s\n%s' % (series_rep, repr(self.sp_index))
return rep
- # Arithmetic operators
-
- __add__ = _sparse_op_wrap(operator.add, 'add')
- __sub__ = _sparse_op_wrap(operator.sub, 'sub')
- __mul__ = _sparse_op_wrap(operator.mul, 'mul')
- __truediv__ = _sparse_op_wrap(operator.truediv, 'truediv')
- __floordiv__ = _sparse_op_wrap(operator.floordiv, 'floordiv')
- __pow__ = _sparse_op_wrap(operator.pow, 'pow')
-
- # Inplace operators
- __iadd__ = __add__
- __isub__ = __sub__
- __imul__ = __mul__
- __itruediv__ = __truediv__
- __ifloordiv__ = __floordiv__
- __ipow__ = __pow__
-
- # reverse operators
- __radd__ = _sparse_op_wrap(operator.add, '__radd__')
- __rsub__ = _sparse_op_wrap(lambda x, y: y - x, '__rsub__')
- __rmul__ = _sparse_op_wrap(operator.mul, '__rmul__')
- __rtruediv__ = _sparse_op_wrap(lambda x, y: y / x, '__rtruediv__')
- __rfloordiv__ = _sparse_op_wrap(lambda x, y: y // x, 'floordiv')
- __rpow__ = _sparse_op_wrap(lambda x, y: y ** x, '__rpow__')
-
- # Python 2 division operators
- if not compat.PY3:
- __div__ = _sparse_op_wrap(operator.div, 'div')
- __rdiv__ = _sparse_op_wrap(lambda x, y: y / x, '__rdiv__')
-
def __array_wrap__(self, result):
"""
Gets called prior to a ufunc (and after)
@@ -659,5 +638,16 @@ def combine_first(self, other):
dense_combined = self.to_dense().combine_first(other)
return dense_combined.to_sparse(fill_value=self.fill_value)
+# overwrite series methods with unaccelerated versions
+ops.add_special_arithmetic_methods(SparseSeries, use_numexpr=False,
+ **ops.series_special_funcs)
+ops.add_flex_arithmetic_methods(SparseSeries, use_numexpr=False,
+ **ops.series_flex_funcs)
+# overwrite basic arithmetic to use SparseSeries version
+# force methods to overwrite previous definitions.
+ops.add_special_arithmetic_methods(SparseSeries, _arith_method,
+ radd_func=operator.add, comp_method=None,
+ bool_method=None, use_numexpr=False, force=True)
+
# backwards compatibility
SparseTimeSeries = SparseSeries
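The ordering of the three `ops.add_*` calls above matters: the generic Series methods are attached first, then the sparse-specific `_arith_method` is applied with `force=True` so it overwrites them. A minimal sketch of that only-overwrite-when-forced binding (a hypothetical `add_methods`, not the real `ops` helper):

```python
def add_methods(cls, methods, force=False):
    # attach each function as a method; an existing attribute is only
    # replaced when force=True (mirroring the two-pass SparseSeries setup)
    for name, func in methods.items():
        if force or not hasattr(cls, name):
            setattr(cls, name, func)

class Demo:
    def ping(self):
        return 'original'

# first pass: generic methods -- the existing 'ping' is preserved
add_methods(Demo, {'ping': lambda self: 'generic', 'pong': lambda self: 'generic'})
# second pass: specialized overrides -- force=True wins
add_methods(Demo, {'ping': lambda self: 'special'}, force=True)
print(Demo().ping(), Demo().pong())  # prints: special generic
```

Without the `force` flag on the last pass, the earlier generic definitions would silently shadow the sparse-aware ones.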
diff --git a/pandas/tests/test_expressions.py b/pandas/tests/test_expressions.py
index 56f52447aadfe..85f5ba1f08b1d 100644
--- a/pandas/tests/test_expressions.py
+++ b/pandas/tests/test_expressions.py
@@ -10,12 +10,16 @@
import numpy as np
from numpy.testing import assert_array_equal
-from pandas.core.api import DataFrame
+from pandas.core.api import DataFrame, Panel
from pandas.computation import expressions as expr
-
-from pandas.util.testing import assert_series_equal, assert_frame_equal
from pandas import compat
+from pandas.util.testing import (assert_almost_equal, assert_series_equal,
+ assert_frame_equal, assert_panel_equal,
+ assert_panel4d_equal)
+import pandas.util.testing as tm
+from numpy.testing.decorators import slow
+
if not expr._USE_NUMEXPR:
try:
@@ -31,6 +35,18 @@
_mixed = DataFrame({ 'A' : _frame['A'].copy(), 'B' : _frame['B'].astype('float32'), 'C' : _frame['C'].astype('int64'), 'D' : _frame['D'].astype('int32') })
_mixed2 = DataFrame({ 'A' : _frame2['A'].copy(), 'B' : _frame2['B'].astype('float32'), 'C' : _frame2['C'].astype('int64'), 'D' : _frame2['D'].astype('int32') })
_integer = DataFrame(np.random.randint(1, 100, size=(10001, 4)), columns = list('ABCD'), dtype='int64')
+_integer2 = DataFrame(np.random.randint(1, 100, size=(101, 4)),
+ columns=list('ABCD'), dtype='int64')
+_frame_panel = Panel(dict(ItemA=_frame.copy(), ItemB=(_frame.copy() + 3), ItemC=_frame.copy(), ItemD=_frame.copy()))
+_frame2_panel = Panel(dict(ItemA=_frame2.copy(), ItemB=(_frame2.copy() + 3),
+ ItemC=_frame2.copy(), ItemD=_frame2.copy()))
+_integer_panel = Panel(dict(ItemA=_integer,
+ ItemB=(_integer + 34).astype('int64')))
+_integer2_panel = Panel(dict(ItemA=_integer2,
+ ItemB=(_integer2 + 34).astype('int64')))
+_mixed_panel = Panel(dict(ItemA=_mixed, ItemB=(_mixed + 3)))
+_mixed2_panel = Panel(dict(ItemA=_mixed2, ItemB=(_mixed2 + 3)))
+
class TestExpressions(unittest.TestCase):
@@ -48,20 +64,27 @@ def setUp(self):
def tearDown(self):
expr._MIN_ELEMENTS = self._MIN_ELEMENTS
- #TODO: add test for Panel
- #TODO: add tests for binary operations
@nose.tools.nottest
- def run_arithmetic_test(self, df, assert_func, check_dtype=False):
+ def run_arithmetic_test(self, df, other, assert_func, check_dtype=False,
+ test_flex=True):
expr._MIN_ELEMENTS = 0
- operations = ['add', 'sub', 'mul','mod','truediv','floordiv','pow']
+ operations = ['add', 'sub', 'mul', 'mod', 'truediv', 'floordiv', 'pow']
if not compat.PY3:
operations.append('div')
for arith in operations:
- op = getattr(operator, arith)
+            if test_flex:
+                op = lambda x, y: getattr(df, arith)(y)
+                op.__name__ = arith
+            else:
+                op = getattr(operator, arith)
expr.set_use_numexpr(False)
- expected = op(df, df)
+ expected = op(df, other)
expr.set_use_numexpr(True)
- result = op(df, df)
+ result = op(df, other)
try:
if check_dtype:
if arith == 'div':
@@ -74,24 +97,150 @@ def run_arithmetic_test(self, df, assert_func, check_dtype=False):
raise
def test_integer_arithmetic(self):
- self.run_arithmetic_test(self.integer, assert_frame_equal)
- self.run_arithmetic_test(self.integer.icol(0), assert_series_equal,
- check_dtype=True)
+ self.run_arithmetic_test(self.integer, self.integer,
+ assert_frame_equal)
+ self.run_arithmetic_test(self.integer.icol(0), self.integer.icol(0),
+ assert_series_equal, check_dtype=True)
+
+ @nose.tools.nottest
+ def run_binary_test(self, df, other, assert_func, check_dtype=False,
+ test_flex=False, numexpr_ops=set(['gt', 'lt', 'ge',
+ 'le', 'eq', 'ne'])):
+ """
+ tests solely that the result is the same whether or not numexpr is
+ enabled. Need to test whether the function does the correct thing
+ elsewhere.
+ """
+ expr._MIN_ELEMENTS = 0
+ expr.set_test_mode(True)
+ operations = ['gt', 'lt', 'ge', 'le', 'eq', 'ne']
+ for arith in operations:
+ if test_flex:
+ op = lambda x, y: getattr(df, arith)(y)
+ op.__name__ = arith
+ else:
+ op = getattr(operator, arith)
+ expr.set_use_numexpr(False)
+ expected = op(df, other)
+ expr.set_use_numexpr(True)
+ expr.get_test_result()
+ result = op(df, other)
+ used_numexpr = expr.get_test_result()
+ try:
+ if check_dtype:
+ if arith == 'div':
+ assert expected.dtype.kind == result.dtype.kind
+ if arith == 'truediv':
+ assert result.dtype.kind == 'f'
+ if arith in numexpr_ops:
+ assert used_numexpr, "Did not use numexpr as expected."
+ else:
+ assert not used_numexpr, "Used numexpr unexpectedly."
+ assert_func(expected, result)
+ except Exception:
+ print("Failed test with operation %r" % arith)
+ print("test_flex was %r" % test_flex)
+ raise
+
+ def run_frame(self, df, other, binary_comp=None, run_binary=True,
+ **kwargs):
+ self.run_arithmetic_test(df, other, assert_frame_equal,
+ test_flex=False, **kwargs)
+ self.run_arithmetic_test(df, other, assert_frame_equal, test_flex=True,
+ **kwargs)
+ if run_binary:
+ if binary_comp is None:
+ expr.set_use_numexpr(False)
+ binary_comp = other + 1
+ expr.set_use_numexpr(True)
+ self.run_binary_test(df, binary_comp, assert_frame_equal,
+ test_flex=False, **kwargs)
+ self.run_binary_test(df, binary_comp, assert_frame_equal,
+ test_flex=True, **kwargs)
+
+ def run_series(self, ser, other, binary_comp=None, **kwargs):
+ self.run_arithmetic_test(ser, other, assert_series_equal,
+ test_flex=False, **kwargs)
+ self.run_arithmetic_test(ser, other, assert_almost_equal,
+ test_flex=True, **kwargs)
+        # Series comparisons use vec_compare instead of numexpr, so these are skipped:
+ # if binary_comp is None:
+ # binary_comp = other + 1
+ # self.run_binary_test(ser, binary_comp, assert_frame_equal, test_flex=False,
+ # **kwargs)
+ # self.run_binary_test(ser, binary_comp, assert_frame_equal, test_flex=True,
+ # **kwargs)
+
+ def run_panel(self, panel, other, binary_comp=None, run_binary=True,
+ assert_func=assert_panel_equal, **kwargs):
+ self.run_arithmetic_test(panel, other, assert_func, test_flex=False,
+ **kwargs)
+ self.run_arithmetic_test(panel, other, assert_func, test_flex=True,
+ **kwargs)
+ if run_binary:
+ if binary_comp is None:
+ binary_comp = other + 1
+ self.run_binary_test(panel, binary_comp, assert_func,
+ test_flex=False, **kwargs)
+ self.run_binary_test(panel, binary_comp, assert_func,
+ test_flex=True, **kwargs)
+
+ def test_integer_arithmetic_frame(self):
+ self.run_frame(self.integer, self.integer)
+
+ def test_integer_arithmetic_series(self):
+ self.run_series(self.integer.icol(0), self.integer.icol(0))
+
+ @slow
+ def test_integer_panel(self):
+ self.run_panel(_integer2_panel, np.random.randint(1, 100))
+
+    def test_float_arithmetic_frame(self):
+ self.run_frame(self.frame2, self.frame2)
+
+ def test_float_arithmetic_series(self):
+ self.run_series(self.frame2.icol(0), self.frame2.icol(0))
+
+ @slow
+ def test_float_panel(self):
+ self.run_panel(_frame2_panel, np.random.randn() + 0.1, binary_comp=0.8)
+
+ @slow
+ def test_panel4d(self):
+ self.run_panel(tm.makePanel4D(), np.random.randn() + 0.5,
+ assert_func=assert_panel4d_equal, binary_comp=3)
+
+ def test_mixed_arithmetic_frame(self):
+ # TODO: FIGURE OUT HOW TO GET IT TO WORK...
+ # can't do arithmetic because comparison methods try to do *entire*
+ # frame instead of by-column
+ self.run_frame(self.mixed2, self.mixed2, run_binary=False)
+
+ def test_mixed_arithmetic_series(self):
+ for col in self.mixed2.columns:
+ self.run_series(self.mixed2[col], self.mixed2[col], binary_comp=4)
+
+ @slow
+ def test_mixed_panel(self):
+ self.run_panel(_mixed2_panel, np.random.randint(1, 100),
+ binary_comp=-2)
def test_float_arithemtic(self):
- self.run_arithmetic_test(self.frame, assert_frame_equal)
- self.run_arithmetic_test(self.frame.icol(0), assert_series_equal,
- check_dtype=True)
+ self.run_arithmetic_test(self.frame, self.frame, assert_frame_equal)
+ self.run_arithmetic_test(self.frame.icol(0), self.frame.icol(0),
+ assert_series_equal, check_dtype=True)
def test_mixed_arithmetic(self):
- self.run_arithmetic_test(self.mixed, assert_frame_equal)
+ self.run_arithmetic_test(self.mixed, self.mixed, assert_frame_equal)
for col in self.mixed.columns:
- self.run_arithmetic_test(self.mixed[col], assert_series_equal)
+ self.run_arithmetic_test(self.mixed[col], self.mixed[col],
+ assert_series_equal)
def test_integer_with_zeros(self):
self.integer *= np.random.randint(0, 2, size=np.shape(self.integer))
- self.run_arithmetic_test(self.integer, assert_frame_equal)
- self.run_arithmetic_test(self.integer.icol(0), assert_series_equal)
+ self.run_arithmetic_test(self.integer, self.integer, assert_frame_equal)
+ self.run_arithmetic_test(self.integer.icol(0), self.integer.icol(0),
+ assert_series_equal)
def test_invalid(self):
diff --git a/pandas/tests/test_frame.py b/pandas/tests/test_frame.py
index 8266502ccdece..a41072d97ddc3 100644
--- a/pandas/tests/test_frame.py
+++ b/pandas/tests/test_frame.py
@@ -4554,35 +4554,72 @@ def test_first_last_valid(self):
self.assert_(index == frame.index[-6])
def test_arith_flex_frame(self):
- ops = ['add', 'sub', 'mul', 'div', 'pow']
- aliases = {'div': 'truediv'}
+ ops = ['add', 'sub', 'mul', 'div', 'truediv', 'pow', 'floordiv', 'mod']
+ if not compat.PY3:
+ aliases = {}
+ else:
+ aliases = {'div': 'truediv'}
for op in ops:
- alias = aliases.get(op, op)
- f = getattr(operator, alias)
- result = getattr(self.frame, op)(2 * self.frame)
- exp = f(self.frame, 2 * self.frame)
- assert_frame_equal(result, exp)
-
- # vs mix float
- result = getattr(self.mixed_float, op)(2 * self.mixed_float)
- exp = f(self.mixed_float, 2 * self.mixed_float)
- assert_frame_equal(result, exp)
- _check_mixed_float(result, dtype = dict(C = None))
-
- # vs mix int
- if op in ['add','sub','mul']:
- result = getattr(self.mixed_int, op)(2 + self.mixed_int)
- exp = f(self.mixed_int, 2 + self.mixed_int)
-
- # overflow in the uint
- dtype = None
- if op in ['sub']:
- dtype = dict(B = 'object', C = None)
- elif op in ['add','mul']:
- dtype = dict(C = None)
+ try:
+ alias = aliases.get(op, op)
+ f = getattr(operator, alias)
+ result = getattr(self.frame, op)(2 * self.frame)
+ exp = f(self.frame, 2 * self.frame)
+ assert_frame_equal(result, exp)
+
+ # vs mix float
+ result = getattr(self.mixed_float, op)(2 * self.mixed_float)
+ exp = f(self.mixed_float, 2 * self.mixed_float)
assert_frame_equal(result, exp)
- _check_mixed_int(result, dtype = dtype)
+ _check_mixed_float(result, dtype = dict(C = None))
+
+ # vs mix int
+ if op in ['add','sub','mul']:
+ result = getattr(self.mixed_int, op)(2 + self.mixed_int)
+ exp = f(self.mixed_int, 2 + self.mixed_int)
+
+ # overflow in the uint
+ dtype = None
+ if op in ['sub']:
+ dtype = dict(B = 'object', C = None)
+ elif op in ['add','mul']:
+ dtype = dict(C = None)
+ assert_frame_equal(result, exp)
+ _check_mixed_int(result, dtype = dtype)
+
+ # rops
+ r_f = lambda x, y: f(y, x)
+ result = getattr(self.frame, 'r' + op)(2 * self.frame)
+ exp = r_f(self.frame, 2 * self.frame)
+ assert_frame_equal(result, exp)
+
+ # vs mix float
+ result = getattr(self.mixed_float, op)(2 * self.mixed_float)
+ exp = f(self.mixed_float, 2 * self.mixed_float)
+ assert_frame_equal(result, exp)
+ _check_mixed_float(result, dtype = dict(C = None))
+
+ result = getattr(self.intframe, op)(2 * self.intframe)
+ exp = f(self.intframe, 2 * self.intframe)
+ assert_frame_equal(result, exp)
+
+ # vs mix int
+ if op in ['add','sub','mul']:
+ result = getattr(self.mixed_int, op)(2 + self.mixed_int)
+ exp = f(self.mixed_int, 2 + self.mixed_int)
+
+ # overflow in the uint
+ dtype = None
+ if op in ['sub']:
+ dtype = dict(B = 'object', C = None)
+ elif op in ['add','mul']:
+ dtype = dict(C = None)
+ assert_frame_equal(result, exp)
+ _check_mixed_int(result, dtype = dtype)
+ except:
+ print("Failing operation %r" % op)
+ raise
# ndim >= 3
ndim_5 = np.ones(self.frame.shape + (3, 4, 5))
diff --git a/pandas/tests/test_panel.py b/pandas/tests/test_panel.py
index 289bcb9db0c7e..5d3f7b350250d 100644
--- a/pandas/tests/test_panel.py
+++ b/pandas/tests/test_panel.py
@@ -1,8 +1,6 @@
# pylint: disable=W0612,E1101
from datetime import datetime
-from pandas.compat import range, lrange, StringIO, cPickle, OrderedDict
-from pandas import compat
import operator
import unittest
import nose
@@ -16,6 +14,7 @@
from pandas.core.series import remove_na
import pandas.core.common as com
from pandas import compat
+from pandas.compat import range, lrange, StringIO, cPickle, OrderedDict
from pandas.util.testing import (assert_panel_equal,
assert_frame_equal,
@@ -50,7 +49,7 @@ def test_cumsum(self):
def not_hashable(self):
c_empty = Panel()
- c = Panel(pd.Panel([[[1]]]))
+ c = Panel(Panel([[[1]]]))
self.assertRaises(TypeError, hash, c_empty)
self.assertRaises(TypeError, hash, c)
@@ -313,14 +312,32 @@ def check_op(op, name):
assert_frame_equal(result.minor_xs(idx),
op(self.panel.minor_xs(idx), xs))
+ from pandas import SparsePanel
+ ops = ['add', 'sub', 'mul', 'truediv', 'floordiv']
+ if not compat.PY3:
+ ops.append('div')
+ # pow, mod not supported for SparsePanel as flex ops (for now)
+ if not isinstance(self.panel, SparsePanel):
+ ops.extend(['pow', 'mod'])
+ else:
+ idx = self.panel.minor_axis[1]
+ with assertRaisesRegexp(ValueError, "Simple arithmetic.*scalar"):
+ self.panel.pow(self.panel.minor_xs(idx), axis='minor')
+ with assertRaisesRegexp(ValueError, "Simple arithmetic.*scalar"):
+ self.panel.mod(self.panel.minor_xs(idx), axis='minor')
- check_op(operator.add, 'add')
- check_op(operator.sub, 'subtract')
- check_op(operator.mul, 'multiply')
+ for op in ops:
+ try:
+ check_op(getattr(operator, op), op)
+ except:
+ print("Failing operation: %r" % op)
+ raise
if compat.PY3:
- check_op(operator.truediv, 'divide')
- else:
- check_op(operator.div, 'divide')
+ try:
+ check_op(operator.truediv, 'div')
+ except:
+ print("Failing operation: %r" % name)
+ raise
def test_combinePanel(self):
result = self.panel.add(self.panel)
@@ -1737,6 +1754,31 @@ def test_operators(self):
result = (self.panel + 1).to_panel()
assert_frame_equal(wp['ItemA'] + 1, result['ItemA'])
+ def test_arith_flex_panel(self):
+ ops = ['add', 'sub', 'mul', 'div', 'truediv', 'pow', 'floordiv', 'mod']
+ if not compat.PY3:
+ aliases = {}
+ else:
+ aliases = {'div': 'truediv'}
+ self.panel = self.panel.to_panel()
+ n = np.random.randint(-50, 50)
+ for op in ops:
+ try:
+ alias = aliases.get(op, op)
+ f = getattr(operator, alias)
+ result = getattr(self.panel, op)(n)
+ exp = f(self.panel, n)
+ assert_panel_equal(result, exp, check_panel_type=True)
+
+ # rops
+ r_f = lambda x, y: f(y, x)
+ result = getattr(self.panel, 'r' + op)(n)
+ exp = r_f(self.panel, n)
+ assert_panel_equal(result, exp)
+ except:
+ print("Failing operation %r" % op)
+ raise
+
def test_sort(self):
def is_sorted(arr):
return (arr[1:] > arr[:-1]).any()
diff --git a/pandas/tests/test_series.py b/pandas/tests/test_series.py
index f8320149f4ac6..479d627e72346 100644
--- a/pandas/tests/test_series.py
+++ b/pandas/tests/test_series.py
@@ -269,7 +269,6 @@ class SafeForSparse(object):
_ts = tm.makeTimeSeries()
-
class TestSeries(unittest.TestCase, CheckNameIntegration):
_multiprocess_can_split_ = True
@@ -1946,21 +1945,27 @@ def test_all_any(self):
self.assert_(bool_series.any())
def test_op_method(self):
- def _check_op(series, other, op, alt):
- result = op(series, other)
- expected = alt(series, other)
- tm.assert_almost_equal(result, expected)
-
- def check(series, other):
- simple_ops = ['add', 'sub', 'mul']
+ def check(series, other, check_reverse=False):
+ simple_ops = ['add', 'sub', 'mul', 'floordiv', 'truediv', 'pow']
+ if not compat.PY3:
+ simple_ops.append('div')
for opname in simple_ops:
- _check_op(series, other, getattr(Series, opname),
- getattr(operator, opname))
+ op = getattr(Series, opname)
+ alt = getattr(operator, opname)
+ result = op(series, other)
+ expected = alt(series, other)
+ tm.assert_almost_equal(result, expected)
+ if check_reverse:
+ rop = getattr(Series, "r" + opname)
+ result = rop(series, other)
+ expected = alt(other, series)
+ tm.assert_almost_equal(result, expected)
check(self.ts, self.ts * 2)
check(self.ts, self.ts[::2])
- check(self.ts, 5)
+ check(self.ts, 5, check_reverse=True)
+ check(tm.makeFloatSeries(), tm.makeFloatSeries(), check_reverse=True)
def test_neg(self):
assert_series_equal(-self.series, -1 * self.series)
@@ -2186,13 +2191,18 @@ def test_timedeltas_with_DateOffset(self):
s = Series([Timestamp('20130101 9:01'), Timestamp('20130101 9:02')])
result = s + pd.offsets.Second(5)
+ result2 = pd.offsets.Second(5) + s
expected = Series(
[Timestamp('20130101 9:01:05'), Timestamp('20130101 9:02:05')])
+ assert_series_equal(result, expected)
+ assert_series_equal(result2, expected)
result = s + pd.offsets.Milli(5)
+ result2 = pd.offsets.Milli(5) + s
expected = Series(
[Timestamp('20130101 9:01:00.005'), Timestamp('20130101 9:02:00.005')])
assert_series_equal(result, expected)
+ assert_series_equal(result2, expected)
result = s + pd.offsets.Minute(5) + pd.offsets.Milli(5)
expected = Series(
@@ -2203,20 +2213,25 @@ def test_timedeltas_with_DateOffset(self):
# operate with np.timedelta64 correctly
result = s + np.timedelta64(1, 's')
+ result2 = np.timedelta64(1, 's') + s
expected = Series(
[Timestamp('20130101 9:01:01'), Timestamp('20130101 9:02:01')])
assert_series_equal(result, expected)
+ assert_series_equal(result2, expected)
result = s + np.timedelta64(5, 'ms')
+ result2 = np.timedelta64(5, 'ms') + s
expected = Series(
[Timestamp('20130101 9:01:00.005'), Timestamp('20130101 9:02:00.005')])
assert_series_equal(result, expected)
+ assert_series_equal(result2, expected)
# valid DateOffsets
for do in [ 'Hour', 'Minute', 'Second', 'Day', 'Micro',
'Milli', 'Nano' ]:
op = getattr(pd.offsets,do)
s + op(5)
+ op(5) + s
# invalid DateOffsets
for do in [ 'Week', 'BDay', 'BQuarterEnd', 'BMonthEnd', 'BYearEnd',
@@ -2225,6 +2240,7 @@ def test_timedeltas_with_DateOffset(self):
'MonthBegin', 'QuarterBegin' ]:
op = getattr(pd.offsets,do)
self.assertRaises(TypeError, s.__add__, op(5))
+ self.assertRaises(TypeError, s.__radd__, op(5))
def test_timedelta64_operations_with_timedeltas(self):
@@ -2237,6 +2253,11 @@ def test_timedelta64_operations_with_timedeltas(self):
self.assert_(result.dtype == 'm8[ns]')
assert_series_equal(result, expected)
+ result2 = td2 - td1
+ expected = (Series([timedelta(seconds=1)] * 3) -
+ Series([timedelta(seconds=0)] * 3))
+ assert_series_equal(result2, expected)
+
# roundtrip
assert_series_equal(result + td2,td1)
@@ -2318,6 +2339,10 @@ def test_timedelta64_conversions(self):
result = s1 / np.timedelta64(m,unit)
assert_series_equal(result, expected)
+ # reverse op
+ expected = s1.apply(lambda x: np.timedelta64(m,unit) / x)
+ result = np.timedelta64(m,unit) / s1
+
def test_timedelta64_equal_timedelta_supported_ops(self):
ser = Series([Timestamp('20130301'), Timestamp('20130228 23:00:00'),
Timestamp('20130228 22:00:00'),
@@ -2351,44 +2376,58 @@ def timedelta64(*args):
def test_operators_datetimelike(self):
- # timedelta64 ###
- td1 = Series([timedelta(minutes=5, seconds=3)] * 3)
- td2 = timedelta(minutes=5, seconds=4)
- for op in ['__mul__', '__floordiv__', '__pow__']:
- op = getattr(td1, op, None)
- if op is not None:
- self.assertRaises(TypeError, op, td2)
+ def run_ops(ops, get_ser, test_ser):
+ for op in ops:
+ try:
+ op = getattr(get_ser, op, None)
+ if op is not None:
+ self.assertRaises(TypeError, op, test_ser)
+ except:
+ print("Failed on op %r" % op)
+ raise
+ ### timedelta64 ###
+ td1 = Series([timedelta(minutes=5,seconds=3)]*3)
+ td2 = timedelta(minutes=5,seconds=4)
+ ops = ['__mul__','__floordiv__','__pow__',
+ '__rmul__','__rfloordiv__','__rpow__']
+ run_ops(ops, td1, td2)
td1 + td2
+ td2 + td1
td1 - td2
+ td2 - td1
td1 / td2
-
- # datetime64 ###
- dt1 = Series(
- [Timestamp('20111230'), Timestamp('20120101'), Timestamp('20120103')])
- dt2 = Series(
- [Timestamp('20111231'), Timestamp('20120102'), Timestamp('20120104')])
- for op in ['__add__', '__mul__', '__floordiv__', '__truediv__', '__div__', '__pow__']:
- sop = getattr(dt1, op, None)
- if sop is not None:
- self.assertRaises(TypeError, sop, dt2)
+ td2 / td1
+
+ ### datetime64 ###
+ dt1 = Series([Timestamp('20111230'), Timestamp('20120101'),
+ Timestamp('20120103')])
+ dt2 = Series([Timestamp('20111231'), Timestamp('20120102'),
+ Timestamp('20120104')])
+ ops = ['__add__', '__mul__', '__floordiv__', '__truediv__', '__div__',
+ '__pow__', '__radd__', '__rmul__', '__rfloordiv__',
+ '__rtruediv__', '__rdiv__', '__rpow__']
+ run_ops(ops, dt1, dt2)
dt1 - dt2
+ dt2 - dt1
- # datetime64 with timetimedelta ###
- for op in ['__mul__', '__floordiv__', '__truediv__', '__div__', '__pow__']:
- sop = getattr(dt1, op, None)
- if sop is not None:
- self.assertRaises(TypeError, sop, td1)
+ ### datetime64 with timetimedelta ###
+ ops = ['__mul__', '__floordiv__', '__truediv__', '__div__', '__pow__',
+ '__rmul__', '__rfloordiv__', '__rtruediv__', '__rdiv__',
+ '__rpow__']
+ run_ops(ops, dt1, td1)
dt1 + td1
+ td1 + dt1
dt1 - td1
-
- # timetimedelta with datetime64 ###
- for op in ['__sub__', '__mul__', '__floordiv__', '__truediv__', '__div__', '__pow__']:
- sop = getattr(td1, op, None)
- if sop is not None:
- self.assertRaises(TypeError, sop, dt1)
-
- # timedelta + datetime ok
+ # TODO: Decide if this ought to work.
+ # td1 - dt1
+
+ ### timetimedelta with datetime64 ###
+ ops = ['__sub__', '__mul__', '__floordiv__', '__truediv__', '__div__',
+ '__pow__', '__rsub__', '__rmul__', '__rfloordiv__',
+ '__rtruediv__', '__rdiv__', '__rpow__']
+ run_ops(ops, td1, dt1)
td1 + dt1
+ dt1 + td1
def test_timedelta64_functions(self):
@@ -2517,6 +2556,9 @@ def test_sub_of_datetime_from_TimeSeries(self):
result = _possibly_cast_to_timedelta(np.abs(a - b))
self.assert_(result.dtype == 'timedelta64[ns]')
+ result = _possibly_cast_to_timedelta(np.abs(b - a))
+ self.assert_(result.dtype == 'timedelta64[ns]')
+
def test_datetime64_with_index(self):
# arithmetic integer ops with an index
@@ -2537,8 +2579,8 @@ def test_datetime64_with_index(self):
df = DataFrame(np.random.randn(5,2),index=date_range('20130101',periods=5))
df['date'] = Timestamp('20130102')
- df['expected'] = df['date']-df.index.to_series()
- df['result'] = df['date']-df.index
+ df['expected'] = df['date'] - df.index.to_series()
+ df['result'] = df['date'] - df.index
assert_series_equal(df['result'],df['expected'])
def test_timedelta64_nan(self):
@@ -2586,7 +2628,9 @@ def test_operators_na_handling(self):
index=[date(2012, 1, 1), date(2012, 1, 2)])
result = s + s.shift(1)
+ result2 = s.shift(1) + s
self.assert_(isnull(result[0]))
+ self.assert_(isnull(result2[0]))
s = Series(['foo', 'bar', 'baz', np.nan])
result = 'prefix_' + s
@@ -2616,7 +2660,7 @@ def test_comparison_operators_with_nas(self):
s = Series(bdate_range('1/1/2000', periods=10), dtype=object)
s[::2] = np.nan
- # test that comparions work
+ # test that comparisons work
ops = ['lt', 'le', 'gt', 'ge', 'eq', 'ne']
for op in ops:
val = s[5]
@@ -2753,7 +2797,10 @@ def tester(a, b):
assert_series_equal(tester(s, list(s)), s)
d = DataFrame({'A': s})
- self.assertRaises(TypeError, tester, s, d)
+ # TODO: Fix this exception - needs to be fixed! (see GH5035)
+ # (previously this was a TypeError because series returned
+ # NotImplemented
+ self.assertRaises(ValueError, tester, s, d)
def test_idxmin(self):
# test idxmin
@@ -2942,19 +2989,13 @@ def test_series_frame_radd_bug(self):
self.assertRaises(TypeError, operator.add, datetime.now(), self.ts)
def test_operators_frame(self):
- import sys
- buf = StringIO()
- tmp = sys.stderr
- sys.stderr = buf
# rpow does not work with DataFrame
- try:
- df = DataFrame({'A': self.ts})
+ df = DataFrame({'A': self.ts})
- tm.assert_almost_equal(self.ts + self.ts, (self.ts + df)['A'])
- tm.assert_almost_equal(self.ts ** self.ts, (self.ts ** df)['A'])
- tm.assert_almost_equal(self.ts < self.ts, (self.ts < df)['A'])
- finally:
- sys.stderr = tmp
+ tm.assert_almost_equal(self.ts + self.ts, (self.ts + df)['A'])
+ tm.assert_almost_equal(self.ts ** self.ts, (self.ts ** df)['A'])
+ tm.assert_almost_equal(self.ts < self.ts, (self.ts < df)['A'])
+ tm.assert_almost_equal(self.ts / self.ts, (self.ts / df)['A'])
def test_operators_combine(self):
def _check_fill(meth, op, a, b, fill_value=0):
@@ -2987,8 +3028,10 @@ def _check_fill(meth, op, a, b, fill_value=0):
a = Series([nan, 1., 2., 3., nan], index=np.arange(5))
b = Series([nan, 1, nan, 3, nan, 4.], index=np.arange(6))
- ops = [Series.add, Series.sub, Series.mul, Series.div]
- equivs = [operator.add, operator.sub, operator.mul]
+ ops = [Series.add, Series.sub, Series.mul, Series.pow,
+ Series.truediv, Series.div]
+ equivs = [operator.add, operator.sub, operator.mul, operator.pow,
+ operator.truediv]
if compat.PY3:
equivs.append(operator.truediv)
else:
@@ -3253,9 +3296,12 @@ def test_value_counts_nunique(self):
# timedelta64[ns]
from datetime import timedelta
td = df.dt - df.dt + timedelta(1)
+ td2 = timedelta(1) + (df.dt - df.dt)
result = td.value_counts()
+ result2 = td2.value_counts()
#self.assert_(result.index.dtype == 'timedelta64[ns]')
self.assert_(result.index.dtype == 'int64')
+ self.assert_(result2.index.dtype == 'int64')
# basics.rst doc example
series = Series(np.random.randn(500))
diff --git a/pandas/tseries/offsets.py b/pandas/tseries/offsets.py
index 92ed1e415d11a..232ebd2c3726c 100644
--- a/pandas/tseries/offsets.py
+++ b/pandas/tseries/offsets.py
@@ -19,6 +19,10 @@
#----------------------------------------------------------------------
# DateOffset
+class ApplyTypeError(TypeError):
+ # sentinel class for catching the apply error to return NotImplemented
+ pass
+
class CacheableOffset(object):
@@ -128,7 +132,7 @@ def __repr__(self):
kwds_new[key] = self.kwds[key]
if len(kwds_new) > 0:
attrs.append('='.join((attr, repr(kwds_new))))
- else:
+ else:
if attr not in exclude:
attrs.append('='.join((attr, repr(getattr(self, attr)))))
@@ -136,7 +140,7 @@ def __repr__(self):
plural = 's'
else:
plural = ''
-
+
n_str = ""
if self.n != 1:
n_str = "%s * " % self.n
@@ -170,19 +174,21 @@ def __call__(self, other):
return self.apply(other)
def __add__(self, other):
- return self.apply(other)
+ try:
+ return self.apply(other)
+ except ApplyTypeError:
+ return NotImplemented
def __radd__(self, other):
return self.__add__(other)
def __sub__(self, other):
if isinstance(other, datetime):
- raise TypeError('Cannot subtract datetime from offset!')
+ raise TypeError('Cannot subtract datetime from offset.')
elif type(other) == type(self):
return self.__class__(self.n - other.n, **self.kwds)
else: # pragma: no cover
- raise TypeError('Cannot subtract %s from %s'
- % (type(other), type(self)))
+ return NotImplemented
def __rsub__(self, other):
return self.__class__(-self.n, **self.kwds) + other
@@ -273,7 +279,7 @@ def __repr__(self): #TODO: Figure out if this should be merged into DateOffset
plural = 's'
else:
plural = ''
-
+
n_str = ""
if self.n != 1:
n_str = "%s * " % self.n
@@ -370,8 +376,8 @@ def apply(self, other):
return BDay(self.n, offset=self.offset + other,
normalize=self.normalize)
else:
- raise TypeError('Only know how to combine business day with '
- 'datetime or timedelta!')
+ raise ApplyTypeError('Only know how to combine business day with '
+ 'datetime or timedelta.')
@classmethod
def onOffset(cls, dt):
@@ -463,8 +469,8 @@ def apply(self, other):
return BDay(self.n, offset=self.offset + other,
normalize=self.normalize)
else:
- raise TypeError('Only know how to combine trading day with '
- 'datetime, datetime64 or timedelta!')
+ raise ApplyTypeError('Only know how to combine trading day with '
+ 'datetime, datetime64 or timedelta.')
dt64 = self._to_dt64(other)
day64 = dt64.astype('datetime64[D]')
@@ -1177,7 +1183,10 @@ def __add__(self, other):
return type(self)(self.n + other.n)
else:
return _delta_to_tick(self.delta + other.delta)
- return self.apply(other)
+ try:
+ return self.apply(other)
+ except ApplyTypeError:
+ return NotImplemented
def __eq__(self, other):
if isinstance(other, compat.string_types):
@@ -1220,8 +1229,8 @@ def apply(self, other):
return other + self.delta
elif isinstance(other, type(self)):
return type(self)(self.n + other.n)
- else: # pragma: no cover
- raise TypeError('Unhandled type: %s' % type(other))
+ else:
+ raise ApplyTypeError('Unhandled type: %s' % type(other).__name__)
_rule_base = 'undefined'
| There's a lot of overlap right now; this is the first step toward
making this cleaner.
- Abstract all arithmetic methods into core/ops
- Add full range of flex arithmetic methods to all NDFrame/ndarray
PandasObjects (except for SparsePanel pow and mod, which only work for
scalars)
- Normalize arithmetic methods signature (see
`ops.add_special_arithmetic_methods` and
`ops.add_flex_arithmetic_methods` for signature).
- Opt-in more arithmetic operations with numexpr (except for
SparsePanel, which has to opt-out because it doesn't respond to
`shape`).
- BUG: Fix `_fill_zeros` call to work even if TypeError (previously
was inconsistent).
- Add bind method to core/common
Closes #3765.
Closes #4334.
Closes #4051.
Closes #5033.
Closes #4331.
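
The flex arithmetic surface described above can be illustrated with a short
sketch. This is written against a current pandas rather than the exact
0.13-era API, so treat it as an assumption-laden illustration, not the
PR's own test code:

```python
import pandas as pd

s = pd.Series([1.0, 2.0, 3.0])
df = pd.DataFrame({"A": [1.0, 2.0], "B": [3.0, 4.0]})

# Named flex methods mirror the plain operators...
assert (s.add(1) == s + 1).all()
assert (s.pow(2) == s ** 2).all()

# ...and each gains a reflected ("r"-prefixed) twin: radd, rsub, rpow, ...
assert (s.rsub(10) == 10 - s).all()
assert (df.rdiv(1) == 1 / df).all().all()

# fill_value lets mismatched indices combine instead of producing NaN.
a = pd.Series([1.0, 2.0], index=[0, 1])
b = pd.Series([10.0], index=[1])
print(a.add(b, fill_value=0).tolist())  # [1.0, 12.0]
```

The same named/reflected pairs exist on DataFrame and Panel-era objects,
which is what the `ops.add_flex_arithmetic_methods` factory mentioned above
installs in one place.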
| https://api.github.com/repos/pandas-dev/pandas/pulls/5022 | 2013-09-28T20:37:55Z | 2013-09-29T17:02:25Z | 2013-09-29T17:02:25Z | 2014-07-09T23:49:20Z |
ER: give a better error message for hashing indices | diff --git a/pandas/core/index.py b/pandas/core/index.py
index d488a29182a18..ffa1d7cfa0c3b 100644
--- a/pandas/core/index.py
+++ b/pandas/core/index.py
@@ -600,7 +600,7 @@ def __contains__(self, key):
return False
def __hash__(self):
- return hash(self.view(np.ndarray))
+ raise TypeError("unhashable type: %r" % type(self).__name__)
def __getitem__(self, key):
"""Override numpy.ndarray's __getitem__ method to work as desired"""
@@ -1852,6 +1852,7 @@ def equals(self, other):
# e.g. fails in numpy 1.6 with DatetimeIndex #1681
return False
+
class MultiIndex(Index):
"""
diff --git a/pandas/tests/test_index.py b/pandas/tests/test_index.py
index 857836fa698ce..87772b5a86326 100644
--- a/pandas/tests/test_index.py
+++ b/pandas/tests/test_index.py
@@ -12,7 +12,8 @@
import numpy as np
from numpy.testing import assert_array_equal
-from pandas.core.index import Index, Float64Index, Int64Index, MultiIndex, InvalidIndexError
+from pandas.core.index import (Index, Float64Index, Int64Index, MultiIndex,
+ InvalidIndexError)
from pandas.core.frame import DataFrame
from pandas.core.series import Series
from pandas.util.testing import (assert_almost_equal, assertRaisesRegexp,
@@ -75,7 +76,10 @@ def test_set_name_methods(self):
self.assertEqual(ind.names, [name])
def test_hash_error(self):
- self.assertRaises(TypeError, hash, self.strIndex)
+ with tm.assertRaisesRegexp(TypeError,
+ "unhashable type: %r" %
+ type(self.strIndex).__name__):
+ hash(self.strIndex)
def test_new_axis(self):
new_index = self.dateIndex[None, :]
@@ -661,6 +665,12 @@ def setUp(self):
self.mixed = Float64Index([1.5, 2, 3, 4, 5])
self.float = Float64Index(np.arange(5) * 2.5)
+ def test_hash_error(self):
+ with tm.assertRaisesRegexp(TypeError,
+ "unhashable type: %r" %
+ type(self.float).__name__):
+ hash(self.float)
+
def check_is_index(self, i):
self.assert_(isinstance(i, Index) and not isinstance(i, Float64Index))
@@ -736,6 +746,7 @@ def test_astype(self):
self.assert_(i.equals(result))
self.check_is_index(result)
+
class TestInt64Index(unittest.TestCase):
_multiprocess_can_split_ = True
@@ -779,6 +790,12 @@ def test_constructor_corner(self):
arr = np.array([1, '2', 3, '4'], dtype=object)
self.assertRaises(TypeError, Int64Index, arr)
+ def test_hash_error(self):
+ with tm.assertRaisesRegexp(TypeError,
+ "unhashable type: %r" %
+ type(self.index).__name__):
+ hash(self.index)
+
def test_copy(self):
i = Int64Index([], name='Foo')
i_copy = i.copy()
@@ -1155,6 +1172,12 @@ def setUp(self):
labels=[major_labels, minor_labels],
names=self.index_names)
+ def test_hash_error(self):
+ with tm.assertRaisesRegexp(TypeError,
+ "unhashable type: %r" %
+ type(self.index).__name__):
+ hash(self.index)
+
def test_set_names_and_rename(self):
# so long as these are synonyms, we don't need to test set_names
self.assert_(self.index.rename == self.index.set_names)
@@ -2231,6 +2254,7 @@ def test_get_combined_index():
result = _get_combined_index([])
assert(result.equals(Index([])))
+
if __name__ == '__main__':
nose.runmodule(argv=[__file__, '-vvs', '-x', '--pdb', '--pdb-failure'],
exit=False)
diff --git a/pandas/tseries/tests/test_period.py b/pandas/tseries/tests/test_period.py
index 96e96607ad9de..173ebeb199b3b 100644
--- a/pandas/tseries/tests/test_period.py
+++ b/pandas/tseries/tests/test_period.py
@@ -1057,6 +1057,13 @@ class TestPeriodIndex(TestCase):
def setUp(self):
pass
+ def test_hash_error(self):
+ index = period_range('20010101', periods=10)
+ with tm.assertRaisesRegexp(TypeError,
+ "unhashable type: %r" %
+ type(index).__name__):
+ hash(index)
+
def test_make_time_series(self):
index = PeriodIndex(freq='A', start='1/1/2001', end='12/1/2009')
series = Series(1, index=index)
diff --git a/pandas/tseries/tests/test_timeseries.py b/pandas/tseries/tests/test_timeseries.py
index 0e5e3d1922ec4..d44ae94bdb718 100644
--- a/pandas/tseries/tests/test_timeseries.py
+++ b/pandas/tseries/tests/test_timeseries.py
@@ -1716,6 +1716,13 @@ def _simple_ts(start, end, freq='D'):
class TestDatetimeIndex(unittest.TestCase):
_multiprocess_can_split_ = True
+ def test_hash_error(self):
+ index = date_range('20010101', periods=10)
+ with tm.assertRaisesRegexp(TypeError,
+ "unhashable type: %r" %
+ type(index).__name__):
+ hash(index)
+
def test_stringified_slice_with_tz(self):
#GH2658
import datetime
| https://api.github.com/repos/pandas-dev/pandas/pulls/5019 | 2013-09-28T19:55:19Z | 2013-09-28T20:57:43Z | 2013-09-28T20:57:43Z | 2014-07-16T08:31:55Z | |
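
To make the intent of the patch above concrete: after this change, hashing
any Index raises a `TypeError` whose message names the concrete subclass,
instead of silently hashing the underlying ndarray view. A minimal sketch
(current pandas still behaves this way, though the subclass names differ
from the 0.13-era `Int64Index`/`Float64Index`):

```python
import pandas as pd

idx = pd.Index(["a", "b", "c"])
try:
    hash(idx)
except TypeError as exc:
    # e.g. "unhashable type: 'Index'" -- the repr of the subclass name
    print(exc)
else:
    raise AssertionError("hash(Index) should raise TypeError")
```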
BUG: Fix a bug when indexing np.nan via loc/iloc (GH5016) | diff --git a/doc/source/release.rst b/doc/source/release.rst
index daee460fc50a1..66c3dcd203a6a 100644
--- a/doc/source/release.rst
+++ b/doc/source/release.rst
@@ -497,6 +497,7 @@ Bug Fixes
- Fixed wrong index name during read_csv if using usecols. Applies to c parser only. (:issue:`4201`)
- ``Timestamp`` objects can now appear in the left hand side of a comparison
operation with a ``Series`` or ``DataFrame`` object (:issue:`4982`).
+ - Fix a bug when indexing with ``np.nan`` via ``iloc/loc`` (:issue:`5016`)
pandas 0.12.0
-------------
diff --git a/pandas/core/index.py b/pandas/core/index.py
index d488a29182a18..63bda40932647 100644
--- a/pandas/core/index.py
+++ b/pandas/core/index.py
@@ -424,7 +424,7 @@ def _convert_scalar_indexer(self, key, typ=None):
def to_int():
ikey = int(key)
if ikey != key:
- self._convert_indexer_error(key, 'label')
+ return self._convert_indexer_error(key, 'label')
return ikey
if typ == 'iloc':
diff --git a/pandas/core/indexing.py b/pandas/core/indexing.py
index afbeb53d857e2..eb377c4b7955f 100644
--- a/pandas/core/indexing.py
+++ b/pandas/core/indexing.py
@@ -1,12 +1,12 @@
# pylint: disable=W0223
from datetime import datetime
-from pandas.core.common import _asarray_tuplesafe, is_list_like
from pandas.core.index import Index, MultiIndex, _ensure_index
from pandas.compat import range, zip
import pandas.compat as compat
import pandas.core.common as com
from pandas.core.common import (_is_bool_indexer, is_integer_dtype,
+ _asarray_tuplesafe, is_list_like, isnull,
ABCSeries, ABCDataFrame, ABCPanel)
import pandas.lib as lib
@@ -979,12 +979,20 @@ def _has_valid_type(self, key, axis):
else:
def error():
+ if isnull(key):
+ raise ValueError("cannot use label indexing with a null key")
raise KeyError("the label [%s] is not in the [%s]" % (key,self.obj._get_axis_name(axis)))
- key = self._convert_scalar_indexer(key, axis)
try:
+ key = self._convert_scalar_indexer(key, axis)
if not key in ax:
error()
+ except (TypeError) as e:
+
+ # python 3 type errors should be raised
+ if 'unorderable' in str(e): # pragma: no cover
+ error()
+ raise
except:
error()
diff --git a/pandas/core/internals.py b/pandas/core/internals.py
index 8fcb64e6d0eda..f10e1612f7fe9 100644
--- a/pandas/core/internals.py
+++ b/pandas/core/internals.py
@@ -97,8 +97,13 @@ def ref_locs(self):
indexer = self.ref_items.get_indexer(self.items)
indexer = com._ensure_platform_int(indexer)
if (indexer == -1).any():
- raise AssertionError('Some block items were not in block '
- 'ref_items')
+
+ # this means that we have nan's in our block
+ try:
+ indexer[indexer == -1] = np.arange(len(self.items))[isnull(self.items)]
+ except:
+ raise AssertionError('Some block items were not in block '
+ 'ref_items')
self._ref_locs = indexer
return self._ref_locs
@@ -2500,9 +2505,18 @@ def _consolidate_inplace(self):
def get(self, item):
if self.items.is_unique:
+
+ if isnull(item):
+ indexer = np.arange(len(self.items))[isnull(self.items)]
+ return self.get_for_nan_indexer(indexer)
+
_, block = self._find_block(item)
return block.get(item)
else:
+
+ if isnull(item):
+ raise ValueError("cannot label index with a null key")
+
indexer = self.items.get_loc(item)
ref_locs = np.array(self._set_ref_locs())
@@ -2528,14 +2542,31 @@ def get(self, item):
def iget(self, i):
item = self.items[i]
+
+ # unique
if self.items.is_unique:
- return self.get(item)
+ if notnull(item):
+ return self.get(item)
+ return self.get_for_nan_indexer(i)
- # compute the duplicative indexer if needed
ref_locs = self._set_ref_locs()
b, loc = ref_locs[i]
return b.iget(loc)
+ def get_for_nan_indexer(self, indexer):
+
+ # allow a single nan location indexer
+ if not np.isscalar(indexer):
+ if len(indexer) == 1:
+ indexer = indexer.item()
+ else:
+ raise ValueError("cannot label index with a null key")
+
+ # take a nan indexer and return the values
+ ref_locs = self._set_ref_locs(do_refs='force')
+ b, loc = ref_locs[indexer]
+ return b.iget(loc)
+
def get_scalar(self, tup):
"""
Retrieve single item
diff --git a/pandas/core/series.py b/pandas/core/series.py
index d9e9a0034b56b..77c777042ab5f 100644
--- a/pandas/core/series.py
+++ b/pandas/core/series.py
@@ -1053,10 +1053,10 @@ def __setitem__(self, key, value):
except TypeError as e:
if isinstance(key, tuple) and not isinstance(self.index, MultiIndex):
raise ValueError("Can only tuple-index with a MultiIndex")
+
# python 3 type errors should be raised
if 'unorderable' in str(e): # pragma: no cover
raise IndexError(key)
- # Could not hash item
if _is_bool_indexer(key):
key = _check_bool_indexer(self.index, key)
diff --git a/pandas/hashtable.pyx b/pandas/hashtable.pyx
index 164fc8c94924e..1b132ea91f515 100644
--- a/pandas/hashtable.pyx
+++ b/pandas/hashtable.pyx
@@ -643,6 +643,8 @@ cdef class Float64HashTable(HashTable):
return uniques.to_array()
+na_sentinel = object
+
cdef class PyObjectHashTable(HashTable):
# cdef kh_pymap_t *table
@@ -660,6 +662,8 @@ cdef class PyObjectHashTable(HashTable):
def __contains__(self, object key):
cdef khiter_t k
hash(key)
+ if key != key or key is None:
+ key = na_sentinel
k = kh_get_pymap(self.table, <PyObject*>key)
return k != self.table.n_buckets
@@ -669,6 +673,8 @@ cdef class PyObjectHashTable(HashTable):
cpdef get_item(self, object val):
cdef khiter_t k
+ if val != val or val is None:
+ val = na_sentinel
k = kh_get_pymap(self.table, <PyObject*>val)
if k != self.table.n_buckets:
return self.table.vals[k]
@@ -677,6 +683,8 @@ cdef class PyObjectHashTable(HashTable):
def get_iter_test(self, object key, Py_ssize_t iterations):
cdef Py_ssize_t i, val
+ if key != key or key is None:
+ key = na_sentinel
for i in range(iterations):
k = kh_get_pymap(self.table, <PyObject*>key)
if k != self.table.n_buckets:
@@ -689,6 +697,8 @@ cdef class PyObjectHashTable(HashTable):
char* buf
hash(key)
+ if key != key or key is None:
+ key = na_sentinel
k = kh_put_pymap(self.table, <PyObject*>key, &ret)
# self.table.keys[k] = key
if kh_exist_pymap(self.table, k):
@@ -706,6 +716,9 @@ cdef class PyObjectHashTable(HashTable):
for i in range(n):
val = values[i]
hash(val)
+ if val != val or val is None:
+ val = na_sentinel
+
k = kh_put_pymap(self.table, <PyObject*>val, &ret)
self.table.vals[k] = i
@@ -720,6 +733,9 @@ cdef class PyObjectHashTable(HashTable):
for i in range(n):
val = values[i]
hash(val)
+ if val != val or val is None:
+ val = na_sentinel
+
k = kh_get_pymap(self.table, <PyObject*>val)
if k != self.table.n_buckets:
locs[i] = self.table.vals[k]
diff --git a/pandas/tests/test_frame.py b/pandas/tests/test_frame.py
index e5d2bb17ec7a8..eeb2c34ea9394 100644
--- a/pandas/tests/test_frame.py
+++ b/pandas/tests/test_frame.py
@@ -642,6 +642,8 @@ def test_setitem_clear_caches(self):
def test_setitem_None(self):
# GH #766
self.frame[None] = self.frame['A']
+ assert_series_equal(self.frame.iloc[:,-1], self.frame['A'])
+ assert_series_equal(self.frame.loc[:,None], self.frame['A'])
assert_series_equal(self.frame[None], self.frame['A'])
repr(self.frame)
@@ -4475,6 +4477,41 @@ def test_constructor_lists_to_object_dtype(self):
self.assert_(d['a'].dtype == np.object_)
self.assert_(d['a'][1] is False)
+ def test_constructor_with_nas(self):
+ # GH 5016
+ # na's in indicies
+
+ def check(df):
+ for i in range(len(df.columns)):
+ df.iloc[:,i]
+
+ # allow single nans to succeed
+ indexer = np.arange(len(df.columns))[isnull(df.columns)]
+
+ if len(indexer) == 1:
+ assert_series_equal(df.iloc[:,indexer[0]],df.loc[:,np.nan])
+
+
+ # multiple nans should fail
+ else:
+
+ def f():
+ df.loc[:,np.nan]
+ self.assertRaises(ValueError, f)
+
+
+ df = DataFrame([[1,2,3],[4,5,6]], index=[1,np.nan])
+ check(df)
+
+ df = DataFrame([[1,2,3],[4,5,6]], columns=[1.1,2.2,np.nan])
+ check(df)
+
+ df = DataFrame([[0,1,2,3],[4,5,6,7]], columns=[np.nan,1.1,2.2,np.nan])
+ check(df)
+
+ df = DataFrame([[0.0,1,2,3.0],[4,5,6,7]], columns=[np.nan,1.1,2.2,np.nan])
+ check(df)
+
def test_logical_with_nas(self):
d = DataFrame({'a': [np.nan, False], 'b': [True, True]})
| closes #5016
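
The behaviour being fixed (see the `test_constructor_with_nas` case in the
diff) can be sketched as follows. The snippet is written against a current
pandas, where a single NaN column label is reachable through both `iloc`
and `loc`; multiple NaN labels remain ambiguous and raise:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame([[1, 2, 3], [4, 5, 6]], columns=[1.1, 2.2, np.nan])

# Positional access works for every column, including the NaN-labelled one...
by_pos = df.iloc[:, 2]

# ...and a single NaN label can also be looked up directly.
by_label = df.loc[:, np.nan]

assert by_pos.tolist() == by_label.tolist() == [3, 6]
```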
| https://api.github.com/repos/pandas-dev/pandas/pulls/5018 | 2013-09-28T16:19:45Z | 2013-09-28T19:22:45Z | 2013-09-28T19:22:45Z | 2014-07-03T18:50:32Z |
ENH: PySide support for qtpandas. | diff --git a/doc/source/release.rst b/doc/source/release.rst
index 058ea165120a6..488fd8deacc2e 100644
--- a/doc/source/release.rst
+++ b/doc/source/release.rst
@@ -77,6 +77,7 @@ Experimental Features
(:issue:`4897`).
- Add msgpack support via ``pd.read_msgpack()`` and ``pd.to_msgpack()`` / ``df.to_msgpack()`` for serialization
of arbitrary pandas (and python objects) in a lightweight portable binary format (:issue:`686`)
+ - Added PySide support for the qtpandas DataFrameModel and DataFrameWidget.
Improvements to existing features
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
diff --git a/doc/source/v0.13.0.txt b/doc/source/v0.13.0.txt
index 90d2989de65c2..3f5989856902e 100644
--- a/doc/source/v0.13.0.txt
+++ b/doc/source/v0.13.0.txt
@@ -600,6 +600,8 @@ Experimental
os.remove('foo.msg')
+- Added PySide support for the qtpandas DataFrameModel and DataFrameWidget.
+
.. _whatsnew_0130.refactoring:
Internal Refactoring
diff --git a/doc/source/visualization.rst b/doc/source/visualization.rst
index 6e357d6d38e49..a9926947013fb 100644
--- a/doc/source/visualization.rst
+++ b/doc/source/visualization.rst
@@ -594,3 +594,79 @@ Andrews curves charts:
@savefig andrews_curve_winter.png
andrews_curves(data, 'Name', colormap='winter')
+
+
+****************************************
+Visualizing your data in Qt applications
+****************************************
+
+There is an experimental support for visualizing DataFrames in PyQt4 and PySide
+applications. At the moment you can display and edit the values of the cells
+in the DataFrame. Qt will take care of displaying just the portion of the
+DataFrame that is currently visible and the edits will be immediately saved to
+the underlying DataFrame
+
+To demonstrate this we will create a simple PySide application that will switch
+between two editable DataFrames. For this will use the ``DataFrameModel`` class
+that handles the access to the DataFrame, and the ``DataFrameWidget``, which is
+just a thin layer around the ``QTableView``.
+
+.. code-block:: python
+
+ import numpy as np
+ import pandas as pd
+ from pandas.sandbox.qtpandas import DataFrameModel, DataFrameWidget
+ from PySide import QtGui, QtCore
+
+ # Or if you use PyQt4:
+ # from PyQt4 import QtGui, QtCore
+
+ class MainWidget(QtGui.QWidget):
+ def __init__(self, parent=None):
+ super(MainWidget, self).__init__(parent)
+
+ # Create two DataFrames
+ self.df1 = pd.DataFrame(np.arange(9).reshape(3, 3),
+ columns=['foo', 'bar', 'baz'])
+ self.df2 = pd.DataFrame({
+ 'int': [1, 2, 3],
+ 'float': [1.5, 2.5, 3.5],
+ 'string': ['a', 'b', 'c'],
+ 'nan': [np.nan, np.nan, np.nan]
+ }, index=['AAA', 'BBB', 'CCC'],
+ columns=['int', 'float', 'string', 'nan'])
+
+ # Create the widget and set the first DataFrame
+ self.widget = DataFrameWidget(self.df1)
+
+ # Create the buttons for changing DataFrames
+ self.button_first = QtGui.QPushButton('First')
+ self.button_first.clicked.connect(self.on_first_click)
+ self.button_second = QtGui.QPushButton('Second')
+ self.button_second.clicked.connect(self.on_second_click)
+
+ # Set the layout
+ vbox = QtGui.QVBoxLayout()
+ vbox.addWidget(self.widget)
+ hbox = QtGui.QHBoxLayout()
+ hbox.addWidget(self.button_first)
+ hbox.addWidget(self.button_second)
+ vbox.addLayout(hbox)
+ self.setLayout(vbox)
+
+ def on_first_click(self):
+ '''Sets the first DataFrame'''
+ self.widget.setDataFrame(self.df1)
+
+ def on_second_click(self):
+ '''Sets the second DataFrame'''
+ self.widget.setDataFrame(self.df2)
+
+ if __name__ == '__main__':
+ import sys
+
+ # Initialize the application
+ app = QtGui.QApplication(sys.argv)
+ mw = MainWidget()
+ mw.show()
+ app.exec_()
diff --git a/pandas/sandbox/qtpandas.py b/pandas/sandbox/qtpandas.py
index 35aa28fea1678..3f284990efd40 100644
--- a/pandas/sandbox/qtpandas.py
+++ b/pandas/sandbox/qtpandas.py
@@ -3,10 +3,15 @@
@author: Jev Kuznetsov
'''
-from PyQt4.QtCore import (
- QAbstractTableModel, Qt, QVariant, QModelIndex, SIGNAL)
-from PyQt4.QtGui import (
- QApplication, QDialog, QVBoxLayout, QTableView, QWidget)
+try:
+ from PyQt4.QtCore import QAbstractTableModel, Qt, QVariant, QModelIndex
+ from PyQt4.QtGui import (
+ QApplication, QDialog, QVBoxLayout, QTableView, QWidget)
+except ImportError:
+ from PySide.QtCore import QAbstractTableModel, Qt, QModelIndex
+ from PySide.QtGui import (
+ QApplication, QDialog, QVBoxLayout, QTableView, QWidget)
+ QVariant = lambda value=None: value
from pandas import DataFrame, Index
@@ -57,9 +62,17 @@ def flags(self, index):
return flags
def setData(self, index, value, role):
- self.df.set_value(self.df.index[index.row()],
- self.df.columns[index.column()],
- value.toPyObject())
+ row = self.df.index[index.row()]
+ col = self.df.columns[index.column()]
+ if hasattr(value, 'toPyObject'):
+ # PyQt4 gets a QVariant
+ value = value.toPyObject()
+ else:
+            # PySide gets a unicode
+ dtype = self.df[col].dtype
+ if dtype != object:
+ value = None if value == '' else dtype.type(value)
+ self.df.set_value(row, col, value)
return True
def rowCount(self, index=QModelIndex()):
@@ -75,17 +88,18 @@ def __init__(self, dataFrame, parent=None):
super(DataFrameWidget, self).__init__(parent)
self.dataModel = DataFrameModel()
- self.dataModel.setDataFrame(dataFrame)
-
self.dataTable = QTableView()
self.dataTable.setModel(self.dataModel)
- self.dataModel.signalUpdate()
layout = QVBoxLayout()
layout.addWidget(self.dataTable)
self.setLayout(layout)
+ # Set DataFrame
+ self.setDataFrame(dataFrame)
- def resizeColumnsToContents(self):
+ def setDataFrame(self, dataFrame):
+ self.dataModel.setDataFrame(dataFrame)
+ self.dataModel.signalUpdate()
self.dataTable.resizeColumnsToContents()
#-----------------stand alone test code
| Added PySide support for the qtpandas DataFrameWidget.
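The two compatibility tricks in this patch can be sketched without Qt installed. `coerce` below is a hypothetical stand-in for the value handling added to `DataFrameModel.setData` (the real code works with numpy dtypes via `dtype.type`), not the actual qtpandas API:

```python
# 1) PySide has no QVariant, so the patch aliases it to an identity function:
try:
    from PyQt4.QtCore import QVariant   # PyQt4 path
except ImportError:
    QVariant = lambda value=None: value  # PySide path: pass values through

# 2) PySide hands setData() a plain unicode string, so for non-object
#    columns the text must be coerced back to the column's dtype
#    (an empty string becomes None, i.e. a missing value).
def coerce(value, dtype):
    if hasattr(value, 'toPyObject'):   # PyQt4 wraps edited values in a QVariant
        return value.toPyObject()
    if dtype is not object:            # PySide: convert the text to the dtype
        return None if value == '' else dtype(value)
    return value                       # object columns keep the raw string

print(coerce('', float))     # None
print(coerce('2.5', float))  # 2.5
print(coerce('abc', object)) # abc
```

This mirrors why the patch only converts when `dtype != object`: string columns can store the edited text as-is.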
| https://api.github.com/repos/pandas-dev/pandas/pulls/5013 | 2013-09-27T23:39:07Z | 2013-10-02T21:12:03Z | 2013-10-02T21:12:03Z | 2014-06-25T08:10:52Z |
ENH: Removing unnecessary whitespace when formatting to a HTML table. | diff --git a/pandas/core/format.py b/pandas/core/format.py
index ae0d95b1c3074..6ba7bfcc8f16c 100644
--- a/pandas/core/format.py
+++ b/pandas/core/format.py
@@ -62,6 +62,7 @@
-------
formatted : string (or unicode, depending on data and options)"""
+
class CategoricalFormatter(object):
def __init__(self, categorical, buf=None, length=True,
na_rep='NaN', name=False, footer=True):
@@ -140,7 +141,7 @@ def __init__(self, series, buf=None, header=True, length=True,
if float_format is None:
float_format = get_option("display.float_format")
self.float_format = float_format
- self.dtype = dtype
+ self.dtype = dtype
def _get_footer(self):
footer = u('')
@@ -163,7 +164,7 @@ def _get_footer(self):
footer += 'Length: %d' % len(self.series)
if self.dtype:
- if getattr(self.series.dtype,'name',None):
+ if getattr(self.series.dtype, 'name', None):
if footer:
footer += ', '
footer += 'dtype: %s' % com.pprint_thing(self.series.dtype.name)
@@ -213,6 +214,7 @@ def to_string(self):
return compat.text_type(u('\n').join(result))
+
def _strlen_func():
if compat.PY3: # pragma: no cover
_strlen = len
@@ -304,32 +306,31 @@ def _to_str_columns(self):
for i, c in enumerate(self.columns):
if self.header:
- fmt_values = self._format_col(i)
cheader = str_columns[i]
-
max_colwidth = max(self.col_space or 0,
*(_strlen(x) for x in cheader))
-
- fmt_values = _make_fixed_width(fmt_values, self.justify,
- minimum=max_colwidth)
+ fmt_values = self._format_col(i, justify=self.justify,
+ minimum=max_colwidth)
max_len = max(np.max([_strlen(x) for x in fmt_values]),
max_colwidth)
if self.justify == 'left':
cheader = [x.ljust(max_len) for x in cheader]
- else:
+ elif self.justify == 'right':
cheader = [x.rjust(max_len) for x in cheader]
+ elif self.justify == 'center':
+ cheader = [x.center(max_len) for x in cheader]
+ else:
+ cheader = [x.strip() for x in cheader]
stringified.append(cheader + fmt_values)
else:
- stringified = [_make_fixed_width(self._format_col(i),
- self.justify)
+ stringified = [self._format_col(i, justify=self.justify)
for i, c in enumerate(self.columns)]
strcols = stringified
if self.index:
strcols.insert(0, str_index)
-
return strcols
def to_string(self, force_unicode=None):
@@ -450,12 +451,14 @@ def write(buf, frame, column_format, strcols):
raise TypeError('buf is not a file name and it has no write '
'method')
- def _format_col(self, i):
+ def _format_col(self, i, justify='right', minimum=None):
formatter = self._get_formatter(i)
return format_array(self.frame.icol(i).get_values(), formatter,
float_format=self.float_format,
na_rep=self.na_rep,
- space=self.col_space)
+ space=self.col_space,
+ justify=justify,
+ minimum=minimum)
def to_html(self, classes=None):
"""
@@ -735,7 +738,7 @@ def _write_body(self, indent):
fmt_values = {}
for i in range(len(self.columns)):
- fmt_values[i] = self.fmt._format_col(i)
+ fmt_values[i] = self.fmt._format_col(i, justify=None)
# write values
if self.fmt.index:
@@ -1485,7 +1488,7 @@ def get_formatted_cells(self):
def format_array(values, formatter, float_format=None, na_rep='NaN',
- digits=None, space=None, justify='right'):
+ digits=None, space=None, justify='right', minimum=None):
if com.is_float_dtype(values.dtype):
fmt_klass = FloatArrayFormatter
elif com.is_integer_dtype(values.dtype):
@@ -1509,7 +1512,7 @@ def format_array(values, formatter, float_format=None, na_rep='NaN',
fmt_obj = fmt_klass(values, digits, na_rep=na_rep,
float_format=float_format,
formatter=formatter, space=space,
- justify=justify)
+ justify=justify, minimum=minimum)
return fmt_obj.get_result()
@@ -1517,7 +1520,7 @@ def format_array(values, formatter, float_format=None, na_rep='NaN',
class GenericArrayFormatter(object):
def __init__(self, values, digits=7, formatter=None, na_rep='NaN',
- space=12, float_format=None, justify='right'):
+ space=12, float_format=None, justify='right', minimum=None):
self.values = values
self.digits = digits
self.na_rep = na_rep
@@ -1525,10 +1528,11 @@ def __init__(self, values, digits=7, formatter=None, na_rep='NaN',
self.formatter = formatter
self.float_format = float_format
self.justify = justify
+ self.minimum = minimum
def get_result(self):
fmt_values = self._format_strings()
- return _make_fixed_width(fmt_values, self.justify)
+ return _make_fixed_width(fmt_values, self.justify, self.minimum)
def _format_strings(self):
if self.float_format is None:
@@ -1584,19 +1588,19 @@ def __init__(self, *args, **kwargs):
def _format_with(self, fmt_str):
def _val(x, threshold):
if notnull(x):
- if threshold is None or abs(x) > get_option("display.chop_threshold"):
+ if threshold is None or abs(x) > get_option("display.chop_threshold"):
return fmt_str % x
else:
- if fmt_str.endswith("e"): # engineering format
- return "0"
+ if fmt_str.endswith("e"): # engineering format
+ return "0"
else:
- return fmt_str % 0
+ return fmt_str % 0
else:
return self.na_rep
threshold = get_option("display.chop_threshold")
- fmt_values = [ _val(x, threshold) for x in self.values]
+ fmt_values = [_val(x, threshold) for x in self.values]
return _trim_zeros(fmt_values, self.na_rep)
def get_result(self):
@@ -1627,7 +1631,7 @@ def get_result(self):
fmt_str = '%% .%de' % (self.digits - 1)
fmt_values = self._format_with(fmt_str)
- return _make_fixed_width(fmt_values, self.justify)
+ return _make_fixed_width(fmt_values, self.justify, self.minimum)
class IntArrayFormatter(GenericArrayFormatter):
@@ -1640,7 +1644,7 @@ def get_result(self):
fmt_values = [formatter(x) for x in self.values]
- return _make_fixed_width(fmt_values, self.justify)
+ return _make_fixed_width(fmt_values, self.justify, self.minimum)
class Datetime64Formatter(GenericArrayFormatter):
@@ -1652,7 +1656,8 @@ def get_result(self):
formatter = _format_datetime64
fmt_values = [formatter(x) for x in self.values]
- return _make_fixed_width(fmt_values, self.justify)
+ return _make_fixed_width(fmt_values, self.justify, self.minimum)
+
def _format_datetime64(x, tz=None):
if isnull(x):
@@ -1672,7 +1677,8 @@ def get_result(self):
formatter = _format_timedelta64
fmt_values = [formatter(x) for x in self.values]
- return _make_fixed_width(fmt_values, self.justify)
+ return _make_fixed_width(fmt_values, self.justify, self.minimum)
+
def _format_timedelta64(x):
if isnull(x):
@@ -1680,6 +1686,7 @@ def _format_timedelta64(x):
return lib.repr_timedelta64(x)
+
def _make_fixed_width(strings, justify='right', minimum=None):
if len(strings) == 0:
return strings
@@ -1697,8 +1704,12 @@ def _make_fixed_width(strings, justify='right', minimum=None):
if justify == 'left':
justfunc = lambda self, x: self.ljust(x)
- else:
+ elif justify == 'right':
justfunc = lambda self, x: self.rjust(x)
+ elif justify == 'center':
+ justfunc = lambda self, x: self.center(x)
+ else:
+ justfunc = lambda self, _: self.strip()
def just(x):
eff_len = max_len
diff --git a/pandas/tests/test_format.py b/pandas/tests/test_format.py
index d9bf8adb71298..f3b09b227cc10 100644
--- a/pandas/tests/test_format.py
+++ b/pandas/tests/test_format.py
@@ -399,13 +399,13 @@ def test_to_html_escaped(self):
<tbody>
<tr>
<th>str<ing1 &amp;</th>
- <td> <type 'str'></td>
- <td> <type 'str'></td>
+ <td><type 'str'></td>
+ <td><type 'str'></td>
</tr>
<tr>
<th>stri>ng2 &amp;</th>
- <td> <type 'str'></td>
- <td> <type 'str'></td>
+ <td><type 'str'></td>
+ <td><type 'str'></td>
</tr>
</tbody>
</table>"""
@@ -431,13 +431,13 @@ def test_to_html_escape_disabled(self):
<tbody>
<tr>
<th>str<ing1 &</th>
- <td> <b>bold</b></td>
- <td> <b>bold</b></td>
+ <td><b>bold</b></td>
+ <td><b>bold</b></td>
</tr>
<tr>
<th>stri>ng2 &</th>
- <td> <b>bold</b></td>
- <td> <b>bold</b></td>
+ <td><b>bold</b></td>
+ <td><b>bold</b></td>
</tr>
</tbody>
</table>"""
@@ -471,26 +471,26 @@ def test_to_html_multiindex_sparsify_false_multi_sparse(self):
<tr>
<th>0</th>
<th>0</th>
- <td> 0</td>
- <td> 1</td>
+ <td>0</td>
+ <td>1</td>
</tr>
<tr>
<th>0</th>
<th>1</th>
- <td> 2</td>
- <td> 3</td>
+ <td>2</td>
+ <td>3</td>
</tr>
<tr>
<th>1</th>
<th>0</th>
- <td> 4</td>
- <td> 5</td>
+ <td>4</td>
+ <td>5</td>
</tr>
<tr>
<th>1</th>
<th>1</th>
- <td> 6</td>
- <td> 7</td>
+ <td>6</td>
+ <td>7</td>
</tr>
</tbody>
</table>"""
@@ -526,26 +526,26 @@ def test_to_html_multiindex_sparsify_false_multi_sparse(self):
<tr>
<th>0</th>
<th>0</th>
- <td> 0</td>
- <td> 1</td>
+ <td>0</td>
+ <td>1</td>
</tr>
<tr>
<th>0</th>
<th>1</th>
- <td> 2</td>
- <td> 3</td>
+ <td>2</td>
+ <td>3</td>
</tr>
<tr>
<th>1</th>
<th>0</th>
- <td> 4</td>
- <td> 5</td>
+ <td>4</td>
+ <td>5</td>
</tr>
<tr>
<th>1</th>
<th>1</th>
- <td> 6</td>
- <td> 7</td>
+ <td>6</td>
+ <td>7</td>
</tr>
</tbody>
</table>"""
@@ -577,24 +577,24 @@ def test_to_html_multiindex_sparsify(self):
<tr>
<th rowspan="2" valign="top">0</th>
<th>0</th>
- <td> 0</td>
- <td> 1</td>
+ <td>0</td>
+ <td>1</td>
</tr>
<tr>
<th>1</th>
- <td> 2</td>
- <td> 3</td>
+ <td>2</td>
+ <td>3</td>
</tr>
<tr>
<th rowspan="2" valign="top">1</th>
<th>0</th>
- <td> 4</td>
- <td> 5</td>
+ <td>4</td>
+ <td>5</td>
</tr>
<tr>
<th>1</th>
- <td> 6</td>
- <td> 7</td>
+ <td>6</td>
+ <td>7</td>
</tr>
</tbody>
</table>"""
@@ -630,24 +630,24 @@ def test_to_html_multiindex_sparsify(self):
<tr>
<th rowspan="2" valign="top">0</th>
<th>0</th>
- <td> 0</td>
- <td> 1</td>
+ <td>0</td>
+ <td>1</td>
</tr>
<tr>
<th>1</th>
- <td> 2</td>
- <td> 3</td>
+ <td>2</td>
+ <td>3</td>
</tr>
<tr>
<th rowspan="2" valign="top">1</th>
<th>0</th>
- <td> 4</td>
- <td> 5</td>
+ <td>4</td>
+ <td>5</td>
</tr>
<tr>
<th>1</th>
- <td> 6</td>
- <td> 7</td>
+ <td>6</td>
+ <td>7</td>
</tr>
</tbody>
</table>"""
@@ -671,23 +671,23 @@ def test_to_html_index_formatter(self):
<tbody>
<tr>
<th>a</th>
- <td> 0</td>
- <td> 1</td>
+ <td>0</td>
+ <td>1</td>
</tr>
<tr>
<th>b</th>
- <td> 2</td>
- <td> 3</td>
+ <td>2</td>
+ <td>3</td>
</tr>
<tr>
<th>c</th>
- <td> 4</td>
- <td> 5</td>
+ <td>4</td>
+ <td>5</td>
</tr>
<tr>
<th>d</th>
- <td> 6</td>
- <td> 7</td>
+ <td>6</td>
+ <td>7</td>
</tr>
</tbody>
</table>"""
@@ -1159,7 +1159,7 @@ def test_to_string_left_justify_cols(self):
df_s = df.to_string(justify='left')
expected = (' x \n'
'0 3234.000\n'
- '1 0.253')
+ '1 0.253 ')
assert(df_s == expected)
def test_to_string_format_na(self):
@@ -1274,17 +1274,17 @@ def test_to_html_multiindex(self):
' <tbody>\n'
' <tr>\n'
' <th>0</th>\n'
- ' <td> a</td>\n'
- ' <td> b</td>\n'
- ' <td> c</td>\n'
- ' <td> d</td>\n'
+ ' <td>a</td>\n'
+ ' <td>b</td>\n'
+ ' <td>c</td>\n'
+ ' <td>d</td>\n'
' </tr>\n'
' <tr>\n'
' <th>1</th>\n'
- ' <td> e</td>\n'
- ' <td> f</td>\n'
- ' <td> g</td>\n'
- ' <td> h</td>\n'
+ ' <td>e</td>\n'
+ ' <td>f</td>\n'
+ ' <td>g</td>\n'
+ ' <td>h</td>\n'
' </tr>\n'
' </tbody>\n'
'</table>')
@@ -1316,17 +1316,17 @@ def test_to_html_multiindex(self):
' <tbody>\n'
' <tr>\n'
' <th>0</th>\n'
- ' <td> a</td>\n'
- ' <td> b</td>\n'
- ' <td> c</td>\n'
- ' <td> d</td>\n'
+ ' <td>a</td>\n'
+ ' <td>b</td>\n'
+ ' <td>c</td>\n'
+ ' <td>d</td>\n'
' </tr>\n'
' <tr>\n'
' <th>1</th>\n'
- ' <td> e</td>\n'
- ' <td> f</td>\n'
- ' <td> g</td>\n'
- ' <td> h</td>\n'
+ ' <td>e</td>\n'
+ ' <td>f</td>\n'
+ ' <td>g</td>\n'
+ ' <td>h</td>\n'
' </tr>\n'
' </tbody>\n'
'</table>')
@@ -1351,21 +1351,21 @@ def test_to_html_justify(self):
' <tbody>\n'
' <tr>\n'
' <th>0</th>\n'
- ' <td> 6</td>\n'
- ' <td> 1</td>\n'
- ' <td> 223442</td>\n'
+ ' <td>6</td>\n'
+ ' <td>1</td>\n'
+ ' <td>223442</td>\n'
' </tr>\n'
' <tr>\n'
' <th>1</th>\n'
- ' <td> 30000</td>\n'
- ' <td> 2</td>\n'
- ' <td> 0</td>\n'
+ ' <td>30000</td>\n'
+ ' <td>2</td>\n'
+ ' <td>0</td>\n'
' </tr>\n'
' <tr>\n'
' <th>2</th>\n'
- ' <td> 2</td>\n'
- ' <td> 70000</td>\n'
- ' <td> 1</td>\n'
+ ' <td>2</td>\n'
+ ' <td>70000</td>\n'
+ ' <td>1</td>\n'
' </tr>\n'
' </tbody>\n'
'</table>')
@@ -1385,21 +1385,21 @@ def test_to_html_justify(self):
' <tbody>\n'
' <tr>\n'
' <th>0</th>\n'
- ' <td> 6</td>\n'
- ' <td> 1</td>\n'
- ' <td> 223442</td>\n'
+ ' <td>6</td>\n'
+ ' <td>1</td>\n'
+ ' <td>223442</td>\n'
' </tr>\n'
' <tr>\n'
' <th>1</th>\n'
- ' <td> 30000</td>\n'
- ' <td> 2</td>\n'
- ' <td> 0</td>\n'
+ ' <td>30000</td>\n'
+ ' <td>2</td>\n'
+ ' <td>0</td>\n'
' </tr>\n'
' <tr>\n'
' <th>2</th>\n'
- ' <td> 2</td>\n'
- ' <td> 70000</td>\n'
- ' <td> 1</td>\n'
+ ' <td>2</td>\n'
+ ' <td>70000</td>\n'
+ ' <td>1</td>\n'
' </tr>\n'
' </tbody>\n'
'</table>')
@@ -1934,6 +1934,69 @@ def test_misc(self):
obj = fmt.FloatArrayFormatter(np.array([], dtype=np.float64))
result = obj.get_result()
+
+class TestDataFrameJustification(unittest.TestCase):
+ def setUp(self):
+ self.df_int = pd.DataFrame(np.arange(3).reshape(1, 3),
+ columns=['foo', 'bar', 'baz'],
+ dtype=int)
+ self.df_float = pd.DataFrame(np.linspace(0.3, 1.2, 3).reshape(1, 3),
+ columns=['a', 'b', 'c'],
+ dtype=float)
+ self.df_string = pd.DataFrame([
+ ['test', 'something long', 'something even longer'],
+ ['foo', 'bar', 'baz'],
+ ['small', 'text', 'samples']
+ ], columns=['a', 'b', 'c'])
+
+ def test_left_justification(self):
+ expected = ' foo bar baz\n0 0 1 2 '
+ self.assertEqual(expected, self.df_int.to_string(justify='left'),
+ 'Left int justification failed')
+ expected = ' a b c \n0 0.3 0.75 1.2'
+ self.assertEqual(expected, self.df_float.to_string(justify='left'),
+ 'Left float justification failed')
+ expected = '''
+ a b c
+0 test something long something even longer
+1 foo bar baz
+2 small text samples
+'''.strip('\r\n')
+ self.assertEqual(expected, self.df_string.to_string(justify='left'),
+ 'Left string justification failed')
+
+ def test_right_justification(self):
+ expected = ' foo bar baz\n0 0 1 2'
+ self.assertEqual(expected, self.df_int.to_string(justify='right'),
+ 'Right int justification failed')
+ expected = ' a b c\n0 0.3 0.75 1.2'
+ self.assertEqual(expected, self.df_float.to_string(justify='right'),
+ 'Right float justification failed')
+ expected = '''
+ a b c
+0 test something long something even longer
+1 foo bar baz
+2 small text samples
+'''.strip('\r\n')
+ self.assertEqual(expected, self.df_string.to_string(justify='right'),
+ 'Right string justification failed')
+
+ def test_center_justification(self):
+ expected = ' foo bar baz\n0 0 1 2 '
+ self.assertEqual(expected, self.df_int.to_string(justify='center'),
+ 'Center int justification failed')
+ expected = ' a b c \n0 0.3 0.75 1.2'
+ self.assertEqual(expected, self.df_float.to_string(justify='center'),
+ 'Center float justification failed')
+ expected = '''
+ a b c
+0 test something long something even longer
+1 foo bar baz
+2 small text samples
+'''.strip('\r\n')
+ self.assertEqual(expected, self.df_string.to_string(justify='center'),
+ 'Center string justification failed')
+
if __name__ == '__main__':
import nose
nose.runmodule(argv=[__file__, '-vvs', '-x', '--pdb', '--pdb-failure'],
| Closes #4987.
The problem is that all formatters always call `_make_fixed_width`. So, the only solution I found is just to strip the unnecessary whitespace.
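The gist of the change can be sketched without pandas: a simplified `_make_fixed_width` where, besides the new `'center'` option, a justify value of `None` strips padding instead of adding it, which is what the HTML writer now requests per cell. This is a toy stand-in, not the real implementation:

```python
def make_fixed_width(strings, justify='right', minimum=None):
    # Pad every string to the width of the longest one (or `minimum`,
    # whichever is larger), unless justify is None, in which case strip.
    if not strings:
        return strings
    max_len = max(len(s) for s in strings)
    if minimum is not None:
        max_len = max(minimum, max_len)
    if justify == 'left':
        return [s.ljust(max_len) for s in strings]
    elif justify == 'right':
        return [s.rjust(max_len) for s in strings]
    elif justify == 'center':
        return [s.center(max_len) for s in strings]
    else:                       # justify=None: no padding at all
        return [s.strip() for s in strings]

vals = [' 6', ' 223442']
print(make_fixed_width(vals, 'right'))  # padded, as to_string uses
print(make_fixed_width(vals, None))     # ['6', '223442'], as to_html now uses
```

With `justify=None` the `<td>` cells no longer carry leading spaces, which is exactly the whitespace this PR removes from the HTML output.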
| https://api.github.com/repos/pandas-dev/pandas/pulls/5012 | 2013-09-27T21:15:14Z | 2014-02-04T09:30:05Z | null | 2014-06-18T01:33:02Z |
ER: give concat a better error message for incompatible types | diff --git a/doc/source/release.rst b/doc/source/release.rst
index 13e2d5a136c21..daee460fc50a1 100644
--- a/doc/source/release.rst
+++ b/doc/source/release.rst
@@ -136,6 +136,8 @@ Improvements to existing features
- Both ExcelFile and read_excel to accept an xlrd.Book for the io
(formerly path_or_buf) argument; this requires engine to be set.
(:issue:`4961`).
+ - ``concat`` now gives a more informative error message when passed objects
+ that cannot be concatenated (:issue:`4608`).
API Changes
~~~~~~~~~~~
diff --git a/pandas/tools/merge.py b/pandas/tools/merge.py
index 5792161e0171e..ba60566a7fc55 100644
--- a/pandas/tools/merge.py
+++ b/pandas/tools/merge.py
@@ -1245,7 +1245,11 @@ def _get_comb_axis(self, i):
if self._is_series:
all_indexes = [x.index for x in self.objs]
else:
- all_indexes = [x._data.axes[i] for x in self.objs]
+ try:
+ all_indexes = [x._data.axes[i] for x in self.objs]
+ except IndexError:
+ types = [type(x).__name__ for x in self.objs]
+ raise TypeError("Cannot concatenate list of %s" % types)
return _get_combined_index(all_indexes, intersect=self.intersect)
@@ -1256,6 +1260,10 @@ def _get_concat_axis(self):
elif self.keys is None:
names = []
for x in self.objs:
+ if not isinstance(x, Series):
+ raise TypeError("Cannot concatenate type 'Series' "
+ "with object of type "
+ "%r" % type(x).__name__)
if x.name is not None:
names.append(x.name)
else:
diff --git a/pandas/tools/tests/test_merge.py b/pandas/tools/tests/test_merge.py
index 203769e731022..d44564db4b830 100644
--- a/pandas/tools/tests/test_merge.py
+++ b/pandas/tools/tests/test_merge.py
@@ -1804,6 +1804,15 @@ def test_concat_invalid_first_argument(self):
# generator ok though
concat(DataFrame(np.random.rand(5,5)) for _ in range(3))
+ def test_concat_mixed_types_fails(self):
+ df = DataFrame(randn(10, 1))
+
+ with tm.assertRaisesRegexp(TypeError, "Cannot concatenate.+"):
+ concat([df[0], df], axis=1)
+
+ with tm.assertRaisesRegexp(TypeError, "Cannot concatenate.+"):
+ concat([df, df[0]], axis=1)
+
class TestOrderedMerge(unittest.TestCase):
def setUp(self):
| closes #4608
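The error path added to `_get_comb_axis` can be reproduced with plain Python: when one of the concatenated objects lacks the requested axis, the bare `IndexError` is turned into a `TypeError` naming the offending types. `Fake1D`/`Fake2D` are dummy stand-ins for Series/DataFrame, not pandas internals:

```python
class Fake1D:                 # like a Series: one axis
    axes = ['index']

class Fake2D:                 # like a DataFrame: two axes
    axes = ['index', 'columns']

def get_comb_axis(objs, i):
    # Mirror of the patched logic: report the types instead of
    # letting a cryptic IndexError escape to the user.
    try:
        return [x.axes[i] for x in objs]
    except IndexError:
        types = [type(x).__name__ for x in objs]
        raise TypeError("Cannot concatenate list of %s" % types)

try:
    get_comb_axis([Fake1D(), Fake2D()], 1)   # axis 1 missing on Fake1D
except TypeError as e:
    print(e)   # Cannot concatenate list of ['Fake1D', 'Fake2D']
```

This is the behaviour the new `test_concat_mixed_types_fails` test exercises with a real Series mixed into a DataFrame concat.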
| https://api.github.com/repos/pandas-dev/pandas/pulls/5011 | 2013-09-27T20:32:30Z | 2013-09-27T21:29:12Z | 2013-09-27T21:29:12Z | 2014-06-24T15:20:06Z |
API: default for mangle_dupe_cols is now False for read_csv. Fair warning in 0.12 (GH3612) | diff --git a/doc/source/io.rst b/doc/source/io.rst
index 01795f6a4a9bf..54b180a47a0a0 100644
--- a/doc/source/io.rst
+++ b/doc/source/io.rst
@@ -151,7 +151,7 @@ They can take a number of arguments:
- ``error_bad_lines``: if False then any lines causing an error will be skipped :ref:`bad lines <io.bad_lines>`
- ``usecols``: a subset of columns to return, results in much faster parsing
time and lower memory usage.
- - ``mangle_dupe_cols``: boolean, default True, then duplicate columns will be specified
+ - ``mangle_dupe_cols``: boolean, default False, then duplicate columns will be specified
as 'X.0'...'X.N', rather than 'X'...'X'
- ``tupleize_cols``: boolean, default False, if False, convert a list of tuples
to a multi-index of columns, otherwise, leave the column index as a list of tuples
diff --git a/doc/source/release.rst b/doc/source/release.rst
index 13e2d5a136c21..0d8710ecb5448 100644
--- a/doc/source/release.rst
+++ b/doc/source/release.rst
@@ -246,6 +246,7 @@ API Changes
- Begin removing methods that don't make sense on ``GroupBy`` objects
(:issue:`4887`).
- Remove deprecated ``read_clipboard/to_clipboard/ExcelFile/ExcelWriter`` from ``pandas.io.parsers`` (:issue:`3717`)
+ - default for ``mangle_dupe_cols`` is now ``False`` for ``read_csv``. Fair warning in 0.12 (:issue:`3612`)
Internal Refactoring
~~~~~~~~~~~~~~~~~~~~
diff --git a/doc/source/v0.13.0.txt b/doc/source/v0.13.0.txt
index b1c40fe3b2ced..b4b77b6d0fe02 100644
--- a/doc/source/v0.13.0.txt
+++ b/doc/source/v0.13.0.txt
@@ -68,9 +68,17 @@ API changes
df1 and df2
s1 and s2
+Prior Version Deprecations/Changes
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+These were announced changes in 0.12 or prior that are taking effect as of 0.13.0
+
- Remove deprecated ``Factor`` (:issue:`3650`)
- Remove deprecated ``set_printoptions/reset_printoptions`` (:issue:``3046``)
- Remove deprecated ``_verbose_info`` (:issue:`3215`)
+ - Remove deprecated ``read_clipboard/to_clipboard/ExcelFile/ExcelWriter`` from ``pandas.io.parsers`` (:issue:`3717`)
+ - default for ``tupleize_cols`` is now ``False`` for both ``to_csv`` and ``read_csv``. Fair warning in 0.12 (:issue:`3604`)
+ - default for ``mangle_dupe_cols`` is now ``False`` for ``read_csv``. Fair warning in 0.12 (:issue:`3612`)
Indexing API Changes
~~~~~~~~~~~~~~~~~~~~
diff --git a/pandas/io/parsers.py b/pandas/io/parsers.py
index 26f15d5ae2aea..68328d0dc68de 100644
--- a/pandas/io/parsers.py
+++ b/pandas/io/parsers.py
@@ -128,7 +128,7 @@
usecols : array-like
Return a subset of the columns.
Results in much faster parsing time and lower memory usage.
-mangle_dupe_cols: boolean, default True
+mangle_dupe_cols: boolean, default False
Duplicate columns will be specified as 'X.0'...'X.N', rather than 'X'...'X'
tupleize_cols: boolean, default False
Leave a list of tuples on columns as is (default is to convert to
@@ -245,7 +245,7 @@ def _read(filepath_or_buffer, kwds):
'encoding': None,
'squeeze': False,
'compression': None,
- 'mangle_dupe_cols': True,
+ 'mangle_dupe_cols': False,
'tupleize_cols':False,
}
@@ -334,7 +334,7 @@ def parser_f(filepath_or_buffer,
verbose=False,
encoding=None,
squeeze=False,
- mangle_dupe_cols=True,
+ mangle_dupe_cols=False,
tupleize_cols=False,
):
@@ -1260,7 +1260,7 @@ def __init__(self, f, **kwds):
self.skipinitialspace = kwds['skipinitialspace']
self.lineterminator = kwds['lineterminator']
self.quoting = kwds['quoting']
- self.mangle_dupe_cols = kwds.get('mangle_dupe_cols',True)
+ self.mangle_dupe_cols = kwds.get('mangle_dupe_cols',False)
self.has_index_names = False
if 'has_index_names' in kwds:
diff --git a/pandas/io/tests/test_parsers.py b/pandas/io/tests/test_parsers.py
index fadf70877409f..3af75061a52a9 100644
--- a/pandas/io/tests/test_parsers.py
+++ b/pandas/io/tests/test_parsers.py
@@ -804,10 +804,6 @@ def test_duplicate_columns(self):
6,7,8,9,10
11,12,13,14,15
"""
- # check default beahviour
- df = self.read_table(StringIO(data), sep=',',engine=engine)
- self.assertEqual(list(df.columns), ['A', 'A.1', 'B', 'B.1', 'B.2'])
-
df = self.read_table(StringIO(data), sep=',',engine=engine,mangle_dupe_cols=False)
self.assertEqual(list(df.columns), ['A', 'A', 'B', 'B', 'B'])
diff --git a/pandas/parser.pyx b/pandas/parser.pyx
index b97929023adb6..75e97da9904a1 100644
--- a/pandas/parser.pyx
+++ b/pandas/parser.pyx
@@ -309,7 +309,7 @@ cdef class TextReader:
skiprows=None,
skip_footer=0,
verbose=False,
- mangle_dupe_cols=True,
+ mangle_dupe_cols=False,
tupleize_cols=False):
self.parser = parser_new()
| closes #3612
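What `mangle_dupe_cols` does can be illustrated with a stand-alone function, matched to the expectation in this PR's test (later duplicates get a `.1`, `.2`, ... suffix; the first occurrence keeps its name). The real parser is more involved, so treat this as a sketch:

```python
def mangle_dupe_cols(names):
    # Rename repeated column names so each is unique.
    counts = {}
    out = []
    for name in names:
        if name in counts:
            counts[name] += 1
            out.append('%s.%d' % (name, counts[name]))
        else:
            counts[name] = 0
            out.append(name)
    return out

header = ['A', 'A', 'B', 'B', 'B']
print(mangle_dupe_cols(header))  # ['A', 'A.1', 'B', 'B.1', 'B.2']
```

With the new default of `False`, `read_csv` leaves the header as `['A', 'A', 'B', 'B', 'B']` instead of mangling it.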
| https://api.github.com/repos/pandas-dev/pandas/pulls/5010 | 2013-09-27T19:58:31Z | 2013-09-28T01:36:18Z | null | 2013-09-29T02:19:55Z |
API: Remove deprecated read_clipboard/to_clipboard/ExcelFile/ExcelWriter from pandas.io.parsers (GH3717) | diff --git a/doc/source/api.rst b/doc/source/api.rst
index 28c1515e93bc5..8dcf9c0f52de4 100644
--- a/doc/source/api.rst
+++ b/doc/source/api.rst
@@ -31,6 +31,15 @@ Flat File
read_table
read_csv
read_fwf
+
+Clipboard
+~~~~~~~~~
+
+.. currentmodule:: pandas.io.clipboard
+
+.. autosummary::
+ :toctree: generated/
+
read_clipboard
Excel
diff --git a/doc/source/install.rst b/doc/source/install.rst
index 91e86a6aa1e29..532c90b83ebb0 100644
--- a/doc/source/install.rst
+++ b/doc/source/install.rst
@@ -111,7 +111,7 @@ Optional Dependencies
<http://www.pygtk.org/>`__, `xsel
<http://www.vergenet.net/~conrad/software/xsel/>`__, or `xclip
<http://sourceforge.net/projects/xclip/>`__: necessary to use
- :func:`~pandas.io.parsers.read_clipboard`. Most package managers on Linux
+ :func:`~pandas.io.clipboard.read_clipboard`. Most package managers on Linux
distributions will have xclip and/or xsel immediately available for
installation.
* One of the following combinations of libraries is needed to use the
diff --git a/doc/source/release.rst b/doc/source/release.rst
index e49812b207921..f871a412d2ff6 100644
--- a/doc/source/release.rst
+++ b/doc/source/release.rst
@@ -245,6 +245,7 @@ API Changes
- Remove deprecated ``_verbose_info`` (:issue:`3215`)
- Begin removing methods that don't make sense on ``GroupBy`` objects
(:issue:`4887`).
+ - Remove deprecated ``read_clipboard/to_clipboard/ExcelFile/ExcelWriter`` from ``pandas.io.parsers`` (:issue:`3717`)
Internal Refactoring
~~~~~~~~~~~~~~~~~~~~
diff --git a/pandas/io/parsers.py b/pandas/io/parsers.py
index 426d71b05e30a..26f15d5ae2aea 100644
--- a/pandas/io/parsers.py
+++ b/pandas/io/parsers.py
@@ -2048,27 +2048,3 @@ def __init__(self, f, **kwds):
def _make_reader(self, f):
self.data = FixedWidthReader(f, self.colspecs, self.delimiter,
encoding=self.encoding)
-
-
-##### deprecations in 0.12 #####
-##### remove in 0.12 #####
-
-from pandas.io import clipboard
-def read_clipboard(**kwargs):
- warn("read_clipboard is now a top-level accessible via pandas.read_clipboard", FutureWarning)
- clipboard.read_clipboard(**kwargs)
-
-def to_clipboard(obj):
- warn("to_clipboard is now an object level method accessible via obj.to_clipboard()", FutureWarning)
- clipboard.to_clipboard(obj)
-
-from pandas.io import excel
-class ExcelWriter(excel.ExcelWriter):
- def __init__(self, path):
- warn("ExcelWriter can now be imported from: pandas.io.excel", FutureWarning)
- super(ExcelWriter, self).__init__(path)
-
-class ExcelFile(excel.ExcelFile):
- def __init__(self, path_or_buf, **kwds):
- warn("ExcelFile can now be imported from: pandas.io.excel", FutureWarning)
- super(ExcelFile, self).__init__(path_or_buf, **kwds)
diff --git a/pandas/io/tests/test_excel.py b/pandas/io/tests/test_excel.py
index cd101d325f21d..0c6332205ffe5 100644
--- a/pandas/io/tests/test_excel.py
+++ b/pandas/io/tests/test_excel.py
@@ -254,20 +254,20 @@ def test_excel_read_buffer(self):
f = open(pth, 'rb')
xl = ExcelFile(f)
xl.parse('Sheet1', index_col=0, parse_dates=True)
-
+
def test_read_xlrd_Book(self):
_skip_if_no_xlrd()
_skip_if_no_xlwt()
-
+
import xlrd
-
+
pth = '__tmp_excel_read_worksheet__.xls'
df = self.frame
-
+
with ensure_clean(pth) as pth:
df.to_excel(pth, "SheetA")
book = xlrd.open_workbook(pth)
-
+
with ExcelFile(book, engine="xlrd") as xl:
result = xl.parse("SheetA")
tm.assert_frame_equal(df, result)
@@ -1004,26 +1004,6 @@ def check_called(func):
check_called(lambda: df.to_excel('something.xls', engine='dummy'))
set_option('io.excel.xlsx.writer', val)
-
-class ExcelLegacyTests(SharedItems, unittest.TestCase):
- def test_deprecated_from_parsers(self):
-
- # since 0.12 changed the import path
- import warnings
-
- with warnings.catch_warnings():
- warnings.filterwarnings(action='ignore', category=FutureWarning)
-
- _skip_if_no_xlrd()
- from pandas.io.parsers import ExcelFile as xf
- xf(self.xls1)
-
- _skip_if_no_xlwt()
- with ensure_clean('test.xls') as path:
- from pandas.io.parsers import ExcelWriter as xw
- xw(path)
-
-
if __name__ == '__main__':
nose.runmodule(argv=[__file__, '-vvs', '-x', '--pdb', '--pdb-failure'],
exit=False)
| closes #3717
| https://api.github.com/repos/pandas-dev/pandas/pulls/5009 | 2013-09-27T19:21:02Z | 2013-09-27T19:55:18Z | 2013-09-27T19:55:18Z | 2014-07-16T08:31:45Z |
ENH: Support for "52โ53-week fiscal year" / "4โ4โ5 calendar" and LastWeekOfMonth DateOffset | diff --git a/doc/source/release.rst b/doc/source/release.rst
index ed1834f14fc2e..a2015a3b361ac 100644
--- a/doc/source/release.rst
+++ b/doc/source/release.rst
@@ -62,7 +62,8 @@ New features
- Auto-detect field widths in read_fwf when unspecified (:issue:`4488`)
- ``to_csv()`` now outputs datetime objects according to a specified format string
via the ``date_format`` keyword (:issue:`4313`)
-
+ - Added ``LastWeekOfMonth`` DateOffset (:issue:`4637`)
+ - Added ``FY5253``, and ``FY5253Quarter`` DateOffsets (:issue:`4511`)
Experimental Features
~~~~~~~~~~~~~~~~~~~~~
diff --git a/doc/source/timeseries.rst b/doc/source/timeseries.rst
index cd12cc65dcd43..875ba2de93956 100644
--- a/doc/source/timeseries.rst
+++ b/doc/source/timeseries.rst
@@ -420,6 +420,7 @@ frequency increment. Specific offset logic like "month", "business day", or
CDay, "custom business day (experimental)"
Week, "one week, optionally anchored on a day of the week"
WeekOfMonth, "the x-th day of the y-th week of each month"
+ LastWeekOfMonth, "the x-th day of the last week of each month"
MonthEnd, "calendar month end"
MonthBegin, "calendar month begin"
BMonthEnd, "business month end"
@@ -428,10 +429,12 @@ frequency increment. Specific offset logic like "month", "business day", or
QuarterBegin, "calendar quarter begin"
BQuarterEnd, "business quarter end"
BQuarterBegin, "business quarter begin"
+ FY5253Quarter, "retail (aka 52-53 week) quarter"
YearEnd, "calendar year end"
YearBegin, "calendar year begin"
BYearEnd, "business year end"
BYearBegin, "business year begin"
+ FY5253, "retail (aka 52-53 week) year"
Hour, "one hour"
Minute, "one minute"
Second, "one second"
diff --git a/doc/source/v0.13.0.txt b/doc/source/v0.13.0.txt
index 4c1ece032310f..02f231170ab97 100644
--- a/doc/source/v0.13.0.txt
+++ b/doc/source/v0.13.0.txt
@@ -511,6 +511,8 @@ Enhancements
- Python csv parser now supports usecols (:issue:`4335`)
- DataFrame has a new ``interpolate`` method, similar to Series (:issue:`4434`, :issue:`1892`)
+- Added ``LastWeekOfMonth`` DateOffset (:issue:`4637`)
+- Added ``FY5253``, and ``FY5253Quarter`` DateOffsets (:issue:`4511`)
.. ipython:: python
diff --git a/pandas/tseries/frequencies.py b/pandas/tseries/frequencies.py
index 4878ebfccf915..cfe874484231b 100644
--- a/pandas/tseries/frequencies.py
+++ b/pandas/tseries/frequencies.py
@@ -95,6 +95,8 @@ def get_freq_code(freqstr):
code = _period_str_to_code(freqstr[0])
stride = freqstr[1]
except:
+ if com.is_integer(freqstr[1]):
+ raise
code = _period_str_to_code(freqstr[1])
stride = freqstr[0]
return code, stride
@@ -227,10 +229,10 @@ def get_period_alias(offset_str):
'us': 'U'
}
+#TODO: Can this be killed?
for _i, _weekday in enumerate(['MON', 'TUE', 'WED', 'THU', 'FRI']):
for _iweek in range(4):
_name = 'WOM-%d%s' % (_iweek + 1, _weekday)
- _offset_map[_name] = offsets.WeekOfMonth(week=_iweek, weekday=_i)
_rule_aliases[_name.replace('-', '@')] = _name
# Note that _rule_aliases is not 1:1 (d[BA]==d[A@DEC]), and so traversal
@@ -301,7 +303,7 @@ def to_offset(freqstr):
# hack to handle WOM-1MON
-opattern = re.compile(r'([\-]?\d*)\s*([A-Za-z]+([\-@]\d*[A-Za-z]+)?)')
+opattern = re.compile(r'([\-]?\d*)\s*([A-Za-z]+([\-@][\dA-Za-z\-]+)?)')
def _base_and_stride(freqstr):
@@ -356,16 +358,16 @@ def get_offset(name):
else:
if name in _rule_aliases:
name = _rule_aliases[name]
- try:
- if name not in _offset_map:
+
+ if name not in _offset_map:
+ try:
# generate and cache offset
offset = _make_offset(name)
- _offset_map[name] = offset
- return _offset_map[name]
- except (ValueError, TypeError, KeyError):
- # bad prefix or suffix
- pass
- raise ValueError('Bad rule name requested: %s.' % name)
+ except (ValueError, TypeError, KeyError):
+ # bad prefix or suffix
+ raise ValueError('Bad rule name requested: %s.' % name)
+ _offset_map[name] = offset
+ return _offset_map[name]
getOffset = get_offset
@@ -401,9 +403,6 @@ def get_legacy_offset_name(offset):
name = offset.name
return _legacy_reverse_map.get(name, name)
-get_offset_name = get_offset_name
-
-
def get_standard_freq(freq):
"""
Return the standardized frequency string
@@ -621,8 +620,12 @@ def _period_str_to_code(freqstr):
try:
freqstr = freqstr.upper()
return _period_code_map[freqstr]
- except:
- alias = _period_alias_dict[freqstr]
+ except KeyError:
+ try:
+ alias = _period_alias_dict[freqstr]
+ except KeyError:
+ raise ValueError("Unknown freqstr: %s" % freqstr)
+
return _period_code_map[alias]
@@ -839,16 +842,21 @@ def _get_monthly_rule(self):
'ce': 'M', 'be': 'BM'}.get(pos_check)
def _get_wom_rule(self):
- wdiffs = unique(np.diff(self.index.week))
- if not lib.ismember(wdiffs, set([4, 5])).all():
- return None
+# wdiffs = unique(np.diff(self.index.week))
+ #We also need -47, -49, -48 to catch index spanning year boundary
+# if not lib.ismember(wdiffs, set([4, 5, -47, -49, -48])).all():
+# return None
weekdays = unique(self.index.weekday)
if len(weekdays) > 1:
return None
+
+ week_of_months = unique((self.index.day - 1) // 7)
+ if len(week_of_months) > 1:
+ return None
# get which week
- week = (self.index[0].day - 1) // 7 + 1
+ week = week_of_months[0] + 1
wd = _weekday_rule_aliases[weekdays[0]]
return 'WOM-%d%s' % (week, wd)
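The tightened `_get_wom_rule` above now requires a consistent week-of-month as well as a consistent weekday. A small sketch of the intended inference behavior, using the fake-WOM index from the tests in this PR:

```python
import pandas as pd

# a genuine week-of-month series: the second Friday of each month
wom = pd.date_range("2013-01-01", periods=5, freq="WOM-2FRI")

# all Tuesdays, 4-5 weeks apart, but the week number within the month
# drifts, so this must not be inferred as a WOM rule
fake = pd.DatetimeIndex(["2013-08-27", "2013-10-01",
                         "2013-10-29", "2013-11-26"])
```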
diff --git a/pandas/tseries/offsets.py b/pandas/tseries/offsets.py
index 9c60e77363b9b..07efbcfdcd7ba 100644
--- a/pandas/tseries/offsets.py
+++ b/pandas/tseries/offsets.py
@@ -6,7 +6,7 @@
from pandas.tseries.tools import to_datetime
# import after tools, dateutil check
-from dateutil.relativedelta import relativedelta
+from dateutil.relativedelta import relativedelta, weekday
import pandas.tslib as tslib
from pandas.tslib import Timestamp
from pandas import _np_version_under1p7
@@ -15,6 +15,7 @@
'MonthBegin', 'BMonthBegin', 'MonthEnd', 'BMonthEnd',
'YearBegin', 'BYearBegin', 'YearEnd', 'BYearEnd',
'QuarterBegin', 'BQuarterBegin', 'QuarterEnd', 'BQuarterEnd',
+ 'LastWeekOfMonth', 'FY5253Quarter', 'FY5253',
'Week', 'WeekOfMonth',
'Hour', 'Minute', 'Second', 'Milli', 'Micro', 'Nano']
@@ -108,8 +109,8 @@ def _should_cache(self):
def _params(self):
attrs = [(k, v) for k, v in compat.iteritems(vars(self))
- if k not in ['kwds', '_offset', 'name', 'normalize',
- 'busdaycalendar', '_named']]
+ if (k not in ['kwds', 'name', 'normalize',
+ 'busdaycalendar']) and (k[0] != '_')]
attrs.extend(list(self.kwds.items()))
attrs = sorted(set(attrs))
@@ -701,15 +702,23 @@ def _from_name(cls, suffix=None):
weekday = _weekday_to_int[suffix]
return cls(weekday=weekday)
+class WeekDay(object):
+ MON = 0
+ TUE = 1
+ WED = 2
+ THU = 3
+ FRI = 4
+ SAT = 5
+ SUN = 6
_int_to_weekday = {
- 0: 'MON',
- 1: 'TUE',
- 2: 'WED',
- 3: 'THU',
- 4: 'FRI',
- 5: 'SAT',
- 6: 'SUN'
+ WeekDay.MON: 'MON',
+ WeekDay.TUE: 'TUE',
+ WeekDay.WED: 'WED',
+ WeekDay.THU: 'THU',
+ WeekDay.FRI: 'FRI',
+ WeekDay.SAT: 'SAT',
+ WeekDay.SUN: 'SUN'
}
_weekday_to_int = dict((v, k) for k, v in _int_to_weekday.items())
@@ -800,6 +809,80 @@ def _from_name(cls, suffix=None):
week = int(suffix[0]) - 1
weekday = _weekday_to_int[suffix[1:]]
return cls(week=week, weekday=weekday)
+
+class LastWeekOfMonth(CacheableOffset, DateOffset):
+ """
+    Describes monthly dates in the last week of the month, like "the last Tuesday of each month".
+
+ Parameters
+ ----------
+ n : int
+ weekday : {0, 1, ..., 6}
+ 0: Mondays
+ 1: Tuesdays
+ 2: Wednesdays
+ 3: Thursdays
+ 4: Fridays
+ 5: Saturdays
+ 6: Sundays
+ """
+ def __init__(self, n=1, **kwds):
+ self.n = n
+ self.weekday = kwds['weekday']
+
+ if self.n == 0:
+ raise ValueError('N cannot be 0')
+
+ if self.weekday < 0 or self.weekday > 6:
+ raise ValueError('Day must be 0<=day<=6, got %d' %
+ self.weekday)
+
+ self.kwds = kwds
+
+ def apply(self, other):
+ offsetOfMonth = self.getOffsetOfMonth(other)
+
+ if offsetOfMonth > other:
+ if self.n > 0:
+ months = self.n - 1
+ else:
+ months = self.n
+ elif offsetOfMonth == other:
+ months = self.n
+ else:
+ if self.n > 0:
+ months = self.n
+ else:
+ months = self.n + 1
+
+ return self.getOffsetOfMonth(other + relativedelta(months=months, day=1))
+
+ def getOffsetOfMonth(self, dt):
+ m = MonthEnd()
+ d = datetime(dt.year, dt.month, 1)
+
+ eom = m.rollforward(d)
+
+ w = Week(weekday=self.weekday)
+
+ return w.rollback(eom)
+
+ def onOffset(self, dt):
+ return dt == self.getOffsetOfMonth(dt)
+
+ @property
+ def rule_code(self):
+ return '%s-%s' % (self._prefix, _int_to_weekday.get(self.weekday, ''))
+
+ _prefix = 'LWOM'
+
+ @classmethod
+ def _from_name(cls, suffix=None):
+ if not suffix:
+ raise ValueError("Prefix %r requires a suffix." % (cls._prefix))
+ # TODO: handle n here...
+ weekday = _weekday_to_int[suffix]
+ return cls(weekday=weekday)
class QuarterOffset(DateOffset):
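A sketch of the ``apply`` semantics of ``LastWeekOfMonth`` defined above, using dates taken from the accompanying tests: a date before the month's last matching weekday rolls forward within the month, while a date already on offset advances to the next month.

```python
from datetime import datetime
from pandas.tseries.offsets import LastWeekOfMonth

lwom_sat = LastWeekOfMonth(weekday=5)        # last Saturday of each month

rolled = datetime(2013, 8, 30) + lwom_sat    # not on offset: stays in August
advanced = datetime(2013, 8, 31) + lwom_sat  # on offset: jumps to September
```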
@@ -876,7 +959,319 @@ def onOffset(self, dt):
modMonth = (dt.month - self.startingMonth) % 3
return BMonthEnd().onOffset(dt) and modMonth == 0
+class FY5253(CacheableOffset, DateOffset):
+ """
+    Describes a 52-53 week fiscal year. This is also known as a 4-4-5 calendar.
+
+ It is used by companies that desire that their
+ fiscal year always end on the same day of the week.
+
+ It is a method of managing accounting periods.
+ It is a common calendar structure for some industries,
+    such as retail, manufacturing and parking.
+
+ For more information see:
+ http://en.wikipedia.org/wiki/4%E2%80%934%E2%80%935_calendar
+
+
+ The year may either:
+ - end on the last X day of the Y month.
+ - end on the last X day closest to the last day of the Y month.
+
+ X is a specific day of the week.
+ Y is a certain month of the year
+
+ Parameters
+ ----------
+ n : int
+ weekday : {0, 1, ..., 6}
+ 0: Mondays
+ 1: Tuesdays
+ 2: Wednesdays
+ 3: Thursdays
+ 4: Fridays
+ 5: Saturdays
+ 6: Sundays
+ startingMonth : The month in which fiscal years end. {1, 2, ... 12}
+ variation : str
+        {"nearest", "last"} for "NearestEndMonth" or "LastOfMonth"
+ """
+
+ _prefix = 'RE'
+ _suffix_prefix_last = 'L'
+ _suffix_prefix_nearest = 'N'
+
+ def __init__(self, n=1, **kwds):
+ self.n = n
+ self.startingMonth = kwds['startingMonth']
+ self.weekday = kwds["weekday"]
+
+ self.variation = kwds["variation"]
+
+ self.kwds = kwds
+
+ if self.n == 0:
+ raise ValueError('N cannot be 0')
+
+ if self.variation not in ["nearest", "last"]:
+ raise ValueError('%s is not a valid variation' % self.variation)
+
+ if self.variation == "nearest":
+ self._rd_forward = relativedelta(weekday=weekday(self.weekday))
+ self._rd_backward = relativedelta(weekday=weekday(self.weekday)(-1))
+ else:
+ self._offset_lwom = LastWeekOfMonth(n=1, weekday=self.weekday)
+
+ def isAnchored(self):
+ return self.n == 1 \
+ and self.startingMonth is not None \
+ and self.weekday is not None
+
+ def onOffset(self, dt):
+ year_end = self.get_year_end(dt)
+ return year_end == dt
+
+ def apply(self, other):
+ n = self.n
+ if n > 0:
+ year_end = self.get_year_end(other)
+ if other < year_end:
+ other = year_end
+ n -= 1
+ elif other > year_end:
+ other = self.get_year_end(other + relativedelta(years=1))
+ n -= 1
+
+ return self.get_year_end(other + relativedelta(years=n))
+ else:
+ n = -n
+ year_end = self.get_year_end(other)
+ if other > year_end:
+ other = year_end
+ n -= 1
+ elif other < year_end:
+ other = self.get_year_end(other + relativedelta(years=-1))
+ n -= 1
+
+ return self.get_year_end(other + relativedelta(years=-n))
+
+ def get_year_end(self, dt):
+ if self.variation == "nearest":
+ return self._get_year_end_nearest(dt)
+ else:
+ return self._get_year_end_last(dt)
+
+ def get_target_month_end(self, dt):
+ target_month = datetime(year=dt.year, month=self.startingMonth, day=1)
+ next_month_first_of = target_month + relativedelta(months=+1)
+ return next_month_first_of + relativedelta(days=-1)
+
+ def _get_year_end_nearest(self, dt):
+ target_date = self.get_target_month_end(dt)
+ if target_date.weekday() == self.weekday:
+ return target_date
+ else:
+ forward = target_date + self._rd_forward
+ backward = target_date + self._rd_backward
+
+ if forward - target_date < target_date - backward:
+ return forward
+ else:
+ return backward
+
+ def _get_year_end_last(self, dt):
+ current_year = datetime(year=dt.year, month=self.startingMonth, day=1)
+ return current_year + self._offset_lwom
+
+ @property
+ def rule_code(self):
+ suffix = self.get_rule_code_suffix()
+ return "%s-%s" % (self._get_prefix(), suffix)
+
+ def _get_prefix(self):
+ return self._prefix
+
+ def _get_suffix_prefix(self):
+ if self.variation == "nearest":
+ return self._suffix_prefix_nearest
+ else:
+ return self._suffix_prefix_last
+
+ def get_rule_code_suffix(self):
+        return '%s-%s-%s' % (self._get_suffix_prefix(),
+                             _int_to_month[self.startingMonth],
+                             _int_to_weekday[self.weekday])
+
+ @classmethod
+    def _parse_suffix(cls, variation_code, startingMonth_code, weekday_code):
+        if variation_code == "N":
+            variation = "nearest"
+        elif variation_code == "L":
+            variation = "last"
+        else:
+            raise ValueError("Unable to parse variation_code: %s" % (variation_code,))
+
+ startingMonth = _month_to_int[startingMonth_code]
+ weekday = _weekday_to_int[weekday_code]
+
+ return {
+ "weekday":weekday,
+ "startingMonth":startingMonth,
+ "variation":variation,
+ }
+
+ @classmethod
+ def _from_name(cls, *args):
+ return cls(**cls._parse_suffix(*args))
+
+class FY5253Quarter(CacheableOffset, DateOffset):
+ """
+ DateOffset increments between business quarter dates
+    for a 52-53 week fiscal year (also known as a 4-4-5 calendar).
+
+ It is used by companies that desire that their
+ fiscal year always end on the same day of the week.
+
+ It is a method of managing accounting periods.
+ It is a common calendar structure for some industries,
+    such as retail, manufacturing and parking.
+
+ For more information see:
+ http://en.wikipedia.org/wiki/4%E2%80%934%E2%80%935_calendar
+
+ The year may either:
+ - end on the last X day of the Y month.
+ - end on the last X day closest to the last day of the Y month.
+
+ X is a specific day of the week.
+ Y is a certain month of the year
+
+
+ Parameters
+ ----------
+ n : int
+ weekday : {0, 1, ..., 6}
+ 0: Mondays
+ 1: Tuesdays
+ 2: Wednesdays
+ 3: Thursdays
+ 4: Fridays
+ 5: Saturdays
+ 6: Sundays
+ startingMonth : The month in which fiscal years end. {1, 2, ... 12}
+    qtr_with_extra_week : The quarter number that has the leap
+        or 14-week quarter when needed. {1, 2, 3, 4}
+ variation : str
+ {"nearest", "last"} for "LastOfMonth" or "NearestEndMonth"
+ """
+
+ _prefix = 'REQ'
+
+ def __init__(self, n=1, **kwds):
+ self.n = n
+
+ self.qtr_with_extra_week = kwds["qtr_with_extra_week"]
+
+ self.kwds = kwds
+
+ if self.n == 0:
+ raise ValueError('N cannot be 0')
+
+        self._offset = FY5253(
+                            startingMonth=kwds['startingMonth'],
+                            weekday=kwds["weekday"],
+                            variation=kwds["variation"])
+
+ def isAnchored(self):
+ return self.n == 1 and self._offset.isAnchored()
+
+ def apply(self, other):
+ n = self.n
+
+ if n > 0:
+ while n > 0:
+ if not self._offset.onOffset(other):
+ qtr_lens = self.get_weeks(other)
+ start = other - self._offset
+ else:
+ start = other
+ qtr_lens = self.get_weeks(other + self._offset)
+
+ for weeks in qtr_lens:
+ start += relativedelta(weeks=weeks)
+ if start > other:
+ other = start
+ n -= 1
+ break
+
+ else:
+ n = -n
+ while n > 0:
+ if not self._offset.onOffset(other):
+ qtr_lens = self.get_weeks(other)
+ end = other + self._offset
+ else:
+ end = other
+ qtr_lens = self.get_weeks(other)
+
+ for weeks in reversed(qtr_lens):
+ end -= relativedelta(weeks=weeks)
+ if end < other:
+ other = end
+ n -= 1
+ break
+ return other
+
+ def get_weeks(self, dt):
+ ret = [13] * 4
+
+ year_has_extra_week = self.year_has_extra_week(dt)
+
+ if year_has_extra_week:
+ ret[self.qtr_with_extra_week-1] = 14
+
+ return ret
+
+ def year_has_extra_week(self, dt):
+ if self._offset.onOffset(dt):
+ prev_year_end = dt - self._offset
+ next_year_end = dt
+ else:
+ next_year_end = dt + self._offset
+ prev_year_end = dt - self._offset
+
+ week_in_year = (next_year_end - prev_year_end).days/7
+
+ return week_in_year == 53
+
+ def onOffset(self, dt):
+ if self._offset.onOffset(dt):
+ return True
+
+        prev_year_end = dt - self._offset
+
+        qtr_lens = self.get_weeks(dt)
+
+        current = prev_year_end
+ for qtr_len in qtr_lens[0:4]:
+ current += relativedelta(weeks=qtr_len)
+ if dt == current:
+ return True
+ return False
+
+ @property
+ def rule_code(self):
+ suffix = self._offset.get_rule_code_suffix()
+ return "%s-%s" %(self._prefix, "%s-%d" % (suffix, self.qtr_with_extra_week))
+
+ @classmethod
+ def _from_name(cls, *args):
+ return cls(**dict(FY5253._parse_suffix(*args[:-1]), qtr_with_extra_week=int(args[-1])))
+
_int_to_month = {
1: 'JAN',
2: 'FEB',
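The ``FY5253`` class added above is exercised in the test changes that follow; a minimal sketch of its year-end arithmetic for a fiscal year ending on the last Saturday of August (the Wikipedia example used in the tests):

```python
from datetime import datetime
from pandas.tseries.offsets import FY5253

# fiscal years end on the last Saturday (weekday=5) of August
yearly = FY5253(startingMonth=8, weekday=5, variation="last")

fy2013 = datetime(2012, 8, 25) + yearly   # 2012 year end -> 2013 year end
fy2014 = datetime(2013, 8, 31) + yearly   # 2013 year end -> 2014 year end
```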
@@ -1452,6 +1847,8 @@ def generate_range(start=None, end=None, periods=None,
Hour, # 'H'
Day, # 'D'
WeekOfMonth, # 'WOM'
+ FY5253,
+ FY5253Quarter,
])
if not _np_version_under1p7:
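Rounding off the offsets changes, a sketch of ``FY5253Quarter`` quarter-end arithmetic, using the GMCR dates from the tests below (fiscal year ends on the last Saturday of September; the fourth quarter absorbs the extra week in 53-week years):

```python
from datetime import datetime
from pandas.tseries.offsets import FY5253Quarter

qtr = FY5253Quarter(startingMonth=9, weekday=5,
                    qtr_with_extra_week=4, variation="last")

# stepping from one fiscal quarter end to the next
q2_2010 = datetime(2010, 3, 27) + qtr
```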
diff --git a/pandas/tseries/tests/test_frequencies.py b/pandas/tseries/tests/test_frequencies.py
index 00a3d392a45c0..f1078f44efd13 100644
--- a/pandas/tseries/tests/test_frequencies.py
+++ b/pandas/tseries/tests/test_frequencies.py
@@ -148,17 +148,22 @@ def _check_tick(self, base_delta, code):
self.assert_(infer_freq(index) is None)
def test_weekly(self):
- days = ['MON', 'TUE', 'WED', 'THU', 'FRI']
+ days = ['MON', 'TUE', 'WED', 'THU', 'FRI', 'SAT', 'SUN']
for day in days:
self._check_generated_range('1/1/2000', 'W-%s' % day)
def test_week_of_month(self):
- days = ['MON', 'TUE', 'WED', 'THU', 'FRI']
+ days = ['MON', 'TUE', 'WED', 'THU', 'FRI', 'SAT', 'SUN']
for day in days:
for i in range(1, 5):
self._check_generated_range('1/1/2000', 'WOM-%d%s' % (i, day))
+
+ def test_week_of_month_fake(self):
+        #All of these dates are on the same day of the week and are 4 or 5 weeks apart
+ index = DatetimeIndex(["2013-08-27","2013-10-01","2013-10-29","2013-11-26"])
+ assert infer_freq(index) != 'WOM-4TUE'
def test_monthly(self):
self._check_generated_range('1/1/2000', 'M')
@@ -195,7 +200,7 @@ def _check_generated_range(self, start, freq):
gen = date_range(start, periods=7, freq=freq)
index = _dti(gen.values)
if not freq.startswith('Q-'):
- self.assert_(infer_freq(index) == gen.freqstr)
+ self.assertEqual(infer_freq(index), gen.freqstr)
else:
inf_freq = infer_freq(index)
self.assert_((inf_freq == 'Q-DEC' and
diff --git a/pandas/tseries/tests/test_offsets.py b/pandas/tseries/tests/test_offsets.py
index 8592a2c2d8d9c..7ebe6c0cfb728 100644
--- a/pandas/tseries/tests/test_offsets.py
+++ b/pandas/tseries/tests/test_offsets.py
@@ -1,4 +1,5 @@
from datetime import date, datetime, timedelta
+from dateutil.relativedelta import relativedelta
from pandas.compat import range
from pandas import compat
import unittest
@@ -24,7 +25,8 @@
from pandas.lib import Timestamp
from pandas.util.testing import assertRaisesRegexp
import pandas.util.testing as tm
-from pandas.tseries.offsets import BusinessMonthEnd, CacheableOffset
+from pandas.tseries.offsets import BusinessMonthEnd, CacheableOffset, \
+ LastWeekOfMonth, FY5253, FY5253Quarter, WeekDay
from pandas import _np_version_under1p7
@@ -524,7 +526,9 @@ def test_weekmask_and_holidays(self):
def assertOnOffset(offset, date, expected):
actual = offset.onOffset(date)
- assert actual == expected
+    assert actual == expected, ("\nExpected: %s\nActual: %s\nFor Offset: %s"
+ "\nAt Date: %s" %
+ (expected, actual, offset, date))
class TestWeek(unittest.TestCase):
@@ -674,6 +678,75 @@ def test_onOffset(self):
offset = WeekOfMonth(week=week, weekday=weekday)
self.assert_(offset.onOffset(date) == expected)
+class TestLastWeekOfMonth(unittest.TestCase):
+ def test_constructor(self):
+ assertRaisesRegexp(ValueError, "^N cannot be 0", \
+ LastWeekOfMonth, n=0, weekday=1)
+
+ assertRaisesRegexp(ValueError, "^Day", LastWeekOfMonth, n=1, weekday=-1)
+ assertRaisesRegexp(ValueError, "^Day", LastWeekOfMonth, n=1, weekday=7)
+
+ def test_offset(self):
+ #### Saturday
+ last_sat = datetime(2013,8,31)
+ next_sat = datetime(2013,9,28)
+ offset_sat = LastWeekOfMonth(n=1, weekday=5)
+
+ one_day_before = (last_sat + timedelta(days=-1))
+ self.assert_(one_day_before + offset_sat == last_sat)
+
+ one_day_after = (last_sat + timedelta(days=+1))
+ self.assert_(one_day_after + offset_sat == next_sat)
+
+ #Test On that day
+ self.assert_(last_sat + offset_sat == next_sat)
+
+ #### Thursday
+
+ offset_thur = LastWeekOfMonth(n=1, weekday=3)
+ last_thurs = datetime(2013,1,31)
+ next_thurs = datetime(2013,2,28)
+
+ one_day_before = last_thurs + timedelta(days=-1)
+ self.assert_(one_day_before + offset_thur == last_thurs)
+
+ one_day_after = last_thurs + timedelta(days=+1)
+ self.assert_(one_day_after + offset_thur == next_thurs)
+
+ # Test on that day
+ self.assert_(last_thurs + offset_thur == next_thurs)
+
+ three_before = last_thurs + timedelta(days=-3)
+ self.assert_(three_before + offset_thur == last_thurs)
+
+ two_after = last_thurs + timedelta(days=+2)
+ self.assert_(two_after + offset_thur == next_thurs)
+
+ offset_sunday = LastWeekOfMonth(n=1, weekday=WeekDay.SUN)
+ self.assert_(datetime(2013,7,31) + offset_sunday == datetime(2013,8,25))
+
+ def test_onOffset(self):
+ test_cases = [
+ (WeekDay.SUN, datetime(2013, 1, 27), True),
+ (WeekDay.SAT, datetime(2013, 3, 30), True),
+ (WeekDay.MON, datetime(2013, 2, 18), False), #Not the last Mon
+ (WeekDay.SUN, datetime(2013, 2, 25), False), #Not a SUN
+ (WeekDay.MON, datetime(2013, 2, 25), True),
+ (WeekDay.SAT, datetime(2013, 11, 30), True),
+
+ (WeekDay.SAT, datetime(2006, 8, 26), True),
+ (WeekDay.SAT, datetime(2007, 8, 25), True),
+ (WeekDay.SAT, datetime(2008, 8, 30), True),
+ (WeekDay.SAT, datetime(2009, 8, 29), True),
+ (WeekDay.SAT, datetime(2010, 8, 28), True),
+ (WeekDay.SAT, datetime(2011, 8, 27), True),
+ (WeekDay.SAT, datetime(2019, 8, 31), True),
+ ]
+
+ for weekday, date, expected in test_cases:
+ offset = LastWeekOfMonth(weekday=weekday)
+ self.assert_(offset.onOffset(date) == expected, date)
+
class TestBMonthBegin(unittest.TestCase):
def test_offset(self):
@@ -1101,7 +1174,379 @@ def test_onOffset(self):
for offset, date, expected in tests:
assertOnOffset(offset, date, expected)
+def makeFY5253LastOfMonthQuarter(*args, **kwds):
+ return FY5253Quarter(*args, variation="last", **kwds)
+def makeFY5253NearestEndMonthQuarter(*args, **kwds):
+ return FY5253Quarter(*args, variation="nearest", **kwds)
+
+def makeFY5253NearestEndMonth(*args, **kwds):
+ return FY5253(*args, variation="nearest", **kwds)
+
+def makeFY5253LastOfMonth(*args, **kwds):
+ return FY5253(*args, variation="last", **kwds)
+
+class TestFY5253LastOfMonth(unittest.TestCase):
+ def test_onOffset(self):
+
+ offset_lom_sat_aug = makeFY5253LastOfMonth(1, startingMonth=8, weekday=WeekDay.SAT)
+ offset_lom_sat_sep = makeFY5253LastOfMonth(1, startingMonth=9, weekday=WeekDay.SAT)
+
+ tests = [
+ #From Wikipedia (see: http://en.wikipedia.org/wiki/4%E2%80%934%E2%80%935_calendar#Last_Saturday_of_the_month_at_fiscal_year_end)
+ (offset_lom_sat_aug, datetime(2006, 8, 26), True),
+ (offset_lom_sat_aug, datetime(2007, 8, 25), True),
+ (offset_lom_sat_aug, datetime(2008, 8, 30), True),
+ (offset_lom_sat_aug, datetime(2009, 8, 29), True),
+ (offset_lom_sat_aug, datetime(2010, 8, 28), True),
+ (offset_lom_sat_aug, datetime(2011, 8, 27), True),
+ (offset_lom_sat_aug, datetime(2012, 8, 25), True),
+ (offset_lom_sat_aug, datetime(2013, 8, 31), True),
+ (offset_lom_sat_aug, datetime(2014, 8, 30), True),
+ (offset_lom_sat_aug, datetime(2015, 8, 29), True),
+ (offset_lom_sat_aug, datetime(2016, 8, 27), True),
+ (offset_lom_sat_aug, datetime(2017, 8, 26), True),
+ (offset_lom_sat_aug, datetime(2018, 8, 25), True),
+ (offset_lom_sat_aug, datetime(2019, 8, 31), True),
+
+ (offset_lom_sat_aug, datetime(2006, 8, 27), False),
+ (offset_lom_sat_aug, datetime(2007, 8, 28), False),
+ (offset_lom_sat_aug, datetime(2008, 8, 31), False),
+ (offset_lom_sat_aug, datetime(2009, 8, 30), False),
+ (offset_lom_sat_aug, datetime(2010, 8, 29), False),
+ (offset_lom_sat_aug, datetime(2011, 8, 28), False),
+
+ (offset_lom_sat_aug, datetime(2006, 8, 25), False),
+ (offset_lom_sat_aug, datetime(2007, 8, 24), False),
+ (offset_lom_sat_aug, datetime(2008, 8, 29), False),
+ (offset_lom_sat_aug, datetime(2009, 8, 28), False),
+ (offset_lom_sat_aug, datetime(2010, 8, 27), False),
+ (offset_lom_sat_aug, datetime(2011, 8, 26), False),
+ (offset_lom_sat_aug, datetime(2019, 8, 30), False),
+
+ #From GMCR (see for example: http://yahoo.brand.edgar-online.com/Default.aspx?companyid=3184&formtypeID=7)
+ (offset_lom_sat_sep, datetime(2010, 9, 25), True),
+ (offset_lom_sat_sep, datetime(2011, 9, 24), True),
+ (offset_lom_sat_sep, datetime(2012, 9, 29), True),
+
+ ]
+
+ for offset, date, expected in tests:
+ assertOnOffset(offset, date, expected)
+
+ def test_apply(self):
+ offset_lom_aug_sat = makeFY5253LastOfMonth(startingMonth=8, weekday=WeekDay.SAT)
+ offset_lom_aug_sat_1 = makeFY5253LastOfMonth(n=1, startingMonth=8, weekday=WeekDay.SAT)
+
+ date_seq_lom_aug_sat = [datetime(2006, 8, 26), datetime(2007, 8, 25),
+ datetime(2008, 8, 30), datetime(2009, 8, 29),
+ datetime(2010, 8, 28), datetime(2011, 8, 27),
+ datetime(2012, 8, 25), datetime(2013, 8, 31),
+ datetime(2014, 8, 30), datetime(2015, 8, 29),
+ datetime(2016, 8, 27)]
+
+ tests = [
+ (offset_lom_aug_sat, date_seq_lom_aug_sat),
+ (offset_lom_aug_sat_1, date_seq_lom_aug_sat),
+ (offset_lom_aug_sat, [datetime(2006, 8, 25)] + date_seq_lom_aug_sat),
+ (offset_lom_aug_sat_1, [datetime(2006, 8, 27)] + date_seq_lom_aug_sat[1:]),
+ (makeFY5253LastOfMonth(n=-1, startingMonth=8, weekday=WeekDay.SAT), list(reversed(date_seq_lom_aug_sat))),
+ ]
+ for test in tests:
+ offset, data = test
+ current = data[0]
+ for datum in data[1:]:
+ current = current + offset
+ self.assertEqual(current, datum)
+
+class TestFY5253NearestEndMonth(unittest.TestCase):
+ def test_get_target_month_end(self):
+ self.assertEqual(makeFY5253NearestEndMonth(startingMonth=8, weekday=WeekDay.SAT).get_target_month_end(datetime(2013,1,1)), datetime(2013,8,31))
+ self.assertEqual(makeFY5253NearestEndMonth(startingMonth=12, weekday=WeekDay.SAT).get_target_month_end(datetime(2013,1,1)), datetime(2013,12,31))
+ self.assertEqual(makeFY5253NearestEndMonth(startingMonth=2, weekday=WeekDay.SAT).get_target_month_end(datetime(2013,1,1)), datetime(2013,2,28))
+
+ def test_get_year_end(self):
+ self.assertEqual(makeFY5253NearestEndMonth(startingMonth=8, weekday=WeekDay.SAT).get_year_end(datetime(2013,1,1)), datetime(2013,8,31))
+ self.assertEqual(makeFY5253NearestEndMonth(startingMonth=8, weekday=WeekDay.SUN).get_year_end(datetime(2013,1,1)), datetime(2013,9,1))
+ self.assertEqual(makeFY5253NearestEndMonth(startingMonth=8, weekday=WeekDay.FRI).get_year_end(datetime(2013,1,1)), datetime(2013,8,30))
+
+ def test_onOffset(self):
+ offset_lom_aug_sat = makeFY5253NearestEndMonth(1, startingMonth=8, weekday=WeekDay.SAT)
+ offset_lom_aug_thu = makeFY5253NearestEndMonth(1, startingMonth=8, weekday=WeekDay.THU)
+
+ tests = [
+# From Wikipedia (see: http://en.wikipedia.org/wiki/4%E2%80%934%E2%80%935_calendar#Saturday_nearest_the_end_of_month)
+# 2006-09-02 2006 September 2
+# 2007-09-01 2007 September 1
+# 2008-08-30 2008 August 30 (leap year)
+# 2009-08-29 2009 August 29
+# 2010-08-28 2010 August 28
+# 2011-09-03 2011 September 3
+# 2012-09-01 2012 September 1 (leap year)
+# 2013-08-31 2013 August 31
+# 2014-08-30 2014 August 30
+# 2015-08-29 2015 August 29
+# 2016-09-03 2016 September 3 (leap year)
+# 2017-09-02 2017 September 2
+# 2018-09-01 2018 September 1
+# 2019-08-31 2019 August 31
+ (offset_lom_aug_sat, datetime(2006, 9, 2), True),
+ (offset_lom_aug_sat, datetime(2007, 9, 1), True),
+ (offset_lom_aug_sat, datetime(2008, 8, 30), True),
+ (offset_lom_aug_sat, datetime(2009, 8, 29), True),
+ (offset_lom_aug_sat, datetime(2010, 8, 28), True),
+ (offset_lom_aug_sat, datetime(2011, 9, 3), True),
+
+ (offset_lom_aug_sat, datetime(2016, 9, 3), True),
+ (offset_lom_aug_sat, datetime(2017, 9, 2), True),
+ (offset_lom_aug_sat, datetime(2018, 9, 1), True),
+ (offset_lom_aug_sat, datetime(2019, 8, 31), True),
+
+ (offset_lom_aug_sat, datetime(2006, 8, 27), False),
+ (offset_lom_aug_sat, datetime(2007, 8, 28), False),
+ (offset_lom_aug_sat, datetime(2008, 8, 31), False),
+ (offset_lom_aug_sat, datetime(2009, 8, 30), False),
+ (offset_lom_aug_sat, datetime(2010, 8, 29), False),
+ (offset_lom_aug_sat, datetime(2011, 8, 28), False),
+
+ (offset_lom_aug_sat, datetime(2006, 8, 25), False),
+ (offset_lom_aug_sat, datetime(2007, 8, 24), False),
+ (offset_lom_aug_sat, datetime(2008, 8, 29), False),
+ (offset_lom_aug_sat, datetime(2009, 8, 28), False),
+ (offset_lom_aug_sat, datetime(2010, 8, 27), False),
+ (offset_lom_aug_sat, datetime(2011, 8, 26), False),
+ (offset_lom_aug_sat, datetime(2019, 8, 30), False),
+
+ #From Micron, see: http://google.brand.edgar-online.com/?sym=MU&formtypeID=7
+ (offset_lom_aug_thu, datetime(2012, 8, 30), True),
+ (offset_lom_aug_thu, datetime(2011, 9, 1), True),
+
+ ]
+
+ for offset, date, expected in tests:
+ assertOnOffset(offset, date, expected)
+
+ def test_apply(self):
+ date_seq_nem_8_sat = [datetime(2006, 9, 2), datetime(2007, 9, 1), datetime(2008, 8, 30), datetime(2009, 8, 29), datetime(2010, 8, 28), datetime(2011, 9, 3)]
+
+ tests = [
+ (makeFY5253NearestEndMonth(startingMonth=8, weekday=WeekDay.SAT), date_seq_nem_8_sat),
+ (makeFY5253NearestEndMonth(n=1, startingMonth=8, weekday=WeekDay.SAT), date_seq_nem_8_sat),
+ (makeFY5253NearestEndMonth(startingMonth=8, weekday=WeekDay.SAT), [datetime(2006, 9, 1)] + date_seq_nem_8_sat),
+ (makeFY5253NearestEndMonth(n=1, startingMonth=8, weekday=WeekDay.SAT), [datetime(2006, 9, 3)] + date_seq_nem_8_sat[1:]),
+ (makeFY5253NearestEndMonth(n=-1, startingMonth=8, weekday=WeekDay.SAT), list(reversed(date_seq_nem_8_sat))),
+ ]
+ for test in tests:
+ offset, data = test
+ current = data[0]
+ for datum in data[1:]:
+ current = current + offset
+ self.assertEqual(current, datum)
+
+class TestFY5253LastOfMonthQuarter(unittest.TestCase):
+
+ def test_isAnchored(self):
+ self.assert_(makeFY5253LastOfMonthQuarter(startingMonth=1, weekday=WeekDay.SAT, qtr_with_extra_week=4).isAnchored())
+ self.assert_(makeFY5253LastOfMonthQuarter(weekday=WeekDay.SAT, startingMonth=3, qtr_with_extra_week=4).isAnchored())
+ self.assert_(not makeFY5253LastOfMonthQuarter(2, startingMonth=1, weekday=WeekDay.SAT, qtr_with_extra_week=4).isAnchored())
+
+ def test_equality(self):
+ self.assertEqual(makeFY5253LastOfMonthQuarter(startingMonth=1, weekday=WeekDay.SAT, qtr_with_extra_week=4), makeFY5253LastOfMonthQuarter(startingMonth=1, weekday=WeekDay.SAT, qtr_with_extra_week=4))
+ self.assertNotEqual(makeFY5253LastOfMonthQuarter(startingMonth=1, weekday=WeekDay.SAT, qtr_with_extra_week=4), makeFY5253LastOfMonthQuarter(startingMonth=1, weekday=WeekDay.SUN, qtr_with_extra_week=4))
+ self.assertNotEqual(makeFY5253LastOfMonthQuarter(startingMonth=1, weekday=WeekDay.SAT, qtr_with_extra_week=4), makeFY5253LastOfMonthQuarter(startingMonth=2, weekday=WeekDay.SAT, qtr_with_extra_week=4))
+
+ def test_offset(self):
+ offset = makeFY5253LastOfMonthQuarter(1, startingMonth=9, weekday=WeekDay.SAT, qtr_with_extra_week=4)
+ offset2 = makeFY5253LastOfMonthQuarter(2, startingMonth=9, weekday=WeekDay.SAT, qtr_with_extra_week=4)
+ offset4 = makeFY5253LastOfMonthQuarter(4, startingMonth=9, weekday=WeekDay.SAT, qtr_with_extra_week=4)
+
+ offset_neg1 = makeFY5253LastOfMonthQuarter(-1, startingMonth=9, weekday=WeekDay.SAT, qtr_with_extra_week=4)
+ offset_neg2 = makeFY5253LastOfMonthQuarter(-2, startingMonth=9, weekday=WeekDay.SAT, qtr_with_extra_week=4)
+
+ GMCR = [datetime(2010, 3, 27),
+ datetime(2010, 6, 26),
+ datetime(2010, 9, 25),
+ datetime(2010, 12, 25),
+ datetime(2011, 3, 26),
+ datetime(2011, 6, 25),
+ datetime(2011, 9, 24),
+ datetime(2011, 12, 24),
+ datetime(2012, 3, 24),
+ datetime(2012, 6, 23),
+ datetime(2012, 9, 29),
+ datetime(2012, 12, 29),
+ datetime(2013, 3, 30),
+ datetime(2013, 6, 29)]
+
+
+ assertEq(offset, base=GMCR[0], expected=GMCR[1])
+ assertEq(offset, base=GMCR[0] + relativedelta(days=-1), expected=GMCR[0])
+ assertEq(offset, base=GMCR[1], expected=GMCR[2])
+
+ assertEq(offset2, base=GMCR[0], expected=GMCR[2])
+ assertEq(offset4, base=GMCR[0], expected=GMCR[4])
+
+ assertEq(offset_neg1, base=GMCR[-1], expected=GMCR[-2])
+ assertEq(offset_neg1, base=GMCR[-1] + relativedelta(days=+1), expected=GMCR[-1])
+ assertEq(offset_neg2, base=GMCR[-1], expected=GMCR[-3])
+
+ date = GMCR[0] + relativedelta(days=-1)
+ for expected in GMCR:
+ assertEq(offset, date, expected)
+ date = date + offset
+
+ date = GMCR[-1] + relativedelta(days=+1)
+ for expected in reversed(GMCR):
+ assertEq(offset_neg1, date, expected)
+ date = date + offset_neg1
+
+
+ def test_onOffset(self):
+ lomq_aug_sat_4 = makeFY5253LastOfMonthQuarter(1, startingMonth=8, weekday=WeekDay.SAT, qtr_with_extra_week=4)
+ lomq_sep_sat_4 = makeFY5253LastOfMonthQuarter(1, startingMonth=9, weekday=WeekDay.SAT, qtr_with_extra_week=4)
+
+ tests = [
+ #From Wikipedia
+ (lomq_aug_sat_4, datetime(2006, 8, 26), True),
+ (lomq_aug_sat_4, datetime(2007, 8, 25), True),
+ (lomq_aug_sat_4, datetime(2008, 8, 30), True),
+ (lomq_aug_sat_4, datetime(2009, 8, 29), True),
+ (lomq_aug_sat_4, datetime(2010, 8, 28), True),
+ (lomq_aug_sat_4, datetime(2011, 8, 27), True),
+ (lomq_aug_sat_4, datetime(2019, 8, 31), True),
+
+ (lomq_aug_sat_4, datetime(2006, 8, 27), False),
+ (lomq_aug_sat_4, datetime(2007, 8, 28), False),
+ (lomq_aug_sat_4, datetime(2008, 8, 31), False),
+ (lomq_aug_sat_4, datetime(2009, 8, 30), False),
+ (lomq_aug_sat_4, datetime(2010, 8, 29), False),
+ (lomq_aug_sat_4, datetime(2011, 8, 28), False),
+
+ (lomq_aug_sat_4, datetime(2006, 8, 25), False),
+ (lomq_aug_sat_4, datetime(2007, 8, 24), False),
+ (lomq_aug_sat_4, datetime(2008, 8, 29), False),
+ (lomq_aug_sat_4, datetime(2009, 8, 28), False),
+ (lomq_aug_sat_4, datetime(2010, 8, 27), False),
+ (lomq_aug_sat_4, datetime(2011, 8, 26), False),
+ (lomq_aug_sat_4, datetime(2019, 8, 30), False),
+
+ #From GMCR
+ (lomq_sep_sat_4, datetime(2010, 9, 25), True),
+ (lomq_sep_sat_4, datetime(2011, 9, 24), True),
+ (lomq_sep_sat_4, datetime(2012, 9, 29), True),
+
+ (lomq_sep_sat_4, datetime(2013, 6, 29), True),
+ (lomq_sep_sat_4, datetime(2012, 6, 23), True),
+ (lomq_sep_sat_4, datetime(2012, 6, 30), False),
+
+ (lomq_sep_sat_4, datetime(2013, 3, 30), True),
+ (lomq_sep_sat_4, datetime(2012, 3, 24), True),
+
+ (lomq_sep_sat_4, datetime(2012, 12, 29), True),
+ (lomq_sep_sat_4, datetime(2011, 12, 24), True),
+
+ #INTC (extra week in Q1)
+ #See: http://www.intc.com/releasedetail.cfm?ReleaseID=542844
+ (makeFY5253LastOfMonthQuarter(1, startingMonth=12, weekday=WeekDay.SAT, qtr_with_extra_week=1), datetime(2011, 4, 2), True),
+
+ #see: http://google.brand.edgar-online.com/?sym=INTC&formtypeID=7
+ (makeFY5253LastOfMonthQuarter(1, startingMonth=12, weekday=WeekDay.SAT, qtr_with_extra_week=1), datetime(2012, 12, 29), True),
+ (makeFY5253LastOfMonthQuarter(1, startingMonth=12, weekday=WeekDay.SAT, qtr_with_extra_week=1), datetime(2011, 12, 31), True),
+ (makeFY5253LastOfMonthQuarter(1, startingMonth=12, weekday=WeekDay.SAT, qtr_with_extra_week=1), datetime(2010, 12, 25), True),
+
+ ]
+
+ for offset, date, expected in tests:
+ assertOnOffset(offset, date, expected)
+
+ def test_year_has_extra_week(self):
+ #End of long Q1
+ self.assertTrue(makeFY5253LastOfMonthQuarter(1, startingMonth=12, weekday=WeekDay.SAT, qtr_with_extra_week=1).year_has_extra_week(datetime(2011, 4, 2)))
+
+ #Start of long Q1
+ self.assertTrue(makeFY5253LastOfMonthQuarter(1, startingMonth=12, weekday=WeekDay.SAT, qtr_with_extra_week=1).year_has_extra_week(datetime(2010, 12, 26)))
+
+ #End of year before year with long Q1
+ self.assertFalse(makeFY5253LastOfMonthQuarter(1, startingMonth=12, weekday=WeekDay.SAT, qtr_with_extra_week=1).year_has_extra_week(datetime(2010, 12, 25)))
+
+ for year in [x for x in range(1994, 2011+1) if x not in [2011, 2005, 2000, 1994]]:
+ self.assertFalse(makeFY5253LastOfMonthQuarter(1, startingMonth=12, weekday=WeekDay.SAT, qtr_with_extra_week=1).year_has_extra_week(datetime(year, 4, 2)))
+
+ #Other long years
+ self.assertTrue(makeFY5253LastOfMonthQuarter(1, startingMonth=12, weekday=WeekDay.SAT, qtr_with_extra_week=1).year_has_extra_week(datetime(2005, 4, 2)))
+ self.assertTrue(makeFY5253LastOfMonthQuarter(1, startingMonth=12, weekday=WeekDay.SAT, qtr_with_extra_week=1).year_has_extra_week(datetime(2000, 4, 2)))
+ self.assertTrue(makeFY5253LastOfMonthQuarter(1, startingMonth=12, weekday=WeekDay.SAT, qtr_with_extra_week=1).year_has_extra_week(datetime(1994, 4, 2)))
+
+ def test_get_weeks(self):
+ self.assertEqual(makeFY5253LastOfMonthQuarter(1, startingMonth=12, weekday=WeekDay.SAT, qtr_with_extra_week=1).get_weeks(datetime(2011, 4, 2)), [14, 13, 13, 13])
+ self.assertEqual(makeFY5253LastOfMonthQuarter(1, startingMonth=12, weekday=WeekDay.SAT, qtr_with_extra_week=4).get_weeks(datetime(2011, 4, 2)), [13, 13, 13, 14])
+ self.assertEqual(makeFY5253LastOfMonthQuarter(1, startingMonth=12, weekday=WeekDay.SAT, qtr_with_extra_week=1).get_weeks(datetime(2010, 12, 25)), [13, 13, 13, 13])
+
+class TestFY5253NearestEndMonthQuarter(unittest.TestCase):
+
+ def test_onOffset(self):
+
+ offset_nem_sat_aug_4 = makeFY5253NearestEndMonthQuarter(1, startingMonth=8, weekday=WeekDay.SAT, qtr_with_extra_week=4)
+ offset_nem_thu_aug_4 = makeFY5253NearestEndMonthQuarter(1, startingMonth=8, weekday=WeekDay.THU, qtr_with_extra_week=4)
+ tests = [
+ #From Wikipedia
+ (offset_nem_sat_aug_4, datetime(2006, 9, 2), True),
+ (offset_nem_sat_aug_4, datetime(2007, 9, 1), True),
+ (offset_nem_sat_aug_4, datetime(2008, 8, 30), True),
+ (offset_nem_sat_aug_4, datetime(2009, 8, 29), True),
+ (offset_nem_sat_aug_4, datetime(2010, 8, 28), True),
+ (offset_nem_sat_aug_4, datetime(2011, 9, 3), True),
+
+ (offset_nem_sat_aug_4, datetime(2016, 9, 3), True),
+ (offset_nem_sat_aug_4, datetime(2017, 9, 2), True),
+ (offset_nem_sat_aug_4, datetime(2018, 9, 1), True),
+ (offset_nem_sat_aug_4, datetime(2019, 8, 31), True),
+
+ (offset_nem_sat_aug_4, datetime(2006, 8, 27), False),
+ (offset_nem_sat_aug_4, datetime(2007, 8, 28), False),
+ (offset_nem_sat_aug_4, datetime(2008, 8, 31), False),
+ (offset_nem_sat_aug_4, datetime(2009, 8, 30), False),
+ (offset_nem_sat_aug_4, datetime(2010, 8, 29), False),
+ (offset_nem_sat_aug_4, datetime(2011, 8, 28), False),
+
+ (offset_nem_sat_aug_4, datetime(2006, 8, 25), False),
+ (offset_nem_sat_aug_4, datetime(2007, 8, 24), False),
+ (offset_nem_sat_aug_4, datetime(2008, 8, 29), False),
+ (offset_nem_sat_aug_4, datetime(2009, 8, 28), False),
+ (offset_nem_sat_aug_4, datetime(2010, 8, 27), False),
+ (offset_nem_sat_aug_4, datetime(2011, 8, 26), False),
+ (offset_nem_sat_aug_4, datetime(2019, 8, 30), False),
+
+ #From Micron, see: http://google.brand.edgar-online.com/?sym=MU&formtypeID=7
+ (offset_nem_thu_aug_4, datetime(2012, 8, 30), True),
+ (offset_nem_thu_aug_4, datetime(2011, 9, 1), True),
+
+ #See: http://google.brand.edgar-online.com/?sym=MU&formtypeID=13
+ (offset_nem_thu_aug_4, datetime(2013, 5, 30), True),
+ (offset_nem_thu_aug_4, datetime(2013, 2, 28), True),
+ (offset_nem_thu_aug_4, datetime(2012, 11, 29), True),
+ (offset_nem_thu_aug_4, datetime(2012, 5, 31), True),
+ (offset_nem_thu_aug_4, datetime(2007, 3, 1), True),
+ (offset_nem_thu_aug_4, datetime(1994, 3, 3), True),
+
+ ]
+
+ for offset, date, expected in tests:
+ assertOnOffset(offset, date, expected)
+
+ def test_offset(self):
+ offset = makeFY5253NearestEndMonthQuarter(1, startingMonth=8, weekday=WeekDay.THU, qtr_with_extra_week=4)
+
+ MU = [datetime(2012, 5, 31), datetime(2012, 8, 30), datetime(2012, 11, 29), datetime(2013, 2, 28), datetime(2013, 5, 30)]
+
+ date = MU[0] + relativedelta(days=-1)
+ for expected in MU:
+ assertEq(offset, date, expected)
+ date = date + offset
+
+ assertEq(offset, datetime(2012, 5, 31), datetime(2012, 8, 30))
+ assertEq(offset, datetime(2012, 5, 30), datetime(2012, 5, 31))
+
class TestQuarterBegin(unittest.TestCase):
def test_repr(self):
self.assertEqual(repr(QuarterBegin()), "<QuarterBegin: startingMonth=3>")
@@ -1748,32 +2193,43 @@ def test_compare_ticks():
assert(kls(3) != kls(4))
-def test_get_offset_name():
- assertRaisesRegexp(ValueError, 'Bad rule.*BusinessDays', get_offset_name, BDay(2))
-
- assert get_offset_name(BDay()) == 'B'
- assert get_offset_name(BMonthEnd()) == 'BM'
- assert get_offset_name(Week(weekday=0)) == 'W-MON'
- assert get_offset_name(Week(weekday=1)) == 'W-TUE'
- assert get_offset_name(Week(weekday=2)) == 'W-WED'
- assert get_offset_name(Week(weekday=3)) == 'W-THU'
- assert get_offset_name(Week(weekday=4)) == 'W-FRI'
+class TestOffsetNames(unittest.TestCase):
+ def test_get_offset_name(self):
+ assertRaisesRegexp(ValueError, 'Bad rule.*BusinessDays', get_offset_name, BDay(2))
+
+ assert get_offset_name(BDay()) == 'B'
+ assert get_offset_name(BMonthEnd()) == 'BM'
+ assert get_offset_name(Week(weekday=0)) == 'W-MON'
+ assert get_offset_name(Week(weekday=1)) == 'W-TUE'
+ assert get_offset_name(Week(weekday=2)) == 'W-WED'
+ assert get_offset_name(Week(weekday=3)) == 'W-THU'
+ assert get_offset_name(Week(weekday=4)) == 'W-FRI'
+ self.assertEqual(get_offset_name(LastWeekOfMonth(weekday=WeekDay.SUN)), "LWOM-SUN")
+ self.assertEqual(get_offset_name(makeFY5253LastOfMonthQuarter(weekday=1, startingMonth=3, qtr_with_extra_week=4)),"REQ-L-MAR-TUE-4")
+ self.assertEqual(get_offset_name(makeFY5253NearestEndMonthQuarter(weekday=1, startingMonth=3, qtr_with_extra_week=3)), "REQ-N-MAR-TUE-3")
def test_get_offset():
assertRaisesRegexp(ValueError, "rule.*GIBBERISH", get_offset, 'gibberish')
assertRaisesRegexp(ValueError, "rule.*QS-JAN-B", get_offset, 'QS-JAN-B')
- pairs = [('B', BDay()), ('b', BDay()), ('bm', BMonthEnd()),
+ pairs = [
+ ('B', BDay()), ('b', BDay()), ('bm', BMonthEnd()),
('Bm', BMonthEnd()), ('W-MON', Week(weekday=0)),
('W-TUE', Week(weekday=1)), ('W-WED', Week(weekday=2)),
('W-THU', Week(weekday=3)), ('W-FRI', Week(weekday=4)),
- ('w@Sat', Week(weekday=5))]
+ ('w@Sat', Week(weekday=5)),
+ ("RE-N-DEC-MON", makeFY5253NearestEndMonth(weekday=0, startingMonth=12)),
+ ("RE-L-DEC-TUE", makeFY5253LastOfMonth(weekday=1, startingMonth=12)),
+ ("REQ-L-MAR-TUE-4", makeFY5253LastOfMonthQuarter(weekday=1, startingMonth=3, qtr_with_extra_week=4)),
+ ("REQ-L-DEC-MON-3", makeFY5253LastOfMonthQuarter(weekday=0, startingMonth=12, qtr_with_extra_week=3)),
+ ("REQ-N-DEC-MON-3", makeFY5253NearestEndMonthQuarter(weekday=0, startingMonth=12, qtr_with_extra_week=3)),
+ ]
for name, expected in pairs:
offset = get_offset(name)
assert offset == expected, ("Expected %r to yield %r (actual: %r)" %
(name, expected, offset))
-
+
def test_parse_time_string():
(date, parsed, reso) = parse_time_string('4Q1984')
@@ -1879,8 +2335,11 @@ def get_all_subclasses(cls):
ret | get_all_subclasses(this_subclass)
return ret
-
class TestCaching(unittest.TestCase):
+ no_simple_ctr = [WeekOfMonth, FY5253,
+ FY5253Quarter,
+ LastWeekOfMonth]
+
def test_should_cache_month_end(self):
self.assertTrue(MonthEnd()._should_cache())
@@ -1892,7 +2351,8 @@ def test_should_cache_week_month(self):
def test_all_cacheableoffsets(self):
for subclass in get_all_subclasses(CacheableOffset):
- if subclass in [WeekOfMonth]:
+ if subclass.__name__[0] == "_" \
+ or subclass in TestCaching.no_simple_ctr:
continue
self.run_X_index_creation(subclass)
diff --git a/pandas/tseries/tests/test_period.py b/pandas/tseries/tests/test_period.py
index 0fc7101a99856..312a88bcbc5a9 100644
--- a/pandas/tseries/tests/test_period.py
+++ b/pandas/tseries/tests/test_period.py
@@ -199,7 +199,7 @@ def test_period_constructor(self):
self.assertRaises(ValueError, Period, ordinal=200701)
- self.assertRaises(KeyError, Period, '2007-1-1', freq='X')
+ self.assertRaises(ValueError, Period, '2007-1-1', freq='X')
def test_freq_str(self):
i1 = Period('1982', freq='Min')
@@ -1136,8 +1136,8 @@ def test_constructor_field_arrays(self):
self.assert_(idx.equals(exp))
def test_constructor_U(self):
- # X was used as undefined period
- self.assertRaises(KeyError, period_range, '2007-1-1', periods=500,
+ # U was used as undefined period
+ self.assertRaises(ValueError, period_range, '2007-1-1', periods=500,
freq='X')
def test_constructor_arrays_negative_year(self):
diff --git a/pandas/tseries/tests/test_timeseries.py b/pandas/tseries/tests/test_timeseries.py
index 7f11fa5873fe7..dee0587aaaa02 100644
--- a/pandas/tseries/tests/test_timeseries.py
+++ b/pandas/tseries/tests/test_timeseries.py
@@ -2664,7 +2664,7 @@ def test_frequency_misc(self):
expected = offsets.Minute(5)
self.assertEquals(result, expected)
- self.assertRaises(KeyError, fmod.get_freq_code, (5, 'baz'))
+ self.assertRaises(ValueError, fmod.get_freq_code, (5, 'baz'))
self.assertRaises(ValueError, fmod.to_offset, '100foo')
@@ -3031,6 +3031,15 @@ def test_frame_apply_dont_convert_datetime64(self):
df = df.applymap(lambda x: x + BDay())
self.assertTrue(df.x1.dtype == 'M8[ns]')
+
+ def test_date_range_fy5252(self):
+ dr = date_range(start="2013-01-01",
+ periods=2,
+ freq=offsets.FY5253(startingMonth=1,
+ weekday=3,
+ variation="nearest"))
+ self.assertEqual(dr[0], Timestamp('2013-01-31'))
+ self.assertEqual(dr[1], Timestamp('2014-01-30'))
if __name__ == '__main__':
diff --git a/pandas/tseries/tests/test_tslib.py b/pandas/tseries/tests/test_tslib.py
index cfc93a22c454b..40dbb2d3712af 100644
--- a/pandas/tseries/tests/test_tslib.py
+++ b/pandas/tseries/tests/test_tslib.py
@@ -283,6 +283,10 @@ def test_period_ordinal_business_day(self):
# Tuesday
self.assertEqual(11418, period_ordinal(2013, 10, 8, 0, 0, 0, 0, 0, get_freq('B')))
+class TestTimeStampOps(unittest.TestCase):
+ def test_timestamp_and_datetime(self):
+ self.assertEqual((Timestamp(datetime.datetime(2013, 10,13)) - datetime.datetime(2013, 10,12)).days, 1)
+ self.assertEqual((datetime.datetime(2013, 10, 12) - Timestamp(datetime.datetime(2013, 10,13))).days, -1)
if __name__ == '__main__':
nose.runmodule(argv=[__file__, '-vvs', '-x', '--pdb', '--pdb-failure'],
diff --git a/pandas/tslib.pyx b/pandas/tslib.pyx
index d95956261bc44..c487202c4c0f9 100644
--- a/pandas/tslib.pyx
+++ b/pandas/tslib.pyx
@@ -609,7 +609,8 @@ cdef class _Timestamp(datetime):
if is_integer_object(other):
neg_other = -other
return self + neg_other
- return super(_Timestamp, self).__sub__(other)
+ # This calling convention is required
+ return datetime.__sub__(self, other)
cpdef _get_field(self, field):
out = get_date_field(np.array([self.value], dtype=np.int64), field)
| Closes #4511 and #4637
- Added `LastWeekOfMonth` DateOffset #4637
- Added `FY5253`, and `FY5253Quarter` DateOffsets #4511
- Improved error handling in `get_freq_code`, `_period_str_to_code` and `_base_and_stride`
- Fix issue with datetime - Timestamp
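The last-of-month / nearest-end-of-month anchoring that `FY5253` implements can be sketched with only the standard library. This is a minimal sketch, not the pandas implementation; `last_weekday_of_month` and `nearest_weekday_to_month_end` are illustrative names:

```python
import calendar
import datetime as dt

SAT = 5  # Monday=0 ... Sunday=6, matching datetime.date.weekday()

def last_weekday_of_month(year, month, weekday):
    # Walk back from the last calendar day to the requested weekday.
    last_day = calendar.monthrange(year, month)[1]
    end = dt.date(year, month, last_day)
    return end - dt.timedelta(days=(end.weekday() - weekday) % 7)

def nearest_weekday_to_month_end(year, month, weekday):
    # Candidate weekdays straddle the calendar month end; pick the closer one.
    before = last_weekday_of_month(year, month, weekday)
    after = before + dt.timedelta(days=7)
    end = dt.date(year, month, calendar.monthrange(year, month)[1])
    return after if (after - end) < (end - before) else before
```

With these, the Wikipedia cases in the tests above line up: the last Saturday of August 2011 is 2011-08-27 (the "last" variation), while the Saturday nearest the end of August 2011 is 2011-09-03 (the "nearest" variation).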
| https://api.github.com/repos/pandas-dev/pandas/pulls/5004 | 2013-09-27T03:55:08Z | 2013-10-22T20:37:52Z | 2013-10-22T20:37:52Z | 2014-06-12T15:33:59Z |
BUG: wrong index name during read_csv if using usecols | diff --git a/doc/source/release.rst b/doc/source/release.rst
index ce08a1ca0a175..056292322c297 100644
--- a/doc/source/release.rst
+++ b/doc/source/release.rst
@@ -480,6 +480,7 @@ Bug Fixes
- Fixed wrong check for overlapping in ``DatetimeIndex.union`` (:issue:`4564`)
- Fixed conflict between thousands separator and date parser in csv_parser (:issue:`4678`)
- Fix appending when dtypes are not the same (error showing mixing float/np.datetime64) (:issue:`4993`)
+ - Fixed wrong index name during read_csv if using usecols. Applies to c parser only. (:issue:`4201`)
pandas 0.12.0
-------------
diff --git a/pandas/io/parsers.py b/pandas/io/parsers.py
index 21b791d2a1acc..426d71b05e30a 100644
--- a/pandas/io/parsers.py
+++ b/pandas/io/parsers.py
@@ -2,7 +2,7 @@
Module contains tools for processing files into DataFrames or other objects
"""
from __future__ import print_function
-from pandas.compat import range, lrange, StringIO, lzip, zip
+from pandas.compat import range, lrange, StringIO, lzip, zip, string_types
from pandas import compat
import re
import csv
@@ -15,7 +15,6 @@
import datetime
import pandas.core.common as com
from pandas.core.config import get_option
-from pandas import compat
from pandas.io.date_converters import generic_parser
from pandas.io.common import get_filepath_or_buffer
@@ -24,7 +23,7 @@
import pandas.lib as lib
import pandas.tslib as tslib
import pandas.parser as _parser
-from pandas.tseries.period import Period
+
_parser_params = """Also supports optionally iterating or breaking of the file
into chunks.
@@ -982,7 +981,19 @@ def __init__(self, src, **kwds):
else:
self.names = lrange(self._reader.table_width)
- # XXX
+ # If the names were inferred (not passed by user) and usecols is defined,
+ # then ensure names refers to the used columns, not the document's columns.
+ if self.usecols and passed_names:
+ col_indices = []
+ for u in self.usecols:
+ if isinstance(u, string_types):
+ col_indices.append(self.names.index(u))
+ else:
+ col_indices.append(u)
+ self.names = [n for i, n in enumerate(self.names) if i in col_indices]
+ if len(self.names) < len(self.usecols):
+ raise ValueError("Usecols do not match names.")
+
self._set_noconvert_columns()
self.orig_names = self.names
diff --git a/pandas/io/tests/test_parsers.py b/pandas/io/tests/test_parsers.py
index 48c47238aec6f..fadf70877409f 100644
--- a/pandas/io/tests/test_parsers.py
+++ b/pandas/io/tests/test_parsers.py
@@ -1865,6 +1865,32 @@ def test_parse_integers_above_fp_precision(self):
self.assertTrue(np.array_equal(result['Numbers'], expected['Numbers']))
+ def test_usecols_index_col_conflict(self):
+ # Issue 4201 Test that index_col as integer reflects usecols
+ data = """SecId,Time,Price,P2,P3
+10000,2013-5-11,100,10,1
+500,2013-5-12,101,11,1
+"""
+ expected = DataFrame({'Price': [100, 101]}, index=[datetime(2013, 5, 11), datetime(2013, 5, 12)])
+ expected.index.name = 'Time'
+
+ df = pd.read_csv(StringIO(data), usecols=['Time', 'Price'], parse_dates=True, index_col=0)
+ tm.assert_frame_equal(expected, df)
+
+ df = pd.read_csv(StringIO(data), usecols=['Time', 'Price'], parse_dates=True, index_col='Time')
+ tm.assert_frame_equal(expected, df)
+
+ df = pd.read_csv(StringIO(data), usecols=[1, 2], parse_dates=True, index_col='Time')
+ tm.assert_frame_equal(expected, df)
+
+ df = pd.read_csv(StringIO(data), usecols=[1, 2], parse_dates=True, index_col=0)
+ tm.assert_frame_equal(expected, df)
+
+ expected = DataFrame({'P3': [1, 1], 'Price': (100, 101), 'P2': (10, 11)})
+ expected = expected.set_index(['Price', 'P2'])
+ df = pd.read_csv(StringIO(data), usecols=['Price', 'P2', 'P3'], parse_dates=True, index_col=['Price', 'P2'])
+ tm.assert_frame_equal(expected, df)
+
class TestPythonParser(ParserTests, unittest.TestCase):
| Closes #4201
If user passes usecols and not names, then ensure that the
inferred names refer to the used columns, not the document's
columns.
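The column-name resolution this patch adds to the C-parser wrapper can be exercised on its own. The sketch below mirrors the patched logic; `resolve_used_names` is an illustrative name, not pandas API:

```python
def resolve_used_names(names, usecols):
    """Map usecols entries (positions or labels) to column indices,
    then keep only the used names, preserving document order."""
    col_indices = []
    for u in usecols:
        if isinstance(u, str):
            col_indices.append(names.index(u))  # label -> position
        else:
            col_indices.append(u)               # already a position
    used = [n for i, n in enumerate(names) if i in col_indices]
    if len(used) < len(usecols):
        raise ValueError("Usecols do not match names.")
    return used
```

Applied to the test fixture above, both `['Time', 'Price']` and `[1, 2]` resolve to the same pair of names, which is why `index_col=0` and `index_col='Time'` now agree.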
| https://api.github.com/repos/pandas-dev/pandas/pulls/5003 | 2013-09-27T03:26:01Z | 2013-09-27T12:39:31Z | 2013-09-27T12:39:31Z | 2014-07-29T17:20:31Z |
API: Remove set_printoptions/reset_printoptions (:issue:3046) | diff --git a/doc/source/basics.rst b/doc/source/basics.rst
index 7d2555e8cba81..b167b00b58ef1 100644
--- a/doc/source/basics.rst
+++ b/doc/source/basics.rst
@@ -983,7 +983,7 @@ Methods like ``replace`` and ``findall`` take regular expressions, too:
s3.str.replace('^.a|dog', 'XX-XX ', case=False)
The method ``match`` returns the groups in a regular expression in one tuple.
- Starting in pandas version 0.13, the method ``extract`` is available to
+ Starting in pandas version 0.13, the method ``extract`` is available to
accomplish this more conveniently.
Extracting a regular expression with one group returns a Series of strings.
@@ -992,16 +992,16 @@ Extracting a regular expression with one group returns a Series of strings.
Series(['a1', 'b2', 'c3']).str.extract('[ab](\d)')
-Elements that do not match return ``NaN``. Extracting a regular expression
+Elements that do not match return ``NaN``. Extracting a regular expression
with more than one group returns a DataFrame with one column per group.
.. ipython:: python
Series(['a1', 'b2', 'c3']).str.extract('([ab])(\d)')
-Elements that do not match return a row of ``NaN``s.
-Thus, a Series of messy strings can be "converted" into a
-like-indexed Series or DataFrame of cleaned-up or more useful strings,
+Elements that do not match return a row of ``NaN``s.
+Thus, a Series of messy strings can be "converted" into a
+like-indexed Series or DataFrame of cleaned-up or more useful strings,
without necessitating ``get()`` to access tuples or ``re.match`` objects.
Named groups like
@@ -1411,11 +1411,6 @@ Console Output Formatting
.. _basics.console_output:
-**Note:** ``set_printoptions``/ ``reset_printoptions`` are now deprecated (but functioning),
-and both, as well as ``set_eng_float_format``, use the options API behind the scenes.
-The corresponding options now live under "print.XYZ", and you can set them directly with
-``get/set_option``.
-
Use the ``set_eng_float_format`` function in the ``pandas.core.common`` module
to alter the floating-point formatting of pandas objects to produce a particular
format.
diff --git a/doc/source/release.rst b/doc/source/release.rst
index 8ba0574df97cb..8c72235297bd0 100644
--- a/doc/source/release.rst
+++ b/doc/source/release.rst
@@ -235,6 +235,8 @@ API Changes
on indexes on non ``Float64Index`` will raise a ``TypeError``, e.g. ``Series(range(5))[3.5:4.5]`` (:issue:`263`)
- Make Categorical repr nicer (:issue:`4368`)
- Remove deprecated ``Factor`` (:issue:`3650`)
+ - Remove deprecated ``set_printoptions/reset_printoptions`` (:issue:`3046`)
+ - Remove deprecated ``_verbose_info`` (:issue:`3215`)
Internal Refactoring
~~~~~~~~~~~~~~~~~~~~
diff --git a/doc/source/v0.13.0.txt b/doc/source/v0.13.0.txt
index 13aff4d21e802..b1c40fe3b2ced 100644
--- a/doc/source/v0.13.0.txt
+++ b/doc/source/v0.13.0.txt
@@ -68,6 +68,10 @@ API changes
df1 and df2
s1 and s2
+ - Remove deprecated ``Factor`` (:issue:`3650`)
+ - Remove deprecated ``set_printoptions/reset_printoptions`` (:issue:`3046`)
+ - Remove deprecated ``_verbose_info`` (:issue:`3215`)
+
Indexing API Changes
~~~~~~~~~~~~~~~~~~~~
diff --git a/pandas/__init__.py b/pandas/__init__.py
index ddd4cd49e6ec6..803cda264b250 100644
--- a/pandas/__init__.py
+++ b/pandas/__init__.py
@@ -21,7 +21,6 @@
# XXX: HACK for NumPy 1.5.1 to suppress warnings
try:
np.seterr(all='ignore')
- # np.set_printoptions(suppress=True)
except Exception: # pragma: no cover
pass
diff --git a/pandas/core/api.py b/pandas/core/api.py
index 2b4063eae1f74..36081cc34cc3a 100644
--- a/pandas/core/api.py
+++ b/pandas/core/api.py
@@ -6,8 +6,7 @@
from pandas.core.algorithms import factorize, match, unique, value_counts
from pandas.core.common import isnull, notnull
from pandas.core.categorical import Categorical
-from pandas.core.format import (set_printoptions, reset_printoptions,
- set_eng_float_format)
+from pandas.core.format import set_eng_float_format
from pandas.core.index import Index, Int64Index, Float64Index, MultiIndex
from pandas.core.series import Series, TimeSeries
diff --git a/pandas/core/format.py b/pandas/core/format.py
index be6ad4d2bc5ef..190ef3fb5f1ab 100644
--- a/pandas/core/format.py
+++ b/pandas/core/format.py
@@ -49,7 +49,7 @@
multiindex key at each row, default True
justify : {'left', 'right'}, default None
Left or right-justify the column labels. If None uses the option from
- the print configuration (controlled by set_printoptions), 'right' out
+ the print configuration (controlled by set_option), 'right' out
of the box.
index_names : bool, optional
Prints the names of the indexes, default True
@@ -1669,78 +1669,6 @@ def _has_names(index):
#------------------------------------------------------------------------------
# Global formatting options
-
-def set_printoptions(precision=None, column_space=None, max_rows=None,
- max_columns=None, colheader_justify=None,
- max_colwidth=None, notebook_repr_html=None,
- date_dayfirst=None, date_yearfirst=None,
- pprint_nest_depth=None, multi_sparse=None, encoding=None):
- """
- Alter default behavior of DataFrame.toString
-
- precision : int
- Floating point output precision (number of significant digits). This is
- only a suggestion
- column_space : int
- Default space for DataFrame columns, defaults to 12
- max_rows : int
- max_columns : int
- max_rows and max_columns are used in __repr__() methods to decide if
- to_string() or info() is used to render an object to a string.
- Either one, or both can be set to 0 (experimental). Pandas will figure
- out how big the terminal is and will not display more rows or/and
- columns that can fit on it.
- colheader_justify
- notebook_repr_html : boolean
- When True (default), IPython notebook will use html representation for
- pandas objects (if it is available).
- date_dayfirst : boolean
- When True, prints and parses dates with the day first, eg 20/01/2005
- date_yearfirst : boolean
- When True, prints and parses dates with the year first, eg 2005/01/20
- pprint_nest_depth : int
- Defaults to 3.
- Controls the number of nested levels to process when pretty-printing
- nested sequences.
- multi_sparse : boolean
- Default True, "sparsify" MultiIndex display (don't display repeated
- elements in outer levels within groups)
- """
- import warnings
- warnings.warn("set_printoptions is deprecated, use set_option instead",
- FutureWarning)
- if precision is not None:
- set_option("display.precision", precision)
- if column_space is not None:
- set_option("display.column_space", column_space)
- if max_rows is not None:
- set_option("display.max_rows", max_rows)
- if max_colwidth is not None:
- set_option("display.max_colwidth", max_colwidth)
- if max_columns is not None:
- set_option("display.max_columns", max_columns)
- if colheader_justify is not None:
- set_option("display.colheader_justify", colheader_justify)
- if notebook_repr_html is not None:
- set_option("display.notebook_repr_html", notebook_repr_html)
- if date_dayfirst is not None:
- set_option("display.date_dayfirst", date_dayfirst)
- if date_yearfirst is not None:
- set_option("display.date_yearfirst", date_yearfirst)
- if pprint_nest_depth is not None:
- set_option("display.pprint_nest_depth", pprint_nest_depth)
- if multi_sparse is not None:
- set_option("display.multi_sparse", multi_sparse)
- if encoding is not None:
- set_option("display.encoding", encoding)
-
-
-def reset_printoptions():
- import warnings
- warnings.warn("reset_printoptions is deprecated, use reset_option instead",
- FutureWarning)
- reset_option("^display\.")
-
_initial_defencoding = None
def detect_console_encoding():
"""
diff --git a/pandas/core/frame.py b/pandas/core/frame.py
index c98790fdc38ff..7b9a75753136e 100644
--- a/pandas/core/frame.py
+++ b/pandas/core/frame.py
@@ -371,7 +371,6 @@ class DataFrame(NDFrame):
read_csv / read_table / read_clipboard
"""
_auto_consolidate = True
- _verbose_info = True
@property
def _constructor(self):
@@ -554,12 +553,6 @@ def _init_ndarray(self, values, index, columns, dtype=None,
return create_block_manager_from_blocks([values.T], [columns, index])
- @property
- def _verbose_info(self):
- warnings.warn('The _verbose_info property will be removed in version '
- '0.13. please use "max_info_rows"', FutureWarning)
- return get_option('display.max_info_rows') is None
-
@property
def axes(self):
return [self.index, self.columns]
diff --git a/pandas/sparse/frame.py b/pandas/sparse/frame.py
index a1b630dedaaab..53fabb0160a88 100644
--- a/pandas/sparse/frame.py
+++ b/pandas/sparse/frame.py
@@ -46,7 +46,6 @@ class SparseDataFrame(DataFrame):
Default fill_value for converting Series to SparseSeries. Will not
override SparseSeries passed in
"""
- _verbose_info = False
_constructor_sliced = SparseSeries
_subtyp = 'sparse_frame'
diff --git a/pandas/tests/test_config.py b/pandas/tests/test_config.py
index ed6f641cbcb2c..80a3fe9be7003 100644
--- a/pandas/tests/test_config.py
+++ b/pandas/tests/test_config.py
@@ -437,5 +437,3 @@ def f3(key):
options.c = 1
self.assertEqual(len(holder), 1)
-# fmt.reset_printoptions and fmt.set_printoptions were altered
-# to use core.config, test_format exercises those paths.
| closes #3046
closes #3215
| https://api.github.com/repos/pandas-dev/pandas/pulls/5001 | 2013-09-26T21:24:22Z | 2013-09-27T00:36:48Z | 2013-09-27T00:36:48Z | 2014-06-14T13:53:48Z |
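The removed `set_printoptions` shim was essentially a keyword-to-option-key translation in front of `set_option`. That mapping pattern can be sketched independently; `translate_printoptions` is an illustrative name, and the keys shown are the `display.*` keys the deprecated function wrote to:

```python
_OPTION_MAP = {
    "precision": "display.precision",
    "column_space": "display.column_space",
    "max_rows": "display.max_rows",
    "max_columns": "display.max_columns",
    "max_colwidth": "display.max_colwidth",
    "colheader_justify": "display.colheader_justify",
    "notebook_repr_html": "display.notebook_repr_html",
    "date_dayfirst": "display.date_dayfirst",
    "date_yearfirst": "display.date_yearfirst",
    "pprint_nest_depth": "display.pprint_nest_depth",
    "multi_sparse": "display.multi_sparse",
    "encoding": "display.encoding",
}

def translate_printoptions(**kwargs):
    """Map legacy set_printoptions keywords to display.* option keys,
    skipping keywords left at None (i.e. not being changed)."""
    unknown = set(kwargs) - set(_OPTION_MAP)
    if unknown:
        raise TypeError("unexpected keywords: %s" % sorted(unknown))
    return {_OPTION_MAP[k]: v for k, v in kwargs.items() if v is not None}
```

After the removal, callers set these keys directly, e.g. `pd.set_option("display.precision", 4)`.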
API: Remove deprecated Factor (GH3650) | diff --git a/doc/source/release.rst b/doc/source/release.rst
index bd3fbf53de039..8ba0574df97cb 100644
--- a/doc/source/release.rst
+++ b/doc/source/release.rst
@@ -234,6 +234,7 @@ API Changes
Indexing on other index types are preserved (and positional fallback for ``[],ix``), with the exception, that floating point slicing
on indexes on non ``Float64Index`` will raise a ``TypeError``, e.g. ``Series(range(5))[3.5:4.5]`` (:issue:`263`)
- Make Categorical repr nicer (:issue:`4368`)
+ - Remove deprecated ``Factor`` (:issue:`3650`)
Internal Refactoring
~~~~~~~~~~~~~~~~~~~~
diff --git a/pandas/core/api.py b/pandas/core/api.py
index b4afe90d46842..2b4063eae1f74 100644
--- a/pandas/core/api.py
+++ b/pandas/core/api.py
@@ -5,7 +5,7 @@
from pandas.core.algorithms import factorize, match, unique, value_counts
from pandas.core.common import isnull, notnull
-from pandas.core.categorical import Categorical, Factor
+from pandas.core.categorical import Categorical
from pandas.core.format import (set_printoptions, reset_printoptions,
set_eng_float_format)
from pandas.core.index import Index, Int64Index, Float64Index, MultiIndex
diff --git a/pandas/core/categorical.py b/pandas/core/categorical.py
index 97ed0fdb0da30..f412947f92255 100644
--- a/pandas/core/categorical.py
+++ b/pandas/core/categorical.py
@@ -230,16 +230,3 @@ def describe(self):
counts=counts,
freqs=freqs,
levels=self.levels)).set_index('levels')
-
-
-class Factor(Categorical):
- def __init__(self, labels, levels=None, name=None):
- from warnings import warn
- warn("Factor is deprecated. Use Categorical instead", FutureWarning)
- super(Factor, self).__init__(labels, levels, name)
-
- @classmethod
- def from_array(cls, data):
- from warnings import warn
- warn("Factor is deprecated. Use Categorical instead", FutureWarning)
- return super(Factor, cls).from_array(data)
| closes #3650
| https://api.github.com/repos/pandas-dev/pandas/pulls/5000 | 2013-09-26T21:21:26Z | 2013-09-26T21:22:21Z | 2013-09-26T21:22:21Z | 2014-06-14T18:21:49Z |
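The `Factor` class deleted here followed the usual deprecation-shim pattern: a subclass that warns, then forwards to its replacement, kept around for a release cycle before removal. A minimal sketch of that pattern with hypothetical stand-in classes (not the pandas implementation):

```python
import warnings

class Categorical:
    """Stand-in for the replacement class."""
    def __init__(self, labels, levels=None, name=None):
        self.labels, self.levels, self.name = labels, levels, name

class Factor(Categorical):
    """Deprecated alias: warn on use, then delegate to Categorical."""
    def __init__(self, labels, levels=None, name=None):
        warnings.warn("Factor is deprecated. Use Categorical instead",
                      FutureWarning)
        super().__init__(labels, levels, name)
```

Once users have had a release to migrate off the warning, the shim can be dropped outright, which is what this PR does.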
added halflife to exponentially weighted moving functions | diff --git a/doc/source/computation.rst b/doc/source/computation.rst
index 207e2796c468d..85c6b88d740da 100644
--- a/doc/source/computation.rst
+++ b/doc/source/computation.rst
@@ -453,15 +453,16 @@ average as
y_t = (1 - \alpha) y_{t-1} + \alpha x_t
One must have :math:`0 < \alpha \leq 1`, but rather than pass :math:`\alpha`
-directly, it's easier to think about either the **span** or **center of mass
-(com)** of an EW moment:
+directly, it's easier to think about either the **span**, **center of mass
+(com)** or **halflife** of an EW moment:
.. math::
\alpha =
\begin{cases}
\frac{2}{s + 1}, s = \text{span}\\
- \frac{1}{1 + c}, c = \text{center of mass}
+ \frac{1}{1 + c}, c = \text{center of mass}\\
+ 1 - \exp\left(\frac{\log 0.5}{h}\right), h = \text{half life}
\end{cases}
.. note::
@@ -474,11 +475,12 @@ directly, it's easier to think about either the **span** or **center of mass
where :math:`\alpha' = 1 - \alpha`.
-You can pass one or the other to these functions but not both. **Span**
+You can pass exactly one of the three to these functions. **Span**
corresponds to what is commonly called a "20-day EW moving average" for
example. **Center of mass** has a more physical interpretation. For example,
-**span** = 20 corresponds to **com** = 9.5. Here is the list of functions
-available:
+**span** = 20 corresponds to **com** = 9.5. **Halflife** is the period of
+time for the exponential weight to reduce to one half. Here is the list of
+functions available:
.. csv-table::
:header: "Function", "Description"
diff --git a/doc/source/release.rst b/doc/source/release.rst
index 66c3dcd203a6a..cd1cd669152ec 100644
--- a/doc/source/release.rst
+++ b/doc/source/release.rst
@@ -138,6 +138,8 @@ Improvements to existing features
(:issue:`4961`).
- ``concat`` now gives a more informative error message when passed objects
that cannot be concatenated (:issue:`4608`).
+ - Add ``halflife`` option to exponentially weighted moving functions (PR
+ :issue:`4998`)
API Changes
~~~~~~~~~~~
diff --git a/pandas/stats/moments.py b/pandas/stats/moments.py
index fd81bd119fe09..f3ec3880ec8b5 100644
--- a/pandas/stats/moments.py
+++ b/pandas/stats/moments.py
@@ -59,6 +59,8 @@
Center of mass: :math:`\alpha = 1 / (1 + com)`,
span : float, optional
Specify decay in terms of span, :math:`\alpha = 2 / (span + 1)`
+halflife : float, optional
+ Specify decay in terms of halflife, :math:`\alpha = 1 - exp(log(0.5) / halflife)`
min_periods : int, default 0
Number of observations in sample to require (only affects
beginning)
@@ -338,25 +340,29 @@ def _process_data_structure(arg, kill_inf=True):
# Exponential moving moments
-def _get_center_of_mass(com, span):
- if span is not None:
- if com is not None:
- raise Exception("com and span are mutually exclusive")
+def _get_center_of_mass(com, span, halflife):
+ valid_count = len([x for x in [com, span, halflife] if x is not None])
+ if valid_count > 1:
+ raise Exception("com, span, and halflife are mutually exclusive")
+ if span is not None:
# convert span to center of mass
com = (span - 1) / 2.
-
+ elif halflife is not None:
+ # convert halflife to center of mass
+ decay = 1 - np.exp(np.log(0.5) / halflife)
+ com = 1 / decay - 1
elif com is None:
- raise Exception("Must pass either com or span")
+ raise Exception("Must pass one of com, span, or halflife")
return float(com)
@Substitution("Exponentially-weighted moving average", _unary_arg, "")
@Appender(_ewm_doc)
-def ewma(arg, com=None, span=None, min_periods=0, freq=None, time_rule=None,
+def ewma(arg, com=None, span=None, halflife=None, min_periods=0, freq=None, time_rule=None,
adjust=True):
- com = _get_center_of_mass(com, span)
+ com = _get_center_of_mass(com, span, halflife)
arg = _conv_timerule(arg, freq, time_rule)
def _ewma(v):
@@ -377,9 +383,9 @@ def _first_valid_index(arr):
@Substitution("Exponentially-weighted moving variance", _unary_arg, _bias_doc)
@Appender(_ewm_doc)
-def ewmvar(arg, com=None, span=None, min_periods=0, bias=False,
+def ewmvar(arg, com=None, span=None, halflife=None, min_periods=0, bias=False,
freq=None, time_rule=None):
- com = _get_center_of_mass(com, span)
+ com = _get_center_of_mass(com, span, halflife)
arg = _conv_timerule(arg, freq, time_rule)
moment2nd = ewma(arg * arg, com=com, min_periods=min_periods)
moment1st = ewma(arg, com=com, min_periods=min_periods)
@@ -393,9 +399,9 @@ def ewmvar(arg, com=None, span=None, min_periods=0, bias=False,
@Substitution("Exponentially-weighted moving std", _unary_arg, _bias_doc)
@Appender(_ewm_doc)
-def ewmstd(arg, com=None, span=None, min_periods=0, bias=False,
+def ewmstd(arg, com=None, span=None, halflife=None, min_periods=0, bias=False,
time_rule=None):
- result = ewmvar(arg, com=com, span=span, time_rule=time_rule,
+ result = ewmvar(arg, com=com, span=span, halflife=halflife, time_rule=time_rule,
min_periods=min_periods, bias=bias)
return _zsqrt(result)
@@ -404,17 +410,17 @@ def ewmstd(arg, com=None, span=None, min_periods=0, bias=False,
@Substitution("Exponentially-weighted moving covariance", _binary_arg, "")
@Appender(_ewm_doc)
-def ewmcov(arg1, arg2, com=None, span=None, min_periods=0, bias=False,
+def ewmcov(arg1, arg2, com=None, span=None, halflife=None, min_periods=0, bias=False,
freq=None, time_rule=None):
X, Y = _prep_binary(arg1, arg2)
X = _conv_timerule(X, freq, time_rule)
Y = _conv_timerule(Y, freq, time_rule)
- mean = lambda x: ewma(x, com=com, span=span, min_periods=min_periods)
+ mean = lambda x: ewma(x, com=com, span=span, halflife=halflife, min_periods=min_periods)
result = (mean(X * Y) - mean(X) * mean(Y))
- com = _get_center_of_mass(com, span)
+ com = _get_center_of_mass(com, span, halflife)
if not bias:
result *= (1.0 + 2.0 * com) / (2.0 * com)
@@ -423,15 +429,15 @@ def ewmcov(arg1, arg2, com=None, span=None, min_periods=0, bias=False,
@Substitution("Exponentially-weighted moving " "correlation", _binary_arg, "")
@Appender(_ewm_doc)
-def ewmcorr(arg1, arg2, com=None, span=None, min_periods=0,
+def ewmcorr(arg1, arg2, com=None, span=None, halflife=None, min_periods=0,
freq=None, time_rule=None):
X, Y = _prep_binary(arg1, arg2)
X = _conv_timerule(X, freq, time_rule)
Y = _conv_timerule(Y, freq, time_rule)
- mean = lambda x: ewma(x, com=com, span=span, min_periods=min_periods)
- var = lambda x: ewmvar(x, com=com, span=span, min_periods=min_periods,
+ mean = lambda x: ewma(x, com=com, span=span, halflife=halflife, min_periods=min_periods)
+ var = lambda x: ewmvar(x, com=com, span=span, halflife=halflife, min_periods=min_periods,
bias=True)
return (mean(X * Y) - mean(X) * mean(Y)) / _zsqrt(var(X) * var(Y))
diff --git a/pandas/stats/tests/test_moments.py b/pandas/stats/tests/test_moments.py
index 70653d9d96bef..1f7df9894a97d 100644
--- a/pandas/stats/tests/test_moments.py
+++ b/pandas/stats/tests/test_moments.py
@@ -535,6 +535,16 @@ def test_ewma_span_com_args(self):
self.assertRaises(Exception, mom.ewma, self.arr, com=9.5, span=20)
self.assertRaises(Exception, mom.ewma, self.arr)
+ def test_ewma_halflife_arg(self):
+ A = mom.ewma(self.arr, com=13.932726172912965)
+ B = mom.ewma(self.arr, halflife=10.0)
+ assert_almost_equal(A, B)
+
+ self.assertRaises(Exception, mom.ewma, self.arr, span=20, halflife=50)
+ self.assertRaises(Exception, mom.ewma, self.arr, com=9.5, halflife=50)
+ self.assertRaises(Exception, mom.ewma, self.arr, com=9.5, span=20, halflife=50)
+ self.assertRaises(Exception, mom.ewma, self.arr)
+
def test_ew_empty_arrays(self):
arr = np.array([], dtype=np.float64)
| Currently for the exponentially weighted moving functions (ewma, ewmstd, ewmvol, ewmvar, ewmcov) there are two ways (span, center of mass) to specify how fast the exponential decay is. It would be nice to support a "half life" option as well.
The half life is basically just the number of periods in which the exponential weight drops to one half, i.e.,
(1 - \alpha)^h = 0.5, h: half life
| https://api.github.com/repos/pandas-dev/pandas/pulls/4998 | 2013-09-26T19:47:13Z | 2013-09-29T17:51:40Z | 2013-09-29T17:51:40Z | 2015-04-25T23:32:49Z |
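The halflife-to-center-of-mass conversion introduced in `_get_center_of_mass` above is easy to verify outside pandas. This is a minimal standalone sketch (the function name `halflife_to_com` is mine, not part of the patch) that reproduces the arithmetic and the `com` value used in `test_ewma_halflife_arg`:

```python
import numpy as np

def halflife_to_com(halflife):
    # alpha = 1 - exp(log(0.5) / halflife), then com = 1/alpha - 1,
    # mirroring the conversion added to _get_center_of_mass
    decay = 1 - np.exp(np.log(0.5) / halflife)
    return 1.0 / decay - 1.0

com = halflife_to_com(10.0)
print(com)  # ~13.932726172912965, the com used in test_ewma_halflife_arg

# sanity check: the weight (1 - alpha)^h decays to exactly one half after h periods
alpha = 1.0 / (1.0 + com)
print(round((1 - alpha) ** 10.0, 12))  # 0.5
```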
added halflife to exponentially weighted moving functions | diff --git a/doc/source/computation.rst b/doc/source/computation.rst
index 207e2796c468d..85c6b88d740da 100644
--- a/doc/source/computation.rst
+++ b/doc/source/computation.rst
@@ -453,15 +453,16 @@ average as
y_t = (1 - \alpha) y_{t-1} + \alpha x_t
One must have :math:`0 < \alpha \leq 1`, but rather than pass :math:`\alpha`
-directly, it's easier to think about either the **span** or **center of mass
-(com)** of an EW moment:
+directly, it's easier to think about either the **span**, **center of mass
+(com)** or **halflife** of an EW moment:
.. math::
\alpha =
\begin{cases}
\frac{2}{s + 1}, s = \text{span}\\
- \frac{1}{1 + c}, c = \text{center of mass}
+ \frac{1}{1 + c}, c = \text{center of mass}\\
+ 1 - \exp^{\frac{\log 0.5}{h}}, h = \text{half life}
\end{cases}
.. note::
@@ -474,11 +475,12 @@ directly, it's easier to think about either the **span** or **center of mass
where :math:`\alpha' = 1 - \alpha`.
-You can pass one or the other to these functions but not both. **Span**
+You can pass one of the three to these functions but not more. **Span**
corresponds to what is commonly called a "20-day EW moving average" for
example. **Center of mass** has a more physical interpretation. For example,
-**span** = 20 corresponds to **com** = 9.5. Here is the list of functions
-available:
+**span** = 20 corresponds to **com** = 9.5. **Halflife** is the period of
+time for the exponential weight to reduce to one half. Here is the list of
+functions available:
.. csv-table::
:header: "Function", "Description"
diff --git a/pandas/stats/moments.py b/pandas/stats/moments.py
index fd81bd119fe09..f3ec3880ec8b5 100644
--- a/pandas/stats/moments.py
+++ b/pandas/stats/moments.py
@@ -59,6 +59,8 @@
Center of mass: :math:`\alpha = 1 / (1 + com)`,
span : float, optional
Specify decay in terms of span, :math:`\alpha = 2 / (span + 1)`
+halflife : float, optional
+ Specify decay in terms of halflife, :math: `\alpha = 1 - exp(log(0.5) / halflife)`
min_periods : int, default 0
Number of observations in sample to require (only affects
beginning)
@@ -338,25 +340,29 @@ def _process_data_structure(arg, kill_inf=True):
# Exponential moving moments
-def _get_center_of_mass(com, span):
- if span is not None:
- if com is not None:
- raise Exception("com and span are mutually exclusive")
+def _get_center_of_mass(com, span, halflife):
+ valid_count = len([x for x in [com, span, halflife] if x is not None])
+ if valid_count > 1:
+ raise Exception("com, span, and halflife are mutually exclusive")
+ if span is not None:
# convert span to center of mass
com = (span - 1) / 2.
-
+ elif halflife is not None:
+ # convert halflife to center of mass
+ decay = 1 - np.exp(np.log(0.5) / halflife)
+ com = 1 / decay - 1
elif com is None:
- raise Exception("Must pass either com or span")
+ raise Exception("Must pass one of com, span, or halflife")
return float(com)
@Substitution("Exponentially-weighted moving average", _unary_arg, "")
@Appender(_ewm_doc)
-def ewma(arg, com=None, span=None, min_periods=0, freq=None, time_rule=None,
+def ewma(arg, com=None, span=None, halflife=None, min_periods=0, freq=None, time_rule=None,
adjust=True):
- com = _get_center_of_mass(com, span)
+ com = _get_center_of_mass(com, span, halflife)
arg = _conv_timerule(arg, freq, time_rule)
def _ewma(v):
@@ -377,9 +383,9 @@ def _first_valid_index(arr):
@Substitution("Exponentially-weighted moving variance", _unary_arg, _bias_doc)
@Appender(_ewm_doc)
-def ewmvar(arg, com=None, span=None, min_periods=0, bias=False,
+def ewmvar(arg, com=None, span=None, halflife=None, min_periods=0, bias=False,
freq=None, time_rule=None):
- com = _get_center_of_mass(com, span)
+ com = _get_center_of_mass(com, span, halflife)
arg = _conv_timerule(arg, freq, time_rule)
moment2nd = ewma(arg * arg, com=com, min_periods=min_periods)
moment1st = ewma(arg, com=com, min_periods=min_periods)
@@ -393,9 +399,9 @@ def ewmvar(arg, com=None, span=None, min_periods=0, bias=False,
@Substitution("Exponentially-weighted moving std", _unary_arg, _bias_doc)
@Appender(_ewm_doc)
-def ewmstd(arg, com=None, span=None, min_periods=0, bias=False,
+def ewmstd(arg, com=None, span=None, halflife=None, min_periods=0, bias=False,
time_rule=None):
- result = ewmvar(arg, com=com, span=span, time_rule=time_rule,
+ result = ewmvar(arg, com=com, span=span, halflife=halflife, time_rule=time_rule,
min_periods=min_periods, bias=bias)
return _zsqrt(result)
@@ -404,17 +410,17 @@ def ewmstd(arg, com=None, span=None, min_periods=0, bias=False,
@Substitution("Exponentially-weighted moving covariance", _binary_arg, "")
@Appender(_ewm_doc)
-def ewmcov(arg1, arg2, com=None, span=None, min_periods=0, bias=False,
+def ewmcov(arg1, arg2, com=None, span=None, halflife=None, min_periods=0, bias=False,
freq=None, time_rule=None):
X, Y = _prep_binary(arg1, arg2)
X = _conv_timerule(X, freq, time_rule)
Y = _conv_timerule(Y, freq, time_rule)
- mean = lambda x: ewma(x, com=com, span=span, min_periods=min_periods)
+ mean = lambda x: ewma(x, com=com, span=span, halflife=halflife, min_periods=min_periods)
result = (mean(X * Y) - mean(X) * mean(Y))
- com = _get_center_of_mass(com, span)
+ com = _get_center_of_mass(com, span, halflife)
if not bias:
result *= (1.0 + 2.0 * com) / (2.0 * com)
@@ -423,15 +429,15 @@ def ewmcov(arg1, arg2, com=None, span=None, min_periods=0, bias=False,
@Substitution("Exponentially-weighted moving " "correlation", _binary_arg, "")
@Appender(_ewm_doc)
-def ewmcorr(arg1, arg2, com=None, span=None, min_periods=0,
+def ewmcorr(arg1, arg2, com=None, span=None, halflife=None, min_periods=0,
freq=None, time_rule=None):
X, Y = _prep_binary(arg1, arg2)
X = _conv_timerule(X, freq, time_rule)
Y = _conv_timerule(Y, freq, time_rule)
- mean = lambda x: ewma(x, com=com, span=span, min_periods=min_periods)
- var = lambda x: ewmvar(x, com=com, span=span, min_periods=min_periods,
+ mean = lambda x: ewma(x, com=com, span=span, halflife=halflife, min_periods=min_periods)
+ var = lambda x: ewmvar(x, com=com, span=span, halflife=halflife, min_periods=min_periods,
bias=True)
return (mean(X * Y) - mean(X) * mean(Y)) / _zsqrt(var(X) * var(Y))
diff --git a/pandas/stats/tests/test_moments.py b/pandas/stats/tests/test_moments.py
index 24fc04d849c7f..b897f60229a7b 100644
--- a/pandas/stats/tests/test_moments.py
+++ b/pandas/stats/tests/test_moments.py
@@ -535,6 +535,16 @@ def test_ewma_span_com_args(self):
self.assertRaises(Exception, mom.ewma, self.arr, com=9.5, span=20)
self.assertRaises(Exception, mom.ewma, self.arr)
+ def test_ewma_halflife_arg(self):
+ A = mom.ewma(self.arr, com=13.932726172912965)
+ B = mom.ewma(self.arr, halflife=10.0)
+ assert_almost_equal(A, B)
+
+ self.assertRaises(Exception, mom.ewma, self.arr, span=20, halflife=50)
+ self.assertRaises(Exception, mom.ewma, self.arr, com=9.5, halflife=50)
+ self.assertRaises(Exception, mom.ewma, self.arr, com=9.5, span=20, halflife=50)
+ self.assertRaises(Exception, mom.ewma, self.arr)
+
def test_ew_empty_arrays(self):
arr = np.array([], dtype=np.float64)
| Currently for the exponentially weighted moving functions (ewma, ewmstd, ewmvol, ewmvar, ewmcov) there are two ways (span, center of mass) to specify how fast the exponential decay is. It would be nice to support a "half life" option as well.
The half life is basically just the number of periods in which the exponential weight drops to one half, i.e.,
(1 - \alpha)^h = 0.5, h: half life
| https://api.github.com/repos/pandas-dev/pandas/pulls/4997 | 2013-09-26T18:12:46Z | 2013-09-26T19:45:15Z | null | 2014-07-03T21:50:21Z |
BUG: Fix appending when dtypes are not the same (error showing mixing float/np.datetime64 (GH4993) | diff --git a/doc/source/release.rst b/doc/source/release.rst
index 0da7337977851..bd3fbf53de039 100644
--- a/doc/source/release.rst
+++ b/doc/source/release.rst
@@ -474,6 +474,7 @@ Bug Fixes
explicitly passing labels (:issue:`3415`)
- Fixed wrong check for overlapping in ``DatetimeIndex.union`` (:issue:`4564`)
- Fixed conflict between thousands separator and date parser in csv_parser (:issue:`4678`)
+ - Fix appending when dtypes are not the same (error showing mixing float/np.datetime64) (:issue:`4993`)
pandas 0.12.0
-------------
diff --git a/pandas/tools/merge.py b/pandas/tools/merge.py
index 2cc9d586a05a3..5792161e0171e 100644
--- a/pandas/tools/merge.py
+++ b/pandas/tools/merge.py
@@ -17,13 +17,16 @@
from pandas.core.internals import (IntBlock, BoolBlock, BlockManager,
make_block, _consolidate)
from pandas.util.decorators import cache_readonly, Appender, Substitution
-from pandas.core.common import PandasError, ABCSeries
+from pandas.core.common import (PandasError, ABCSeries,
+ is_timedelta64_dtype, is_datetime64_dtype,
+ is_integer_dtype)
+
import pandas.core.common as com
import pandas.lib as lib
import pandas.algos as algos
import pandas.hashtable as _hash
-
+import pandas.tslib as tslib
@Substitution('\nleft : DataFrame')
@Appender(_merge_doc, indents=0)
@@ -1128,6 +1131,8 @@ def _concat_blocks(self, blocks):
return block
def _concat_single_item(self, objs, item):
+ # this is called if we don't have consistent dtypes in a row-wise append
+
all_values = []
dtypes = set()
@@ -1141,22 +1146,57 @@ def _concat_single_item(self, objs, item):
else:
all_values.append(None)
- # this stinks
- have_object = False
+ # figure out the resulting dtype of the combination
+ alls = set()
+ seen = []
for dtype in dtypes:
+ d = dict([ (t,False) for t in ['object','datetime','timedelta','other'] ])
if issubclass(dtype.type, (np.object_, np.bool_)):
- have_object = True
- if have_object:
- empty_dtype = np.object_
- else:
- empty_dtype = np.float64
+ d['object'] = True
+ alls.add('object')
+ elif is_datetime64_dtype(dtype):
+ d['datetime'] = True
+ alls.add('datetime')
+ elif is_timedelta64_dtype(dtype):
+ d['timedelta'] = True
+ alls.add('timedelta')
+ else:
+ d['other'] = True
+ alls.add('other')
+ seen.append(d)
+
+ if 'datetime' in alls or 'timedelta' in alls:
+
+ if 'object' in alls or 'other' in alls:
+ for v, s in zip(all_values,seen):
+ if s.get('datetime') or s.get('timedelta'):
+ pass
+
+ # if we have all null, then leave a date/time like type
+ # if we have only that type left
+ elif isnull(v).all():
+
+ alls.remove('other')
+ alls.remove('object')
+
+ # create the result
+ if 'object' in alls:
+ empty_dtype, fill_value = np.object_, np.nan
+ elif 'other' in alls:
+ empty_dtype, fill_value = np.float64, np.nan
+ elif 'datetime' in alls:
+ empty_dtype, fill_value = 'M8[ns]', tslib.iNaT
+ elif 'timedelta' in alls:
+ empty_dtype, fill_value = 'm8[ns]', tslib.iNaT
+ else: # pragma
+ raise AssertionError("invalid dtype determination in concat_single_item")
to_concat = []
for obj, item_values in zip(objs, all_values):
if item_values is None:
shape = obj.shape[1:]
missing_arr = np.empty(shape, dtype=empty_dtype)
- missing_arr.fill(np.nan)
+ missing_arr.fill(fill_value)
to_concat.append(missing_arr)
else:
to_concat.append(item_values)
diff --git a/pandas/tools/tests/test_merge.py b/pandas/tools/tests/test_merge.py
index 0eeb68c4691eb..203769e731022 100644
--- a/pandas/tools/tests/test_merge.py
+++ b/pandas/tools/tests/test_merge.py
@@ -742,6 +742,30 @@ def test_merge_nan_right(self):
assert_frame_equal(result, expected)
+ def test_append_dtype_coerce(self):
+
+ # GH 4993
+ # appending with datetime will incorrectly convert datetime64
+ import datetime as dt
+ from pandas import NaT
+
+ df1 = DataFrame(index=[1,2], data=[dt.datetime(2013,1,1,0,0),
+ dt.datetime(2013,1,2,0,0)],
+ columns=['start_time'])
+ df2 = DataFrame(index=[4,5], data=[[dt.datetime(2013,1,3,0,0),
+ dt.datetime(2013,1,3,6,10)],
+ [dt.datetime(2013,1,4,0,0),
+ dt.datetime(2013,1,4,7,10)]],
+ columns=['start_time','end_time'])
+
+ expected = concat([
+ Series([NaT,NaT,dt.datetime(2013,1,3,6,10),dt.datetime(2013,1,4,7,10)],name='end_time'),
+ Series([dt.datetime(2013,1,1,0,0),dt.datetime(2013,1,2,0,0),dt.datetime(2013,1,3,0,0),dt.datetime(2013,1,4,0,0)],name='start_time'),
+ ],axis=1)
+ result = df1.append(df2,ignore_index=True)
+ assert_frame_equal(result, expected)
+
+
def test_overlapping_columns_error_message(self):
# #2649
df = DataFrame({'key': [1, 2, 3],
| closes #4993
| https://api.github.com/repos/pandas-dev/pandas/pulls/4995 | 2013-09-26T13:59:00Z | 2013-09-26T18:31:38Z | 2013-09-26T18:31:38Z | 2014-06-24T01:41:21Z |
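The dtype-precedence logic this patch adds to `_concat_single_item` boils down to: object beats everything, then plain numeric (filled with float64/NaN), then datetime64, then timedelta64 (both filled with the iNaT sentinel). A minimal sketch of that precedence — the function name is mine, and `tslib.iNaT` is approximated by the int64 minimum, which is its actual value:

```python
import numpy as np

INAT = np.iinfo(np.int64).min  # pandas' tslib.iNaT sentinel value

def resolve_concat_dtype(kinds):
    # kinds is a set drawn from {'object', 'other', 'datetime', 'timedelta'},
    # mirroring the `alls` set built in the patched _concat_single_item
    if 'object' in kinds:
        return np.dtype(object), np.nan
    if 'other' in kinds:
        return np.dtype(np.float64), np.nan
    if 'datetime' in kinds:
        return np.dtype('M8[ns]'), INAT
    if 'timedelta' in kinds:
        return np.dtype('m8[ns]'), INAT
    raise AssertionError("invalid dtype determination")

print(resolve_concat_dtype({'datetime'})[0])            # datetime64[ns]
print(resolve_concat_dtype({'object', 'datetime'})[0])  # object
```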
Axis slicer | diff --git a/pandas/core/generic.py b/pandas/core/generic.py
index 4553e4804e98b..e94cb2827d0cf 100644
--- a/pandas/core/generic.py
+++ b/pandas/core/generic.py
@@ -9,7 +9,7 @@
from pandas.core.base import PandasObject
from pandas.core.index import Index, MultiIndex, _ensure_index, InvalidIndexError
import pandas.core.indexing as indexing
-from pandas.core.indexing import _maybe_convert_indices
+from pandas.core.indexing import _maybe_convert_indices, _axis_slicer
from pandas.tseries.index import DatetimeIndex
from pandas.tseries.period import PeriodIndex
from pandas.core.internals import BlockManager
@@ -997,10 +997,9 @@ def drop(self, labels, axis=0, level=None):
else:
indexer = -axis.isin(labels)
- slicer = [slice(None)] * self.ndim
- slicer[self._get_axis_number(axis_name)] = indexer
+ slicer = _axis_slicer(indexer, axis=self._get_axis_number(axis_name), ndim=self.ndim)
- return self.ix[tuple(slicer)]
+ return self.ix[slicer]
def add_prefix(self, prefix):
"""
diff --git a/pandas/core/indexing.py b/pandas/core/indexing.py
index afbeb53d857e2..9d0bf8e9b4988 100644
--- a/pandas/core/indexing.py
+++ b/pandas/core/indexing.py
@@ -861,9 +861,7 @@ def _convert_to_indexer(self, obj, axis=0, is_setter=False):
raise
def _tuplify(self, loc):
- tup = [slice(None, None) for _ in range(self.ndim)]
- tup[0] = loc
- return tuple(tup)
+ return _axis_slicer(loc, axis=0, ndim=self.ndim)
def _get_slice_axis(self, slice_obj, axis=0):
obj = self.obj
@@ -1372,3 +1370,91 @@ def _maybe_droplevels(index, key):
pass
return index
+
+_missing = object()
+def _axis_slicer(indexer, stop=_missing, axis=None, ndim=None):
+ """
+ Return a slicer (tuple of slices) that selects data along
+ the proper axis. Useful for programmatically selecting data
+ via .iloc/.ix for NDFrame.
+
+ Parameters
+ ----------
+ indexer : None, int, ndarray, slice
+ Can either be valid indexer for `.iloc`, or valid `start` param for `slice`
+ stop : None, int (optional)
+ If passed in, `slice(indexer, stop)` will be used as indexer
+ axis : int
+ ndim : int (optional)
+ If passed in, the slicer will always have `ndim` elements.
+
+ Notes
+ -----
+ Without ndim, the slicer will only be large enough to target the required axis.
+ Since fancy indexing normally treats missing indices as select-all,
+ this is not required unless your function assumes otherwise
+
+ Returns
+ -------
+ slices : slicer (tuple of slices)
+ Indices that will select data along proper axis
+
+ Examples
+ --------
+ >>> _axis_slicer(10, axis=1)
+ (slice(None, None, None), 10)
+
+ >>> _axis_slicer(5, 10, axis=1)
+ (slice(None, None, None), slice(5, 10, None))
+
+ >>> df = pd.DataFrame(np.arange(30).reshape(3,10))
+ >>> df.iloc[_axis_slicer(3, 5, axis=1)]
+ 3 4
+ 0 3 4
+ 1 13 14
+ 2 23 24
+
+ >>> df.iloc[_axis_slicer(None, 2, axis=0)]
+ 0 1 2 3 4 5 6 7 8 9
+ 0 0 1 2 3 4 5 6 7 8 9
+ 1 10 11 12 13 14 15 16 17 18 19
+
+ >>> df.iloc[_axis_slicer(np.array([1,3,5]), axis=1)]
+ 1 3 5
+ 0 1 3 5
+ 1 11 13 15
+ 2 21 23 25
+ """
+ if not isinstance(axis, int):
+ raise TypeError("axis parameter must be an int and not {axis}".format(axis=axis))
+
+ if indexer is None and stop is _missing:
+ raise Exception("indexer can only be None when stop is missing")
+
+ if np.ndim(indexer) > 1:
+ raise Exception("indexer.ndim cannot be >= 2")
+
+ size = axis + 1
+ if ndim:
+ if axis >= ndim:
+ raise Exception("axis cannot be greater than ndim."
+ " axis: {axis}, ndim: {ndim}".format(axis=axis,ndim=ndim))
+ size = ndim
+
+ slices = [slice(None) for x in range(size)]
+
+ axis_slicer = None
+ # indexers
+ if stop is _missing:
+ axis_slicer = indexer
+ else:
+ # for now, pass thru, quasi supports non-int slices
+ axis_slicer = slice(indexer, stop)
+
+ # catch all, above statements used to be more restrictive.
+ if axis_slicer is None:
+ raise Exception("indexer:{indexer}, stop:{stop} did not create a valid "
+ "slicer".format(indexer=indexer, stop=stop))
+ slices[axis] = axis_slicer
+
+ return tuple(slices)
diff --git a/pandas/core/internals.py b/pandas/core/internals.py
index 8fcb64e6d0eda..dd92cd11a9589 100644
--- a/pandas/core/internals.py
+++ b/pandas/core/internals.py
@@ -13,7 +13,7 @@
from pandas.core.index import (Index, MultiIndex, _ensure_index,
_handle_legacy_indexes)
from pandas.core.indexing import (_check_slice_bounds, _maybe_convert_indices,
- _length_of_indexer)
+ _length_of_indexer, _axis_slicer)
import pandas.core.common as com
from pandas.sparse.array import _maybe_to_sparse, SparseArray
import pandas.lib as lib
@@ -975,8 +975,8 @@ def func(c, v, o):
for m in [mask, ~mask]:
if m.any():
items = self.items[m]
- slices = [slice(None)] * cond.ndim
- slices[axis] = self.items.get_indexer(items)
+ slices = _axis_slicer(self.items.get_indexer(items), axis=axis, ndim=cond.ndim)
+
r = self._try_cast_result(result[slices])
result_blocks.append(make_block(r.T, items, self.ref_items))
@@ -2295,9 +2295,7 @@ def get_slice(self, slobj, axis=0, raise_on_error=False):
def _slice_blocks(self, slobj, axis):
new_blocks = []
- slicer = [slice(None, None) for _ in range(self.ndim)]
- slicer[axis] = slobj
- slicer = tuple(slicer)
+ slicer = _axis_slicer(slobj, axis=axis, ndim=self.ndim)
for block in self.blocks:
newb = make_block(block._slice(slicer),
@@ -2400,9 +2398,7 @@ def xs(self, key, axis=1, copy=True):
% axis)
loc = self.axes[axis].get_loc(key)
- slicer = [slice(None, None) for _ in range(self.ndim)]
- slicer[axis] = loc
- slicer = tuple(slicer)
+ slicer = _axis_slicer(loc, axis=axis, ndim=self.ndim)
new_axes = list(self.axes)
diff --git a/pandas/core/panel.py b/pandas/core/panel.py
index 34b65f169b904..eb8bb6d44c857 100644
--- a/pandas/core/panel.py
+++ b/pandas/core/panel.py
@@ -14,7 +14,7 @@
from pandas.core.categorical import Categorical
from pandas.core.index import (Index, MultiIndex, _ensure_index,
_get_combined_index)
-from pandas.core.indexing import _maybe_droplevels, _is_list_like
+from pandas.core.indexing import _maybe_droplevels, _is_list_like, _axis_slicer
from pandas.core.internals import (BlockManager,
create_block_manager_from_arrays,
create_block_manager_from_blocks)
@@ -319,8 +319,7 @@ def _getitem_multilevel(self, key):
if isinstance(loc, (slice, np.ndarray)):
new_index = info[loc]
result_index = _maybe_droplevels(new_index, key)
- slices = [loc] + [slice(None) for x in range(
- self._AXIS_LEN - 1)]
+ slices = _axis_slicer(loc, axis=0, ndim=self._AXIS_LEN)
new_values = self.values[slices]
d = self._construct_axes_dict(self._AXIS_ORDERS[1:])
diff --git a/pandas/io/pytables.py b/pandas/io/pytables.py
index 42a434c005a4c..6cf7061a1cca6 100644
--- a/pandas/io/pytables.py
+++ b/pandas/io/pytables.py
@@ -23,6 +23,7 @@
from pandas.core.categorical import Categorical
from pandas.core.common import _asarray_tuplesafe
from pandas.core.internals import BlockManager, make_block
+from pandas.core.indexing import _axis_slicer
from pandas.core.reshape import block2d_to_blocknd, factor_indexer
from pandas.core.index import _ensure_index
from pandas.tseries.timedeltas import _coerce_scalar_to_timedelta_type
@@ -3790,9 +3791,8 @@ def _reindex_axis(obj, axis, labels, other=None):
if other is not None:
labels = labels & _ensure_index(other.unique())
if not labels.equals(ax):
- slicer = [ slice(None, None) ] * obj.ndim
- slicer[axis] = labels
- obj = obj.loc[tuple(slicer)]
+ slicer = _axis_slicer(labels, axis=axis, ndim=obj.ndim)
+ obj = obj.loc[slicer]
return obj
def _get_info(info, name):
diff --git a/pandas/stats/moments.py b/pandas/stats/moments.py
index fd81bd119fe09..de1ffdc63e6d7 100644
--- a/pandas/stats/moments.py
+++ b/pandas/stats/moments.py
@@ -13,6 +13,7 @@
import pandas.algos as algos
import pandas.core.common as com
from pandas.core.common import _values_from_object
+from pandas.core.indexing import _axis_slicer
from pandas.util.decorators import Substitution, Appender
@@ -299,17 +300,14 @@ def _center_window(rs, window, axis):
if isinstance(rs, (Series, DataFrame, Panel)):
rs = rs.shift(-offset, axis=axis)
else:
- rs_indexer = [slice(None)] * rs.ndim
- rs_indexer[axis] = slice(None, -offset)
+ rs_indexer = _axis_slicer(slice(None, -offset), axis=axis, ndim=rs.ndim)
- lead_indexer = [slice(None)] * rs.ndim
- lead_indexer[axis] = slice(offset, None)
+ lead_indexer = _axis_slicer(slice(offset, None), axis=axis, ndim=rs.ndim)
- na_indexer = [slice(None)] * rs.ndim
- na_indexer[axis] = slice(-offset, None)
+ na_indexer = _axis_slicer(slice(-offset, None), axis=axis, ndim=rs.ndim)
- rs[tuple(rs_indexer)] = np.copy(rs[tuple(lead_indexer)])
- rs[tuple(na_indexer)] = np.nan
+ rs[rs_indexer] = np.copy(rs[tuple(lead_indexer)])
+ rs[na_indexer] = np.nan
return rs
diff --git a/pandas/tests/test_indexing.py b/pandas/tests/test_indexing.py
index 837acb90407ea..51a81b06531ca 100644
--- a/pandas/tests/test_indexing.py
+++ b/pandas/tests/test_indexing.py
@@ -1821,6 +1821,96 @@ def check_slicing_positional(index):
#self.assertRaises(TypeError, lambda : s.iloc[2.0:5.0])
#self.assertRaises(TypeError, lambda : s.iloc[2:5.0])
+ def test_axis_slicer(self):
+ from pandas.core.indexing import _axis_slicer
+
+ # axis check
+ self.assertRaises(TypeError, lambda : _axis_slicer(0, axis='items'))
+ # ndim check
+ self.assertRaises(Exception, lambda : _axis_slicer(np.arange(100).reshape(10,10), axis=1))
+
+ self.assertRaises(Exception, lambda : _axis_slicer(None, axis=1))
+
+ # certain core parts expect a slice(None, None) for every axis
+ slicer = _axis_slicer(0, axis=0, ndim=3)
+ assert len(slicer) == 3
+
+ slicer = _axis_slicer(0, axis=2)
+ assert len(slicer) == 3
+
+ slicer = _axis_slicer(0, axis=1)
+ assert len(slicer) == 2
+
+ slicer = _axis_slicer(0, axis=1, ndim=2)
+ assert len(slicer) == 2
+
+ # axis >= ndim
+ self.assertRaises(Exception, lambda : _axis_slicer(0, axis=1, ndim=1))
+
+ # indexers
+ indexer = np.array([0, 4, 10])
+ slicer = _axis_slicer(indexer, axis=0, ndim=3)
+ assert_array_equal(indexer, slicer[0])
+
+ indexer = np.array([0, 4, 10])
+ slicer = _axis_slicer(indexer, axis=1)
+ assert_array_equal(indexer, slicer[1])
+
+ # slice
+ indexer = slice(10, 20)
+ slicer = _axis_slicer(indexer, axis=1)
+ assert_array_equal(indexer, slicer[1])
+
+ # single
+ slicer = _axis_slicer(3, axis=1)
+ assert slicer[1] == 3
+
+ # start/stop
+ # [:10]
+ slicer = _axis_slicer(None, 10, axis=1)
+ assert slicer[1] == slice(None, 10)
+
+ # [5:10]
+ slicer = _axis_slicer(5, 10, axis=1)
+ assert slicer[1] == slice(5, 10)
+
+ # [5:-10]
+ slicer = _axis_slicer(5, -10, axis=1)
+ assert slicer[1] == slice(5, -10)
+
+ df = pd.DataFrame(np.arange(100).reshape(10,10))
+
+ indexer = np.array([0, 4, 3])
+ correct = df.iloc[:, indexer]
+ test = df.iloc[_axis_slicer(indexer, axis=1)]
+ assert_frame_equal(test, correct)
+
+ indexer = 0
+ correct = df.iloc[:, indexer]
+ test = df.iloc[_axis_slicer(indexer, axis=1)]
+ assert_series_equal(test, df.iloc[:, indexer])
+
+ #[:-3]
+ indexer = slice(-3, None)
+ correct = df.iloc[:, indexer]
+ test = df.iloc[_axis_slicer(indexer, axis=1)]
+ assert_frame_equal(test, correct)
+
+ #[:3]
+ indexer = slice(3)
+ correct = df.iloc[:, indexer]
+ test = df.iloc[_axis_slicer(indexer, axis=1)]
+ assert_frame_equal(test, correct)
+
+ #[-9,-5]
+ correct = df.iloc[:, slice(-9, -5)]
+ test = df.iloc[_axis_slicer(-9, -5, axis=1)]
+ assert_frame_equal(test, correct)
+
+ #[:5]
+ correct = df.iloc[slice(None, 5)]
+ test = df.iloc[_axis_slicer(None, 5, axis=0)]
+ assert_frame_equal(test, correct)
if __name__ == '__main__':
import nose
| The internal axis slicer. I overloaded `indexer` to be `start` when `stop` is passed in instead of having a separate indexer kwarg.
| https://api.github.com/repos/pandas-dev/pandas/pulls/4994 | 2013-09-26T13:46:43Z | 2014-02-18T20:06:02Z | null | 2014-06-26T17:16:33Z |
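The core trick in `_axis_slicer` — build a tuple of `slice(None)` placeholders and drop the real indexer at position `axis` — works with plain NumPy fancy indexing. A simplified standalone sketch (it omits the patch's `_missing` sentinel, so here `stop=None` just means "no stop argument" rather than an open-ended slice):

```python
import numpy as np

def axis_slicer(indexer, stop=None, axis=0, ndim=None):
    # build (slice(None), ..., indexer, ...) targeting only `axis`;
    # when `stop` is given, `indexer` is reused as the slice start
    size = ndim if ndim is not None else axis + 1
    slices = [slice(None)] * size
    slices[axis] = indexer if stop is None else slice(indexer, stop)
    return tuple(slices)

arr = np.arange(30).reshape(3, 10)
print(axis_slicer(10, axis=1))               # (slice(None, None, None), 10)
print(arr[axis_slicer(3, 5, axis=1)].shape)  # (3, 2)
```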
TST: fix indexing test for windows failure | diff --git a/pandas/tests/test_indexing.py b/pandas/tests/test_indexing.py
index 0eab5ab834533..837acb90407ea 100644
--- a/pandas/tests/test_indexing.py
+++ b/pandas/tests/test_indexing.py
@@ -1532,7 +1532,7 @@ def test_floating_index(self):
# related 236
# scalar/slicing of a float index
- s = Series(np.arange(5), index=np.arange(5) * 2.5)
+ s = Series(np.arange(5), index=np.arange(5) * 2.5, dtype=np.int64)
# label based slicing
result1 = s[1.0:3.0]
| Fixes new / remaining test failure in #4866
| https://api.github.com/repos/pandas-dev/pandas/pulls/4992 | 2013-09-26T06:28:00Z | 2013-09-26T10:49:39Z | 2013-09-26T10:49:39Z | 2014-07-16T08:31:19Z |
BUG: Warn when dtypes differ in between chunks in csv parser | diff --git a/doc/source/release.rst b/doc/source/release.rst
index ce08a1ca0a175..810889cbc4b26 100644
--- a/doc/source/release.rst
+++ b/doc/source/release.rst
@@ -480,6 +480,8 @@ Bug Fixes
- Fixed wrong check for overlapping in ``DatetimeIndex.union`` (:issue:`4564`)
- Fixed conflict between thousands separator and date parser in csv_parser (:issue:`4678`)
- Fix appending when dtypes are not the same (error showing mixing float/np.datetime64) (:issue:`4993`)
+ - Fixed a bug where low memory c parser could create different types in different
+ chunks of the same file. Now coerces to numerical type or raises warning. (:issue:`3866`)
pandas 0.12.0
-------------
diff --git a/pandas/io/common.py b/pandas/io/common.py
index 02242c5a91493..aa5fdb29f3b5b 100644
--- a/pandas/io/common.py
+++ b/pandas/io/common.py
@@ -36,10 +36,15 @@ def urlopen(*args, **kwargs):
_VALID_URLS = set(uses_relative + uses_netloc + uses_params)
_VALID_URLS.discard('')
+
class PerformanceWarning(Warning):
pass
+class DtypeWarning(Warning):
+ pass
+
+
def _is_url(url):
"""Check to see if a URL has a valid protocol.
diff --git a/pandas/io/tests/test_parsers.py b/pandas/io/tests/test_parsers.py
index 48c47238aec6f..24ec88cff727b 100644
--- a/pandas/io/tests/test_parsers.py
+++ b/pandas/io/tests/test_parsers.py
@@ -11,6 +11,7 @@
from numpy import nan
import numpy as np
+from pandas.io.common import DtypeWarning
from pandas import DataFrame, Series, Index, MultiIndex, DatetimeIndex
from pandas.compat import(
@@ -1865,6 +1866,24 @@ def test_parse_integers_above_fp_precision(self):
self.assertTrue(np.array_equal(result['Numbers'], expected['Numbers']))
+ def test_chunks_have_consistent_numerical_type(self):
+ integers = [str(i) for i in range(499999)]
+ data = "a\n" + "\n".join(integers + ["1.0", "2.0"] + integers)
+
+ with tm.assert_produces_warning(False):
+ df = self.read_csv(StringIO(data))
+ self.assertTrue(type(df.a[0]) is np.float64) # Assert that types were coerced.
+ self.assertEqual(df.a.dtype, np.float)
+
+ def test_warn_if_chunks_have_mismatched_type(self):
+ # See test in TestCParserLowMemory.
+ integers = [str(i) for i in range(499999)]
+ data = "a\n" + "\n".join(integers + ['a', 'b'] + integers)
+
+ with tm.assert_produces_warning(False):
+ df = self.read_csv(StringIO(data))
+ self.assertEqual(df.a.dtype, np.object)
+
class TestPythonParser(ParserTests, unittest.TestCase):
@@ -2301,7 +2320,6 @@ def test_usecols_dtypes(self):
self.assertTrue((result.dtypes == [object, np.int, np.float]).all())
self.assertTrue((result2.dtypes == [object, np.float]).all())
-
def test_usecols_implicit_index_col(self):
# #2654
data = 'a,b,c\n4,apple,bat,5.7\n8,orange,cow,10'
@@ -2528,16 +2546,22 @@ def test_tokenize_CR_with_quoting(self):
def test_raise_on_no_columns(self):
# single newline
- data = """
-"""
+ data = "\n"
self.assertRaises(ValueError, self.read_csv, StringIO(data))
# test with more than a single newline
- data = """
+ data = "\n\n\n"
+ self.assertRaises(ValueError, self.read_csv, StringIO(data))
+ def test_warn_if_chunks_have_mismatched_type(self):
+ # Issue #3866 If chunks are different types and can't
+ # be coerced using numerical types, then issue warning.
+ integers = [str(i) for i in range(499999)]
+ data = "a\n" + "\n".join(integers + ['a', 'b'] + integers)
-"""
- self.assertRaises(ValueError, self.read_csv, StringIO(data))
+ with tm.assert_produces_warning(DtypeWarning):
+ df = self.read_csv(StringIO(data))
+ self.assertEqual(df.a.dtype, np.object)
class TestParseSQL(unittest.TestCase):
diff --git a/pandas/parser.pyx b/pandas/parser.pyx
index b97929023adb6..d08c020c9e9bc 100644
--- a/pandas/parser.pyx
+++ b/pandas/parser.pyx
@@ -5,10 +5,12 @@ from libc.stdio cimport fopen, fclose
from libc.stdlib cimport malloc, free
from libc.string cimport strncpy, strlen, strcmp, strcasecmp
cimport libc.stdio as stdio
+import warnings
from cpython cimport (PyObject, PyBytes_FromString,
PyBytes_AsString, PyBytes_Check,
PyUnicode_Check, PyUnicode_AsUTF8String)
+from io.common import DtypeWarning
cdef extern from "Python.h":
@@ -1735,11 +1737,28 @@ def _concatenate_chunks(list chunks):
cdef:
list names = list(chunks[0].keys())
object name
+ list warning_columns
+ object warning_names
+ object common_type
result = {}
+ warning_columns = list()
for name in names:
arrs = [chunk.pop(name) for chunk in chunks]
+ # Check each arr for consistent types.
+ dtypes = set([a.dtype for a in arrs])
+ if len(dtypes) > 1:
+ common_type = np.find_common_type(dtypes, [])
+ if common_type == np.object:
+ warning_columns.append(str(name))
result[name] = np.concatenate(arrs)
+
+ if warning_columns:
+ warning_names = ','.join(warning_columns)
+ warning_message = " ".join(["Columns (%s) have mixed types." % warning_names,
+ "Specify dtype option on import or set low_memory=False."
+ ])
+ warnings.warn(warning_message, DtypeWarning)
return result
#----------------------------------------------------------------------
| Closes #3866
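A minimal sketch of the dtype check this diff adds to `_concatenate_chunks`: a column is flagged when its chunks disagree on dtype and the only common type is `object`. This is an illustration, not the parser code itself; the diff uses `np.find_common_type`, which was removed in NumPy 2.0, so the sketch substitutes `np.result_type` as an assumed modern equivalent.

```python
import numpy as np

# Two hypothetical parser chunks for column "a": one numeric, one of strings.
chunks = [
    {"a": np.array([1, 2, 3])},
    {"a": np.array(["x", "y"], dtype=object)},
]

arrs = [chunk["a"] for chunk in chunks]
dtypes = {a.dtype for a in arrs}

# The diff warns when chunk dtypes differ and the common type is object --
# i.e. the values could not be reconciled as a numerical type.
mixed = len(dtypes) > 1 and np.result_type(*arrs) == np.dtype(object)

# Concatenation silently upcasts to object either way; the warning just
# makes that visible and suggests `dtype=...` or `low_memory=False`.
result = np.concatenate(arrs)
```

When all chunks share a numerical common type (the first test above, ints plus floats), `mixed` stays `False` and no warning is issued.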
| https://api.github.com/repos/pandas-dev/pandas/pulls/4991 | 2013-09-26T03:52:21Z | 2013-09-29T19:28:11Z | 2013-09-29T19:28:11Z | 2014-07-03T21:50:15Z |
CLN: fix py2to3 issues in categorical.py | diff --git a/pandas/core/categorical.py b/pandas/core/categorical.py
index 0868ead2c1558..97ed0fdb0da30 100644
--- a/pandas/core/categorical.py
+++ b/pandas/core/categorical.py
@@ -2,6 +2,9 @@
import numpy as np
+from pandas import compat
+from pandas.compat import u
+
from pandas.core.algorithms import factorize
from pandas.core.base import PandasObject
from pandas.core.index import Index
@@ -147,7 +150,7 @@ def _tidy_repr(self, max_vals=20):
#TODO: tidy_repr for footer since there may be a ton of levels?
result = '%s\n%s' % (result, self._repr_footer())
- return result
+ return compat.text_type(result)
def _repr_footer(self):
levheader = 'Levels (%d): ' % len(self.levels)
@@ -158,17 +161,16 @@ def _repr_footer(self):
levstring = '\n'.join([lines[0]] +
[indent + x.lstrip() for x in lines[1:]])
- namestr = u"Name: %s, " % com.pprint_thing(
- self.name) if self.name is not None else ""
- return u'%s\n%sLength: %d' % (levheader + levstring, namestr,
- len(self))
+ namestr = "Name: %s, " % self.name if self.name is not None else ""
+ return u('%s\n%sLength: %d' % (levheader + levstring, namestr,
+ len(self)))
def _get_repr(self, name=False, length=True, na_rep='NaN', footer=True):
formatter = fmt.CategoricalFormatter(self, name=name,
length=length, na_rep=na_rep,
footer=footer)
result = formatter.to_string()
- return result
+ return compat.text_type(result)
def __unicode__(self):
width, height = get_terminal_size()
@@ -180,10 +182,10 @@ def __unicode__(self):
result = self._get_repr(length=len(self) > 50,
name=True)
else:
- result = u'Categorical([], %s' % self._get_repr(name=True,
- length=False,
- footer=True,
- )
+ result = 'Categorical([], %s' % self._get_repr(name=True,
+ length=False,
+ footer=True,
+ )
return result
diff --git a/pandas/core/format.py b/pandas/core/format.py
index 749120f8732c2..be6ad4d2bc5ef 100644
--- a/pandas/core/format.py
+++ b/pandas/core/format.py
@@ -65,14 +65,14 @@ class CategoricalFormatter(object):
def __init__(self, categorical, buf=None, length=True,
na_rep='NaN', name=False, footer=True):
self.categorical = categorical
- self.buf = buf if buf is not None else StringIO(u"")
+ self.buf = buf if buf is not None else StringIO(u(""))
self.name = name
self.na_rep = na_rep
self.length = length
self.footer = footer
def _get_footer(self):
- footer = u''
+ footer = ''
if self.name:
name = com.pprint_thing(self.categorical.name,
@@ -82,7 +82,7 @@ def _get_footer(self):
if self.length:
if footer:
- footer += u', '
+ footer += ', '
footer += "Length: %d" % len(self.categorical)
levheader = 'Levels (%d): ' % len(self.categorical.levels)
@@ -94,10 +94,10 @@ def _get_footer(self):
levstring = '\n'.join([lines[0]] +
[indent + x.lstrip() for x in lines[1:]])
if footer:
- footer += u', '
+ footer += ', '
footer += levheader + levstring
- return footer
+ return compat.text_type(footer)
def _get_formatted_values(self):
return format_array(np.asarray(self.categorical), None,
@@ -111,18 +111,18 @@ def to_string(self):
if self.footer:
return self._get_footer()
else:
- return u''
+ return u('')
fmt_values = self._get_formatted_values()
pad_space = 10
- result = [u'%s' % i for i in fmt_values]
+ result = ['%s' % i for i in fmt_values]
if self.footer:
footer = self._get_footer()
if footer:
result.append(footer)
- return u'\n'.join(result)
+ return compat.text_type(u('\n').join(result))
class SeriesFormatter(object):
diff --git a/pandas/tests/test_categorical.py b/pandas/tests/test_categorical.py
index f4d1c6a0116a9..e47ba0c8e1569 100644
--- a/pandas/tests/test_categorical.py
+++ b/pandas/tests/test_categorical.py
@@ -1,7 +1,7 @@
# pylint: disable=E1101,E1103,W0232
from datetime import datetime
-from pandas.compat import range, lrange
+from pandas.compat import range, lrange, u
import unittest
import nose
import re
| https://api.github.com/repos/pandas-dev/pandas/pulls/4990 | 2013-09-26T02:18:37Z | 2013-09-26T03:12:20Z | 2013-09-26T03:12:19Z | 2014-07-16T08:31:17Z | |
BUG: Bug in concatenation with duplicate columns across dtypes, GH4975 | diff --git a/doc/source/release.rst b/doc/source/release.rst
index 26529072f15bf..1f11ce414ae56 100644
--- a/doc/source/release.rst
+++ b/doc/source/release.rst
@@ -432,7 +432,7 @@ Bug Fixes
- Bug in multi-indexing with a partial string selection as one part of a MultIndex (:issue:`4758`)
- Bug with reindexing on the index with a non-unique index will now raise ``ValueError`` (:issue:`4746`)
- Bug in setting with ``loc/ix`` a single indexer with a multi-index axis and a numpy array, related to (:issue:`3777`)
- - Bug in concatenation with duplicate columns across dtypes not merging with axis=0 (:issue:`4771`)
+ - Bug in concatenation with duplicate columns across dtypes not merging with axis=0 (:issue:`4771`, :issue:`4975`)
- Bug in ``iloc`` with a slice index failing (:issue:`4771`)
- Incorrect error message with no colspecs or width in ``read_fwf``. (:issue:`4774`)
- Fix bugs in indexing in a Series with a duplicate index (:issue:`4548`, :issue:`4550`)
diff --git a/pandas/tools/merge.py b/pandas/tools/merge.py
index d7fedecdb0ef2..2cc9d586a05a3 100644
--- a/pandas/tools/merge.py
+++ b/pandas/tools/merge.py
@@ -655,6 +655,7 @@ def __init__(self, data_list, join_index, indexers, axis=1, copy=True):
self.join_index = join_index
self.axis = axis
self.copy = copy
+ self.offsets = None
# do NOT sort
self.result_items = _concat_indexes([d.items for d in data_list])
@@ -683,14 +684,29 @@ def get_result(self):
blockmaps = self._prepare_blocks()
kinds = _get_merge_block_kinds(blockmaps)
- result_blocks = []
-
# maybe want to enable flexible copying <-- what did I mean?
+ kind_blocks = []
for klass in kinds:
klass_blocks = []
for unit, mapping in blockmaps:
if klass in mapping:
klass_blocks.extend((unit, b) for b in mapping[klass])
+
+ # blocks that we are going to merge
+ kind_blocks.append(klass_blocks)
+
+ # create the merge offsets, essentially where the resultant blocks go in the result
+ if not self.result_items.is_unique:
+
+ # length of the merges for each of the klass blocks
+ self.offsets = np.zeros(len(blockmaps))
+ for kb in kind_blocks:
+ kl = list(b.get_merge_length() for unit, b in kb)
+ self.offsets += np.array(kl)
+
+ # merge the blocks to create the result blocks
+ result_blocks = []
+ for klass_blocks in kind_blocks:
res_blk = self._get_merged_block(klass_blocks)
result_blocks.append(res_blk)
@@ -726,7 +742,8 @@ def _merge_blocks(self, merge_chunks):
n = len(fidx) if fidx is not None else out_shape[self.axis]
- out_shape[0] = sum(blk.get_merge_length() for unit, blk in merge_chunks)
+ merge_lengths = list(blk.get_merge_length() for unit, blk in merge_chunks)
+ out_shape[0] = sum(merge_lengths)
out_shape[self.axis] = n
# Should use Fortran order??
@@ -746,9 +763,8 @@ def _merge_blocks(self, merge_chunks):
# calculate by the existing placement plus the offset in the result set
placement = None
if not self.result_items.is_unique:
- nchunks = len(merge_chunks)
- offsets = np.array([0] + [ len(self.result_items) / nchunks ] * (nchunks-1)).cumsum()
placement = []
+ offsets = np.append(np.array([0]),self.offsets.cumsum()[:-1])
for (unit, blk), offset in zip(merge_chunks,offsets):
placement.extend(blk.ref_locs+offset)
diff --git a/pandas/tools/tests/test_merge.py b/pandas/tools/tests/test_merge.py
index f7eb3c125db61..0eeb68c4691eb 100644
--- a/pandas/tools/tests/test_merge.py
+++ b/pandas/tools/tests/test_merge.py
@@ -15,7 +15,8 @@
from pandas.tools.merge import merge, concat, ordered_merge, MergeError
from pandas.util.testing import (assert_frame_equal, assert_series_equal,
assert_almost_equal, rands,
- makeCustomDataframe as mkdf)
+ makeCustomDataframe as mkdf,
+ assertRaisesRegexp)
from pandas import isnull, DataFrame, Index, MultiIndex, Panel, Series, date_range
import pandas.algos as algos
import pandas.util.testing as tm
@@ -1435,6 +1436,8 @@ def test_dups_index(self):
assert_frame_equal(result, expected)
def test_join_dups(self):
+
+ # joining dups
df = concat([DataFrame(np.random.randn(10,4),columns=['A','A','B','B']),
DataFrame(np.random.randint(0,10,size=20).reshape(10,2),columns=['A','C'])],
axis=1)
@@ -1444,6 +1447,18 @@ def test_join_dups(self):
result.columns = expected.columns
assert_frame_equal(result, expected)
+ # GH 4975, invalid join on dups
+ w = DataFrame(np.random.randn(4,2), columns=["x", "y"])
+ x = DataFrame(np.random.randn(4,2), columns=["x", "y"])
+ y = DataFrame(np.random.randn(4,2), columns=["x", "y"])
+ z = DataFrame(np.random.randn(4,2), columns=["x", "y"])
+
+ dta = x.merge(y, left_index=True, right_index=True).merge(z, left_index=True, right_index=True, how="outer")
+ dta = dta.merge(w, left_index=True, right_index=True)
+ expected = concat([x,y,z,w],axis=1)
+ expected.columns=['x_x','y_x','x_y','y_y','x_x','y_x','x_y','y_y']
+ assert_frame_equal(dta,expected)
+
def test_handle_empty_objects(self):
df = DataFrame(np.random.randn(10, 4), columns=list('abcd'))
| partially fixed in #4771
closes #4975
```
In [1]: w = DataFrame(np.random.randn(4,2), columns=["x", "y"])
In [2]: x = DataFrame(np.random.randn(4,2), columns=["x", "y"])
In [3]: y = DataFrame(np.random.randn(4,2), columns=["x", "y"])
In [4]: z = DataFrame(np.random.randn(4,2), columns=["x", "y"])
In [5]: dta = x.merge(y, left_index=True, right_index=True).merge(z, left_index=True, right_index=True, how="outer")
In [6]: dta = dta.merge(w, left_index=True, right_index=True)
In [7]: dta
Out[7]:
x_x y_x x_y y_y x_x y_x x_y y_y
0 0.393625 -0.340291 0.035043 -0.195235 -0.892856 -0.357269 0.820424 0.142803
1 -1.600176 0.737261 -0.571140 1.352393 0.201634 1.403633 0.590919 -1.003057
2 -1.046113 2.148139 2.406527 -1.460300 0.881712 0.949246 -0.061758 -0.386265
3 -0.472761 -0.055612 -0.449152 -0.209876 1.076689 0.294275 -0.684433 0.925683
```
Via direct concat
```
In [11]: concat([x,y,z,w],axis=1)
Out[11]:
x y x y x y x y
0 0.393625 -0.340291 0.035043 -0.195235 -0.892856 -0.357269 0.820424 0.142803
1 -1.600176 0.737261 -0.571140 1.352393 0.201634 1.403633 0.590919 -1.003057
2 -1.046113 2.148139 2.406527 -1.460300 0.881712 0.949246 -0.061758 -0.386265
3 -0.472761 -0.055612 -0.449152 -0.209876 1.076689 0.294275 -0.684433 0.925683
In [8]: expected = concat([x,y,z,w],axis=1)
In [9]: expected.columns=['x_x','y_x','x_y','y_y','x_x','y_x','x_y','y_y']
In [10]: expected
Out[10]:
x_x y_x x_y y_y x_x y_x x_y y_y
0 0.393625 -0.340291 0.035043 -0.195235 -0.892856 -0.357269 0.820424 0.142803
1 -1.600176 0.737261 -0.571140 1.352393 0.201634 1.403633 0.590919 -1.003057
2 -1.046113 2.148139 2.406527 -1.460300 0.881712 0.949246 -0.061758 -0.386265
3 -0.472761 -0.055612 -0.449152 -0.209876 1.076689 0.294275 -0.684433 0.925683
```
| https://api.github.com/repos/pandas-dev/pandas/pulls/4989 | 2013-09-25T23:53:33Z | 2013-09-26T00:21:08Z | 2013-09-26T00:21:08Z | 2014-06-13T14:32:17Z |
API: properly box numeric timedelta ops on Series (GH4984) | diff --git a/doc/source/release.rst b/doc/source/release.rst
index a50a0f9c90b73..8d0f2c6a599e8 100644
--- a/doc/source/release.rst
+++ b/doc/source/release.rst
@@ -93,6 +93,7 @@ Improvements to existing features
is frequency conversion.
- Timedelta64 support ``fillna/ffill/bfill`` with an integer interpreted as seconds,
or a ``timedelta`` (:issue:`3371`)
+ - Box numeric ops on ``timedelta`` Series (:issue:`4984`)
- Datetime64 support ``ffill/bfill``
- Performance improvements with ``__getitem__`` on ``DataFrames`` with
when the key is a column
diff --git a/doc/source/timeseries.rst b/doc/source/timeseries.rst
index bcb738d8a89cb..85ac48c379aad 100644
--- a/doc/source/timeseries.rst
+++ b/doc/source/timeseries.rst
@@ -1204,6 +1204,25 @@ pass a timedelta to get a particular value.
y.fillna(10)
y.fillna(timedelta(days=-1,seconds=5))
+.. _timeseries.timedeltas_reductions:
+
+Time Deltas & Reductions
+------------------------
+
+.. warning::
+
+ A numeric reduction operation for ``timedelta64[ns]`` will return a single-element ``Series`` of
+ dtype ``timedelta64[ns]``.
+
+You can do numeric reduction operations on timedeltas.
+
+.. ipython:: python
+
+ y2 = y.fillna(timedelta(days=-1,seconds=5))
+ y2
+ y2.mean()
+ y2.quantile(.1)
+
.. _timeseries.timedeltas_convert:
Time Deltas & Conversions
diff --git a/doc/source/v0.13.0.txt b/doc/source/v0.13.0.txt
index bda6fa4cdf021..982ae939fc085 100644
--- a/doc/source/v0.13.0.txt
+++ b/doc/source/v0.13.0.txt
@@ -292,6 +292,14 @@ Enhancements
td.fillna(0)
td.fillna(timedelta(days=1,seconds=5))
+ - You can do numeric reduction operations on timedeltas. Note that these will return
+ a single-element Series.
+
+ .. ipython:: python
+
+ td.mean()
+ td.quantile(.1)
+
- ``plot(kind='kde')`` now accepts the optional parameters ``bw_method`` and
``ind``, passed to scipy.stats.gaussian_kde() (for scipy >= 0.11.0) to set
the bandwidth, and to gkde.evaluate() to specify the indicies at which it
diff --git a/pandas/core/nanops.py b/pandas/core/nanops.py
index 247f429d4b331..f9aeb1f726ff7 100644
--- a/pandas/core/nanops.py
+++ b/pandas/core/nanops.py
@@ -5,7 +5,7 @@
import numpy as np
-from pandas.core.common import isnull, notnull, _values_from_object
+from pandas.core.common import isnull, notnull, _values_from_object, is_float
import pandas.core.common as com
import pandas.lib as lib
import pandas.algos as algos
@@ -188,6 +188,10 @@ def _wrap_results(result,dtype):
# as series will do the right thing in py3 (and deal with numpy 1.6.2
# bug in that it results dtype of timedelta64[us]
from pandas import Series
+
+ # coerce float to results
+ if is_float(result):
+ result = int(result)
result = Series([result],dtype='timedelta64[ns]')
else:
result = result.view(dtype)
@@ -224,11 +228,15 @@ def nanmean(values, axis=None, skipna=True):
the_mean[ct_mask] = np.nan
else:
the_mean = the_sum / count if count > 0 else np.nan
- return the_mean
+
+ return _wrap_results(the_mean,dtype)
@disallow('M8')
@bottleneck_switch()
def nanmedian(values, axis=None, skipna=True):
+
+ values, mask, dtype = _get_values(values, skipna)
+
def get_median(x):
mask = notnull(x)
if not skipna and not mask.all():
@@ -257,7 +265,7 @@ def get_median(x):
return ret
# otherwise return a scalar value
- return get_median(values) if notempty else np.nan
+ return _wrap_results(get_median(values),dtype) if notempty else np.nan
@disallow('M8')
diff --git a/pandas/core/series.py b/pandas/core/series.py
index 942bb700a3718..8713ffb58392e 100644
--- a/pandas/core/series.py
+++ b/pandas/core/series.py
@@ -1981,7 +1981,12 @@ def quantile(self, q=0.5):
valid_values = self.dropna().values
if len(valid_values) == 0:
return pa.NA
- return _quantile(valid_values, q * 100)
+ result = _quantile(valid_values, q * 100)
+ if result.dtype == _TD_DTYPE:
+ from pandas.tseries.timedeltas import to_timedelta
+ return to_timedelta(result)
+
+ return result
def ptp(self, axis=None, out=None):
return _values_from_object(self).ptp(axis, out)
diff --git a/pandas/tseries/tests/test_timedeltas.py b/pandas/tseries/tests/test_timedeltas.py
index 551507039112b..64e5728f0f549 100644
--- a/pandas/tseries/tests/test_timedeltas.py
+++ b/pandas/tseries/tests/test_timedeltas.py
@@ -7,7 +7,7 @@
import numpy as np
import pandas as pd
-from pandas import (Index, Series, DataFrame, isnull, notnull,
+from pandas import (Index, Series, DataFrame, Timestamp, isnull, notnull,
bdate_range, date_range, _np_version_under1p7)
import pandas.core.common as com
from pandas.compat import StringIO, lrange, range, zip, u, OrderedDict, long
@@ -123,8 +123,8 @@ def conv(v):
def test_nat_converters(self):
_skip_if_numpy_not_friendly()
- self.assert_(to_timedelta('nat') == tslib.iNaT)
- self.assert_(to_timedelta('nan') == tslib.iNaT)
+ self.assert_(to_timedelta('nat',box=False) == tslib.iNaT)
+ self.assert_(to_timedelta('nan',box=False) == tslib.iNaT)
def test_to_timedelta(self):
_skip_if_numpy_not_friendly()
@@ -133,11 +133,11 @@ def conv(v):
return v.astype('m8[ns]')
d1 = np.timedelta64(1,'D')
- self.assert_(to_timedelta('1 days 06:05:01.00003') == conv(d1+np.timedelta64(6*3600+5*60+1,'s')+np.timedelta64(30,'us')))
- self.assert_(to_timedelta('15.5us') == conv(np.timedelta64(15500,'ns')))
+ self.assert_(to_timedelta('1 days 06:05:01.00003',box=False) == conv(d1+np.timedelta64(6*3600+5*60+1,'s')+np.timedelta64(30,'us')))
+ self.assert_(to_timedelta('15.5us',box=False) == conv(np.timedelta64(15500,'ns')))
# empty string
- result = to_timedelta('')
+ result = to_timedelta('',box=False)
self.assert_(result == tslib.iNaT)
result = to_timedelta(['', ''])
@@ -150,7 +150,7 @@ def conv(v):
# ints
result = np.timedelta64(0,'ns')
- expected = to_timedelta(0)
+ expected = to_timedelta(0,box=False)
self.assert_(result == expected)
# Series
@@ -163,6 +163,35 @@ def conv(v):
expected = to_timedelta([0,10],unit='s')
tm.assert_series_equal(result, expected)
+ # single element conversion
+ v = timedelta(seconds=1)
+ result = to_timedelta(v,box=False)
+ expected = to_timedelta([v])
+
+ v = np.timedelta64(timedelta(seconds=1))
+ result = to_timedelta(v,box=False)
+ expected = to_timedelta([v])
+
+ def test_timedelta_ops(self):
+ _skip_if_numpy_not_friendly()
+
+ # GH4984
+ # make sure ops return timedeltas
+ s = Series([Timestamp('20130101') + timedelta(seconds=i*i) for i in range(10) ])
+ td = s.diff()
+
+ result = td.mean()
+ expected = to_timedelta(timedelta(seconds=9))
+ tm.assert_series_equal(result, expected)
+
+ result = td.quantile(.1)
+ expected = to_timedelta('00:00:02.6')
+ tm.assert_series_equal(result, expected)
+
+ result = td.median()
+ expected = to_timedelta('00:00:08')
+ tm.assert_series_equal(result, expected)
+
if __name__ == '__main__':
nose.runmodule(argv=[__file__, '-vvs', '-x', '--pdb', '--pdb-failure'],
exit=False)
diff --git a/pandas/tseries/timedeltas.py b/pandas/tseries/timedeltas.py
index 4d8633546e017..24e4b1377cc45 100644
--- a/pandas/tseries/timedeltas.py
+++ b/pandas/tseries/timedeltas.py
@@ -58,7 +58,7 @@ def _convert_listlike(arg, box):
elif is_list_like(arg):
return _convert_listlike(arg, box=box)
- return _convert_listlike([ arg ], box=False)[0]
+ return _convert_listlike([ arg ], box=box)
_short_search = re.compile(
"^\s*(?P<neg>-?)\s*(?P<value>\d*\.?\d*)\s*(?P<unit>d|s|ms|us|ns)?\s*$",re.IGNORECASE)
| closes #4984
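A quick sketch mirroring the `test_timedelta_ops` case added above: cumulative-square timestamps whose diffs are 1, 3, 5, ..., 17 seconds, so the mean is 9 seconds. Note the boxing described in this PR (a single-element `timedelta64[ns]` Series) was later changed; in current pandas the reduction returns a `Timedelta` scalar, which is what this sketch assumes.

```python
from datetime import timedelta

import pandas as pd

# Same construction as the test in the diff.
s = pd.Series([pd.Timestamp("20130101") + timedelta(seconds=i * i)
               for i in range(10)])
td = s.diff()   # first element is NaT

# skipna=True by default, so the leading NaT is ignored.
mean = td.mean()
```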
| https://api.github.com/repos/pandas-dev/pandas/pulls/4985 | 2013-09-25T19:10:51Z | 2013-09-25T20:07:40Z | 2013-09-25T20:07:40Z | 2014-06-27T14:49:50Z |
BUG: allow Timestamp comparisons on the left | diff --git a/doc/source/release.rst b/doc/source/release.rst
index 3b5bb04344d25..74e54526cfe9a 100644
--- a/doc/source/release.rst
+++ b/doc/source/release.rst
@@ -487,6 +487,8 @@ Bug Fixes
- Fix repr for DateOffset. No longer show duplicate entries in kwds.
Removed unused offset fields. (:issue:`4638`)
- Fixed wrong index name during read_csv if using usecols. Applies to c parser only. (:issue:`4201`)
+ - ``Timestamp`` objects can now appear in the left hand side of a comparison
+ operation with a ``Series`` or ``DataFrame`` object (:issue:`4982`).
pandas 0.12.0
-------------
diff --git a/pandas/tests/test_frame.py b/pandas/tests/test_frame.py
index 82be82ea57dae..a6f806d5ce097 100644
--- a/pandas/tests/test_frame.py
+++ b/pandas/tests/test_frame.py
@@ -4335,6 +4335,31 @@ def check(df,df2):
df2 = DataFrame({'a': date_range('20010101', periods=len(df)), 'b': date_range('20100101', periods=len(df))})
check(df,df2)
+ def test_timestamp_compare(self):
+ # make sure we can compare Timestamps on the right AND left hand side
+ # GH4982
+ df = DataFrame({'dates1': date_range('20010101', periods=10),
+ 'dates2': date_range('20010102', periods=10),
+ 'intcol': np.random.randint(1000000000, size=10),
+ 'floatcol': np.random.randn(10),
+ 'stringcol': list(tm.rands(10))})
+ df.loc[np.random.rand(len(df)) > 0.5, 'dates2'] = pd.NaT
+ ops = {'gt': 'lt', 'lt': 'gt', 'ge': 'le', 'le': 'ge', 'eq': 'eq',
+ 'ne': 'ne'}
+ for left, right in ops.items():
+ left_f = getattr(operator, left)
+ right_f = getattr(operator, right)
+
+ # no nats
+ expected = left_f(df, Timestamp('20010109'))
+ result = right_f(Timestamp('20010109'), df)
+ tm.assert_frame_equal(result, expected)
+
+ # nats
+ expected = left_f(df, Timestamp('nat'))
+ result = right_f(Timestamp('nat'), df)
+ tm.assert_frame_equal(result, expected)
+
def test_modulo(self):
# GH3590, modulo as ints
diff --git a/pandas/tseries/tests/test_timeseries.py b/pandas/tseries/tests/test_timeseries.py
index 51a010f9d4ead..0e5e3d1922ec4 100644
--- a/pandas/tseries/tests/test_timeseries.py
+++ b/pandas/tseries/tests/test_timeseries.py
@@ -3,6 +3,7 @@
import sys
import os
import unittest
+import operator
import nose
@@ -2010,6 +2011,7 @@ def test_join_self(self):
joined = index.join(index, how=kind)
self.assert_(index is joined)
+
class TestDatetime64(unittest.TestCase):
"""
Also test supoprt for datetime64[ns] in Series / DataFrame
@@ -2507,6 +2509,74 @@ def test_hash_equivalent(self):
stamp = Timestamp(datetime(2011, 1, 1))
self.assertEquals(d[stamp], 5)
+ def test_timestamp_compare_scalars(self):
+ # case where ndim == 0
+ lhs = np.datetime64(datetime(2013, 12, 6))
+ rhs = Timestamp('now')
+ nat = Timestamp('nat')
+
+ ops = {'gt': 'lt', 'lt': 'gt', 'ge': 'le', 'le': 'ge', 'eq': 'eq',
+ 'ne': 'ne'}
+
+ for left, right in ops.items():
+ left_f = getattr(operator, left)
+ right_f = getattr(operator, right)
+
+ if pd._np_version_under1p7:
+ # you have to convert to timestamp for this to work with numpy
+ # scalars
+ expected = left_f(Timestamp(lhs), rhs)
+
+ # otherwise a TypeError is thrown
+ if left not in ('eq', 'ne'):
+ with tm.assertRaises(TypeError):
+ left_f(lhs, rhs)
+ else:
+ expected = left_f(lhs, rhs)
+
+ result = right_f(rhs, lhs)
+ self.assertEqual(result, expected)
+
+ expected = left_f(rhs, nat)
+ result = right_f(nat, rhs)
+ self.assertEqual(result, expected)
+
+ def test_timestamp_compare_series(self):
+ # make sure we can compare Timestamps on the right AND left hand side
+ # GH4982
+ s = Series(date_range('20010101', periods=10), name='dates')
+ s_nat = s.copy(deep=True)
+
+ s[0] = pd.Timestamp('nat')
+ s[3] = pd.Timestamp('nat')
+
+ ops = {'lt': 'gt', 'le': 'ge', 'eq': 'eq', 'ne': 'ne'}
+
+ for left, right in ops.items():
+ left_f = getattr(operator, left)
+ right_f = getattr(operator, right)
+
+ # no nats
+ expected = left_f(s, Timestamp('20010109'))
+ result = right_f(Timestamp('20010109'), s)
+ tm.assert_series_equal(result, expected)
+
+ # nats
+ expected = left_f(s, Timestamp('nat'))
+ result = right_f(Timestamp('nat'), s)
+ tm.assert_series_equal(result, expected)
+
+ # compare to timestamp with series containing nats
+ expected = left_f(s_nat, Timestamp('20010109'))
+ result = right_f(Timestamp('20010109'), s_nat)
+ tm.assert_series_equal(result, expected)
+
+ # compare to nat with series containing nats
+ expected = left_f(s_nat, Timestamp('nat'))
+ result = right_f(Timestamp('nat'), s_nat)
+ tm.assert_series_equal(result, expected)
+
+
class TestSlicing(unittest.TestCase):
def test_slice_year(self):
@@ -2775,6 +2845,7 @@ def test_frame_apply_dont_convert_datetime64(self):
self.assertTrue(df.x1.dtype == 'M8[ns]')
+
if __name__ == '__main__':
nose.runmodule(argv=[__file__, '-vvs', '-x', '--pdb', '--pdb-failure'],
exit=False)
diff --git a/pandas/tslib.pyx b/pandas/tslib.pyx
index 075102dd63100..99b09446be232 100644
--- a/pandas/tslib.pyx
+++ b/pandas/tslib.pyx
@@ -9,12 +9,15 @@ from cpython cimport (
PyTypeObject,
PyFloat_Check,
PyObject_RichCompareBool,
- PyString_Check
+ PyObject_RichCompare,
+ PyString_Check,
+ Py_GT, Py_GE, Py_EQ, Py_NE, Py_LT, Py_LE
)
# Cython < 0.17 doesn't have this in cpython
cdef extern from "Python.h":
cdef PyTypeObject *Py_TYPE(object)
+ int PySlice_Check(object)
from libc.stdlib cimport free
@@ -30,9 +33,6 @@ from datetime import timedelta, datetime
from datetime import time as datetime_time
from pandas.compat import parse_date
-cdef extern from "Python.h":
- int PySlice_Check(object)
-
# initialize numpy
import_array()
#import_ufunc()
@@ -350,6 +350,11 @@ NaT = NaTType()
iNaT = util.get_nat()
+
+cdef inline bint _cmp_nat_dt(_NaT lhs, _Timestamp rhs, int op) except -1:
+ return _nat_scalar_rules[op]
+
+
cdef _tz_format(object obj, object zone):
try:
return obj.strftime(' %%Z, tz=%s' % zone)
@@ -437,9 +442,35 @@ def apply_offset(ndarray[object] values, object offset):
result = np.empty(n, dtype='M8[ns]')
new_values = result.view('i8')
- pass
+cdef inline bint _cmp_scalar(int64_t lhs, int64_t rhs, int op) except -1:
+ if op == Py_EQ:
+ return lhs == rhs
+ elif op == Py_NE:
+ return lhs != rhs
+ elif op == Py_LT:
+ return lhs < rhs
+ elif op == Py_LE:
+ return lhs <= rhs
+ elif op == Py_GT:
+ return lhs > rhs
+ elif op == Py_GE:
+ return lhs >= rhs
+
+
+cdef int _reverse_ops[6]
+
+_reverse_ops[Py_LT] = Py_GT
+_reverse_ops[Py_LE] = Py_GE
+_reverse_ops[Py_EQ] = Py_EQ
+_reverse_ops[Py_NE] = Py_NE
+_reverse_ops[Py_GT] = Py_LT
+_reverse_ops[Py_GE] = Py_LE
+
+
+cdef str _NDIM_STRING = "ndim"
+
# This is PITA. Because we inherit from datetime, which has very specific
# construction requirements, we need to do object instantiation in python
# (see Timestamp class above). This will serve as a C extension type that
@@ -449,18 +480,21 @@ cdef class _Timestamp(datetime):
int64_t value, nanosecond
object offset # frequency reference
- def __hash__(self):
+ def __hash__(_Timestamp self):
if self.nanosecond:
return hash(self.value)
- else:
- return datetime.__hash__(self)
+ return datetime.__hash__(self)
def __richcmp__(_Timestamp self, object other, int op):
- cdef _Timestamp ots
+ cdef:
+ _Timestamp ots
+ int ndim
if isinstance(other, _Timestamp):
+ if isinstance(other, _NaT):
+ return _cmp_nat_dt(other, self, _reverse_ops[op])
ots = other
- elif type(other) is datetime:
+ elif isinstance(other, datetime):
if self.nanosecond == 0:
val = self.to_datetime()
return PyObject_RichCompareBool(val, other, op)
@@ -470,70 +504,60 @@ cdef class _Timestamp(datetime):
except ValueError:
return self._compare_outside_nanorange(other, op)
else:
- if op == 2:
- return False
- elif op == 3:
- return True
+ ndim = getattr(other, _NDIM_STRING, -1)
+
+ if ndim != -1:
+ if ndim == 0:
+ if isinstance(other, np.datetime64):
+ other = Timestamp(other)
+ else:
+ raise TypeError('Cannot compare type %r with type %r' %
+ (type(self).__name__,
+ type(other).__name__))
+ return PyObject_RichCompare(other, self, _reverse_ops[op])
else:
- raise TypeError('Cannot compare Timestamp with '
- '{0!r}'.format(other.__class__.__name__))
+ if op == Py_EQ:
+ return False
+ elif op == Py_NE:
+ return True
+ raise TypeError('Cannot compare type %r with type %r' %
+ (type(self).__name__, type(other).__name__))
self._assert_tzawareness_compat(other)
+ return _cmp_scalar(self.value, ots.value, op)
- if op == 2: # ==
- return self.value == ots.value
- elif op == 3: # !=
- return self.value != ots.value
- elif op == 0: # <
- return self.value < ots.value
- elif op == 1: # <=
- return self.value <= ots.value
- elif op == 4: # >
- return self.value > ots.value
- elif op == 5: # >=
- return self.value >= ots.value
-
- cdef _compare_outside_nanorange(self, object other, int op):
- dtval = self.to_datetime()
+ cdef bint _compare_outside_nanorange(_Timestamp self, datetime other,
+ int op) except -1:
+ cdef datetime dtval = self.to_datetime()
self._assert_tzawareness_compat(other)
if self.nanosecond == 0:
- if op == 2: # ==
- return dtval == other
- elif op == 3: # !=
- return dtval != other
- elif op == 0: # <
- return dtval < other
- elif op == 1: # <=
- return dtval <= other
- elif op == 4: # >
- return dtval > other
- elif op == 5: # >=
- return dtval >= other
+ return PyObject_RichCompareBool(dtval, other, op)
else:
- if op == 2: # ==
+ if op == Py_EQ:
return False
- elif op == 3: # !=
+ elif op == Py_NE:
return True
- elif op == 0: # <
+ elif op == Py_LT:
return dtval < other
- elif op == 1: # <=
+ elif op == Py_LE:
return dtval < other
- elif op == 4: # >
+ elif op == Py_GT:
return dtval >= other
- elif op == 5: # >=
+ elif op == Py_GE:
return dtval >= other
- cdef _assert_tzawareness_compat(self, object other):
+ cdef int _assert_tzawareness_compat(_Timestamp self,
+ object other) except -1:
if self.tzinfo is None:
if other.tzinfo is not None:
- raise Exception('Cannot compare tz-naive and '
- 'tz-aware timestamps')
+ raise ValueError('Cannot compare tz-naive and tz-aware '
+ 'timestamps')
elif other.tzinfo is None:
- raise Exception('Cannot compare tz-naive and tz-aware timestamps')
+ raise ValueError('Cannot compare tz-naive and tz-aware timestamps')
- cpdef to_datetime(self):
+ cpdef datetime to_datetime(_Timestamp self):
cdef:
pandas_datetimestruct dts
_TSObject ts
@@ -580,6 +604,16 @@ cdef inline bint is_timestamp(object o):
return Py_TYPE(o) == ts_type # isinstance(o, Timestamp)
+cdef bint _nat_scalar_rules[6]
+
+_nat_scalar_rules[Py_EQ] = False
+_nat_scalar_rules[Py_NE] = True
+_nat_scalar_rules[Py_LT] = False
+_nat_scalar_rules[Py_LE] = False
+_nat_scalar_rules[Py_GT] = False
+_nat_scalar_rules[Py_GE] = False
+
+
cdef class _NaT(_Timestamp):
def __hash__(_NaT self):
@@ -587,23 +621,18 @@ cdef class _NaT(_Timestamp):
return hash(self.value)
def __richcmp__(_NaT self, object other, int op):
- # if not isinstance(other, (_NaT, _Timestamp)):
- # raise TypeError('Cannot compare %s with NaT' % type(other))
-
- if op == 2: # ==
- return False
- elif op == 3: # !=
- return True
- elif op == 0: # <
- return False
- elif op == 1: # <=
- return False
- elif op == 4: # >
- return False
- elif op == 5: # >=
- return False
+ cdef int ndim = getattr(other, 'ndim', -1)
+ if ndim == -1:
+ return _nat_scalar_rules[op]
+ if ndim == 0:
+ if isinstance(other, np.datetime64):
+ other = Timestamp(other)
+ else:
+ raise TypeError('Cannot compare type %r with type %r' %
+ (type(self).__name__, type(other).__name__))
+ return PyObject_RichCompare(other, self, _reverse_ops[op])
def _delta_to_nanoseconds(delta):
diff --git a/vb_suite/binary_ops.py b/vb_suite/binary_ops.py
index 3f076f9f922a3..8293f650425e3 100644
--- a/vb_suite/binary_ops.py
+++ b/vb_suite/binary_ops.py
@@ -102,3 +102,15 @@
frame_multi_and_no_ne = \
Benchmark("df[(df>0) & (df2>0)]", setup, name='frame_multi_and_no_ne',cleanup="expr.set_use_numexpr(True)",
start_date=datetime(2013, 2, 26))
+
+setup = common_setup + """
+N = 1000000
+halfway = N // 2 - 1
+s = Series(date_range('20010101', periods=N, freq='D'))
+ts = s[halfway]
+"""
+
+timestamp_series_compare = Benchmark("ts >= s", setup,
+ start_date=datetime(2013, 9, 27))
+series_timestamp_compare = Benchmark("s <= ts", setup,
+ start_date=datetime(2012, 2, 21))
diff --git a/vb_suite/index_object.py b/vb_suite/index_object.py
index cf87a9af500fb..8b348ddc6e6cc 100644
--- a/vb_suite/index_object.py
+++ b/vb_suite/index_object.py
@@ -22,6 +22,16 @@
index_datetime_intersection = Benchmark("rng.intersection(rng2)", setup)
index_datetime_union = Benchmark("rng.union(rng2)", setup)
+setup = common_setup + """
+rng = date_range('1/1/2000', periods=10000, freq='T')
+rng2 = rng[:-1]
+"""
+
+datetime_index_intersection = Benchmark("rng.intersection(rng2)", setup,
+ start_date=datetime(2013, 9, 27))
+datetime_index_union = Benchmark("rng.union(rng2)", setup,
+ start_date=datetime(2013, 9, 27))
+
# integers
setup = common_setup + """
N = 1000000
| closes #4982
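
For reference, a minimal sketch of the new method (the `name` override shown here is an assumption based on the final `to_frame(name=None)` signature, not something spelled out in this diff):

```python
import pandas as pd

s = pd.Series([1, 2, 3], name="x")

# Convert the Series to a single-column DataFrame; the column
# takes the Series name by default.
df = s.to_frame()
assert list(df.columns) == ["x"]

# An explicit name can be passed to override the Series name.
df2 = s.to_frame(name="y")
assert list(df2.columns) == ["y"]
```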
| https://api.github.com/repos/pandas-dev/pandas/pulls/4983 | 2013-09-25T16:53:47Z | 2013-09-27T15:53:27Z | 2013-09-27T15:53:26Z | 2014-08-18T17:03:33Z |
CI/ENH: test against an older version of statsmodels | diff --git a/ci/install.sh b/ci/install.sh
index 86226c530541c..357d962d9610d 100755
--- a/ci/install.sh
+++ b/ci/install.sh
@@ -62,19 +62,6 @@ if [ x"$FULL_DEPS" == x"true" ]; then
echo "Installing FULL_DEPS"
# for pytables gets the lib as well
time sudo apt-get $APT_ARGS install libhdf5-serial-dev
-
- # fool statsmodels into thinking pandas was already installed
- # so it won't refuse to install itself.
-
- SITE_PKG_DIR=$VIRTUAL_ENV/lib/python$TRAVIS_PYTHON_VERSION/site-packages
- echo "Using SITE_PKG_DIR: $SITE_PKG_DIR"
-
- mkdir $SITE_PKG_DIR/pandas
- touch $SITE_PKG_DIR/pandas/__init__.py
- echo "version='0.10.0-phony'" > $SITE_PKG_DIR/pandas/version.py
- time pip install $PIP_ARGS git+git://github.com/statsmodels/statsmodels@c9062e43b8a5f7385537ca95#egg=statsmodels
-
- rm -Rf $SITE_PKG_DIR/pandas # scrub phoney pandas
fi
# build pandas
diff --git a/ci/requirements-2.7.txt b/ci/requirements-2.7.txt
index 2e903102de7b1..686dc87f7d009 100644
--- a/ci/requirements-2.7.txt
+++ b/ci/requirements-2.7.txt
@@ -17,3 +17,4 @@ scikits.timeseries==0.91.3
MySQL-python==1.2.4
scipy==0.10.0
beautifulsoup4==4.2.1
+statsmodels==0.5.0
diff --git a/ci/requirements-2.7_LOCALE.txt b/ci/requirements-2.7_LOCALE.txt
index 056b63bbb8591..e4cdf0733a7d3 100644
--- a/ci/requirements-2.7_LOCALE.txt
+++ b/ci/requirements-2.7_LOCALE.txt
@@ -15,3 +15,4 @@ html5lib==1.0b2
lxml==3.2.1
scipy==0.10.0
beautifulsoup4==4.2.1
+statsmodels==0.5.0
diff --git a/ci/requirements-3.2.txt b/ci/requirements-3.2.txt
index b689047019ed7..b44a708c4fffc 100644
--- a/ci/requirements-3.2.txt
+++ b/ci/requirements-3.2.txt
@@ -12,3 +12,4 @@ patsy==0.1.0
lxml==3.2.1
scipy==0.12.0
beautifulsoup4==4.2.1
+statsmodels==0.4.3
diff --git a/ci/requirements-3.3.txt b/ci/requirements-3.3.txt
index 326098be5f7f4..318030e733158 100644
--- a/ci/requirements-3.3.txt
+++ b/ci/requirements-3.3.txt
@@ -13,3 +13,4 @@ patsy==0.1.0
lxml==3.2.1
scipy==0.12.0
beautifulsoup4==4.2.1
+statsmodels==0.4.3
diff --git a/ci/speedpack/build.sh b/ci/speedpack/build.sh
index d19c6da8a86ed..689f9aa5db8ea 100755
--- a/ci/speedpack/build.sh
+++ b/ci/speedpack/build.sh
@@ -26,6 +26,42 @@ apt-get build-dep python-lxml -y
export PYTHONIOENCODING='utf-8'
export VIRTUALENV_DISTRIBUTE=0
+
+function create_fake_pandas() {
+ local site_pkg_dir="$1"
+ rm -rf $site_pkg_dir/pandas
+ mkdir $site_pkg_dir/pandas
+ touch $site_pkg_dir/pandas/__init__.py
+ echo "version = '0.10.0-phony'" > $site_pkg_dir/pandas/version.py
+}
+
+
+function get_site_pkgs_dir() {
+ python$1 -c 'import distutils; print(distutils.sysconfig.get_python_lib())'
+}
+
+
+function create_wheel() {
+ local pip_args="$1"
+ local wheelhouse="$2"
+ local n="$3"
+ local pyver="$4"
+
+ local site_pkgs_dir="$(get_site_pkgs_dir $pyver)"
+
+
+ if [[ "$n" == *statsmodels* ]]; then
+ create_fake_pandas $site_pkgs_dir && \
+ pip wheel $pip_args --wheel-dir=$wheelhouse $n && \
+ pip install $pip_args --no-index $n && \
+ rm -Rf $site_pkgs_dir
+ else
+ pip wheel $pip_args --wheel-dir=$wheelhouse $n
+ pip install $pip_args --no-index $n
+ fi
+}
+
+
function generate_wheels() {
# get the requirements file
local reqfile="$1"
@@ -62,8 +98,7 @@ function generate_wheels() {
# install and build the wheels
cat $reqfile | while read N; do
- pip wheel $PIP_ARGS --wheel-dir=$WHEELHOUSE $N
- pip install $PIP_ARGS --no-index $N
+ create_wheel "$PIP_ARGS" "$WHEELHOUSE" "$N" "$PY_VER"
done
}
diff --git a/doc/source/release.rst b/doc/source/release.rst
index eec2e91f0a755..a50a0f9c90b73 100644
--- a/doc/source/release.rst
+++ b/doc/source/release.rst
@@ -128,6 +128,8 @@ Improvements to existing features
- ``read_stata`` now accepts Stata 13 format (:issue:`4291`)
- ``ExcelWriter`` and ``ExcelFile`` can be used as contextmanagers.
(:issue:`3441`, :issue:`4933`)
+ - ``pandas`` is now tested with two different versions of ``statsmodels``
+ (0.4.3 and 0.5.0) (:issue:`4981`).
API Changes
~~~~~~~~~~~
diff --git a/pandas/stats/tests/test_ols.py b/pandas/stats/tests/test_ols.py
index a2271731b6de9..ad9184e698316 100644
--- a/pandas/stats/tests/test_ols.py
+++ b/pandas/stats/tests/test_ols.py
@@ -6,6 +6,7 @@
from __future__ import division
+from distutils.version import LooseVersion
from datetime import datetime
from pandas import compat
import unittest
@@ -98,11 +99,10 @@ def testOLSWithDatasets_scotland(self):
def testWLS(self):
# WLS centered SS changed (fixed) in 0.5.0
- v = sm.version.version.split('.')
- if int(v[0]) >= 0 and int(v[1]) <= 5:
- if int(v[2]) < 1:
- raise nose.SkipTest
- print( "Make sure you're using statsmodels 0.5.0.dev-cec4f26 or later.")
+ sm_version = sm.version.version
+ if sm_version < LooseVersion('0.5.0'):
+ raise nose.SkipTest("WLS centered SS not fixed in statsmodels"
+ " version {0}".format(sm_version))
X = DataFrame(np.random.randn(30, 4), columns=['A', 'B', 'C', 'D'])
Y = Series(np.random.randn(30))
| https://api.github.com/repos/pandas-dev/pandas/pulls/4981 | 2013-09-25T14:56:18Z | 2013-09-25T17:12:39Z | 2013-09-25T17:12:39Z | 2014-06-16T11:01:48Z | |
FIX: JSON support for non C locales | diff --git a/doc/source/release.rst b/doc/source/release.rst
index 97150cbeb53a2..8584fe564f8b0 100644
--- a/doc/source/release.rst
+++ b/doc/source/release.rst
@@ -358,6 +358,8 @@ Bug Fixes
dtypes, surfaced in (:issue:`4377`)
- Fixed bug with duplicate columns and type conversion in ``read_json`` when
``orient='split'`` (:issue:`4377`)
+ - Fixed JSON bug where locales with decimal separators other than '.' threw
+ exceptions when encoding / decoding certain values. (:issue:`4918`)
- Fix ``.iat`` indexing with a ``PeriodIndex`` (:issue:`4390`)
- Fixed an issue where ``PeriodIndex`` joining with self was returning a new
instance rather than the same instance (:issue:`4379`); also adds a test
diff --git a/pandas/io/tests/test_json/test_ujson.py b/pandas/io/tests/test_json/test_ujson.py
index 4d6218d3dbc35..38a30b8baf459 100644
--- a/pandas/io/tests/test_json/test_ujson.py
+++ b/pandas/io/tests/test_json/test_ujson.py
@@ -83,6 +83,19 @@ def test_doubleLongDecimalIssue(self):
decoded = ujson.decode(encoded)
self.assertEqual(sut, decoded)
+ def test_encodeNonCLocale(self):
+ import locale
+ savedlocale = locale.getlocale(locale.LC_NUMERIC)
+ try:
+ locale.setlocale(locale.LC_NUMERIC, 'it_IT.UTF-8')
+ except:
+ try:
+ locale.setlocale(locale.LC_NUMERIC, 'Italian_Italy')
+ except:
+ raise nose.SkipTest('Could not set locale for testing')
+ self.assertEqual(ujson.loads(ujson.dumps(4.78e60)), 4.78e60)
+ self.assertEqual(ujson.loads('4.78', precise_float=True), 4.78)
+ locale.setlocale(locale.LC_NUMERIC, savedlocale)
def test_encodeDecodeLongDecimal(self):
sut = {u('a'): -528656961.4399388}
diff --git a/pandas/src/ujson/lib/ultrajsondec.c b/pandas/src/ujson/lib/ultrajsondec.c
index c5cf341ad3092..85a8387547641 100644
--- a/pandas/src/ujson/lib/ultrajsondec.c
+++ b/pandas/src/ujson/lib/ultrajsondec.c
@@ -43,6 +43,7 @@ Numeric decoder derived from from TCL library
#include <wchar.h>
#include <stdlib.h>
#include <errno.h>
+#include <locale.h>
#ifndef TRUE
#define TRUE 1
@@ -824,7 +825,7 @@ FASTCALL_ATTR JSOBJ FASTCALL_MSVC decode_object( struct DecoderState *ds)
default:
ds->dec->releaseObject(ds->prv, newObj, ds->dec);
- return SetError(ds, -1, "Unexpected character in found when decoding object value");
+ return SetError(ds, -1, "Unexpected character found when decoding object value");
}
}
}
@@ -874,6 +875,7 @@ JSOBJ JSON_DecodeObject(JSONObjectDecoder *dec, const char *buffer, size_t cbBuf
{
/*
FIXME: Base the size of escBuffer of that of cbBuffer so that the unicode escaping doesn't run into the wall each time */
+ char *locale;
struct DecoderState ds;
wchar_t escBuffer[(JSON_MAX_STACK_BUFFER_SIZE / sizeof(wchar_t))];
JSOBJ ret;
@@ -892,7 +894,15 @@ JSOBJ JSON_DecodeObject(JSONObjectDecoder *dec, const char *buffer, size_t cbBuf
ds.dec = dec;
+ locale = strdup(setlocale(LC_NUMERIC, NULL));
+ if (!locale)
+ {
+ return SetError(&ds, -1, "Could not reserve memory block");
+ }
+ setlocale(LC_NUMERIC, "C");
ret = decode_any (&ds);
+ setlocale(LC_NUMERIC, locale);
+ free(locale);
if (ds.escHeap)
{
diff --git a/pandas/src/ujson/lib/ultrajsonenc.c b/pandas/src/ujson/lib/ultrajsonenc.c
index 15d92d42f6753..17048bd86adc2 100644
--- a/pandas/src/ujson/lib/ultrajsonenc.c
+++ b/pandas/src/ujson/lib/ultrajsonenc.c
@@ -41,6 +41,7 @@ Numeric decoder derived from from TCL library
#include <string.h>
#include <stdlib.h>
#include <math.h>
+#include <locale.h>
#include <float.h>
@@ -877,6 +878,7 @@ void encode(JSOBJ obj, JSONObjectEncoder *enc, const char *name, size_t cbName)
char *JSON_EncodeObject(JSOBJ obj, JSONObjectEncoder *enc, char *_buffer, size_t _cbBuffer)
{
+ char *locale;
enc->malloc = enc->malloc ? enc->malloc : malloc;
enc->free = enc->free ? enc->free : free;
enc->realloc = enc->realloc ? enc->realloc : realloc;
@@ -915,7 +917,16 @@ char *JSON_EncodeObject(JSOBJ obj, JSONObjectEncoder *enc, char *_buffer, size_t
enc->end = enc->start + _cbBuffer;
enc->offset = enc->start;
+ locale = strdup(setlocale(LC_NUMERIC, NULL));
+ if (!locale)
+ {
+ SetError(NULL, enc, "Could not reserve memory block");
+ return NULL;
+ }
+ setlocale(LC_NUMERIC, "C");
encode (obj, enc, NULL, 0);
+ setlocale(LC_NUMERIC, locale);
+ free(locale);
Buffer_Reserve(enc, 1);
if (enc->errorMsg)
| As reported in #4918.
Tested on Linux and Windows.
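
The C change follows a save/override/restore pattern around `LC_NUMERIC`; a rough Python equivalent of that pattern (illustrative only, not pandas code):

```python
import locale

def run_with_c_numeric_locale(func, *args, **kwargs):
    # Save the current LC_NUMERIC setting, force the "C" locale so the
    # decimal separator is '.', then restore the original on exit --
    # mirroring the strdup / setlocale / free sequence in the C fix.
    saved = locale.setlocale(locale.LC_NUMERIC)
    locale.setlocale(locale.LC_NUMERIC, "C")
    try:
        return func(*args, **kwargs)
    finally:
        locale.setlocale(locale.LC_NUMERIC, saved)

# Under the "C" locale, float parsing uses '.' regardless of the
# ambient locale, which is what the encoder/decoder now guarantees.
assert run_with_c_numeric_locale(float, "4.78") == 4.78
```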
| https://api.github.com/repos/pandas-dev/pandas/pulls/4976 | 2013-09-25T02:08:52Z | 2013-09-25T12:55:17Z | 2013-09-25T12:55:17Z | 2014-07-16T08:31:04Z |
TST/CLN: pare down the eval test suite | diff --git a/pandas/computation/tests/test_eval.py b/pandas/computation/tests/test_eval.py
index d5bcf85d4de03..7dc2ebc3d54e1 100755
--- a/pandas/computation/tests/test_eval.py
+++ b/pandas/computation/tests/test_eval.py
@@ -129,8 +129,8 @@ def setup_data(self):
Series([1, 2, np.nan, np.nan, 5]), nan_df1)
self.pandas_rhses = (DataFrame(randn(10, 5)), Series(randn(5)),
Series([1, 2, np.nan, np.nan, 5]), nan_df2)
- self.scalar_lhses = randn(), np.float64(randn()), np.nan
- self.scalar_rhses = randn(), np.float64(randn()), np.nan
+ self.scalar_lhses = randn(),
+ self.scalar_rhses = randn(),
self.lhses = self.pandas_lhses + self.scalar_lhses
self.rhses = self.pandas_rhses + self.scalar_rhses
@@ -180,7 +180,6 @@ def test_floor_division(self):
for lhs, rhs in product(self.lhses, self.rhses):
self.check_floor_division(lhs, '//', rhs)
- @slow
def test_pow(self):
for lhs, rhs in product(self.lhses, self.rhses):
self.check_pow(lhs, '**', rhs)
@@ -198,13 +197,13 @@ def test_compound_invert_op(self):
@slow
def test_chained_cmp_op(self):
mids = self.lhses
- cmp_ops = tuple(set(self.cmp_ops) - set(['==', '!=', '<=', '>=']))
+ cmp_ops = '<', '>'# tuple(set(self.cmp_ops) - set(['==', '!=', '<=', '>=']))
for lhs, cmp1, mid, cmp2, rhs in product(self.lhses, cmp_ops,
mids, cmp_ops, self.rhses):
self.check_chained_cmp_op(lhs, cmp1, mid, cmp2, rhs)
def check_complex_cmp_op(self, lhs, cmp1, rhs, binop, cmp2):
- skip_these = 'in', 'not in'
+ skip_these = _scalar_skip
ex = '(lhs {cmp1} rhs) {binop} (lhs {cmp2} rhs)'.format(cmp1=cmp1,
binop=binop,
cmp2=cmp2)
@@ -264,7 +263,7 @@ def check_complex_cmp_op(self, lhs, cmp1, rhs, binop, cmp2):
@skip_incompatible_operand
def check_chained_cmp_op(self, lhs, cmp1, mid, cmp2, rhs):
- skip_these = 'in', 'not in'
+ skip_these = _scalar_skip
def check_operands(left, right, cmp_op):
if (np.isscalar(left) and np.isnan(left) and not np.isscalar(right)
@@ -318,11 +317,7 @@ def check_operands(left, right, cmp_op):
ex1 = 'lhs {0} mid {1} rhs'.format(cmp1, cmp2)
ex2 = 'lhs {0} mid and mid {1} rhs'.format(cmp1, cmp2)
ex3 = '(lhs {0} mid) & (mid {1} rhs)'.format(cmp1, cmp2)
- try:
- expected = _eval_single_bin(lhs_new, '&', rhs_new, self.engine)
- except TypeError:
- import ipdb; ipdb.set_trace()
- raise
+ expected = _eval_single_bin(lhs_new, '&', rhs_new, self.engine)
for ex in (ex1, ex2, ex3):
result = pd.eval(ex, engine=self.engine,
@@ -729,9 +724,8 @@ def setup_ops(self):
def check_chained_cmp_op(self, lhs, cmp1, mid, cmp2, rhs):
ex1 = 'lhs {0} mid {1} rhs'.format(cmp1, cmp2)
- self.assertRaises(NotImplementedError, pd.eval, ex1,
- local_dict={'lhs': lhs, 'mid': mid, 'rhs': rhs},
- engine=self.engine, parser=self.parser)
+ with tm.assertRaises(NotImplementedError):
+ pd.eval(ex1, engine=self.engine, parser=self.parser)
class TestEvalPythonPython(TestEvalNumexprPython):
@@ -783,7 +777,8 @@ def check_chained_cmp_op(self, lhs, cmp1, mid, cmp2, rhs):
class TestAlignment(object):
- index_types = 'i', 'f', 's', 'u', 'dt', # 'p'
+ index_types = 'i', 'u', 'dt'
+ lhs_index_types = index_types + ('f', 's') # 'p'
def check_align_nested_unary_op(self, engine, parser):
skip_if_no_ne(engine)
@@ -798,23 +793,23 @@ def test_align_nested_unary_op(self):
def check_basic_frame_alignment(self, engine, parser):
skip_if_no_ne(engine)
- args = product(self.index_types, repeat=2)
- for r_idx_type, c_idx_type in args:
- df = mkdf(10, 10, data_gen_f=f, r_idx_type=r_idx_type,
+ args = product(self.lhs_index_types, self.index_types,
+ self.index_types)
+ for lr_idx_type, rr_idx_type, c_idx_type in args:
+ df = mkdf(10, 10, data_gen_f=f, r_idx_type=lr_idx_type,
c_idx_type=c_idx_type)
- df2 = mkdf(20, 10, data_gen_f=f, r_idx_type=r_idx_type,
+ df2 = mkdf(20, 10, data_gen_f=f, r_idx_type=rr_idx_type,
c_idx_type=c_idx_type)
res = pd.eval('df + df2', engine=engine, parser=parser)
assert_frame_equal(res, df + df2)
- @slow
def test_basic_frame_alignment(self):
for engine, parser in ENGINES_PARSERS:
yield self.check_basic_frame_alignment, engine, parser
def check_frame_comparison(self, engine, parser):
skip_if_no_ne(engine)
- args = product(self.index_types, repeat=2)
+ args = product(self.lhs_index_types, repeat=2)
for r_idx_type, c_idx_type in args:
df = mkdf(10, 10, data_gen_f=f, r_idx_type=r_idx_type,
c_idx_type=c_idx_type)
@@ -826,18 +821,19 @@ def check_frame_comparison(self, engine, parser):
res = pd.eval('df < df3', engine=engine, parser=parser)
assert_frame_equal(res, df < df3)
- @slow
def test_frame_comparison(self):
for engine, parser in ENGINES_PARSERS:
yield self.check_frame_comparison, engine, parser
def check_medium_complex_frame_alignment(self, engine, parser):
skip_if_no_ne(engine)
- args = product(self.index_types, repeat=4)
+ args = product(self.lhs_index_types, self.index_types,
+ self.index_types, self.index_types)
+
for r1, c1, r2, c2 in args:
- df = mkdf(5, 2, data_gen_f=f, r_idx_type=r1, c_idx_type=c1)
- df2 = mkdf(10, 2, data_gen_f=f, r_idx_type=r2, c_idx_type=c2)
- df3 = mkdf(15, 2, data_gen_f=f, r_idx_type=r2, c_idx_type=c2)
+ df = mkdf(3, 2, data_gen_f=f, r_idx_type=r1, c_idx_type=c1)
+ df2 = mkdf(4, 2, data_gen_f=f, r_idx_type=r2, c_idx_type=c2)
+ df3 = mkdf(5, 2, data_gen_f=f, r_idx_type=r2, c_idx_type=c2)
res = pd.eval('df + df2 + df3', engine=engine, parser=parser)
assert_frame_equal(res, df + df2 + df3)
@@ -864,12 +860,11 @@ def testit(r_idx_type, c_idx_type, index_name):
expected = df + s
assert_frame_equal(res, expected)
- args = product(self.index_types, self.index_types, ('index',
- 'columns'))
+ args = product(self.lhs_index_types, self.index_types,
+ ('index', 'columns'))
for r_idx_type, c_idx_type, index_name in args:
testit(r_idx_type, c_idx_type, index_name)
- @slow
def test_basic_frame_series_alignment(self):
for engine, parser in ENGINES_PARSERS:
yield self.check_basic_frame_series_alignment, engine, parser
@@ -877,7 +872,7 @@ def test_basic_frame_series_alignment(self):
def check_basic_series_frame_alignment(self, engine, parser):
skip_if_no_ne(engine)
def testit(r_idx_type, c_idx_type, index_name):
- df = mkdf(10, 10, data_gen_f=f, r_idx_type=r_idx_type,
+ df = mkdf(10, 7, data_gen_f=f, r_idx_type=r_idx_type,
c_idx_type=c_idx_type)
index = getattr(df, index_name)
s = Series(np.random.randn(5), index[:5])
@@ -892,19 +887,18 @@ def testit(r_idx_type, c_idx_type, index_name):
expected = s + df
assert_frame_equal(res, expected)
- args = product(self.index_types, self.index_types, ('index',
- 'columns'))
+ args = product(self.lhs_index_types, self.index_types,
+ ('index', 'columns'))
for r_idx_type, c_idx_type, index_name in args:
testit(r_idx_type, c_idx_type, index_name)
- @slow
def test_basic_series_frame_alignment(self):
for engine, parser in ENGINES_PARSERS:
yield self.check_basic_series_frame_alignment, engine, parser
def check_series_frame_commutativity(self, engine, parser):
skip_if_no_ne(engine)
- args = product(self.index_types, self.index_types, ('+', '*'),
+ args = product(self.lhs_index_types, self.index_types, ('+', '*'),
('index', 'columns'))
for r_idx_type, c_idx_type, op, index_name in args:
df = mkdf(10, 10, data_gen_f=f, r_idx_type=r_idx_type,
@@ -921,20 +915,28 @@ def check_series_frame_commutativity(self, engine, parser):
if engine == 'numexpr':
assert_frame_equal(a, b)
- @slow
def test_series_frame_commutativity(self):
for engine, parser in ENGINES_PARSERS:
yield self.check_series_frame_commutativity, engine, parser
def check_complex_series_frame_alignment(self, engine, parser):
skip_if_no_ne(engine)
- index_types = [self.index_types] * 4
- args = product(('index', 'columns'), ('df', 'df2'), *index_types)
- for index_name, obj, r1, r2, c1, c2 in args:
- df = mkdf(10, 5, data_gen_f=f, r_idx_type=r1, c_idx_type=c1)
- df2 = mkdf(20, 5, data_gen_f=f, r_idx_type=r2, c_idx_type=c2)
- index = getattr(locals()[obj], index_name)
- s = Series(np.random.randn(5), index[:5])
+
+ import random
+ args = product(self.lhs_index_types, self.index_types,
+ self.index_types, self.index_types)
+ n = 3
+ m1 = 5
+ m2 = 2 * m1
+
+ for r1, r2, c1, c2 in args:
+ index_name = random.choice(['index', 'columns'])
+ obj_name = random.choice(['df', 'df2'])
+
+ df = mkdf(m1, n, data_gen_f=f, r_idx_type=r1, c_idx_type=c1)
+ df2 = mkdf(m2, n, data_gen_f=f, r_idx_type=r2, c_idx_type=c2)
+ index = getattr(locals().get(obj_name), index_name)
+ s = Series(np.random.randn(n), index[:n])
if r2 == 'dt' or c2 == 'dt':
if engine == 'numexpr':
@@ -1004,7 +1006,6 @@ def check_performance_warning_for_poor_alignment(self, engine, parser):
"".format(1, 's', np.log10(s.size - df.shape[1])))
assert_equal(msg, expected)
-
def test_performance_warning_for_poor_alignment(self):
for engine, parser in ENGINES_PARSERS:
yield self.check_performance_warning_for_poor_alignment, engine, parser
| Current running time of the `eval` test suite (not including `query`):
```
$ nosetests pandas/computation/tests/test_eval.py
......../home/phillip/Documents/code/py/pandas/pandas/core/frame.py:3088: FutureWarning: TimeSeries broadcasting along DataFrame index by default is deprecated. Please use DataFrame.<op> to explicitly broadcast arithmetic operations along the index
FutureWarning)
.........................................................................................S.......................................................................................................................
----------------------------------------------------------------------
Ran 217 tests in 190.685s
OK (SKIP=1)
```
New running time :speedboat: :
```
$ nosetests pandas/computation/tests/test_eval.py
......../home/phillip/Documents/code/py/pandas/pandas/core/frame.py:3088: FutureWarning: TimeSeries broadcasting along DataFrame index by default is deprecated. Please use DataFrame.<op> to explicitly broadcast arithmetic operations along the index
FutureWarning)
.........................................................................................S.......................................................................................................................
----------------------------------------------------------------------
Ran 217 tests in 39.299s
OK (SKIP=1)
```
| https://api.github.com/repos/pandas-dev/pandas/pulls/4974 | 2013-09-24T22:27:33Z | 2013-09-25T01:28:32Z | 2013-09-25T01:28:32Z | 2014-07-16T08:31:00Z |
CLN: General print statement cleanup. | diff --git a/bench/io_roundtrip.py b/bench/io_roundtrip.py
index e389481d1aabc..fa4e0755f40df 100644
--- a/bench/io_roundtrip.py
+++ b/bench/io_roundtrip.py
@@ -62,8 +62,8 @@ def rountrip_archive(N, K=50, iterations=10):
pickle_time = timeit(pickle_f, iterations) / iterations
print('pandas (pickle) %7.4f seconds' % pickle_time)
- # print 'Numpy (npz) %7.4f seconds' % numpy_time
- # print 'larry (HDF5) %7.4f seconds' % larry_time
+ # print('Numpy (npz) %7.4f seconds' % numpy_time)
+ # print('larry (HDF5) %7.4f seconds' % larry_time)
# Delete old files
try:
diff --git a/doc/sphinxext/docscrape.py b/doc/sphinxext/docscrape.py
index a6a42ac40042e..9a8ac59b32714 100755
--- a/doc/sphinxext/docscrape.py
+++ b/doc/sphinxext/docscrape.py
@@ -463,7 +463,7 @@ def __str__(self):
if self._role:
if not roles.has_key(self._role):
- print "Warning: invalid role %s" % self._role
+ print("Warning: invalid role %s" % self._role)
out += '.. %s:: %s\n \n\n' % (roles.get(self._role, ''),
func_name)
diff --git a/doc/sphinxext/ipython_console_highlighting.py b/doc/sphinxext/ipython_console_highlighting.py
index 569335311aeab..dfb489e49394d 100644
--- a/doc/sphinxext/ipython_console_highlighting.py
+++ b/doc/sphinxext/ipython_console_highlighting.py
@@ -39,7 +39,7 @@ class IPythonConsoleLexer(Lexer):
In [2]: a
Out[2]: 'foo'
- In [3]: print a
+ In [3]: print(a)
foo
In [4]: 1 / 0
diff --git a/doc/sphinxext/ipython_directive.py b/doc/sphinxext/ipython_directive.py
index f05330c371885..114a3d56f36c8 100644
--- a/doc/sphinxext/ipython_directive.py
+++ b/doc/sphinxext/ipython_directive.py
@@ -158,8 +158,8 @@ def block_parser(part, rgxin, rgxout, fmtin, fmtout):
nextline = lines[i]
matchout = rgxout.match(nextline)
- # print "nextline=%s, continuation=%s, starts=%s"%(nextline,
- # continuation, nextline.startswith(continuation))
+ # print("nextline=%s, continuation=%s, starts=%s"%(nextline,
+ # continuation, nextline.startswith(continuation)))
if matchout or nextline.startswith('#'):
break
elif nextline.startswith(continuation):
@@ -245,7 +245,7 @@ def clear_cout(self):
def process_input_line(self, line, store_history=True):
"""process the input, capturing stdout"""
- # print "input='%s'"%self.input
+ # print("input='%s'"%self.input)
stdout = sys.stdout
splitter = self.IP.input_splitter
try:
@@ -293,7 +293,7 @@ def process_input(self, data, input_prompt, lineno):
decorator, input, rest = data
image_file = None
image_directive = None
- # print 'INPUT:', data # dbg
+ # print('INPUT:', data) # dbg
is_verbatim = decorator == '@verbatim' or self.is_verbatim
is_doctest = decorator == '@doctest' or self.is_doctest
is_suppress = decorator == '@suppress' or self.is_suppress
@@ -361,7 +361,7 @@ def _remove_first_space_if_any(line):
self.cout.truncate(0)
return (ret, input_lines, output, is_doctest, image_file,
image_directive)
- # print 'OUTPUT', output # dbg
+ # print('OUTPUT', output) # dbg
def process_output(self, data, output_prompt,
input_lines, output, is_doctest, image_file):
@@ -390,9 +390,9 @@ def process_output(self, data, output_prompt,
'found_output="%s" and submitted output="%s"' %
(input_lines, found, submitted))
raise RuntimeError(e)
- # print 'doctest PASSED for input_lines="%s" with
- # found_output="%s" and submitted output="%s"'%(input_lines,
- # found, submitted)
+ # print('''doctest PASSED for input_lines="%s" with
+ # found_output="%s" and submitted output="%s"''' % (input_lines,
+ # found, submitted))
def process_comment(self, data):
"""Process data fPblock for COMMENT token."""
@@ -406,7 +406,7 @@ def save_image(self, image_file):
self.ensure_pyplot()
command = ('plt.gcf().savefig("%s", bbox_inches="tight", '
'dpi=100)' % image_file)
- # print 'SAVEFIG', command # dbg
+ # print('SAVEFIG', command) # dbg
self.process_input_line('bookmark ipy_thisdir', store_history=False)
self.process_input_line('cd -b ipy_savedir', store_history=False)
self.process_input_line(command, store_history=False)
@@ -737,12 +737,12 @@ def run(self):
lines.extend(figure.split('\n'))
lines.append('')
- # print lines
+ # print(lines)
if len(lines) > 2:
if debug:
- print '\n'.join(lines)
+ print('\n'.join(lines))
else: # NOTE: this raises some errors, what's it for?
- # print 'INSERTING %d lines'%len(lines)
+ # print('INSERTING %d lines' % len(lines))
self.state_machine.insert_input(
lines, self.state_machine.input_lines.source(0))
@@ -813,7 +813,7 @@ def test():
In [130]: url = 'http://ichart.finance.yahoo.com/table.csv?s=CROX\
.....: &d=9&e=22&f=2009&g=d&a=1&br=8&c=2006&ignore=.csv'
-In [131]: print url.split('&')
+In [131]: print(url.split('&'))
['http://ichart.finance.yahoo.com/table.csv?s=CROX', 'd=9', 'e=22', 'f=2009', 'g=d', 'a=1', 'b=8', 'c=2006', 'ignore=.csv']
In [60]: import urllib
@@ -843,12 +843,12 @@ def test():
""",
r"""
-In [106]: print x
+In [106]: print(x)
jdh
In [109]: for i in range(10):
n
-.....: print i
+.....: print(i)
.....:
.....:
0
@@ -920,4 +920,4 @@ def test():
if not os.path.isdir('_static'):
os.mkdir('_static')
test()
- print 'All OK? Check figures in _static/'
+ print('All OK? Check figures in _static/')
diff --git a/doc/sphinxext/phantom_import.py b/doc/sphinxext/phantom_import.py
index 926641827e937..b69f09ea612a0 100755
--- a/doc/sphinxext/phantom_import.py
+++ b/doc/sphinxext/phantom_import.py
@@ -31,7 +31,7 @@ def setup(app):
def initialize(app):
fn = app.config.phantom_import_file
if (fn and os.path.isfile(fn)):
- print "[numpydoc] Phantom importing modules from", fn, "..."
+ print("[numpydoc] Phantom importing modules from", fn, "...")
import_phantom_module(fn)
#------------------------------------------------------------------------------
diff --git a/doc/sphinxext/tests/test_docscrape.py b/doc/sphinxext/tests/test_docscrape.py
index 96c9d5639b5c2..a66e4222b380d 100755
--- a/doc/sphinxext/tests/test_docscrape.py
+++ b/doc/sphinxext/tests/test_docscrape.py
@@ -85,13 +85,13 @@
>>> mean = (1,2)
>>> cov = [[1,0],[1,0]]
>>> x = multivariate_normal(mean,cov,(3,3))
- >>> print x.shape
+ >>> print(x.shape)
(3, 3, 2)
The following is probably true, given that 0.6 is roughly twice the
standard deviation:
- >>> print list( (x[0,0,:] - mean) < 0.6 )
+ >>> print(list( (x[0,0,:] - mean) < 0.6 ))
[True, True]
.. index:: random
@@ -153,7 +153,7 @@ def test_examples():
def test_index():
assert_equal(doc['index']['default'], 'random')
- print doc['index']
+ print(doc['index'])
assert_equal(len(doc['index']), 2)
assert_equal(len(doc['index']['refguide']), 2)
@@ -247,13 +247,13 @@ def test_str():
>>> mean = (1,2)
>>> cov = [[1,0],[1,0]]
>>> x = multivariate_normal(mean,cov,(3,3))
->>> print x.shape
+>>> print(x.shape)
(3, 3, 2)
The following is probably true, given that 0.6 is roughly twice the
standard deviation:
->>> print list( (x[0,0,:] - mean) < 0.6 )
+>>> print(list( (x[0,0,:] - mean) < 0.6 ))
[True, True]
.. index:: random
@@ -351,13 +351,13 @@ def test_sphinx_str():
>>> mean = (1,2)
>>> cov = [[1,0],[1,0]]
>>> x = multivariate_normal(mean,cov,(3,3))
->>> print x.shape
+>>> print(x.shape)
(3, 3, 2)
The following is probably true, given that 0.6 is roughly twice the
standard deviation:
->>> print list( (x[0,0,:] - mean) < 0.6 )
+>>> print(list( (x[0,0,:] - mean) < 0.6 ))
[True, True]
""")
diff --git a/examples/regressions.py b/examples/regressions.py
index 2203165825ccb..6351c6730d838 100644
--- a/examples/regressions.py
+++ b/examples/regressions.py
@@ -31,7 +31,7 @@ def makeSeries():
model = ols(y=Y, x=X)
-print (model)
+print(model)
#-------------------------------------------------------------------------------
# Panel regression
@@ -48,4 +48,4 @@ def makeSeries():
model = ols(y=Y, x=data)
-print (panelModel)
+print(panelModel)
diff --git a/pandas/__init__.py b/pandas/__init__.py
index c4c012d6c5095..ddd4cd49e6ec6 100644
--- a/pandas/__init__.py
+++ b/pandas/__init__.py
@@ -7,7 +7,7 @@
except Exception: # pragma: no cover
import sys
e = sys.exc_info()[1] # Py25 and Py3 current exception syntax conflict
- print (e)
+ print(e)
if 'No module named lib' in str(e):
raise ImportError('C extensions not built: if you installed already '
'verify that you are not importing from the source '
diff --git a/pandas/core/config.py b/pandas/core/config.py
index f81958a0e58fc..9f864e720dbfb 100644
--- a/pandas/core/config.py
+++ b/pandas/core/config.py
@@ -154,7 +154,7 @@ def _describe_option(pat='', _print_desc=True):
s += _build_option_description(k)
if _print_desc:
- print (s)
+ print(s)
else:
return s
@@ -631,7 +631,7 @@ def pp(name, ks):
ls += pp(k, ks)
s = '\n'.join(ls)
if _print:
- print (s)
+ print(s)
else:
return s
diff --git a/pandas/core/frame.py b/pandas/core/frame.py
index 799d96f46a15b..0fd02c2bdc3a4 100644
--- a/pandas/core/frame.py
+++ b/pandas/core/frame.py
@@ -724,9 +724,9 @@ def iterrows(self):
>>> df = DataFrame([[1, 1.0]], columns=['x', 'y'])
>>> row = next(df.iterrows())[1]
- >>> print row['x'].dtype
+ >>> print(row['x'].dtype)
float64
- >>> print df['x'].dtype
+ >>> print(df['x'].dtype)
int64
Returns
diff --git a/pandas/core/groupby.py b/pandas/core/groupby.py
index ce07981793f7b..186277777abe8 100644
--- a/pandas/core/groupby.py
+++ b/pandas/core/groupby.py
@@ -1432,7 +1432,7 @@ def aggregate(self, func_or_funcs, *args, **kwargs):
ret = Series(result, index=index)
if not self.as_index: # pragma: no cover
- print ('Warning, ignoring as_index=True')
+ print('Warning, ignoring as_index=True')
return ret
diff --git a/pandas/io/parsers.py b/pandas/io/parsers.py
index 7b9347a821fad..aac8bd0890169 100644
--- a/pandas/io/parsers.py
+++ b/pandas/io/parsers.py
@@ -516,7 +516,7 @@ def _clean_options(self, options, engine):
sep = options['delimiter']
if (sep is None and not options['delim_whitespace']):
if engine == 'c':
- print ('Using Python parser to sniff delimiter')
+ print('Using Python parser to sniff delimiter')
engine = 'python'
elif sep is not None and len(sep) > 1:
# wait until regex engine integrated
diff --git a/pandas/io/pytables.py b/pandas/io/pytables.py
index 491e090cab4fe..42a434c005a4c 100644
--- a/pandas/io/pytables.py
+++ b/pandas/io/pytables.py
@@ -503,7 +503,7 @@ def open(self, mode='a'):
self._handle = h5_open(self._path, self._mode)
except IOError as e: # pragma: no cover
if 'can not be written' in str(e):
- print ('Opening %s in read-only mode' % self._path)
+ print('Opening %s in read-only mode' % self._path)
self._handle = h5_open(self._path, 'r')
else:
raise
diff --git a/pandas/io/sql.py b/pandas/io/sql.py
index b65c35e6b352a..e269d14f72712 100644
--- a/pandas/io/sql.py
+++ b/pandas/io/sql.py
@@ -97,7 +97,7 @@ def tquery(sql, con=None, cur=None, retry=True):
except Exception as e:
excName = e.__class__.__name__
if excName == 'OperationalError': # pragma: no cover
- print ('Failed to commit, may need to restart interpreter')
+ print('Failed to commit, may need to restart interpreter')
else:
raise
@@ -131,7 +131,7 @@ def uquery(sql, con=None, cur=None, retry=True, params=None):
traceback.print_exc()
if retry:
- print ('Looks like your connection failed, reconnecting...')
+ print('Looks like your connection failed, reconnecting...')
return uquery(sql, con, retry=False)
return result
diff --git a/pandas/io/tests/test_pytables.py b/pandas/io/tests/test_pytables.py
index a79ac9d3f9e40..35b9dfbdb6f77 100644
--- a/pandas/io/tests/test_pytables.py
+++ b/pandas/io/tests/test_pytables.py
@@ -1328,7 +1328,7 @@ def test_big_table_frame(self):
recons = store.select('df')
assert isinstance(recons, DataFrame)
- print ("\nbig_table frame [%s] -> %5.2f" % (rows, time.time() - x))
+ print("\nbig_table frame [%s] -> %5.2f" % (rows, time.time() - x))
def test_big_table2_frame(self):
# this is a really big table: 1m rows x 60 float columns, 20 string, 20 datetime
@@ -1336,7 +1336,7 @@ def test_big_table2_frame(self):
raise nose.SkipTest('no big table2 frame')
# create and write a big table
- print ("\nbig_table2 start")
+ print("\nbig_table2 start")
import time
start_time = time.time()
df = DataFrame(np.random.randn(1000 * 1000, 60), index=range(int(
@@ -1346,8 +1346,8 @@ def test_big_table2_frame(self):
for x in range(20):
df['datetime%03d' % x] = datetime.datetime(2001, 1, 2, 0, 0)
- print ("\nbig_table2 frame (creation of df) [rows->%s] -> %5.2f"
- % (len(df.index), time.time() - start_time))
+ print("\nbig_table2 frame (creation of df) [rows->%s] -> %5.2f"
+ % (len(df.index), time.time() - start_time))
def f(chunksize):
with ensure_clean(self.path,mode='w') as store:
@@ -1357,15 +1357,15 @@ def f(chunksize):
for c in [10000, 50000, 250000]:
start_time = time.time()
- print ("big_table2 frame [chunk->%s]" % c)
+ print("big_table2 frame [chunk->%s]" % c)
rows = f(c)
- print ("big_table2 frame [rows->%s,chunk->%s] -> %5.2f"
- % (rows, c, time.time() - start_time))
+ print("big_table2 frame [rows->%s,chunk->%s] -> %5.2f"
+ % (rows, c, time.time() - start_time))
def test_big_put_frame(self):
raise nose.SkipTest('no big put frame')
- print ("\nbig_put start")
+ print("\nbig_put start")
import time
start_time = time.time()
df = DataFrame(np.random.randn(1000 * 1000, 60), index=range(int(
@@ -1375,17 +1375,17 @@ def test_big_put_frame(self):
for x in range(20):
df['datetime%03d' % x] = datetime.datetime(2001, 1, 2, 0, 0)
- print ("\nbig_put frame (creation of df) [rows->%s] -> %5.2f"
- % (len(df.index), time.time() - start_time))
+ print("\nbig_put frame (creation of df) [rows->%s] -> %5.2f"
+ % (len(df.index), time.time() - start_time))
with ensure_clean(self.path, mode='w') as store:
start_time = time.time()
store = HDFStore(self.path, mode='w')
store.put('df', df)
- print (df.get_dtype_counts())
- print ("big_put frame [shape->%s] -> %5.2f"
- % (df.shape, time.time() - start_time))
+ print(df.get_dtype_counts())
+ print("big_put frame [shape->%s] -> %5.2f"
+ % (df.shape, time.time() - start_time))
def test_big_table_panel(self):
raise nose.SkipTest('no big table panel')
@@ -1410,7 +1410,7 @@ def test_big_table_panel(self):
recons = store.select('wp')
assert isinstance(recons, Panel)
- print ("\nbig_table panel [%s] -> %5.2f" % (rows, time.time() - x))
+ print("\nbig_table panel [%s] -> %5.2f" % (rows, time.time() - x))
def test_append_diff_item_order(self):
diff --git a/pandas/io/wb.py b/pandas/io/wb.py
index 7c50c0b41e897..a585cb9adccbb 100644
--- a/pandas/io/wb.py
+++ b/pandas/io/wb.py
@@ -68,7 +68,7 @@ def download(country=['MX', 'CA', 'US'], indicator=['GDPPCKD', 'GDPPCKN'],
# Warn
if len(bad_indicators) > 0:
print('Failed to obtain indicator(s): %s' % '; '.join(bad_indicators))
- print ('The data may still be available for download at http://data.worldbank.org')
+ print('The data may still be available for download at http://data.worldbank.org')
if len(bad_countries) > 0:
print('Invalid ISO-2 codes: %s' % ' '.join(bad_countries))
# Merge WDI series
diff --git a/pandas/stats/interface.py b/pandas/stats/interface.py
index d93eb83820822..6d7bf329b4bee 100644
--- a/pandas/stats/interface.py
+++ b/pandas/stats/interface.py
@@ -64,7 +64,7 @@ def ols(**kwargs):
# Run rolling simple OLS with window of size 10.
result = ols(y=y, x=x, window_type='rolling', window=10)
- print result.beta
+ print(result.beta)
result = ols(y=y, x=x, nw_lags=1)
diff --git a/pandas/stats/math.py b/pandas/stats/math.py
index 64548b90dade8..505415bebf89e 100644
--- a/pandas/stats/math.py
+++ b/pandas/stats/math.py
@@ -84,9 +84,9 @@ def newey_west(m, max_lags, nobs, df, nw_overlap=False):
if nw_overlap and not is_psd(Xeps):
new_max_lags = int(np.ceil(max_lags * 1.5))
-# print ('nw_overlap is True and newey_west generated a non positive '
-# 'semidefinite matrix, so using newey_west with max_lags of %d.'
-# % new_max_lags)
+# print('nw_overlap is True and newey_west generated a non positive '
+# 'semidefinite matrix, so using newey_west with max_lags of %d.'
+# % new_max_lags)
return newey_west(m, new_max_lags, nobs, df)
return Xeps
diff --git a/pandas/stats/plm.py b/pandas/stats/plm.py
index 2c4e4c47c684a..450ddac78e06a 100644
--- a/pandas/stats/plm.py
+++ b/pandas/stats/plm.py
@@ -58,7 +58,7 @@ def __init__(self, y, x, weights=None, intercept=True, nw_lags=None,
def log(self, msg):
if self._verbose: # pragma: no cover
- print (msg)
+ print(msg)
def _prepare_data(self):
"""Cleans and stacks input data into DataFrame objects
diff --git a/pandas/tests/test_frame.py b/pandas/tests/test_frame.py
index 7b753f5d6a367..cc8f8e91b928e 100644
--- a/pandas/tests/test_frame.py
+++ b/pandas/tests/test_frame.py
@@ -9600,7 +9600,7 @@ def _check_stat_op(self, name, alternative, frame=None, has_skipna=True,
if not ('max' in name or 'min' in name or 'count' in name):
df = DataFrame({'b': date_range('1/1/2001', periods=2)})
_f = getattr(df, name)
- print (df)
+ print(df)
self.assertFalse(len(_f()))
df['a'] = lrange(len(df))
diff --git a/pandas/tests/test_groupby.py b/pandas/tests/test_groupby.py
index 4bd44fcf26bb3..02eb4015c133f 100644
--- a/pandas/tests/test_groupby.py
+++ b/pandas/tests/test_groupby.py
@@ -504,9 +504,9 @@ def test_agg_item_by_item_raise_typeerror(self):
df = DataFrame(randint(10, size=(20, 10)))
def raiseException(df):
- print ('----------------------------------------')
- print(df.to_string())
- raise TypeError
+ print('----------------------------------------')
+ print(df.to_string())
+ raise TypeError
self.assertRaises(TypeError, df.groupby(0).agg,
raiseException)
diff --git a/pandas/tseries/resample.py b/pandas/tseries/resample.py
index be0c5dfad9071..20d42f7211f55 100644
--- a/pandas/tseries/resample.py
+++ b/pandas/tseries/resample.py
@@ -86,7 +86,7 @@ def resample(self, obj):
offset = to_offset(self.freq)
if offset.n > 1:
if self.kind == 'period': # pragma: no cover
- print ('Warning: multiple of frequency -> timestamps')
+ print('Warning: multiple of frequency -> timestamps')
# Cannot have multiple of periods, convert to timestamp
self.kind = 'timestamp'
diff --git a/pandas/tseries/tools.py b/pandas/tseries/tools.py
index dd78bea385c61..5dda1a9b352d9 100644
--- a/pandas/tseries/tools.py
+++ b/pandas/tseries/tools.py
@@ -21,7 +21,7 @@
raise Exception('dateutil 2.0 incompatible with Python 2.x, you must '
'install version 1.5 or 2.1+!')
except ImportError: # pragma: no cover
- print ('Please install python-dateutil via easy_install or some method!')
+ print('Please install python-dateutil via easy_install or some method!')
raise # otherwise a 2nd import won't show the message
diff --git a/scripts/bench_join.py b/scripts/bench_join.py
index 5e50e8da61fdb..c9f2475566519 100644
--- a/scripts/bench_join.py
+++ b/scripts/bench_join.py
@@ -133,7 +133,7 @@ def do_left_join_frame(a, b):
# a = np.array([1, 2, 3, 4, 5], dtype=np.int64)
# b = np.array([0, 3, 5, 7, 9], dtype=np.int64)
-# print lib.inner_join_indexer(a, b)
+# print(lib.inner_join_indexer(a, b))
out = np.empty((10, 120000))
diff --git a/scripts/git-mrb b/scripts/git-mrb
index 5b48cd9c50b6b..c15e6dbf9f51a 100644
--- a/scripts/git-mrb
+++ b/scripts/git-mrb
@@ -26,7 +26,7 @@ import sys
def sh(cmd):
cmd = cmd.format(**shvars)
- print '$', cmd
+ print('$', cmd)
check_call(cmd, shell=True)
#-----------------------------------------------------------------------------
@@ -46,7 +46,7 @@ try:
except:
import traceback as tb
tb.print_exc()
- print __doc__
+ print(__doc__)
sys.exit(1)
onto = argv[1] if narg >= 2 else 'master'
@@ -65,7 +65,7 @@ sh('git fetch {remote}')
sh('git checkout -b {branch_spec} {onto}')
sh('git merge {remote}/{branch}')
-print """
+print("""
*************************************************************
Run test suite. If tests pass, run the following to merge:
@@ -74,7 +74,7 @@ git merge {branch_spec}
git push {upstream} {onto}
*************************************************************
-""".format(**shvars)
+""".format(**shvars))
ans = raw_input("Revert to master and delete temporary branch? [Y/n]: ")
if ans.strip().lower() in ('', 'y', 'yes'):
diff --git a/scripts/groupby_test.py b/scripts/groupby_test.py
index 3425f0cd98723..5acf7da7534a3 100644
--- a/scripts/groupby_test.py
+++ b/scripts/groupby_test.py
@@ -21,15 +21,15 @@
dtype=object)
shape, labels, idicts = gp.labelize(key1, key2)
-print tseries.group_labels(key1)
+print(tseries.group_labels(key1))
-# print shape
-# print labels
-# print idicts
+# print(shape)
+# print(labels)
+# print(idicts)
result = tseries.group_aggregate(values, labels, shape)
-print tseries.groupby_indices(key2)
+print(tseries.groupby_indices(key2))
df = DataFrame({'key1' : key1,
'key2' : key2,
@@ -43,7 +43,7 @@
# r2 = gp.multi_groupby(df, np.sum, k1, k2)
-# print result
+# print(result)
gen = gp.generate_groups(df['v1'], labels, shape, axis=1,
factory=DataFrame)
@@ -51,8 +51,8 @@
res = defaultdict(dict)
for a, gen1 in gen:
for b, group in gen1:
- print a, b
- print group
+ print(a, b)
+ print(group)
# res[b][a] = group['values'].sum()
res[b][a] = group.sum()
@@ -82,10 +82,10 @@
# exp = DataFrame(expd).T.stack()
# result = grouped.sum()['C']
-# print 'wanted'
-# print exp
-# print 'got'
-# print result
+# print('wanted')
+# print(exp)
+# print('got')
+# print(result)
# tm.N = 10000
diff --git a/scripts/json_manip.py b/scripts/json_manip.py
index 72d0bbb34d6b6..7ff4547825568 100644
--- a/scripts/json_manip.py
+++ b/scripts/json_manip.py
@@ -205,9 +205,9 @@ def _denorm(queries,thing):
fields = []
results = []
for q in queries:
- #print q
+ #print(q)
r = Ql(q,thing)
- #print "-- result: ", r
+ #print("-- result: ", r)
if not r:
r = [default]
if isinstance(r[0], type({})):
@@ -217,15 +217,15 @@ def _denorm(queries,thing):
results.append(r)
- #print results
- #print fields
+ #print(results)
+ #print(fields)
flist = list(flatten(*map(iter,fields)))
prod = itertools.product(*results)
for p in prod:
U = dict()
for (ii,thing) in enumerate(p):
- #print ii,thing
+ #print(ii,thing)
if isinstance(thing, type({})):
U.update(thing)
else:
@@ -285,7 +285,7 @@ def _Q(filter_, thing):
T = type(thing)
if isinstance({}, T):
for k,v in compat.iteritems(thing):
- #print k,v
+ #print(k,v)
if filter_ == k:
if isinstance(v, type([])):
yield iter(v)
@@ -297,7 +297,7 @@ def _Q(filter_, thing):
elif isinstance([], T):
for k in thing:
- #print k
+ #print(k)
yield Q(filter_,k)
else:
@@ -321,9 +321,9 @@ def Q(filter_,thing):
return flatten(*[_Q(x,thing) for x in filter_])
elif isinstance(filter_, type({})):
d = dict.fromkeys(list(filter_.keys()))
- #print d
+ #print(d)
for k in d:
- #print flatten(Q(k,thing))
+ #print(flatten(Q(k,thing)))
d[k] = Q(k,thing)
return d
@@ -380,32 +380,32 @@ def printout(queries,things,default=None, f=sys.stdout, **kwargs):
fields = set(itertools.chain(*(x.keys() for x in results)))
W = csv.DictWriter(f=f,fieldnames=fields,**kwargs)
- #print "---prod---"
- #print list(prod)
+ #print("---prod---")
+ #print(list(prod))
W.writeheader()
for r in results:
W.writerow(r)
def test_run():
- print("\n>>> print list(Q('url',ex1))")
+ print("\n>>> print(list(Q('url',ex1)))")
print(list(Q('url',ex1)))
assert list(Q('url',ex1)) == ['url1','url2','url3']
assert Ql('url',ex1) == ['url1','url2','url3']
- print("\n>>> print list(Q(['name','id'],ex1))")
+ print("\n>>> print(list(Q(['name','id'],ex1)))")
print(list(Q(['name','id'],ex1)))
assert Ql(['name','id'],ex1) == ['Gregg','hello','gbye']
- print("\n>>> print Ql('more url',ex1)")
+ print("\n>>> print(Ql('more url',ex1))")
print(Ql('more url',ex1))
print("\n>>> list(Q('extensions',ex1))")
print(list(Q('extensions',ex1)))
- print("\n>>> print Ql('extensions',ex1)")
+ print("\n>>> print(Ql('extensions',ex1))")
print(Ql('extensions',ex1))
print("\n>>> printout(['name','extensions'],[ex1,], extrasaction='ignore')")
diff --git a/scripts/use_build_cache.py b/scripts/use_build_cache.py
index 361ac59e5e852..f8c2df2a8a45d 100755
--- a/scripts/use_build_cache.py
+++ b/scripts/use_build_cache.py
@@ -39,7 +39,7 @@ class Foo(object):
args = Foo() # for 2.6, no argparse
-#print args.accumulate(args.integers)
+#print(args.accumulate(args.integers))
shim="""
import os
diff --git a/vb_suite/suite.py b/vb_suite/suite.py
index f3c8dfe3032e0..57920fcbf7c19 100644
--- a/vb_suite/suite.py
+++ b/vb_suite/suite.py
@@ -92,15 +92,15 @@ def generate_rst_files(benchmarks):
fig_base_path = os.path.join(vb_path, 'figures')
if not os.path.exists(vb_path):
- print 'creating %s' % vb_path
+ print('creating %s' % vb_path)
os.makedirs(vb_path)
if not os.path.exists(fig_base_path):
- print 'creating %s' % fig_base_path
+ print('creating %s' % fig_base_path)
os.makedirs(fig_base_path)
for bmk in benchmarks:
- print 'Generating rst file for %s' % bmk.name
+ print('Generating rst file for %s' % bmk.name)
rst_path = os.path.join(RST_BASE, 'vbench/%s.txt' % bmk.name)
fig_full_path = os.path.join(fig_base_path, '%s.png' % bmk.name)
diff --git a/vb_suite/test_perf.py b/vb_suite/test_perf.py
index c1a91786ab6d0..bb3a0d123f9b1 100755
--- a/vb_suite/test_perf.py
+++ b/vb_suite/test_perf.py
@@ -216,7 +216,7 @@ def profile_comparative(benchmarks):
# ARGH. reparse the repo, without discarding any commits,
# then overwrite the previous parse results
- # prprint ("Slaughtering kittens..." )
+ # prprint("Slaughtering kittens...")
(repo.shas, repo.messages,
repo.timestamps, repo.authors) = _parse_commit_log(None,REPO_PATH,
args.base_commit)
| Changed all print statements in comments, docstrings, and supporting scripts to print() function calls.
| https://api.github.com/repos/pandas-dev/pandas/pulls/4973 | 2013-09-24T21:23:23Z | 2013-09-24T22:43:17Z | 2013-09-24T22:43:17Z | 2014-06-14T18:07:23Z |
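The diff above mechanically rewrites Python 2 `print` statements such as `print '$', cmd` into `print('$', cmd)`. A minimal sketch (an illustration, not code from the PR) of why the parenthesized form is required under Python 3, where `print` is a function and comma-separated arguments are space-joined instead of forming a tuple:

```python
# Illustration only: capture stdout to show how Python 3's print() function
# handles the argument forms that appear throughout the diff above.
import io
import contextlib

buf = io.StringIO()
with contextlib.redirect_stdout(buf):
    print('$', 'git fetch origin')                       # multiple args, space-joined
    print('Opening %s in read-only mode' % 'store.h5')   # %-formatting unchanged

lines = buf.getvalue().splitlines()
# lines[0] is '$ git fetch origin'
# lines[1] is 'Opening store.h5 in read-only mode'
```

Under Python 2, the same parenthesized single-argument calls also work (the parentheses are read as grouping), which is why this style of conversion is a common first step toward 2/3-compatible code.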
CLN: change print to print() in docs | diff --git a/doc/source/basics.rst b/doc/source/basics.rst
index fad62c1a17deb..ce42ee3b7bc88 100644
--- a/doc/source/basics.rst
+++ b/doc/source/basics.rst
@@ -860,7 +860,7 @@ Thus, for example:
.. ipython::
In [0]: for col in df:
- ...: print col
+ ...: print(col)
...:
iteritems
@@ -878,8 +878,8 @@ For example:
.. ipython::
In [0]: for item, frame in wp.iteritems():
- ...: print item
- ...: print frame
+ ...: print(item)
+ ...: print(frame)
...:
@@ -895,7 +895,7 @@ containing the data in each row:
.. ipython::
In [0]: for row_index, row in df2.iterrows():
- ...: print '%s\n%s' % (row_index, row)
+ ...: print('%s\n%s' % (row_index, row))
...:
For instance, a contrived way to transpose the dataframe would be:
@@ -903,11 +903,11 @@ For instance, a contrived way to transpose the dataframe would be:
.. ipython:: python
df2 = DataFrame({'x': [1, 2, 3], 'y': [4, 5, 6]})
- print df2
- print df2.T
+ print(df2)
+ print(df2.T)
df2_t = DataFrame(dict((idx,values) for idx, values in df2.iterrows()))
- print df2_t
+ print(df2_t)
.. note::
@@ -918,8 +918,8 @@ For instance, a contrived way to transpose the dataframe would be:
df_iter = DataFrame([[1, 1.0]], columns=['x', 'y'])
row = next(df_iter.iterrows())[1]
- print row['x'].dtype
- print df_iter['x'].dtype
+ print(row['x'].dtype)
+ print(df_iter['x'].dtype)
itertuples
~~~~~~~~~~
@@ -932,7 +932,8 @@ For instance,
.. ipython:: python
- for r in df2.itertuples(): print r
+ for r in df2.itertuples():
+ print(r)
.. _basics.string_methods:
diff --git a/doc/source/dsintro.rst b/doc/source/dsintro.rst
index c2e5aae80f978..08ef25b178af9 100644
--- a/doc/source/dsintro.rst
+++ b/doc/source/dsintro.rst
@@ -586,7 +586,7 @@ R package):
.. ipython:: python
baseball = read_csv('data/baseball.csv')
- print baseball
+ print(baseball)
.. ipython:: python
:suppress:
@@ -599,7 +599,7 @@ DataFrame in tabular form, though it won't always fit the console width:
.. ipython:: python
- print baseball.iloc[-20:, :12].to_string()
+ print(baseball.iloc[-20:, :12].to_string())
New since 0.10.0, wide DataFrames will now be printed across multiple rows by
default:
diff --git a/doc/source/gotchas.rst b/doc/source/gotchas.rst
index 909aa5e2e4c97..58eb6dccfc967 100644
--- a/doc/source/gotchas.rst
+++ b/doc/source/gotchas.rst
@@ -372,7 +372,7 @@ of the new set of columns rather than the original ones:
.. ipython:: python
- print open('tmp.csv').read()
+ print(open('tmp.csv').read())
date_spec = {'nominal': [1, 2], 'actual': [1, 3]}
df = read_csv('tmp.csv', header=None,
diff --git a/doc/source/groupby.rst b/doc/source/groupby.rst
index 98d3d702e24d8..a8900bd83309f 100644
--- a/doc/source/groupby.rst
+++ b/doc/source/groupby.rst
@@ -282,8 +282,8 @@ natural and functions similarly to ``itertools.groupby``:
In [4]: grouped = df.groupby('A')
In [5]: for name, group in grouped:
- ...: print name
- ...: print group
+ ...: print(name)
+ ...: print(group)
...:
In the case of grouping by multiple keys, the group name will be a tuple:
@@ -291,8 +291,8 @@ In the case of grouping by multiple keys, the group name will be a tuple:
.. ipython::
In [5]: for name, group in df.groupby(['A', 'B']):
- ...: print name
- ...: print group
+ ...: print(name)
+ ...: print(group)
...:
It's standard Python-fu but remember you can unpack the tuple in the for loop
diff --git a/doc/source/indexing.rst b/doc/source/indexing.rst
index a8b9a4be01ae8..9f238c22850b7 100644
--- a/doc/source/indexing.rst
+++ b/doc/source/indexing.rst
@@ -1149,7 +1149,7 @@ and stop are **inclusive** in the label-based case:
.. ipython:: python
date1, date2 = dates[[2, 4]]
- print date1, date2
+ print(date1, date2)
df.ix[date1:date2]
df['A'].ix[date1:date2]
@@ -1211,10 +1211,10 @@ scalar values, though setting arbitrary vectors is not yet supported:
df2 = df[:4]
df2['foo'] = 'bar'
- print df2
+ print(df2)
df2.ix[2] = np.nan
- print df2
- print df2.dtypes
+ print(df2)
+ print(df2.dtypes)
.. _indexing.view_versus_copy:
@@ -1639,13 +1639,13 @@ instance:
midx = MultiIndex(levels=[['zero', 'one'], ['x','y']],
labels=[[1,1,0,0],[1,0,1,0]])
df = DataFrame(randn(4,2), index=midx)
- print df
+ print(df)
df2 = df.mean(level=0)
- print df2
- print df2.reindex(df.index, level=0)
+ print(df2)
+ print(df2.reindex(df.index, level=0))
df_aligned, df2_aligned = df.align(df2, level=0)
- print df_aligned
- print df2_aligned
+ print(df_aligned)
+ print(df2_aligned)
The need for sortedness with :class:`~pandas.MultiIndex`
diff --git a/doc/source/io.rst b/doc/source/io.rst
index a0e41a96181a2..f31b4043da370 100644
--- a/doc/source/io.rst
+++ b/doc/source/io.rst
@@ -167,7 +167,7 @@ Consider a typical CSV file containing, in this case, some time series data:
.. ipython:: python
- print open('foo.csv').read()
+ print(open('foo.csv').read())
The default for `read_csv` is to create a DataFrame with simple numbered rows:
@@ -209,7 +209,7 @@ Suppose you had data with unenclosed quotes:
.. ipython:: python
- print data
+ print(data)
By default, ``read_csv`` uses the Excel dialect and treats the double quote as
the quote character, which causes it to fail when it finds a newline before it
@@ -236,7 +236,7 @@ after a delimiter:
.. ipython:: python
data = 'a, b, c\n1, 2, 3\n4, 5, 6'
- print data
+ print(data)
pd.read_csv(StringIO(data), skipinitialspace=True)
The parsers make every attempt to "do the right thing" and not be very
@@ -255,7 +255,7 @@ individual columns:
.. ipython:: python
data = 'a,b,c\n1,2,3\n4,5,6\n7,8,9'
- print data
+ print(data)
df = pd.read_csv(StringIO(data), dtype=object)
df
@@ -275,7 +275,7 @@ used as the column names:
from StringIO import StringIO
data = 'a,b,c\n1,2,3\n4,5,6\n7,8,9'
- print data
+ print(data)
pd.read_csv(StringIO(data))
By specifying the ``names`` argument in conjunction with ``header`` you can
@@ -284,7 +284,7 @@ any):
.. ipython:: python
- print data
+ print(data)
pd.read_csv(StringIO(data), names=['foo', 'bar', 'baz'], header=0)
pd.read_csv(StringIO(data), names=['foo', 'bar', 'baz'], header=None)
@@ -356,7 +356,7 @@ index column inference and discard the last column, pass ``index_col=False``:
.. ipython:: python
data = 'a,b,c\n4,apple,bat,\n8,orange,cow,'
- print data
+ print(data)
pd.read_csv(StringIO(data))
pd.read_csv(StringIO(data), index_col=False)
@@ -411,7 +411,7 @@ column names:
.. ipython:: python
- print open('tmp.csv').read()
+ print(open('tmp.csv').read())
df = pd.read_csv('tmp.csv', header=None, parse_dates=[[1, 2], [1, 3]])
df
@@ -499,7 +499,7 @@ DD/MM/YYYY instead. For convenience, a ``dayfirst`` keyword is provided:
.. ipython:: python
- print open('tmp.csv').read()
+ print(open('tmp.csv').read())
pd.read_csv('tmp.csv', parse_dates=[0])
pd.read_csv('tmp.csv', dayfirst=True, parse_dates=[0])
@@ -527,7 +527,7 @@ By default, numbers with a thousands separator will be parsed as strings
.. ipython:: python
- print open('tmp.csv').read()
+ print(open('tmp.csv').read())
df = pd.read_csv('tmp.csv', sep='|')
df
@@ -537,7 +537,7 @@ The ``thousands`` keyword allows integers to be parsed correctly
.. ipython:: python
- print open('tmp.csv').read()
+ print(open('tmp.csv').read())
df = pd.read_csv('tmp.csv', sep='|', thousands=',')
df
@@ -614,7 +614,7 @@ Sometimes comments or meta data may be included in a file:
.. ipython:: python
- print open('tmp.csv').read()
+ print(open('tmp.csv').read())
By default, the parse includes the comments in the output:
@@ -654,7 +654,7 @@ as a ``Series``:
.. ipython:: python
- print open('tmp.csv').read()
+ print(open('tmp.csv').read())
output = pd.read_csv('tmp.csv', squeeze=True)
output
@@ -679,7 +679,7 @@ options:
.. ipython:: python
data= 'a,b,c\n1,Yes,2\n3,No,4'
- print data
+ print(data)
pd.read_csv(StringIO(data))
pd.read_csv(StringIO(data), true_values=['Yes'], false_values=['No'])
@@ -730,7 +730,7 @@ should pass the ``escapechar`` option:
.. ipython:: python
data = 'a,b\n"hello, \\"Bob\\", nice to see you",5'
- print data
+ print(data)
pd.read_csv(StringIO(data), escapechar='\\')
.. _io.fwf:
@@ -763,7 +763,7 @@ Consider a typical fixed-width data file:
.. ipython:: python
- print open('bar.csv').read()
+ print(open('bar.csv').read())
In order to parse this file into a DataFrame, we simply need to supply the
column specifications to the `read_fwf` function along with the file name:
@@ -809,7 +809,7 @@ column:
.. ipython:: python
- print open('foo.csv').read()
+ print(open('foo.csv').read())
In this special case, ``read_csv`` assumes that the first column is to be used
as the index of the DataFrame:
@@ -841,7 +841,7 @@ Suppose you have data indexed by two columns:
.. ipython:: python
- print open('data/mindex_ex.csv').read()
+ print(open('data/mindex_ex.csv').read())
The ``index_col`` argument to ``read_csv`` and ``read_table`` can take a list of
column numbers to turn multiple columns into a ``MultiIndex`` for the index of the
@@ -868,7 +868,7 @@ of tupleizing columns, specify ``tupleize_cols=True``.
from pandas.util.testing import makeCustomDataframe as mkdf
df = mkdf(5,3,r_idx_nlevels=2,c_idx_nlevels=4)
df.to_csv('mi.csv')
- print open('mi.csv').read()
+ print(open('mi.csv').read())
pd.read_csv('mi.csv',header=[0,1,2,3],index_col=[0,1])
Note: If an ``index_col`` is not specified (e.g. you don't have an index, or wrote it
@@ -898,7 +898,7 @@ class of the csv module.
.. ipython:: python
- print open('tmp2.sv').read()
+ print(open('tmp2.sv').read())
pd.read_csv('tmp2.sv')
.. _io.chunking:
@@ -912,7 +912,7 @@ rather than reading the entire file into memory, such as the following:
.. ipython:: python
- print open('tmp.sv').read()
+ print(open('tmp.sv').read())
table = pd.read_table('tmp.sv', sep='|')
table
@@ -926,7 +926,7 @@ value will be an iterable object of type ``TextFileReader``:
reader
for chunk in reader:
- print chunk
+ print(chunk)
Specifying ``iterator=True`` will also return the ``TextFileReader`` object:
@@ -1333,7 +1333,7 @@ Specify an HTML attribute
dfs1 = read_html(url, attrs={'id': 'table'})
dfs2 = read_html(url, attrs={'class': 'sortable'})
- print np.array_equal(dfs1[0], dfs2[0]) # Should be True
+ print(np.array_equal(dfs1[0], dfs2[0])) # Should be True
Use some combination of the above
@@ -1400,7 +1400,7 @@ in the method ``to_string`` described above.
df = DataFrame(randn(2, 2))
df
- print df.to_html() # raw html
+ print(df.to_html()) # raw html
.. ipython:: python
:suppress:
@@ -1416,7 +1416,7 @@ The ``columns`` argument will limit the columns shown
.. ipython:: python
- print df.to_html(columns=[0])
+ print(df.to_html(columns=[0]))
.. ipython:: python
:suppress:
@@ -1433,7 +1433,7 @@ point values
.. ipython:: python
- print df.to_html(float_format='{0:.10f}'.format)
+ print(df.to_html(float_format='{0:.10f}'.format))
.. ipython:: python
:suppress:
@@ -1450,7 +1450,7 @@ off
.. ipython:: python
- print df.to_html(bold_rows=False)
+ print(df.to_html(bold_rows=False))
.. ipython:: python
:suppress:
@@ -1466,7 +1466,7 @@ table CSS classes. Note that these classes are *appended* to the existing
.. ipython:: python
- print df.to_html(classes=['awesome_table_class', 'even_more_awesome_class'])
+ print(df.to_html(classes=['awesome_table_class', 'even_more_awesome_class']))
Finally, the ``escape`` argument allows you to control whether the
"<", ">" and "&" characters escaped in the resulting HTML (by default it is
@@ -1487,7 +1487,7 @@ Escaped:
.. ipython:: python
- print df.to_html()
+ print(df.to_html())
.. raw:: html
:file: _static/escape.html
@@ -1496,7 +1496,7 @@ Not escaped:
.. ipython:: python
- print df.to_html(escape=False)
+ print(df.to_html(escape=False))
.. raw:: html
:file: _static/noescape.html
@@ -1746,7 +1746,7 @@ for some advanced strategies
.. ipython:: python
store = HDFStore('store.h5')
- print store
+ print(store)
Objects can be written to the file just like adding key-value pairs to a
dict:
@@ -2209,7 +2209,7 @@ The default is 50,000 rows returned in a chunk.
.. ipython:: python
for df in store.select('df', chunksize=3):
- print df
+ print(df)
.. note::
@@ -2221,7 +2221,7 @@ The default is 50,000 rows returned in a chunk.
.. code-block:: python
for df in read_hdf('store.h5','df', chunsize=3):
- print df
+ print(df)
Note, that the chunksize keyword applies to the **returned** rows. So if you
are doing a query, then that set will be subdivided and returned in the
diff --git a/doc/source/r_interface.rst b/doc/source/r_interface.rst
index d375b3da38d82..79a87cb49f027 100644
--- a/doc/source/r_interface.rst
+++ b/doc/source/r_interface.rst
@@ -74,8 +74,8 @@ DataFrames into the equivalent R object (that is, **data.frame**):
index=["one", "two", "three"])
r_dataframe = com.convert_to_r_dataframe(df)
- print type(r_dataframe)
- print r_dataframe
+ print(type(r_dataframe))
+ print(r_dataframe)
The DataFrame's index is stored as the ``rownames`` attribute of the
@@ -90,8 +90,8 @@ R matrices bear no information on the data type):
r_matrix = com.convert_to_r_matrix(df)
- print type(r_matrix)
- print r_matrix
+ print(type(r_matrix))
+ print(r_matrix)
Calling R functions with pandas objects
diff --git a/doc/source/remote_data.rst b/doc/source/remote_data.rst
index 178ac0fce55dc..b950876738852 100644
--- a/doc/source/remote_data.rst
+++ b/doc/source/remote_data.rst
@@ -126,7 +126,7 @@ Bank's servers:
In [3]: dat = wb.download(indicator='NY.GDP.PCAP.KD', country=['US', 'CA', 'MX'], start=2005, end=2008)
- In [4]: print dat
+ In [4]: print(dat)
NY.GDP.PCAP.KD
country year
Canada 2008 36005.5004978584
@@ -175,7 +175,7 @@ Notice that this second search was much faster than the first one because
In [13]: ind = ['NY.GDP.PCAP.KD', 'IT.MOB.COV.ZS']
In [14]: dat = wb.download(indicator=ind, country='all', start=2011, end=2011).dropna()
In [15]: dat.columns = ['gdp', 'cellphone']
- In [16]: print dat.tail()
+ In [16]: print(dat.tail())
gdp cellphone
country year
Swaziland 2011 2413.952853 94.9
@@ -193,7 +193,7 @@ populations in rich countries tend to use cellphones at a higher rate:
In [17]: import numpy as np
In [18]: import statsmodels.formula.api as smf
In [19]: mod = smf.ols("cellphone ~ np.log(gdp)", dat).fit()
- In [20]: print mod.summary()
+ In [20]: print(mod.summary())
OLS Regression Results
==============================================================================
Dep. Variable: cellphone R-squared: 0.297
diff --git a/doc/source/reshaping.rst b/doc/source/reshaping.rst
index 99af4afc71a66..5dedfa1ad144d 100644
--- a/doc/source/reshaping.rst
+++ b/doc/source/reshaping.rst
@@ -287,7 +287,7 @@ calling ``to_string`` if you wish:
.. ipython:: python
table = pivot_table(df, rows=['A', 'B'], cols=['C'])
- print table.to_string(na_rep='')
+ print(table.to_string(na_rep=''))
Note that ``pivot_table`` is also available as an instance method on DataFrame.
diff --git a/doc/source/timeseries.rst b/doc/source/timeseries.rst
index 5dbf1ce77bad8..bcb738d8a89cb 100644
--- a/doc/source/timeseries.rst
+++ b/doc/source/timeseries.rst
@@ -514,10 +514,10 @@ calendars which account for local holidays and local weekend conventions.
holidays = ['2012-05-01', datetime(2013, 5, 1), np.datetime64('2014-05-01')]
bday_egypt = CustomBusinessDay(holidays=holidays, weekmask=weekmask_egypt)
dt = datetime(2013, 4, 30)
- print dt + 2 * bday_egypt
+ print(dt + 2 * bday_egypt)
dts = date_range(dt, periods=5, freq=bday_egypt).to_series()
- print dts
- print Series(dts.weekday, dts).map(Series('Mon Tue Wed Thu Fri Sat Sun'.split()))
+ print(dts)
+ print(Series(dts.weekday, dts).map(Series('Mon Tue Wed Thu Fri Sat Sun'.split())))
.. note::
diff --git a/doc/source/v0.10.0.txt b/doc/source/v0.10.0.txt
index 0c86add1225ad..2e59c420fbd01 100644
--- a/doc/source/v0.10.0.txt
+++ b/doc/source/v0.10.0.txt
@@ -111,7 +111,7 @@ Note:
.. ipython:: python
data= 'a,b,c\n1,Yes,2\n3,No,4'
- print data
+ print(data)
pd.read_csv(StringIO(data), header=None)
pd.read_csv(StringIO(data), header=None, prefix='X')
@@ -121,7 +121,7 @@ Note:
.. ipython:: python
- print data
+ print(data)
pd.read_csv(StringIO(data))
pd.read_csv(StringIO(data), true_values=['Yes'], false_values=['No'])
diff --git a/doc/source/v0.12.0.txt b/doc/source/v0.12.0.txt
index beb62df505a37..43b5479159b38 100644
--- a/doc/source/v0.12.0.txt
+++ b/doc/source/v0.12.0.txt
@@ -188,10 +188,10 @@ I/O Enhancements
.. ipython :: python
df = DataFrame({'a': range(3), 'b': list('abc')})
- print df
+ print(df)
html = df.to_html()
alist = pd.read_html(html, infer_types=True, index_col=0)
- print df == alist[0]
+ print(df == alist[0])
Note that ``alist`` here is a Python ``list`` so ``pd.read_html()`` and
``DataFrame.to_html()`` are not inverses.
@@ -237,7 +237,7 @@ I/O Enhancements
from pandas.util.testing import makeCustomDataframe as mkdf
df = mkdf(5,3,r_idx_nlevels=2,c_idx_nlevels=4)
df.to_csv('mi.csv',tupleize_cols=False)
- print open('mi.csv').read()
+ print(open('mi.csv').read())
pd.read_csv('mi.csv',header=[0,1,2,3],index_col=[0,1],tupleize_cols=False)
.. ipython:: python
@@ -256,7 +256,7 @@ I/O Enhancements
path = 'store_iterator.h5'
DataFrame(randn(10,2)).to_hdf(path,'df',table=True)
for df in read_hdf(path,'df', chunksize=3):
- print df
+ print(df)
.. ipython:: python
:suppress:
@@ -376,9 +376,9 @@ Experimental Features
holidays = ['2012-05-01', datetime(2013, 5, 1), np.datetime64('2014-05-01')]
bday_egypt = CustomBusinessDay(holidays=holidays, weekmask=weekmask_egypt)
dt = datetime(2013, 4, 30)
- print dt + 2 * bday_egypt
+ print(dt + 2 * bday_egypt)
dts = date_range(dt, periods=5, freq=bday_egypt).to_series()
- print Series(dts.weekday, dts).map(Series('Mon Tue Wed Thu Fri Sat Sun'.split()))
+ print(Series(dts.weekday, dts).map(Series('Mon Tue Wed Thu Fri Sat Sun'.split())))
Bug Fixes
~~~~~~~~~
@@ -404,7 +404,7 @@ Bug Fixes
ds = Series(strs)
for s in ds.str:
- print s
+ print(s)
s
s.dropna().values.item() == 'w'
diff --git a/doc/source/v0.13.0.txt b/doc/source/v0.13.0.txt
index bc16f549f0cf1..bda6fa4cdf021 100644
--- a/doc/source/v0.13.0.txt
+++ b/doc/source/v0.13.0.txt
@@ -161,7 +161,7 @@ HDFStore API Changes
df.to_hdf(path,'df_table2',append=True)
df.to_hdf(path,'df_fixed')
with get_store(path) as store:
- print store
+ print(store)
.. ipython:: python
:suppress:
| closes #4967
`print` statements refactoring
| https://api.github.com/repos/pandas-dev/pandas/pulls/4972 | 2013-09-24T20:06:41Z | 2013-09-24T20:35:53Z | 2013-09-24T20:35:53Z | 2014-07-16T08:30:39Z |
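The refactoring in the diff above swaps Python 2 `print` statements for `print()` calls, the only form valid on Python 3. A minimal sketch of the compatible pattern (the `render` helper is illustrative, not part of the PR; the `__future__` import is what dual-version code of that era used, and is a no-op on Python 3):

```python
# On Python 2, this __future__ import turns `print` into a function, so the
# same call syntax runs under both interpreters; on Python 3 it does nothing.
from __future__ import print_function
import io

def render(*args, **kwargs):
    # Capture exactly what print() would write, to show its semantics.
    buf = io.StringIO()
    print(*args, file=buf, **kwargs)
    return buf.getvalue()

data = 'a,b,c\n1,Yes,2\n3,No,4'
print(data)                    # function form; `print data` is a SyntaxError on Py3
print('a', 'b', 'c', sep=',')  # keyword args the old print statement never had
```

Beyond portability, the function form composes like any other callable (it can be passed around, captured, or redirected via `file=`), which the statement form never allowed.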
API: raise a TypeError on invalid comparison ops on Series (e.g. integer/datetime) GH4968 | diff --git a/doc/source/release.rst b/doc/source/release.rst
index 285cea7938f91..b95509f70f56f 100644
--- a/doc/source/release.rst
+++ b/doc/source/release.rst
@@ -225,6 +225,7 @@ API Changes
- moved timedeltas support to pandas.tseries.timedeltas.py; add timedeltas string parsing,
add top-level ``to_timedelta`` function
- ``NDFrame`` now is compatible with Python's toplevel ``abs()`` function (:issue:`4821`).
+ - raise a ``TypeError`` on invalid comparison ops on Series/DataFrame (e.g. integer/datetime) (:issue:`4968`)
Internal Refactoring
~~~~~~~~~~~~~~~~~~~~
diff --git a/pandas/core/series.py b/pandas/core/series.py
index 9f7ab0cb0346b..942bb700a3718 100644
--- a/pandas/core/series.py
+++ b/pandas/core/series.py
@@ -324,7 +324,13 @@ def na_op(x, y):
else:
result = lib.scalar_compare(x, y, op)
else:
- result = op(x, y)
+
+ try:
+ result = getattr(x,name)(y)
+ if result is NotImplemented:
+ raise TypeError("invalid type comparison")
+ except (AttributeError):
+ result = op(x, y)
return result
diff --git a/pandas/tests/test_frame.py b/pandas/tests/test_frame.py
index 7b753f5d6a367..04ee6abcbac18 100644
--- a/pandas/tests/test_frame.py
+++ b/pandas/tests/test_frame.py
@@ -4296,6 +4296,31 @@ def test_operators_none_as_na(self):
result = op(df.fillna(7), df)
assert_frame_equal(result, expected)
+ def test_comparison_invalid(self):
+
+ def check(df,df2):
+
+ for (x, y) in [(df,df2),(df2,df)]:
+ self.assertRaises(TypeError, lambda : x == y)
+ self.assertRaises(TypeError, lambda : x != y)
+ self.assertRaises(TypeError, lambda : x >= y)
+ self.assertRaises(TypeError, lambda : x > y)
+ self.assertRaises(TypeError, lambda : x < y)
+ self.assertRaises(TypeError, lambda : x <= y)
+
+ # GH4968
+ # invalid date/int comparisons
+ df = DataFrame(np.random.randint(10, size=(10, 1)), columns=['a'])
+ df['dates'] = date_range('20010101', periods=len(df))
+
+ df2 = df.copy()
+ df2['dates'] = df['a']
+ check(df,df2)
+
+ df = DataFrame(np.random.randint(10, size=(10, 2)), columns=['a', 'b'])
+ df2 = DataFrame({'a': date_range('20010101', periods=len(df)), 'b': date_range('20100101', periods=len(df))})
+ check(df,df2)
+
def test_modulo(self):
# GH3590, modulo as ints
diff --git a/pandas/tests/test_series.py b/pandas/tests/test_series.py
index b2c5782d56b1f..6d3b052154147 100644
--- a/pandas/tests/test_series.py
+++ b/pandas/tests/test_series.py
@@ -2663,6 +2663,21 @@ def test_comparison_object_numeric_nas(self):
expected = f(s.astype(float), shifted.astype(float))
assert_series_equal(result, expected)
+ def test_comparison_invalid(self):
+
+ # GH4968
+ # invalid date/int comparisons
+ s = Series(range(5))
+ s2 = Series(date_range('20010101', periods=5))
+
+ for (x, y) in [(s,s2),(s2,s)]:
+ self.assertRaises(TypeError, lambda : x == y)
+ self.assertRaises(TypeError, lambda : x != y)
+ self.assertRaises(TypeError, lambda : x >= y)
+ self.assertRaises(TypeError, lambda : x > y)
+ self.assertRaises(TypeError, lambda : x < y)
+ self.assertRaises(TypeError, lambda : x <= y)
+
def test_more_na_comparisons(self):
left = Series(['a', np.nan, 'c'])
right = Series(['a', np.nan, 'd'])
| closes #4968
| https://api.github.com/repos/pandas-dev/pandas/pulls/4970 | 2013-09-24T19:26:03Z | 2013-09-24T20:28:02Z | 2013-09-24T20:28:02Z | 2014-07-02T07:32:26Z |
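The core of the patch above is a plain Python protocol detail: a rich-comparison dunder that cannot handle its operand returns the `NotImplemented` sentinel rather than raising, and the old `op(x, y)` path would then fall through to an unhelpful element-wise object comparison. The fixed `na_op` calls the dunder directly and converts the sentinel into a `TypeError`. A standalone sketch of that check (`strict_compare` is an illustrative name, not pandas API; the real `na_op` also falls back to `op(x, y)` on `AttributeError`):

```python
import datetime

def strict_compare(x, y, name):
    """Call comparison dunder `name` (e.g. '__lt__') directly and turn the
    silent NotImplemented sentinel into a TypeError, as the patched na_op does."""
    result = getattr(x, name)(y)
    if result is NotImplemented:
        raise TypeError("invalid type comparison")
    return result

# int vs. datetime ordering is undefined, so int.__lt__ returns NotImplemented;
# without the sentinel check this would silently fall through instead of raising.
try:
    strict_compare(5, datetime.datetime(2001, 1, 1), '__lt__')
except TypeError as exc:
    print(exc)                         # invalid type comparison

print(strict_compare(5, 3, '__gt__'))  # True: well-defined comparisons still work
```

This is why the tests in the diff expect `TypeError` for every operator on mixed integer/datetime frames while ordinary same-type comparisons are unaffected.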
BUG: fix skiprows option for python parser in read_csv | diff --git a/doc/source/release.rst b/doc/source/release.rst
index 285cea7938f91..9c2032212e3c8 100644
--- a/doc/source/release.rst
+++ b/doc/source/release.rst
@@ -457,6 +457,7 @@ Bug Fixes
weren't strings (:issue:`4956`)
- Fixed ``copy()`` to shallow copy axes/indices as well and thereby keep
separate metadata. (:issue:`4202`, :issue:`4830`)
+ - Fixed skiprows option in Python parser for read_csv (:issue:`4382`)
pandas 0.12.0
-------------
diff --git a/pandas/io/parsers.py b/pandas/io/parsers.py
index 7b9347a821fad..380fd04fb4433 100644
--- a/pandas/io/parsers.py
+++ b/pandas/io/parsers.py
@@ -1283,7 +1283,6 @@ def __init__(self, f, **kwds):
# needs to be cleaned/refactored
# multiple date column thing turning into a real spaghetti factory
-
if not self._has_complex_date_col:
(index_names,
self.orig_names, _) = self._get_index_name(self.columns)
@@ -1561,8 +1560,6 @@ def _get_index_name(self, columns):
except StopIteration:
next_line = None
- index_name = None
-
# implicitly index_col=0 b/c 1 fewer column names
implicit_first_cols = 0
if line is not None:
@@ -1647,11 +1644,20 @@ def _get_lines(self, rows=None):
if self.pos > len(source):
raise StopIteration
if rows is None:
- lines.extend(source[self.pos:])
- self.pos = len(source)
+ new_rows = source[self.pos:]
+ new_pos = len(source)
else:
- lines.extend(source[self.pos:self.pos + rows])
- self.pos += rows
+ new_rows = source[self.pos:self.pos + rows]
+ new_pos = self.pos + rows
+
+ # Check for stop rows. n.b.: self.skiprows is a set.
+ if self.skiprows:
+ new_rows = [row for i, row in enumerate(new_rows)
+ if i + self.pos not in self.skiprows]
+
+ lines.extend(new_rows)
+ self.pos = new_pos
+
else:
new_rows = []
try:
@@ -1673,6 +1679,9 @@ def _get_lines(self, rows=None):
raise Exception(msg)
raise
except StopIteration:
+ if self.skiprows:
+ new_rows = [row for i, row in enumerate(new_rows)
+ if self.pos + i not in self.skiprows]
lines.extend(new_rows)
if len(lines) == 0:
raise
diff --git a/pandas/io/tests/test_parsers.py b/pandas/io/tests/test_parsers.py
index fb2b3fdd33bf1..16cc53976e862 100644
--- a/pandas/io/tests/test_parsers.py
+++ b/pandas/io/tests/test_parsers.py
@@ -735,6 +735,14 @@ def test_skiprows_bug(self):
tm.assert_frame_equal(data, expected)
tm.assert_frame_equal(data, data2)
+ def test_deep_skiprows(self):
+ # GH #4382
+ text = "a,b,c\n" + "\n".join([",".join([str(i), str(i+1), str(i+2)]) for i in range(10)])
+ condensed_text = "a,b,c\n" + "\n".join([",".join([str(i), str(i+1), str(i+2)]) for i in [0, 1, 2, 3, 4, 6, 8, 9]])
+ data = self.read_csv(StringIO(text), skiprows=[6, 8])
+ condensed_data = self.read_csv(StringIO(condensed_text))
+ tm.assert_frame_equal(data, condensed_data)
+
def test_detect_string_na(self):
data = """A,B
foo,bar
| Closes #4382
| https://api.github.com/repos/pandas-dev/pandas/pulls/4969 | 2013-09-24T19:11:39Z | 2013-09-24T20:37:18Z | 2013-09-24T20:37:17Z | 2014-06-24T14:29:29Z |
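The fix keys on absolute row positions: `self.skiprows` is a set of zero-based line numbers in the whole source, so each freshly read chunk must be filtered against `i + self.pos`, not the chunk-local index `i`, or any skip row beyond the first chunk is silently missed. A minimal standalone sketch of that bookkeeping (`read_chunks` is a hypothetical helper, not the parser's API):

```python
def read_chunks(source, chunksize, skiprows):
    """Yield `source` in chunks, dropping lines whose absolute position
    (not chunk-local index) appears in the `skiprows` set."""
    skiprows = set(skiprows)
    pos = 0
    while pos < len(source):
        new_rows = source[pos:pos + chunksize]
        # The bug fixed above: filtering on `i` alone only works for the
        # first chunk; `i + pos` is the line's absolute position in the file.
        kept = [row for i, row in enumerate(new_rows)
                if i + pos not in skiprows]
        pos += len(new_rows)
        yield kept

# Header plus ten data rows, mirroring the regression test in the diff.
lines = ['a,b,c'] + ['%d,%d,%d' % (i, i + 1, i + 2) for i in range(10)]
chunks = list(read_chunks(lines, chunksize=4, skiprows={6, 8}))
flat = [row for chunk in chunks for row in chunk]
print(flat)
```

With `chunksize=4`, lines 6 and 8 fall in the second and third chunks, which is exactly the case the chunk-local filter got wrong.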
CLN: do not use mutable default arguments | diff --git a/pandas/tools/plotting.py b/pandas/tools/plotting.py
index 18109e8c612b9..6631a3cf8c6f1 100644
--- a/pandas/tools/plotting.py
+++ b/pandas/tools/plotting.py
@@ -201,8 +201,8 @@ def use(self, key, value):
def scatter_matrix(frame, alpha=0.5, figsize=None, ax=None, grid=False,
- diagonal='hist', marker='.', density_kwds={}, hist_kwds={},
- **kwds):
+ diagonal='hist', marker='.', density_kwds=None,
+ hist_kwds=None, **kwds):
"""
Draw a matrix of scatter plots.
@@ -243,6 +243,9 @@ def scatter_matrix(frame, alpha=0.5, figsize=None, ax=None, grid=False,
marker = _get_marker_compat(marker)
+ hist_kwds = hist_kwds or {}
+ density_kwds = density_kwds or {}
+
for i, a in zip(lrange(n), df.columns):
for j, b in zip(lrange(n), df.columns):
ax = axes[i, j]
| https://api.github.com/repos/pandas-dev/pandas/pulls/4966 | 2013-09-24T18:26:28Z | 2013-09-24T20:30:03Z | 2013-09-24T20:30:03Z | 2014-07-16T08:30:25Z | |
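The cleanup above sidesteps a classic Python pitfall: a default such as `hist_kwds={}` is evaluated once at function definition time, so every call shares (and can mutate) the same dict. The `None` sentinel plus `kwds or {}` gives each call its own fresh dict. A minimal illustration of the difference, using a list where the mutation is easiest to see:

```python
def buggy(item, acc=[]):
    # The single default list is created once and shared by every call.
    acc.append(item)
    return acc

def fixed(item, acc=None):
    # `None` sentinel: each call that omits `acc` gets its own fresh list.
    acc = acc if acc is not None else []
    acc.append(item)
    return acc

print(buggy('a'), buggy('b'))   # ['a', 'b'] ['a', 'b'] -- shared state leaks
print(fixed('a'), fixed('b'))   # ['a'] ['b']
```

Note the patch uses the shorter `kwds = kwds or {}`, which also replaces an explicitly passed empty dict; that is harmless here because an empty dict of plot options behaves identically to a fresh one.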
DOC: remote_data.rst: Fama/French punctuation | diff --git a/doc/source/remote_data.rst b/doc/source/remote_data.rst
index bda532317ffe8..178ac0fce55dc 100644
--- a/doc/source/remote_data.rst
+++ b/doc/source/remote_data.rst
@@ -86,8 +86,8 @@ FRED
Fama/French
-----------
-Tthe dataset names are listed at `Fama/French Data Library
-<http://mba.tuck.dartmouth.edu/pages/faculty/ken.french/data_library.html>`__)
+Dataset names are listed at `Fama/French Data Library
+<http://mba.tuck.dartmouth.edu/pages/faculty/ken.french/data_library.html>`__.
.. ipython:: python
| https://api.github.com/repos/pandas-dev/pandas/pulls/4965 | 2013-09-24T16:24:26Z | 2013-09-24T18:12:12Z | 2013-09-24T18:12:12Z | 2014-07-16T08:30:23Z |